If you're trying to reach Hadoop over HTTP and it doesn't work at all, see the section on port numbers at the bottom.
If so, you have to start with Hadoop. I did. And Hadoop 3.0.0 came out just the other day; I had been comfortable with the 2.x line until then, so I agonized over the move, racking my brain and tearing my hair out.
Roughly speaking, the setup is Arch Linux (EFI) in VirtualBox with openjdk-8. I'll assume openssh and the other standard packages are fine. The user name will be **test** throughout, so replace it with "vagrant" or whatever you like. This time we go as far as pseudo-distributed mode.
Whenever a prompt asks for Yes or some other string, enter it and things proceed smoothly.
```
# ls /sys/firmware/efi/efivars   ;; confirm this directory exists (EFI boot)
# parted /dev/sda
(parted) mklabel gpt
(parted) mkpart ESP fat32 1MiB 513MiB
(parted) set 1 boot on
(parted) mkpart primary ext4 513MiB 100%
(parted) quit
# mkfs.vfat -F32 /dev/sda1
# mkfs.ext4 /dev/sda2
# mount /dev/sda2 /mnt
# mkdir /mnt/boot && mount /dev/sda1 /mnt/boot
# nano /etc/pacman.d/mirrorlist
;;   move the Japan servers to the top of the list
;;   (in nano: search with C-w for "Japan", select a range with Shift, cut with C-k, paste with C-u)
# pacman -Syyu archlinux-keyring
# pacstrap /mnt base base-devel
# genfstab -U /mnt >> /mnt/etc/fstab
# arch-chroot /mnt
# ln -sf /usr/share/zoneinfo/Asia/Tokyo /etc/localtime
# hwclock --systohc --utc
# nano /etc/locale.gen
;;   uncomment (or add) these lines:
;;   en_US.UTF-8 UTF-8
;;   ja_JP.UTF-8 UTF-8
# locale-gen
# echo LANG=en_US.UTF-8 > /etc/locale.conf
```
```
# nano /etc/hosts
;;   update it to:
;;   127.0.0.1    localhost.localdomain localhost
;;   192.168.1.31 test.localdomain      test
# mkinitcpio -p linux
# passwd   ;; set it to whatever you like
# pacman -S grub efibootmgr
# grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch_grub --recheck
# mkdir /boot/EFI/boot
# cp /boot/EFI/arch_grub/grubx64.efi /boot/EFI/boot/bootx64.efi
# grub-mkconfig -o /boot/grub/grub.cfg
# exit
# umount -R /mnt
;; VirtualBox menu => remove the ISO disk
# reboot

login: root
password: ;; what you just set
# systemctl enable dhcpcd && systemctl start dhcpcd
# systemctl status dhcpcd   ;; confirm it is working properly
```
```
# useradd -m test
# passwd test
```
```
;; (for a vagrant user, this should be vagrant ...)
# EDITOR=nano visudo
;;   add this line anywhere:
;;   test ALL=(ALL) NOPASSWD: ALL
# pacman -S openssh wget
# systemctl enable sshd.service
# systemctl start sshd.service
# systemctl status sshd.service   ;; make sure the sshd service is running
# pacman -S emacs git curl jre8-openjdk jdk8-openjdk
# su - test
$ sudo emacs /etc/pacman.conf
;;   add:
;;   [archlinuxfr]
;;   SigLevel = Never
;;   Server = http://repo.archlinux.fr/$arch
;;   and uncomment:
;;   [multilib]
;;   Include = /etc/pacman.d/mirrorlist
$ sudo pacman --sync --refresh yaourt
$ sudo pacman -Syu yaourt
$ sudo emacs /etc/yaourtrc
;;   add: TMPDIR="/home/test/Downloads"
$ sudo pacman -S virtualbox-guest-modules-arch
$ sudo modprobe -a vboxguest vboxsf vboxvideo
$ sudo emacs /etc/modules-load.d/virtualbox.conf
;;   write:
;;   vboxguest
;;   vboxsf
;;   vboxvideo
$ sudo pacman -S virtualbox-guest-utils
$ sudo systemctl enable vboxservice
$ sudo systemctl start vboxservice
$ sudo systemctl status vboxservice
$ sudo reboot

login: test
password: ;; what you just set
$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ sudo emacs /etc/ssh/sshd_config
```
```
$ ssh localhost
$ exit
$ sudo pacman -S xorg xorg-server deepin
$ sudo pacman -S deepin-extra
$ sudo systemctl enable deepin-desktop
$ sudo emacs /etc/lightdm/lightdm.conf
;;   search for greeter-session= and change that line to
;;   greeter-session=lightdm-deepin-greeter
;;   (uncomment it if it is commented out)
$ sudo systemctl start lightdm   ;; the GUI should start here
;; once it starts, right-click on the desktop to open a terminal
;; (how to use the Deepin desktop is not covered here)
$ sudo systemctl enable lightdm
$ sudo shutdown -h now
```

That's all! (Let's take a snapshot.)
This time, we will install Hadoop as-is, without creating a dedicated hadoop user. If you want a dedicated user, create one with `sudo useradd hadoop` and `sudo passwd hadoop`, and log in to it (`su - hadoop`). Again, we will proceed with the user name **test**.
I haven't investigated this in detail, but Hadoop's download servers may complain if you link to the file directly, so get the proper download path from the download page yourself.
Launch the Arch Linux VM you just created (at this point, enable clipboard sharing in the VirtualBox settings).
```
$ cd ~/Downloads
```
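The download itself can be sketched as follows. The archive URL here is an assumption on my part; copy the actual mirror link from the Hadoop download page, since hard-coded links can go stale:

```shell
# NOTE: this URL is an assumed example -- take the real mirror link
# from the Hadoop download page instead of hard-coding it.
HADOOP_URL="https://archive.apache.org/dist/hadoop/common/hadoop-3.0.0/hadoop-3.0.0.tar.gz"
TARBALL="${HADOOP_URL##*/}"   # strip the path, leaving hadoop-3.0.0.tar.gz
wget "$HADOOP_URL"            # fetch the release tarball
tar -xzf "$TARBALL"           # unpack into ./hadoop-3.0.0
```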
```
$ jps
XXXX SecondaryNameNode
XXXX NameNode
XXXX ResourceManager
XXXX NodeManager
XXXX Jps
XXXX DataNode
```

If the output looks like this (XXXX are process IDs), everything is running correctly.
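Instead of eyeballing that list, you can script the check. A small sketch, assuming only the daemon names shown above:

```shell
# check_daemons: reads jps-style output on stdin and prints any
# expected Hadoop daemon that is missing from it.
check_daemons() {
  input=$(cat)
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    echo "$input" | grep -qw "$d" || echo "$d MISSING"
  done
}
# usage: jps | check_daemons    (no output means everything is up)
```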
Now you've roughly set up Hadoop.
Normally, a guide would now say "access http://localhost:50070!", but for some reason that doesn't work here. I'm not entirely sure why, but the default port numbers seem to have changed (Hadoop 3 relocated several web UI ports; the NameNode UI, for instance, moved from 50070 to 9870). Check as follows and try http://localhost:[port] for each listening port, one by one. In my environment, 0.0.0.0:8XXX, 0.0.0.0:9XXX, and 127.0.0.1:28XXX were the relocated destinations. localhost:8088 connected as-is.
```
$ netstat -an | grep LISTEN   ;; <== pretty important here
tcp   0   0 0.0.0.0:22       .....  LISTEN
tcp   0   0 0.0.0.0:13562    .....  LISTEN
tcp   0   0 127.0.0.1:9000   .....  LISTEN
tcp   0   0 :::8088          .....  LISTEN
```
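To avoid scanning that output by eye, you can pull out just the port numbers to try. A small sketch, assuming `netstat -an` formatting like the output above:

```shell
# listening_ports: extract the unique listening TCP port numbers from
# `netstat -an` style output on stdin, one per line, so you can try
# http://localhost:<port> for each candidate.
listening_ports() {
  grep LISTEN | awk '{print $4}' | grep -oE '[0-9]+$' | sort -un
}
# usage: netstat -an | listening_ports
```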