We will create an image from an existing AWS EC2 instance after installing Java and Hadoop on it. If you have not created an instance yet, create one and log in to it using this article.
Install Java And Hadoop
- It's always good practice to update the package lists first.
apt-get update downloads the package lists from the repositories and "updates" them to get information on the newest versions of packages and their dependencies.
$ sudo apt-get update && sudo apt-get dist-upgrade
- Install Java (OpenJDK 8).
$ sudo apt-get install openjdk-8-jdk
- Download Hadoop from one of these mirrors, selecting the appropriate version number. The command below downloads the gzip archive into the Downloads directory, which is specified (and created if needed) with the -P parameter.
$ wget http://apache.mirrors.tds.net/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz -P ~/Downloads
- Extract the archive to /usr/local:
$ sudo tar zxvf ~/Downloads/hadoop-* -C /usr/local
- Rename the extracted hadoop-* directory to hadoop under /usr/local:
$ sudo mv /usr/local/hadoop-* /usr/local/hadoop
Java and Hadoop are now installed. Next, we will declare environment variables on the instance, which helps applications locate Hadoop.
Setting up Environment Variables
- To find out where Java is installed (where the java executable is), execute the command below. The path may be different for you.
$ readlink -f /usr/bin/java
- Open the .bashrc file in your home directory with your favorite editor and add export lines for JAVA_HOME, HADOOP_HOME, PATH, and HADOOP_CONF_DIR.
$ vi ~/.bashrc
Among these, HADOOP_CONF_DIR points at the Hadoop configuration directory, which we will use later.
- To make the variables take effect in the current session without logging out and back in:
$ source ~/.bashrc
- Check whether the environment variables are available or not:
$ echo $HADOOP_HOME $HADOOP_CONF_DIR
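The export lines might look like the following; the JAVA_HOME path is an assumption for OpenJDK 8 on Ubuntu amd64 (verify it on your own instance), and the Hadoop paths assume the /usr/local/hadoop layout from the install steps:

```shell
# JAVA_HOME path assumes OpenJDK 8 on Ubuntu amd64 -- verify on your instance.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
# Put the Hadoop command-line tools and daemon scripts on the PATH.
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Hadoop 2.x keeps its configuration files under etc/hadoop.
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
```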
Creating an Image
- We will create an image from the AWS console, with all the above configuration. This lets us create the nodes of the Hadoop cluster without repeating the above steps for each node.
- On EC2 management console, select “Instances” under INSTANCES. And then “Actions” -> “Image” -> “Create Image”
- Provide any name, description and click on “Create Image”.
- You will be able to find the created image under IMAGES -> “AMIs”.
Setting up Cluster Nodes from the Image Created
- You have created an image with Java and Hadoop installed, which you can use to create the nodes of the cluster. Select the created image and click “Launch”.
- Choose an Instance Type according to your requirements. Here we stick with the default t2.micro instance type. Click on “Next: Configure Instance Details”.
- In Configure Instance Details, change the “Number of instances” from 1 to 4: one NameNode and three DataNodes.
- Keep the default storage and click on “Next: Add Tags”.
- Optional: Add a tag with “Name” as the key and “Hadoop Cluster” as the value, then click on “Next: Configure Security Group”.
- Select “All Traffic” from the dropdown and click on “Review and Launch”. And then Launch with key pair already created.
- Instances will be created as shown below. I have edited the Names for each node.
- Let's create an SSH config file to log in to the instances easily. On your computer you can use either PuTTY (as shown here) or Git Bash (ensure it is installed). I will be using Git Bash here.
$ touch ~/.ssh/config
- Edit the config file.
- Copy the below lines into the file. (You may need to click the middle mouse button to paste into the file.)
Host namenode
  HostName <your namenode public dns name>
  User ubuntu
  IdentityFile ~/.ssh/MyLab_Machine.pem

Host datanode1
  HostName <your datanode1 public dns name>
  User ubuntu
  IdentityFile ~/.ssh/MyLab_Machine.pem

Host datanode2
  HostName <your datanode2 public dns name>
  User ubuntu
  IdentityFile ~/.ssh/MyLab_Machine.pem

Host datanode3
  HostName <your datanode3 public dns name>
  User ubuntu
  IdentityFile ~/.ssh/MyLab_Machine.pem
- This file lets SSH associate a shorthand name with a hostname, a user, and the private key, so you don’t have to type those in each time. This assumes your private key file is in ~/.ssh. If it isn't, be sure to move or copy it there:
$ cp key_file ~/.ssh/MyLab_Machine.pem
- Now you can log in to the NameNode with just:
$ ssh namenode
- Also, copy the config file to the NameNode.
$ scp ~/.ssh/config namenode:~/.ssh
- We need to make sure the NameNode can connect to each DataNode over ssh without typing a password. You’ll do this by creating a public key for the NameNode and adding it to each DataNode.
- Log in to the NameNode, create a key pair using ssh-keygen, and append the public key to authorized_keys.
$ ssh namenode
$ ssh-keygen -f ~/.ssh/id_rsa -t rsa -P ""
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
- To be able to log in to each DataNode from the NameNode without a password, append the NameNode's public key to authorized_keys on each DataNode.
$ ssh datanode1 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub
$ ssh datanode2 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub
$ ssh datanode3 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub
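The three commands above follow one pattern, so they can also be written as a loop; the ssh line is commented out here so you can review it before running (remove the leading # to execute):

```shell
# The shorthand hostnames come from the ~/.ssh/config entries created earlier.
nodes="datanode1 datanode2 datanode3"
for node in $nodes; do
  echo "appending public key to $node"
  # Uncomment to actually copy the key:
  # ssh "$node" 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub
done
```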
- Try logging into DataNodes and test if you are able to login without a password.
Configuring the Cluster
- First, you’ll deal with the configuration on each node, then get into specific configurations for the NameNode and DataNodes. On each node, go to the Hadoop configuration folder; you should be able to get there with
$ cd $HADOOP_CONF_DIR
since we set that variable in .bashrc earlier. When editing these configuration files you'll need root access, so remember to use sudo. In the configuration folder, edit core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://<your namenode public dns name>:9000</value>
  </property>
</configuration>
- This configuration, fs.defaultFS, tells the cluster nodes which machine the NameNode is on and that it will communicate on port 9000, which is the port for HDFS.
- On each node, in yarn-site.xml you set options related to YARN, the resource manager:
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value><your namenode public dns name></value>
  </property>
</configuration>
- As with fs.defaultFS, yarn.resourcemanager.hostname sets the machine that the resource manager runs on.
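Note that running MapReduce jobs on YARN also requires the shuffle auxiliary service on the NodeManagers; it is commonly enabled with an extra property in yarn-site.xml:

```xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
```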
- On each node, copy mapred-site.xml from the template that ships with Hadoop:
$ sudo cp mapred-site.xml.template mapred-site.xml
- Add the below to mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value><your namenode public dns name>:54311</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
- mapreduce.jobtracker.address sets the machine the job tracker runs on and the port it communicates on. The other option here, mapreduce.framework.name, sets MapReduce to run on YARN.
NameNode specific configuration:
- Now for the NameNode specific configuration; these files will be configured only on the NameNode. First, add the DataNode hostnames to /etc/hosts. You can get the hostname of each DataNode by entering
$ hostname
or
$ echo $(hostname)
on each DataNode.
- Now edit /etc/hosts and include one line per DataNode, with its private IP followed by its hostname:
<datanode1 private ip> <datanode1 hostname>
<datanode2 private ip> <datanode2 hostname>
<datanode3 private ip> <datanode3 hostname>
- Next, edit the hdfs-site.xml file on the NameNode.
- dfs.replication sets how many times each data block is replicated across the cluster.
- dfs.namenode.name.dir sets the directory for storing NameNode data (the fsimage file). You’ll also have to create the directory that will store the data.
$ sudo mkdir -p $HADOOP_HOME/data/hdfs/namenode
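Putting the above together, the hdfs-site.xml on the NameNode might look like the following; the replication factor of 3 is an assumption chosen to match the three DataNodes, and the path matches the directory created above (assuming HADOOP_HOME is /usr/local/hadoop):

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- Assumption: 3, matching the three DataNodes in this cluster. -->
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/data/hdfs/namenode</value>
  </property>
</configuration>
```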
- Next, you’ll create the masters file, which sets the machine the secondary NameNode runs on. In your case, you'll have the secondary NameNode run on the same machine as the NameNode, so edit masters and add the hostname of the NameNode (Note: not the public hostname, but the hostname you get from $ hostname). Typically, though, you would have the secondary NameNode run on a different machine than the primary NameNode.
- Next, edit the slaves file in $HADOOP_CONF_DIR; this file sets which machines are DataNodes. In slaves, add the hostnames of each DataNode (Note: again, not the public hostnames, but the ones you get from $ hostname). The slaves file might already contain the line localhost; remove it, otherwise the NameNode would run as a DataNode too. It should look like this:
<datanode1 hostname>
<datanode2 hostname>
<datanode3 hostname>
- Finally, on the NameNode, change the owner of the Hadoop directory to the ubuntu user:
$ sudo chown -R ubuntu $HADOOP_HOME
DataNode specific configuration:
- Edit $HADOOP_CONF_DIR/hdfs-site.xml on each DataNode to set dfs.datanode.data.dir.
- This sets the directory where the data blocks are stored on the DataNodes. Again, create the directory on each DataNode, and also change the owner of the Hadoop directory.
$ sudo mkdir -p $HADOOP_HOME/data/hdfs/datanode
$ sudo chown -R ubuntu $HADOOP_HOME
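On the DataNodes, hdfs-site.xml might look like this, with dfs.datanode.data.dir pointing at the directory created above (again assuming HADOOP_HOME is /usr/local/hadoop):

```xml
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/data/hdfs/datanode</value>
  </property>
</configuration>
```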
Launch Hadoop Cluster
- On the NameNode, format the file system, then start HDFS:
$ hdfs namenode -format
$ $HADOOP_HOME/sbin/start-dfs.sh
- Start YARN:
$ $HADOOP_HOME/sbin/start-yarn.sh
- Start the job history server.
$ $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
- To see the running Java processes (the Hadoop daemons, for instance), enter:
$ jps
- You can access the NameNode web UI at http://<your namenode public dns name>:50070.