
Sunday, May 22, 2016

NameNode HA Setup



Details of the six-node cluster:

NODE 1: HDFS master component (NameNode), ZooKeeper Server, Ambari Agent, JournalNode, ResourceManager, App Timeline Server, History Server, HiveServer2

NODE 2: HDFS master component (NameNode), ZooKeeper Server, Ambari Agent, JournalNode

NODE 3: Ambari Server, ZooKeeper Server, JournalNode, Hive Metastore, WebHCat Server, HiveServer2, Metrics Collector, Clients

NODE 4: Ambari Agent, HDFS worker component (DataNode), NodeManager, Hive Client, Pig

NODE 5: Ambari Agent, HDFS worker component (DataNode), NodeManager, Hive Client, Pig

NODE 6: Ambari Agent, HDFS worker component (DataNode), NodeManager, Hive Client, Pig


1)     If necessary, use Ambari Web UI > Services > ZooKeeper > Service Actions > Add ZooKeeper Server to add more ZooKeeper servers. (A minimum of three ZooKeeper servers is required for NameNode HA; a quick health check is sketched below.)
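Before enabling HA it is worth confirming that all three ZooKeeper servers are actually healthy. A minimal check, assuming the servers from the layout above are reachable as node1, node2, and node3 on the default client port 2181 (ruok and stat are ZooKeeper's built-in four-letter-word commands):

    # Each healthy ZooKeeper server answers "imok".
    for host in node1 node2 node3; do
        echo ruok | nc $host 2181
        echo
    done

    # "stat" reports the server's mode (leader or follower) and quorum details.
    echo stat | nc node1 2181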
2)     In Ambari, click Services > HDFS > Service Actions > Enable NameNode HA. This opens the Enable NameNode HA configuration wizard.
3)     In the Getting Started window of the wizard, type the Nameservice ID, which is the logical name of the HDFS cluster. The wizard then walks you through several screens where you review and enter the following properties (a consolidated sample configuration follows this list):

-- Logical name (dfs.nameservices)

-- fs.defaultFS (in core-site.xml: the default path prefix the Hadoop FS client uses when none is given)

-- Installation: the current NameNode stays on NODE 1, and the additional NameNode is installed on NODE 2.

-- JournalNodes: one on the current NameNode host (NODE 1), one on the additional NameNode host (NODE 2), and a third on NODE 3.

-- On each JournalNode host, set dfs.journalnode.edits.dir in hdfs-site.xml to the local directory where the edit logs are stored: dfs.journalnode.edits.dir = "/path/to/edits/info/data"

-- The location of the JournalNode quorum is set in hdfs-site.xml: dfs.namenode.shared.edits.dir = "qjournal://jn1:8485;jn2:8485;jn3:8485"

-- dfs.nameservices = "haclustersetup" (the logical HDFS cluster name, which points to the two NameNodes)

-- dfs.ha.namenodes.haclustersetup = "nn1,nn2" (the names of the NameNodes)

-- dfs.namenode.http-address.<logical cluster name>.<namenode name>

Ex: dfs.namenode.http-address.haclustersetup.nn1 = "node1:50070"
dfs.namenode.http-address.haclustersetup.nn2 = "node2:50070"


-- dfs.namenode.rpc-address.<logical cluster name>.<namenode name>
Ex: dfs.namenode.rpc-address.haclustersetup.nn1 = "node1:8020"
dfs.namenode.rpc-address.haclustersetup.nn2 = "node2:8020"


-- dfs.ha.fencing.methods (values: shell or sshfence)

-- dfs.client.failover.proxy.provider.haclustersetup determines the Java class HDFS clients use to find the currently active NameNode: "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider" (the property suffix must match the Nameservice ID)
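Pulled together, the wizard ends up writing properties like the following into core-site.xml and hdfs-site.xml. This is only an illustrative sketch using the hostnames node1/node2/node3 and the Nameservice ID haclustersetup from above; the edits directory and the fencing value are assumptions, and Ambari generates the real values for you:

    <!-- core-site.xml -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://haclustersetup</value>  <!-- clients address the nameservice, not one host -->
    </property>

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.nameservices</name>
      <value>haclustersetup</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.haclustersetup</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.haclustersetup.nn1</name>
      <value>node1:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.haclustersetup.nn2</name>
      <value>node2:8020</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.haclustersetup.nn1</name>
      <value>node1:50070</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.haclustersetup.nn2</name>
      <value>node2:50070</value>
    </property>
    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://node1:8485;node2:8485;node3:8485</value>
    </property>
    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/hadoop/hdfs/journal</value>  <!-- assumed path; use your own edits directory -->
    </property>
    <property>
      <name>dfs.client.failover.proxy.provider.haclustersetup</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>shell(/bin/true)</value>  <!-- no-op fencer; the JournalNode quorum already prevents split-brain writes -->
    </property>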

4)     Manually create a checkpoint (the safe-mode check after these commands confirms the first one took effect):
--sudo su -l hdfs -c 'hdfs dfsadmin -safemode enter'
--sudo su -l hdfs -c 'hdfs dfsadmin -saveNamespace'
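To confirm the NameNode really is in safe mode before saving the namespace, the standard status query can be used:

    sudo su -l hdfs -c 'hdfs dfsadmin -safemode get'
    # Expected reply: Safe mode is ON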

5)     Manually initialize the JournalNodes:
--sudo su -l hdfs -c 'hdfs namenode -initializeSharedEdits'

6)     Manually initialize the metadata for NameNode automatic failover:
--sudo su -l hdfs -c 'hdfs zkfc -formatZK'

7)     Manually initialize the metadata for the additional NameNode:
--sudo su -l hdfs -c 'hdfs namenode -bootstrapStandby'
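The wizard prompts for each of these commands at the right point, but it helps to know where each one runs. A recap, assuming the current NameNode is NODE 1 (node1) and the additional NameNode is NODE 2 (node2):

    # On node1 (current NameNode): checkpoint the namespace while in safe mode.
    sudo su -l hdfs -c 'hdfs dfsadmin -safemode enter'
    sudo su -l hdfs -c 'hdfs dfsadmin -saveNamespace'

    # On node1: seed the JournalNodes with the existing edit log.
    sudo su -l hdfs -c 'hdfs namenode -initializeSharedEdits'

    # On node1: create the failover znode for the ZKFC in ZooKeeper.
    sudo su -l hdfs -c 'hdfs zkfc -formatZK'

    # On node2 (additional NameNode): pull a copy of the namespace from node1.
    sudo su -l hdfs -c 'hdfs namenode -bootstrapStandby'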

8)     hdfs haadmin -getServiceState <namenode-id> (reports whether the given NameNode is active or standby)

9)     hdfs haadmin -failover <from-namenode> <to-namenode> (to manually initiate a failover; see the example below)
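For example, with the NameNode IDs nn1 and nn2 defined earlier:

    # Prints "active" or "standby" for the given NameNode.
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2

    # Manually hand the active role from nn1 to nn2.
    hdfs haadmin -failover nn1 nn2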