FULL (Fully Distributed) Setup

1: Server Planning

NN: 192.168.0.201  DN: 192.168.0.202, 192.168.0.203, 192.168.0.204  SNN: 192.168.0.202
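The plan above implies a hostname-to-IP mapping (node01 = 192.168.0.201 for the NN, node02–node04 = .202–.204 for the DNs, SNN co-located on node02); every node needs it in /etc/hosts. A sketch, assuming these hostnames:

```
192.168.0.201 node01
192.168.0.202 node02
192.168.0.203 node03
192.168.0.204 node04
```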

2: Deployment Configuration

NN: core-site.xml: fs.defaultFS hdfs://node01:9000
DN: slaves: node02 node03 node04
SNN: hdfs-site.xml: dfs.namenode.secondary.http-address node02:50090
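The NN line above corresponds to this core-site.xml fragment (the same value the later configuration step leaves unchanged):

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://node01:9000</value>
</property>
```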

3: Role Planning

node02~node04:

Distribute and install the JDK:
rpm -i jdk...

Passwordless SSH login
node01:
scp /root/.ssh/id_dsa.pub node02:/root/.ssh/node01.pub
scp /root/.ssh/id_dsa.pub node03:/root/.ssh/node01.pub
scp /root/.ssh/id_dsa.pub node04:/root/.ssh/node01.pub
node02:
cd ~/.ssh
cat node01.pub >> authorized_keys
node03:
cd ~/.ssh
cat node01.pub >> authorized_keys
node04:
cd ~/.ssh
cat node01.pub >> authorized_keys
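The per-node steps above can be collapsed into one loop. This is a dry-run sketch that only prints the commands, assuming the DSA key pair already exists on node01; drop the leading `echo`s to actually run it:

```shell
# Dry run: print the key-distribution commands for each worker.
# Remove the `echo`s to actually copy the key and append it remotely.
workers="node02 node03 node04"
for h in $workers; do
  echo "scp /root/.ssh/id_dsa.pub $h:/root/.ssh/node01.pub"
  echo "ssh $h 'cat ~/.ssh/node01.pub >> ~/.ssh/authorized_keys'"
done
```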

4: Configuration and Deployment

node01:
cd $HADOOP_HOME/etc/hadoop
vi core-site.xml (no changes needed)
vi hdfs-site.xml

  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/var/bigdata/hadoop/full/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/var/bigdata/hadoop/full/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node02:50090</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/var/bigdata/hadoop/full/dfs/secondary</value>
  </property>

vi slaves

  node02
  node03
  node04
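The slaves file can also be written in one command instead of with vi (a sketch; run it from the same configuration directory):

```shell
# Write the three worker hostnames, one per line, into slaves.
printf '%s\n' node02 node03 node04 > slaves
```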

Distribute:
cd /opt
scp -r ./bigdata/ node02:`pwd`
scp -r ./bigdata/ node03:`pwd`
scp -r ./bigdata/ node04:`pwd`

Format (first run only) and start:
hdfs namenode -format
start-dfs.sh

Stop:
stop-dfs.sh

5: Access (NameNode web UI)

http://node01:50070/dfshealth.html#tab-datanode