Set up a fully distributed cluster (FULL)
1: Server planning
NN:  192.168.0.201
DN:  192.168.0.202, 192.168.0.203, 192.168.0.204
SNN: 192.168.0.202
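A sketch of the /etc/hosts mapping this plan implies on every node (node01/node02 follow from the NN and SNN addresses above; pairing node03/node04 with .203/.204 assumes the listed order):

```
192.168.0.201  node01
192.168.0.202  node02
192.168.0.203  node03
192.168.0.204  node04
```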
2: Deployment configuration
NN:  core-site.xml: fs.defaultFS = hdfs://node01:9000
DN:  slaves: node02 node03 node04
SNN: hdfs-site.xml: dfs.namenode.secondary.http-address = node02:50090
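The fs.defaultFS entry above corresponds to the following core-site.xml property (a sketch; step 4 below assumes it is already in place, which is why core-site.xml needs no changes there):

```xml
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://node01:9000</value>
</property>
```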
3: Role planning
node02~node04:
Distribute and install the JDK
rpm -i jdk...
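One possible way to push the rpm out from node01 before installing, sketched below; the rpm file name is left as "jdk..." in the notes, the /root target directory is an assumption, and these scp/ssh calls still prompt for passwords because passwordless login is only set up in the next step.

```bash
# run on node01; substitute the real rpm file name for jdk-*.rpm
for n in node02 node03 node04; do
  scp jdk-*.rpm "$n":/root/
  ssh "$n" "rpm -i /root/jdk-*.rpm"
done
```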
Passwordless SSH login
node01:
scp /root/.ssh/id_dsa.pub node02:/root/.ssh/node01.pub
scp /root/.ssh/id_dsa.pub node03:/root/.ssh/node01.pub
scp /root/.ssh/id_dsa.pub node04:/root/.ssh/node01.pub
node02:
cd ~/.ssh
cat node01.pub >> authorized_keys
node03:
cd ~/.ssh
cat node01.pub >> authorized_keys
node04:
cd ~/.ssh
cat node01.pub >> authorized_keys
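The scp/cat steps above assume node01 already has a DSA key pair under /root/.ssh; the notes omit generating it. A minimal sketch to create it and verify that passwordless login works (the empty passphrase -P '' is an assumption, and newer OpenSSH releases may reject DSA keys):

```bash
# on node01: generate the key pair referenced above (produces id_dsa / id_dsa.pub)
ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa
# not in the original notes: node01 usually also needs to ssh into itself for start-dfs.sh
cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys
# after node01.pub is appended on node02~node04, these should not prompt for a password
ssh node02 date
ssh node03 date
ssh node04 date
```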
4: Configure and deploy
node01:
cd $HADOOP/etc/hadoop
vi core-site.xml  (no changes needed)
vi hdfs-site.xml
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/var/bigdata/hadoop/full/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/var/bigdata/hadoop/full/dfs/data</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node02:50090</value>
</property>
<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/var/bigdata/hadoop/full/dfs/secondary</value>
</property>
vi slaves
node02
node03
node04
Distribute:
cd /opt
scp -r ./bigdata/ node02:`pwd`
scp -r ./bigdata/ node03:`pwd`
scp -r ./bigdata/ node04:`pwd`
(the backquoted pwd copies the tree to the same /opt path on each node)
Format and start:
hdfs namenode -format
start-dfs.sh
Stop: stop-dfs.sh
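A quick check after start-dfs.sh that the processes match the plan in section 1 (a sketch, not from the original notes; jps ships with the JDK, dfsadmin is a standard hdfs subcommand):

```bash
jps                     # run on node01: expect NameNode
ssh node02 jps          # expect DataNode and SecondaryNameNode (assumes jps is on node02's PATH)
hdfs dfsadmin -report   # run on node01: lists the live DataNodes
```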
