## Download Installation Packages

### Hadoop
- Official: http://archive.apache.org/dist/hadoop/
- China mirror (recommended): https://repo.huaweicloud.com/apache/hadoop/common/

### Java
- Huawei Cloud mirror (used below): https://repo.huaweicloud.com/java/jdk/
## Linux Environment Setup

### Create a User Account

Add the user and set its password:
```bash
[root@iZnm201imn18dkgebcpx40Z ~]# useradd hadoop
[root@iZnm201imn18dkgebcpx40Z ~]# passwd hadoop
Changing password for user hadoop.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
```
Give the hadoop user root privileges so that sudo can be used later to run root-level commands:

```bash
[root@iZnm201imn18dkgebcpx40Z ~]# vim /etc/sudoers
```

Edit /etc/sudoers and add a line below the %wheel line: `hadoop ALL=(ALL) NOPASSWD:ALL`

```
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL
hadoop  ALL=(ALL)       NOPASSWD:ALL
```
Note: do not put the hadoop line directly below the root line. Since the hadoop user belongs to the wheel group, the NOPASSWD setting would be applied first and then overridden when the %wheel line is processed, so a password would be required again. The hadoop line must therefore come after the %wheel line. A quick check is shown below.
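A simple way to verify the sudoers change took effect (an extra check, not part of the original steps):

```bash
su - hadoop
sudo whoami   # should print "root" without prompting for a password
```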
### Change the Hostname

```bash
vi /etc/hostname
```

Check the current value: `sysctl kernel.hostname`

Change it at runtime: `sudo sysctl kernel.hostname=hadoop-node1`
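On CentOS 7 (systemd), `hostnamectl` is an alternative that updates both the runtime and the persistent hostname in one step:

```bash
sudo hostnamectl set-hostname hadoop-node1
```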
Create the /opt/module and /opt/software directories and change their owner and group to the hadoop user (the commands are shown after the yum configuration below).
### Edit hosts

```bash
vi /etc/hosts
```
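Since the distribution script later in this guide targets hadoop-node1/2/3, /etc/hosts should map those names to the node IPs. A sketch with placeholder addresses (the real IPs are not shown here):

```
192.168.1.101 hadoop-node1
192.168.1.102 hadoop-node2
192.168.1.103 hadoop-node3
```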
### Edit DNS

```bash
vi /etc/resolv.conf
```
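For reference, a minimal /etc/resolv.conf (the resolver addresses below are public examples, not values from this setup):

```
nameserver 223.5.5.5
nameserver 8.8.8.8
```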
### Change the yum Mirror (as root)

Available mirrors:

- NetEase 163: http://mirrors.163.com/.help/
- USTC: https://mirrors.ustc.edu.cn/help/
- Sohu: http://mirrors.sohu.com/help/
- Aliyun: https://opsx.alibaba.com/mirror
- Tsinghua University: https://mirrors.tuna.tsinghua.edu.cn/
- Zhejiang University: http://mirrors.zju.edu.cn/
- USTC (CentOS): http://centos.ustc.edu.cn/

Rewrite the existing repo URLs in place:
```bash
sudo sed 's/http:\/\/yum.tbsite.net\/centos/https:\/\/mirrors.ustc.edu.cn\/centos/g' -i /etc/yum.repos.d/CentOS-Base.repo
sudo sed 's/https:\/\/mirrors.ustc.edu.cn\/centos/https:\/\/mirrors.aliyun.com\/centos/g' -i /etc/yum.repos.d/CentOS-Base.repo
```
Or replace the repo configuration entirely:

```bash
mv /etc/yum.repos.d /etc/yum.repos.d.bak
mkdir /etc/yum.repos.d
vim /etc/yum.repos.d/CentOS-Base.repo
```
```
[base]
name=CentOS-$releasever - Base
failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=https://mirrors.ustc.edu.cn/centos/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=https://mirrors.ustc.edu.cn/centos/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=https://mirrors.ustc.edu.cn/centos/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://mirrors.ustc.edu.cn/centos/RPM-GPG-KEY-CentOS-7

#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib
failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://mirrors.ustc.edu.cn/centos/RPM-GPG-KEY-CentOS-7
```
```bash
yum clean all
yum makecache
```
Now create the directories mentioned earlier and hand them over to the hadoop user:

```bash
[root@iZnm201imn18dkgebcpx40Z ~]# mkdir /opt/module
[root@iZnm201imn18dkgebcpx40Z ~]# mkdir /opt/software
[root@iZnm201imn18dkgebcpx40Z ~]# chown hadoop:hadoop /opt/module
[root@iZnm201imn18dkgebcpx40Z ~]# chown hadoop:hadoop /opt/software/
```
### Cluster Distribution Script

```bash
#!/bin/bash

# 1. Check the number of arguments
if [ $# -lt 1 ]
then
    echo "Not Enough Arguments!"
    exit
fi

# 2. Iterate over every machine in the cluster
for host in hadoop-node1 hadoop-node2 hadoop-node3
do
    echo ==================== $host ====================
    # 3. Send every file/directory given on the command line
    for file in $@
    do
        # 4. Check that the file exists
        if [ -e $file ]
        then
            # 5. Get the parent directory (resolving symlinks)
            pdir=$(cd -P $(dirname $file); pwd)
            # 6. Get the file name
            fname=$(basename $file)
            ssh $host "mkdir -p $pdir"
            rsync -av $pdir/$fname $host:$pdir
        else
            echo "$file does not exist!"
        fi
    done
done
```
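Assuming the script is saved as `xsync` in a directory on the PATH (e.g. ~/bin; the name and location are conventions, not given above) and made executable, usage might look like:

```bash
chmod +x ~/bin/xsync
# Distribute the JDK and Hadoop directories to all three nodes
xsync /opt/module/jdk1.8.0_152 /opt/module/hadoop-3.1.3
```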
## Install the Java Environment

### Uninstall the Bundled JDK (if any)

```bash
[root@iZnm201imn18dkgebcpx40Z ~]# rpm -qa | grep -i java | xargs -n1 rpm -e --nodeps
rpm: no packages given for erase
```
- rpm -qa: list all installed rpm packages
- grep -i: match case-insensitively
- xargs -n1: pass one argument at a time
- rpm -e --nodeps: remove a package without checking dependencies
### Install the JDK

```bash
# Download
[root@iZnm201imn18dkgebcpx40Z ~]# cd /opt/software
# Switch to the hadoop user
[root@iZnm201imn18dkgebcpx40Z software]# su hadoop
[hadoop@iZnm201imn18dkgebcpx40Z software]$ wget https://repo.huaweicloud.com/java/jdk/8u152-b16/jdk-8u152-linux-x64.tar.gz
# Extract into /opt/module
[hadoop@iZnm201imn18dkgebcpx40Z software]$ tar -zxvf jdk-8u152-linux-x64.tar.gz -C /opt/module
```
### Configure JDK Environment Variables

1. Create the /etc/profile.d/my_env.sh file:

```bash
sudo vim /etc/profile.d/my_env.sh
```

```bash
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_152
export PATH=$PATH:$JAVA_HOME/bin
```
2. Reload the environment variables:

```bash
source /etc/profile.d/my_env.sh
```

3. Test with java -version:
```bash
[hadoop@iZnm201imn18dkgebcpx40Z module]$ java -version
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
```

## Install Hadoop
### Download and Extract

```bash
# Download
[hadoop@iZnm201imn18dkgebcpx40Z ~]$ cd /opt/software
[hadoop@iZnm201imn18dkgebcpx40Z software]$ wget https://repo.huaweicloud.com/apache/hadoop/common/hadoop-3.1.3/hadoop-3.1.3.tar.gz
# Extract into /opt/module
[hadoop@iZnm201imn18dkgebcpx40Z software]$ tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module
```

### Configure Hadoop Environment Variables
1. Add the following to /etc/profile.d/my_env.sh:

```bash
vim /etc/profile.d/my_env.sh
```

```bash
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
```
2. Reload the environment variables:
```bash
source /etc/profile.d/my_env.sh
```
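A quick sanity check that the variables took effect (the expected version string assumes the 3.1.3 tarball above):

```bash
hadoop version   # first line should read: Hadoop 3.1.3
```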
## Hadoop Single-Node Deployment

### Pseudo-Distributed Deployment

1. Configure the core file core-site.xml (contents shown in the sketch below):

```bash
[hadoop@iZnm201imn18dkgebcpx40Z hadoop]$ cd $HADOOP_HOME/etc/hadoop
[hadoop@iZnm201imn18dkgebcpx40Z hadoop]$ vim core-site.xml
```
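The contents of core-site.xml are not listed above; a minimal pseudo-distributed configuration, following the official Hadoop single-node guide (localhost:9000 is the guide's default, adjust to your hostname), would be:

```xml
<configuration>
    <!-- Default filesystem; localhost works for a single-node setup -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
```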
2. Configure the HDFS file hdfs-site.xml:

```bash
vim hdfs-site.xml
```

Add the following:

```xml
<configuration>
<!-- For a test environment, set the HDFS replication factor to 1 -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
```
3. Format the NameNode:

```bash
[hadoop@iZnm201imn18dkgebcpx40Z hadoop-3.1.3]$ pwd
/opt/module/hadoop-3.1.3
[hadoop@iZnm201imn18dkgebcpx40Z hadoop-3.1.3]$ bin/hdfs namenode -format
```

4. Start HDFS:
```bash
sbin/start-dfs.sh
```

This failed at first; the errors mean the hadoop user has no passwordless SSH access to localhost:

```bash
[hadoop@iZnm201imn18dkgebcpx40Z hadoop-3.1.3]$ sbin/start-dfs.sh
Starting namenodes on [iZnm201imn18dkgebcpx40Z]
iZnm201imn18dkgebcpx40Z: Warning: Permanently added 'iznm201imn18dkgebcpx40z,10.10.178.233' (ECDSA) to the list of known hosts.
iZnm201imn18dkgebcpx40Z: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting datanodes
localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting secondary namenodes [iZnm201imn18dkgebcpx40Z]
iZnm201imn18dkgebcpx40Z: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
[hadoop@iZnm201imn18dkgebcpx40Z hadoop-3.1.3]$ jps
28245 Jps
```

Set up passwordless SSH login, then run sbin/start-dfs.sh again:
```bash
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
```

Verify that the daemons started with jps:
```bash
[hadoop@iZnm201imn18dkgebcpx40Z hadoop-3.1.3]$ jps
29441 DataNode
33993 Jps
29773 SecondaryNameNode
29279 NameNode
[hadoop@iZnm201imn18dkgebcpx40Z hadoop-3.1.3]$
```
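Optionally, run a quick HDFS smoke test before moving on to YARN (the /user/hadoop path follows the official single-node guide and is illustrative):

```bash
bin/hdfs dfs -mkdir -p /user/hadoop
bin/hdfs dfs -put etc/hadoop/*.xml /user/hadoop
bin/hdfs dfs -ls /user/hadoop
```

The NameNode web UI should also be reachable on port 9870 in Hadoop 3.x.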
### Configure MapReduce Jobs on YARN

Configure MapReduce to run on YARN:

```bash
vim etc/hadoop/mapred-site.xml
```

```xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
</configuration>
```
Then configure yarn-site.xml:

```bash
vim etc/hadoop/yarn-site.xml
```

```xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
```
Start YARN:

```bash
sbin/start-yarn.sh
```
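To confirm that YARN is actually scheduling jobs, one of the bundled examples can be run (the jar path matches the 3.1.3 layout):

```bash
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 2 10
```

The job should appear in the ResourceManager web UI on port 8088.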
### Restarting the Cluster

```bash
# 1. Stop YARN first
sbin/stop-yarn.sh
# 2. Stop DFS
sbin/stop-dfs.sh
# 3. Start DFS again
sbin/start-dfs.sh
# 4. Start YARN again
sbin/start-yarn.sh
```
## Appendix: Starting and Stopping Hadoop

```bash
sbin/start-all.sh      # Start all Hadoop daemons: NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager
sbin/stop-all.sh       # Stop all Hadoop daemons: NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager
sbin/start-dfs.sh      # Start the HDFS daemons: NameNode, SecondaryNameNode, DataNode
sbin/stop-dfs.sh       # Stop the HDFS daemons: NameNode, SecondaryNameNode, DataNode
sbin/hadoop-daemons.sh start namenode             # Start only the NameNode daemon
sbin/hadoop-daemons.sh stop namenode              # Stop only the NameNode daemon
sbin/hadoop-daemons.sh start datanode             # Start only the DataNode daemon
sbin/hadoop-daemons.sh stop datanode              # Stop only the DataNode daemon
sbin/hadoop-daemons.sh start secondarynamenode    # Start only the SecondaryNameNode daemon
sbin/hadoop-daemons.sh stop secondarynamenode     # Stop only the SecondaryNameNode daemon
sbin/start-yarn.sh     # Start ResourceManager and NodeManager
sbin/stop-yarn.sh      # Stop ResourceManager and NodeManager
sbin/yarn-daemon.sh start resourcemanager         # Start only the ResourceManager
sbin/yarn-daemons.sh start nodemanager            # Start only the NodeManager
sbin/yarn-daemon.sh stop resourcemanager          # Stop only the ResourceManager
sbin/yarn-daemons.sh stop nodemanager             # Stop only the NodeManager
sbin/mr-jobhistory-daemon.sh start historyserver  # Manually start the JobHistory server
sbin/mr-jobhistory-daemon.sh stop historyserver   # Manually stop the JobHistory server
```
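Note: in Hadoop 3.x the per-daemon `*-daemon(s).sh` scripts above are deprecated in favor of the `--daemon` option on `hdfs`, `yarn`, and `mapred`. Equivalent commands (same daemons, new syntax) look like:

```bash
bin/hdfs --daemon start namenode          # replaces hadoop-daemon.sh start namenode
bin/hdfs --daemon stop datanode           # replaces hadoop-daemon.sh stop datanode
bin/yarn --daemon start resourcemanager   # replaces yarn-daemon.sh start resourcemanager
bin/yarn --daemon stop nodemanager        # replaces yarn-daemon.sh stop nodemanager
bin/mapred --daemon start historyserver   # replaces mr-jobhistory-daemon.sh start historyserver
```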
