Software to install on the host machine
- Xshell
- Xftp
- VirtualBox 6.1
Software versions for the cluster environment
- CentOS Linux 8
- jdk-8u291-linux-x64.tar.gz
- hadoop-3.3.1-aarch64.tar.gz
Configure the template virtual machine - hadoop100
Test network connectivity
ping www.baidu.com
Switch to the root user
[luyang@localhost ~]$ su root
Password:
Install the required software
epel
Extra Packages for Enterprise Linux (EPEL) provides additional packages for Red Hat-family operating systems, including RHEL, CentOS, and Scientific Linux. It is effectively an extra software repository: most of the RPM packages it carries are not available in the official repositories.
[root@localhost ~]# yum install -y epel-release
CentOS Linux 8 - AppStream 1.2 MB/s | 8.1 MB 00:06
CentOS Linux 8 - BaseOS 1.5 MB/s | 3.6 MB 00:02
CentOS Linux 8 - Extras 11 kB/s | 9.8 kB 00:00
Dependencies resolved.
=============================================================================================================================================================================================
Package Architecture Version Repository Size
=============================================================================================================================================================================================
Installing:
epel-release noarch 8-11.el8 extras 24 k
Transaction Summary
=============================================================================================================================================================================================
Install 1 Package
Total download size: 24 k
Installed size: 35 k
Downloading Packages:
epel-release-8-11.el8.noarch.rpm 80 kB/s | 24 kB 00:00
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 30 kB/s | 24 kB 00:00
warning: /var/cache/dnf/extras-2770d521ba03e231/packages/epel-release-8-11.el8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 8483c65d: NOKEY
CentOS Linux 8 - Extras 1.6 MB/s | 1.6 kB 00:00
Importing GPG key 0x8483C65D:
Userid : "CentOS (CentOS Official Signing Key) <security@centos.org>"
Fingerprint: 99DB 70FA E1D7 CE22 7FB6 4882 05B5 55B3 8483 C65D
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : epel-release-8-11.el8.noarch 1/1
Running scriptlet: epel-release-8-11.el8.noarch 1/1
Verifying : epel-release-8-11.el8.noarch 1/1
Installed products updated.
Installed:
epel-release-8-11.el8.noarch
Complete!
net-tools: a collection of networking utilities, including commands such as ifconfig
[root@localhost ~]# yum install -y net-tools
Last metadata expiration check: 0:01:26 ago on Tue 13 Jul 2021 12:55:58 AM CST.
Package net-tools-2.0-0.52.20160912git.el8.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
vim: text editor
[root@localhost ~]# yum install -y vim
Last metadata expiration check: 0:01:56 ago on Tue 13 Jul 2021 12:55:58 AM CST.
Package vim-enhanced-2:8.0.1763-15.el8.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
rsync: file synchronization tool
[root@localhost ~]# yum install -y rsync
Last metadata expiration check: 0:02:15 ago on Tue 13 Jul 2021 12:55:58 AM CST.
Package rsync-3.1.3-12.el8.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
Add the required user
Create the hadoop user and set its password
[root@localhost ~]# useradd hadoop
[root@localhost ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
Configure the user's sudo privileges
[root@localhost ~]# vim /etc/sudoers
Edit the /etc/sudoers file and add a line below the %wheel line, as shown:
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
## Allows members of the 'sys' group to run networking, software,
## service management apps and more.
# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
# %wheel ALL=(ALL) NOPASSWD: ALL
hadoop ALL=(ALL) NOPASSWD: ALL
Create the required directories
Create the module and software folders under /opt
[root@localhost ~]# mkdir /opt/module
[root@localhost ~]# mkdir /opt/software
Change the owner and group of the module and software folders to the hadoop user
[root@localhost ~]# chown hadoop:hadoop /opt/module
[root@localhost ~]# chown hadoop:hadoop /opt/software
Remove bundled software
Uninstall the JDK that ships with the virtual machine. Note: if your virtual machine was a minimal install, this step is unnecessary.
[root@localhost ~]# rpm -qa | grep -i java | xargs -n1 rpm -e --nodeps
rpm -qa: list all installed RPM packages
grep -i: match case-insensitively
xargs -n1: pass one argument at a time to the next command
rpm -e --nodeps: remove a package without checking its dependencies
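The pipeline above can be illustrated without touching rpm: `xargs -n1` splits whatever the previous command prints and runs the target command once per item. A harmless sketch (the package names here are only examples):

```shell
# Each name printed by printf is handed to echo one at a time,
# just as each package name found by `rpm -qa | grep` is handed to `rpm -e`.
printf 'java-1.8.0-openjdk\njava-1.8.0-openjdk-headless\n' | xargs -n1 echo "would remove:"
```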
Reboot the virtual machine
[root@localhost ~]# reboot
Configure the cluster virtual machines - hadoop101
Clone the cluster virtual machines
Using the template machine hadoop100, clone three virtual machines: hadoop101, hadoop102, and hadoop103. Note: shut down hadoop100 before cloning.
Inspect the virtual machine's network interface
The following uses hadoop101 as an example:
- ens33
```basic
[root@localhost ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.89.129  netmask 255.255.255.0  broadcast 192.168.89.255
        inet6 fe80::20c:29ff:fe09:f272  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:09:f2:72  txqueuelen 1000  (Ethernet)
        RX packets 34012  bytes 49664594 (47.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8232  bytes 522556 (510.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>
virbr0: flags=4099<UP,BROADCAST,MULTICAST>
```
## Modify the virtual machine's network interface
The following uses hadoop101 as an example:
- ens33
```basic
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=ens33
DEVICE=ens33
PREFIX=24
## Modify the following parameters
ONBOOT=yes
UUID=bc7d4150-185b-41a2-91e8-7b1622ad38a1
IPADDR=192.168.89.101
GATEWAY=192.168.89.2
BOOTPROTO=static
DNS1=223.5.5.5
```
## If the virtual machine cannot find the NIC, bring it back up
ifconfig ens33 up
Change the virtual machine's hostname
The following uses hadoop101 as an example
[root@localhost ~]# vim /etc/hostname
hadoop101
Update the virtual machine's host mappings
The following uses hadoop101 as an example
[root@localhost ~]# vim /etc/hosts
## Add the following entries
192.168.89.101 hadoop101
192.168.89.102 hadoop102
192.168.89.103 hadoop103
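The three mappings follow a single pattern (hadoop10N at 192.168.89.10N), so they can be generated instead of typed out; a small sketch (adjust the subnet if yours differs):

```shell
# Print the /etc/hosts entries for hadoop101..hadoop103
for i in 1 2 3; do
    printf '192.168.89.10%d hadoop10%d\n' "$i" "$i"
done
```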
Reboot the cluster virtual machine
The following uses hadoop101 as an example
[root@localhost ~]# reboot
Update the host machine's host mappings
C:\Windows\System32\drivers\etc\hosts
## Add the following entries
192.168.89.101 hadoop101
192.168.89.102 hadoop102
192.168.89.103 hadoop103
Disable the system firewall
# Stop firewalld
$ systemctl stop firewalld.service
# Prevent firewalld from starting at boot
$ systemctl disable firewalld.service
Configure the time server
# Start the chronyd service
[hadoop@hadoop101 ~]$ sudo systemctl start chronyd
# Enable chronyd at boot
[hadoop@hadoop101 ~]$ sudo systemctl enable chronyd
# Verify the system time
[hadoop@hadoop101 ~]$ date
Install the development software
JDK
Download the JDK
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ cd /opt/software
[hadoop@hadoop101 software]$ wget https://download.oracle.com/otn/java/jdk/8u291-b10/d7fc238d0cbf4b0dac67be84580cfb4b/jdk-8u291-linux-x64.tar.gz
Install the JDK
The following uses hadoop101 as an example
[hadoop@hadoop101 software]$ tar -zxvf jdk-8u291-linux-x64.tar.gz -C /opt/module/
Configure the JDK
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ sudo vim /etc/profile.d/my_env.sh
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_291
export PATH=$PATH:$JAVA_HOME/bin
:wq
Apply the JDK environment variables
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ source /etc/profile
Test the JDK
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ java -version
Hadoop
Download Hadoop
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ cd /opt/software
[hadoop@hadoop101 software]$ wget https://mirrors.bfsu.edu.cn/apache/hadoop/common/hadoop-3.3.1/hadoop-3.3.1-aarch64.tar.gz
Install Hadoop
The following uses hadoop101 as an example
[hadoop@hadoop101 software]$ tar -zxvf hadoop-3.3.1-aarch64.tar.gz -C /opt/module/
Configure Hadoop
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ sudo vim /etc/profile.d/my_env.sh
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.3.1
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
:wq
Apply the Hadoop environment variables
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ source /etc/profile
Test Hadoop
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ hadoop version
Write cluster management scripts - hadoop101
Create the hadoop script directory
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ cd /home/hadoop/
[hadoop@hadoop101 ~]$ mkdir bin
[hadoop@hadoop101 ~]$ cd bin
Write the xsync script
The following uses hadoop101 as an example
[hadoop@hadoop101 bin]$ vim xsync
#!/bin/bash
# 1. Check the argument count
if [ $# -lt 1 ]
then
    echo "Not Enough Arguments!"
    exit
fi
# 2. Iterate over all cluster machines
for host in hadoop101 hadoop102 hadoop103
do
    echo ==================== $host ====================
    # 3. Iterate over all files/directories and send each one
    for file in "$@"
    do
        # 4. Check that the file exists
        if [ -e "$file" ]
        then
            # 5. Get the parent directory
            pdir=$(cd -P "$(dirname "$file")"; pwd)
            # 6. Get the file name
            fname=$(basename "$file")
            ssh $host "mkdir -p $pdir"
            rsync -av "$pdir/$fname" $host:"$pdir"
        else
            echo "$file does not exist!"
        fi
    done
done
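Steps 5 and 6 of the script are the subtle part: `cd -P` resolves symlinks and relative components, so rsync always receives a clean absolute path. A standalone sketch of that path-splitting, using a throwaway directory under /tmp:

```shell
# Split a possibly-relative path into an absolute parent dir and a file name,
# the same way xsync does before calling rsync.
mkdir -p /tmp/xsync-demo/conf
touch /tmp/xsync-demo/conf/core-site.xml
file=/tmp/xsync-demo/./conf/core-site.xml
pdir=$(cd -P "$(dirname "$file")"; pwd)   # /tmp/xsync-demo/conf
fname=$(basename "$file")                 # core-site.xml
echo "$pdir/$fname"
```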
Write the myhadoop.sh script
The following uses hadoop101 as an example
[hadoop@hadoop101 bin]$ vim myhadoop.sh
#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit
fi
case $1 in
"start")
    echo " =================== Starting the hadoop cluster ==================="
    echo " --------------- starting hdfs ---------------"
    ssh hadoop101 "/opt/module/hadoop-3.3.1/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.3.1/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh hadoop101 "/opt/module/hadoop-3.3.1/bin/mapred --daemon start historyserver"
;;
"stop")
    echo " =================== Stopping the hadoop cluster ==================="
    echo " --------------- stopping historyserver ---------------"
    ssh hadoop101 "/opt/module/hadoop-3.3.1/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.3.1/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh hadoop101 "/opt/module/hadoop-3.3.1/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac
Write the jpsall script
The following uses hadoop101 as an example
[hadoop@hadoop101 bin]$ vim jpsall
#!/bin/bash
for host in hadoop101 hadoop102 hadoop103
do
    echo =============== $host ===============
    ssh $host jps
done
Make the scripts executable
The following uses hadoop101 as an example
[hadoop@hadoop101 bin]$ chmod +x xsync
[hadoop@hadoop101 bin]$ chmod +x myhadoop.sh
[hadoop@hadoop101 bin]$ chmod +x jpsall
Configure the script directory environment variable
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ sudo vim /etc/profile.d/my_env.sh
#MY_BIN_HOME
export MY_BIN_HOME=/home/hadoop
export PATH=$PATH:$MY_BIN_HOME/bin
:wq
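PATH appends are order-sensitive, and every repeated `source /etc/profile` appends again. A small guard sketch (my_env.sh itself appends unconditionally, which is harmless but duplicates entries):

```shell
# Append /home/hadoop/bin to PATH only if it is not already present
dir=/home/hadoop/bin
case ":$PATH:" in
    *":$dir:"*) ;;                # already on PATH, nothing to do
    *) PATH="$PATH:$dir" ;;
esac
echo "$PATH"
```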
Apply the environment variables
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ source /etc/profile
Configure passwordless SSH across the cluster - hadoop101~103
SSH connectivity test
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ ssh hadoop@hadoop102
The authenticity of host 'hadoop102 (192.168.19.102)' can't be established.
ECDSA key fingerprint is SHA256:XKjW0b6BpH8OY5Em7AtimGiNK0b7a49FUTbGfXs4aVI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'hadoop102,192.168.19.102' (ECDSA) to the list of known hosts.
hadoop@hadoop102's password:
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Sat Jul 3 23:47:52 2021
[hadoop@hadoop102 ~]$ exit
logout
Connection to hadoop102 closed.
[hadoop@hadoop101 ~]$ ssh hadoop@hadoop103
The authenticity of host 'hadoop103 (192.168.19.103)' can't be established.
ECDSA key fingerprint is SHA256:XKjW0b6BpH8OY5Em7AtimGiNK0b7a49FUTbGfXs4aVI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'hadoop103,192.168.19.103' (ECDSA) to the list of known hosts.
hadoop@hadoop103's password:
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Sun Jul 4 00:15:45 2021
[hadoop@hadoop103 ~]$ exit
logout
Connection to hadoop103 closed.
[hadoop@hadoop101 bin]$
Generate the SSH key pair
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:JXmXVo9ChneVj7NHSUmWVEdRAmgLMfL/y197ZFhbejs hadoop@hadoop101
The key's randomart image is:
+---[RSA 3072]----+
| . o. o+.=B&|
| o.o+o.ooO.|
| +ooo=.o.+|
| =.o .o.=|
| S . Bo|
| . +.=|
| . =o|
| . . Eo|
| o...o|
+----[SHA256]-----+
[hadoop@hadoop101 ~]$
Copy the SSH public key
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ ssh-copy-id hadoop101
The authenticity of host 'hadoop101 (192.168.19.101)' can't be established.
ECDSA key fingerprint is SHA256:XKjW0b6BpH8OY5Em7AtimGiNK0b7a49FUTbGfXs4aVI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@hadoop101's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'hadoop101'"
and check to make sure that only the key(s) you wanted were added.
[hadoop@hadoop101 ~]$ ssh-copy-id hadoop102
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@hadoop102's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'hadoop102'"
and check to make sure that only the key(s) you wanted were added.
[hadoop@hadoop101 ~]$ ssh-copy-id hadoop103
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@hadoop103's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'hadoop103'"
and check to make sure that only the key(s) you wanted were added.
Test passwordless SSH login
The following uses hadoop101 as an example
[hadoop@hadoop101 ~]$ ssh hadoop102
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Sun Jul 4 00:45:41 2021 from 192.168.19.101
[hadoop@hadoop102 ~]$ exit
logout
Connection to hadoop102 closed.
[hadoop@hadoop101 bin]$ ssh hadoop103
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Sun Jul 4 00:45:51 2021 from 192.168.19.101
[hadoop@hadoop103 ~]$ exit
logout
Connection to hadoop103 closed.
[hadoop@hadoop101 bin]$
Configure passwordless SSH on the other nodes
Repeat the steps above on hadoop102
Repeat the steps above on hadoop103
Configure the Hadoop cluster - hadoop101
Default Hadoop configuration files
- Default core-default.xml: hadoop-common-3.3.1.jar/core-default.xml
- Default hdfs-default.xml: hadoop-hdfs-3.3.1.jar/hdfs-default.xml
- Default yarn-default.xml: hadoop-yarn-common-3.3.1.jar/yarn-default.xml
- Default mapred-default.xml: hadoop-mapreduce-client-core-3.3.1.jar/mapred-default.xml
My Hadoop configuration files
Hadoop cluster plan
| | hadoop101 | hadoop102 | hadoop103 |
|---|---|---|---|
| HDFS | NameNode & DataNode | DataNode | SecondaryNameNode & DataNode |
| YARN | NodeManager | ResourceManager & NodeManager | NodeManager |
core
My $HADOOP_HOME/etc/hadoop/core-site.xml configuration file
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Address of the NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop101:8020</value>
    </property>
    <!-- Hadoop data storage directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.3.1/data</value>
    </property>
    <!-- Static user for the HDFS web UI -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>hadoop</value>
    </property>
</configuration>
```
hdfs
My $HADOOP_HOME/etc/hadoop/hdfs-site.xml configuration file
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
```
### yarn
- My $HADOOP_HOME/etc/hadoop/yarn-site.xml configuration file
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Use mapreduce_shuffle as the MR auxiliary service -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Address of the ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop102</value>
</property>
<!-- Environment variables inherited by containers -->
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Log aggregation server address -->
<property>
<name>yarn.log.server.url</name>
<value>http://hadoop101:19888/jobhistory/logs</value>
</property>
<!-- Retain aggregated logs for 7 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
</configuration>
```
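The retention value is in seconds; 7 days works out as 7 × 24 × 3600 = 604800, which can be sanity-checked in the shell:

```shell
# 7 days expressed in seconds (yarn.log-aggregation.retain-seconds)
echo $((7 * 24 * 3600))   # prints 604800
```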
mapred
My $HADOOP_HOME/etc/hadoop/mapred-site.xml configuration file
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop101:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop101:19888</value>
    </property>
</configuration>
```
### workers
Cluster machines
- My $HADOOP_HOME/etc/hadoop/workers configuration file
```basic
hadoop101
hadoop102
hadoop103
```
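Hadoop's start scripts read this file one host per line to decide where to launch worker daemons, and the same one-host-per-line convention is handy for custom tooling. A sketch that iterates a workers-style file (written to /tmp so it is self-contained):

```shell
# Iterate over hosts listed one per line in a workers-style file
printf 'hadoop101\nhadoop102\nhadoop103\n' > /tmp/workers
while read -r host; do
    echo "worker: $host"
done < /tmp/workers
```

Note that blank lines or trailing spaces in workers would be treated as (empty or wrong) host names, so keep the file to exactly one hostname per line.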
Distribute the Hadoop cluster software
module
Distribute the software and configuration under /opt/module on hadoop101 to hadoop102 and hadoop103
[hadoop@hadoop101 bin]$ xsync /opt/module
profile.d
Distribute the configuration files under /etc/profile.d on hadoop101 to hadoop102 and hadoop103
[hadoop@hadoop101 bin]$ xsync /etc/profile.d
Start the Hadoop cluster
Hadoop cluster plan
| | hadoop101 | hadoop102 | hadoop103 |
|---|---|---|---|
| HDFS | NameNode & DataNode | DataNode | SecondaryNameNode & DataNode |
| YARN | NodeManager | ResourceManager & NodeManager | NodeManager |
First-time startup of the Hadoop cluster
If the cluster is being started for the first time, format the NameNode on the hadoop101 node. Note: formatting the NameNode generates a new cluster ID; if the NameNode and the DataNodes end up with different cluster IDs, the cluster cannot find its previous data. If the cluster fails while running and the NameNode must be reformatted, first stop the namenode and datanode processes and delete the data and logs directories on every machine, then reformat.
Format the NameNode - hadoop101
[hadoop@hadoop101 ~]$ hdfs namenode -format
WARNING: /opt/module/hadoop-3.3.1/logs does not exist. Creating.
2021-07-04 13:19:56,315 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop101/192.168.19.101
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 3.3.1
1/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.40.v20210413.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/websocket-server-9.4.40.v20210413.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.40.v20210413.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/jetty-jndi-9.4.40.v20210413.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/asm-commons-9.0.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/websocket-servlet-9.4.40.v20210413.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/jetty-annotations-9.4.40.v20210413.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/objenesis-2.6.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.10.5.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/jackson-jaxrs-base-2.10.5.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/asm-analysis-9.0.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/jetty-plus-9.4.40.v20210413.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-registry-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-common-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-router-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-services-api-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-services-core-3.3.1.jar:/opt/module/hadoop
-3.3.1/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-tests-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-client-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-api-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-common-3.3.1.jar:/opt/module/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.3.1.jar
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r a3b9c37a397ad4188041dd80621bdeefc46885f2; compiled by 'ubuntu' on 2021-06-15T10:51Z
STARTUP_MSG: java = 1.8.0_291
************************************************************/
2021-07-04 13:19:56,397 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-07-04 13:19:56,831 INFO namenode.NameNode: createNameNode [-format]
2021-07-04 13:19:57,746 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2021-07-04 13:19:59,521 INFO namenode.NameNode: Formatting using clusterid: CID-7de8cfe3-e091-47c1-a508-ffa2cfc06124
2021-07-04 13:19:59,708 INFO namenode.FSEditLog: Edit logging is async:true
2021-07-04 13:19:59,866 INFO namenode.FSNamesystem: KeyProvider: null
2021-07-04 13:19:59,871 INFO namenode.FSNamesystem: fsLock is fair: true
2021-07-04 13:19:59,871 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2021-07-04 13:20:00,007 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
2021-07-04 13:20:00,007 INFO namenode.FSNamesystem: supergroup = supergroup
2021-07-04 13:20:00,007 INFO namenode.FSNamesystem: isPermissionEnabled = true
2021-07-04 13:20:00,007 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2021-07-04 13:20:00,007 INFO namenode.FSNamesystem: HA Enabled: false
2021-07-04 13:20:00,284 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2021-07-04 13:20:00,327 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2021-07-04 13:20:00,327 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2021-07-04 13:20:00,331 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2021-07-04 13:20:00,331 INFO blockmanagement.BlockManager: The block deletion will start around 2021 Jul 04 13:20:00
2021-07-04 13:20:00,335 INFO util.GSet: Computing capacity for map BlocksMap
2021-07-04 13:20:00,335 INFO util.GSet: VM type = 64-bit
2021-07-04 13:20:00,345 INFO util.GSet: 2.0% max memory 904.8 MB = 18.1 MB
2021-07-04 13:20:00,345 INFO util.GSet: capacity = 2^21 = 2097152 entries
2021-07-04 13:20:00,382 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2021-07-04 13:20:00,382 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2021-07-04 13:20:00,391 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2021-07-04 13:20:00,391 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2021-07-04 13:20:00,391 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2021-07-04 13:20:00,392 INFO blockmanagement.BlockManager: defaultReplication = 3
2021-07-04 13:20:00,392 INFO blockmanagement.BlockManager: maxReplication = 512
2021-07-04 13:20:00,392 INFO blockmanagement.BlockManager: minReplication = 1
2021-07-04 13:20:00,392 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
2021-07-04 13:20:00,392 INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2021-07-04 13:20:00,392 INFO blockmanagement.BlockManager: encryptDataTransfer = false
2021-07-04 13:20:00,392 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2021-07-04 13:20:00,555 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2021-07-04 13:20:00,555 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2021-07-04 13:20:00,555 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2021-07-04 13:20:00,555 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2021-07-04 13:20:00,724 INFO util.GSet: Computing capacity for map INodeMap
2021-07-04 13:20:00,724 INFO util.GSet: VM type = 64-bit
2021-07-04 13:20:00,724 INFO util.GSet: 1.0% max memory 904.8 MB = 9.0 MB
2021-07-04 13:20:00,724 INFO util.GSet: capacity = 2^20 = 1048576 entries
2021-07-04 13:20:00,725 INFO namenode.FSDirectory: ACLs enabled? true
2021-07-04 13:20:00,725 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2021-07-04 13:20:00,725 INFO namenode.FSDirectory: XAttrs enabled? true
2021-07-04 13:20:00,728 INFO namenode.NameNode: Caching file names occurring more than 10 times
2021-07-04 13:20:00,736 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2021-07-04 13:20:00,738 INFO snapshot.SnapshotManager: SkipList is disabled
2021-07-04 13:20:00,784 INFO util.GSet: Computing capacity for map cachedBlocks
2021-07-04 13:20:00,784 INFO util.GSet: VM type = 64-bit
2021-07-04 13:20:00,784 INFO util.GSet: 0.25% max memory 904.8 MB = 2.3 MB
2021-07-04 13:20:00,784 INFO util.GSet: capacity = 2^18 = 262144 entries
2021-07-04 13:20:00,807 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2021-07-04 13:20:00,807 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2021-07-04 13:20:00,807 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2021-07-04 13:20:00,859 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2021-07-04 13:20:00,859 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2021-07-04 13:20:00,861 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2021-07-04 13:20:00,861 INFO util.GSet: VM type = 64-bit
2021-07-04 13:20:00,861 INFO util.GSet: 0.029999999329447746% max memory 904.8 MB = 278.0 KB
2021-07-04 13:20:00,861 INFO util.GSet: capacity = 2^15 = 32768 entries
2021-07-04 13:20:00,973 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1424670839-192.168.19.101-1625376000899
2021-07-04 13:20:01,098 INFO common.Storage: Storage directory /opt/module/hadoop-3.3.1/data/dfs/name has been successfully formatted.
2021-07-04 13:20:01,240 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/module/hadoop-3.3.1/data/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2021-07-04 13:20:01,519 INFO namenode.FSImageFormatProtobuf: Image file /opt/module/hadoop-3.3.1/data/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 401 bytes saved in 0 seconds .
2021-07-04 13:20:01,600 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2021-07-04 13:20:01,674 INFO namenode.FSNamesystem: Stopping services started for active state
2021-07-04 13:20:01,674 INFO namenode.FSNamesystem: Stopping services started for standby state
2021-07-04 13:20:01,743 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2021-07-04 13:20:01,743 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop101/192.168.19.101
************************************************************/
[hadoop@hadoop101 ~]$
Start HDFS - hadoop101
Start HDFS on the node where the NameNode is configured (hadoop101):
[hadoop@hadoop101 ~]$ cd /opt/module/hadoop-3.3.1/
[hadoop@hadoop101 hadoop-3.3.1]$ sbin/start-dfs.sh
Starting namenodes on [hadoop101]
Starting datanodes
hadoop102: WARNING: /opt/module/hadoop-3.3.1/logs does not exist. Creating.
hadoop103: WARNING: /opt/module/hadoop-3.3.1/logs does not exist. Creating.
Starting secondary namenodes [hadoop103]
2021-07-04 13:20:57,294 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Start the MapReduce JobHistoryServer - hadoop101
Start the JobHistoryServer on the node where it is configured (hadoop101):
[hadoop@hadoop101 ~]$ mapred --daemon start historyserver
Start YARN - hadoop102
Start YARN on the node where the ResourceManager is configured (hadoop102):
[hadoop@hadoop102 ~]$ cd /opt/module/hadoop-3.3.1/
[hadoop@hadoop102 hadoop-3.3.1]$ sbin/start-yarn.sh
Starting resourcemanager
Starting nodemanagers
[hadoop@hadoop102 hadoop-3.3.1]$
Hadoop cluster tests
Web UI checks
HDFS NameNode
- View data stored on HDFS
http://hadoop101:9870
MapReduce JobHistory
- View information about MapReduce jobs that have run
http://hadoop101:19888/jobhistory
SecondaryNameNode (2NN)
- View the SecondaryNameNode status
http://hadoop103:9868
YARN ResourceManager
- View jobs running on YARN
http://hadoop102:8088
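The four URLs above can also be smoke-tested from a shell instead of a browser; a minimal sketch, assuming the hadoop10x hostnames resolve from wherever you run it:

```shell
# Probe each web UI and print the HTTP status code (200 means the daemon is up).
for url in http://hadoop101:9870 http://hadoop101:19888/jobhistory \
           http://hadoop103:9868 http://hadoop102:8088; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  echo "$url -> HTTP $code"
done
```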
Cluster process check - jpsall
| | hadoop101 | hadoop102 | hadoop103 |
|---|---|---|---|
| HDFS | NameNode & DataNode | DataNode | SecondaryNameNode & DataNode |
| YARN | NodeManager | ResourceManager & NodeManager | NodeManager |
[hadoop@hadoop101 hadoop-3.3.1]$ jpsall
=============== hadoop101 ===============
5473 NodeManager
5330 JobHistoryServer
4854 NameNode
6761 Jps
5001 DataNode
=============== hadoop102 ===============
4933 NodeManager
6166 Jps
4794 ResourceManager
4591 DataNode
=============== hadoop103 ===============
4881 SecondaryNameNode
4754 DataNode
5973 Jps
5021 NodeManager
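jpsall is not a stock Hadoop command; it is typically a small user-written wrapper. A minimal sketch of such a script, assuming passwordless SSH to all three nodes and that it is saved somewhere on PATH (e.g. ~/bin/jpsall):

```shell
#!/bin/bash
# jpsall: print the running Java processes on every cluster node.
for host in hadoop101 hadoop102 hadoop103; do
  echo "=============== $host ==============="
  ssh "$host" jps
done
```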
HDFS file operation tests - hadoop101
Create an HDFS directory
[hadoop@hadoop101 hadoop-3.3.1]$ hadoop fs -mkdir /input
Upload a small file to HDFS
[hadoop@hadoop101 hadoop-3.3.1]$ hadoop fs -put /opt/module/jdk1.8.0_291/jmc.txt /input
Upload a large file to HDFS
[hadoop@hadoop101 hadoop-3.3.1]$ hadoop fs -put /opt/software/hadoop-3.3.1-aarch64.tar.gz /input
Download a large file from HDFS
[hadoop@hadoop101 hadoop-3.3.1]$ mkdir /home/hadoop/output/
[hadoop@hadoop101 hadoop-3.3.1]$ hadoop fs -get /input/hadoop-3.3.1-aarch64.tar.gz /home/hadoop/output/
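To confirm the large-file round trip through HDFS was lossless, the local source archive and the downloaded copy can be compared by checksum (the paths are the ones used above):

```shell
# Identical MD5 digests confirm the upload/download did not corrupt the archive.
a=$(md5sum /opt/software/hadoop-3.3.1-aarch64.tar.gz | awk '{print $1}')
b=$(md5sum /home/hadoop/output/hadoop-3.3.1-aarch64.tar.gz | awk '{print $1}')
if [ "$a" = "$b" ]; then echo "checksums match"; else echo "checksums differ"; fi
```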
Hadoop cluster maintenance
Start/stop whole services
(1) Start/stop all of HDFS
start-dfs.sh / stop-dfs.sh
(2) Start/stop all of YARN
start-yarn.sh / stop-yarn.sh
Start/stop individual components
(1) Start/stop the HDFS namenode
hdfs --daemon start/stop namenode
(2) Start/stop an HDFS datanode
hdfs --daemon start/stop datanode
(3) Start/stop the HDFS secondarynamenode
hdfs --daemon start/stop secondarynamenode
(4) Start/stop the YARN resourcemanager
yarn --daemon start/stop resourcemanager
(5) Start/stop a YARN nodemanager
yarn --daemon start/stop nodemanager
Common Hadoop cluster ports
| Port | Hadoop 2.x | Hadoop 3.x |
|---|---|---|
| NameNode internal RPC | 8020 / 9000 | 8020 / 9000 / 9820 |
| NameNode HTTP UI | 50070 | 9870 |
| MapReduce job view (YARN ResourceManager web UI) | 8088 | 8088 |
| History server web UI | 19888 | 19888 |
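Which of these ports a given node is actually listening on can be checked locally with ss; a quick sketch (run on each node in turn):

```shell
# List listening TCP sockets that match the Hadoop ports discussed above.
ss -lnt 2>/dev/null | grep -E ':(9870|8088|19888|9868|8020|9000|9820)\b' \
  || echo "none of the listed ports are open on this node"
```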
Hadoop cluster troubleshooting
Only one DataNode starts
hadoop101
[hadoop@hadoop101 current]$ pwd
/opt/module/hadoop-3.3.1/data/dfs/name/current
[hadoop@hadoop101 current]$ ll
total 1040
-rw-rw-r--. 1 hadoop hadoop 1048576 Jul 4 14:33 edits_inprogress_0000000000000000001
-rw-rw-r--. 1 hadoop hadoop 401 Jul 4 13:20 fsimage_0000000000000000000
-rw-rw-r--. 1 hadoop hadoop 62 Jul 4 13:20 fsimage_0000000000000000000.md5
-rw-rw-r--. 1 hadoop hadoop 2 Jul 4 13:20 seen_txid
-rw-rw-r--. 1 hadoop hadoop 219 Jul 4 13:20 VERSION
[hadoop@hadoop101 current]$ cat VERSION
#Sun Jul 04 13:20:01 CST 2021
namespaceID=2084417269
clusterID=CID-7de8cfe3-e091-47c1-a508-ffa2cfc06124
cTime=1625376000899
storageType=NAME_NODE
blockpoolID=BP-1424670839-192.168.19.101-1625376000899
layoutVersion=-66
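A common cause of only one DataNode starting is a clusterID mismatch: re-running hdfs namenode -format generates a new clusterID in the NameNode VERSION file shown above, while the DataNodes' data/dfs/data/current/VERSION files still hold the old one, so those DataNodes refuse to register. A recovery sketch for a test cluster, assuming passwordless SSH (this wipes all HDFS data, so it is only acceptable on a disposable cluster):

```shell
#!/bin/bash
# Wipe and re-format HDFS after an accidental re-format left mismatched clusterIDs.
# WARNING: destroys all HDFS data -- test clusters only.
/opt/module/hadoop-3.3.1/sbin/stop-dfs.sh
for host in hadoop101 hadoop102 hadoop103; do
  ssh "$host" "rm -rf /opt/module/hadoop-3.3.1/data /opt/module/hadoop-3.3.1/logs"
done
/opt/module/hadoop-3.3.1/bin/hdfs namenode -format
/opt/module/hadoop-3.3.1/sbin/start-dfs.sh
```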
