Installation Preparation
1. Deployment Environment
Configuration Item | Details |
---|---|
OS | CentOS Linux release 7.7.1908 (Core) |
Software name | Deployment environment preparation |
SSH port | 10086 |
User role | root |
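Since SSH here uses the non-default port 10086, it may be worth confirming that sshd is actually listening on that port before starting; a quick check, assuming the port is set in the standard /etc/ssh/sshd_config:
[root@hadoop-node1 ~]# grep '^Port' /etc/ssh/sshd_config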
2. Installation List
Software Name | Description |
---|---|
pqtel-bigdata.tar | Big data component archive |
mysql-5.7.30-1.el7.x86_64.rpm-bundle.tar | MySQL RPM bundle archive |
screen-4.1.0-0.26.20120314git3c2946.el7.x86_64.rpm | screen RPM package |
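Assuming these packages have been uploaded to the directory chosen in the next section (here /home), their presence can be confirmed before continuing, for example:
[root@hadoop-node1 home]# ls -lh /home/*.tar /home/*.rpm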
3. Configuration
1) Choose the extraction directory
Run df -h and choose the largest partition as the extraction directory. The /home directory is used by default here; if you choose a different directory, adjust the paths in the later steps accordingly.
[root@hadoop-node1 home]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 908M 0 908M 0% /dev
tmpfs 920M 0 920M 0% /dev/shm
tmpfs 920M 8.7M 911M 1% /run
tmpfs 920M 0 920M 0% /sys/fs/cgroup
/dev/mapper/centos-root 17G 8.5G 8.6G 50% /
/dev/sda1 1014M 149M 866M 15% /boot
tmpfs 184M 0 184M 0% /run/user/0
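If you are not sure which partition a given directory belongs to, df also accepts a path and prints only the matching mount; for example, to check /home (in the output above it lives on the root filesystem /dev/mapper/centos-root):
[root@hadoop-node1 home]# df -h /home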
2) Extract the software package
Run the command:
[root@hadoop-node1 home]# tar -zxvf pqtel-bigdata.tar
Software list after extraction:
[root@hadoop-node1 pqtel-bigdata]# ll
total 4
drwxr-xr-x. 6 root root 134 Sep 9 2021 apache-zookeeper-3.5.7-bin
drwxr-xr-x. 8 631 503 143 Dec 13 2019 elasticsearch-6.8.6
drwxr-xr-x. 9 12334 systemd-journal 149 Oct 22 2019 hadoop-2.10.0
drwxr-xr-x. 7 root root 182 Sep 9 2021 hbase-2.0.6
drwxr-xr-x. 2 root root 82 Sep 11 2021 init
drwxr-xr-x. 8 10 143 255 Mar 15 2017 jdk1.8.0_131
drwxr-xr-x. 6 root root 89 Jul 7 2018 kafka_2.11-1.1.1
drwxr-xr-x. 2 root root 4096 Sep 10 2021 sbin
drwxr-xr-x. 13 mysql mysql 211 Feb 2 2020 spark-2.4.5-bin-hadoop2.7
3) Change the hostname
Set the hostname to hadoop-node1; if you use a different name, change the corresponding parameters in the later steps.
[root@hadoop-node1 home]# hostnamectl set-hostname hadoop-node1
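To verify the change, print the hostname again; hostnamectl makes the setting persistent, though an already open shell may still show the old name in its prompt:
[root@hadoop-node1 home]# hostnamectl status
[root@hadoop-node1 home]# hostname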
4) Update the hosts file
Edit command:
[root@hadoop-node1 home]# vi /etc/hosts
Modified content:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.251 hadoop-node1
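To confirm the entry resolves correctly, ping the hostname (assuming 192.168.0.251 is this node's address, as configured above):
[root@hadoop-node1 home]# ping -c 3 hadoop-node1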
5) Set up passwordless SSH login
- Generate an SSH key with ssh-keygen and simply press Enter through all the prompts (see the example after this step).
- SSH listens on port 22 by default, but the port is usually changed for security; here it is 10086. If you keep the default port, omit the -p 10086 option.
[root@hadoop-node1 home]# ssh-copy-id -i ~/.ssh/id_rsa.pub -p 10086 hadoop-node1
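As a concrete sketch of the key generation mentioned in the first bullet, plus a quick test once ssh-copy-id has run (press Enter through all ssh-keygen prompts; omit -p 10086 if sshd listens on the default port); a successful setup logs in without asking for a password:
[root@hadoop-node1 home]# ssh-keygen -t rsa
[root@hadoop-node1 home]# ssh -p 10086 hadoop-node1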
6) Configure the software environment
Edit command:
[root@hadoop-node1 home]# vi /etc/profile
Modified content:
export BASE_DIR=/home/pqtel-bigdata
export JAVA_HOME=${BASE_DIR}/jdk1.8.0_131
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
export HADOOP_HOME=${BASE_DIR}/hadoop-2.10.0
export PATH=${HADOOP_HOME}/bin:$PATH
export ZOOKEEPER_HOME=${BASE_DIR}/apache-zookeeper-3.5.7-bin
export PATH=$ZOOKEEPER_HOME/bin:$PATH
export HBASE_HOME=${BASE_DIR}/hbase-2.0.6
export PATH=$HBASE_HOME/bin:$PATH
export SPARK_HOME=${BASE_DIR}/spark-2.4.5-bin-hadoop2.7
export PATH=${SPARK_HOME}/bin:$PATH
Apply the changes:
[root@hadoop-node1 home]# source /etc/profile
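To confirm the variables are picked up, check that the bundled JDK and Hadoop binaries now resolve through PATH; the versions reported should match the extracted directories (jdk1.8.0_131 and hadoop-2.10.0):
[root@hadoop-node1 home]# echo $JAVA_HOME
[root@hadoop-node1 home]# java -version
[root@hadoop-node1 home]# hadoop version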
7) Disable SELinux
Temporarily disable:
[root@hadoop-node1 home]# setenforce 0
Permanently disable:
Edit command:
[root@hadoop-node1 home]# vi /etc/selinux/config
Modified content:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
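After setenforce 0 the running system is already in permissive mode; the SELINUX=disabled setting only takes effect after a reboot. The current mode can be checked at any time with:
[root@hadoop-node1 home]# getenforce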
8) Install screen
Install command:
[root@hadoop-node1 home]# rpm -ivh screen-4.1.0-0.26.20120314git3c2946.el7.x86_64.rpm
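screen keeps long-running sessions alive if the SSH connection drops, so the later deployment steps can be run inside one. Basic usage as a quick sketch (the session name bigdata is only an example):
Create a named session:
[root@hadoop-node1 home]# screen -S bigdata
Detach with Ctrl+A then D; list and reattach later:
[root@hadoop-node1 home]# screen -ls
[root@hadoop-node1 home]# screen -r bigdata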