ZooKeeper Cluster Deployment
1. Set the hostname on each of the three nodes
[root@zookeeper1 ~]# hostnamectl set-hostname zookeeper1
[root@zookeeper2 ~]# hostnamectl set-hostname zookeeper2
[root@zookeeper3 ~]# hostnamectl set-hostname zookeeper3
2. Configure the hostname mapping file (same content on all three nodes)
[root@zookeeper1 ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.40 zookeeper1
192.168.200.50 zookeeper2
192.168.200.60 zookeeper3
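The three mappings above can also be appended with a small idempotent loop instead of editing by hand. This is a sketch: `HOSTS_FILE` points at a scratch file here so it can be tried safely; on a real node set it to `/etc/hosts`.

```shell
# Append the cluster hostname mappings only if they are not present yet.
# HOSTS_FILE is a scratch path for demonstration; use /etc/hosts on a node.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.demo}"
touch "$HOSTS_FILE"

while read -r ip name; do
    # Skip entries whose hostname is already mapped (idempotent re-runs).
    grep -qw "$name" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
192.168.200.40 zookeeper1
192.168.200.50 zookeeper2
192.168.200.60 zookeeper3
EOF
```

Running it a second time adds nothing, which makes it safe to include in a provisioning script.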
3. Install the JDK environment on all three nodes (same steps on each node)
[root@zookeeper1 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
[root@zookeeper1 ~]# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
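If you script the installation, it is worth verifying the version string programmatically rather than by eye. A minimal sketch of parsing the `java -version` banner follows; the sample string is hard-coded here so the parsing can be shown without a JVM, but on a node you would use `ver=$(java -version 2>&1 | head -n1)`.

```shell
# Sample first line of `java -version` output (matches the transcript above).
ver='openjdk version "1.8.0_222"'

# Pull out the quoted version, then the major.minor.patch prefix before "_".
full=$(echo "$ver" | sed 's/.*"\(.*\)".*/\1/')
major=${full%%_*}
echo "full=$full major=$major"

case "$major" in
  1.8.*) echo "JDK 8 detected" ;;
  *)     echo "unexpected JDK version: $full" ;;
esac
```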
4. Copy the ZooKeeper tarball from the first node to the other two nodes
[root@zookeeper1 ~]# scp zookeeper-3.4.14.tar.gz 192.168.200.50:/root
root@192.168.200.50's password:
zookeeper-3.4.14.tar.gz 100% 36MB 114.3MB/s 00:00
[root@zookeeper1 ~]# scp zookeeper-3.4.14.tar.gz 192.168.200.60:/root
root@192.168.200.60's password:
zookeeper-3.4.14.tar.gz 100% 36MB 104.8MB/s 00:00
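The two transfers above can be driven by one loop when there are more nodes. This is a sketch under stated assumptions: `NODES` and `DRY_RUN` are names introduced here, and `DRY_RUN=1` prints the commands instead of executing them so the loop can be inspected without ssh access.

```shell
# Distribute the tarball to every remote node in the list.
NODES="192.168.200.50 192.168.200.60"
TARBALL=zookeeper-3.4.14.tar.gz
DRY_RUN=1   # set to 0 to actually run scp

for node in $NODES; do
    cmd="scp $TARBALL root@$node:/root/"
    if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"    # show what would run
    else
        $cmd
    fi
done
```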
5. Extract the ZooKeeper tarball on all three nodes (run on each node)
[root@zookeeper1 ~]# tar -zxvf zookeeper-3.4.14.tar.gz
6. Edit the configuration file on all three nodes (same settings on each node)
[root@zookeeper1 ~]# cd zookeeper-3.4.14/conf //enter the configuration directory
[root@zookeeper1 conf]# mv zoo_sample.cfg zoo.cfg //rename the sample configuration file
[root@zookeeper1 conf]# vi zoo.cfg //append the following lines at the end of the file
server.1=192.168.200.40:2888:3888
server.2=192.168.200.50:2888:3888
server.3=192.168.200.60:2888:3888
[root@zookeeper1 conf]# cd //return to root's home directory
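In each `server.N=ip:2888:3888` line, 2888 is the port followers use to connect to the leader and 3888 is the leader-election port. The append itself can be done non-interactively with a heredoc instead of `vi`; `ZOOCFG` points at a scratch file in this sketch, whereas on a node it would be `/root/zookeeper-3.4.14/conf/zoo.cfg`.

```shell
# Append the ensemble definition to the ZooKeeper config.
ZOOCFG=/tmp/zoo.cfg.demo
: > "$ZOOCFG"   # stand-in for the renamed zoo_sample.cfg

cat >> "$ZOOCFG" <<'EOF'
server.1=192.168.200.40:2888:3888
server.2=192.168.200.50:2888:3888
server.3=192.168.200.60:2888:3888
EOF
```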
7. Create the myid file
[root@zookeeper1 ~]# mkdir /tmp/zookeeper //first node
[root@zookeeper1 ~]# vi /tmp/zookeeper/myid
1
[root@zookeeper2 ~]# mkdir /tmp/zookeeper //second node
[root@zookeeper2 ~]# vi /tmp/zookeeper/myid
2
[root@zookeeper3 ~]# mkdir /tmp/zookeeper //third node
[root@zookeeper3 ~]# vi /tmp/zookeeper/myid
3
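Since the hostnames end in the same digit as the id, the per-node `myid` can be derived instead of typed three times. A sketch, with `HOST` and `DATADIR` as stand-in parameters; on a real node use `HOST=$(hostname)` and `DATADIR=/tmp/zookeeper` (the default `dataDir` in `zoo_sample.cfg`).

```shell
# Derive the ZooKeeper id from the trailing digits of the hostname
# (zookeeper1 -> 1, zookeeper2 -> 2, ...) and write the myid file.
HOST=zookeeper2          # stand-in for $(hostname)
DATADIR=/tmp/zookeeper.demo

id="${HOST##*[!0-9]}"    # strip everything up to the last non-digit
mkdir -p "$DATADIR"
echo "$id" > "$DATADIR/myid"
```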
8. Configure the yum repository
//required on all three nodes
[root@zookeeper1 ~]# cat /etc/yum.repos.d/local.repo
[centos]
name=centos7
baseurl=ftp://zookeeper1/centos
gpgcheck=0
enabled=1
[root@zookeeper1 ~]# yum clean all
[root@zookeeper1 ~]# yum repolist
9. Start the ZooKeeper service
[root@zookeeper1 ~]# cd zookeeper-3.4.14/bin //enter the bin directory on the first node
[root@zookeeper1 bin]# ./zkServer.sh start //start the ZooKeeper service
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper1 bin]# ./zkServer.sh status //check the service status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
[root@zookeeper2 ~]# cd zookeeper-3.4.14/bin //enter the bin directory on the second node
[root@zookeeper2 bin]# ./zkServer.sh start //start the ZooKeeper service
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... already running as process 10175.
[root@zookeeper2 bin]# ./zkServer.sh status //check the service status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader
[root@zookeeper3 ~]# cd zookeeper-3.4.14/bin //enter the bin directory on the third node
[root@zookeeper3 bin]# ./zkServer.sh start //start the service
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper3 bin]# ./zkServer.sh status //check the service status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
As the output shows, of the 3 nodes zookeeper2 is the leader and the other two are followers.
Note: if checking the status reports an error, start the service on all nodes first and then check again; leader election cannot complete until a quorum of nodes is running.
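To check all nodes from a script, the role can be pulled out of the `zkServer.sh status` output. In this sketch the sample text mirrors the transcript above so the parsing is testable offline; on a live node you would pipe in the real output, e.g. `./zkServer.sh status 2>&1`.

```shell
# Sample `zkServer.sh status` output (from the transcript above).
status_output='ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader'

# Extract the value after "Mode: " (leader / follower / standalone).
mode=$(printf '%s\n' "$status_output" | awk -F': ' '/^Mode:/ {print $2}')
echo "node role: $mode"
```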
Kafka Cluster Deployment
1. Copy the Kafka tarball from the first node to the other nodes
//copy to the second node
[root@zookeeper1 ~]# scp kafka_2.11-1.1.1.tgz 192.168.200.50:/root/
root@192.168.200.50's password:
kafka_2.11-1.1.1.tgz 100% 55MB 122.6MB/s 00:00
//copy to the third node
[root@zookeeper1 ~]# scp kafka_2.11-1.1.1.tgz 192.168.200.60:/root/
root@192.168.200.60's password:
kafka_2.11-1.1.1.tgz 100% 55MB 105.6MB/s 00:00
2. Extract the Kafka tarball on each of the three nodes
[root@zookeeper1 ~]# tar -zxvf kafka_2.11-1.1.1.tgz //extract on the first node
[root@zookeeper2 ~]# tar -zxvf kafka_2.11-1.1.1.tgz //extract on the second node
[root@zookeeper3 ~]# tar -zxvf kafka_2.11-1.1.1.tgz //extract on the third node
3. Edit the Kafka configuration file on the three nodes
[root@zookeeper1 ~]# cd kafka_2.11-1.1.1/config/ //on the first node, enter the Kafka configuration directory
[root@zookeeper1 config]# vi server.properties //edit the file
Note: comment out lines 21 and 123 of the file (the existing broker.id and zookeeper.connect entries, so they do not conflict with the values added below), then append the following lines at the end of the file
broker.id=1
zookeeper.connect=192.168.200.40:2181,192.168.200.50:2181,192.168.200.60:2181
listeners = PLAINTEXT://192.168.200.40:9092
//send the configured file to the second node
[root@zookeeper1 ~]# scp /root/kafka_2.11-1.1.1/config/server.properties 192.168.200.50:/root/kafka_2.11-1.1.1/config/server.properties
root@192.168.200.50's password:
server.properties 100% 6988 3.5MB/s 00:00
//send the configured file to the third node
[root@zookeeper1 ~]# scp /root/kafka_2.11-1.1.1/config/server.properties 192.168.200.60:/root/kafka_2.11-1.1.1/config/server.properties
root@192.168.200.60's password:
server.properties 100% 6988 5.5MB/s 00:00
[root@zookeeper2 ~]# cat /root/kafka_2.11-1.1.1/config/server.properties //after the transfer, edit these values on the second node (modified file shown)
broker.id=2
zookeeper.connect=192.168.200.40:2181,192.168.200.50:2181,192.168.200.60:2181
listeners = PLAINTEXT://192.168.200.50:9092
[root@zookeeper3 ~]# cat /root/kafka_2.11-1.1.1/config/server.properties //after the transfer, edit these values on the third node (modified file shown)
broker.id=3
zookeeper.connect=192.168.200.40:2181,192.168.200.50:2181,192.168.200.60:2181
listeners = PLAINTEXT://192.168.200.60:9092
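Since only `broker.id` and `listeners` differ between nodes, the three per-node fragments above can be generated from one loop. A sketch: the id:ip pairs match the transcript, while `OUTDIR` is a scratch directory introduced here so the loop can run anywhere.

```shell
# Generate the per-broker override fragment for each node.
OUTDIR=/tmp/kafka-conf.demo
ZK_CONNECT="192.168.200.40:2181,192.168.200.50:2181,192.168.200.60:2181"
mkdir -p "$OUTDIR"

for pair in 1:192.168.200.40 2:192.168.200.50 3:192.168.200.60; do
    id=${pair%%:*}   # broker id (before the colon)
    ip=${pair#*:}    # node address (after the colon)
    cat > "$OUTDIR/server.properties.$id" <<EOF
broker.id=$id
zookeeper.connect=$ZK_CONNECT
listeners=PLAINTEXT://$ip:9092
EOF
done
```

Each generated file could then be appended to the matching node's `server.properties` with `scp`, as in the transcript.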
4. Start the Kafka service on all three nodes
[root@zookeeper1 ~]# cd /root/kafka_2.11-1.1.1/bin/
[root@zookeeper1 bin]# ./kafka-server-start.sh -daemon ../config/server.properties
[root@zookeeper1 bin]# jps
12193 QuorumPeerMain
11380 WrapperSimpleApp
12726 Kafka
12790 Jps
[root@zookeeper2 bin]# ./kafka-server-start.sh -daemon ../config/server.properties
[root@zookeeper2 bin]# jps
12230 QuorumPeerMain
12682 Jps
12620 Kafka
[root@zookeeper3 ~]# cd /root/kafka_2.11-1.1.1/bin/
[root@zookeeper3 bin]# ./kafka-server-start.sh -daemon ../config/server.properties
[root@zookeeper3 bin]# jps
12615 Jps
12200 QuorumPeerMain
12553 Kafka
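The `jps` checks above can be automated: both `QuorumPeerMain` (ZooKeeper) and `Kafka` should appear on every node. In this sketch the sample text stands in for real `jps` output so the check is testable offline; on a node use `jps_out=$(jps)`.

```shell
# Sample `jps` output (from the transcript above).
jps_out='12200 QuorumPeerMain
12553 Kafka
12615 Jps'

# Verify both daemons are present; ok stays 1 only if all are found.
ok=1
for proc in QuorumPeerMain Kafka; do
    if printf '%s\n' "$jps_out" | grep -qw "$proc"; then
        echo "$proc: running"
    else
        echo "$proc: NOT running"
        ok=0
    fi
done
```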