Resource Planning

| Component | LTSR003 | LTSR005 | LTSR006 | LTSR007 | LTSR008 |
| --- | --- | --- | --- | --- | --- |
| OS | CentOS 7.6 | CentOS 7.6 | CentOS 7.6 | CentOS 7.6 | CentOS 7.6 |
| JDK | JVM | JVM | JVM | JVM | JVM |
| ZooKeeper | N.A. | QuorumPeerMain | QuorumPeerMain | QuorumPeerMain | N.A. |
| Kafka | N.A. | kafka | kafka / kafka-eagle | kafka | N.A. |

Installation Media

Version: kafka_2.11-0.11.0.3.tgz
Download: http://kafka.apache.org/downloads.html

Environment Preparation

Install JDK

  1. Reference: 《CentOS7.6-安装JDK-1.8.221》

Install ZooKeeper

  1. Reference: 《[CentOS7.6-安装ZooKeeper-3.4.10](https://www.yuque.com/polaris-docs/test/centos-setup-zookeeper)》

Install Kafka

Extract

```shell
# Install on node LTSR005 first, then distribute to LTSR006 and LTSR007
cd ~/software/
wget https://archive.apache.org/dist/kafka/0.11.0.3/kafka_2.11-0.11.0.3.tgz
tar -zxvf kafka_2.11-0.11.0.3.tgz -C ~/modules/
rm kafka_2.11-0.11.0.3.tgz
```

Create Data Directories

```shell
# Kafka data directory (run on every Kafka cluster node)
sudo mkdir -p /data/kafka-logs
sudo chown -R bigdata:bigdata /data/kafka-logs
sudo chmod -R a+w /data/kafka-logs
```

Configuration

1. server.properties

```shell
vi ~/modules/kafka_2.11-0.11.0.3/config/server.properties
```

Settings:

```properties
# Unique broker id: must be an integer and unique within the cluster;
# this guide numbers brokers after their node (LTSR005 -> 5)
broker.id=5
# Listener address and port (9092 is the default port)
listeners=PLAINTEXT://192.168.0.15:9092
advertised.listeners=PLAINTEXT://192.168.0.15:9092
# Kafka data directory
log.dirs=/data/kafka-logs
# ZooKeeper connection string
zookeeper.connect=192.168.0.15:2181,192.168.0.16:2181,192.168.0.17:2181
# Allow topics to actually be deleted rather than only marked for deletion
delete.topic.enable=true
```

2. zookeeper.properties

```shell
vi ~/modules/kafka_2.11-0.11.0.3/config/zookeeper.properties
```

Settings:

```properties
# ZooKeeper data directory; must match the ZooKeeper cluster's own configuration
dataDir=/home/bigdata/modules/zookeeper-3.4.10/data
```

3. consumer.properties

```shell
vi ~/modules/kafka_2.11-0.11.0.3/config/consumer.properties
```

Settings:

```properties
# ZooKeeper connection string
zookeeper.connect=192.168.0.15:2181,192.168.0.16:2181,192.168.0.17:2181
```

4. producer.properties

```shell
vi ~/modules/kafka_2.11-0.11.0.3/config/producer.properties
```

Settings:

```properties
# Kafka cluster address
bootstrap.servers=192.168.0.15:9092,192.168.0.16:9092,192.168.0.17:9092
```

5. kafka-run-class.sh

```shell
vi ~/modules/kafka_2.11-0.11.0.3/bin/kafka-run-class.sh
```

Settings:

```shell
# Add a JAVA_HOME export at the top of the script
export JAVA_HOME=/home/bigdata/modules/jdk1.8.0_221
```
    

Distribute Kafka

```shell
cd ~/modules/
scp -r kafka_2.11-0.11.0.3 bigdata@LTSR006:~/modules/
scp -r kafka_2.11-0.11.0.3 bigdata@LTSR007:~/modules/
```

Then adjust server.properties on each receiving node:

```shell
vi ~/modules/kafka_2.11-0.11.0.3/config/server.properties
```

On LTSR006:

```properties
broker.id=6
listeners=PLAINTEXT://192.168.0.16:9092
advertised.listeners=PLAINTEXT://192.168.0.16:9092
```

On LTSR007:

```properties
broker.id=7
listeners=PLAINTEXT://192.168.0.17:9092
advertised.listeners=PLAINTEXT://192.168.0.17:9092
```
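
The per-node edits above follow a fixed pattern (broker.id plus the two listener lines), so they can be generated rather than typed by hand. A minimal sketch, assuming the id/IP pairs used in this guide; it writes the overrides to /tmp for review instead of touching the live configs:

```shell
#!/bin/sh
# Generate the per-broker override snippets for all three nodes.
# The id:ip pairs mirror this guide's plan (LTSR005/006/007); adjust as needed.
for pair in 5:192.168.0.15 6:192.168.0.16 7:192.168.0.17; do
    id=${pair%%:*}   # broker id (part before the colon)
    ip=${pair##*:}   # node IP   (part after the colon)
    cat > "/tmp/server-override-$id.properties" <<EOF
broker.id=$id
listeners=PLAINTEXT://$ip:9092
advertised.listeners=PLAINTEXT://$ip:9092
EOF
done

# Show the snippet destined for LTSR006
cat /tmp/server-override-6.properties
```

Each generated snippet can then be copied to the matching node and merged into its server.properties.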
    

Verify Kafka

1. Start the ZooKeeper cluster.
2. Start the Kafka cluster.

```shell
cd ~/modules/kafka_2.11-0.11.0.3/
# Start (on every node) -- either in the background with output discarded:
bin/kafka-server-start.sh config/server.properties >/dev/null 2>&1 &
# ...or with the built-in daemon mode (logs go to the logs/ directory):
bin/kafka-server-start.sh -daemon config/server.properties
# Stop (on every node; the script takes no arguments)
bin/kafka-server-stop.sh
```
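
To have brokers come back after a reboot, the daemon-mode command can be wrapped in a systemd unit (CentOS 7 uses systemd). A minimal sketch assuming the paths and the bigdata user from this guide; the unit name kafka.service is this guide's choice, not something Kafka ships:

```ini
# /etc/systemd/system/kafka.service (hypothetical unit; adjust paths per node)
[Unit]
Description=Apache Kafka broker
# Start after the network (and your ZooKeeper unit, if you create one) is up
After=network.target

[Service]
User=bigdata
Environment=JAVA_HOME=/home/bigdata/modules/jdk1.8.0_221
ExecStart=/home/bigdata/modules/kafka_2.11-0.11.0.3/bin/kafka-server-start.sh /home/bigdata/modules/kafka_2.11-0.11.0.3/config/server.properties
ExecStop=/home/bigdata/modules/kafka_2.11-0.11.0.3/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After installing the file, run `sudo systemctl daemon-reload` and `sudo systemctl enable --now kafka` on each node.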
3. Create a topic (on any Kafka cluster node):

```shell
cd ~/modules/kafka_2.11-0.11.0.3/
bin/kafka-topics.sh --zookeeper 192.168.0.15:2181,192.168.0.16:2181,192.168.0.17:2181 --create --topic test --replication-factor 1 --partitions 3
```

4. List topics (on any Kafka cluster node):

```shell
cd ~/modules/kafka_2.11-0.11.0.3/
bin/kafka-topics.sh --zookeeper 192.168.0.15:2181,192.168.0.16:2181,192.168.0.17:2181 --list
```
    
5. Produce messages:

```shell
cd ~/modules/kafka_2.11-0.11.0.3/
bin/kafka-console-producer.sh --broker-list 192.168.0.15:9092,192.168.0.16:9092,192.168.0.17:9092 --topic test
```

6. Consume messages:

```shell
cd ~/modules/kafka_2.11-0.11.0.3/
# --from-beginning: consume from the earliest message; without it, consumption
#   starts at the latest offset and earlier messages are skipped
# --bootstrap-server: on first use this creates an internal topic named
#   "__consumer_offsets" (50 partitions, 1 replica) to store consumer offsets
bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.15:9092,192.168.0.16:9092,192.168.0.17:9092 --topic test --from-beginning
```
    
7. Delete the topic (actually removed, since delete.topic.enable=true was set above):

```shell
cd ~/modules/kafka_2.11-0.11.0.3/
bin/kafka-topics.sh --delete --zookeeper 192.168.0.15:2181,192.168.0.16:2181,192.168.0.17:2181 --topic test
```