1 Running ZooKeeper
```shell
# First start
docker run -dit --name zk -p 2181:2181 zookeeper
# Restart
docker restart zk
# Follow the logs
docker logs -f zk
```
2 Running Kafka
The startup commands are below. Replace 192.168.2.101 with your own host IP; to add further Kafka nodes, only the port and broker id need to change.
After starting, check the logs and confirm each broker came up cleanly.
```shell
# First start
docker run -dit --name kafka0 -p 9092:9092 \
  -e KAFKA_BROKER_ID=0 \
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.2.101:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.2.101:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  wurstmeister/kafka
docker run -dit --name kafka1 -p 9093:9093 \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.2.101:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.2.101:9093 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 \
  wurstmeister/kafka
docker run -dit --name kafka2 -p 9094:9094 \
  -e KAFKA_BROKER_ID=2 \
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.2.101:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.2.101:9094 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9094 \
  wurstmeister/kafka

# Restart
docker restart kafka0
docker restart kafka1
docker restart kafka2

# Follow the logs
docker logs -f kafka0

# Remove the Kafka containers
docker rm -f kafka0
docker rm -f kafka1
docker rm -f kafka2
```
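Since the three `docker run` commands differ only in the broker id and port, they can be generated programmatically. The sketch below is a convenience generator (not part of the original setup); `HOST_IP` is an assumption and should be replaced with your own host address.

```python
# Sketch: generate the per-broker "docker run" commands shown above,
# varying only the broker id and port.
HOST_IP = "192.168.2.101"  # assumption: replace with your Docker host IP


def kafka_run_command(broker_id: int, port: int, host_ip: str = HOST_IP) -> str:
    """Build the docker run command for one Kafka broker."""
    return (
        f"docker run -dit --name kafka{broker_id} -p {port}:{port} "
        f"-e KAFKA_BROKER_ID={broker_id} "
        f"-e KAFKA_ZOOKEEPER_CONNECT={host_ip}:2181 "
        f"-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://{host_ip}:{port} "
        f"-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:{port} "
        f"wurstmeister/kafka"
    )


if __name__ == "__main__":
    # Brokers 0..2 on ports 9092..9094, matching the commands above.
    for i in range(3):
        print(kafka_run_command(i, 9092 + i))
```

Piping the output into a shell (or saving it as a script) reproduces the three commands above for any number of brokers.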
3 Testing
Create a topic with a replication factor of 3 and 5 partitions, and verify that it was configured correctly. Then start consumers on kafka1 and kafka2 and produce messages from kafka0; messages typed into the producer should appear on both consumers.
```shell
# Create the topic (3 replicas, 5 partitions)
docker exec -ti kafka0 kafka-topics.sh --create --zookeeper 192.168.2.101:2181 --replication-factor 3 --partitions 5 --topic TestTopic

# Describe the topic from each broker
docker exec -ti kafka0 kafka-topics.sh --describe --zookeeper 192.168.2.101:2181 --topic TestTopic
docker exec -ti kafka1 kafka-topics.sh --describe --zookeeper 192.168.2.101:2181 --topic TestTopic
docker exec -ti kafka2 kafka-topics.sh --describe --zookeeper 192.168.2.101:2181 --topic TestTopic

# Consume and produce: messages sent from the kafka0 producer (last command)
# should show up on the other two consumers
docker exec -ti kafka1 kafka-console-consumer.sh --bootstrap-server 192.168.2.101:9093 --topic TestTopic --from-beginning
docker exec -ti kafka2 kafka-console-consumer.sh --bootstrap-server 192.168.2.101:9094 --topic TestTopic --from-beginning
docker exec -ti kafka0 kafka-console-producer.sh --broker-list 192.168.2.101:9092 --topic TestTopic

# Performance tests
docker exec -ti kafka0 kafka-producer-perf-test.sh --topic TestTopic --num-records 100000 --record-size 1000 --throughput 2000 --producer-props bootstrap.servers=192.168.2.101:9092
docker exec -ti kafka0 kafka-consumer-perf-test.sh --bootstrap-server 192.168.2.101:9092 --topic TestTopic --fetch-size 1048576 --messages 100000 --threads 1
```
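To see why `--describe` shows each of the 5 partitions with a leader and two followers spread over brokers 0-2, the round-robin flavor of replica placement can be sketched as below. This is an illustrative simplification, not Kafka's exact assignment algorithm (which also randomizes the starting broker).

```python
# Illustrative sketch of round-robin replica placement: 5 partitions x
# 3 replicas over brokers [0, 1, 2]. The first replica of each partition
# is its preferred leader.
def assign_replicas(num_partitions: int, replication_factor: int, brokers: list):
    """Map each partition to an ordered replica list (leader first)."""
    assignment = {}
    for p in range(num_partitions):
        # Start each partition on the next broker; followers wrap around.
        assignment[p] = [brokers[(p + r) % len(brokers)]
                         for r in range(replication_factor)]
    return assignment


if __name__ == "__main__":
    for partition, replicas in assign_replicas(5, 3, [0, 1, 2]).items():
        print(f"Partition {partition}: leader={replicas[0]} replicas={replicas}")
```

With 3 brokers and a replication factor of 3, every broker holds a copy of every partition, which is why killing any single broker loses no data in this setup.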
4 Kafka Manager
After starting the container below, open http://localhost:9000/ ,
click Add Cluster, fill in the first two fields (cluster name and ZooKeeper hosts), and save.
```shell
docker run -dit -p 9000:9000 -e ZK_HOSTS="192.168.2.101:2181" hlebalbau/kafka-manager:stable
```
5 One-command setup with docker-compose
5.1 Single node
Create a docker-compose-kafka-single-broker.yml file:
```yaml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    container_name: kafka01
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.2.101   # Docker host IP; multiple values may be set
      KAFKA_CREATE_TOPICS: TestComposeTopic:2:1   # topic TestComposeTopic, 2 partitions, 1 replica
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.101:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```
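The `KAFKA_CREATE_TOPICS` value uses the wurstmeister image's `name:partitions:replicas` format (a comma-separated list, with an optional fourth field for a cleanup policy). A small parser sketch makes the format explicit; it is an illustration, not part of the image itself.

```python
# Sketch: parse a KAFKA_CREATE_TOPICS value of the form
# "name:partitions:replicas[,name:partitions:replicas...]", as used by
# the wurstmeister/kafka image's topic auto-creation.
def parse_create_topics(spec: str):
    """Return a list of (topic, partitions, replication_factor) tuples."""
    topics = []
    for entry in spec.split(","):
        # Take only the first three fields; a fourth (e.g. "compact")
        # is a cleanup policy and is ignored here.
        name, partitions, replicas = entry.strip().split(":")[:3]
        topics.append((name, int(partitions), int(replicas)))
    return topics


if __name__ == "__main__":
    print(parse_create_topics("TestComposeTopic:2:1"))
```

So `TestComposeTopic:2:1` above means topic `TestComposeTopic` with 2 partitions and 1 replica, and the cluster file later uses `TestComposeTopic:4:3` for 4 partitions over 3 replicas.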
Start the single-node Kafka with docker-compose -f docker-compose-kafka-single-broker.yml up.
5.2 Cluster
Create a docker-compose-kafka-cluster.yml file:
```yaml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka1:
    image: wurstmeister/kafka
    container_name: kafka01
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.2.101
      KAFKA_CREATE_TOPICS: TestComposeTopic:4:3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.101:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka2:
    image: wurstmeister/kafka
    container_name: kafka02
    ports:
      - "9093:9093"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.2.101
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.101:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka3:
    image: wurstmeister/kafka
    container_name: kafka03
    ports:
      - "9094:9094"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.2.101
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.101:9094
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9094
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```
Start the cluster with: docker-compose -f docker-compose-kafka-cluster.yml up
docker-compose down stops the running containers and removes them together with any networks that were created; add the -v flag to also remove volumes.

```shell
docker-compose -f docker-compose-kafka-single-broker.yml down -v
```
