1 Run ZooKeeper

  # first start
  docker run -dit --name zk -p 2181:2181 zookeeper
  # restart
  docker restart zk
  # follow the logs
  docker logs -f zk
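Besides reading the logs, you can confirm that ZooKeeper is actually serving requests with its "ruok" four-letter word. This is a sketch with assumptions: `nc` must be installed on the host, port 2181 must be published as above, and recent ZooKeeper releases only answer if `ruok` is listed in `4lw.commands.whitelist`.

```shell
# Liveness check via ZooKeeper's "ruok" four-letter word (sketch).
# Assumes `nc` is installed and port 2181 is published; newer ZooKeeper
# versions must whitelist the command via 4lw.commands.whitelist.
zk_ok() {
  resp=$(echo ruok | nc -w 2 "${1:-localhost}" "${2:-2181}" 2>/dev/null)
  [ "$resp" = "imok" ]   # "imok" means the server is up and healthy
}
# Usage: zk_ok localhost 2181 && echo "zookeeper alive"
```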

2 Run Kafka

The startup commands are as follows. Replace 192.168.2.101 below with your own host IP; each additional Kafka node only needs a different port and broker ID.
After starting, check the logs and confirm each broker came up without errors.

  # first start (the trailing -t in the original was redundant: -dit already allocates a TTY)
  docker run -dit --name kafka0 -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=192.168.2.101:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.2.101:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 wurstmeister/kafka
  docker run -dit --name kafka1 -p 9093:9093 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=192.168.2.101:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.2.101:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 wurstmeister/kafka
  docker run -dit --name kafka2 -p 9094:9094 -e KAFKA_BROKER_ID=2 -e KAFKA_ZOOKEEPER_CONNECT=192.168.2.101:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.2.101:9094 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9094 wurstmeister/kafka
  # restart
  docker restart kafka0
  docker restart kafka1
  docker restart kafka2
  # follow the logs
  docker logs -f kafka0
  # remove the kafka containers
  docker rm -f kafka0
  docker rm -f kafka1
  docker rm -f kafka2
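Since the three broker commands above differ only in the broker ID and port, they can be generated with a small loop instead of being typed by hand. A minimal sketch; HOST is a placeholder for your own host IP:

```shell
# Generate the `docker run` commands for brokers 0..2 (sketch).
# HOST is a placeholder: substitute your own host IP.
HOST=192.168.2.101
cmds=""
for id in 0 1 2; do
  port=$((9092 + id))   # kafka0 -> 9092, kafka1 -> 9093, kafka2 -> 9094
  cmds="${cmds}docker run -dit --name kafka$id -p $port:$port \
-e KAFKA_BROKER_ID=$id -e KAFKA_ZOOKEEPER_CONNECT=$HOST:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://$HOST:$port \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:$port wurstmeister/kafka
"
done
printf '%s' "$cmds"   # one command per line; pipe to `sh` to actually run them
```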

3 Test

Create a topic with replication factor 3 and 5 partitions, and check that it was configured as expected. Then start consumers on kafka1 and kafka2, and produce messages from kafka0.

  # create the topic: 3 replicas, 5 partitions
  docker exec -ti kafka0 kafka-topics.sh --create --zookeeper 192.168.2.101:2181 --replication-factor 3 --partitions 5 --topic TestTopic
  # inspect the topic from each broker
  docker exec -ti kafka0 kafka-topics.sh --describe --zookeeper 192.168.2.101:2181 --topic TestTopic
  docker exec -ti kafka1 kafka-topics.sh --describe --zookeeper 192.168.2.101:2181 --topic TestTopic
  docker exec -ti kafka2 kafka-topics.sh --describe --zookeeper 192.168.2.101:2181 --topic TestTopic
  # consume and produce: lines typed into the kafka0 producer (last command) should appear on both consumers
  docker exec -ti kafka1 kafka-console-consumer.sh --bootstrap-server 192.168.2.101:9093 --topic TestTopic --from-beginning
  docker exec -ti kafka2 kafka-console-consumer.sh --bootstrap-server 192.168.2.101:9094 --topic TestTopic --from-beginning
  docker exec -ti kafka0 kafka-console-producer.sh --broker-list 192.168.2.101:9092 --topic TestTopic
  # performance tests
  docker exec -ti kafka0 kafka-producer-perf-test.sh --topic TestTopic --num-records 100000 --record-size 1000 --throughput 2000 --producer-props bootstrap.servers=192.168.2.101:9092
  docker exec -ti kafka0 kafka-consumer-perf-test.sh --bootstrap-server 192.168.2.101:9092 --topic TestTopic --fetch-size 1048576 --messages 100000 --threads 1
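Beyond the per-topic `--describe` output, it can also be useful to see the consumer groups that the console consumers created. A hedged sketch using `kafka-consumer-groups.sh`, which ships with the Kafka distribution inside the image; it assumes the brokers from section 2 are running and the IP matches your host:

```shell
# List consumer groups via one of the brokers (sketch; assumes the
# kafka0..kafka2 containers are running and the address is reachable).
list_groups() {
  docker exec -ti "$1" kafka-consumer-groups.sh \
    --bootstrap-server "$2" --list
}
# Usage: list_groups kafka0 192.168.2.101:9092
```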

4 Kafka Manager

After starting it with Docker, open http://localhost:9000/ ,
click "Add Cluster", fill in the first two fields (cluster name and ZooKeeper hosts), and save.

  docker run -dit -p 9000:9000 -e ZK_HOSTS="192.168.2.101:2181" hlebalbau/kafka-manager:stable
  # then open http://localhost:9000/
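kafka-manager can take a few seconds to come up, so the UI may not answer immediately after `docker run`. Rather than refreshing the browser, a small polling helper can wait for the port; this is a sketch and assumes `nc` is available on the host:

```shell
# Poll until a local TCP port accepts connections (sketch; assumes `nc`).
wait_port() {
  port=$1; tries=${2:-30}
  while [ "$tries" -gt 0 ]; do
    nc -z localhost "$port" 2>/dev/null && return 0
    tries=$((tries - 1))
    sleep 1
  done
  return 1   # gave up: port never opened within the retry budget
}
# Usage: wait_port 9000 && echo "kafka-manager UI is reachable on :9000"
```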

5 One-step setup with docker-compose

5.1 Single node
Create a docker-compose-kafka-single-broker.yml file:

  version: '3'
  services:
    zookeeper:
      image: wurstmeister/zookeeper
      container_name: zookeeper
      ports:
        - "2181:2181"
    kafka:
      image: wurstmeister/kafka
      ports:
        - "9092:9092"
      environment:
        KAFKA_ADVERTISED_HOST_NAME: 192.168.2.101   # Docker host IP; several may be listed
        KAFKA_CREATE_TOPICS: TestComposeTopic:2:1   # topic TestComposeTopic: 2 partitions, 1 replica
        KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
        KAFKA_BROKER_ID: 1
        KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.101:9092
        KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      container_name: kafka01
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock

Start the single-node Kafka with: docker-compose -f docker-compose-kafka-single-broker.yml up
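After `up`, the KAFKA_CREATE_TOPICS setting should have auto-created the topic. This can be verified from inside the broker container; a sketch, where `kafka01` (the container_name) and the `zookeeper` hostname both come from the compose file above, which puts the two services on the same compose network:

```shell
# Verify the auto-created topic from inside the compose network (sketch).
# `kafka01` is the container_name and `zookeeper` the service name from
# the compose file; expect 2 partitions with replication factor 1.
check_topic() {
  docker exec "$1" kafka-topics.sh --describe \
    --zookeeper zookeeper:2181 --topic TestComposeTopic
}
# Usage: check_topic kafka01
```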

5.2 Cluster
Create a docker-compose-kafka-cluster.yml file (the name the startup command below expects):

  version: '3'
  services:
    zookeeper:
      image: wurstmeister/zookeeper
      container_name: zookeeper
      ports:
        - "2181:2181"
    kafka1:
      image: wurstmeister/kafka
      ports:
        - "9092:9092"
      environment:
        KAFKA_ADVERTISED_HOST_NAME: 192.168.2.101
        KAFKA_CREATE_TOPICS: TestComposeTopic:4:3
        KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
        KAFKA_BROKER_ID: 1
        KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.101:9092
        KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      container_name: kafka01
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
    kafka2:
      image: wurstmeister/kafka
      ports:
        - "9093:9093"
      environment:
        KAFKA_ADVERTISED_HOST_NAME: 192.168.2.101
        KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
        KAFKA_BROKER_ID: 2
        KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.101:9093
        KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093
      container_name: kafka02
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
    kafka3:
      image: wurstmeister/kafka
      ports:
        - "9094:9094"
      environment:
        KAFKA_ADVERTISED_HOST_NAME: 192.168.2.101
        KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
        KAFKA_BROKER_ID: 3
        KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.101:9094
        KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9094
      container_name: kafka03
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock

Start the cluster with: docker-compose -f docker-compose-kafka-cluster.yml up

To tear down either setup, run `down` with the same -f file you used for `up`. It stops the running containers, then removes them along with the networks that were created; the -v flag additionally removes the volumes. For example:
  docker-compose -f docker-compose-kafka-single-broker.yml down -v