Server Information

IP              Spec  Hostname
192.168.41.136  8G    manager
192.168.41.137  4G    worker1
192.168.41.138  4G    worker2

```bash
# Run on the manager node:
hostnamectl --static set-hostname manager
# Run on each worker node (replace [N] with the worker's number, e.g. worker1):
hostnamectl --static set-hostname worker[N]
```
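So the nodes can reach each other by the hostnames set above, the mapping can also be added to `/etc/hosts` on every node. A sketch using the IPs from the table (run as root):

```shell
# Append the cluster hosts (IPs and hostnames from the table above)
cat >> /etc/hosts <<'EOF'
192.168.41.136 manager
192.168.41.137 worker1
192.168.41.138 worker2
EOF
```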

Docker Installation

```bash
# Install Docker
curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh --mirror Aliyun
# Start Docker
systemctl start docker
# Enable on boot
systemctl enable docker
# or, on SysV-init systems:
chkconfig docker on
```

Log in to the image registry:

```bash
docker login -u username -p password registry.cn-beijing.aliyuncs.com
```

Docker Swarm Installation

Open the Ports Swarm Uses

Cluster nodes must be able to reach each other on TCP 2377, TCP/UDP 7946, and UDP 4789.

  • TCP port 2377: cluster management traffic
  • TCP and UDP port 7946: node-to-node communication
  • UDP port 4789: overlay network (VXLAN) traffic

```bash
firewall-cmd --zone=public --add-port=2377/tcp --permanent
firewall-cmd --zone=public --add-port=7946/tcp --permanent
firewall-cmd --zone=public --add-port=7946/udp --permanent
firewall-cmd --zone=public --add-port=4789/tcp --permanent
firewall-cmd --zone=public --add-port=4789/udp --permanent
firewall-cmd --add-port=9000/tcp --permanent
firewall-cmd --add-port=2375/tcp --permanent
firewall-cmd --reload
```

List the open ports with `firewall-cmd --list-port`.
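After reloading the firewall, reachability of the TCP ports can be spot-checked from another node. A sketch, assuming `nc` (from the nmap-ncat package) is installed and 192.168.41.136 is this guide's manager IP:

```shell
# Cluster management port on the manager
nc -zv 192.168.41.136 2377
# Node-to-node gossip port
nc -zv 192.168.41.136 7946
```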

Initialize the Manager Node

```bash
docker swarm init --advertise-addr <MANAGER-IP>
# e.g. docker swarm init --advertise-addr 192.168.41.136
```

The command's output includes follow-up instructions explaining how to join a worker node to the swarm.

View the Join Token Before Adding Nodes

Before adding other nodes to the cluster, run the following command on the manager node; it prints the complete join command, including the token, to run on each new worker:

```bash
docker swarm join-token worker
```
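The output is a ready-to-run join command in the following form; run it on each worker node (the token below is a placeholder, use the one printed by your manager):

```shell
docker swarm join --token <WORKER-TOKEN> 192.168.41.136:2377
```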

Check the swarm cluster with `docker info` and `docker node ls`.
In the `docker node ls` output, AVAILABILITY indicates whether the swarm scheduler may assign tasks to a node. There are three states:

  • Active: the node can be assigned new tasks
  • Pause: the node cannot be assigned new tasks, but its existing tasks keep running
  • Drain: the node cannot be assigned new tasks; the scheduler shuts down its existing tasks and reschedules them onto available nodes
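A node's availability can be changed with `docker node update`, for example to drain a node before maintenance and return it afterwards (run on a manager; `worker1` is one of this cluster's node names):

```shell
# Stop scheduling onto worker1 and reschedule its tasks elsewhere
docker node update --availability drain worker1
# ...perform maintenance, then return the node to service
docker node update --availability active worker1
```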

To view a single node's status, run the following on that node: `docker node inspect self`

```json
[
    {
        "ID": "cavc7hrm8tj93do5rwtvm31lk",
        "Version": { "Index": 9 },
        "CreatedAt": "2022-05-10T06:43:05.112178247Z",
        "UpdatedAt": "2022-05-10T06:43:05.723262101Z",
        "Spec": {
            "Labels": {},
            "Role": "manager",
            "Availability": "active"
        },
        "Description": {
            "Hostname": "manager",
            "Platform": {
                "Architecture": "x86_64",
                "OS": "linux"
            },
            "Resources": {
                "NanoCPUs": 2000000000,
                "MemoryBytes": 8181821440
            },
            "Engine": {
                "EngineVersion": "20.10.15",
                "Plugins": [
                    { "Type": "Log", "Name": "awslogs" },
                    { "Type": "Log", "Name": "fluentd" },
                    { "Type": "Log", "Name": "gcplogs" },
                    { "Type": "Log", "Name": "gelf" },
                    { "Type": "Log", "Name": "journald" },
                    { "Type": "Log", "Name": "json-file" },
                    { "Type": "Log", "Name": "local" },
                    { "Type": "Log", "Name": "logentries" },
                    { "Type": "Log", "Name": "splunk" },
                    { "Type": "Log", "Name": "syslog" },
                    { "Type": "Network", "Name": "bridge" },
                    { "Type": "Network", "Name": "host" },
                    { "Type": "Network", "Name": "ipvlan" },
                    { "Type": "Network", "Name": "macvlan" },
                    { "Type": "Network", "Name": "null" },
                    { "Type": "Network", "Name": "overlay" },
                    { "Type": "Volume", "Name": "local" }
                ]
            },
            "TLSInfo": {
                "TrustRoot": "-----BEGIN CERTIFICATE-----\nMIIBajCCARCgAwIBAgIUfLOj3Lu72NmJ8Ru26dSuXqHGDVswCgYIKoZIzj0EAwIw\nEzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMjIwNTEwMDYzODAwWhcNNDIwNTA1MDYz\nODAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABAcfyEUnuOC/D0oWCzPkSJTWWKovxOih9zWcVdtPgIGzdUIvw0K8b3AnOEiU\nUsuwUJz/euc06MDzVEtQ8mHYOT+jQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBRVWdfH72g1VDgsE7vbg+l9VG2IgzAKBggqhkjO\nPQQDAgNIADBFAiBXOIuel39krzyWleTfl5fuDyZLuh+/OVVzEta4PgepDAIhAJ1n\nLN44fHJbBCIFxWXv1rP4O0yJzfMX/84xFB9LntMC\n-----END CERTIFICATE-----\n",
                "CertIssuerSubject": "MBMxETAPBgNVBAMTCHN3YXJtLWNh",
                "CertIssuerPublicKey": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEBx/IRSe44L8PShYLM+RIlNZYqi/E6KH3NZxV20+AgbN1Qi/DQrxvcCc4SJRSy7BQnP965zTowPNUS1DyYdg5Pw=="
            }
        },
        "Status": {
            "State": "ready",
            "Addr": "192.168.41.136"
        },
        "ManagerStatus": {
            "Leader": true,
            "Reachability": "reachable",
            "Addr": "192.168.41.136:2377"
        }
    }
]
```

To see how to add another manager node, run `docker swarm join-token manager` on an existing manager.
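Alternatively, an existing worker can be promoted to a manager without re-joining (standard swarm commands; `worker1` is this cluster's node name):

```shell
# Run on an existing manager
docker node promote worker1
# Demote it back to a worker later if needed
docker node demote worker1
```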

Create a Cross-Host Application Network

Create an overlay network (run on the manager node):

```bash
docker network create --attachable --driver overlay my_network
```

Output: `8fvtabqh3uce2bg19zkk84ype`

  • The --attachable flag lets standalone (non-swarm) containers join this network. The network name (my_network here) is arbitrary.
  • Once the overlay network my_network is created, every manager node in the swarm can access it. When creating a service, simply specify the existing overlay network, as in:

```bash
docker service create \
  --replicas 3 \
  --network my_network \
  --name myweb \
  nginx
```
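Once the service is up, its replicas and their placement across nodes can be checked with the standard swarm commands:

```shell
# List services and their replica counts
docker service ls
# Show which node each myweb task landed on
docker service ps myweb
```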

Create a UI Management Container

Manage the container cluster from a web UI using Portainer:

```bash
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
```

The Docker service unit file must be modified to expose the remote API:

```bash
vi /usr/lib/systemd/system/docker.service
```

Change the `ExecStart` line:

```
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
#ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
```

Then reload and restart:

```bash
systemctl daemon-reload
systemctl restart docker
```

Access: http://<IP>:9000

Choose "Remote" and fill in the server's IP and port; make sure port 2375 is open:

```bash
firewall-cmd --add-port=2375/tcp --permanent
firewall-cmd --reload
```
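Whether the remote API is actually reachable can be verified with a plain HTTP request against the Docker Engine API's `/version` endpoint (port 2375 as configured above; this works without credentials precisely because the socket is unauthenticated):

```shell
curl http://192.168.41.136:2375/version
```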

Note: this exposes the Docker remote API without any authentication (no TLS); managing a production cluster through a UI this way is not recommended.

Configuring Additional Nodes

Open port 2375 on the other node machines as well, and apply the same docker.service change there.