1. Understanding docker0

First, clear out all the Docker networks.

Question: how does Docker handle container network access?

```shell
[root@localhost /]# docker run -d -P --name tomcat01 tomcat

# Check the container's internal network address with ip addr. On startup the
# container is given an eth0@if43 interface and an IP address allocated by Docker!
[root@localhost /]# docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
42: eth0@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
```

Question: can the Linux host ping inside a Docker container?

```shell
[root@localhost /]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.476 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.099 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.105 ms
...
```

The Linux host can ping the Docker container's internal address.

How it works:
1. Every time we start a container, Docker assigns it an IP address. As soon as Docker is installed, the host gains a docker0 network interface. This is bridge mode, implemented with the veth-pair technique.
Run `ip addr` on the host again and a new pair of interfaces has appeared:
![image.png](https://cdn.nlark.com/yuque/0/2020/png/1486377/1598157373752-b781e873-bae8-42fa-b27d-85fddb1fc90e.png#align=left&display=inline&height=278&margin=%5Bobject%20Object%5D&name=image.png&originHeight=555&originWidth=1213&size=310798&status=done&style=none&width=606.5)
2. Start one more container and test again; yet another pair of interfaces appears:
![image.png](https://cdn.nlark.com/yuque/0/2020/png/1486377/1598157388578-e10e6866-724a-410f-8f06-8cc8241fab85.png#align=left&display=inline&height=312&margin=%5Bobject%20Object%5D&name=image.png&originHeight=623&originWidth=1205&size=358671&status=done&style=none&width=602.5)
The interfaces that containers bring with them always come in pairs.
```shell
# The interfaces created for containers come in pairs.
# A veth-pair is a pair of virtual device interfaces that always appear together:
# one end attaches to the protocol stack, and the two ends are connected to each other.
# Because of this property, a veth-pair acts as a bridge connecting virtual network devices.
# OpenStack, Docker container links, and OVS connections all use the veth-pair technique.
```

3. Let's test whether tomcat01 and tomcat02 can ping each other:

```shell
[root@localhost /]# docker exec -it tomcat02 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.556 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.096 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.111 ms
...
# Conclusion: containers can ping one another!
```

A network model diagram:
(figure: Docker network model)
Conclusion: tomcat01 and tomcat02 share a common "router": docker0.
Unless a network is specified, all containers are routed by docker0, and Docker assigns each container an available IP by default.
Summary: Docker uses the Linux bridge; docker0 on the host machine acts as the bridge for Docker containers.
(figure: docker0 bridging diagram)
All of Docker's network interfaces are virtual, and virtual interfaces forward efficiently (useful for transferring files over the internal network).
As soon as a container is deleted, its corresponding veth pair is gone!
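The `eth0@if43` name seen in the container is itself evidence of the pairing: the number after `@if` is the interface index (ifindex) of the other end of the veth pair. The helper below is a hypothetical sketch, pure string handling with no Docker required, that extracts that peer index so a container's `eth0` can be matched to its host-side `veth` interface:

```shell
# Hypothetical helper: given an "ip addr" interface line such as
# "42: eth0@if43: <...> mtu 1500", print the peer end's ifindex (here 43).
peer_ifindex() {
  printf '%s\n' "$1" | sed -n 's/.*@if\([0-9][0-9]*\).*/\1/p'
}

peer_ifindex "42: eth0@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500"
# On the host, "ip -o link" would then show interface 43 as the matching vethXXXX.
```

Running it on the line from the earlier `ip addr` output prints `43`, which is the host-side half of the pair.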

2. --link

Consider a scenario: we've written a microservice that uses database url=ip:. If the database's IP changes while the project keeps running, we'd like to cope with that. Could we access the container by name instead of by IP?


```shell
# tomcat02 tries to ping by container name ("tomcat01") instead of by IP - and fails!
[root@localhost /]# docker exec -it tomcat02 ping tomcat01
ping: tomcat01: Name or service not known
# How do we solve this?
# With --link! The newly created tomcat03 can ping tomcat02 by name
[root@localhost /]# docker run -d -P --name tomcat03 --link tomcat02 tomcat
87a0e5f5e6da34a7f043ff6210b57f92f40b24d0d4558462e7746b2e19902721
[root@localhost /]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.132 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.116 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=3 ttl=64 time=0.116 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=4 ttl=64 time=0.116 ms
# Does it work in reverse? No - tomcat02 cannot ping tomcat03
[root@localhost /]# docker exec -it tomcat02 ping tomcat03
ping: tomcat03: Name or service not known
```

Digging deeper with inspect!

In fact, tomcat03 simply has a local mapping to tomcat02 configured:

```shell
# Look at the hosts file - here lies the trick!
[root@localhost /]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 tomcat02 95303c12f6d9 # an address binding, just like the Windows hosts file
172.17.0.4 87a0e5f5e6da
```

The essence:
--link simply adds a mapping to the linking container's hosts file.
Using --link is no longer recommended!
Prefer custom networks, which do not use docker0.
docker0's limitation: it does not support access by container name.
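The hosts-file mechanism can be imitated with plain shell to make the one-way behaviour obvious: only the linking container's file receives the extra line. The `resolve` function below is a hypothetical stand-in for the system's name lookup, not anything Docker ships:

```shell
# Fake the /etc/hosts that docker --link wrote into tomcat03 (values from above).
hosts_file=$(mktemp)
cat > "$hosts_file" << 'EOF'
127.0.0.1 localhost
172.17.0.3 tomcat02 95303c12f6d9
172.17.0.4 87a0e5f5e6da
EOF

# Hypothetical resolver: print the IP of the line whose aliases contain the name.
resolve() {
  awk -v name="$2" '{ for (i = 2; i <= NF; i++) if ($i == name) { print $1; exit } }' "$1"
}

resolve "$hosts_file" tomcat02   # -> 172.17.0.3 (the forward link works)
resolve "$hosts_file" tomcat03   # -> nothing: no reverse entry was ever written
```

Because tomcat02's own hosts file never gains a `tomcat03` line, the reverse ping fails exactly as seen above.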

3. Custom networks

List all the Docker networks: `docker network ls`.

Network modes

bridge: bridged mode (the default; networks you create yourself also use bridge mode)
none: no networking configured; rarely used
host: share the network with the host machine
container: connect to another container's network (rarely used; very limited)

Testing

```shell
# The command we used before (--net bridge is the default and can be omitted); that bridge is docker0
docker run -d -P --name tomcat01 tomcat
docker run -d -P --name tomcat01 --net bridge tomcat
# The two commands above are equivalent
# docker0 (the bridge network) does not support access by container name; --link can patch that
# But we can also define our own network!
# --driver bridge            network mode: bridged
# --subnet 192.168.0.0/16    subnet; usable container addresses 192.168.0.2 ~ 192.168.255.254
# --gateway 192.168.0.1      subnet gateway: 192.168.0.1
[root@localhost /]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
7ee3adf259c8c3d86fce6fd2c2c9f85df94e6e57c2dce5449e69a5b024efc28c
[root@localhost /]# docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
461bf576946c   bridge   bridge   local
c501704cf28e   host     host     local
7ee3adf259c8   mynet    bridge   local   # our custom network
9354fbcc160f   none     null     local
```
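The `--subnet` comment can be sanity-checked with shell arithmetic: a /16 prefix leaves 16 host bits, i.e. 2^16 = 65536 addresses, of which the network address, the broadcast address, and the gateway are not available to containers. A quick sketch, no Docker needed:

```shell
prefix=16
total=$(( 1 << (32 - prefix) ))   # 65536 addresses in 192.168.0.0/16
usable=$(( total - 3 ))           # minus network (.0.0), broadcast (.255.255) and gateway (.0.1)
echo "total=$total usable=$usable"
```

So a /16 gives roughly 65533 container addresses, far more than a typical cluster needs; a smaller subnet such as /24 works the same way.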

Our custom network is now created.

```shell
[root@localhost /]# docker run -d -P --name tomcat-net-01 --net mynet tomcat
b168a37d31fcdc2ff172fd969e4de6de731adf53a2960eeae3dd9c24e14fac67
[root@localhost /]# docker run -d -P --name tomcat-net-02 --net mynet tomcat
c07d634e17152ca27e318c6fcf6c02e937e6d5e7a1631676a39166049a44c03c
[root@localhost /]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "7ee3adf259c8c3d86fce6fd2c2c9f85df94e6e57c2dce5449e69a5b024efc28c",
        "Created": "2020-06-14T01:03:53.767960765+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "b168a37d31fcdc2ff172fd969e4de6de731adf53a2960eeae3dd9c24e14fac67": {
                "Name": "tomcat-net-01",
                "EndpointID": "f0af1c33fc5d47031650d07d5bc769e0333da0989f73f4503140151d0e13f789",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            },
            "c07d634e17152ca27e318c6fcf6c02e937e6d5e7a1631676a39166049a44c03c": {
                "Name": "tomcat-net-02",
                "EndpointID": "ba114b9bd5f3b75983097aa82f71678653619733efc1835db857b3862e744fbc",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
# Test pinging by IP again
[root@localhost /]# docker exec -it tomcat-net-01 ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.199 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.121 ms
^C
--- 192.168.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.121/0.160/0.199/0.039 ms
# Now, without --link, we can ping by name too!
[root@localhost /]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.145 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.117 ms
^C
--- tomcat-net-02 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 3ms
rtt min/avg/max/mdev = 0.117/0.131/0.145/0.014 ms
```

When we use a custom network, Docker maintains the name-to-IP mapping for us. This is the recommended way to work with networks!
Benefits:
redis: different clusters on different networks, keeping each cluster safe and healthy
mysql: different clusters on different networks, keeping each cluster safe and healthy

4. Connecting networks


```shell
# Test connecting tomcat01 (still on docker0) to mynet
[root@localhost /]# docker network connect mynet tomcat01
# After connecting, tomcat01 is placed onto the mynet network as well
# So one container now has two IP addresses - compare a public IP vs. a private IP on Alibaba Cloud
[root@localhost /]# docker network inspect mynet
```


```shell
# tomcat01 can now get through
[root@localhost /]# docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.124 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.162 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.107 ms
^C
--- tomcat-net-01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 3ms
rtt min/avg/max/mdev = 0.107/0.131/0.162/0.023 ms
# tomcat02, which was never connected, still cannot get through
[root@localhost /]# docker exec -it tomcat02 ping tomcat-net-01
ping: tomcat-net-01: Name or service not known

Conclusion: to reach a container on another network, connect it with docker network connect!

5. Hands-on: deploying a Redis cluster

Stop all running containers first:

```shell
docker stop $(docker ps -aq)
```

Start six Redis containers: three masters and three replicas.
Start them with a shell script!

```shell
# Create the redis cluster network
docker network create redis --subnet 172.38.0.0/16
# Generate six redis configs with a loop
[root@localhost /]# for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >> /mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
# Look at the six generated node directories
[root@localhost /]# cd /mydata/
[root@localhost mydata]# \ls
redis
[root@localhost mydata]# cd redis/
[root@localhost redis]# ls
node-1 node-2 node-3 node-4 node-5 node-6
# Check node-1's configuration
[root@localhost redis]# cd node-1
[root@localhost node-1]# ls
conf
[root@localhost node-1]# cd conf/
[root@localhost conf]# ls
redis.conf
[root@localhost conf]# cat redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.11
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
```
```shell
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
  -v /mydata/redis/node-1/data:/data \
  -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
  -d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
  -v /mydata/redis/node-2/data:/data \
  -v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
  -d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
  -v /mydata/redis/node-3/data:/data \
  -v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
  -d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
  -v /mydata/redis/node-4/data:/data \
  -v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
  -d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
  -v /mydata/redis/node-5/data:/data \
  -v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
  -d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
  -v /mydata/redis/node-6/data:/data \
  -v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
  -d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
```
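The six `docker run` commands differ only in the node number, so a loop can generate them. The sketch below is a dry run: it only prints the commands, so the pattern can be verified without Docker installed; piping the output to `sh` would actually execute them.

```shell
# Print the six docker run commands (dry run - nothing is started).
gen_redis_cmds() {
  for port in $(seq 1 6); do
    echo "docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port}" \
         "-v /mydata/redis/node-${port}/data:/data" \
         "-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf" \
         "-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11" \
         "redis-server /etc/redis/redis.conf"
  done
}

gen_redis_cmds          # prints the six commands, one per line
# gen_redis_cmds | sh   # uncomment to actually start the containers
```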


```shell
[root@localhost conf]# docker exec -it redis-1 /bin/sh
/data # ls
appendonly.aof nodes.conf
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: c5551e2a30c220fc9de9df2e34692f20f3382b32 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: d12ebd8c9e12dbbe22e7b9b18f0f143bdc14e94b 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: 825146ce6ab80fbb46ec43fcfec1c6e2dac55157 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 9f810c0e15ac99af68e114a0ee4e32c4c7067e2b 172.38.0.14:6379
   replicates 825146ce6ab80fbb46ec43fcfec1c6e2dac55157
S: e370225bf57d6ef6d54ad8e3d5d745a52b382d1a 172.38.0.15:6379
   replicates c5551e2a30c220fc9de9df2e34692f20f3382b32
S: 79428c1d018dd29cf191678658008cbe5100b714 172.38.0.16:6379
   replicates d12ebd8c9e12dbbe22e7b9b18f0f143bdc14e94b
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: c5551e2a30c220fc9de9df2e34692f20f3382b32 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 79428c1d018dd29cf191678658008cbe5100b714 172.38.0.16:6379
   slots: (0 slots) slave
   replicates d12ebd8c9e12dbbe22e7b9b18f0f143bdc14e94b
M: d12ebd8c9e12dbbe22e7b9b18f0f143bdc14e94b 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: e370225bf57d6ef6d54ad8e3d5d745a52b382d1a 172.38.0.15:6379
   slots: (0 slots) slave
   replicates c5551e2a30c220fc9de9df2e34692f20f3382b32
S: 9f810c0e15ac99af68e114a0ee4e32c4c7067e2b 172.38.0.14:6379
   slots: (0 slots) slave
   replicates 825146ce6ab80fbb46ec43fcfec1c6e2dac55157
M: 825146ce6ab80fbb46ec43fcfec1c6e2dac55157 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```

The Redis cluster on Docker is up and running!

Once we start using Docker, every technology gradually becomes simpler!