Configure a single NIC

  1. nmcli connection modify p5p1 ipv4.addresses 10.183.4.8/24 gw4 10.183.4.254 ipv4.method manual
  2. nmcli connection up p5p1
  3. systemctl restart network.service
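
A quick sanity check of the result (a minimal sketch; it assumes the connection and the device are both named p5p1, as above):

  nmcli connection show p5p1 | grep -E 'ipv4.addresses|ipv4.gateway|ipv4.method'   # values stored in the connection profile
  ip addr show dev p5p1                                                            # address actually applied on the device
  ip route show default                                                            # default route should point at 10.183.4.254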

Configure dual-NIC link aggregation (teaming)

  1. nmcli connection add con-name team0 type team ifname team0 config '{"runner":{"name":"activebackup"}}'
  2. nmcli connection modify team0 ipv4.addresses 10.83.4.28/24 gw4 10.83.4.254 ipv4.method manual
  3. nmcli connection add con-name team0-port1 type team-slave ifname p5p1 master team0
  4. nmcli connection add con-name team0-port2 type team-slave ifname p6p1 master team0
  5. nmcli connection reload
  6. nmcli connection up team0
  7. nmcli connection add con-name team1 type team ifname team1 config '{"runner":{"name":"activebackup"}}'
  8. nmcli connection modify team1 ipv4.addresses 10.83.1.62/24 gw4 10.83.1.254 ipv4.method manual
  9. nmcli connection add con-name team1-port1 type team-slave ifname p5p2 master team1
  10. nmcli connection add con-name team1-port2 type team-slave ifname p6p2 master team1
  11. nmcli connection reload
  12. nmcli connection up team1
  13. systemctl restart NetworkManager
  14. nmcli connection down p5p1 && systemctl restart network.service
  15. nmcli connection down team1 && nmcli connection up team1
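
Once both teams are up, the runner configuration that teamd actually applied and the enslaved ports can be double-checked (a minimal sketch, using the device names above):

  teamdctl team0 config dump                          # JSON config in effect; should show the activebackup runner
  teamnl team0 ports                                  # ports currently attached to team0
  nmcli -f NAME,TYPE,DEVICE connection show --active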

Check the team status

  [root@localhost ~]# teamdctl team0 state
  setup:
    runner: activebackup
  ports:
    p5p1
      link watches:
        link summary: up
        instance[link_watch_0]:
          name: ethtool
          link: up
          down count: 0
    p6p1
      link watches:
        link summary: up
        instance[link_watch_0]:
          name: ethtool
          link: up
          down count: 0
  runner:
    active port: p5p1

Note: once the NICs are teamed, do not casually restart the network with "systemctl restart network.service" or "systemctl restart NetworkManager.service". To exercise a failover, bring a single port down and back up instead: ifconfig p5p1 down && ifconfig p5p1 up.
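
If ifconfig (net-tools) is not installed, the same per-port toggle can be done with iproute2; these are just the equivalent commands, not an extra step in the procedure:

  ip link set p5p1 down    # force the port down; teamd fails over to the other port
  ip link set p5p1 up      # bring the port back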

  [root@localhost ~]# ifconfig p5p1 down
  [root@localhost ~]# ip addr
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: em3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
      link/ether e4:43:4b:b5:2b:24 brd ff:ff:ff:ff:ff:ff
  3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
      link/ether e4:43:4b:b5:2b:20 brd ff:ff:ff:ff:ff:ff
  4: em4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
      link/ether e4:43:4b:b5:2b:25 brd ff:ff:ff:ff:ff:ff
  5: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
      link/ether e4:43:4b:b5:2b:22 brd ff:ff:ff:ff:ff:ff
  6: p5p1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq master team0 state DOWN qlen 1000
      link/ether 16:14:e2:dc:b7:2f brd ff:ff:ff:ff:ff:ff
  7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team1 state UP qlen 1000
      link/ether 56:cd:48:7a:0f:8f brd ff:ff:ff:ff:ff:ff
  8: p6p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0 state UP qlen 1000
      link/ether 16:14:e2:dc:b7:2f brd ff:ff:ff:ff:ff:ff
  9: p6p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team1 state UP qlen 1000
      link/ether 56:cd:48:7a:0f:8f brd ff:ff:ff:ff:ff:ff
  10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
      link/ether 52:54:00:69:26:1c brd ff:ff:ff:ff:ff:ff
      inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
         valid_lft forever preferred_lft forever
  11: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
      link/ether 52:54:00:69:26:1c brd ff:ff:ff:ff:ff:ff
  13: team1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
      link/ether 56:cd:48:7a:0f:8f brd ff:ff:ff:ff:ff:ff
      inet 10.83.1.62/24 brd 10.83.1.255 scope global team1
         valid_lft forever preferred_lft forever
      inet6 fe80::d2c2:d9ee:a5d6:9711/64 scope link
         valid_lft forever preferred_lft forever
  14: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
      link/ether 16:14:e2:dc:b7:2f brd ff:ff:ff:ff:ff:ff
      inet 10.83.4.28/24 brd 10.83.4.255 scope global team0
         valid_lft forever preferred_lft forever
      inet6 fe80::9946:fe80:81af:98f4/64 scope link
         valid_lft forever preferred_lft forever

Verify NIC failover

  [root@localhost ~]# teamdctl team0 state
  setup:
    runner: activebackup
  ports:
    p5p1
      link watches:
        link summary: down
        instance[link_watch_0]:
          name: ethtool
          link: down
          down count: 1
    p6p1
      link watches:
        link summary: up
        instance[link_watch_0]:
          name: ethtool
          link: up
          down count: 0
  runner:
    active port: p6p1
  [root@localhost ~]#
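
To finish the test, the downed port can be restored and the state checked again (a small sketch; whether traffic fails back to p5p1 depends on the runner's per-port prio/sticky settings, which are left at their defaults here):

  ifconfig p5p1 up         # or: ip link set p5p1 up
  teamdctl team0 state     # p5p1 should report link: up again; "active port" shows which port now carries traffic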

The four team runner modes

  • broadcast (broadcast mode)
  • activebackup (active-backup mode)
  • roundrobin (round-robin mode)
  • loadbalance (load balancing) LACP

(1) activebackup - active-backup mode
One NIC is active and the other stands by as a backup; all traffic is carried on the active link, and when the active NIC goes down the backup NIC takes over.
(2) roundrobin - round-robin mode
All links share the load in a balanced fashion; this mode increases the available bandwidth while still providing fault tolerance.

Round-robin aggregation (both NICs carry traffic at the same time)
Active-backup aggregation (one NIC carries traffic, the other stands by)
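
The runner is chosen through the JSON string passed as the team config. Only activebackup and roundrobin are used in the commands in this note; the strings below for the other runners are a sketch based on the options documented in teamd.conf (runner.name, plus runner.tx_hash / runner.active / runner.fast_rate for loadbalance and lacp), not something taken from the original setup:

  # broadcast runner
  nmcli connection add con-name team0 type team ifname team0 config '{"runner":{"name":"broadcast"}}'
  # loadbalance runner, hashing on the L2/L3 headers
  nmcli connection add con-name team0 type team ifname team0 config '{"runner":{"name":"loadbalance","tx_hash":["eth","ipv4"]}}'
  # lacp (802.3ad) runner; the switch ports must also be configured for LACP
  nmcli connection add con-name team0 type team ifname team0 config '{"runner":{"name":"lacp","active":true,"fast_rate":true,"tx_hash":["eth","ipv4"]}}'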

https://www.cnblogs.com/mude918/p/7987961.html

Red Hat's official comparison of team and bond features

A Comparison of Features in Bonding and Team

Feature                                                        | Bonding           | Team
broadcast Tx policy                                            | Yes               | Yes
round-robin Tx policy                                          | Yes               | Yes
active-backup Tx policy                                        | Yes               | Yes
LACP (802.3ad) support                                         | Yes (active only) | Yes
Hash-based Tx policy                                           | Yes               | Yes
User can set hash function                                     | No                | Yes
Tx load-balancing support (TLB)                                | Yes               | Yes
LACP hash port select                                          | Yes               | Yes
load-balancing for LACP support                                | No                | Yes
Ethtool link monitoring                                        | Yes               | Yes
ARP link monitoring                                            | Yes               | Yes
NS/NA (IPv6) link monitoring                                   | No                | Yes
ports up/down delays                                           | Yes               | Yes
port priorities and stickiness (“primary” option enhancement)  | No                | Yes
separate per-port link monitoring setup                        | No                | Yes
multiple link monitoring setup                                 | Limited           | Yes
lockless Tx/Rx path                                            | No (rwlock)       | Yes (RCU)
VLAN support                                                   | Yes               | Yes
user-space runtime control                                     | Limited           | Full
Logic in user-space                                            | No                | Yes
Extensibility                                                  | Hard              | Easy
Modular design                                                 | No                | Yes
Performance overhead                                           | Low               | Very Low
D-Bus interface                                                | No                | Yes
multiple device stacking                                       | Yes               | Yes
zero config using LLDP                                         | No                | (in planning)
NetworkManager support                                         | Yes               | Yes

https://blog.csdn.net/weixin_42123737/article/details/82707406?utm_medium=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromBaidu-1.not_use_machine_learn_pai&depth_1-utm_source=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromBaidu-1.not_use_machine_learn_pai

https://blog.csdn.net/qq_34556414/article/details/81837276?utm_medium=distribute.pc_relevant.none-task-blog-BlogCommendFromBaidu-3.not_use_machine_learn_pai&depth_1-utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromBaidu-3.not_use_machine_learn_pai

https://www.cnblogs.com/djlsunshine/p/9733182.html

https://www.cnblogs.com/leoshi/p/12603510.html

https://www.cnblogs.com/davidshen/p/8145971.html

  1. NIC link aggregation joins multiple NICs together so that if one NIC fails the network keeps running; it protects against the damage a failed NIC would cause and can also improve network throughput.
  2. The two commonly used link aggregation modes are "bond" and "team"; bond mode can add at most two NICs, while team mode can add up to eight.
  3. (1) bond
  4. The bond configuration steps are as follows (two NICs are required before configuring):
  5. a. "nmcli connection add type bond con-name bond0 mode active-backup ip4 172.25.254.102/24" adds a bond named bond0 in active-backup mode with the IP "172.25.254.102".
  6. b. "cat /proc/net/bonding/bond0" shows the bond's status (see the sample output after this list).
  7. c. "nmcli connection add con-name eth0 ifname eth0 type bond-slave master bond0" adds the eth0 NIC connection to the bond.
  8. d. "nmcli connection add con-name eth1 ifname eth1 type bond-slave master bond0" adds the eth1 NIC connection to the bond.
  9. At this point the bond link aggregation is complete and the network works normally.
  10. The commonly used bond modes are "active-backup" (active/backup) and "balance-rr" (round robin). Active-backup uses one NIC and switches to the other when it fails; round robin uses the two NICs in turn.
  11. To test, take one NIC down with "ifconfig eth0 down"; the network still works.
  12. "nmcli connection delete eth0" deletes that network connection.
  13. "nmcli connection add con-name eth0 ifname eth0 type bond-slave master bond0" re-adds the NIC; it becomes the backup NIC.
  14. To remove the bond link aggregation, run "nmcli connection delete bond0", "nmcli connection delete eth0" and "nmcli connection delete eth1" in turn.
  15. (2) team
  16. Team mode can add up to eight NICs; the example below uses two. The configuration steps are:
  17. a. "nmcli connection add type team con-name team0 ifname team0 config '{"runner":{"name":"activebackup"}}' ip4 172.25.254.102/24" creates a team named team0 in active-backup mode with the IP "172.25.254.102".
  18. b. "teamdctl team0 state" shows the team's state.
  19. c. "nmcli connection add con-name eth0 ifname eth0 type team-slave master team0" adds the eth0 network connection to the team.
  20. d. "nmcli connection add con-name eth1 ifname eth1 type team-slave master team0" adds the eth1 network connection to the team.
  21. At this point the team link aggregation is complete; eth0 is currently the working NIC and the network is connected.
  22. Team's working modes differ from bond's: there are four of them, "broadcast" (broadcast fault tolerance), "roundrobin" (balanced round robin), "activebackup" (active/backup) and "loadbalance" (load balancing). The mode is passed in the config string as '{"runner":{"name":"<mode>"}}' when the team is created, which is a different syntax from bond's mode option; keep this in mind.
  23. Testing is again done with "ifconfig eth0 down"; the network stays connected.
  24. A team is deleted the same way as a bond.
  25. These are the two commonly used NIC link aggregation modes; link aggregation matters a great deal for keeping a business running normally, and the working mode can be chosen to match the job at hand.
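
For reference, on a working active-backup bond the "cat /proc/net/bonding/bond0" output mentioned in step b looks roughly like this (values are illustrative of a typical two-slave setup, some lines omitted, not taken from the machine above):

  Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

  Bonding Mode: fault-tolerance (active-backup)
  Primary Slave: None
  Currently Active Slave: eth0
  MII Status: up
  MII Polling Interval (ms): 100

  Slave Interface: eth0
  MII Status: up
  Link Failure Count: 0

  Slave Interface: eth1
  MII Status: up
  Link Failure Count: 0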
Active-backup mode:

  1. nmcli connection add con-name team0 type team ifname team0 config '{"runner":{"name":"activebackup"}}'
  2. nmcli connection modify team0 ipv4.addresses 10.83.4.28/24 gw4 10.83.4.254 ipv4.method manual
  3. nmcli connection add con-name team0-port1 type team-slave ifname p5p1 master team0
  4. nmcli connection add con-name team0-port2 type team-slave ifname p6p1 master team0
  5. nmcli connection reload
  6. nmcli connection up team0
  7. nmcli connection up team0-port1
  8. nmcli connection up team0-port2
  9. teamdctl team0 state

Round-robin mode:

  1. nmcli connection add con-name team0 type team ifname team0 config '{"runner":{"name":"roundrobin"}}'
  2. nmcli connection modify team0 ipv4.addresses 10.83.4.28/24 gw4 10.83.4.254 ipv4.method manual
  3. nmcli connection add con-name team0-port1 type team-slave ifname p5p1 master team0
  4. nmcli connection add con-name team0-port2 type team-slave ifname p6p1 master team0
  5. nmcli connection reload
  6. nmcli connection up team0
  7. nmcli connection up team0-port1
  8. nmcli connection up team0-port2
  9. teamdctl team0 state
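
If the team connection already exists, the runner can also be changed in place through the team.config property instead of deleting and recreating everything (a sketch, assuming the team0 connection created above):

  nmcli connection modify team0 team.config '{"runner":{"name":"roundrobin"}}'
  nmcli connection up team0        # re-activate so teamd picks up the new runner
  teamdctl team0 state             # the runner line should now read roundrobin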
  1. nmcli connection add type bond con-name bond0 mode active-backup ip4 172.25.254.102/24
  2. nmcli connection add con-name eth0 ifname eth0 type bond-slave master bond0
  3. nmcli connection add con-name eth1 ifname eth1 type bond-slave master bond0
  4. cat /proc/net/bonding/bond0
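
A quick failover check for the bond might look roughly like this (eth0/eth1 as above; the output lines are illustrative):

  [root@localhost ~]# grep -E 'Bonding Mode|Currently Active Slave' /proc/net/bonding/bond0
  Bonding Mode: fault-tolerance (active-backup)
  Currently Active Slave: eth0
  [root@localhost ~]# ifconfig eth0 down
  [root@localhost ~]# grep 'Currently Active Slave' /proc/net/bonding/bond0
  Currently Active Slave: eth1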