Configure a single NIC
nmcli connection modify p5p1 ipv4.addresses 10.183.4.8/24 gw4 10.183.4.254 ipv4.method manual
nmcli connection up p5p1
systemctl restart network.service
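A quick check that the address took effect (standard nmcli/iproute2 queries; the ping assumes the gateway 10.183.4.254 answers ICMP):
nmcli connection show p5p1 | grep -E 'ipv4\.(addresses|gateway)'   # configured address and gateway
ip addr show p5p1                                                  # runtime state of the device
ping -c 3 10.183.4.254                                             # confirm the gateway is reachable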
Configure dual-NIC teaming
nmcli connection add con-name team0 type team ifname team0 config '{"runner":{"name":"activebackup"}}'
nmcli connection modify team0 ipv4.addresses 10.83.4.28/24 gw4 10.83.4.254 ipv4.method manual
nmcli connection add con-name team0-port1 type team-slave ifname p5p1 master team0
nmcli connection add con-name team0-port2 type team-slave ifname p6p1 master team0
nmcli connection reload
nmcli connection up team0
nmcli connection add con-name team1 type team ifname team1 config '{"runner":{"name":"activebackup"}}'
nmcli connection modify team1 ipv4.addresses 10.83.1.62/24 gw4 10.83.1.254 ipv4.method manual
nmcli connection add con-name team1-port1 type team-slave ifname p5p2 master team1
nmcli connection add con-name team1-port2 type team-slave ifname p6p2 master team1
nmcli connection reload
nmcli connection up team1
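To double-check the runner each team ended up with (standard teamdctl query):
teamdctl team0 config dump   # effective teamd JSON config for team0
teamdctl team1 config dump   # same for team1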
systemctl restart NetworkManager
nmcli connection down p5p1 && systemctl restart network.service
nmcli connection down team1 && nmcli connection up team1
Check team status
[root@localhost ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  p5p1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  p6p1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: p5p1
Note: once the NICs are teamed, do not casually restart the network with systemctl restart network.service or systemctl restart NetworkManager.service. To take a port down and back up, use ifconfig p5p1 down && ifconfig p5p1 up instead.
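On systems where net-tools (ifconfig) is not installed, the iproute2 equivalent is:
ip link set p5p1 down && ip link set p5p1 up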
[root@localhost ~]# ifconfig p5p1 down
[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether e4:43:4b:b5:2b:24 brd ff:ff:ff:ff:ff:ff
3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether e4:43:4b:b5:2b:20 brd ff:ff:ff:ff:ff:ff
4: em4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether e4:43:4b:b5:2b:25 brd ff:ff:ff:ff:ff:ff
5: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether e4:43:4b:b5:2b:22 brd ff:ff:ff:ff:ff:ff
6: p5p1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq master team0 state DOWN qlen 1000
    link/ether 16:14:e2:dc:b7:2f brd ff:ff:ff:ff:ff:ff
7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team1 state UP qlen 1000
    link/ether 56:cd:48:7a:0f:8f brd ff:ff:ff:ff:ff:ff
8: p6p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0 state UP qlen 1000
    link/ether 16:14:e2:dc:b7:2f brd ff:ff:ff:ff:ff:ff
9: p6p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team1 state UP qlen 1000
    link/ether 56:cd:48:7a:0f:8f brd ff:ff:ff:ff:ff:ff
10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:69:26:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
11: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:69:26:1c brd ff:ff:ff:ff:ff:ff
13: team1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 56:cd:48:7a:0f:8f brd ff:ff:ff:ff:ff:ff
    inet 10.83.1.62/24 brd 10.83.1.255 scope global team1
       valid_lft forever preferred_lft forever
    inet6 fe80::d2c2:d9ee:a5d6:9711/64 scope link
       valid_lft forever preferred_lft forever
14: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 16:14:e2:dc:b7:2f brd ff:ff:ff:ff:ff:ff
    inet 10.83.4.28/24 brd 10.83.4.255 scope global team0
       valid_lft forever preferred_lft forever
    inet6 fe80::9946:fe80:81af:98f4/64 scope link
       valid_lft forever preferred_lft forever
Verify failover
[root@localhost ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  p5p1
    link watches:
      link summary: down
      instance[link_watch_0]:
        name: ethtool
        link: down
        down count: 1
  p6p1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: p6p1
[root@localhost ~]#
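To put p5p1 back into service, bring the link up again. Whether team0 switches back to p5p1 automatically depends on port priority and stickiness settings (see the feature table below); by default the team may simply stay on p6p1:
ifconfig p5p1 up
teamdctl team0 state | grep "active port"   # see which port is active now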
The four team runner modes
- broadcast (broadcast)
- activebackup (active/backup)
- roundrobin (round-robin)
- loadbalance (load balancing; 802.3ad LACP is handled by the separate lacp runner)
(1) activebackup - active/backup mode
One NIC is active and the other stands by. All traffic is carried on the active link; when the active NIC goes down, the backup NIC takes over.
(2) roundrobin - round-robin mode
All links share the load. This mode increases bandwidth and also provides fault tolerance.
Round-robin aggregation: both NICs work at the same time.
Active/backup aggregation: one NIC works while the other stands by.
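The runner is selected through the JSON passed to config. As a sketch, a loadbalance team with an explicit transmit hash might look like this (team2 is a placeholder name; the tx_hash fields follow teamd.conf conventions):
nmcli connection add con-name team2 type team ifname team2 config '{"runner":{"name":"loadbalance","tx_hash":["eth","ipv4","ipv6"]}}'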
https://www.cnblogs.com/mude918/p/7987961.html
Red Hat's official comparison of team and bonding features
**A Comparison of Features in Bonding and Team**
Feature | Bonding | Team |
---|---|---|
broadcast Tx policy | Yes | Yes |
round-robin Tx policy | Yes | Yes |
active-backup Tx policy | Yes | Yes |
LACP (802.3ad) support | Yes (active only) | Yes |
Hash-based Tx policy | Yes | Yes |
User can set hash function | No | Yes |
Tx load-balancing support (TLB) | Yes | Yes |
LACP hash port select | Yes | Yes |
load-balancing for LACP support | No | Yes |
Ethtool link monitoring | Yes | Yes |
ARP link monitoring | Yes | Yes |
NS/NA (IPv6) link monitoring | No | Yes |
ports up/down delays | Yes | Yes |
port priorities and stickiness ("primary" option enhancement) | No | Yes |
separate per-port link monitoring setup | No | Yes |
multiple link monitoring setup | Limited | Yes |
lockless Tx/Rx path | No (rwlock) | Yes (RCU) |
VLAN support | Yes | Yes |
user-space runtime control | Limited | Full |
Logic in user-space | No | Yes |
Extensibility | Hard | Easy |
Modular design | No | Yes |
Performance overhead | Low | Very Low |
D-Bus interface | No | Yes |
multiple device stacking | Yes | Yes |
zero config using LLDP | No | (in planning) |
NetworkManager support | Yes | Yes |
https://www.cnblogs.com/djlsunshine/p/9733182.html
https://www.cnblogs.com/leoshi/p/12603510.html
https://www.cnblogs.com/davidshen/p/8145971.html
NIC link aggregation ties multiple NICs together so that if one NIC fails the network keeps running; it effectively guards against the cost of a failed NIC and can also improve network throughput.
The two common implementations are "bond" and "team". The bond examples here use two NICs, while a team supports up to eight member NICs.
1. bond
The bond configuration steps are as follows (two NICs are required before you start):
a. "nmcli connection add type bond con-name bond0 ifname bond0 mode active-backup ip4 172.25.254.102/24" adds a bond named bond0 in active/backup mode with the IP "172.25.254.102".
b. "cat /proc/net/bonding/bond0" shows the bond's status.
c. "nmcli connection add con-name eth0 ifname eth0 type bond-slave master bond0" adds the eth0 NIC connection to the bond.
d. "nmcli connection add con-name eth1 ifname eth1 type bond-slave master bond0" adds the eth1 NIC connection to the bond.
At this point the bond-mode link aggregation is complete and the network works normally.
Bond's two commonly used modes are "active-backup" (active/backup) and "balance-rr" (round-robin). Active/backup uses one NIC and fails over to the other when it breaks; round-robin uses the two NICs in turn.
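For the round-robin variant, only the mode argument changes; as a sketch (bond1 and the address are placeholders):
nmcli connection add type bond con-name bond1 ifname bond1 mode balance-rr ip4 172.25.254.103/24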
For testing, take one NIC offline with "ifconfig eth0 down"; the network keeps working.
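To see which slave is carrying traffic during such a test, check the bonding proc file (the "Currently Active Slave" field appears in active-backup mode):
grep "Currently Active Slave" /proc/net/bonding/bond0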
The connection can be removed with "nmcli connection delete eth0".
Running "nmcli connection add con-name eth0 ifname eth0 type bond-slave master bond0" re-adds the NIC, which then comes back as the backup.
To tear the bond down completely, run "nmcli connection delete bond0", "nmcli connection delete eth0", and "nmcli connection delete eth1" in turn.
2. team
Team mode supports up to eight NICs; the example below uses two. The steps are:
a. "nmcli connection add type team con-name team0 ifname team0 config '{"runner":{"name":"activebackup"}}' ip4 172.25.254.102/24" creates a team named team0 in active/backup mode with the IP "172.25.254.102".
b. "teamdctl team0 state" shows the team's status.
c. "nmcli connection add con-name eth0 ifname eth0 type team-slave master team0" adds the eth0 connection to the team.
d. "nmcli connection add con-name eth1 ifname eth1 type team-slave master team0" adds the eth1 connection to the team.
At this point the team-mode link aggregation is complete; eth0 is the working port and the network is up.
Team's working modes differ from bond's. There are four: "broadcast" (broadcast fault tolerance), "roundrobin" (balanced round-robin), "activebackup" (active/backup), and "loadbalance" (load balancing). Note that team writes the mode into the command as '{"runner":{"name":"<mode>"}}'.
Testing is the same: take a NIC offline with "ifconfig eth0 down" and the network stays connected (the watch sketch below makes the switchover visible).
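To watch the switchover happen in real time during the test (standard watch utility):
watch -n 1 teamdctl team0 state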
Teams are deleted the same way as bonds.
These are the two common NIC link aggregation modes. Link aggregation matters a great deal for keeping a business running, and the working mode can be chosen to suit the job.
Active/backup mode:
nmcli connection add con-name team0 type team ifname team0 config '{"runner":{"name":"activebackup"}}'
nmcli connection modify team0 ipv4.addresses 10.83.4.28/24 gw4 10.83.4.254 ipv4.method manual
nmcli connection add con-name team0-port1 type team-slave ifname p5p1 master team0
nmcli connection add con-name team0-port2 type team-slave ifname p6p1 master team0
nmcli connection reload
nmcli connection up team0
nmcli connection up team0-port1
nmcli connection up team0-port2
teamdctl team0 state
Round-robin mode:
nmcli connection add con-name team0 type team ifname team0 config '{"runner":{"name":"roundrobin"}}'
nmcli connection modify team0 ipv4.addresses 10.83.4.28/24 gw4 10.83.4.254 ipv4.method manual
nmcli connection add con-name team0-port1 type team-slave ifname p5p1 master team0
nmcli connection add con-name team0-port2 type team-slave ifname p6p1 master team0
nmcli connection reload
nmcli connection up team0
nmcli connection up team0-port1
nmcli connection up team0-port2
teamdctl team0 state
bond active/backup mode:
nmcli connection add type bond con-name bond0 ifname bond0 mode active-backup ip4 172.25.254.102/24
nmcli connection add con-name eth0 ifname eth0 type bond-slave master bond0
nmcli connection add con-name eth1 ifname eth1 type bond-slave master bond0
cat /proc/net/bonding/bond0
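As with the team summaries above, the connections can be brought up explicitly once they are added (names as used here):
nmcli connection up bond0
nmcli connection up eth0
nmcli connection up eth1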