Create a Service that provides a stable IP and round-robins requests across the 2 nginx Pods created above (NodePort)
# Generate a Service (svc for short) for this nginx Deployment
# The same manifest can be rendered as YAML first: kubectl expose deployment nginx --port=80 --target-port=80 --dry-run=client -o yaml
# We could save that YAML as svc.yaml for later use; here we simply create the Service from the command line
[root@master1 ~]# kubectl expose deployment nginx --port=80 --target-port=80
service/nginx exposed
[root@master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.68.0.1       <none>        443/TCP   6d16h
nginx        ClusterIP   10.68.142.234   <none>        80/TCP    18h
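If you prefer the declarative route mentioned in the comment above, a minimal sketch (not run here; svc.yaml is just an example file name):
kubectl expose deployment nginx --port=80 --target-port=80 --dry-run=client -o yaml > svc.yaml   # render the manifest without creating anything
kubectl apply -f svc.yaml                                                                        # create the Service from the saved file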
# Look at the Endpoints object that was generated and associated automatically
[root@master1 ~]# kubectl get endpoints nginx
NAME    ENDPOINTS                             AGE
nginx   172.20.166.144:80,172.20.166.160:80   18h
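As a quick cross-check (a sketch, output omitted), the addresses in ENDPOINTS should match the Pod IPs selected by the app=nginx label:
kubectl get pods -l app=nginx -o wide   # the IP column should show 172.20.166.144 and 172.20.166.160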
# Now let's test the Service's load balancing. First exec into each Pod and replace the nginx index page with that Pod's hostname
[root@master1 ~]# kubectl exec -it nginx-f89759699-bzwd2 -- bash
root@nginx-f89759699-bzwd2:/# echo nginx-f89759699-bzwd2 > /usr/share/nginx/html/index.html
root@nginx-f89759699-bzwd2:/# exit
[root@master1 ~]# kubectl exec -it nginx-f89759699-qlc8q -- bash
root@nginx-f89759699-qlc8q:/# echo nginx-f89759699-qlc8q > /usr/share/nginx/html/index.html
root@nginx-f89759699-qlc8q:/# exit
[root@master1 ~]# curl 10.68.142.234
nginx-f89759699-bzwd2
[root@master1 ~]# curl 10.68.142.234
nginx-f89759699-qlc8q
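To see the round-robin behaviour more clearly, a small loop works too (a sketch, not captured above); the two hostnames should alternate:
for i in $(seq 1 6); do curl -s 10.68.142.234; done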
# Change the Service type to NodePort so it can be reached from outside the cluster
[root@master1 ~]# kubectl patch svc nginx -p '{"spec":{"type":"NodePort"}}'
service/nginx patched
[root@master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.68.0.1       <none>        443/TCP        6d16h
nginx        NodePort    10.68.142.234   <none>        80:32511/TCP   18h
[root@master1 ~]# curl 172.16.123.61:32511
nginx-f89759699-bzwd2
[root@master1 ~]# curl 172.16.123.61:32511
nginx-f89759699-qlc8q
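The same NodePort Service can also be declared in YAML instead of patching; a minimal sketch, where nodePort: 32511 is only pinned to match the port allocated above (omit the field to let Kubernetes pick one from the default 30000-32767 range):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # Service (ClusterIP) port
    targetPort: 80    # container port on the Pods
    nodePort: 32511   # port opened on every node
    protocol: TCP
EOF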
Let's also walk through the YAML configuration of this Service (the svc.yaml exported earlier)
cat svc.yaml
apiVersion: v1        # <<<<<< v1 is the apiVersion for a Service
kind: Service         # <<<<<< the resource type is Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx         # <<<<<< the Service is named nginx
spec:
  ports:
  - port: 80          # <<<<<< map port 80 of the Service to port 80 of the Pods, over TCP
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx        # <<<<<< the selector picks Pods labeled app: nginx as the backends of this Service
status:
  loadBalancer: {}
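Because the selector matches on app=nginx, Pods added or removed with that label show up in (or drop out of) the Endpoints automatically; a quick way to watch this happen (a sketch, not run here):
kubectl get endpoints nginx -w                  # keep watching the Endpoints object
kubectl scale deployment nginx --replicas=3     # in another terminal: a third Pod IP should appear
kubectl scale deployment nginx --replicas=2     # scale back afterwards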
Now let's look at the description of this nginx Service
[root@master1 ~]# kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP Families: <none>
IP: 10.68.142.234
IPs: 10.68.142.234
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32511/TCP
Endpoints: 172.20.166.144:80,172.20.166.160:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
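Some of the fields above can be tuned after the fact with a patch; for example (a sketch, not applied here), enabling ClientIP session affinity so that requests from the same client keep hitting the same Pod:
kubectl patch svc nginx -p '{"spec":{"sessionAffinity":"ClientIP"}}'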
In the describe output above, Endpoints lists the IP and port of each of the two Pods. The Pod IPs are configured inside the containers (by the network plugin), but where is the Service's cluster IP configured, and how does that cluster IP get mapped onto the Pod IPs?
# First, look at the kube-proxy configuration
# cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kube/bin/kube-proxy \
--bind-address=10.0.1.202 \
--cluster-cidr=172.20.0.0/16 \
--hostname-override=10.0.1.202 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--logtostderr=true \
--proxy-mode=ipvs #<------- we set the forwarding mode to ipvs when kube-proxy was first deployed, because the default iptables mode performs poorly when there are many Services
Restart=always
RestartSec=5
LimitNOFILE=65536
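A quick way to confirm the mode kube-proxy is actually running in (a sketch, not captured here; this is served on kube-proxy's metrics port, 10249 by default):
curl 127.0.0.1:10249/proxyMode   # should print: ipvs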
# Look at the local network interfaces; there is an ipvs dummy interface
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:20:b8:39 brd ff:ff:ff:ff:ff:ff
inet 10.0.1.202/24 brd 10.0.1.255 scope global noprefixroute ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe20:b839/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:91:ac:ce:13 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 22:50:98:a6:f9:e4 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 96:6b:f0:25:1a:26 brd ff:ff:ff:ff:ff:ff
inet 10.68.0.2/32 brd 10.68.0.2 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.68.0.1/32 brd 10.68.0.1 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.68.120.201/32 brd 10.68.120.201 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.68.50.42/32 brd 10.68.50.42 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.68.18.121/32 brd 10.68.18.121 scope global kube-ipvs0 # <-------- Service ClusterIPs are configured here, on the kube-ipvs0 dummy interface
valid_lft forever preferred_lft forever
6: caliaeb0378f7a4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
7: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 172.20.247.0/32 brd 172.20.247.0 scope global tunl0
valid_lft forever preferred_lft forever
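To look at just this dummy interface (a sketch, not captured above): on a node running kube-proxy you should find every Service's ClusterIP, including our nginx Service's 10.68.142.234, bound to it:
ip -4 addr show dev kube-ipvs0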
# Now look at the LVS virtual server table
[root@master1 ~]# ipvsadm -ln |grep -C3 10.68.142.234
  -> 172.20.104.14:9153           Masq    1      0          0
TCP  10.68.4.40:53 rr
  -> 172.20.104.14:53             Masq    1      0          0
TCP  10.68.142.234:80 rr            #<----------- here are the Service-to-Pod forwarding entries
  -> 172.20.166.144:80            Masq    1      0          0
  -> 172.20.166.160:80            Masq    1      0          0
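To list only this one virtual service instead of grepping, and to watch its connection counters while curling the Service (a sketch, not run here):
ipvsadm -ln -t 10.68.142.234:80
watch -n1 'ipvsadm -ln -t 10.68.142.234:80'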