Service

  1. A Service provides a single, stable entry point for a group of Pods that serve the same function, and load-balances traffic across them with automatic service discovery. The set of Pods a Service targets is usually determined by a label Selector.

Why do we need a Service?

Accessing an application directly through a Pod's IP and port is unreliable: a Pod's lifecycle is constantly changing, and every time a Pod is deleted and recreated it comes back with a different IP address.
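A quick way to see this instability (a sketch only; it assumes the nginx-deployment example defined later in this section, whose Pods carry the label app: nginx-app):

# Note the Pod IP, delete the Pod, and watch the Deployment recreate it with a new IP
kubectl get pod -l app=nginx-app -o wide
kubectl delete pod -l app=nginx-app
kubectl get pod -l app=nginx-app -o wide   # the POD IP column has changed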

The four ServiceTypes (how a Service forwards to its backends)
ClusterIP: exposes the Service on a cluster-internal IP, so it is reachable only from inside the cluster. This is the default ServiceType.
NodePort: exposes the Service on each Node's IP at a static port (the NodePort). Requesting <NodeIP>:<NodePort> lets you reach the Service from outside the cluster.
LoadBalancer: exposes the Service externally using a cloud provider's load balancer. The external load balancer routes to the Service and from there to the application (a minimal manifest sketch follows this list).
ExternalName: relies on DNS; it maps a service outside the cluster into the cluster by domain name, so it can be reached through the Service's name.
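A minimal LoadBalancer Service sketch (assumes a cloud provider that can provision the external load balancer; the Service name is illustrative and the selector matches the nginx example used in this section):

apiVersion: v1
kind: Service
metadata:
  name: lb-service           # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: nginx-app           # must match the labels of the target Pods
  ports:
  - protocol: TCP
    port: 80                 # port the load balancer / Service listens on
    targetPort: 80           # port the Pods listen on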
The three port types of a Service (see the annotated snippet below)
port: the port the Service exposes inside the cluster
targetPort: the port the Pod (container) listens on
nodePort: the port the Service exposes on each Node for external access
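How the three fields fit together in a single spec (a sketch only; the nodePort value is an arbitrary example from the default 30000-32767 range):

spec:
  type: NodePort
  ports:
  - port: 80          # ClusterIP:80   -> access from inside the cluster
    targetPort: 80    # container port the traffic is forwarded to
    nodePort: 30080   # NodeIP:30080   -> access from outside the cluster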
Ways to access a Service from outside the cluster
NodePort
LoadBalancer
ExternalIP

Structure of a ClusterIP Service

Structure of a NodePort Service

Structure of a LoadBalancer Service


cat << EOF > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EOF

1. Create a ClusterIP Service

# Method 1: create it from the command line
kubectl expose deployment nginx-deployment --target-port=80 --port=80 --type=ClusterIP
# Method 2: create it from a YAML file

cat << EOF > clusterip.yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
[liwm@rmaster01 liwm]$ kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP   18d
my-service   ClusterIP   10.43.160.136   <none>        80/TCP    20h
[liwm@rmaster01 liwm]$ kubectl describe service my-service
Name:              my-service
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-service","namespace":"default"},"spec":{"ports":[{"port":80,"p...
Selector:          app=nginx
Type:              ClusterIP
IP:                10.43.160.136
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
[liwm@rmaster01 liwm]$

DNS resource record format:
SVC_NAME.NS_NAME.DOMAIN.LTD
The default A-record suffix for Services is svc.cluster.local
my-service.default.svc.cluster.local
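To verify the record from inside the cluster, a throwaway busybox Pod works (a sketch; busybox:1.28 is used because its nslookup behaves well with cluster DNS, and the Pod name is arbitrary):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup my-service.default.svc.cluster.local
# The answer should be the Service's ClusterIP (10.43.160.136 in the output above)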

2. Create a NodePort Service

# Method 1: create it from the command line
kubectl expose deployment nginx-deployment --target-port=80 --port=80 --type=NodePort
# Method 2: create it from a YAML file

cat << EOF > nodeport.yaml
kind: Service
apiVersion: v1
metadata:
  name: nodeport-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
[liwm@rmaster01 liwm]$ kubectl create -f nodeport.yaml
service/nodeport-service created
[liwm@rmaster01 liwm]$ kubectl get service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.43.0.1       <none>        443/TCP        18d
my-service         ClusterIP   10.43.160.136   <none>        80/TCP         20h
nodeport-service   NodePort    10.43.171.253   <none>        80:31120/TCP   6s
[liwm@rmaster01 liwm]$ kubectl describe service nodeport-service
Name:                     nodeport-service
Namespace:                default
Labels:                   <none>
Annotations:              field.cattle.io/publicEndpoints:
                            [{"addresses":["192.168.31.133"],"port":31120,"protocol":"TCP","serviceName":"default:nodeport-service","allNodes":true}]
Selector:                 app=nginx
Type:                     NodePort
IP:                       10.43.171.253
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31120/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[liwm@rmaster01 liwm]$

View the Pod endpoints collected behind the Service

cat << EOF > nodeport-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EOF
[liwm@rmaster01 liwm]$ kubectl describe service nodeport-service
Name:                     nodeport-service
Namespace:                default
Labels:                   <none>
Annotations:              field.cattle.io/publicEndpoints:
                            [{"addresses":["192.168.31.133"],"port":31120,"protocol":"TCP","serviceName":"default:nodeport-service","allNodes":true}]
Selector:                 app=nginx
Type:                     NodePort
IP:                       10.43.171.253
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31120/TCP
Endpoints:                10.42.2.163:80,10.42.4.253:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[liwm@rmaster01 liwm]$ kubectl get endpoints
NAME               ENDPOINTS                                                      AGE
kubernetes         192.168.31.130:6443,192.168.31.131:6443,192.168.31.132:6443   18d
my-service         10.42.2.163:80,10.42.4.253:80                                  20h
nodeport-service   10.42.2.163:80,10.42.4.253:80                                  7m50s
[liwm@rmaster01 liwm]$
[liwm@rmaster01 liwm]$ kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-74dc4cb5fb-52wfz   1/1     Running   0          12m
nginx-deployment-74dc4cb5fb-b4gwc   1/1     Running   0          11m
[liwm@rmaster01 liwm]$ kubectl exec -it nginx-deployment-74dc4cb5fb-52wfz bash
root@nginx-deployment-74dc4cb5fb-52wfz:/# cd /usr/share/nginx/html/
root@nginx-deployment-74dc4cb5fb-52wfz:/usr/share/nginx/html# echo 1 > index.html
root@nginx-deployment-74dc4cb5fb-52wfz:/usr/share/nginx/html# exit
exit
[liwm@rmaster01 liwm]$ kubectl exec -it nginx-deployment-74dc4cb5fb-b4gwc bash
root@nginx-deployment-74dc4cb5fb-b4gwc:/# cd /usr/share/nginx/html/
root@nginx-deployment-74dc4cb5fb-b4gwc:/usr/share/nginx/html# echo 2 > index.html
root@nginx-deployment-74dc4cb5fb-b4gwc:/usr/share/nginx/html# exit
exit
[liwm@rmaster01 liwm]$ for a in {1..10}; do curl http://10.43.171.253 && sleep 1s; done
1
2
1
2
1
1
1
2
2
1
[liwm@rmaster01 liwm]$
[liwm@rmaster01 liwm]$ for a in {1..10}; do curl http://192.168.31.130:31120 && sleep 1s; done
1
2
1
1
2
2
2
2
1
2
[liwm@rmaster01 liwm]$

3. Headless Services

cat << EOF > headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
EOF
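A headless Service (clusterIP: None) gets no virtual IP; its DNS name resolves to the individual Pod IPs, and each StatefulSet Pod additionally gets a stable record of the form pod-name.service-name.namespace.svc.cluster.local. A quick check (a sketch; it assumes the manifest above has been applied and uses throwaway busybox Pods with arbitrary names):

kubectl create -f headless.yaml
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup nginx.default.svc.cluster.local      # returns the Pod IPs, not a ClusterIP
kubectl run -it --rm dns-test2 --image=busybox:1.28 --restart=Never -- \
  nslookup web-0.nginx.default.svc.cluster.local   # stable per-Pod record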

4. ExternalName

cat << EOF > externalname.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
EOF
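An ExternalName Service creates no Endpoints; cluster DNS simply answers with a CNAME pointing at the external domain. A quick check (a sketch; it assumes the prod namespace exists, the manifest above has been applied, and the busybox Pod name is arbitrary):

kubectl create namespace prod
kubectl create -f externalname.yaml
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup my-service.prod.svc.cluster.local   # the answer is a CNAME to my.database.example.com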

5. External IPs

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 18080
    targetPort: 9376
  externalIPs:
  - 172.31.53.96

Userspace mode

kube-proxy listens on a random port (the proxy port) for each Service and adds an iptables rule:
all traffic to ClusterIP:Port is redirected to that proxy port;
when kube-proxy receives a packet on the proxy port, it distributes it to a backend Pod using round robin (the default) or session affinity (requests from the same client IP always take the same path to the same Pod).
Advantages:
simple to implement; kube-proxy itself handles and forwards the traffic.
Disadvantages:
all traffic passes through kube-proxy, which easily becomes a performance bottleneck;
traffic that goes through kube-proxy still has to be forwarded by iptables, so packets keep switching between user space and kernel space, which is inefficient.

iptables mode

  1. kube-proxy watches the Kubernetes control plane for Service and Endpoints objects being added and removed. For each Service it installs iptables rules that capture traffic to the Service's ClusterIP and port and redirect it to one of the Service's backends.

For each Endpoints object it also installs iptables rules that select a backend.
By default, kube-proxy in iptables mode picks a backend at random.
Kubernetes currently provides two load-distribution policies: RoundRobin and SessionAffinity.
Advantages:
handling traffic with iptables has lower system overhead, because packets are processed by Linux netfilter without switching between user space and kernel space; this approach is also more reliable.
Combined with Pod readiness probes, it avoids sending traffic via kube-proxy to Pods that are known to have failed.
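To look at the rules kube-proxy installs, run the following on a node (a sketch; KUBE-SERVICES and the per-Service KUBE-SVC-* chains are the chain names used by kube-proxy in iptables mode, and my-service matches the earlier example):

# List the NAT rules kube-proxy created for Services
sudo iptables -t nat -L KUBE-SERVICES -n | grep my-service
# Follow the per-Service chains to see the DNAT rules that pick a backend Pod
sudo iptables -t nat -L -n | grep KUBE-SVC-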


IPVS mode

In ipvs mode, kube-proxy watches Kubernetes Services and Endpoints, calls the netlink interface to create the corresponding IPVS rules, and periodically synchronizes the IPVS rules with the Kubernetes Services and Endpoints. When a Service is accessed, IPVS directs the traffic to one of the backend Pods.
The IPVS proxy mode is based on netfilter hook functions similar to iptables mode, but it uses a hash table as its underlying data structure and works in kernel space. This means kube-proxy in IPVS mode redirects traffic with lower latency than in iptables mode and performs much better when synchronizing proxy rules. Compared with the other proxy modes, IPVS mode also supports higher network throughput.

Note:
To run kube-proxy in IPVS mode, IPVS must be available on the Linux node before kube-proxy starts.
When kube-proxy starts in IPVS proxy mode, it verifies that the IPVS kernel modules are available. If they are not detected, kube-proxy falls back to running in iptables proxy mode.
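To check the prerequisites and inspect the virtual servers IPVS creates for each Service, run the following on a node (a sketch; it assumes the ipvsadm tool and the ip_vs kernel modules are installed):

# Confirm the kernel modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack
# List the IPVS virtual servers (one per Service IP:port) and their backend real servers (Pods)
sudo ipvsadm -Ln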


The three main IPVS modes

DR mode: the load balancer (LB) only rewrites the destination MAC address of the request to that of a real backend server; the server sends its response directly back to the client without passing through the LB again. This is also the best-performing mode.

TUN mode: the LB receives the client request and wraps it in an IP tunnel, i.e. it adds an IP tunnel header on top of the original packet, then sends it to the real backend server; the server sends its response directly back to the client.

NAT mode: the LB rewrites the destination IP of the client's request to the real backend server's IP, and also rewrites the source IP of the server's response to the LB's own IP.

Note: kube-proxy's IPVS mode uses NAT mode, because DR and TUN modes do not support port mapping.
IPVS offers several scheduling algorithms to balance traffic across backend Pods (see the configuration sketch after this list):
rr: round robin
lc: least connections
dh: destination hashing
sh: source hashing
sed: shortest expected delay
nq: never queue
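The proxy mode and IPVS scheduler are chosen in the kube-proxy configuration (a minimal KubeProxyConfiguration sketch; how the file reaches kube-proxy, e.g. via the kube-proxy ConfigMap, depends on how the cluster was installed):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # kube-proxy falls back to iptables if the IPVS modules are missing
ipvs:
  scheduler: "rr"   # one of the algorithms listed above (rr, lc, dh, sh, sed, nq)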

Session affinity

  1. In all of these proxy modes, if you need connections from a particular client to always be handed to the same Pod, you can do so through fields on the Service:

Setting service.spec.sessionAffinity to "ClientIP" (the default is "None") enables session affinity based on the client's IP address. You can also set the maximum session sticky time with service.spec.sessionAffinityConfig.clientIP.timeoutSeconds (the default is 10800 seconds, i.e. 3 hours).
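The relevant fragment of a Service spec (a sketch; the timeout value is only an example):

spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600   # maximum sticky time per client IP (default 10800s = 3h)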

kubectl explain svc.spec.sessionAffinity
Two values are supported, ClientIP and None; the default is None (random scheduling). ClientIP means requests from the same client are always scheduled to the same Pod.

Create a Service and then edit sessionAffinity:
kubectl edit service/test
change the field to: sessionAffinity: ClientIP
iptables -t nat -L | grep 10800

cat << EOF > myapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
  sessionAffinity: ClientIP
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF


6. Service discovery

Kubernetes supports cluster DNS servers (for example CoreDNS) that watch the Kubernetes API for newly created Services and create a set of DNS records for each one. If DNS is enabled throughout the cluster, all Pods should be able to resolve Services automatically by their DNS names.
DNS policies:
"Default": the Pod inherits the DNS configuration of the node (host) it runs on.
"ClusterFirst": any DNS query that does not match the configured cluster domain suffix (for example "www.kubernetes.io") is forwarded to the upstream DNS server inherited from the node.
"ClusterFirstWithHostNet": use this policy for Pods running with the host network (hostNetwork: true).
"None": the Pod ignores the DNS settings of the Kubernetes environment and uses only what is specified in dnsConfig in the Pod spec (see the sketch below).
Note:
"Default" is not the default DNS policy. If dnsPolicy is not explicitly specified, "ClusterFirst" is used.
Applications in the same Namespace can reach each other directly via service_name; across Namespaces, use service_name.namespace.
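A Pod that opts out of cluster DNS entirely (a minimal sketch; the Pod name, nameserver, and search domain are placeholders, not values from this cluster):

apiVersion: v1
kind: Pod
metadata:
  name: dns-example           # hypothetical name
spec:
  containers:
  - name: test
    image: busybox:1.28
    command: ["sleep", "3600"]
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.1.1.1                 # placeholder upstream DNS server
    searches:
    - ns1.svc.cluster.local   # example search domain
    options:
    - name: ndots
      value: "2"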



apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
[liwm@rmaster01 liwm]$ kubectl create -f myapp.yaml
Error from server (AlreadyExists): error when creating "myapp.yaml": deployments.apps "myapp" already exists
Error from server (Invalid): error when creating "myapp.yaml": Service "myapp" is invalid: spec.ports[0].nodePort: Invalid value: 30080: provided port is already allocated
[liwm@rmaster01 liwm]$
[liwm@rmaster01 liwm]$ kubectl delete deployments.apps --all
deployment.apps "myapp" deleted
[liwm@rmaster01 liwm]$
[liwm@rmaster01 liwm]$ kubectl create -f myapp.yaml
deployment.apps/myapp created
The Service "myapp" is invalid: spec.ports[0].nodePort: Invalid value: 30080: provided port is already allocated
[liwm@rmaster01 liwm]$ kubectl describe service myapp
Name:                     myapp
Namespace:                default
Labels:                   <none>
Annotations:              field.cattle.io/publicEndpoints:
                            [{"addresses":["192.168.31.133"],"port":30080,"protocol":"TCP","serviceName":"default:myapp","allNodes":true}]
Selector:                 app=myapp
Type:                     NodePort
IP:                       10.43.76.239
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30080/TCP
Endpoints:                10.42.2.169:80,10.42.4.15:80
Session Affinity:         ClientIP
External Traffic Policy:  Cluster
Events:                   <none>
[liwm@rmaster01 liwm]$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-54d6f6cdb7-npg6q   1/1     Running   0          7m32s
myapp-54d6f6cdb7-ztpfp   1/1     Running   0          7m32s
[liwm@rmaster01 liwm]$ kubectl exec -it myapp-54d6f6cdb7-npg6q bash
root@myapp-54d6f6cdb7-npg6q:/# cd /usr/share/nginx/html/
root@myapp-54d6f6cdb7-npg6q:/usr/share/nginx/html# echo 1 > index.html
root@myapp-54d6f6cdb7-npg6q:/usr/share/nginx/html# exit
exit
[liwm@rmaster01 liwm]$ kubectl exec -it myapp-54d6f6cdb7-ztpfp bash
root@myapp-54d6f6cdb7-ztpfp:/# cd /usr/share/nginx/html/
root@myapp-54d6f6cdb7-ztpfp:/usr/share/nginx/html# echo 2 > index.html
root@myapp-54d6f6cdb7-ztpfp:/usr/share/nginx/html# exit
exit
[liwm@rmaster01 liwm]$ for a in {1..10}; do curl http://192.168.31.130:30080 && sleep 1s; done
2
2
2
2
2
2
2
2
2
2
[liwm@rmaster01 liwm]$ for a in {1..10}; do curl http://192.168.31.130:30080 && sleep 1s; done
2
2
2
2
2
2
2
2
2
2
[liwm@rmaster01 liwm]$ for a in {1..10}; do curl http://192.168.31.130:30080 && sleep 1s; done
2
2
2
2
2
2
2
2
2
2
[liwm@rmaster01 liwm]$

7. Ingress

  1. An Ingress exposes HTTP and HTTPS routes from outside the cluster to Services inside the cluster. Traffic routing is controlled by rules defined on the Ingress resource; by creating an Ingress you can forward requests based on URL host, path, and SSL.

What do you need first?
You must have an Ingress controller to satisfy an Ingress; creating only the Ingress resource has no effect.

The ingress-controller itself is a Pod whose container runs reverse-proxy software. It reads the Services you add and dynamically generates the load balancer's reverse-proxy configuration: when you add an Ingress, its rules map hostnames and paths to the corresponding Service backends.
The NGINX Ingress Controller is currently the most widely used and highest-rated Ingress controller. Its features include (see the annotation sketch after this list):
routing based on HTTP headers
routing based on path
per-Ingress timeouts (without affecting other Ingresses' timeout settings)
request rate limiting
rewrite rules
SSL
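A sketch of how some of these features are switched on through annotations on an Ingress (the annotation names come from the NGINX Ingress Controller; the values are illustrative only):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"   # per-Ingress timeout (seconds)
    nginx.ingress.kubernetes.io/limit-rps: "10"             # request rate limit per client IP
    nginx.ingress.kubernetes.io/rewrite-target: /           # rewrite rule (also used in ingress.yaml below)
    nginx.ingress.kubernetes.io/ssl-redirect: "true"        # force HTTPS when TLS is configured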

Ingress structure

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '1'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"default-http-backend"},"name":"default-http-backend","namespace":"ingress-nginx"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"default-http-backend"}},"template":{"metadata":{"labels":{"app":"default-http-backend"}},"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"beta.kubernetes.io/os","operator":"NotIn","values":["windows"]},{"key":"node-role.kubernetes.io/worker","operator":"Exists"}]}]}}},"containers":[{"image":"rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1","livenessProbe":{"httpGet":{"path":"/healthz","port":8080,"scheme":"HTTP"},"initialDelaySeconds":30,"timeoutSeconds":5},"name":"default-http-backend","ports":[{"containerPort":8080}],"resources":{"limits":{"cpu":"10m","memory":"20Mi"},"requests":{"cpu":"10m","memory":"20Mi"}}}],"terminationGracePeriodSeconds":60,"tolerations":[{"effect":"NoExecute","operator":"Exists"},{"effect":"NoSchedule","operator":"Exists"}]}}}}
  creationTimestamp: '2020-04-27T12:56:52Z'
  generation: 1
  labels:
    app: default-http-backend
  name: default-http-backend
  namespace: ingress-nginx
  resourceVersion: '8840297'
  selfLink: /apis/apps/v1/namespaces/ingress-nginx/deployments/default-http-backend
  uid: ab861f90-92df-4952-9431-ab21f83d739b
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: default-http-backend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: NotIn
                values:
                - windows
              - key: node-role.kubernetes.io/worker
                operator: Exists
      containers:
      - image: 'rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1'
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: default-http-backend
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 60
      tolerations:
      - effect: NoExecute
        operator: Exists
      - effect: NoSchedule
        operator: Exists
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: '2020-04-27T12:56:52Z'
    lastUpdateTime: '2020-04-27T12:57:18Z'
    message: >-
      ReplicaSet "default-http-backend-67cf578fc4" has successfully
      progressed.
    reason: NewReplicaSetAvailable
    status: 'True'
    type: Progressing
  - lastTransitionTime: '2020-05-01T03:41:59Z'
    lastUpdateTime: '2020-05-01T03:41:59Z'
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: 'True'
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1


---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: '1'
    field.cattle.io/publicEndpoints: >-
      [{"nodeName":"c-vvctx:machine-hvm67","addresses":["192.168.31.133"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-5w89b","allNodes":false},{"nodeName":"c-vvctx:machine-hvm67","addresses":["192.168.31.133"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-5w89b","allNodes":false},{"nodeName":"c-vvctx:machine-4hj8w","addresses":["192.168.31.134"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-vg4sh","allNodes":false},{"nodeName":"c-vvctx:machine-4hj8w","addresses":["192.168.31.134"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-vg4sh","allNodes":false}]
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"name":"nginx-ingress-controller","namespace":"ingress-nginx"},"spec":{"selector":{"matchLabels":{"app":"ingress-nginx"}},"template":{"metadata":{"annotations":{"prometheus.io/port":"10254","prometheus.io/scrape":"true"},"labels":{"app":"ingress-nginx"}},"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"beta.kubernetes.io/os","operator":"NotIn","values":["windows"]},{"key":"node-role.kubernetes.io/worker","operator":"Exists"}]}]}}},"containers":[{"args":["/nginx-ingress-controller","--default-backend-service=$(POD_NAMESPACE)/default-http-backend","--configmap=$(POD_NAMESPACE)/nginx-configuration","--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services","--udp-services-configmap=$(POD_NAMESPACE)/udp-services","--annotations-prefix=nginx.ingress.kubernetes.io"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"rancher/nginx-ingress-controller:nginx-0.25.1-rancher1","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":10254,"scheme":"HTTP"},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"nginx-ingress-controller","ports":[{"containerPort":80,"name":"http"},{"containerPort":443,"name":"https"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":10254,"scheme":"HTTP"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"securityContext":{"capabilities":{"add":["NET_BIND_SERVICE"],"drop":["ALL"]},"runAsUser":33}}],"hostNetwork":true,"serviceAccountName":"nginx-ingress-serviceaccount","tolerations":[{"effect":"NoExecute","operator":"Exists"},{"effect":"NoSchedule","operator":"Exists"}]}}}}
  creationTimestamp: '2020-04-27T12:56:52Z'
  generation: 1
  name: nginx-ingress-controller
  namespace: ingress-nginx
  resourceVersion: '8850974'
  selfLink: /apis/apps/v1/namespaces/ingress-nginx/daemonsets/nginx-ingress-controller
  uid: 7cbd5760-edf2-48ae-8584-683a29bf3ac8
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
      labels:
        app: ingress-nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: NotIn
                values:
                - windows
              - key: node-role.kubernetes.io/worker
                operator: Exists
      containers:
      - args:
        - /nginx-ingress-controller
        - '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
        - '--configmap=$(POD_NAMESPACE)/nginx-configuration'
        - '--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services'
        - '--udp-services-configmap=$(POD_NAMESPACE)/udp-services'
        - '--annotations-prefix=nginx.ingress.kubernetes.io'
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: 'rancher/nginx-ingress-controller:nginx-0.25.1-rancher1'
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: nginx-ingress-serviceaccount
      serviceAccountName: nginx-ingress-serviceaccount
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoExecute
        operator: Exists
      - effect: NoSchedule
        operator: Exists
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 2
  desiredNumberScheduled: 2
  numberAvailable: 2
  numberMisscheduled: 0
  numberReady: 2
  observedGeneration: 1
  updatedNumberScheduled: 2





# Create an Ingress
kubectl run test1 --image=nginx
kubectl run test2 --image=nginx
kubectl expose deployment test1 --port=8081 --target-port=80
kubectl expose deployment test2 --port=8081 --target-port=80

cat << EOF > ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: riyimei.cn
    http:
      paths:
      - path: /dev
        backend:
          serviceName: test1
          servicePort: 8081
      - path: /uat
        backend:
          serviceName: test2
          servicePort: 8081
EOF

# Add node01's IP address and the domain name to the hosts file
vim /etc/hosts
192.168.31.131 node01 riyimei.cn
curl riyimei.cn/dev
curl riyimei.cn/uat
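For the SSL feature mentioned above, the Ingress references a TLS Secret (a sketch using the same networking.k8s.io/v1beta1 API as ingress.yaml; the certificate/key file paths and the secret name are placeholders):

# Create a TLS secret from an existing certificate/key pair (paths are placeholders)
kubectl create secret tls riyimei-tls --cert=tls.crt --key=tls.key

cat << EOF > ingress-tls.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-tls-example
spec:
  tls:
  - hosts:
    - riyimei.cn
    secretName: riyimei-tls
  rules:
  - host: riyimei.cn
    http:
      paths:
      - path: /dev
        backend:
          serviceName: test1
          servicePort: 8081
EOF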

Write ingress2.yaml to access the service by IP address (a default backend with no host rule)

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: test
    servicePort: 18080

curl 172.17.224.180