01 Controllers

Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/

ReplicationController (RC)

Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/

> A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.

A ReplicationController declares a desired state: at any moment, the number of replicas of some Pod should match an expected value. An RC definition therefore contains:

  • the desired number of Pod replicas (replicas)
  • a Label Selector used to select the target Pods
  • a Pod template (template) used to create new Pods when the replica count falls below the expected number

In other words, an RC provides high availability for Pods in the cluster and reduces the manual operations work of a traditional IT environment.

Have a try

kind: the type of object to create

spec.selector: the labels of the Pods this RC manages; here, every Pod carrying the label app: nginx is managed by this RC

spec.replicas: the number of replicas the Pods managed by this RC should run

spec.template: the template used to define a Pod, e.g. the Pod name, its labels, and the application running in the Pod

By changing the image version in the RC's Pod template, you can upgrade the Pods.

kubectl apply -f nginx_replication.yaml: k8s then creates 3 Pods across the available Nodes; each Pod carries the label app: nginx, and each runs one nginx container.

If a Pod fails, the Controller Manager detects it promptly and creates a new Pod according to the RC definition.

Scaling: kubectl scale rc nginx --replicas=5

(1) Create a file named nginx_replication.yaml

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

(2) Create the pods from nginx_replication.yaml

```
kubectl apply -f nginx_replication.yaml
```

(3) View the pods

```
kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE
nginx-hksg8   1/1     Running   0          44s   192.168.80.195   w2
nginx-q7bw5   1/1     Running   0          44s   192.168.190.67   w1
nginx-zzwzl   1/1     Running   0          44s   192.168.190.68   w1

kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
nginx   3         3         3       2m54s
```

(4) Try deleting a pod

```
kubectl delete pods nginx-zzwzl
kubectl get pods
```

(5) Scale the pods

```
kubectl scale rc nginx --replicas=5
kubectl get pods
nginx-8fctt   0/1   ContainerCreating   0   2s
nginx-9pgwk   0/1   ContainerCreating   0   2s
nginx-hksg8   1/1   Running             0   6m50s
nginx-q7bw5   1/1   Running             0   6m50s
nginx-wzqkf   1/1   Running             0   99s
```

(6) Delete the pods

```
kubectl delete -f nginx_replication.yaml
```

ReplicaSet (RS)

Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/

> A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

In Kubernetes v1.2, RC was upgraded to a new concept, ReplicaSet, officially described as the "next-generation RC".

ReplicaSet is essentially the same as RC: almost every kubectl command that works on an RC also works on an RS.

The only difference between RS and RC is that RS supports set-based Label Selectors, while RC only supports equality-based Label Selectors, which makes ReplicaSet more powerful.

Have a try

```yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    ...
```

Note: ReplicaSet is rarely used on its own. It is mainly consumed by the higher-level Deployment object, together forming a complete orchestration mechanism for creating, deleting, and updating Pods. When we use a Deployment, we do not need to care how it creates and maintains the ReplicaSet; all of that happens automatically. We also avoid incompatibilities with other mechanisms (for example, ReplicaSet does not support rolling-update, but Deployment does).
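For reference, the extensions/v1beta1 API used in the example above has been deprecated in newer Kubernetes releases; a sketch of the same set-based selector against the stable apps/v1 API follows (labels mirror the example above; the nginx image and replica count are illustrative, and in apps/v1 the selector is required and must match the template labels):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    # set-based selector: matchExpressions supports In, NotIn, Exists, DoesNotExist
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend   # must satisfy the selector above
    spec:
      containers:
      - name: nginx
        image: nginx     # illustrative image
        ports:
        - containerPort: 80
```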

Deployment

官网https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

> A Deployment provides declarative updates for Pods and ReplicaSets.
>
> You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

  • The biggest upgrade of Deployment over RC is that we can always know the current progress of a Pod "deployment".
  • Creating a Deployment object generates the corresponding ReplicaSet and completes the creation of the Pod replicas.
  • Checking the Deployment status shows whether the rollout has completed (whether the number of Pod replicas has reached the expected value).
(1) Create the nginx_deployment.yaml file

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

(2) Create the pods from the nginx_deployment.yaml file

```
kubectl apply -f nginx_deployment.yaml
```

(3) View the pods

```
kubectl get pods -o wide
kubectl get deployment
kubectl get rs
kubectl get deployment -o wide
```

```
nginx-deployment-6dd86d77d-f7dxb   1/1   Running   0   22s   192.168.80.198   w2
nginx-deployment-6dd86d77d-npqxj   1/1   Running   0   22s   192.168.190.71   w1
nginx-deployment-6dd86d77d-swt25   1/1   Running   0   22s   192.168.190.70   w1
```

Pod naming: nginx-deployment[deployment]-6dd86d77d[replicaset]-f7dxb[pod]

(4) The current nginx version

```
kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES        SELECTOR
nginx-deployment   3/3     3            3           3m27s   nginx        nginx:1.7.9   app=nginx
```

(5) Update the nginx image version

```
kubectl set image deployment nginx-deployment nginx=nginx:1.9.1
```
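The image update above is rolled out gradually by replacing Pods through a new ReplicaSet. As a sketch (field names come from the Deployment API; the values here are illustrative), the rollout behavior can be tuned with an explicit update strategy in the Deployment spec:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 Pod above the desired count during the update
      maxUnavailable: 1  # at most 1 Pod may be unavailable during the update
```

The rollout can then be watched with `kubectl rollout status deployment nginx-deployment`, and reverted with `kubectl rollout undo` if needed.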

02 Labels and Selectors

The preceding yaml files contain many labels; as the name suggests, they attach tags to resources.

Official docs: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

> Labels are key/value pairs that are attached to objects, such as pods.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
```

This declares a pod named nginx-pod with one label: key app, value nginx.

Pods carrying the same label can be handed to a selector for management:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:          # match pods that carry this label
    matchLabels:
      app: nginx
  template:          # the pod template
    metadata:
      labels:
        app: nginx   # the label of each pod: key app, value nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

View the labels of the pods: kubectl get pods --show-labels

You can also try what happens when the selector does not match.
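To sketch that mismatch case (labels here are illustrative): in apps/v1, a Deployment whose selector does not match its template labels is rejected by the API server at apply time, with an error saying the selector does not match the template labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx   # the selector expects app=nginx ...
  template:
    metadata:
      labels:
        app: web   # ... but the template labels its pods app=web, so apply fails
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
```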

03 Namespace

```
kubectl get pods
kubectl get pods -n kube-system
```

Compare the output of these two commands: it differs, because the Pods belong to different Namespaces.

View the current namespaces: kubectl get namespaces (or kubectl get ns)

```
NAME              STATUS   AGE
default           Active   27m
kube-node-lease   Active   27m
kube-public       Active   27m
kube-system       Active   27m
```

In short, namespaces exist to isolate resources such as Pods, Services, and Deployments. Specify the namespace on the command line with -n; if omitted, the default namespace is used: default.

Creating a namespace

myns-namespace.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myns
```

```
kubectl apply -f myns-namespace.yaml
kubectl get ns
```

```
NAME              STATUS   AGE
default           Active   38m
kube-node-lease   Active   38m
kube-public       Active   38m
kube-system       Active   38m
myns              Active   6s
```

Resources in a specific namespace

For example, create a pod in the myns namespace

```
vi nginx-pod.yaml
kubectl apply -f nginx-pod.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: myns
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
```

View the Pods and resources in the myns namespace

```
kubectl get pods
kubectl get pods -n myns
kubectl get all -n myns
kubectl get pods --all-namespaces   # list pods across all namespaces
```

04 Network

4.1 Communication between containers in the same Pod

We now turn to Kubernetes networking.

The smallest unit of operation in K8S is the Pod, so first consider how multiple containers within the same Pod communicate.

The following passage from the official docs shows that containers in the same pod share the network IP address and ports, so communication between them poses no problem:

> Each Pod is assigned a unique IP address. Every container in a Pod shares the network namespace, including the IP address and network ports.

What about communicating by container name? That requires joining all containers of the pod to the network of a single shared container, which we call the pod's pause container.
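A minimal sketch of that shared network namespace (the sidecar image and command are illustrative): a second container in the same Pod can reach the nginx container on localhost, without knowing any IP address:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # shares the pod's network namespace with nginx, so localhost:80 reaches nginx
    command: ['sh', '-c', 'sleep 5; wget -qO- http://localhost:80; sleep 3600']
```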

4.2 Pod-to-Pod communication within the cluster

Next, communication between Pods, the smallest unit of operation in K8S.

As we know, each Pod has its own IP address, shared by all the Containers in the Pod.

Can multiple Pods communicate with each other through these IP addresses?

Two cases need to be examined: Pods on the same machine in the cluster, and Pods on different machines in the cluster.

Prepare two pods: one nginx and one busybox.

nginx_pod.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
```

busybox_pod.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
```

Start both pods and check their status:

```
kubectl apply -f nginx_pod.yaml
kubectl apply -f busybox_pod.yaml
kubectl get pods -o wide
```

```
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE
busybox     1/1     Running   0          49s     192.168.221.70   worker02-kubeadm-k8s
nginx-pod   1/1     Running   0          7m46s   192.168.14.1     worker01-kubeadm-k8s
```

Observed: nginx-pod has IP 192.168.14.1, and the busybox pod has IP 192.168.221.70.

Same machine in the same cluster

(1) On worker01: ping 192.168.14.1

```
PING 192.168.14.1 (192.168.14.1) 56(84) bytes of data.
64 bytes from 192.168.14.1: icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from 192.168.14.1: icmp_seq=2 ttl=64 time=0.048 ms
```

(2) On worker01: curl 192.168.14.1

```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
```

Different machines in the same cluster

(1) On worker02: ping 192.168.14.1

```
[root@worker02-kubeadm-k8s ~]# ping 192.168.14.1
PING 192.168.14.1 (192.168.14.1) 56(84) bytes of data.
64 bytes from 192.168.14.1: icmp_seq=1 ttl=63 time=0.680 ms
64 bytes from 192.168.14.1: icmp_seq=2 ttl=63 time=0.306 ms
64 bytes from 192.168.14.1: icmp_seq=3 ttl=63 time=0.688 ms
```

(2) On worker02: curl 192.168.14.1; nginx is reachable the same way.

(3) On the master:

ping/curl 192.168.14.1 reaches the nginx-pod on worker01

ping 192.168.221.70 reaches the busybox pod on worker02

(4) On worker01: ping 192.168.221.70 reaches the busybox pod on worker02

How the Kubernetes cluster networking model is implemented: Calico

Official docs: https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model

  • pods on a node can communicate with all pods on all nodes without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  • pods in the host network of a node can communicate with all pods on all nodes without NAT

4.3 Service inside the cluster: ClusterIP

The Pods above can communicate within the cluster, but Pods are unstable: when Pods are managed by a Deployment they may be scaled up or down at any time, and their IP addresses change. We want a fixed IP that is reachable from inside the cluster. As described earlier in the architecture overview, identical or related Pods can be labeled and grouped into a Service. A Service has a fixed IP, so no matter how Pods are created and destroyed, they can always be reached through the Service IP.

Service docs: https://kubernetes.io/docs/concepts/services-networking/service/

> An abstract way to expose an application running on a set of Pods as a network service.
>
> With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.

(1) Create the whoami-deployment.yaml file and apply it

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: jwilder/whoami
        ports:
        - containerPort: 8000
```

(2) View the pods and the services

```
whoami-deployment-5dd9ff5fd8-22k9n   192.168.221.80   worker02-kubeadm-k8s
whoami-deployment-5dd9ff5fd8-vbwzp   192.168.14.6     worker01-kubeadm-k8s
whoami-deployment-5dd9ff5fd8-zzf4d   192.168.14.7     worker01-kubeadm-k8s
```

kubectl get svc: note that there is no service for whoami yet

```
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   19h
```

(3) Access normally within the cluster

```
curl 192.168.221.80:8000
curl 192.168.14.6:8000
curl 192.168.14.7:8000
```

(4) Create a service for whoami

Note: this address is only reachable from inside the cluster.

```
kubectl expose deployment whoami-deployment
kubectl get svc
# to delete the svc: kubectl delete service whoami-deployment
[root@master-kubeadm-k8s ~]# kubectl get svc
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes          ClusterIP   10.96.0.1       <none>        443/TCP    19h
whoami-deployment   ClusterIP   10.105.147.59   <none>        8000/TCP   23s
```

There is now a ClusterIP-type service named whoami-deployment with IP address 10.105.147.59.

(5) Access via the Service's ClusterIP

```
[root@master-kubeadm-k8s ~]# curl 10.105.147.59:8000
I'm whoami-deployment-678b64444d-b2695
[root@master-kubeadm-k8s ~]# curl 10.105.147.59:8000
I'm whoami-deployment-678b64444d-hgdrk
[root@master-kubeadm-k8s ~]# curl 10.105.147.59:8000
I'm whoami-deployment-678b64444d-65t88
```

(6) Inspect whoami-deployment in detail; note the Endpoints entry connecting the 3 concrete Pods

```
[root@master-kubeadm-k8s ~]# kubectl describe svc whoami-deployment
Name:              whoami-deployment
Namespace:         default
Labels:            app=whoami
Annotations:       <none>
Selector:          app=whoami
Type:              ClusterIP
IP:                10.105.147.59
Port:              <unset>  8000/TCP
TargetPort:        8000/TCP
Endpoints:         192.168.14.8:8000,192.168.221.81:8000,192.168.221.82:8000
Session Affinity:  None
Events:            <none>
```

(7) Scale whoami up to 5 replicas

```
kubectl scale deployment whoami-deployment --replicas=5
```

(8) Access again: curl 10.105.147.59:8000

(9) Check the service details again: kubectl describe svc whoami-deployment

(10) A Service does not have to be created with kubectl expose; it can also be defined in a yaml file

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: ClusterIP
```

Conclusion: Service exists precisely because Pods are unstable. What we explored above is one Service type, ClusterIP, which is only reachable from inside the cluster.

With the Pod at the center, we have covered communication within the cluster. Next: Pods accessing external services, and external services accessing Pods in the cluster.

4.4 Pods accessing external services

This is straightforward: there is not much to say, just access them directly.

4.5 External services accessing Pods in the cluster

Service-NodePort

This is another Service type: the NodePort approach.

In short, because the physical machines' IPs are reachable from outside the cluster, the same port (e.g. 32008) is exposed on every physical machine in the cluster.
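Besides kubectl expose, the node port can also be pinned in yaml; a sketch (the service name is illustrative, and 32008 mirrors the port mentioned above, which must fall within the default 30000-32767 NodePort range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami-nodeport   # illustrative name
spec:
  type: NodePort
  selector:
    app: whoami
  ports:
  - protocol: TCP
    port: 8000        # cluster-internal Service port
    targetPort: 8000  # containerPort of the whoami Pods
    nodePort: 32008   # fixed port exposed on every node (default range 30000-32767)
```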

(1) Create the pods from whoami-deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: jwilder/whoami
        ports:
        - containerPort: 8000
```

(2) Create a NodePort-type service named whoami-deployment

```
kubectl delete svc whoami-deployment
kubectl expose deployment whoami-deployment --type=NodePort
[root@master-kubeadm-k8s ~]# kubectl get svc
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP          21h
whoami-deployment   NodePort    10.99.108.82   <none>        8000:32041/TCP   7s
```

(3) Note the port 32041 above: it is actually exposed on every physical machine in the cluster

```
lsof -i tcp:32041
netstat -ntlp | grep 32041
```

(4) Access via a physical machine's IP in the browser

```
http://192.168.0.51:32041
curl 192.168.0.61:32041
```

Conclusion: NodePort does make Pods reachable from outside, but is it really a good approach? Not really: it occupies a port on every physical host.

Service-LoadBalancer

This usually requires support from a third-party cloud provider, which is a constraint.
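For reference, a LoadBalancer Service differs from a NodePort one only in its type; a sketch follows (the service name is illustrative, and whether an external IP is actually provisioned depends on the cloud provider):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami-lb      # illustrative name
spec:
  type: LoadBalancer   # asks the cloud provider to provision an external load balancer
  selector:
    app: whoami
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000
```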

Ingress

Official docs: https://kubernetes.io/docs/concepts/services-networking/ingress/

> An API object that manages external access to the services in a cluster, typically HTTP.
>
> Ingress can provide load balancing, SSL termination and name-based virtual hosting.

So Ingress helps us access services inside the cluster. But before looking at Ingress itself, let's start from a case: simply deploy tomcat in the K8S cluster.

For a browser, i.e. an external client, to reach this tomcat, the earlier Service-NodePort approach would work: expose a port such as 32008 and access 192.168.0.61:32008.

```
vi my-tomcat.yaml
kubectl apply -f my-tomcat.yaml
kubectl get pods
kubectl get deployment
kubectl get svc
# tomcat-service   NodePort   10.105.51.97   <none>   80:31032/TCP   37s
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat
  type: NodePort
```

Clearly, the Service-NodePort approach is not recommended for production. So, for the same requirement, let's use Ingress to access tomcat instead.

官网Ingress:https://kubernetes.io/docs/concepts/services-networking/ingress/

GitHub Ingress Nginx:https://github.com/kubernetes/ingress-nginx

Nginx Ingress Controller: https://kubernetes.github.io/ingress-nginx/

(1) Create the Ingress Nginx Controller Pod via a Deployment. To make it reachable from outside, use either a NodePort Service or HostPort; here we choose HostPort, pinned to worker01, for example.

```
# make sure the nginx-controller runs on node w1
kubectl label node w1 name=ingress
# to run in HostPort mode, add to the controller's pod spec:
#   hostNetwork: true
# search for nodeSelector in the file; also make sure ports 80 and 443 on w1
# are free, and note that pulling the image can take quite a while
# mandatory.yaml is in the "course source code" directory of the shared drive
kubectl apply -f mandatory.yaml
kubectl get all -n ingress-nginx
```
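The two changes described above land in the controller Deployment's pod spec inside mandatory.yaml; a sketch of just the relevant fields (the surrounding Deployment definition is omitted, and the exact layout of the file may differ by version):

```yaml
# fragment of the nginx-ingress-controller Deployment in mandatory.yaml
spec:
  template:
    spec:
      hostNetwork: true   # bind the controller directly to the node's ports 80/443
      nodeSelector:
        name: ingress     # matches the label added via: kubectl label node w1 name=ingress
```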

(2) Check ports 80 and 443 on w1

```
lsof -i tcp:80
lsof -i tcp:443
```

(3) Create the tomcat pod and service

Remember to delete the previous tomcat first: kubectl delete -f my-tomcat.yaml

```
vi tomcat.yaml
kubectl apply -f tomcat.yaml
kubectl get svc
kubectl get pods
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat
```

(4) Create the Ingress and define the forwarding rules

```
kubectl apply -f nginx-ingress.yaml
kubectl get ingress
kubectl describe ingress nginx-ingress
```

```yaml
# ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: tomcat.jack.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 80
```
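Note that the extensions/v1beta1 Ingress API used above was removed in Kubernetes 1.22; on newer clusters the same rule is written against networking.k8s.io/v1 (a sketch; note the added pathType field and the nested service backend):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: tomcat.jack.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 80
```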

(5) Edit the hosts file on Windows and add a DNS entry

```
192.168.8.61 tomcat.jack.com
```

(6) Open a browser and visit tomcat.jack.com

Summary: to use Ingress networking later on, you only need to define the ingress, the service, and the pods, provided the nginx ingress controller has already been set up.