deployment.yaml

Export the deployment YAML
kubectl create deployment java-demo --help
kubectl create deployment java-demo --image=lizhenliang/java-demo --dry-run -o yaml > deployment-java.yaml # -o sets the output format; --dry-run only simulates the request, nothing is created
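
Since --dry-run alone is deprecated (kubectl's own warning appears later in these notes), newer kubectl versions prefer the explicit client-side form:

kubectl create deployment java-demo --image=lizhenliang/java-demo --dry-run=client -o yaml > deployment-java.yaml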

This produces deployment-java.yaml; apply it with:
kubectl apply -f deployment-java.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: java-demo
  name: java-demo
spec:
  replicas: 1                        ## number of Pod replicas
  selector:
    matchLabels:
      app: java-demo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: java-demo
    spec:
      containers:
      - image: lizhenliang/java-demo ## pulled from Docker Hub by default
        name: java-demo
        resources: {}
status: {}
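
The replicas value can also be changed after deployment without editing the file; a quick usage sketch:

kubectl scale deployment java-demo --replicas=3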

The Deployment controller

Controllers are also known as workloads.

[root@master ~]# kubectl create deployment web --image=nginx --dry-run -o yaml > web_nging.yml
W0226 19:28:14.262671  156463 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@master ~]# ls
dashboard.yaml  kube-flannel.yml  kubernetes-dashboard.yaml  pod.yaml  recommended.yaml  svc.yaml  test-java.yaml  web_nging.yml
[root@master ~]# cat web_nging.yml

web_nging.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}

After a few edits:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: lizhenliang/java-demo
        name: nginx-container
        resources: {}
status: {}
[root@master ~]# kubectl apply -f web_nging.yml
deployment.apps/web created
[root@master ~]# kubectl get svc,pod -o wide
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/java-demo    NodePort    10.107.162.9   <none>        80:32331/TCP   130m    app=java-demo
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        5h34m   <none>

NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
pod/web-57fc9cf4d4-2nj77   1/1     Running   0          38s   10.244.1.7   node1   <none>           <none>
pod/web-57fc9cf4d4-8n8hf   1/1     Running   0          38s   10.244.1.9   node1   <none>           <none>
pod/web-57fc9cf4d4-dfmmb   1/1     Running   0          38s   10.244.1.8   node1   <none>           <none>

service.yaml

kubectl expose deployment java-demo --port=80 --name=<your_service_name> --target-port=8080 --type=NodePort --dry-run -o yaml > svc.yaml

--target-port is the port the application listens on inside the container image. --port is the Service port inside the cluster: other Pods reach the application at <ClusterIP>:<port>, which is what type ClusterIP provides. --type=NodePort additionally exposes the Service outside the cluster; if no nodePort is specified, a random port is assigned. --name defaults to the Deployment's name when omitted.
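
To make the mapping concrete, a small sketch using the java-demo Service's ClusterIP/NodePort and a node IP that appear in the session output elsewhere in these notes:

# inside the cluster: <ClusterIP>:<port>
curl http://10.107.162.9:80/
# outside the cluster: <NodeIP>:<nodePort>
curl http://192.168.116.137:32331/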

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: java-demo
  name: java-demo
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: java-demo
  type: NodePort
status:
  loadBalancer: {}

Apply it:
kubectl apply -f svc.yaml

[root@master ~]# kubectl apply -f svc.yaml
service/java-demo created
[root@master ~]# kubectl get pods,svc
NAME                             READY   STATUS    RESTARTS   AGE
pod/java-demo-56d54df448-5f5p5   1/1     Running   0          93s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/java-demo    NodePort    10.107.162.9   <none>        80:32331/TCP   2s
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        3h23m

pod

namespace: resource isolation.

How do you isolate multiple projects within one team? With namespaces.

pod:
A Pod is a group of one or more containers.
Containers in the same Pod share namespaces and storage.
Pods are ephemeral (temporary).

Use cases for co-located (tightly coupled) applications:

  • File exchange between applications: file and storage sharing via volumes. Volumes can move with Pods between nodes; an emptyDir volume can serve as a shared scratch space mounted by several containers.
  • Communication over 127.0.0.1 or local sockets: network sharing.
  • Components that call each other directly.

Docker's design philosophy: one application per container; each `docker run` starts a single process.
Isolation relies mainly on Linux namespaces, and resource limits on cgroups. This is why every Pod starts with an infrastructure container (pause) that holds the network namespace shared by the other containers.
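
A minimal sketch of such a co-located Pod (the names, images and command here are illustrative assumptions): both containers share the pause container's network namespace, so they could also talk over 127.0.0.1, and both mount the same emptyDir volume:

apiVersion: v1
kind: Pod
metadata:
  name: two-container-demo        # hypothetical name
spec:
  containers:
  - name: web
    image: nginx                  # serves the files written by the sidecar
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}                  # shared scratch volume mounted by both containers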

[root@master ~]# docker ps | grep pause
3887328ddc36   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"   About an hour ago   Up About an hour   k8s_POD_kubernetes-dashboard-7b4bdcb8b8-9dvnq_kube-system_20bae962-668b-42b1-87d7-9391f48ed57e_0
571cf7cbf548   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"   2 hours ago         Up 2 hours         k8s_POD_dashboard-metrics-scraper-7b59f7d4df-vvs9f_kubernetes-dashboard_39008cef-f79d-439b-8019-541764c6f377_0
ff257da7e78c   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"   2 hours ago         Up 2 hours         k8s_POD_kubernetes-dashboard-74d688b6bc-mbqm7_kubernetes-dashboard_09d0c811-c783-4a06-a16c-079fb7e764d4_0
7ed2829521b6   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"   2 hours ago         Up 2 hours         k8s_POD_dashboard-metrics-scraper-head-58979977f8-txf76_kubernetes-dashboard-head_ca275c6b-2082-4fd7-a41c-e5a2e9782593_0
63bdce172dac   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"   2 hours ago         Up 2 hours         k8s_POD_kubernetes-dashboard-head-679f448bd7-qf7xl_kubernetes-dashboard-head_75d66e1a-5be5-4354-b816-1ebe88cf020e_0

get pod.yaml

env: pass environment variables into the container
resources: resource requests and limits
readinessProbe: readiness probe; only Pods that pass it receive Service traffic
livenessProbe: liveness probe; a container that fails it is restarted (rebuilt)
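
A hedged sketch showing where these four fields sit in a container spec (the port, probe type and values are assumptions, not taken from the java-demo image):

apiVersion: v1
kind: Pod
metadata:
  name: java-demo-probes          # hypothetical name
spec:
  containers:
  - name: java-demo
    image: lizhenliang/java-demo
    env:                          # pass environment variables
    - name: JAVA_OPTS
      value: "-Xmx256m"
    resources:                    # resource requests and limits
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 512Mi
    readinessProbe:               # failing => removed from Service endpoints
      tcpSocket:
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
    livenessProbe:                # failing => container is restarted
      tcpSocket:
        port: 8080
      initialDelaySeconds: 20
      periodSeconds: 10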

[root@master ~]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
java-demo-56d54df448-5f5p5   1/1     Running   0          38m
[root@master ~]# kubectl get pods java-demo-56d54df448-5f5p5 -o yaml > pod.yaml
[root@master ~]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-02-26T09:23:48Z"
  generateName: java-demo-56d54df448-
  labels:
    app: java-demo
    pod-template-hash: 56d54df448
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:app: {}
          f:pod-template-hash: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"2701f0fd-988a-44d2-bc51-4a3639098f04"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:containers:
          k:{"name":"java-demo"}:
            .: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:resources: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext: {}
        f:terminationGracePeriodSeconds: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-02-26T09:23:48Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.244.1.6"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    time: "2021-02-26T09:24:20Z"
  name: java-demo-56d54df448-5f5p5
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: java-demo-56d54df448
    uid: 2701f0fd-988a-44d2-bc51-4a3639098f04
  resourceVersion: "18158"
  uid: 61a5c4e3-c021-403b-95a6-b3814b26c320
spec:
  containers:
  - image: lizhenliang/java-demo
    imagePullPolicy: Always
    name: java-demo
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-z8hjj
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node1
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-z8hjj
    secret:
      defaultMode: 420
      secretName: default-token-z8hjj
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-02-26T09:23:48Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-02-26T09:24:20Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-02-26T09:24:20Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-02-26T09:23:48Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://7f86bece84c38a8b8539b272634ad5868a7c0047d4b07099cea50ed0c89b746d
    image: lizhenliang/java-demo:latest
    imageID: docker-pullable://lizhenliang/java-demo@sha256:4e43b2bcd81adf6d00b46a5c7efd384fc9f5b059c75255c8c89404ed4818bae3
    lastState: {}
    name: java-demo
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-02-26T09:24:19Z"
  hostIP: 192.168.116.137
  phase: Running
  podIP: 10.244.1.6
  podIPs:
  - ip: 10.244.1.6
  qosClass: BestEffort
  startTime: "2021-02-26T09:23:48Z"

expose

There are generally two ways to expose an application: Service and Ingress.

expose - service

  • Purpose:
    • Keeps Pods reachable as they come and go: the Service tracks them via its label selector.
    • Defines an access policy for the Pods (load balancing) at layer 4 (TCP/UDP).

service - expose

The Service type can be ClusterIP, NodePort, LoadBalancer, or ExternalName (four types in total).
ClusterIP: a virtual IP visible only inside the cluster; it cannot be handed directly to external users, so something extra is needed for outside access.
NodePort: exposes the Service on every node at <NodeIP>:<NodePort>.
LoadBalancer: sits one layer above the nodes (typically a cloud provider's load balancer).

[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
java-demo    NodePort    10.107.162.9   <none>        80:32331/TCP   175m
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        6h19m
[root@master ~]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   6h20m
kubernetes-dashboard   NodePort    10.104.129.96   <none>        80:30000/TCP             4h58m

kubectl expose deployment java-demo --port=80 --name=your_service_name --target-port=8080 --type=NodePort

[root@master ~]# kubectl apply -f svc.yaml
service/java-demo-svc-test created
[root@master ~]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: java-demo
  name: java-demo-svc-test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30033
  selector:
    app: java-demo
  type: NodePort
status:
  loadBalancer: {}
[root@master ~]# kubectl get svc,pod -o wide
NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/java-demo-svc-test   NodePort    10.109.80.185   <none>        80:30033/TCP   25s     app=java-demo
service/kubernetes           ClusterIP   10.96.0.1       <none>        443/TCP        6h32m   <none>

Load balancing: implemented by default with iptables (netfilter).
LVS is a load balancer built for very high concurrency, e.g. the 11.11, 618 and 12.12 shopping peaks.
IPVS works in a similar way to LVS.

kube-proxy is the component that implements the Service functionality.
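
One hedged way to see what kube-proxy programmed for a Service: kube-proxy tags its iptables rules with the Service's namespace/name, so a grep over the ruleset works (exact chain names vary by version):

iptables-save | grep java-demo-svc-test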

Problems with NodePort Services

[root@master ~]# kubectl get svc,deployment -o wide
NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/java-demo-svc-test   NodePort    10.109.80.185   <none>        80:30033/TCP   11m     app=web
service/kubernetes           ClusterIP   10.96.0.1       <none>        443/TCP        6h43m   <none>

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS        IMAGES                  SELECTOR
deployment.apps/web   3/3     3            3           69m   nginx-container   lizhenliang/java-demo   app=web

NodePort: http://192.168.116.137:30033/
NodePort: http://192.168.116.136:30033/
NodePort: http://192.168.116.130:30033/

After deployment, all three node IPs above can serve the application, but NodePort has drawbacks:

  • Ports must be tracked manually and can conflict with each other.
  • It only provides layer-4 load balancing, but sometimes layer-7 load balancing is needed (for example, forwarding based on the URL).

LoadBalancer Services share the same problems.

ingress-controller

An Ingress is itself just a set of routing rules.

The ingress-controller is deployed on the nodes.
It forwards traffic based on domain names, implemented on top of nginx (via upstream blocks).
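
Since the controller ultimately just renders an nginx configuration, the generated upstream/server blocks can be inspected directly; a sketch, with the controller pod name as a placeholder:

kubectl exec -n ingress-nginx <ingress-nginx-controller-pod> -- cat /etc/nginx/nginx.conf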

Deploy it with ingress-controller.yaml:

[root@master ~]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-dgllx                1/1     Running   0          7h14m
coredns-7f89b7bc75-rlq5s                1/1     Running   0          7h14m
etcd-master                             1/1     Running   0          7h15m
kube-apiserver-master                   1/1     Running   0          7h15m
kube-controller-manager-master          1/1     Running   0          7h15m
kube-flannel-ds-hpwk9                   1/1     Running   0          6h53m
kube-flannel-ds-ncwv6                   1/1     Running   0          7h14m
kube-flannel-ds-tmnnt                   1/1     Running   0          7h7m
kube-proxy-6c2cj                        1/1     Running   0          6h53m
kube-proxy-8fg9h                        1/1     Running   0          7h7m
kube-proxy-g65th                        1/1     Running   0          7h14m
kube-scheduler-master                   1/1     Running   0          7h14m
kubernetes-dashboard-7b4bdcb8b8-9dvnq   1/1     Running   0          4h39m

After installing ingress:

Deploy the application.
Create a YAML file containing the Ingress rules.
Apply the YAML.
The domain name is your own; remember it must resolve to a node (for local testing, map it in your hosts file). A sketch of such a rule follows.
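
A minimal sketch of an Ingress rule, using the v1beta1 API that matches the 0.30-era controller deployed below (the host and names are illustrative assumptions; newer clusters use networking.k8s.io/v1):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: java-demo-ingress         # hypothetical name
spec:
  rules:
  - host: java-demo.example.com   # must resolve to a node IP (e.g. via your hosts file)
    http:
      paths:
      - path: /
        backend:
          serviceName: java-demo  # the Service created earlier
          servicePort: 80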

namespace

  • Isolates resources (e.g. test vs. development environments).
  • Can cap the resources available within it.

kubectl get ns                             # list all namespaces
kubectl get ns <namespace_name>            # show one namespace
kubectl get pods -n <namespace_name>       # list Pods in that namespace
kubectl describe ns                        # describe all namespaces
kubectl describe ns <namespace_name>       # describe one namespace
kubectl describe pods -n <namespace_name>  # describe Pods in that namespace

kubectl create ns <namespace_name>         # create a namespace
kubectl delete ns <namespace_name>         # delete a namespace
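
To avoid passing -n on every command, the default namespace of the current kubectl context can be switched; a small usage sketch:

kubectl config set-context --current --namespace=<namespace_name>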

apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: daiyi-test-namespace
spec: {}
status: {}
[root@master k8s]# kubectl get ns
NAME                        STATUS   AGE
default                     Active   20h   ## the default namespace
kube-node-lease             Active   20h   ## node heartbeat (health) leases
kube-public                 Active   20h   ## readable by everyone, even unauthenticated users
kube-system                 Active   20h   ## objects created by Kubernetes itself
kubernetes-dashboard        Active   18h
kubernetes-dashboard-head   Active   19h
[root@master k8s]# kubectl get ns default
NAME      STATUS   AGE
default   Active   20h
[root@master k8s]# kubectl get ns kubernetes-dashboard
NAME                   STATUS   AGE
kubernetes-dashboard   Active   18h
[root@master k8s]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-dgllx                1/1     Running   0          20h
coredns-7f89b7bc75-rlq5s                1/1     Running   0          20h
etcd-master                             1/1     Running   0          20h
kube-apiserver-master                   1/1     Running   0          20h
kube-controller-manager-master          1/1     Running   0          20h
kube-flannel-ds-hpwk9                   1/1     Running   0          20h
kube-flannel-ds-ncwv6                   1/1     Running   0          20h
kube-flannel-ds-tmnnt                   1/1     Running   0          20h
kube-proxy-6c2cj                        1/1     Running   0          20h
kube-proxy-8fg9h                        1/1     Running   0          20h
kube-proxy-g65th                        1/1     Running   0          20h
kube-scheduler-master                   1/1     Running   0          20h
kubernetes-dashboard-7b4bdcb8b8-9dvnq   1/1     Running   0          17h
[root@master k8s]# kubectl get pods -n default
NAME                   READY   STATUS    RESTARTS   AGE
web-57fc9cf4d4-2nj77   1/1     Running   0          15h
web-57fc9cf4d4-8n8hf   1/1     Running   0          15h
web-57fc9cf4d4-dfmmb   1/1     Running   0          15h
[root@master k8s]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-vvs9f   1/1     Running   0          18h
kubernetes-dashboard-74d688b6bc-mbqm7        1/1     Running   0          18h
[root@master k8s]# kubectl describe ns default
Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.        ## a ResourceQuota would cap resources for the namespace as a whole
No LimitRange resource.   ## a LimitRange would cap resources per object inside the namespace
[root@master k8s]# kubectl create ns daiyi-test-namespace
namespace/daiyi-test-namespace created
[root@master k8s]# kubectl get ns daiyi-test-namespace
NAME                   STATUS   AGE
daiyi-test-namespace   Active   3s
[root@master k8s]# kubectl delete ns daiyi-test-namespace
namespace "daiyi-test-namespace" deleted
[root@master k8s]# kubectl create ns daiyi-test-namespace --dry-run -o yaml > namesapce_test.yaml
W0227 10:53:17.607938  232816 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@master k8s]# cat namesapce_test.yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: daiyi-test-namespace
spec: {}
status: {}
[root@master k8s]# kubectl apply -f namesapce_test.yaml
namespace/daiyi-test-namespace created
[root@master k8s]# kubectl get ns daiyi-test-namespace
NAME                   STATUS   AGE
daiyi-test-namespace   Active   18s
[root@master k8s]# kubectl delete ns daiyi-test-namespace
namespace "daiyi-test-namespace" deleted
[root@master k8s]# kubectl get ns daiyi-test-namespace
Error from server (NotFound): namespaces "daiyi-test-namespace" not found
[root@master k8s]#

ingress

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yml
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yml

Make the following substitutions in the downloaded manifest. Replace the image

image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0

with the mirror

image: quay-mirror.qiniu.com/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0

and, where needed, replace

apiVersion: extensions/v1beta1

with

apiVersion: apps/v1
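
The image substitution can also be scripted; a sketch using sed on the downloaded manifest (the apiVersion change is better done by hand, per object, since a blind global replace may touch other resource kinds):

sed -i 's#quay.io/kubernetes-ingress-controller#quay-mirror.qiniu.com/kubernetes-ingress-controller#' mandatory.yml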

kubectl apply -f ingress_deploy.yaml

[root@master ingress]# kubectl apply -f ingress_deploy.yaml
configmap/ingress-nginx-controller configured
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
deployment.apps/ingress-nginx-controller configured
namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
serviceaccount/ingress-nginx-admission unchanged
service/ingress-nginx-controller-admission unchanged
service/ingress-nginx-controller unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
job.batch/ingress-nginx-admission-create unchanged
job.batch/ingress-nginx-admission-patch unchanged
[root@master ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS             RESTARTS   AGE
ingress-nginx-admission-create-97bm7        0/1     Completed          0          33m
ingress-nginx-admission-patch-7zhx8         0/1     Completed          0          33m
ingress-nginx-controller-67897c9494-8spfl   0/1     ImagePullBackOff   0          33m
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.10.240    <none>        80:30470/TCP,443:32096/TCP   34m
ingress-nginx-controller-admission   ClusterIP   10.104.22.242   <none>        443/TCP                      34m

end