Kubernetes DNS gives services automatic discovery inside the cluster; how, then, do we make a service usable and reachable from *outside* the cluster?
The two common options are a NodePort-type Service and the Ingress resource.

1. NodePort-type Service

With this approach, an external nginx forwards traffic to the NodePort that Kubernetes opens on every node, so the nginx configuration must list every node in the cluster.
That makes it inflexible (the proxy configuration has to change whenever nodes are added or removed), and it is rarely used to expose services nowadays; see the sketch below.
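
For reference, a minimal sketch of a NodePort Service (all names and port numbers here are hypothetical, purely to illustrate the mechanism): Kubernetes opens the same nodePort on every node, which is exactly why the front-end nginx has to enumerate all of them.

  apiVersion: v1
  kind: Service
  metadata:
    name: demo-nodeport        # hypothetical name
  spec:
    type: NodePort
    selector:
      app: demo                # hypothetical pod label
    ports:
    - port: 80                 # Service port inside the cluster
      targetPort: 8080         # container port
      nodePort: 30080          # opened on every node (default range 30000-32767)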

2. Ingress

Ingress is one of the standard, core resource types of the Kubernetes API. It is essentially a set of rules, keyed on hostname and URL path, for forwarding user requests to a specified Service resource;
it carries request traffic from outside the cluster to the inside, which is what "exposing a service" means.
An Ingress controller is the component that listens on a socket on behalf of Ingress resources and routes traffic according to the rule-matching mechanism they define.
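
As a concrete illustration of such a rule (all names here are hypothetical; the real manifests used in this lab follow in section 2.1), an Ingress that routes requests for demo.od.com/api to the Service demo-api would look roughly like:

  apiVersion: extensions/v1beta1   # the API group used throughout this lab
  kind: Ingress
  metadata:
    name: demo-ingress             # hypothetical
  spec:
    rules:
    - host: demo.od.com            # hypothetical hostname
      http:
        paths:
        - path: /api
          backend:
            serviceName: demo-api  # hypothetical Service
            servicePort: 80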

Commonly used Ingress controller implementations:

  • Ingress-nginx
  • HAProxy
  • Traefik

This lab uses Traefik.

2.1 Create the resource definition files (put them in the directory served as k8s-yaml.od.com; run on the k8s-5-141 server)

  [root@k8s-5-141 /]# cd /data/k8s-yaml
  [root@k8s-5-141 k8s-yaml]# mkdir traefik
  [root@k8s-5-141 k8s-yaml]# ll
  total 0
  drwxr-xr-x 2 root root 69 Apr 1 10:51 coredns
  drwxr-xr-x 2 root root 6 Apr 2 16:34 traefik
  [root@k8s-5-141 k8s-yaml]# cd traefik
  # Create the definition file. Because I did not set up a VIP, the apiserver address
  # is set to --kubernetes.endpoint=https://192.168.5.137:7443 (it could also point
  # at another proxy server).
  [root@k8s-5-141 traefik]# vim ds.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: traefik-ingress
    namespace: kube-system
    labels:
      k8s-app: traefik-ingress
  spec:
    selector:
      matchLabels:
        name: traefik-ingress
        k8s-app: traefik-ingress
    template:
      metadata:
        labels:
          k8s-app: traefik-ingress
          name: traefik-ingress
        # -------- added for Prometheus auto-discovery --------
        annotations:
          prometheus_io_scheme: "traefik"
          prometheus_io_path: "/metrics"
          prometheus_io_port: "8080"
        # -------- end of additions ---------------------------
      spec:
        serviceAccountName: traefik-ingress-controller
        terminationGracePeriodSeconds: 60
        containers:
        - image: harbor.od.com/public/traefik:v1.7.2
          name: traefik-ingress
          ports:
          - name: controller
            containerPort: 80
            hostPort: 81
          - name: admin-web
            containerPort: 8080
          securityContext:
            capabilities:
              drop:
              - ALL
              add:
              - NET_BIND_SERVICE
          args:
          - --api
          - --kubernetes
          - --logLevel=INFO
          - --insecureskipverify=true
          - --kubernetes.endpoint=https://192.168.5.137:7443
          - --accesslog
          - --accesslog.filepath=/var/log/traefik_access.log
          - --traefiklog
          - --traefiklog.filepath=/var/log/traefik.log
          - --metrics.prometheus
  [root@k8s-5-141 traefik]# vim ingress.yaml
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: traefik-web-ui
    namespace: kube-system
    annotations:
      kubernetes.io/ingress.class: traefik
  spec:
    rules:
    - host: traefik.od.com
      http:
        paths:
        - path: /
          backend:
            serviceName: traefik-ingress-service
            servicePort: 8080
  [root@k8s-5-141 traefik]# vim rbac.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: traefik-ingress-controller
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRole
  metadata:
    name: traefik-ingress-controller
  rules:
  - apiGroups:
    - ""
    resources:
    - services
    - endpoints
    - secrets
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - extensions
    resources:
    - ingresses
    verbs:
    - get
    - list
    - watch
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1beta1
  metadata:
    name: traefik-ingress-controller
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: traefik-ingress-controller
  subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
  [root@k8s-5-141 traefik]# vim svc.yaml
  kind: Service
  apiVersion: v1
  metadata:
    name: traefik-ingress-service
    namespace: kube-system
  spec:
    selector:
      k8s-app: traefik-ingress
    ports:
    - protocol: TCP
      port: 80
      name: controller
    - protocol: TCP
      port: 8080
      name: admin-web
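
Before publishing the manifests, a client-side dry run can catch indentation and schema mistakes early. A minimal sketch, assuming a host with kubectl configured against this cluster (newer kubectl versions spell the flag --dry-run=client):

  kubectl apply --dry-run -f ds.yaml -f ingress.yaml -f rbac.yaml -f svc.yaml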

2.2 Pull the traefik image (run on any server that has Docker installed and can reach the private registry)

  [root@k8s-5-138 redis]# docker pull traefik:v1.7.2-alpine
  v1.7.2-alpine: Pulling from library/traefik
  4fe2ade4980c: Pull complete
  8d9593d002f4: Pull complete
  5d09ab10efbd: Pull complete
  37b796c58adc: Pull complete
  Digest: sha256:cf30141936f73599e1a46355592d08c88d74bd291f05104fe11a8bcce447c044
  Status: Downloaded newer image for traefik:v1.7.2-alpine
  docker.io/library/traefik:v1.7.2-alpine
  [root@k8s-5-138 redis]# docker tag traefik:v1.7.2-alpine harbor.od.com/public/traefik:v1.7.2
  [root@k8s-5-138 redis]# docker push !$
  docker push harbor.od.com/public/traefik:v1.7.2
  The push refers to repository [harbor.od.com/public/traefik]
  a02beb48577f: Pushed
  ca22117205f4: Pushed
  3563c211d861: Pushed
  df64d3292fd6: Pushed
  v1.7.2: digest: sha256:6115155b261707b642341b065cd3fac2b546559ba035d0262650b3b3bbdd10ea size: 1157
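
To confirm the retagged image actually reached the registry, one can list the local tag and pull it back (any docker host that is logged in to harbor.od.com will do):

  [root@k8s-5-138 redis]# docker images | grep 'harbor.od.com/public/traefik'
  [root@k8s-5-138 redis]# docker pull harbor.od.com/public/traefik:v1.7.2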

2.3 Configure nginx (on the k8s-5-141 server)

  [root@k8s-5-141 conf.d]# vim od.com.conf
  upstream default_backend_traefik {
      server 192.168.5.138:81 max_fails=3 fail_timeout=10s;
      server 192.168.5.139:81 max_fails=3 fail_timeout=10s;
  }
  server {
      server_name *.od.com;
      location / {
          proxy_pass http://default_backend_traefik;
          proxy_set_header Host $http_host;
          proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
      }
  }
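
nginx only picks up the new vhost after a reload; testing the configuration first avoids breaking the proxy (assuming the nginx binary is on this host's PATH):

  [root@k8s-5-141 conf.d]# nginx -t
  [root@k8s-5-141 conf.d]# nginx -s reload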

2.4 Configure DNS (on the k8s-5-140 server)

  [root@k8s-5-140 ~]# cd /var/named
  # Add "traefik A 192.168.5.141" to od.com.zone
  [root@k8s-5-140 named]# vim od.com.zone
  [root@k8s-5-140 named]# cat od.com.zone
  $ORIGIN od.com.
  $TTL 600      ; 10 minutes
  @             IN SOA dns.od.com. dnsadmin.od.com. (
                2021031605 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
                NS      dns.od.com.
  $TTL 60       ; 1 minute
  dns           A       192.168.5.140
  harbor        A       192.168.5.141
  k8s-yaml      A       192.168.5.141
  traefik       A       192.168.5.141
  # Restart the DNS service
  [root@k8s-5-140 named]# systemctl restart named
  # Verify
  [root@k8s-5-140 named]# ping traefik.od.com
  PING traefik.od.com (192.168.5.141) 56(84) bytes of data.
  64 bytes from 192.168.5.141 (192.168.5.141): icmp_seq=1 ttl=64 time=0.323 ms
  64 bytes from 192.168.5.141 (192.168.5.141): icmp_seq=2 ttl=64 time=0.209 ms
  [root@k8s-5-140 named]# dig -t A traefik.od.com @192.168.5.140 +short
  192.168.5.141

2.5 Deploy the traefik service (on any Kubernetes node)

  [root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/rbac.yaml
  serviceaccount/traefik-ingress-controller created
  clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
  clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
  [root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ds.yaml
  daemonset.extensions/traefik-ingress created
  [root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/svc.yaml
  service/traefik-ingress-service created
  [root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ingress.yaml
  ingress.extensions/traefik-web-ui created
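
To verify the rollout end to end, check that the DaemonSet pods are running and then request the admin UI through the nginx → traefik chain set up in 2.3 and 2.4 (a successful HTTP response from traefik.od.com is the expected result):

  [root@alice002 ~]# kubectl get pods -n kube-system -l k8s-app=traefik-ingress -o wide
  [root@alice002 ~]# curl -I http://traefik.od.com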

2.6 Deploy the dashboard

  # Pull the image
  [root@k8s-5-138 pod_template]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3
  v1.8.3: Pulling from k8scn/kubernetes-dashboard-amd64
  a4026007c47e: Pull complete
  Digest: sha256:ebc993303f8a42c301592639770bd1944d80c88be8036e2d4d0aa116148264ff
  Status: Downloaded newer image for k8scn/kubernetes-dashboard-amd64:v1.8.3
  docker.io/k8scn/kubernetes-dashboard-amd64:v1.8.3
  [root@k8s-5-138 pod_template]# docker tag k8scn/kubernetes-dashboard-amd64:v1.8.3 harbor.od.com/public/dashboard:v1.8.3
  [root@k8s-5-138 pod_template]# docker push !$
  docker push harbor.od.com/public/dashboard:v1.8.3
  The push refers to repository [harbor.od.com/public/dashboard]
  23ddb8cbb75a: Pushed
  v1.8.3: digest: sha256:ebc993303f8a42c301592639770bd1944d80c88be8036e2d4d0aa116148264ff size: 529
  # Create the dashboard definition files (on the k8s-5-141 server)
  [root@k8s-5-141 k8s-yaml]# mkdir dashboard
  [root@k8s-5-141 k8s-yaml]# cd dashboard
  # 1. deployment.yaml
  [root@k8s-5-141 dashboard]# vim deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: kubernetes-dashboard
    namespace: kube-system
    labels:
      k8s-app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
  spec:
    selector:
      matchLabels:
        k8s-app: kubernetes-dashboard
    template:
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        annotations:
          scheduler.alpha.kubernetes.io/critical-pod: ''
      spec:
        priorityClassName: system-cluster-critical
        containers:
        - name: kubernetes-dashboard
          image: harbor.od.com/public/dashboard:v1.8.3   # must match the tag pushed above
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 50m
              memory: 100Mi
          ports:
          - containerPort: 8443
            protocol: TCP
          args:
            # PLATFORM-SPECIFIC ARGS HERE
            - --auto-generate-certificates
          volumeMounts:
          - name: tmp-volume
            mountPath: /tmp
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
        volumes:
        - name: tmp-volume
          emptyDir: {}
        serviceAccountName: kubernetes-dashboard-admin
        tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        imagePullSecrets:
        - name: harbor
  # 2. ingress.yaml
  [root@k8s-5-141 dashboard]# vim ingress.yaml
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: kubernetes-dashboard
    namespace: kube-system
    annotations:
      kubernetes.io/ingress.class: traefik
  spec:
    rules:
    - host: dashboard.od.com
      http:
        paths:
        - backend:
            serviceName: kubernetes-dashboard
            servicePort: 443
  # 3. rbac.yaml
  [root@k8s-5-141 dashboard]# vim rbac.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
      addonmanager.kubernetes.io/mode: Reconcile
    name: kubernetes-dashboard-admin
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: kubernetes-dashboard-admin
    namespace: kube-system
    labels:
      k8s-app: kubernetes-dashboard
      addonmanager.kubernetes.io/mode: Reconcile
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard-admin
    namespace: kube-system
  # 4. svc.yaml
  [root@k8s-5-141 dashboard]# vim svc.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: kubernetes-dashboard
    namespace: kube-system
    labels:
      k8s-app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
  spec:
    selector:
      k8s-app: kubernetes-dashboard
    ports:
    - port: 443
      targetPort: 8443
  # Deploy the dashboard to Kubernetes (switch to any compute node)
  [root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac.yaml
  serviceaccount/kubernetes-dashboard-admin created
  clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created
  [root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/deployment.yaml
  deployment.apps/kubernetes-dashboard created
  [root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/svc.yaml
  service/kubernetes-dashboard created
  [root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/ingress.yaml
  ingress.extensions/kubernetes-dashboard created
  [root@k8s-5-138 dashboard]# kubectl get all -n kube-system |grep dashboard
  pod/kubernetes-dashboard-7c55767659-mpjdw   1/1   Running   0   2m50s
  service/kubernetes-dashboard   ClusterIP   192.168.209.29   <none>   443/TCP   2m19s
  deployment.apps/kubernetes-dashboard   1/1   1   1   2m50s
  replicaset.apps/kubernetes-dashboard-7c55767659   1   1   1   2m50s
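
Once the DNS record from 2.7 below is in place, the dashboard should be reachable through traefik; because traefik runs with --insecureskipverify=true, it can proxy plain-HTTP requests to the dashboard's self-signed HTTPS backend. A quick check:

  [root@k8s-5-138 dashboard]# curl -I http://dashboard.od.com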

2.7 Add the dashboard DNS record

  # Add "dashboard A 192.168.5.141" (and bump the zone serial)
  [root@k8s-5-140 named]# vim /var/named/od.com.zone
  $ORIGIN od.com.
  $TTL 600      ; 10 minutes
  @             IN SOA dns.od.com. dnsadmin.od.com. (
                2021031606 ; serial (incremented for this change)
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
                NS      dns.od.com.
  $TTL 60       ; 1 minute
  dns           A       192.168.5.140
  harbor        A       192.168.5.141
  k8s-yaml      A       192.168.5.141
  traefik       A       192.168.5.141
  dashboard     A       192.168.5.141
  [root@k8s-5-140 named]# systemctl restart named
  [root@k8s-5-140 named]# dig -t A dashboard.od.com @192.168.5.140 +short
  192.168.5.141
  [root@k8s-5-138 dashboard]# dig -t A dashboard.od.com @192.168.0.2 +short
  192.168.5.141

2.8 Sign the certificate
Run on the k8s-5-141 server:

  [root@k8s-5-141 dashboard]# cd /opt/certs/
  [root@k8s-5-141 certs]# mkdir dashboard-cert
  [root@k8s-5-141 certs]# cd /opt/certs/dashboard-cert/
  [root@k8s-5-141 dashboard-cert]# cat > dashboard-csr.json <<EOF
  {
      "CN": "Dashboard",
      "hosts": [],
      "key": {
          "algo": "rsa",
          "size": 2048
      },
      "names": [
          {
              "C": "CN",
              "L": "ShenZhen",
              "ST": "GuangDong",
              "O": "batar",
              "OU": "batar-zhonggu"
          }
      ]
  }
  EOF
  # Generate the private key first; the openssl req below needs it
  # (the cfssl-style CSR json above is kept for reference only; the
  # certificate is actually issued with openssl):
  [root@k8s-5-141 dashboard-cert]# (umask 077; openssl genrsa -out dashboard.od.com.key 2048)
  [root@k8s-5-141 dashboard-cert]# openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr -subj "/CN=dashboard.od.com/C=CN/ST=GuangDong/L=ShenZhen/O=batar/OU=batar-zhonggu"
  [root@k8s-5-141 dashboard-cert]# openssl x509 -req -in dashboard.od.com.csr -CA ../ca.pem -CAkey ../ca-key.pem -CAcreateserial -out dashboard.od.com.crt -days 3650
  Signature ok
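
As a quick sanity check, inspect the subject and validity window of the newly signed certificate:

  [root@k8s-5-141 dashboard-cert]# openssl x509 -in dashboard.od.com.crt -noout -subject -dates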