Kubernetes DNS gives services automatic discovery inside the cluster, but how do we make a service usable and accessible from *outside* the cluster?
Two common options are a NodePort-type Service and the Ingress resource.
1. NodePort-type Service
With this approach, nginx forwards traffic to every Kubernetes node: for each node in the cluster, a corresponding entry must be added to the nginx configuration.
This is inflexible, and this method is rarely used to expose services any more.
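As a sketch, a minimal NodePort Service manifest looks like the following (the name `nginx-dp`, the selector label, and the port numbers are illustrative assumptions, not part of this lab):

```shell
# Hypothetical NodePort Service; all names and ports here are illustrative.
cat > /tmp/nodeport-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-dp
spec:
  type: NodePort
  selector:
    app: nginx-dp
  ports:
  - port: 80          # ClusterIP port inside the cluster
    targetPort: 80    # container port
    nodePort: 30080   # opened on every node (default range 30000-32767)
EOF
```

With such a Service, every node answers on port 30080, which is exactly why a fronting nginx would need one upstream entry per node.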
2. Ingress
Ingress is one of the standard resource types of the Kubernetes API, and a core resource. In essence, it is a set of rules, keyed on hostname and URL path, that forward user requests to a specified Service resource.
It forwards request traffic from outside the cluster to the inside, thereby "exposing" the service.
An Ingress controller is the component that listens on a socket on behalf of Ingress resources and then routes traffic according to the rule-matching mechanism those resources define.
Commonly used Ingress controller implementations:
- Ingress-nginx
- HAProxy
- Traefik
- …
This lab uses the widely used Traefik.
2.1 Create the Pod definition files (place them in the directory served by k8s-yaml.od.com; run on k8s-5-141)
[root@k8s-5-141 /]# cd /data/k8s-yaml
[root@k8s-5-141 k8s-yaml]# mkdir traefik
[root@k8s-5-141 k8s-yaml]# ll
total 0
drwxr-xr-x 2 root root 69 Apr  1 10:51 coredns
drwxr-xr-x 2 root root  6 Apr  2 16:34 traefik
[root@k8s-5-141 k8s-yaml]# cd traefik
# Create the definition file. Since no VIP is set up in this environment, the proxy
# address is set to --kubernetes.endpoint=https://192.168.5.137:7443
# (it could also point to another proxy server).
[root@k8s-5-141 k8s-yaml]# vim ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress
spec:
  selector:
    matchLabels:
      name: traefik-ingress
      k8s-app: traefik-ingress
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress
        name: traefik-ingress
      #-------- added for Prometheus auto-discovery --------
      annotations:
        prometheus_io_scheme: "traefik"
        prometheus_io_path: "/metrics"
        prometheus_io_port: "8080"
      #-------- end of additions ---------------------------
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: harbor.od.com/public/traefik:v1.7.2
        name: traefik-ingress
        ports:
        - name: controller
          containerPort: 80
          hostPort: 81
        - name: admin-web
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --insecureskipverify=true
        - --kubernetes.endpoint=https://192.168.5.137:7443
        - --accesslog
        - --accesslog.filepath=/var/log/traefik_access.log
        - --traefiklog
        - --traefiklog.filepath=/var/log/traefik.log
        - --metrics.prometheus
[root@k8s-5-141 k8s-yaml]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: 8080
[root@k8s-5-141 k8s-yaml]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
[root@k8s-5-141 k8s-yaml]# vim svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress
  ports:
  - protocol: TCP
    port: 80
    name: controller
  - protocol: TCP
    port: 8080
    name: admin-web
2.2 Pull the traefik image (run on any server that has Docker installed and can reach the private image registry)
[root@k8s-5-138 redis]# docker pull traefik:v1.7.2-alpine
v1.7.2-alpine: Pulling from library/traefik
4fe2ade4980c: Pull complete
8d9593d002f4: Pull complete
5d09ab10efbd: Pull complete
37b796c58adc: Pull complete
Digest: sha256:cf30141936f73599e1a46355592d08c88d74bd291f05104fe11a8bcce447c044
Status: Downloaded newer image for traefik:v1.7.2-alpine
docker.io/library/traefik:v1.7.2-alpine
[root@k8s-5-138 redis]# docker tag traefik:v1.7.2-alpine harbor.od.com/public/traefik:v1.7.2
[root@k8s-5-138 redis]# docker push !$
docker push harbor.od.com/public/traefik:v1.7.2
The push refers to repository [harbor.od.com/public/traefik]
a02beb48577f: Pushed
ca22117205f4: Pushed
3563c211d861: Pushed
df64d3292fd6: Pushed
v1.7.2: digest: sha256:6115155b261707b642341b065cd3fac2b546559ba035d0262650b3b3bbdd10ea size: 1157
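The pull/tag/push sequence above follows one fixed naming pattern: take the upstream image, re-tag it under the private registry's project, and push. A small helper function makes the pattern explicit (purely illustrative, not part of the lab's tooling; it keeps the upstream image name, whereas the dashboard step later shortens the name by hand):

```shell
# Build the private-registry target reference for an upstream image.
# Illustrative helper; does not handle registries with a port in the name.
retag_target() {
  local src="$1" registry="$2" project="$3" newtag="$4"
  local name="${src%%:*}"   # drop the ":tag" suffix
  name="${name##*/}"        # drop any "repo/" prefix, keep the bare image name
  echo "${registry}/${project}/${name}:${newtag}"
}

retag_target traefik:v1.7.2-alpine harbor.od.com public v1.7.2
# → harbor.od.com/public/traefik:v1.7.2
```

`docker tag` and `docker push` can then both reuse the generated target string.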
2.3 Configure nginx (run on k8s-5-141)
[root@k8s-5-141 conf.d]# vim od.com.conf
upstream default_backend_traefik {
    server 192.168.5.138:81 max_fails=3 fail_timeout=10s;
    server 192.168.5.139:81 max_fails=3 fail_timeout=10s;
}
server {
    server_name *.od.com;
    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
2.4 Configure DNS (run on k8s-5-140)
[root@k8s-5-140 ~]# cd /var/named
# Add "traefik A 192.168.5.141" to od.com.zone
[root@k8s-5-140 named]# vim od.com.zone
[root@k8s-5-140 named]# cat od.com.zone
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.od.com. dnsadmin.od.com. (
                                2021031605 ; serial
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                NS      dns.od.com.
$TTL 60 ; 1 minute
dns             A       192.168.5.140
harbor          A       192.168.5.141
k8s-yaml        A       192.168.5.141
traefik         A       192.168.5.141
# Restart the DNS service
[root@k8s-5-140 named]# systemctl restart named
# Verify
[root@k8s-5-140 named]# ping traefik.od.com
PING traefik.od.com (192.168.5.141) 56(84) bytes of data.
64 bytes from 192.168.5.141 (192.168.5.141): icmp_seq=1 ttl=64 time=0.323 ms
64 bytes from 192.168.5.141 (192.168.5.141): icmp_seq=2 ttl=64 time=0.209 ms
[root@k8s-5-140 named]# dig -t A traefik.od.com @192.168.5.140 +short
192.168.5.141
2.5 Deploy the traefik service (run on any Kubernetes node)
[root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/rbac.yaml
serviceaccount/traefik-ingress-controller created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
[root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ds.yaml
daemonset.extensions/traefik-ingress created
[root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/svc.yaml
service/traefik-ingress-service created
[root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ingress.yaml
ingress.extensions/traefik-web-ui created
2.6 Deploy the dashboard
# Pull the image
[root@k8s-5-138 pod_template]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3
v1.8.3: Pulling from k8scn/kubernetes-dashboard-amd64
a4026007c47e: Pull complete
Digest: sha256:ebc993303f8a42c301592639770bd1944d80c88be8036e2d4d0aa116148264ff
Status: Downloaded newer image for k8scn/kubernetes-dashboard-amd64:v1.8.3
docker.io/k8scn/kubernetes-dashboard-amd64:v1.8.3
[root@k8s-5-138 pod_template]# docker tag k8scn/kubernetes-dashboard-amd64:v1.8.3 harbor.od.com/public/dashboard:v1.8.3
[root@k8s-5-138 pod_template]# docker push !$
docker push harbor.od.com/public/dashboard:v1.8.3
The push refers to repository [harbor.od.com/public/dashboard]
23ddb8cbb75a: Pushed
v1.8.3: digest: sha256:ebc993303f8a42c301592639770bd1944d80c88be8036e2d4d0aa116148264ff size: 529
# Create the dashboard definition files (run on k8s-5-141)
[root@k8s-5-141 k8s-yaml]# mkdir dashboard
[root@k8s-5-141 k8s-yaml]# cd dashboard
# 1. deployment.yaml
[root@k8s-5-141 dashboard]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: harbor.od.com/public/dashboard:v1.8.3   # must match the tag pushed to Harbor above
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      imagePullSecrets:
      - name: harbor
# 2. ingress.yaml
[root@k8s-5-141 dashboard]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.od.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
# 3. rbac.yaml
[root@k8s-5-141 dashboard]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
# 4. svc.yaml
[root@k8s-5-141 dashboard]# vim svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
# Deploy the dashboard to Kubernetes (switch to any Kubernetes worker node)
[root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac.yaml
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created
[root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/deployment.yaml
deployment.apps/kubernetes-dashboard created
[root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/svc.yaml
service/kubernetes-dashboard created
[root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/ingress.yaml
ingress.extensions/kubernetes-dashboard created
[root@k8s-5-138 dashboard]# kubectl get all -n kube-system |grep dashboard
pod/kubernetes-dashboard-7c55767659-mpjdw             1/1   Running          0         2m50s
service/kubernetes-dashboard              ClusterIP   192.168.209.29   <none>   443/TCP   2m19s
deployment.apps/kubernetes-dashboard                  1/1   1                1         2m50s
replicaset.apps/kubernetes-dashboard-7c55767659       1     1                1         2m50s
2.7 Configure DNS
# Add "dashboard A 192.168.5.141"
[root@k8s-5-140 named]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.od.com. dnsadmin.od.com. (
                                2021031605 ; serial
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                NS      dns.od.com.
$TTL 60 ; 1 minute
dns             A       192.168.5.140
harbor          A       192.168.5.141
k8s-yaml        A       192.168.5.141
traefik         A       192.168.5.141
dashboard       A       192.168.5.141
[root@k8s-5-140 named]# systemctl restart named
[root@k8s-5-140 named]# dig -t A dashboard.od.com @192.168.5.140 +short
192.168.5.141
[root@k8s-5-138 dashboard]# dig -t A dashboard.od.com @192.168.0.2 +short
192.168.5.141
2.8 Issue the certificate
Run on k8s-5-141.
[root@k8s-5-141 dashboard]# cd /opt/certs/
[root@k8s-5-141 certs]# mkdir dashboard-cert
[root@k8s-5-141 certs]# cd /opt/certs/dashboard-cert/
[root@k8s-5-141 dashboard-cert]# cat > dashboard-csr.json <<EOF
{
    "CN": "Dashboard",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShenZhen",
            "ST": "GuangDong",
            "O": "batar",
            "OU": "batar-zhonggu"
        }
    ]
}
EOF
# Generate the private key first, then the CSR, then sign it with the cluster CA
[root@alice001 certs]# openssl genrsa -out dashboard.od.com.key 2048
[root@alice001 certs]# openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr -subj "/CN=dashboard.od.com/C=CN/ST=GuangDong/L=ShenZhen/O=batar/OU=batar-zhonggu"
[root@alice001 certs]# openssl x509 -req -in dashboard.od.com.csr -CA ../ca.pem -CAkey ../ca-key.pem -CAcreateserial -out dashboard.od.com.crt -days 3650
Signature ok
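The same signing flow can be rehearsed end to end on any machine, without touching the cluster's CA. The sketch below uses a throwaway CA in a temp directory (the `/CN=throwaway-ca` subject is an assumption standing in for the lab's /opt/certs/ca.pem):

```shell
# Self-contained rehearsal of the signing flow above in a temp directory.
dir=$(mktemp -d)
cd "$dir"

# throwaway CA standing in for the lab's existing cluster CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -days 3650 -subj "/CN=throwaway-ca"

# private key and CSR for dashboard.od.com, same -subj as the lab
openssl genrsa -out dashboard.od.com.key 2048
openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr \
  -subj "/CN=dashboard.od.com/C=CN/ST=GuangDong/L=ShenZhen/O=batar/OU=batar-zhonggu"

# sign the CSR with the CA, as in the final step above
openssl x509 -req -in dashboard.od.com.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out dashboard.od.com.crt -days 3650

# confirm the issued certificate chains back to the CA
openssl verify -CAfile ca.pem dashboard.od.com.crt
```

The final `openssl verify` reports `OK` when the issued certificate validates against the CA, which is the same check the browser will later perform against the real cluster CA.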
