Kubernetes DNS gives services automatic discovery inside the cluster, but how can a service be used and accessed from *outside* the cluster?
There are two common options: a NodePort-type Service, or an Ingress resource.
1. NodePort Service
With this approach, a NodePort Service opens the same port on every k8s node, and an external nginx forwards requests to those node ports; however many nodes the cluster has, that many entries must be configured in the nginx upstream.
This is inflexible, and it is rarely used to expose services anymore.
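For reference, a minimal NodePort Service sketch (the name demo-svc, the selector, and the port numbers are hypothetical, not part of this lab):
apiVersion: v1
kind: Service
metadata:
  name: demo-svc            # hypothetical name, for illustration only
spec:
  type: NodePort
  selector:
    app: demo               # hypothetical pod label
  ports:
  - port: 80                # Service port inside the cluster
    targetPort: 8080        # container port
    nodePort: 30080         # opened on EVERY node; an external nginx upstream would list node_ip:30080 per node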
2. Ingress
Ingress is one of the standard resource types in the Kubernetes API and a core resource. It is essentially a set of rules, keyed on host name and URL path, that forward user requests to a designated Service resource.
It can forward request traffic from outside the cluster to the inside, thereby 'exposing' the service.
An Ingress controller is the component that listens on a socket on behalf of Ingress resources and then routes and schedules traffic according to the Ingress rule-matching mechanism.
Commonly used Ingress controller implementations:
- Ingress-nginx
- HAProxy
- Traefik
- …
This lab uses Traefik, one of the widely used controllers.
2.1 Create the pod definition files (place them under the directory backing k8s-yaml.od.com; run on the k8s-5-141 server)
[root@k8s-5-141 /]# cd /data/k8s-yaml
[root@k8s-5-141 k8s-yaml]# mkdir traefik
[root@k8s-5-141 k8s-yaml]# ll
total 0
drwxr-xr-x 2 root root 69 Apr 1 10:51 coredns
drwxr-xr-x 2 root root 6 Apr 2 16:34 traefik
[root@k8s-5-141 k8s-yaml]# cd traefik
# Create the definition file. Since I have not set up a VIP, the apiserver address is set to - --kubernetes.endpoint=https://192.168.5.137:7443 (it could also point at a separate proxy server)
[root@k8s-5-141 traefik]# vim ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress
spec:
  selector:
    matchLabels:
      name: traefik-ingress
      k8s-app: traefik-ingress
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress
        name: traefik-ingress
      #-------- added for Prometheus auto-discovery --------
      annotations:
        prometheus_io_scheme: "traefik"
        prometheus_io_path: "/metrics"
        prometheus_io_port: "8080"
      #-------- end of addition ----------------------------
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: harbor.od.com/public/traefik:v1.7.2
        name: traefik-ingress
        ports:
        - name: controller
          containerPort: 80
          hostPort: 81
        - name: admin-web
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --insecureskipverify=true
        - --kubernetes.endpoint=https://192.168.5.137:7443
        - --accesslog
        - --accesslog.filepath=/var/log/traefik_access.log
        - --traefiklog
        - --traefiklog.filepath=/var/log/traefik.log
        - --metrics.prometheus
[root@k8s-5-141 traefik]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: 8080
[root@k8s-5-141 traefik]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
[root@k8s-5-141 traefik]# vim svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress
  ports:
  - protocol: TCP
    port: 80
    name: controller
  - protocol: TCP
    port: 8080
    name: admin-web
2.2 Pull the traefik image (run on any server that has docker installed and can reach the private image registry)
[root@k8s-5-138 redis]# docker pull traefik:v1.7.2-alpine
v1.7.2-alpine: Pulling from library/traefik
4fe2ade4980c: Pull complete
8d9593d002f4: Pull complete
5d09ab10efbd: Pull complete
37b796c58adc: Pull complete
Digest: sha256:cf30141936f73599e1a46355592d08c88d74bd291f05104fe11a8bcce447c044
Status: Downloaded newer image for traefik:v1.7.2-alpine
docker.io/library/traefik:v1.7.2-alpine
[root@k8s-5-138 redis]# docker tag traefik:v1.7.2-alpine harbor.od.com/public/traefik:v1.7.2
[root@k8s-5-138 redis]# docker push !$
docker push harbor.od.com/public/traefik:v1.7.2
The push refers to repository [harbor.od.com/public/traefik]
a02beb48577f: Pushed
ca22117205f4: Pushed
3563c211d861: Pushed
df64d3292fd6: Pushed
v1.7.2: digest: sha256:6115155b261707b642341b065cd3fac2b546559ba035d0262650b3b3bbdd10ea size: 1157
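As an optional sanity check (not part of the original transcript), confirm the retagged image is present locally before deploying:
# list the image under its new name; docker images accepts a repository filter
docker images harbor.od.com/public/traefik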
2.3 Configure nginx (run on the k8s-5-141 server)
[root@k8s-5-141 conf.d]# vim od.com.conf
upstream default_backend_traefik {
    server 192.168.5.138:81 max_fails=3 fail_timeout=10s;
    server 192.168.5.139:81 max_fails=3 fail_timeout=10s;
}
server {
    server_name *.od.com;
    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
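The new vhost only takes effect after nginx reloads its configuration; a quick check-and-reload (standard nginx commands, not shown in the original transcript; assumes nginx runs under systemd, otherwise use nginx -s reload):
nginx -t                  # validate the configuration syntax
systemctl reload nginx    # apply the new vhost without dropping connections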
2.4 Configure DNS (run on the k8s-5-140 server)
[root@k8s-5-140 ~]# cd /var/named
# Add the record 'traefik A 192.168.5.141' to od.com.zone
[root@k8s-5-140 named]# vim od.com.zone
[root@k8s-5-140 named]# cat od.com.zone
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@        IN SOA dns.od.com. dnsadmin.od.com. (
         2021031605 ; serial
         10800      ; refresh (3 hours)
         900        ; retry (15 minutes)
         604800     ; expire (1 week)
         86400      ; minimum (1 day)
         )
         NS dns.od.com.
$TTL 60 ; 1 minute
dns      A  192.168.5.140
harbor   A  192.168.5.141
k8s-yaml A  192.168.5.141
traefik  A  192.168.5.141
# Restart the DNS service
[root@k8s-5-140 named]# systemctl restart named
# Verify
[root@k8s-5-140 named]# ping traefik.od.com
PING traefik.od.com (192.168.5.141) 56(84) bytes of data.
64 bytes from 192.168.5.141 (192.168.5.141): icmp_seq=1 ttl=64 time=0.323 ms
64 bytes from 192.168.5.141 (192.168.5.141): icmp_seq=2 ttl=64 time=0.209 ms
[root@k8s-5-140 named]# dig -t A traefik.od.com @192.168.5.140 +short
192.168.5.141
2.5 Deploy the traefik service (run on any k8s node)
[root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/rbac.yaml
serviceaccount/traefik-ingress-controller created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
[root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ds.yaml
daemonset.extensions/traefik-ingress created
[root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/svc.yaml
service/traefik-ingress-service created
[root@alice002 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ingress.yaml
ingress.extensions/traefik-web-ui created
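To confirm the controller is up and routing (typical checks, output omitted; the curl assumes the DaemonSet and Ingress above are already in effect and hits the traefik admin UI through a node's hostPort 81):
kubectl get pods -n kube-system -o wide | grep traefik
curl -I -H 'Host: traefik.od.com' http://192.168.5.138:81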
2.6 Deploy the dashboard
# Download the image
[root@k8s-5-138 pod_template]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3
v1.8.3: Pulling from k8scn/kubernetes-dashboard-amd64
a4026007c47e: Pull complete
Digest: sha256:ebc993303f8a42c301592639770bd1944d80c88be8036e2d4d0aa116148264ff
Status: Downloaded newer image for k8scn/kubernetes-dashboard-amd64:v1.8.3
docker.io/k8scn/kubernetes-dashboard-amd64:v1.8.3
[root@k8s-5-138 pod_template]# docker tag k8scn/kubernetes-dashboard-amd64:v1.8.3 harbor.od.com/public/dashboard:v1.8.3
[root@k8s-5-138 pod_template]# docker push !$
docker push harbor.od.com/public/dashboard:v1.8.3
The push refers to repository [harbor.od.com/public/dashboard]
23ddb8cbb75a: Pushed
v1.8.3: digest: sha256:ebc993303f8a42c301592639770bd1944d80c88be8036e2d4d0aa116148264ff size: 529
# Create the dashboard definition files (run on the k8s-5-141 server)
[root@k8s-5-141 k8s-yaml]# mkdir dashboard
[root@k8s-5-141 k8s-yaml]# cd dashboard
# 1. deployment.yaml
[root@k8s-5-141 dashboard]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: harbor.od.com/public/dashboard:v1.8.3   # must match the tag pushed in 2.6
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      imagePullSecrets:
      - name: harbor
# 2. ingress.yaml
[root@k8s-5-141 dashboard]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.od.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
# 3. rbac.yaml
[root@k8s-5-141 dashboard]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
# 4. svc.yaml
[root@k8s-5-141 dashboard]# vim svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
# Deploy the dashboard to k8s (switch to any k8s worker node)
[root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac.yaml
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created
[root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/deployment.yaml
deployment.apps/kubernetes-dashboard created
[root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/svc.yaml
service/kubernetes-dashboard created
[root@k8s-5-138 dashboard]# kubectl apply -f http://k8s-yaml.od.com/dashboard/ingress.yaml
ingress.extensions/kubernetes-dashboard created
[root@k8s-5-138 dashboard]# kubectl get all -n kube-system |grep dashboard
pod/kubernetes-dashboard-7c55767659-mpjdw 1/1 Running 0 2m50s
service/kubernetes-dashboard ClusterIP 192.168.209.29 <none> 443/TCP 2m19s
deployment.apps/kubernetes-dashboard 1/1 1 1 2m50s
replicaset.apps/kubernetes-dashboard-7c55767659 1 1 1 2m50s
2.7 Configure DNS
# Add 'dashboard A 192.168.5.141' and bump the zone serial
[root@k8s-5-140 named]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@        IN SOA dns.od.com. dnsadmin.od.com. (
         2021031606 ; serial
         10800      ; refresh (3 hours)
         900        ; retry (15 minutes)
         604800     ; expire (1 week)
         86400      ; minimum (1 day)
         )
         NS dns.od.com.
$TTL 60 ; 1 minute
dns       A  192.168.5.140
harbor    A  192.168.5.141
k8s-yaml  A  192.168.5.141
traefik   A  192.168.5.141
dashboard A  192.168.5.141
[root@k8s-5-140 named]# systemctl restart named
[root@k8s-5-140 named]# dig -t A dashboard.od.com @192.168.5.140 +short
192.168.5.141
[root@k8s-5-138 dashboard]# dig -t A dashboard.od.com @192.168.0.2 +short
192.168.5.141
2.8 Sign the certificate
Run on the k8s-5-141 server
[root@k8s-5-141 dashboard]# cd /opt/certs/
[root@k8s-5-141 certs]# mkdir dashboard-cert
[root@k8s-5-141 certs]# cd /opt/certs/dashboard-cert/
[root@k8s-5-141 dashboard-cert]# cat > dashboard-csr.json <<EOF
{
  "CN": "Dashboard",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShenZhen",
      "ST": "GuangDong",
      "O": "batar",
      "OU": "batar-zhonggu"
    }
  ]
}
EOF
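The JSON above is a cfssl-style CSR. This lab signs with openssl below, but for reference the equivalent cfssl invocation would look like the following (assuming a ca-config.json with a server profile exists in /opt/certs, as in the earlier certificate sections; on some setups the second tool is installed as cfssl-json):
cfssl gencert -ca=../ca.pem -ca-key=../ca-key.pem -config=../ca-config.json -profile=server dashboard-csr.json | cfssljson -bare dashboard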
# Generate the private key first (the commands below expect dashboard.od.com.key to exist):
[root@k8s-5-141 dashboard-cert]# (umask 077; openssl genrsa -out dashboard.od.com.key 2048)
[root@k8s-5-141 dashboard-cert]# openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr -subj "/CN=dashboard.od.com/C=CN/ST=GuangDong/L=ShenZhen/O=batar/OU=batar-zhonggu"
[root@k8s-5-141 dashboard-cert]# openssl x509 -req -in dashboard.od.com.csr -CA ../ca.pem -CAkey ../ca-key.pem -CAcreateserial -out dashboard.od.com.crt -days 3650
Signature ok
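To inspect the signed certificate (a standard openssl check, not in the original transcript):
openssl x509 -in dashboard.od.com.crt -noout -subject -dates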