
Service discovery is, simply put, the process by which services (applications) locate one another.
Problems service discovery must solve:
1. Services are highly dynamic — Pod IPs in Kubernetes change when containers are rescheduled or migrated.
2. Releases are frequent — versions iterate quickly.
3. Auto-scaling must be supported — e.g. during promotions or traffic peaks.
To cope with changing Pod addresses, we previously deployed Service resources, putting Pods behind the fixed address a Service exposes.
The remaining question: how do we automatically map a Service's name to the cluster IP it exposes, so that services discover each other by name?
In Kubernetes, CoreDNS solves exactly this problem.
Starting with CoreDNS, we deploy services by delivering containers to Kubernetes, using declarative resource manifests.
1. On the ops host HDSS7-200.host.com, configure an nginx virtual host to provide a single HTTP entry point for the cluster's resource manifests.
Configure nginx:
vi /etc/nginx/conf.d/k8s-yaml.jack.com.conf
Contents:
server {
    listen       80;
    server_name  k8s-yaml.jack.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
Run: mkdir -p /data/k8s-yaml/coredns
Run: nginx -t
Run: nginx -s reload
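As a quick sanity check of the file server, you can drop a test file under the web root and fetch it over HTTP. This is an illustrative ops fragment: it assumes the vhost above is live and that the k8s-yaml.jack.com record from step 2 already resolves on this host.

```shell
# Illustrative check of the manifest file server (requires the nginx vhost
# above and the k8s-yaml.jack.com DNS record added in step 2).
echo test > /data/k8s-yaml/index.txt
curl -s http://k8s-yaml.jack.com/index.txt
# Should print: test
```

With autoindex on, browsing http://k8s-yaml.jack.com/ also gives a directory listing of all published manifests.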
2. Add a DNS record, on hdss7-11:
vi /var/named/jack.com.zone
Append one A record at the end (and bump the serial):
$ORIGIN jack.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.jack.com. dnsadmin.jack.com. (
                2019061803 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS      dns.jack.com.
$TTL 60 ; 1 minute
dns             A       10.4.7.11
harbor          A       10.4.7.200
k8s-yaml        A       10.4.7.200
systemctl restart named
Test: dig -t A k8s-yaml.jack.com @10.4.7.11 +short
Output:
10.4.7.200
3. Deploy CoreDNS (manifests prepared on hdss7-200).
CoreDNS manifest template on GitHub:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base
cd /data/k8s-yaml/coredns
docker pull docker.io/coredns/coredns:1.6.1
docker tag c0f6e815079e harbor.jack.com/public/coredns:v1.6.1
docker push harbor.jack.com/public/coredns:v1.6.1
1. rbac.yaml — grants CoreDNS the cluster permissions it needs
vi rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
2. cm.yaml — ConfigMap holding the CoreDNS configuration (Corefile)
vi cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        kubernetes cluster.local 192.168.0.0/16   # Service cluster IP range
        forward . 10.4.7.11                       # upstream DNS server
        cache 30
        loop
        reload
        loadbalance
    }
3. dp.yaml — Deployment (pod controller)
vi dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.jack.com/public/coredns:v1.6.1
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
4. svc.yaml — Service resource (fixed cluster IP for DNS)
vi svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
Then create the resources by pointing kubectl at the manifests over HTTP (on any node, e.g. hdss7-21 or hdss7-22):
kubectl create -f http://k8s-yaml.jack.com/coredns/rbac.yaml
Output:
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
kubectl create -f http://k8s-yaml.jack.com/coredns/cm.yaml
Output:
configmap/coredns created
kubectl create -f http://k8s-yaml.jack.com/coredns/dp.yaml
Output:
deployment.apps/coredns created
kubectl create -f http://k8s-yaml.jack.com/coredns/svc.yaml
Output:
service/coredns created
Check that everything is running:
kubectl get all -n kube-system
Output:
NAME                          READY   STATUS    RESTARTS   AGE
pod/coredns-898c47548-jq9nb   1/1     Running   0          65s

NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
service/coredns   ClusterIP   192.168.0.2   <none>        53/UDP,53/TCP,9153/TCP   65s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   1/1     1            1           70s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-898c47548   1         1         1       66s

Check the CoreDNS cluster IP:
kubectl get svc -o wide -n kube-system
Output:
NAME      TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
coredns   ClusterIP   192.168.0.2   <none>        53/UDP,53/TCP,9153/TCP   65s   k8s-app=coredns
To confirm these values, check the Service cluster IP range configured for the apiserver and the cluster DNS address configured for the kubelet:
cat /opt/kubernetes/server/bin/kube-apiserver.sh   # Service cluster IP range
cat /opt/kubernetes/server/bin/kubelet.sh          # cluster DNS address
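In a binary-installed cluster like this one, those startup scripts should contain flags along the following lines. The excerpts are assumptions about this environment, not copied from it; the flag names are the standard kube-apiserver/kubelet options:

```shell
# Assumed excerpts from the startup scripts (illustrative, not verbatim).
#
# kube-apiserver.sh — the Service CIDR that clusterIP 192.168.0.2 must fall in:
#   --service-cluster-ip-range 192.168.0.0/16
#
# kubelet.sh — every pod gets this nameserver and domain in /etc/resolv.conf:
#   --cluster-dns 192.168.0.2
#   --cluster-domain cluster.local
```

The three values must agree: the Service's clusterIP lies inside the apiserver's range, and the kubelet hands exactly that IP to pods as their resolver.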
Test CoreDNS:
dig -t A www.baidu.com @192.168.0.2 +short
Output:
www.a.shifen.com.
180.101.49.11
180.101.49.12
dig -t A hdss7-21.host.com @192.168.0.2 +short
Output:
10.4.7.21
Both external names (www.baidu.com, answered via the forward plugin) and host records resolve correctly.
--------------------------------------------------------------------
Now test CoreDNS resolution of Service names. First check whether the kube-public namespace already has a Service resource; if not, create one by exposing the existing nginx-dp Deployment:
kubectl get pods -n kube-public
kubectl get svc -n kube-public
kubectl expose deployment nginx-dp --port=80 -n kube-public
Then resolve the Service name. Note that when querying CoreDNS directly with dig, the FQDN form is required:
dig -t A nginx-dp.kube-public.svc.cluster.local. @192.168.0.2 +short
Without adding any DNS record by hand, the cluster IP of the nginx-dp Service is resolved.
Why does this work?
For the details of what CoreDNS is doing, see: "Kubernetes internal DNS resolution: principles, drawbacks, and optimizations".
If you enter a pod, you will find that its /etc/resolv.conf points at the CoreDNS cluster IP and carries the cluster search domains.
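How a short name like nginx-dp becomes a full Service FQDN can be sketched with an example resolv.conf of the kind kubelet injects. The nameserver and domains below are assumed values for a pod in the kube-public namespace of this cluster:

```shell
# A pod's /etc/resolv.conf, as kubelet typically writes it (assumed values).
cat > /tmp/resolv.conf.example <<'EOF'
nameserver 192.168.0.2
search kube-public.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
EOF

# With ndots:5, any name containing fewer than five dots is first tried
# against each search domain in order, so "nginx-dp" expands to:
for d in $(awk '/^search/ {for (i = 2; i <= NF; i++) print $i}' /tmp/resolv.conf.example); do
  echo "nginx-dp.$d"
done
# nginx-dp.kube-public.svc.cluster.local
# nginx-dp.svc.cluster.local
# nginx-dp.cluster.local
```

The first candidate matches a record the CoreDNS kubernetes plugin serves, so the lookup succeeds immediately. This is also why dig needed the trailing-dot FQDN above: dig does not apply the search list by default, while in-pod resolvers do.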
