Cluster Information

1. Node Planning

The nodes used to deploy the k8s cluster fall into two roles:

  • master: the cluster's master nodes, which initialize the cluster; base spec no less than 2C4G
  • slave: the cluster's slave (worker) nodes; there can be several; base spec no less than 2C4G

This example deploys three master nodes plus one slave node (so that adding slave nodes can also be demonstrated). The node plan is as follows:

| Hostname | Node IP | Role | Components |
| --- | --- | --- | --- |
| k8s-master1 | 10.4.7.10 | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel |
| k8s-master2 | 10.4.7.11 | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel |
| k8s-master3 | 10.4.7.12 | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel |
| k8s-slave1 | 10.4.7.13 | slave | kubectl, kubelet, kube-proxy, flannel |

2. Component Versions

| Component | Version | Notes |
| --- | --- | --- |
| CentOS | 7.8.2003 | |
| Kernel | Linux 3.10.0-1062.9.1.el7.x86_64 | |
| etcd | 3.4.13 | deployed as a container, data mounted to a local path by default |
| coredns | 1.7.0 | |
| kubeadm | v1.19.2 | |
| kubectl | v1.19.2 | |
| kubelet | v1.19.2 | |
| kube-proxy | v1.19.2 | |
| flannel | v0.11.0 | |

Pre-installation Preparation

1. Configure hosts Resolution

Nodes: run on all nodes (k8s-master, k8s-slave).

  • Modify the hostname
    The hostname may only contain lowercase letters, digits, "." and "-", and must start and end with a lowercase letter or digit.

```bash
# On 172.29.18.20, k8s-slave-1
hostnamectl set-hostname igress
# On the master1 node, 172.29.18.19
hostnamectl set-hostname k8s-master-1   # set the master1 node's hostname
# On the master2 node, 172.29.18.18
hostnamectl set-hostname k8s-master-2   # set the master2 node's hostname
# On 172.29.18.17, k8s-slave-2
hostnamectl set-hostname k8s-slave-2
# On 172.29.18.16, k8s-slave-3
hostnamectl set-hostname k8s-slave-3
# On 172.29.18.14, k8s-master-3
hostnamectl set-hostname k8s-master-3
# On 172.29.18.13, k8s-slave-4
hostnamectl set-hostname k8s-slave-4
# On 172.29.18.12, nfs1
hostnamectl set-hostname nfs1
# On 172.29.18.11, harbor
hostnamectl set-hostname harbor         # set the harbor node's hostname
```

  • Add hosts entries

```bash
cat >>/etc/hosts<<EOF
172.29.18.20 igress k8s-slave-1 yfzf18-20.host.com
172.29.18.19 k8s-master-1 yfzf18-19.host.com
172.29.18.18 k8s-master-2 yfzf18-18.host.com
172.29.18.17 k8s-slave-2 yfzf18-17.host.com
172.29.18.16 k8s-slave-3 yfzf18-16.host.com
172.29.18.14 k8s-master-3 yfzf18-14.host.com
172.29.18.13 k8s-slave-4 yfzf18-13.host.com
172.29.18.12 nfs1 yfzf18-12.host.com
172.29.18.11 harbor yfzf18-11.host.com harbor.minstone.com
EOF
```

2. Adjust System Configuration

Nodes: all master and slave nodes (k8s-master, k8s-slave).

The steps below use k8s-master as the example; the other nodes are configured the same way (substitute each machine's real IP and hostname).

  • Install iptables and flush the firewall rules

```bash
yum install vim bash-completion wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils ntpdate chrony -y

systemctl stop firewalld
systemctl disable firewalld

yum -y install iptables-services iptables
systemctl enable iptables
systemctl start iptables
service iptables save

cat > /etc/sysconfig/iptables <<EOF
# Generated by iptables-save v1.4.21 on Mon Oct 12 03:51:41 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5:560]
COMMIT
# Completed on Mon Oct 12 03:51:41 2020
EOF

systemctl reload iptables
iptables -nL
```

  • Sync the time. Note: this uses Alibaba's NTP server; adjust it to your environment.

```bash
cat <<EOF > /etc/chrony.conf
server ntp.aliyun.com iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
logchange 0.5
logdir /var/log/chrony
EOF
systemctl start chronyd.service
systemctl enable chronyd.service
# Trigger an immediate manual sync
chronyc -a makestep
```

  • Open ports in the security group

If there are no security-group restrictions between the nodes (machines on the internal network can reach each other freely), this step can be skipped; otherwise, at least the following ports must be reachable:
k8s-master nodes: TCP 6443, 2379, 2380, 60080, 60081; all UDP ports open
k8s-slave nodes: all UDP ports open
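Once kube-apiserver and etcd are running (later sections), reachability can be spot-checked from another node. A minimal sketch, assuming the master IPs from the node plan above and using the telnet client installed in the previous step:

```bash
# Check that a slave node can reach the apiserver and etcd ports on a master.
# A connected prompt means the port is open; "Connection refused" or a timeout
# means it is blocked or nothing is listening yet.
telnet 10.4.7.10 6443
telnet 10.4.7.10 2379
```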

  • Set the iptables FORWARD policy

```bash
iptables -P FORWARD ACCEPT
```

  • Disable swap

```bash
swapoff -a
# Prevent the swap partition from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```

  • Disable SELinux and the firewall

```bash
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld
```

  • Tune kernel parameters

```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 65535
fs.file-max = 655360
EOF
cp -rf /etc/security/limits.conf /etc/security/limits.conf.back
cat > /etc/security/limits.conf << EOF
* soft nofile 655350
* hard nofile 655350
* soft nproc unlimited
* hard nproc unlimited
* soft core unlimited
* hard core unlimited
root soft nofile 655350
root hard nofile 655350
root soft nproc unlimited
root hard nproc unlimited
root soft core unlimited
root hard core unlimited
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
sysctl -p
```

  • Load the IPVS modules

```bash
yum -y install ipvsadm ipset
lsmod | grep ip_vs
cd /root
# Create the module-loading script
cat > /root/ipvs.sh <<'EOF'
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done
EOF
chmod +x ipvs.sh
bash /root/ipvs.sh
lsmod | grep ip_vs
```

Result:

```
ip_vs_wlc              12519  0
ip_vs_sed              12519  0
ip_vs_pe_sip           12697  0
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0
ip_vs_lc               12516  0
ip_vs_lblcr            12922  0
ip_vs_lblc             12819  0
ip_vs_ftp              13079  0
ip_vs_dh               12688  0
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 141092  57 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,xt_ipvs,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26787  5 ip_vs_ftp,nf_nat_ipv4,nf_nat_ipv6,xt_nat,nf_nat_masquerade_ipv4
nf_conntrack          133387  10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
```

Make the modules load automatically at boot:

```bash
chmod +x /etc/rc.d/rc.local
echo '/bin/bash /root/ipvs.sh' >> /etc/rc.local
```

  • Configure the yum repositories

```bash
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache
```

3. Install Docker

Nodes: all nodes.

```bash
## List all available versions
yum list docker-ce --showduplicates | sort -r
## To install an older version: yum install docker-ce-cli-18.09.9-3.el7 docker-ce-18.09.9-3.el7
## Install the latest version from the repository
yum install docker-ce -y
## Configure a registry mirror. For offline installs, do NOT add "exec-opts": ["native.cgroupdriver=systemd"], otherwise it will fail
mkdir -p /etc/docker
mkdir /data/docker -p
vi /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": [
    "10.4.7.10:5000"
  ],
  "registry-mirrors" : [
    "https://8xpk5wnt.mirror.aliyuncs.com"
  ],
  "live-restore": true
}
## Start docker
systemctl enable docker && systemctl start docker
docker info
```

4. Install a Harbor Registry (Optional)

```bash
cd /opt
mkdir src
cd /opt/src
# Upload the harbor-offline-installer-v2.0.3.tgz package
tar xf harbor-offline-installer-v2.0.3.tgz -C /opt/
cd /opt
mv harbor harbor-v2.0.3
ln -s /opt/harbor-v2.0.3 /opt/harbor
cd /opt/harbor
cp -a harbor.yml.tmpl harbor.yml
# Configure the Harbor registry by editing the following lines
vim /opt/harbor/harbor.yml
hostname: harbor.k8s.com
http:
  port: 180
data_volume: /data/harbor
#https:                                  # comment out the https section
#  port: 443
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path
# Change this to a password with higher complexity
harbor_admin_password: Harbor12345
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /data/harbor/logs
mkdir -p /data/harbor/logs
# Install docker-compose (the yum repositories must be configured first)
yum install docker-compose -y
# Docker must be running before this step
bash /opt/harbor/install.sh
# Check that the installation succeeded
docker-compose ps
docker ps -a
# Start Harbor automatically at boot
chmod +x /etc/rc.d/rc.local
echo 'cd /opt/harbor && docker-compose up -d' >> /etc/rc.local
# Next, configure nginx as a proxy in front of the Harbor registry
```

5. Install Nginx

```bash
yum install nginx -y
cd /etc/nginx
cp -a nginx.conf nginx.conf.default
vim /etc/nginx/conf.d/harbor.od.com.conf
# contents:
server {
    listen       80;
    server_name  harbor.k8s.com;
    client_max_body_size 1000m;
    location / {
        proxy_pass http://127.0.0.1:180;
    }
}
vim nginx.conf
# Append the following at the end of the file
stream {
    log_format proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
                     '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"' ;
    access_log /var/log/nginx/nginx-proxy.log proxy;
    upstream kubernetes_lb {
        server 10.4.7.10:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 10.4.7.11:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 10.4.7.12:6443 weight=5 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 30s;
        proxy_timeout 30s;
        proxy_pass kubernetes_lb;
    }
}
nginx -t
# You may need to run the following before nginx will start successfully: semanage port -a -t http_port_t -p tcp 7443
systemctl start nginx
systemctl enable nginx
```

Deploying Kubernetes

1. Install kubeadm, kubelet, and kubectl

Nodes: all master and slave nodes (k8s-master, k8s-slave).

```bash
yum install -y kubelet-1.19.2 kubeadm-1.19.2 kubectl-1.19.2 --disableexcludes=kubernetes
## Check the kubeadm version
kubeadm version
## Enable kubelet at boot
systemctl enable kubelet
```

2. Generate the Initialization Configuration File

Nodes: only on the master node (k8s-master).

```bash
cd
kubeadm config print init-defaults > kubeadm.yaml
cat kubeadm.yaml
```

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.4.7.10          # the apiserver address; set it to this master node's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1                    # change to this node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "10.4.7.11:7443" # The load balancer must be able to reach all masters on the apiserver port and accept inbound traffic on its listen port; make sure its address always matches kubeadm's ControlPlaneEndpoint.
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # switch to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: v1.19.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16             # Pod CIDR; the flannel plugin uses this network
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
```

The documentation for the manifest above is scattered; for a complete reference of the fields in these resource objects, see the godoc at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
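You can also dump the default component configurations locally instead of browsing the godoc. A small sketch, assuming the --component-configs flag available in kubeadm v1.19:

```bash
# Print the default kube-proxy and kubelet configuration schemas alongside the
# InitConfiguration/ClusterConfiguration defaults, for field-by-field reference.
kubeadm config print init-defaults --component-configs KubeProxyConfiguration,KubeletConfiguration
```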

3. Pre-pull the Images

Nodes: only on the master node (k8s-master).

```bash
# List the images that will be used; if everything is fine you will get the list below
$ kubeadm config images list --config kubeadm.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.2
registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.2
registry.aliyuncs.com/google_containers/kube-proxy:v1.19.2
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0
# Pull the images ahead of time
$ kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
```

Important update: if the mirror above becomes unavailable, use the following approach instead:

1. Restore imageRepository in kubeadm.yaml:

```yaml
...
imageRepository: k8s.gcr.io
...
```

Check which images will be used:

```bash
kubeadm config images list --config kubeadm.yaml
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
```

2. Pull the images from mirror repositories on Docker Hub. Note that the repository names above need the processor architecture appended; the VMs we use are usually amd64:

```bash
$ docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.19.2
$ docker pull mirrorgooglecontainers/etcd-amd64:3.4.13-0
...
$ docker tag mirrorgooglecontainers/etcd-amd64:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
```

4. Initialize the Master Node

0. Before initializing, extend the certificate lifetime (optional in a learning environment, but mandatory in production; otherwise the certificates expire after one year and the cluster will fail with certificate errors).

1. Install the Go toolchain

Note. Nodes: only on the master node (k8s-master).

```bash
yum -y install wget rsync
cd /opt
wget https://dl.google.com/go/go1.15.2.linux-amd64.tar.gz
tar zxvf go1.15.2.linux-amd64.tar.gz
mv go /usr/local/
cat >> /etc/profile <<'EOF'
export PATH=$PATH:/usr/local/go/bin
EOF
source /etc/profile
go version
```

2. Rebuild kubeadm with a longer certificate lifetime

Note: the build below targets v1.19.2; the files that control the certificate lifetime differ between versions.

1. Download the source code

```bash
cd /opt
wget https://storage.googleapis.com/kubernetes-release/release/v1.19.2/kubernetes-server-linux-amd64.tar.gz
```

2. Change the certificate validity

```bash
tar xf kubernetes-server-linux-amd64.tar.gz
cd /opt/kubernetes
tar xf kubernetes-src.tar.gz
vim cmd/kubeadm/app/constants/constants.go
```

(Figure 1: the edit in cmd/kubeadm/app/constants/constants.go)

Modify the CA certificate validity:

```bash
vim staging/src/k8s.io/client-go/util/cert/cert.go
```

(Figure 2: the edit in staging/src/k8s.io/client-go/util/cert/cert.go)

3. Build

```bash
cd /opt/kubernetes
make WHAT=cmd/kubeadm
```

(Figure 3: the build output)

4. Replace the original kubeadm binary (note: this step must be done on all nodes)

```bash
mv /usr/bin/kubeadm /usr/bin/kubeadm_v1_19_2
cp _output/bin/kubeadm /usr/bin/kubeadm
chmod +x /usr/bin/kubeadm
ls -l /usr/bin/kubeadm*
```

5. Run the Initialization

Nodes: only on the master node (k8s-master).

```bash
cd
kubeadm init --config kubeadm.yaml
```

If the initialization succeeds, it ends with output like the following:

```
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
  kubeadm join 10.4.7.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2aee035fa8ffbf16869befb1ea24785be1a6fa0b939e6f2f17158b105b222b7b \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.4.7.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2aee035fa8ffbf16869befb1ea24785be1a6fa0b939e6f2f17158b105b222b7b
```

Important: on every master node, follow the instructions above to configure kubectl client authentication:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

⚠️ Note: at this point kubectl get nodes will show the nodes as NotReady, because the network plugin has not been installed yet. If the init fails partway, fix the issue reported in the error message, run kubeadm reset, and then run the init again.
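A minimal recovery sketch for a failed init (run on the affected master; the -f flag skips the confirmation prompt):

```bash
kubectl get nodes        # nodes stay NotReady until the CNI plugin is installed
kubeadm reset -f         # wipe the partially initialized control-plane state
kubeadm init --config kubeadm.yaml
```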

Copy the certificates to k8s-master2:

```bash
ssh "root"@k8s-master2 "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/ca.* "root"@k8s-master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* "root"@k8s-master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* "root"@k8s-master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* "root"@k8s-master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf "root"@k8s-master2:/etc/kubernetes/
```

Copy the certificates to k8s-master3:

```bash
ssh "root"@k8s-master3 "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/ca.* "root"@k8s-master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* "root"@k8s-master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* "root"@k8s-master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* "root"@k8s-master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf "root"@k8s-master3:/etc/kubernetes/
```

6. Join the Remaining Master Nodes to the Cluster

Nodes: the other master nodes (k8s-master).
On each additional master node, run the command below. It is printed by kubeadm init on success; replace it with the command actually printed by your init run.

```bash
kubeadm join 10.4.7.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2aee035fa8ffbf16869befb1ea24785be1a6fa0b939e6f2f17158b105b222b7b \
    --control-plane
```

7. Join the Slave Nodes to the Cluster

Nodes: all slave nodes (k8s-slave).
On each slave node, run the command below. It is printed by kubeadm init on success; replace it with the command actually printed by your init run.

```bash
kubeadm join 10.4.7.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2aee035fa8ffbf16869befb1ea24785be1a6fa0b939e6f2f17158b105b222b7b
```

8. Install the Flannel Plugin

Nodes: only on the master node (k8s-master).

  • Download the flannel yaml manifest

```bash
wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
```

  • Edit the config to set the flannel network backend (around line 128 of the file):

```yaml
$ vi kube-flannel.yml
...
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"
    }
  }
...
```

  • Edit the config to specify the network interface (around line 190 of the file); add one line:

```yaml
$ vi kube-flannel.yml
...
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0        # If the machine has multiple NICs, specify the internal one; by default flannel picks the first NIC
  resources:
    requests:
      cpu: "100m"
...
```

  • (Optional) Change the flannel image address in case the default image cannot be pulled (also around lines 170 and 190):

```yaml
vi kube-flannel.yml
...
containers:
- name: kube-flannel
  image: 192.168.136.10:5000/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=ens33       # If the machine has multiple NICs, specify the internal one; by default flannel picks the first NIC
  resources:
    requests:
      cpu: "100m"
...
```

  • Install the flannel network plugin

```bash
# Pull the image first; this can be slow from inside China
$ docker pull quay.io/coreos/flannel:v0.11.0-amd64
# Install flannel
$ kubectl create -f kube-flannel.yml
```

Verify that the certificate validity was extended:

```bash
cd /etc/kubernetes/pki && ll
kubeadm alpha certs check-expiration
```

```
$ kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 13, 2080 00:56 UTC   59y                                     no
apiserver                  Sep 13, 2080 00:56 UTC   59y             ca                      no
apiserver-etcd-client      Sep 13, 2080 00:56 UTC   59y             etcd-ca                 no
apiserver-kubelet-client   Sep 13, 2080 00:56 UTC   59y             ca                      no
controller-manager.conf    Sep 13, 2080 00:56 UTC   59y                                     no
etcd-healthcheck-client    Sep 13, 2080 00:56 UTC   59y             etcd-ca                 no
etcd-peer                  Sep 13, 2080 00:56 UTC   59y             etcd-ca                 no
etcd-server                Sep 13, 2080 00:56 UTC   59y             etcd-ca                 no
front-proxy-client         Sep 13, 2080 00:56 UTC   59y             front-proxy-ca          no
scheduler.conf             Sep 13, 2080 00:56 UTC   59y                                     no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Sep 13, 2080 00:56 UTC   59y             no
etcd-ca                 Sep 13, 2080 00:56 UTC   59y             no
front-proxy-ca          Sep 13, 2080 00:56 UTC   59y             no
```

9. Make the Master Nodes Schedulable (Optional)

Nodes: k8s-master.

By default, business pods cannot be scheduled onto the master nodes. To let the masters participate in pod scheduling, run the commands below (a sketch for restoring the taint follows them):

```bash
$ kubectl taint node k8s-master1 node-role.kubernetes.io/master:NoSchedule-
$ kubectl taint node k8s-master2 node-role.kubernetes.io/master:NoSchedule-
$ kubectl taint node k8s-master3 node-role.kubernetes.io/master:NoSchedule-
```
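To revert this later and keep business pods off a master again, the default taint can be restored. A minimal sketch using the standard taint syntax:

```bash
# Re-apply the NoSchedule taint to a master node (repeat per node as needed).
kubectl taint node k8s-master1 node-role.kubernetes.io/master=:NoSchedule
```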

10. Verify the Cluster

Nodes: on a master node (k8s-master).

```bash
$ kubectl get nodes    # check that all cluster nodes are Ready
```

Create a test nginx workload:

```bash
$ kubectl run test-nginx --image=nginx:alpine
```

Check that the pod was created successfully, then curl the pod IP to verify it responds:

```bash
$ kubectl get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
test-nginx-5bd8859b98-5nnnw   1/1     Running   0          9s    10.244.1.2   k8s-slave1   <none>           <none>
$ curl 10.244.1.2
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

11. Deploy the Dashboard

  • Deploy the service

```bash
# The method below is recommended
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
$ vi recommended.yaml
# Change the Service to NodePort type, around line 45 of the file
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort    # add type: NodePort to turn this into a NodePort service
......
```

  • Find the access address; in this example it is NodePort 30133

```bash
kubectl create -f recommended.yaml
kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.105.62.124   <none>        8000/TCP        31m
kubernetes-dashboard        NodePort    10.103.74.46    <none>        443:30133/TCP   31m
```

  • Open https://10.4.7.10:30133 in a browser, where 10.4.7.10 is the master node's external IP. Because of its security restrictions, Chrome currently refuses to open the page in this test; Firefox works.

  • Create a ServiceAccount for access

```bash
$ vi admin.conf
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
$ kubectl create -f admin.conf
$ kubectl -n kubernetes-dashboard get secret |grep admin-token
admin-token-fqdpf   kubernetes.io/service-account-token   3   7m17s
# Use this command to get the token, then paste it into the dashboard login page
$ kubectl -n kubernetes-dashboard get secret admin-token-fqdpf -o jsonpath={.data.token}|base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1rb2xHWHMwbWFPMjJaRzhleGRqaExnVi1BLVNRc2txaEhETmVpRzlDeDQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1mcWRwZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjYyNWMxNjJlLTQ1ZG...
```

(Figures 4 and 5: dashboard login and overview screenshots)

12. Configure kube-proxy to Use IPVS (LVS)

Configure kube-proxy on a master node. Because the cluster was installed with kubeadm, it is configured through the kube-proxy ConfigMap:

```bash
[root@master] # kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
```

```yaml
# Edit as follows
...
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"    # change this
```

Restart the kube-proxy pods (run from a master):

```bash
kubectl get pod -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl delete pod -n kube-system
ipvsadm -Ln
```

Verify that IPVS is enabled:

```bash
[root@k8s-master]# kubectl logs kube-proxy-cvzb4 -n kube-system
I0928 03:31:07.469852       1 node.go:136] Successfully retrieved node IP: 192.168.136.10
I0928 03:31:07.469937       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.136.10), assume IPv4 operation
I0928 03:31:07.509688       1 server_others.go:259] Using ipvs Proxier.
E0928 03:31:07.510007       1 proxier.go:381] can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1
W0928 03:31:07.510243       1 proxier.go:434] IPVS scheduler not specified, use rr by default
I0928 03:31:07.510694       1 server.go:650] Version: v1.16.2
I0928 03:31:07.511075       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0928 03:31:07.511640       1 config.go:315] Starting service config controller
I0928 03:31:07.511671       1 shared_informer.go:240] Waiting for caches to sync for service config
I0928 03:31:07.511716       1 config.go:224] Starting endpoint slice config controller
I0928 03:31:07.511722       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0928 03:31:07.611909       1 shared_informer.go:247] Caches are synced for endpoint slice config
I0928 03:31:07.611909       1 shared_informer.go:247] Caches are synced for service config
```

Inside a pod, the service name can now be pinged. In iptables mode, pinging a service produced the error below; after the change above everything works:

```bash
root@xxxxxx-cb4c9cb8c-hpzdl:/opt# ping xxxxxx
PING xxxxxx.xxxxx.svc.cluster.local (172.16.140.78) 56(84) bytes of data.
From 172.16.8.1 (172.16.8.1) icmp_seq=1 Time to live exceeded
From 172.16.8.1 (172.16.8.1) icmp_seq=2 Time to live exceeded
```
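A quick way to confirm that Services really are programmed into IPVS is to look up a known ClusterIP in the IPVS table. A minimal sketch, assuming the default kubernetes Service sits at 10.96.0.1 (the first address of the serviceSubnet configured earlier):

```bash
kubectl get svc kubernetes            # ClusterIP should be 10.96.0.1
ipvsadm -Ln | grep -A 3 10.96.0.1     # the ClusterIP should appear as an IPVS virtual server with the apiserver endpoints behind it
```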

13. Configure Flannel host-gw Mode to Improve Cluster Network Performance

Change flannel's network backend:

```yaml
$ kubectl edit cm kube-flannel-cfg -n kube-system
...
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"
    }
  }
kind: ConfigMap
...
```

Recreate the flannel pods:

```bash
$ kubectl -n kube-system get po |grep flannel
kube-flannel-ds-amd64-5dgb8   1/1   Running   0   15m
kube-flannel-ds-amd64-c2gdc   1/1   Running   0   14m
kube-flannel-ds-amd64-t2jdd   1/1   Running   0   15m
$ kubectl -n kube-system delete po kube-flannel-ds-amd64-5dgb8 kube-flannel-ds-amd64-c2gdc kube-flannel-ds-amd64-t2jdd
# After the new pods start, check the logs for "Backend type: host-gw"
$ kubectl -n kube-system logs -f kube-flannel-ds-amd64-4hjdw
I0704 01:18:11.916374       1 kube.go:126] Waiting 10m0s for node controller to sync
I0704 01:18:11.916579       1 kube.go:309] Starting kube subnet manager
I0704 01:18:12.917339       1 kube.go:133] Node controller sync successful
I0704 01:18:12.917848       1 main.go:247] Installing signal handlers
I0704 01:18:12.918569       1 main.go:386] Found network config - Backend type: host-gw
I0704 01:18:13.017841       1 main.go:317] Wrote subnet file to /run/flannel/subnet.env
```

Check the node routing table:

```bash
$ route -n
Destination     Gateway          Genmask          Flags  Metric  Ref  Use  Iface
0.0.0.0         192.168.136.2    0.0.0.0          UG     100     0    0    ens33
10.244.0.0      0.0.0.0          255.255.255.0    U      0       0    0    cni0
10.244.1.0      192.168.136.11   255.255.255.0    UG     0       0    0    ens33
10.244.2.0      192.168.136.12   255.255.255.0    UG     0       0    0    ens33
172.17.0.0      0.0.0.0          255.255.0.0      U      0       0    0    docker0
192.168.136.0   0.0.0.0          255.255.255.0    U      100     0    0    ens33
```

  • An IP packet sent from pod-a on node k8s-slave1 toward pod-b (10.244.2.19) leaves through pod-a's eth0 according to pod-a's routing table, crosses the veth pair, and reaches the host bridge cni0.
  • At cni0, the host routing table on k8s-slave1 shows that packets destined for 10.244.2.19 must be forwarded via the gateway 192.168.136.12.
  • The packet arrives at the eth0 NIC of node k8s-slave2 (192.168.136.12), whose routing rules forward it to the cni0 bridge.
  • cni0 delivers the IP packet to pod-b, which is attached to cni0.
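A small verification sketch for the host-gw next hops described above (run on any node; the pod CIDRs and node IPs follow the route table shown earlier):

```bash
# Each remote pod subnet should point straight at the owning node's IP, with no flannel.1 device involved.
ip route | grep 10.244
# Cross-check which pod CIDR was assigned to which node.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```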

14. Accessing Kubernetes Services with Ingress

Kubernetes Services, whether ClusterIP or NodePort, only provide layer-4 load balancing. To get layer-7 load balancing for services inside the cluster you need Ingress. There are many Ingress controller implementations, such as nginx, Contour, HAProxy, Traefik, and Istio; comparisons of the common controllers are available online to help with selection.

ingress-nginx is a layer-7 load balancer that centrally manages external requests to the Services in a k8s cluster. It mainly consists of:

  • ingress-nginx-controller: watches the ingress rules users write (Ingress YAML manifests), dynamically rewrites the nginx configuration accordingly, and reloads nginx so the changes take effect (this is automated, implemented with Lua scripts);
  • the Ingress resource object, which abstracts the nginx configuration into an Ingress object:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-example
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 8080
```

Diagram:

(Figure 6: Ingress traffic flow diagram)

How it works

1) The ingress controller talks to the Kubernetes API and dynamically detects changes to the Ingress rules in the cluster.
2) It reads the Ingress rules (which state which domain maps to which Service) and, following its own rules, generates a piece of nginx configuration.
3) It writes that configuration into the nginx-ingress-controller pod. The pod runs an nginx server, and the controller writes the generated configuration into /etc/nginx/nginx.conf.
4) It then reloads nginx so the configuration takes effect. This is how per-domain configuration and dynamic updates are achieved.
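To see the result of steps 2) and 3) for yourself, you can read the generated configuration inside the controller pod. A hypothetical spot check (the pod name is illustrative; the ingress-nginx namespace and /etc/nginx/nginx.conf path correspond to the mandatory.yaml deployment used below):

```bash
kubectl -n ingress-nginx get pods
# Show the server block the controller generated for one Ingress host.
kubectl -n ingress-nginx exec nginx-ingress-controller-xxxxx -- \
  grep -A 5 'server_name test-nginx.k8s.com' /etc/nginx/nginx.conf
```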

Installation

Official documentation

```bash
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
## Or use myblog/deployment/ingress/mandatory.yaml
## Choose where the controller is deployed
$ grep -n5 nodeSelector mandatory.yaml
212-    spec:
213-      hostNetwork: true # add this: run in host network mode
214-      # wait up to five minutes for the drain of connections
215-      terminationGracePeriodSeconds: 300
216-      serviceAccountName: nginx-ingress-serviceaccount
217:      nodeSelector:
218-        ingress: "true" # change this label to decide which machines run the ingress controller
219-      containers:
220-        - name: nginx-ingress-controller
221-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
222-          args:
```

Create the ingress controller:

```bash
# Label k8s-master2. Note: nginx-ingress-controller must NOT run on node k8s-master1, because port 80 there is already used by nginx.
$ kubectl label node k8s-master2 ingress=true
$ kubectl create -f mandatory.yaml
# View node labels
$ kubectl get nodes --show-labels
# Add node labels
kubectl label node k8s-master2 node-role.kubernetes.io/master=
kubectl label node k8s-master2 node-role.kubernetes.io/node=
# Remove a node label
kubectl label node k8s-master2 ingress-
```

Test that it works:

```bash
# On k8s-master1:
cd
cat > test-nginx-svc.yml <<EOF
kind: Service
apiVersion: v1
metadata:
  name: test-nginx
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    run: test-nginx
EOF
cat > test-nginx-ingress.yaml <<EOF
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: test-nginx
  namespace: default
spec:
  rules:
  - host: test-nginx.k8s.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: test-nginx
            port:
              number: 80
EOF
kubectl apply -f test-nginx-svc.yml
kubectl apply -f test-nginx-ingress.yaml
# On your workstation, add an entry to the hosts file (location depends on your OS):
# 10.4.7.11 test-nginx.k8s.com
# Then open test-nginx.k8s.com in a browser.
```
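The same check can be done from the command line without touching the hosts file. A minimal sketch, assuming the controller runs with hostNetwork on k8s-master2 (10.4.7.11), the node labelled ingress=true above:

```bash
# Send the request to the ingress node and supply the virtual host explicitly.
curl -H "Host: test-nginx.k8s.com" http://10.4.7.11/
```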

15. Clean Up the Environment

If the installation ran into other problems, the cluster can be reset with the following commands:

```bash
$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
```

16. Extensions

Configure a Docker registry mirror:

```bash
vim /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2",
  "insecure-registries": [
    "192.168.0.104:180"
  ],
  "registry-mirrors" : [
    "https://8xpk5wnt.mirror.aliyuncs.com"
  ],
  "live-restore": true
}
```
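After editing daemon.json, a quick sanity check confirms the settings took effect. A small sketch (the exact docker info wording can vary slightly between Docker versions):

```bash
systemctl restart docker
docker info | grep -i 'cgroup driver'          # should report: systemd
docker info | grep -A 2 -i 'registry mirrors'  # should list the mirror configured above
```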

Configure Harbor:

```yaml
# [root@harbor ~]# vim harbor/harbor.yml
hostname: 192.168.0.104
http:
  port: 180
harbor_admin_password: wvilHY14
database:
  password: wvilHY14
```

Configure nginx in front of Harbor:

```nginx
# [root@harbor ~]# vim /etc/nginx/nginx.conf
server {
    listen       8880;
    server_name  harbor.k8s.com;

    client_max_body_size 1000m;
    location / {
        proxy_pass http://127.0.0.1:180;
    }
}
```

Configure keepalived + HAProxy:

```
[root@k8s-master-1 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        liqingfei5625@163.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id k8s-master-1
}

vrrp_script check_apiserver {
    script "/root/keepalived/check_apiserver.sh"
    interval 1
    weight -20
    fall 3
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_apiserver
    }
    virtual_ipaddress {
        192.168.0.64/24 brd 192.168.0.64 dev eth0 label eth0:0
    }
    preempt_delay 60
}
```

```
[root@k8s-master-2 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        liqingfei5625@163.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id k8s-master-2
}

vrrp_script check_apiserver {
    script "/root/keepalived/check_apiserver.sh"
    interval 1
    weight -20
    fall 3
    rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_apiserver
    }
    virtual_ipaddress {
        192.168.0.64/24 brd 192.168.0.64 dev eth0 label eth0:0
    }
    preempt_delay 60
}
```

```bash
[root@k8s-master-1 ~]# vim /root/keepalived/check_apiserver.sh
#!/bin/bash
netstat -ntupl | grep 880
if [ $? == 0 ]; then
    exit 0
else
    exit 1
fi
```

```bash
[root@k8s-master-2 ~]# vim /root/keepalived/check_apiserver.sh
#!/bin/bash
netstat -ntupl | grep 880
if [ $? == 0 ]; then
    exit 0
else
    exit 1
fi
```

```
[root@k8s-master-1 ~]# vim /etc/haproxy/haproxy.cfg
listen https-apiserver
    bind 0.0.0.0:880
    mode tcp
    balance roundrobin
    timeout server 900s
    timeout connect 15s

    server apiserver01 192.168.0.76:6443 check port 6443 inter 5000 fall 5
    server apiserver02 192.168.0.231:6443 check port 6443 inter 5000 fall 5
    server apiserver03 192.168.0.55:6443 check port 6443 inter 5000 fall 5
```

```
[root@k8s-master-2 ~]# vim /etc/haproxy/haproxy.cfg
listen https-apiserver
    bind 0.0.0.0:880
    mode tcp
    balance roundrobin
    timeout server 900s
    timeout connect 15s

    server apiserver01 192.168.0.76:6443 check port 6443 inter 5000 fall 5
    server apiserver02 192.168.0.231:6443 check port 6443 inter 5000 fall 5
    server apiserver03 192.168.0.55:6443 check port 6443 inter 5000 fall 5
```
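A minimal verification sketch for the keepalived + HAProxy pair above (the VIP 192.168.0.64 and port 880 come from the configs; the unauthenticated /healthz check relies on the default anonymous health-check access in recent Kubernetes versions):

```bash
# On the current keepalived MASTER, the VIP should be bound as eth0:0.
ip addr show eth0 | grep 192.168.0.64
# Through HAProxy to any healthy apiserver; expected response body: ok
curl -k https://192.168.0.64:880/healthz
```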