PS: Lately a lot of people have been asking whether I have a doc for building a highly available cluster with kubeadm. Honestly, I didn't: for my own testing I use a single-master kubeadm setup, and at work we build clusters from binaries. So I spent some time after work setting one up and testing it. I hope it helps! If you find it useful, a follow or a share would be much appreciated.

Node plan

Hostname         IP              Role
k8s-master01     10.1.10.100     master
k8s-master02     10.1.10.101     master
k8s-master03     10.1.10.102     master
k8s-node01       10.1.10.103     worker
k8s-lb           10.1.10.200     VIP (keepalived)

Base environment setup

Environment

Software      Version
OS            CentOS 7.6.1810
Kernel        4.9.220 (after the upgrade below)
kubernetes    1.18.2
docker-ce     19.03.8

Environment initialization

(1) Set the hostname on each node, using k8s-master01 as an example:

  hostnamectl set-hostname k8s-master01

(2) Configure the hosts mapping on every node

  cat >> /etc/hosts <<EOF
  10.1.10.100 k8s-master01
  10.1.10.101 k8s-master02
  10.1.10.102 k8s-master03
  10.1.10.103 k8s-node01
  10.1.10.200 k8s-lb
  EOF

After configuring, you can test it with:

  for host in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-lb;do ping -c 1 $host;done

The ping to k8s-lb fails here because we have not configured the VIP yet.

(3) Disable the firewall

  systemctl stop firewalld
  systemctl disable firewalld

(4) Disable SELinux

  setenforce 0
  sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/sysconfig/selinux
  sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config

(5) Disable the swap partition

  swapoff -a && sysctl -w vm.swappiness=0
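Note that swapoff -a only disables swap for the current boot. To keep it disabled after a reboot you can also comment out the swap entry in /etc/fstab; a minimal sketch:

  # comment out any swap line so it is not activated again after a reboot
  sed -ri 's/.*swap.*/#&/' /etc/fstab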

(6) Time synchronization

  yum install chrony -y
  systemctl enable chronyd
  systemctl start chronyd
  chronyc sources

(7) Configure ulimit

  ulimit -SHn 65535

(8) Configure kernel parameters

  cat >> /etc/sysctl.d/k8s.conf << EOF
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
  vm.swappiness=0
  EOF

Apply the settings:

  sysctl --system    # plain 'sysctl -p' only reads /etc/sysctl.conf, not files under /etc/sysctl.d/

(9) Set up SSH trust between the masters (optional)

  ssh-keygen
  ssh-copy-id 10.1.10.101
  ssh-copy-id 10.1.10.102

Kernel upgrade

The default kernel on CentOS 7.6 is 3.10, which has quite a few bugs; the best known is the cgroup memory leak.

(1) Download the target kernel version. I install it via RPM here, so I download the RPM package directly:

  wget https://cbs.centos.org/kojifiles/packages/kernel/4.9.220/37.el7/x86_64/kernel-4.9.220-37.el7.x86_64.rpm

(2) Upgrade with rpm:

  rpm -ivh kernel-4.9.220-37.el7.x86_64.rpm

(3) Reboot after the upgrade, then check that the kernel was upgraded successfully:

  reboot
  uname -r
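If uname -r still shows 3.10 after the reboot, the new kernel is probably not the default GRUB entry. A sketch of how to pick the 4.9 kernel explicitly, assuming a BIOS system where /etc/grub2.cfg exists:

  # list the boot entries; the first entry has index 0
  awk -F\' '/^menuentry / {print $2}' /etc/grub2.cfg
  # make the newly installed kernel (usually index 0) the default, then reboot
  grub2-set-default 0
  reboot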

Component installation

Install ipvs

(1) Install the packages ipvs needs
Since I plan to use ipvs as the kube-proxy proxy mode, the corresponding packages have to be installed:

  yum install ipvsadm ipset sysstat conntrack libseccomp -y

(2) Load the kernel modules

  cat > /etc/sysconfig/modules/ipvs.modules <<EOF
  #!/bin/bash
  modprobe -- ip_vs
  modprobe -- ip_vs_rr
  modprobe -- ip_vs_wrr
  modprobe -- ip_vs_sh
  modprobe -- nf_conntrack
  modprobe -- ip_tables
  modprobe -- ip_set
  modprobe -- xt_set
  modprobe -- ipt_set
  modprobe -- ipt_rpfilter
  modprobe -- ipt_REJECT
  modprobe -- ipip
  EOF

Note: starting with kernel 4.19, nf_conntrack_ipv4 was renamed to nf_conntrack; on kernels older than 4.19 (such as the 4.9 used here) you may also want to load nf_conntrack_ipv4.

Make the modules load automatically on boot and load them now:

  chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

Install docker-ce

  # Install the prerequisite packages
  yum install -y yum-utils device-mapper-persistent-data lvm2
  # Add the yum repo
  yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Check that the docker-ce package is available:

  # yum list | grep docker-ce
  containerd.io.x86_64 1.2.13-3.1.el7 docker-ce-stable
  docker-ce.x86_64 3:19.03.8-3.el7 docker-ce-stable
  docker-ce-cli.x86_64 1:19.03.8-3.el7 docker-ce-stable
  docker-ce-selinux.noarch 17.03.3.ce-1.el7 docker-ce-stable

Install docker-ce:

  yum install docker-ce-19.03.8-3.el7 -y
  systemctl start docker
  systemctl enable docker

Configure a registry mirror:

  curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
  systemctl restart docker
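Since kubeadm later warns about the cgroupfs cgroup driver (see the init output further down), you may also want to switch Docker to the systemd cgroup driver at this point. A sketch of /etc/docker/daemon.json that keeps the mirror URL used above (adjust both values to your environment):

  cat > /etc/docker/daemon.json <<EOF
  {
    "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  EOF
  systemctl daemon-reload && systemctl restart docker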

Install the Kubernetes components

Add the yum repo:

  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  enabled=1
  gpgcheck=0
  repo_gpgcheck=0
  gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
         http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF
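Before pinning to 1.18.2 you can check which versions the repo actually offers; for example:

  yum list --showduplicates kubeadm --disableexcludes=kubernetes | tail -n 5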

Install the packages:

  yum install -y kubelet-1.18.2-0 kubeadm-1.18.2-0 kubectl-1.18.2-0 --disableexcludes=kubernetes

Set kubelet to start on boot:

  systemctl enable kubelet.service

All of the steps above are performed on every node.

Cluster initialization

Configure the VIP

High availability is provided by HAProxy + Keepalived, which run as daemons on all of the master nodes.

Install the packages:

  yum install keepalived haproxy -y

Configure haproxy

The configuration (/etc/haproxy/haproxy.cfg) is identical on all master nodes:

  #---------------------------------------------------------------------
  # Global settings
  #---------------------------------------------------------------------
  global
      # to have these messages end up in /var/log/haproxy.log you will
      # need to:
      #
      # 1) configure syslog to accept network log events. This is done
      #    by adding the '-r' option to the SYSLOGD_OPTIONS in
      #    /etc/sysconfig/syslog
      #
      # 2) configure local2 events to go to the /var/log/haproxy.log
      #    file. A line like the following can be added to
      #    /etc/sysconfig/syslog
      #
      #    local2.*    /var/log/haproxy.log
      #
      log         127.0.0.1 local2
      chroot      /var/lib/haproxy
      pidfile     /var/run/haproxy.pid
      maxconn     4000
      user        haproxy
      group       haproxy
      daemon
      # turn on stats unix socket
      stats socket /var/lib/haproxy/stats
  #---------------------------------------------------------------------
  # common defaults that all the 'listen' and 'backend' sections will
  # use if not designated in their block
  #---------------------------------------------------------------------
  defaults
      mode                    http
      log                     global
      option                  httplog
      option                  dontlognull
      option                  http-server-close
      option                  redispatch
      retries                 3
      timeout http-request    10s
      timeout queue           1m
      timeout connect         10s
      timeout client          1m
      timeout server          1m
      timeout http-keep-alive 10s
      timeout check           10s
      maxconn                 3000
  #---------------------------------------------------------------------
  # kubernetes apiserver frontend which proxys to the backends
  #---------------------------------------------------------------------
  frontend kubernetes
      mode            tcp
      bind            *:16443
      option          tcplog
      default_backend kubernetes-apiserver
  #---------------------------------------------------------------------
  # round robin balancing between the various backends
  #---------------------------------------------------------------------
  backend kubernetes-apiserver
      mode    tcp
      balance roundrobin
      server  k8s-master01 10.1.10.100:6443 check
      server  k8s-master02 10.1.10.101:6443 check
      server  k8s-master03 10.1.10.102:6443 check
  #---------------------------------------------------------------------
  # collection haproxy statistics message
  #---------------------------------------------------------------------
  listen stats
      bind          *:9999
      stats auth    admin:P@ssW0rd
      stats refresh 5s
      stats realm   HAProxy\ Statistics
      stats uri     /admin?stats
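Before starting haproxy you can syntax-check the configuration, assuming it was written to the standard /etc/haproxy/haproxy.cfg path:

  haproxy -c -f /etc/haproxy/haproxy.cfg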

Configure keepalived

k8s-master01

  ! Configuration File for keepalived
  global_defs {
      notification_email {
          acassen@firewall.loc
          failover@firewall.loc
          sysadmin@firewall.loc
      }
      notification_email_from Alexandre.Cassen@firewall.loc
      smtp_server 192.168.200.1
      smtp_connect_timeout 30
      router_id LVS_DEVEL
      vrrp_skip_check_adv_addr
      vrrp_garp_interval 0
      vrrp_gna_interval 0
  }
  # Define the health-check script
  vrrp_script check_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 2
      weight -5
      fall 3
      rise 2
  }
  vrrp_instance VI_1 {
      state MASTER
      interface ens33
      virtual_router_id 51
      priority 100
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      virtual_ipaddress {
          10.1.10.200
      }
      # Call the health-check script (leave commented out until the control plane is up)
      #track_script {
      #    check_apiserver
      #}
  }

k8s-master02

  ! Configuration File for keepalived
  global_defs {
      notification_email {
          acassen@firewall.loc
          failover@firewall.loc
          sysadmin@firewall.loc
      }
      notification_email_from Alexandre.Cassen@firewall.loc
      smtp_server 192.168.200.1
      smtp_connect_timeout 30
      router_id LVS_DEVEL
      vrrp_skip_check_adv_addr
      vrrp_garp_interval 0
      vrrp_gna_interval 0
  }
  # Define the health-check script
  vrrp_script check_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 2
      weight -5
      fall 3
      rise 2
  }
  vrrp_instance VI_1 {
      state MASTER
      interface ens33
      virtual_router_id 51
      priority 99
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      virtual_ipaddress {
          10.1.10.200
      }
      # Call the health-check script (leave commented out until the control plane is up)
      #track_script {
      #    check_apiserver
      #}
  }

k8s-master03

  ! Configuration File for keepalived
  global_defs {
      notification_email {
          acassen@firewall.loc
          failover@firewall.loc
          sysadmin@firewall.loc
      }
      notification_email_from Alexandre.Cassen@firewall.loc
      smtp_server 192.168.200.1
      smtp_connect_timeout 30
      router_id LVS_DEVEL
      vrrp_skip_check_adv_addr
      vrrp_garp_interval 0
      vrrp_gna_interval 0
  }
  # Define the health-check script
  vrrp_script check_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 2
      weight -5
      fall 3
      rise 2
  }
  vrrp_instance VI_1 {
      state MASTER
      interface ens33
      virtual_router_id 51
      priority 98
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      virtual_ipaddress {
          10.1.10.200
      }
      # Call the health-check script (leave commented out until the control plane is up)
      #track_script {
      #    check_apiserver
      #}
  }

The health check (track_script) is left commented out on all three masters for now; re-enable it once the control plane has been deployed.

Write the health-check script /etc/keepalived/check_apiserver.sh (the path referenced in the keepalived configuration):

  #!/bin/bash
  function check_apiserver(){
      # try up to 5 times to find a running kube-apiserver process
      for ((i=0;i<5;i++))
      do
          apiserver_job_id=$(pgrep kube-apiserver)
          if [[ ! -z ${apiserver_job_id} ]];then
              return
          else
              sleep 2
          fi
      done
      apiserver_job_id=0
  }
  # non-zero -> running, 0 -> stopped
  check_apiserver
  if [[ $apiserver_job_id -eq 0 ]];then
      /usr/bin/systemctl stop keepalived
      exit 1
  else
      exit 0
  fi
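The keepalived configs above reference this script at /etc/keepalived/check_apiserver.sh, so place it at that path on every master and make it executable. A sketch, assuming the SSH trust set up earlier:

  chmod +x /etc/keepalived/check_apiserver.sh
  # copy it to the other masters
  for host in 10.1.10.101 10.1.10.102; do scp /etc/keepalived/check_apiserver.sh $host:/etc/keepalived/; done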

Start haproxy and keepalived:

  systemctl enable --now keepalived
  systemctl enable --now haproxy
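At this point the VIP should be held by the highest-priority master (k8s-master01) and haproxy should be listening on 16443. A quick check, assuming the ens33 interface used in the keepalived configs:

  # the VIP should show up on k8s-master01
  ip addr show ens33 | grep 10.1.10.200
  # haproxy should be listening on the apiserver frontend port
  ss -lntp | grep 16443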

Deploy the masters

(1) On k8s-master01, write the kubeadm.yaml configuration file as follows:

  cat >> kubeadm.yaml <<EOF
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  kubernetesVersion: v1.18.2
  imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
  controlPlaneEndpoint: "k8s-lb:16443"
  networking:
    dnsDomain: cluster.local
    podSubnet: 192.168.0.0/16
    serviceSubnet: 10.96.0.0/12
  ---
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  kind: KubeProxyConfiguration
  featureGates:
    SupportIPVSProxyMode: true
  mode: ipvs
  EOF

Pre-pull the images:

  kubeadm config images pull --config kubeadm.yaml

Run the initialization:

  kubeadm init --config kubeadm.yaml --upload-certs
  W0509 22:37:40.702752 65728 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  [init] Using Kubernetes version: v1.18.2
  [preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Starting the kubelet
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-lb] and IPs [10.96.0.1 10.1.10.100]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.1.10.100 127.0.0.1 ::1]
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.1.10.100 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  W0509 22:37:47.750722 65728 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  W0509 22:37:47.764989 65728 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [apiclient] All control plane components are healthy after 20.024575 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
  [upload-certs] Using certificate key:
  f25e738324e4f027703f24b55d47d28f692b4edc21c2876171ff87877dc8f2ef
  [mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: 3k4vr0.x3y2nc3ksfnei4y1
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
  [addons] Applied essential addon: CoreDNS
  [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
  [addons] Applied essential addon: kube-proxy
  Your Kubernetes control-plane has initialized successfully!
  To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
  You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join k8s-lb:16443 --token 3k4vr0.x3y2nc3ksfnei4y1 \
      --discovery-token-ca-cert-hash sha256:a5f761f332bd45a199d0676875e7f58c323226df6fb9b4f0b977b6f63b252791 \
      --control-plane --certificate-key f25e738324e4f027703f24b55d47d28f692b4edc21c2876171ff87877dc8f2ef
  Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
  As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
  "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
  Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join k8s-lb:16443 --token 3k4vr0.x3y2nc3ksfnei4y1 \
      --discovery-token-ca-cert-hash sha256:a5f761f332bd45a199d0676875e7f58c323226df6fb9b4f0b977b6f63b252791

Configure the kubeconfig environment variable:

  cat >> /root/.bashrc <<EOF
  export KUBECONFIG=/etc/kubernetes/admin.conf
  EOF
  source /root/.bashrc

Check the node status:

  # kubectl get nodes
  NAME STATUS ROLES AGE VERSION
  k8s-master01 NotReady master 3m1s v1.18.2

Install the network plugin (Calico):

  wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml

If any node has multiple network interfaces, you need to specify the internal NIC in the manifest. Edit calico.yaml:

  ......
  spec:
    containers:
    - env:
      - name: DATASTORE_TYPE
        value: kubernetes
      - name: IP_AUTODETECTION_METHOD    # add this environment variable to the calico-node DaemonSet
        value: interface=ens33           # specify the internal NIC
      - name: WAIT_FOR_DATASTORE
        value: "true"
  ......

  kubectl apply -f calico.yaml    # install the Calico network plugin

Once the network plugin is installed, the node status looks like this:

  # kubectl get nodes
  NAME STATUS ROLES AGE VERSION
  k8s-master01 Ready master 10m v1.18.2

You can see the status has changed from NotReady to Ready.

(2) Join master02 to the cluster

Pre-pull the images (copy kubeadm.yaml over from master01 first):

  kubeadm config images pull --config kubeadm.yaml

Join the cluster:

  kubeadm join k8s-lb:16443 --token 3k4vr0.x3y2nc3ksfnei4y1 \
      --discovery-token-ca-cert-hash sha256:a5f761f332bd45a199d0676875e7f58c323226df6fb9b4f0b977b6f63b252791 \
      --control-plane --certificate-key f25e738324e4f027703f24b55d47d28f692b4edc21c2876171ff87877dc8f2ef

The output looks like this:

  ...
  This node has joined the cluster and a new control plane instance was created:
  * Certificate signing request was sent to apiserver and approval was received.
  * The Kubelet was informed of the new secure connection details.
  * Control plane (master) label and taint were applied to the new node.
  * The Kubernetes control plane instances scaled up.
  * A new etcd member was added to the local/stacked etcd cluster.
  To start administering your cluster from this node, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  Run 'kubectl get nodes' to see this node join the cluster.
  ...
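If the join fails because the token or the uploaded certificates have expired (the init output notes the certificate key is deleted after two hours), you can regenerate both on k8s-master01; a sketch:

  # re-upload the control-plane certificates and print a fresh certificate key
  kubeadm init phase upload-certs --upload-certs
  # print a new worker join command; append --control-plane --certificate-key <key> for masters
  kubeadm token create --print-join-command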

Configure the kubeconfig environment variable:

  cat >> /root/.bashrc <<EOF
  export KUBECONFIG=/etc/kubernetes/admin.conf
  EOF
  source /root/.bashrc

The steps on the remaining master (k8s-master03) are exactly the same.

Check the cluster status:

  # kubectl get nodes
  NAME STATUS ROLES AGE VERSION
  k8s-master01 Ready master 41m v1.18.2
  k8s-master02 Ready master 29m v1.18.2
  k8s-master03 Ready master 27m v1.18.2

Check the cluster component status:

  # kubectl get pod -n kube-system -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  calico-kube-controllers-77c5fc8d7f-stl57 1/1 Running 0 26m 192.168.32.130 k8s-master01 <none> <none>
  calico-node-ppsph 1/1 Running 0 26m 10.1.10.100 k8s-master01 <none> <none>
  calico-node-tl6sq 0/1 Init:2/3 0 26m 10.1.10.101 k8s-master02 <none> <none>
  calico-node-w92qh 1/1 Running 0 26m 10.1.10.102 k8s-master03 <none> <none>
  coredns-546565776c-vtlhr 1/1 Running 0 42m 192.168.32.129 k8s-master01 <none> <none>
  coredns-546565776c-wz9bk 1/1 Running 0 42m 192.168.32.131 k8s-master01 <none> <none>
  etcd-k8s-master01 1/1 Running 0 42m 10.1.10.100 k8s-master01 <none> <none>
  etcd-k8s-master02 1/1 Running 0 30m 10.1.10.101 k8s-master02 <none> <none>
  etcd-k8s-master03 1/1 Running 0 28m 10.1.10.102 k8s-master03 <none> <none>
  kube-apiserver-k8s-master01 1/1 Running 0 42m 10.1.10.100 k8s-master01 <none> <none>
  kube-apiserver-k8s-master02 1/1 Running 0 30m 10.1.10.101 k8s-master02 <none> <none>
  kube-apiserver-k8s-master03 1/1 Running 0 28m 10.1.10.102 k8s-master03 <none> <none>
  kube-controller-manager-k8s-master01 1/1 Running 1 42m 10.1.10.100 k8s-master01 <none> <none>
  kube-controller-manager-k8s-master02 1/1 Running 1 30m 10.1.10.101 k8s-master02 <none> <none>
  kube-controller-manager-k8s-master03 1/1 Running 0 28m 10.1.10.102 k8s-master03 <none> <none>
  kube-proxy-6sbpp 1/1 Running 0 28m 10.1.10.102 k8s-master03 <none> <none>
  kube-proxy-dpppr 1/1 Running 0 42m 10.1.10.100 k8s-master01 <none> <none>
  kube-proxy-ln7l7 1/1 Running 0 30m 10.1.10.101 k8s-master02 <none> <none>
  kube-scheduler-k8s-master01 1/1 Running 1 42m 10.1.10.100 k8s-master01 <none> <none>
  kube-scheduler-k8s-master02 1/1 Running 1 30m 10.1.10.101 k8s-master02 <none> <none>
  kube-scheduler-k8s-master03 1/1 Running 0 28m 10.1.10.102 k8s-master03 <none> <none>
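Since kube-proxy was configured for ipvs mode in kubeadm.yaml, you can also spot-check on any node that ipvs virtual servers were actually programmed; for example:

  # the kubernetes service ClusterIP (10.96.0.1:443) should appear as a virtual server
  ipvsadm -Ln | head -n 20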

Check the CSRs:

  # kubectl get csr
  NAME AGE SIGNERNAME REQUESTOR CONDITION
  csr-cfl2w 42m kubernetes.io/kube-apiserver-client-kubelet system:node:k8s-master01 Approved,Issued
  csr-mm7g7 28m kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:3k4vr0 Approved,Issued
  csr-qzn6r 30m kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:3k4vr0 Approved,Issued

Deploy the worker node

The worker node only needs to join the cluster:

  kubeadm join k8s-lb:16443 --token 3k4vr0.x3y2nc3ksfnei4y1 \
      --discovery-token-ca-cert-hash sha256:a5f761f332bd45a199d0676875e7f58c323226df6fb9b4f0b977b6f63b252791

The output log is as follows:

  W0509 23:24:12.159733 10635 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
  [preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [preflight] Reading configuration from the cluster...
  [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Starting the kubelet
  [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
  This node has joined the cluster:
  * Certificate signing request was sent to apiserver and a response was received.
  * The Kubelet was informed of the new secure connection details.
  Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Then check the cluster nodes again:

  # kubectl get nodes
  NAME STATUS ROLES AGE VERSION
  k8s-master01 Ready master 47m v1.18.2
  k8s-master02 Ready master 35m v1.18.2
  k8s-master03 Ready master 32m v1.18.2
  k8s-node01 Ready <none> 55s v1.18.2

Test failover

Take down one master and check whether the cluster is still usable. Here I take down k8s-master01 (simulated below by stopping keepalived on it, so the VIP fails over), then inspect the whole cluster from k8s-master03.

  # simulate the failure by stopping keepalived on k8s-master01
  systemctl stop keepalived
  # then check from another master whether the cluster is still available
  [root@k8s-master03 ~]# kubectl get nodes
  NAME STATUS ROLES AGE VERSION
  k8s-master01 Ready master 64m v1.18.2
  k8s-master02 Ready master 52m v1.18.2
  k8s-master03 Ready master 50m v1.18.2
  k8s-node01 Ready <none> 18m v1.18.2
  [root@k8s-master03 ~]# kubectl get pod -n kube-system
  NAME READY STATUS RESTARTS AGE
  calico-kube-controllers-77c5fc8d7f-stl57 1/1 Running 0 49m
  calico-node-8t5ft 1/1 Running 0 19m
  calico-node-ppsph 1/1 Running 0 49m
  calico-node-tl6sq 1/1 Running 0 49m
  calico-node-w92qh 1/1 Running 0 49m
  coredns-546565776c-vtlhr 1/1 Running 0 65m
  coredns-546565776c-wz9bk 1/1 Running 0 65m
  etcd-k8s-master01 1/1 Running 0 65m
  etcd-k8s-master02 1/1 Running 0 53m
  etcd-k8s-master03 1/1 Running 0 51m
  kube-apiserver-k8s-master01 1/1 Running 0 65m
  kube-apiserver-k8s-master02 1/1 Running 0 53m
  kube-apiserver-k8s-master03 1/1 Running 0 51m
  kube-controller-manager-k8s-master01 1/1 Running 2 65m
  kube-controller-manager-k8s-master02 1/1 Running 1 53m
  kube-controller-manager-k8s-master03 1/1 Running 0 51m
  kube-proxy-6sbpp 1/1 Running 0 51m
  kube-proxy-dpppr 1/1 Running 0 65m
  kube-proxy-ln7l7 1/1 Running 0 53m
  kube-proxy-r5ltk 1/1 Running 0 19m
  kube-scheduler-k8s-master01 1/1 Running 2 65m
  kube-scheduler-k8s-master02 1/1 Running 1 53m
  kube-scheduler-k8s-master03 1/1 Running 0 51m
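To confirm the failover actually happened, you can check which master now holds the VIP; it should normally have moved to k8s-master02 (the next-highest priority, 99). A sketch, assuming the ens33 interface used above:

  # run on k8s-master02/03; the node holding the VIP will print the address
  ip addr show ens33 | grep 10.1.10.200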

That completes the cluster build. You can now re-enable the keepalived health-check script (uncomment track_script and restart keepalived on each master). Install any other components you need on your own.

Install kubectl command auto-completion

  yum install -y bash-completion
  source /usr/share/bash-completion/bash_completion
  source <(kubectl completion bash)
  echo "source <(kubectl completion bash)" >> ~/.bashrc