In the previous article we used kubeadm to set up an experimental Kubernetes cluster in virtual machines. A real production environment, however, has far stricter requirements, so this time we will build the cluster on real physical machines (I used 8 Alibaba Cloud servers).

First, the architecture diagrams:
[Figure 1 and Figure 2: architecture of the highly available Kubernetes cluster]

1. Deployment Preparation

  • Enable VT-x in the BIOS (pay attention to this setting if you are running virtual machines)
  • A working way around the GFW (a proxy or VPN)

Provision 8 CentOS machines on Alibaba Cloud: 3 masters, 4 nodes, and 1 LB. (The IPs below were made up while writing this article; configure them according to your own machines.) They are:

  • 192.168.1.140 k8s-master-lb (VIP)
  • 192.168.1.128 k8s-master1 (4 cores, 8 GB)
  • 192.168.1.129 k8s-master2 (4 cores, 8 GB)
  • 192.168.1.130 k8s-master3 (4 cores, 8 GB)
  • 192.168.1.131 k8s-node1 (4 cores, 4 GB)
  • 192.168.1.132 k8s-node2 (4 cores, 4 GB)
  • 192.168.1.133 k8s-node3 (4 cores, 4 GB)
  • 192.168.1.134 k8s-node4 (4 cores, 4 GB)

[Figure 3]

Also add "192.168.1.140 k8s-master-lb" to the hosts file on your own local client machine so you can reach the cluster by name.
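For example, on a Linux or macOS client (the hosts file lives elsewhere on Windows), the entry can be appended like this, using the VIP from the list above:

echo "192.168.1.140 k8s-master-lb" | sudo tee -a /etc/hosts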

Host trust

Configure hosts on all nodes so that the machines can reach each other by hostname:

$ cat <<EOF >> /etc/hosts
192.168.1.128 k8s-master1
192.168.1.129 k8s-master2
192.168.1.130 k8s-master3
192.168.1.131 k8s-node1
192.168.1.132 k8s-node2
192.168.1.133 k8s-node3
192.168.1.134 k8s-node4
EOF

SSH key distribution

ssh-keygen -t rsa
for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3 k8s-node4;do ssh-copy-id -i ~/.ssh/id_rsa.pub $i;done

Test connectivity:

ping k8s-master1
# or
ssh k8s-master1
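If you want to check every host in one go, a small loop over the hostnames from the list above will do:

for h in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3 k8s-node4; do
  ping -c 1 -W 1 $h >/dev/null && echo "$h OK" || echo "$h unreachable"
done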

2. Starting the Deployment

Using k8s-ha-install

Someone has already turned the deployment of a highly available Kubernetes cluster into an automated script: k8s-ha-install.

git clone https://github.com/dotbalo/k8s-ha-install.git -b v1.13.x

All of the following operations are performed inside the k8s-ha-install directory.

Edit create-config.sh and set the following variables to match your environment. For example, mine are:

#!/bin/bash
#######################################
# set variables below to create the config files, all files will create at ./config directory
#######################################
# master keepalived virtual ip address
export K8SHA_VIP=192.168.1.140
# master1 ip address
export K8SHA_IP1=192.168.1.128
# master2 ip address
export K8SHA_IP2=192.168.1.129
# master3 ip address
export K8SHA_IP3=192.168.1.130
# master keepalived virtual ip hostname
export K8SHA_VHOST=k8s-master-lb
# master1 hostname
export K8SHA_HOST1=k8s-master1
# master2 hostname
export K8SHA_HOST2=k8s-master2
# master3 hostname
export K8SHA_HOST3=k8s-master3
# master1 network interface name
export K8SHA_NETINF1=ens33
# master2 network interface name
export K8SHA_NETINF2=ens33
# master3 network interface name
export K8SHA_NETINF3=ens33
# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d
# calico reachable ip address (the server's gateway address)
export K8SHA_CALICO_REACHABLE_IP=192.168.1.1
# kubernetes CIDR pod subnet, if CIDR pod subnet is "172.168.0.0/16" please set to "172.168.0.0"
export K8SHA_CIDR=172.168.0.0
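The K8SHA_NETINF* values must match the actual network interface name on each master (mine happen to be ens33). If you are not sure what yours are called, a quick way to list interfaces and their addresses:

# Prints each interface name followed by its IPv4 address
ip -o -4 addr show | awk '{print $2, $4}'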

Change the Kubernetes version to your own (it appears in several places; change all of them). Mine, for example, is v1.14.2:

kubernetesVersion: v1.13.2

# change to
kubernetesVersion: v1.14.2
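To make sure no occurrence is missed, you can grep the repository for the version string before running the script (just a convenience check, not part of k8s-ha-install itself):

grep -rn "kubernetesVersion" .
grep -rln "v1.13" .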

Once the changes are done, run:

$ ./create-config.sh
create kubeadm-config.yaml files success. config/k8s-master1/kubeadm-config.yaml
create kubeadm-config.yaml files success. config/k8s-master2/kubeadm-config.yaml
create kubeadm-config.yaml files success. config/k8s-master3/kubeadm-config.yaml
create keepalived files success. config/k8s-master1/keepalived/
create keepalived files success. config/k8s-master2/keepalived/
create keepalived files success. config/k8s-master3/keepalived/
create nginx-lb files success. config/k8s-master1/nginx-lb/
create nginx-lb files success. config/k8s-master2/nginx-lb/
create nginx-lb files success. config/k8s-master3/nginx-lb/
create calico.yaml file success. calico/calico.yaml

You can see that quite a few files were created automatically. Next, distribute them to the masters:

# Set the hostname variables
export HOST1=k8s-master1
export HOST2=k8s-master2
export HOST3=k8s-master3

# Copy the kubeadm config file to /root/ on each master node
scp -r config/$HOST1/kubeadm-config.yaml $HOST1:/root/
scp -r config/$HOST2/kubeadm-config.yaml $HOST2:/root/
scp -r config/$HOST3/kubeadm-config.yaml $HOST3:/root/

# Copy the keepalived config files to /etc/keepalived/ on each master node
scp -r config/$HOST1/keepalived/* $HOST1:/etc/keepalived/
scp -r config/$HOST2/keepalived/* $HOST2:/etc/keepalived/
scp -r config/$HOST3/keepalived/* $HOST3:/etc/keepalived/

# Copy the nginx load-balancer config files to /root/ on each master node
scp -r config/$HOST1/nginx-lb $HOST1:/root/
scp -r config/$HOST2/nginx-lb $HOST2:/root/
scp -r config/$HOST3/nginx-lb $HOST3:/root/
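Optionally, verify that everything landed where expected (paths as used in the scp commands above):

for h in $HOST1 $HOST2 $HOST3; do
  echo "== $h =="
  ssh $h "ls /root/kubeadm-config.yaml /etc/keepalived/ /root/nginx-lb/"
done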

nginx-lb configuration

Run on all master nodes:

cd
docker-compose --file=/root/nginx-lb/docker-compose.yaml up -d
docker-compose --file=/root/nginx-lb/docker-compose.yaml ps
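To double-check that the load-balancer container is actually up, you can also inspect it directly (the "nginx-lb" name here is an assumption based on the directory name; the exact container name and published port are defined in the generated docker-compose.yaml):

docker ps --filter "name=nginx-lb"
docker logs --tail 20 $(docker ps -q --filter "name=nginx-lb")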

keepalived configuration

Modify the following file on all master nodes:

$ vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
#vrrp_script chk_apiserver {
#    script "/etc/keepalived/check_apiserver.sh"
#    interval 2
#    weight -5
#    fall 3
#    rise 2
#}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.1.128
    virtual_router_id 51
    priority 102
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 412f7dc3bfed32194d1600c483e10ad1d
    }
    virtual_ipaddress {
        192.168.1.140
    }
    track_script {
       chk_apiserver
    }
}

What was changed:

  • The vrrp_script section is commented out
  • mcast_src_ip is set to each master node's own IP
  • virtual_ipaddress is set to the VIP
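A quick way to confirm that each master ended up with its own source IP and the shared VIP (run on every master):

grep -E -A1 "mcast_src_ip|virtual_ipaddress" /etc/keepalived/keepalived.conf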

Then restart the service:

systemctl restart keepalived
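After the restart, the node currently holding the MASTER role should have the VIP bound to one of its interfaces; a quick check on each master:

ip addr | grep 192.168.1.140
systemctl status keepalived --no-pager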

Test whether the VIP is reachable. It must be reachable before you move on to the next step:

ping 192.168.1.140 -c 4

Mind the order of the steps: you must run the nginx-lb steps first, then comment out the relevant parts of keepalived.conf, and only then restart keepalived!

Starting the master node

With the configuration in place, pre-pull the images (run on all nodes):

kubeadm config images pull --config /root/kubeadm-config.yaml

Then bring up k8s-master1:

kubeadm init --config /root/kubeadm-config.yaml
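When the init finishes it prints instructions for setting up kubectl (and a kubeadm join command for the other nodes); the standard kubectl setup is:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config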

Check the node that has just come up (it shows NotReady because the network plugin, Calico, has not been installed yet):

$ kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master1   NotReady   master   2m11s   v1.14.2

Certificate distribution