kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster.

With just two commands, it can stand up a Kubernetes cluster:

  # Create a master node
  $ kubeadm init
  # Join a node to the current cluster
  $ kubeadm join <master node IP>:<port>

1. Installation requirements

Before starting, the machines used for the Kubernetes cluster must meet the following requirements (a quick sanity check is sketched after the list):

  • One or more machines running CentOS 7.x (x86_64)
  • Hardware: 2 GB of RAM or more, 2 CPUs or more, 30 GB of disk or more
  • Internet access to pull images; if the servers cannot reach the internet, download the images in advance and import them on each node
  • Swap disabled
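A minimal sanity check for these requirements (assumes CentOS 7; output formats may differ slightly):

  nproc                        # expect 2 or more CPUs
  grep MemTotal /proc/meminfo  # expect roughly 2 GB or more
  df -h /                      # expect 30 GB or more of disk
  swapon --summary             # expect no output once swap is disabled
  cat /etc/redhat-release      # expect CentOS Linux release 7.x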

2. Prepare the environment

Role              IP
master1           192.168.44.155
master2           192.168.44.156
node1             192.168.44.157
VIP (virtual IP)  192.168.44.158
  # Disable the firewall
  systemctl stop firewalld
  systemctl disable firewalld

  # Disable SELinux
  sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
  setenforce 0  # temporary

  # Disable swap
  swapoff -a  # temporary
  sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

  # Set each hostname according to the plan
  hostnamectl set-hostname <hostname>

  # Add hosts entries on the masters
  cat >> /etc/hosts << EOF
  192.168.44.158 master.k8s.io k8s-vip
  192.168.44.155 master01.k8s.io master1
  192.168.44.156 master02.k8s.io master2
  192.168.44.157 node01.k8s.io node1
  EOF

  # Pass bridged IPv4 traffic to the iptables chains
  cat > /etc/sysctl.d/k8s.conf << EOF
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF
  sysctl --system  # apply

  # Time synchronization
  yum install ntpdate -y
  ntpdate time.windows.com
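Optionally confirm that swap is off and that the bridge sysctls took effect. Note the net.bridge keys exist only once the br_netfilter kernel module is loaded; if sysctl reports them as missing, load the module first:

  modprobe br_netfilter
  sysctl net.bridge.bridge-nf-call-iptables  # expect "... = 1"
  free -h | grep -i swap                     # swap total should be 0B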

3. Deploy keepalived on all master nodes

3.1 Install dependencies and keepalived

  yum install -y conntrack-tools libseccomp libtool-ltdl
  yum install -y keepalived

3.2 Configure the master nodes

Configuration on master1:

  cat > /etc/keepalived/keepalived.conf <<EOF
  ! Configuration File for keepalived

  global_defs {
      router_id k8s
  }

  vrrp_script check_haproxy {
      script "killall -0 haproxy"
      interval 3
      weight -2
      fall 10
      rise 2
  }

  vrrp_instance VI_1 {
      state MASTER
      interface ens33
      virtual_router_id 51
      priority 250
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass ceb1b3ec013d66163d6ab
      }
      virtual_ipaddress {
          192.168.44.158
      }
      track_script {
          check_haproxy
      }
  }
  EOF

Configuration on master2 (identical except for state BACKUP and a lower priority of 200):

  cat > /etc/keepalived/keepalived.conf <<EOF
  ! Configuration File for keepalived

  global_defs {
      router_id k8s
  }

  vrrp_script check_haproxy {
      script "killall -0 haproxy"
      interval 3
      weight -2
      fall 10
      rise 2
  }

  vrrp_instance VI_1 {
      state BACKUP
      interface ens33
      virtual_router_id 51
      priority 200
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass ceb1b3ec013d66163d6ab
      }
      virtual_ipaddress {
          192.168.44.158
      }
      track_script {
          check_haproxy
      }
  }
  EOF

3.3 Start and verify

Run on both master nodes:

  # Start keepalived
  $ systemctl start keepalived.service
  # Enable it at boot
  $ systemctl enable keepalived.service
  # Check the status
  $ systemctl status keepalived.service

After starting, inspect the NIC on master1; the VIP should be bound to it:

  ip a s ens33
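If keepalived is healthy, the VIP 192.168.44.158 appears as an additional address on ens33 of master1 (and only master1). A quick filter, with example output that may differ slightly by system:

  ip a s ens33 | grep 192.168.44.158
  # e.g.:  inet 192.168.44.158/32 scope global ens33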

4. Deploy haproxy

4.1 Install

  yum install -y haproxy

4.2 Configure

The configuration is identical on both master nodes. It declares the two master API servers as backends and binds haproxy to port 16443, so port 16443 becomes the cluster entry point:

  cat > /etc/haproxy/haproxy.cfg << EOF
  #---------------------------------------------------------------------
  # Global settings
  #---------------------------------------------------------------------
  global
      # to have these messages end up in /var/log/haproxy.log you will
      # need to:
      # 1) configure syslog to accept network log events.  This is done
      #    by adding the '-r' option to the SYSLOGD_OPTIONS in
      #    /etc/sysconfig/syslog
      # 2) configure local2 events to go to the /var/log/haproxy.log
      #    file. A line like the following can be added to
      #    /etc/sysconfig/syslog
      #
      #    local2.*    /var/log/haproxy.log
      #
      log         127.0.0.1 local2
      chroot      /var/lib/haproxy
      pidfile     /var/run/haproxy.pid
      maxconn     4000
      user        haproxy
      group       haproxy
      daemon
      # turn on stats unix socket
      stats socket /var/lib/haproxy/stats
  #---------------------------------------------------------------------
  # common defaults that all the 'listen' and 'backend' sections will
  # use if not designated in their block
  #---------------------------------------------------------------------
  defaults
      mode                    http
      log                     global
      option                  httplog
      option                  dontlognull
      option                  http-server-close
      option forwardfor       except 127.0.0.0/8
      option                  redispatch
      retries                 3
      timeout http-request    10s
      timeout queue           1m
      timeout connect         10s
      timeout client          1m
      timeout server          1m
      timeout http-keep-alive 10s
      timeout check           10s
      maxconn                 3000
  #---------------------------------------------------------------------
  # kubernetes apiserver frontend which proxies to the backends
  #---------------------------------------------------------------------
  frontend kubernetes-apiserver
      mode            tcp
      bind            *:16443
      option          tcplog
      default_backend kubernetes-apiserver
  #---------------------------------------------------------------------
  # round robin balancing between the various backends
  #---------------------------------------------------------------------
  backend kubernetes-apiserver
      mode    tcp
      balance roundrobin
      server  master01.k8s.io 192.168.44.155:6443 check
      server  master02.k8s.io 192.168.44.156:6443 check
  #---------------------------------------------------------------------
  # collection haproxy statistics message
  #---------------------------------------------------------------------
  listen stats
      bind          *:1080
      stats auth    admin:awesomePassword
      stats refresh 5s
      stats realm   HAProxy\ Statistics
      stats uri     /admin?stats
  EOF

4.3 Start and verify

Start it on both masters:

  # Enable at boot
  $ systemctl enable haproxy
  # Start haproxy
  $ systemctl start haproxy
  # Check the status
  $ systemctl status haproxy

Check the ports:

  netstat -lntup | grep haproxy
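Both 16443 and 1080 should be listening. As an extra check, the stats page defined in the config above can be queried locally with the credentials from that config (a sketch; adjust if you changed the stats auth line):

  curl -u admin:awesomePassword "http://127.0.0.1:1080/admin?stats"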

5. Install Docker/kubeadm/kubelet on all nodes

Kubernetes uses Docker as its default container runtime (CRI), so install Docker first.

5.1 Install Docker

  $ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
  $ yum -y install docker-ce-18.06.1.ce-3.el7
  $ systemctl enable docker && systemctl start docker
  $ docker --version
  Docker version 18.06.1-ce, build e68fc7a
Configure a registry mirror so image pulls are faster:

  $ cat > /etc/docker/daemon.json << EOF
  {
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
  }
  EOF
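Changes to daemon.json only take effect after Docker is restarted:

  $ systemctl restart docker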

5.2 Add the Aliyun YUM repository

  $ cat > /etc/yum.repos.d/kubernetes.repo << EOF
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  enabled=1
  gpgcheck=0
  repo_gpgcheck=0
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF

5.3 Install kubeadm, kubelet, and kubectl

Since releases change frequently, pin the versions explicitly:

  $ yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
  $ systemctl enable kubelet

6. Deploy the Kubernetes masters

6.1 Create the kubeadm configuration file

Work on the master that currently holds the VIP; here that is master1.

  $ mkdir /usr/local/kubernetes/manifests -p
  $ cd /usr/local/kubernetes/manifests/
  $ vi kubeadm-config.yaml
  apiServer:
    certSANs:
      - master1
      - master2
      - master.k8s.io
      - 192.168.44.158
      - 192.168.44.155
      - 192.168.44.156
      - 127.0.0.1
    extraArgs:
      authorization-mode: Node,RBAC
    timeoutForControlPlane: 4m0s
  apiVersion: kubeadm.k8s.io/v1beta1
  certificatesDir: /etc/kubernetes/pki
  clusterName: kubernetes
  controlPlaneEndpoint: "master.k8s.io:16443"
  controllerManager: {}
  dns:
    type: CoreDNS
  etcd:
    local:
      dataDir: /var/lib/etcd
  imageRepository: registry.aliyuncs.com/google_containers
  kind: ClusterConfiguration
  kubernetesVersion: v1.16.3
  networking:
    dnsDomain: cluster.local
    podSubnet: 10.244.0.0/16
    serviceSubnet: 10.1.0.0/16
  scheduler: {}
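Optionally, pre-pull the control-plane images so the init step below runs faster; kubeadm supports this directly:

  $ kubeadm config images pull --config kubeadm-config.yaml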

6.2 Run on master1

  $ kubeadm init --config kubeadm-config.yaml

Following the prompts in the output, configure the environment so kubectl can be used:

  $ mkdir -p $HOME/.kube
  $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  $ kubectl get nodes
  $ kubectl get pods -n kube-system

Save the following from the output; it will be needed shortly:

  kubeadm join master.k8s.io:16443 --token jv5z7n.3y1zi95p952y9p65 \
      --discovery-token-ca-cert-hash sha256:403bca185c2f3a4791685013499e7ce58f9848e2213e27194b75a2e3293d8812 \
      --control-plane
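Bootstrap tokens expire after 24 hours by default. If the saved command stops working, a fresh join command can be printed on master1 (append --control-plane by hand for master joins, since the certificates are copied manually in step 8):

  $ kubeadm token create --print-join-command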

Check the cluster status:

  kubectl get cs
  kubectl get pods -n kube-system

7. Install the cluster network

Fetch the flannel manifest from the official repository on master1; its default pod network, 10.244.0.0/16, matches the podSubnet configured above:

  mkdir flannel
  cd flannel
  wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Install the flannel network:

  kubectl apply -f kube-flannel.yml

Verify:

  kubectl get pods -n kube-system

8. Join master2 to the cluster

8.1 Copy certificates and related files

Copy the certificates and related files from master1 (which holds the VIP) to master2:

  $ ssh root@192.168.44.156 mkdir -p /etc/kubernetes/pki/etcd
  $ scp /etc/kubernetes/admin.conf root@192.168.44.156:/etc/kubernetes
  $ scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.44.156:/etc/kubernetes/pki
  $ scp /etc/kubernetes/pki/etcd/ca.* root@192.168.44.156:/etc/kubernetes/pki/etcd

8.2 Join master2

On master2, run the join command printed by kubeadm init on master1; the --control-plane flag marks this node as a master joining the control plane:

  kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b \
      --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba \
      --control-plane

Check the status:

  kubectl get node
  kubectl get pods --all-namespaces

9. Join a Kubernetes worker node

Run this on node1.

To add a new node to the cluster, run the kubeadm join command from the kubeadm init output (without --control-plane):

  kubeadm join master.k8s.io:16443 --token ckf7bs.30576l0okocepg8b \
      --discovery-token-ca-cert-hash sha256:19afac8b11182f61073e254fb57b9f19ab4d798b70501036fc69ebef46094aba

Since a new node has been added, make sure the cluster network extends to it. flannel runs as a DaemonSet, so a flannel pod is normally scheduled onto the new node automatically; re-applying kube-flannel.yml is harmless if it is missing.
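To confirm that a flannel pod landed on node1 (assumes the flannel DaemonSet from step 7):

  kubectl get pods -n kube-system -o wide | grep flannel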

Check the status:

  kubectl get node
  kubectl get pods --all-namespaces

10. Test the Kubernetes cluster

Create a pod in the Kubernetes cluster and verify that it runs correctly:

  $ kubectl create deployment nginx --image=nginx
  $ kubectl expose deployment nginx --port=80 --type=NodePort
  $ kubectl get pod,svc

Access it at: http://NodeIP:Port
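The NodePort is the high port in the PORT(S) column of the svc output (for example 80:3xxxx/TCP). It can also be read directly with standard kubectl JSONPath; any node IP from the plan, such as 192.168.44.157, works as NodeIP:

  kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'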