Multi-master cluster

Configure the master nodes

Install keepalived

Already installed when setting up MySQL high availability.
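The VIP configuration itself is not shown in this guide. A minimal sketch of what the keepalived `vrrp_instance` might look like for this setup — the interface name `ens33`, the priorities, and the VIP `192.168.1.60` are all assumptions to adjust for your environment; the VIP must be the address that `master.k8s.io` resolves to:

```
! Sketch only: adjust interface, priorities, and VIP to your environment
vrrp_instance VI_1 {
    state MASTER              ! BACKUP on master2
    interface ens33           ! assumed NIC name
    virtual_router_id 51
    priority 100              ! lower (e.g. 90) on master2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.60          ! assumed VIP; master.k8s.io must resolve to it
    }
}
```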

Deploy haproxy

nginx can be used instead.

haproxy was already installed when setting up MySQL high availability; here we only modify its configuration file. The complete configuration is below:

```
global
    log 127.0.0.1 local0      ## logging
    maxconn 4096
    chroot /var/lib/haproxy
    daemon

defaults
    log global
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5000
    timeout client 50000
    timeout server 50000

listen admin_status
    bind :1080                ## VIP
    stats uri /stats          ## statistics page
    stats auth admin:admin
    mode http
    option httplog

listen allmycat_service
    bind :8096                ## forwards to mycat port 8066, the mycat service port
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_61 192.168.1.61:8066 check port 48700 inter 5s rise 2 fall 3
    server mycat_62 192.168.1.62:8066 check port 48700 inter 5s rise 2 fall 3
    timeout server 20000

listen allmycat_admin
    bind :8097                ## forwards to mycat port 9066, the mycat admin console port
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_61 192.168.1.61:9066 check port 48700 inter 5s rise 2 fall 3
    server mycat_62 192.168.1.62:9066 check port 48700 inter 5s rise 2 fall 3
    timeout server 20000

frontend kubernetes-apiserver
    mode tcp
    bind *:16443
    option tcplog
    default_backend kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode tcp
    balance roundrobin
    server master01.k8s.io 192.168.1.64:6443 check
    server master02.k8s.io 192.168.1.63:6443 check
```
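As noted above, nginx can stand in for haproxy for the apiserver part. A minimal sketch of the equivalent TCP proxy using the nginx `stream` module (nginx must be built with stream support; backend addresses are taken from the haproxy config above):

```
stream {
    upstream kubernetes-apiserver {
        server 192.168.1.64:6443;
        server 192.168.1.63:6443;
    }
    server {
        listen 16443;
        proxy_pass kubernetes-apiserver;
    }
}
```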

Restart haproxy on both masters:

```shell
systemctl restart haproxy
```

Install kubeadm, kubelet, and kubectl

Since releases come out frequently, pin the version numbers when deploying:

```shell
yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
systemctl enable kubelet
```
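These packages come from a Kubernetes yum repository that must be configured beforehand. A commonly used `/etc/yum.repos.d/kubernetes.repo` pointing at the Aliyun mirror (an assumption — substitute whichever mirror your environment uses) looks like:

```
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
```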

Deploy the Kubernetes Master

Create the kubeadm configuration file

Run this on the master that currently holds the VIP, here master1:

```shell
mkdir -p /usr/local/kubernetes/manifests
cd /usr/local/kubernetes/manifests/
vi kubeadm-config.yaml
```

```yaml
apiServer:
  certSANs:
    - master1
    - master2
    - master.k8s.io
    - 192.168.1.64
    - 192.168.1.63
    - 192.168.1.62
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
```

Run on the master1 node:

```shell
kubeadm init --config kubeadm-config.yaml
# To reset the cluster if something goes wrong:
kubeadm reset
```

Following the prompts, configure the environment so the kubectl tool works:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
kubectl get pods -n kube-system
```

Save the following join command from the init output; it will be needed shortly:

```shell
kubeadm join master.k8s.io:16443 --token 7tvdpz.7fdp3d8vbjaz6ubi \
    --discovery-token-ca-cert-hash sha256:e5e4768ba8f53553d692f344912fa71efc666371b08d8f9425308ddf2acae743 \
    --control-plane
```
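If the saved command is lost, the token can be regenerated with `kubeadm token create --print-join-command`. The `--discovery-token-ca-cert-hash` value is simply the SHA-256 of the cluster CA's DER-encoded public key and can be recomputed from `/etc/kubernetes/pki/ca.crt`. A sketch that uses a throwaway self-signed cert so it can run anywhere:

```shell
# In a real cluster, use /etc/kubernetes/pki/ca.crt instead of this throwaway cert
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout demo-ca.key -out demo-ca.crt -subj "/CN=demo-ca"

# kubeadm's discovery hash = SHA-256 of the CA cert's DER-encoded public key
openssl x509 -pubkey -in demo-ca.crt -noout \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex
```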

Check the cluster status:

```shell
kubectl get cs
kubectl get pods -n kube-system
```

Install the cluster network

Fetch the flannel YAML from the official repository; run on master1:

```shell
mkdir flannel
cd flannel
wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

Install the flannel network:

```shell
kubectl apply -f kube-flannel.yml
```
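Flannel's `net-conf.json` (a ConfigMap inside kube-flannel.yml) must match the `podSubnet` set in kubeadm-config.yaml. The stock manifest already defaults to `10.244.0.0/16`, so no edit should be needed here, but it is worth verifying; the relevant fragment looks like:

```yaml
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```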

Check:

```shell
kubectl get pods -n kube-system
```

Join master2 to the cluster

Copy the keys and related files

Copy the keys and related files from master1 to master2:

```shell
ssh root@192.168.1.63 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@192.168.1.63:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.1.63:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.1.63:/etc/kubernetes/pki/etcd
```

Join the cluster

Run the join command printed by `kubeadm init` on master1 (the command saved above), with the `--control-plane` flag, which adds the node to the cluster as a master (control-plane) node:

```shell
kubeadm join master.k8s.io:16443 --token 7tvdpz.7fdp3d8vbjaz6ubi \
    --discovery-token-ca-cert-hash sha256:e5e4768ba8f53553d692f344912fa71efc666371b08d8f9425308ddf2acae743 \
    --control-plane
```

Check the status:

```shell
kubectl get node
kubectl get pods --all-namespaces
```

Join a Kubernetes Node

Run on node1

To add a new worker node to the cluster, run the `kubeadm join` command from the `kubeadm init` output, this time without the `--control-plane` flag:

```shell
kubeadm join master.k8s.io:16443 --token 7tvdpz.7fdp3d8vbjaz6ubi \
    --discovery-token-ca-cert-hash sha256:e5e4768ba8f53553d692f344912fa71efc666371b08d8f9425308ddf2acae743
```

Reinstall the cluster network, since a new node was added.

On the master1 node:

```shell
kubectl delete -f kube-flannel.yml
kubectl apply -f kube-flannel.yml
```

Check the status:

```shell
kubectl get node
kubectl get pods --all-namespaces
```

Test the Kubernetes cluster

Create a pod in the Kubernetes cluster and verify it runs correctly:

```shell
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
```

Access URL: http://192.168.1.180:31507/