Multi-master cluster

Configure the master nodes
Install keepalived
Already installed when setting up MySQL high availability.
Deploy haproxy
nginx can be used instead.
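If you prefer nginx over haproxy, a minimal sketch of the equivalent TCP load-balancing configuration is shown below. This is an illustration only: it assumes nginx was built with the stream module, and it mirrors only the kubernetes-apiserver part of the haproxy configuration that follows.

```nginx
# Hypothetical nginx equivalent of the haproxy kubernetes-apiserver
# frontend/backend (requires nginx built with the stream module)
stream {
    upstream kubernetes-apiserver {
        # Same two masters as in the haproxy backend below
        server 192.168.1.64:6443;
        server 192.168.1.63:6443;
    }
    server {
        # Same external port as haproxy's *:16443 frontend
        listen 16443;
        proxy_pass kubernetes-apiserver;
    }
}
```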
Already installed when setting up MySQL high availability; here we only modify the configuration file. The complete configuration is below:
global
    log 127.0.0.1 local0          ## logging
    maxconn 4096
    chroot /var/lib/haproxy
    daemon

defaults
    log     global
    option  dontlognull
    retries 3
    option  redispatch
    maxconn 2000
    timeout connect 5000
    timeout client  50000
    timeout server  50000

listen admin_status
    bind :1080                    ## VIP
    stats uri /stats              ## statistics page
    stats auth admin:admin
    mode http
    option httplog

listen allmycat_service
    bind :8096                    ## forwards to Mycat's 8066 port, i.e. the Mycat service port
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_61 192.168.1.61:8066 check port 48700 inter 5s rise 2 fall 3
    server mycat_62 192.168.1.62:8066 check port 48700 inter 5s rise 2 fall 3
    timeout server 20000

listen allmycat_admin
    bind :8097                    ## forwards to Mycat's 9066 port, i.e. the Mycat management console port
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_61 192.168.1.61:9066 check port 48700 inter 5s rise 2 fall 3
    server mycat_62 192.168.1.62:9066 check port 48700 inter 5s rise 2 fall 3
    timeout server 20000

frontend kubernetes-apiserver
    mode tcp
    bind *:16443
    option tcplog
    default_backend kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode tcp
    balance roundrobin
    server master01.k8s.io 192.168.1.64:6443 check
    server master02.k8s.io 192.168.1.63:6443 check
Restart haproxy on both masters (the configuration can be validated first with haproxy -c -f /etc/haproxy/haproxy.cfg):
systemctl restart haproxy
Install kubeadm, kubelet, and kubectl
Since new versions are released frequently, pin the version numbers here:
$ yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
$ systemctl enable kubelet
Deploy the Kubernetes masters
Create the kubeadm configuration file
Run this on the master holding the VIP, here master1:
$ mkdir -p /usr/local/kubernetes/manifests
$ cd /usr/local/kubernetes/manifests/
$ vi kubeadm-config.yaml

apiServer:
  certSANs:
    - master1
    - master2
    - master.k8s.io
    - 192.168.1.64
    - 192.168.1.63
    - 192.168.1.62
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
Run on the master1 node:
kubeadm init --config kubeadm-config.yaml

# To reset the cluster:
kubeadm reset
Configure the environment as prompted so the kubectl tool can be used:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
$ kubectl get pods -n kube-system
Save the following output as prompted; it will be needed shortly:
kubeadm join master.k8s.io:16443 --token 7tvdpz.7fdp3d8vbjaz6ubi \
    --discovery-token-ca-cert-hash sha256:e5e4768ba8f53553d692f344912fa71efc666371b08d8f9425308ddf2acae743 \
    --control-plane
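If the token expires later, a fresh join command can be printed with kubeadm token create --print-join-command. The sha256 value above is simply a hash of the cluster CA's public key; the snippet below shows the standard openssl pipeline for recomputing it. To keep the example self-contained it generates a throwaway certificate as a stand-in for /etc/kubernetes/pki/ca.crt.

```shell
# Generate a throwaway CA certificate (stand-in for /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -subj "/CN=kubernetes-ca" -days 1 2>/dev/null

# Compute the discovery-token-ca-cert-hash the same way kubeadm does:
# sha256 over the DER-encoded public key of the CA certificate
openssl x509 -pubkey -in ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
```

On a real master, run the last pipeline against /etc/kubernetes/pki/ca.crt to reproduce the hash in the join command.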
Check the cluster status
kubectl get cs
kubectl get pods -n kube-system
Install the cluster network
Fetch the flannel yaml from the official repository and run on master1:
mkdir flannel
cd flannel
wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Install the flannel network:
kubectl apply -f kube-flannel.yml
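Note that flannel's Pod network must match the podSubnet configured in kubeadm-config.yaml (10.244.0.0/16 here, which is flannel's default). The relevant fragment of the ConfigMap in kube-flannel.yml looks roughly like the following; verify against the downloaded file, as upstream contents change over time:

```yaml
# Excerpt of the flannel ConfigMap in kube-flannel.yml (for reference only)
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```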
Check:
kubectl get pods -n kube-system
Join master2 to the cluster
Copy the keys and related files
Copy the keys and certificates from master1 to master2:
$ ssh root@192.168.1.63 mkdir -p /etc/kubernetes/pki/etcd
$ scp /etc/kubernetes/admin.conf root@192.168.1.63:/etc/kubernetes
$ scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.1.63:/etc/kubernetes/pki
$ scp /etc/kubernetes/pki/etcd/ca.* root@192.168.1.63:/etc/kubernetes/pki/etcd
Join the cluster
Run the join command that kubeadm init printed on master1 (the one saved earlier). It must include the --control-plane flag, which joins the node as a control-plane (master) node:
kubeadm join master.k8s.io:16443 --token 7tvdpz.7fdp3d8vbjaz6ubi \
    --discovery-token-ca-cert-hash sha256:e5e4768ba8f53553d692f344912fa71efc666371b08d8f9425308ddf2acae743 \
    --control-plane
Check the status
kubectl get node
kubectl get pods --all-namespaces
Join a Kubernetes worker node
Run on node1
To add a new worker node to the cluster, run the kubeadm join command printed by kubeadm init, this time without the --control-plane flag:
kubeadm join master.k8s.io:16443 --token 7tvdpz.7fdp3d8vbjaz6ubi \
    --discovery-token-ca-cert-hash sha256:e5e4768ba8f53553d692f344912fa71efc666371b08d8f9425308ddf2acae743
Reinstall the cluster network, since a new node has been added.
On the master1 node:
kubectl delete -f kube-flannel.yml
kubectl apply -f kube-flannel.yml
Check the status
kubectl get node
kubectl get pods --all-namespaces
Test the Kubernetes cluster
Create a pod in the Kubernetes cluster and verify that it runs normally:
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
