System Configuration
master1   192.168.75.161
master2   192.168.75.162
master3   192.168.75.163
node1     192.168.75.164
node2     192.168.75.165
node3     192.168.75.166
VIP       192.168.75.160
VIP port  6444
System Initialization
Disable the firewall
$ systemctl stop firewalld
$ systemctl disable firewalld
Disable SELinux
$ sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
$ setenforce 0                                         # temporary
Disable swap
$ swapoff -a          # temporary
$ vim /etc/fstab      # permanent: comment out the swap line
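Commenting out the swap line can also be done non-interactively with sed. The sketch below runs against a throwaway sample fstab in /tmp so it is safe to try; to apply it for real, back up /etc/fstab first and point the same sed expression at it:

```shell
# Sample fstab for demonstration (paths are illustrative).
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root /        xfs  defaults 0 0
/dev/mapper/centos-swap swap     swap defaults 0 0
EOF

# Comment out any uncommented line that mounts swap.
sed -ri 's|^([^#].*\sswap\s.*)$|#\1|' /tmp/fstab.demo
cat /tmp/fstab.demo
```

After running, the swap entry is prefixed with `#` while the root entry is untouched.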
Set the hostname
$ hostnamectl set-hostname <hostname>
Add hosts entries on the masters
$ cat >> /etc/hosts << EOF
192.168.75.161 k8s-m1
192.168.75.162 k8s-m2
192.168.75.163 k8s-m3
192.168.75.164 k8s-w1
192.168.75.165 k8s-w2
192.168.75.166 k8s-w3
EOF
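To confirm the entries resolve and each node is reachable, a quick loop like the following can help (the hostname list mirrors the table above; the log path is just a scratch file):

```shell
# Ping each cluster hostname once and record the result.
{
  for h in k8s-m1 k8s-m2 k8s-m3 k8s-w1 k8s-w2 k8s-w3; do
    if ping -c1 -W1 "$h" > /dev/null 2>&1; then
      echo "$h reachable"
    else
      echo "$h UNREACHABLE"
    fi
  done
} | tee /tmp/host-check.log
```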
Pass bridged IPv4 traffic to iptables chains
$ modprobe br_netfilter   # the sysctls below require the bridge netfilter module
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system   # apply
Time synchronization
$ yum install ntpdate -y
$ ntpdate time.windows.com
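ntpdate performs only a one-shot sync and is deprecated on newer CentOS releases; chrony is a common alternative for continuous synchronization. A minimal /etc/chrony.conf sketch (the server names are examples, substitute NTP sources reachable from your network, then `systemctl enable --now chronyd`):

```
# /etc/chrony.conf — minimal illustrative example
server ntp.aliyun.com iburst
server cn.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
```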
Install Docker/kubeadm/kubelet on all nodes
Install Docker
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Add the Alibaba Cloud YUM repository
# Configure the Docker registry mirror
$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
# Add the Kubernetes yum repository
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl
$ yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
$ systemctl enable kubelet
# Notes
# List available versions:
$ yum list kubelet kubeadm kubectl --showduplicates | sort -r
# Uninstall:
$ yum erase -y kubelet kubectl kubeadm
Deploy keepalived
See the keepalived YUM installation guide.
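For orientation, a minimal keepalived.conf sketch that floats the VIP from the table above. The interface name, router ID, priorities, and password are assumptions, not values from the referenced guide — adjust to your environment, and use state BACKUP with lower priorities on master2/master3:

```
! /etc/keepalived/keepalived.conf — illustrative sketch only
vrrp_instance VI_1 {
    state MASTER              # BACKUP on master2/master3
    interface ens33           # assumption: replace with your NIC name
    virtual_router_id 51
    priority 100              # e.g. 90/80 on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip     # placeholder password
    }
    virtual_ipaddress {
        192.168.75.160
    }
}
```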
Deploy haproxy
See the haproxy installation guide.
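The VIP listens on 6444 and balances across the three apiservers on 6443, which is why `--control-plane-endpoint` below points at 192.168.75.160:6444. A minimal haproxy.cfg sketch under that assumption (balance mode and health-check options are illustrative defaults, not taken from the referenced guide):

```
# /etc/haproxy/haproxy.cfg — illustrative sketch only
frontend k8s-apiserver
    bind *:6444
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server k8s-m1 192.168.75.161:6443 check
    server k8s-m2 192.168.75.162:6443 check
    server k8s-m3 192.168.75.163:6443 check
```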
Deploy the Kubernetes Master
Run on the first master:
kubeadm init \
  --control-plane-endpoint 192.168.75.160:6444 \
  --upload-certs \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.0 \
  --pod-network-cidr 10.244.0.0/16
Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join the remaining nodes

Join the other masters with the control-plane join command printed by kubeadm init (the token, CA hash, and certificate key below come from that output):

kubeadm join 192.168.75.160:6444 --token rsai58.yro11chey8qetgr6 \
  --discovery-token-ca-cert-hash sha256:8fbc37be2c0807bef6df199e66387ce81f4d1a2cfd2c351e00e65401486a8c5e \
  --control-plane --certificate-key fc9b86b166ca5275501138b6855466c4e0627e728e2a6af2a0e4b28c2ee3b0b8

Worker nodes use the same command without the --control-plane and --certificate-key flags.
Install the flannel network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Test the Kubernetes cluster
# Create a pod in the cluster and verify it runs
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
# Access: http://NodeIP:Port
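The NodePort can be extracted from the service instead of read by eye. A sketch against sample `kubectl get svc` output (the port values are made up; on a live cluster you would pipe real output, or simply use `kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'`):

```shell
# Parse the NodePort (the part after the colon in PORT(S)) from sample output.
sample='nginx   NodePort   10.96.120.13   <none>   80:31285/TCP   5s'
nodeport=$(echo "$sample" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "NodePort: $nodeport"   # prints "NodePort: 31285"
```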
Install Kuboard
# Install
$ kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
# Check status
$ kubectl get pods -l k8s.eip.work/name=kuboard -n kube-system
# Get the login token
$ kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d
# Reference: https://www.cnblogs.com/xiao987334176/p/12060855.html
Kubernetes firewall configuration
# Check firewall status
$ systemctl status firewalld.service
# Start the firewall
$ systemctl start firewalld.service
# Stop the firewall
$ systemctl stop firewalld.service
# Restart the firewall
$ systemctl restart firewalld.service
# Disable the firewall at boot
$ systemctl disable firewalld.service
# List open ports
$ firewall-cmd --list-ports

# Ports to open on the masters
$ firewall-cmd --permanent --add-port=6444/tcp   # VIP load-balancer port
$ firewall-cmd --permanent --add-port=6443/tcp
$ firewall-cmd --permanent --add-port=2379-2380/tcp
$ firewall-cmd --permanent --add-port=10250/tcp
$ firewall-cmd --permanent --add-port=10251/tcp
$ firewall-cmd --permanent --add-port=10252/tcp
$ firewall-cmd --permanent --add-port=10255/tcp
$ firewall-cmd --permanent --add-port=8472/udp
$ firewall-cmd --permanent --add-port=443/udp
$ firewall-cmd --permanent --add-port=53/udp
$ firewall-cmd --permanent --add-port=53/tcp
$ firewall-cmd --permanent --add-port=9153/tcp
$ firewall-cmd --add-masquerade --permanent
# Only if you want NodePorts exposed on the control-plane IP as well
$ firewall-cmd --permanent --add-port=30000-32767/tcp
$ systemctl restart firewalld.service

# Ports to open on the workers
$ firewall-cmd --permanent --add-port=10250/tcp
$ firewall-cmd --permanent --add-port=10255/tcp
$ firewall-cmd --permanent --add-port=8472/udp
$ firewall-cmd --permanent --add-port=443/udp
$ firewall-cmd --permanent --add-port=30000-32767/tcp
$ firewall-cmd --permanent --add-port=53/udp
$ firewall-cmd --permanent --add-port=53/tcp
$ firewall-cmd --permanent --add-port=9153/tcp
$ firewall-cmd --add-masquerade --permanent
$ systemctl restart firewalld.service
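The repetitive port list can be driven from a loop. A dry-run sketch that only prints the master-node commands it would run (drop the `echo` in front of each firewall-cmd to actually apply them; the log path is a scratch file):

```shell
# Print (dry-run) the firewall-cmd invocations for a master node.
{
  for p in 6444 6443 2379-2380 10250 10251 10252 10255 53 9153 30000-32767; do
    echo firewall-cmd --permanent --add-port="$p"/tcp
  done
  for p in 8472 443 53; do
    echo firewall-cmd --permanent --add-port="$p"/udp
  done
  echo firewall-cmd --add-masquerade --permanent
} | tee /tmp/firewall-dryrun.txt
```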
Reset kubeadm (undo kubeadm init/join)
kubeadm reset
Get the dashboard token
kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token
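Service-account tokens are stored base64-encoded in the secret's data field; `kubectl describe secret` shows them already decoded, while `kubectl get -o yaml` does not. A quick illustration of the decode step (the token string is a made-up placeholder):

```shell
# Encode a placeholder token as it would appear in `kubectl get secret -o yaml`,
# then decode it the same way you would a real .data.token value.
encoded=$(printf 'example-token-value' | base64)
printf '%s' "$encoded" | base64 -d
echo
```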
Fixing expired certificates
# Check client certificate expiry
$ kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jun 01, 2021 00:41 UTC   295d                                    no
apiserver                  Jun 01, 2021 00:41 UTC   295d            ca                      no
apiserver-etcd-client      Jun 01, 2021 00:41 UTC   295d            etcd-ca                 no
apiserver-kubelet-client   Jun 01, 2021 00:41 UTC   295d            ca                      no
controller-manager.conf    Jun 01, 2021 00:41 UTC   295d                                    no
etcd-healthcheck-client    Jun 01, 2021 00:41 UTC   295d            etcd-ca                 no
etcd-peer                  Jun 01, 2021 00:41 UTC   295d            etcd-ca                 no
etcd-server                Jun 01, 2021 00:41 UTC   295d            etcd-ca                 no
front-proxy-client         Jun 01, 2021 00:41 UTC   295d            front-proxy-ca          no
scheduler.conf             Jun 01, 2021 00:41 UTC   295d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 30, 2030 00:41 UTC   9y              no
etcd-ca                 May 30, 2030 00:41 UTC   9y              no
front-proxy-ca          May 30, 2030 00:41 UTC   9y              no

# Renew all certificates (run on each master)
$ kubeadm alpha certs renew all
$ cp /etc/kubernetes/admin.conf /root/.kube/config
# After renewal, restart the control-plane components on each master: kube-apiserver,
# kube-controller-manager, and kube-scheduler. Alternatively, rename the
# /etc/kubernetes/manifests directory, wait a moment, then rename it back; kubelet
# will recreate those static pods.
# Reference: https://leif.fun/articles/2020/08/09/1596949888243.html
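Certificate expiry can also be checked directly with openssl, independent of kubeadm. The demo below generates a throwaway self-signed certificate so it is self-contained; on a real master you would point the final command at a file such as /etc/kubernetes/pki/apiserver.crt:

```shell
# Create a short-lived self-signed cert, then read its expiry date.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo" -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
openssl x509 -noout -enddate -in /tmp/demo.crt
# On a cluster: openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt
```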
Change the NodePort range for Kubernetes services
# Edit the kube-apiserver manifest
$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
# Add this flag to the kube-apiserver command section:
--service-node-port-range=1-65535
# In a kubeadm cluster kube-apiserver runs as a static pod, so kubelet recreates it
# automatically when the manifest changes; if it does not, restart kubelet:
$ systemctl daemon-reload
$ systemctl restart kubelet
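For orientation, the flag sits in the container's command list in the manifest, roughly like this (abridged sketch; the advertise-address and the other omitted flags will differ on your cluster):

```
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.75.161
    - --service-node-port-range=1-65535   # the added flag
```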