Docker Installation

Installation Steps
[root@clientvm k8s]# cd /resources/playbooks/k8s/
[root@clientvm k8s]# ansible-playbook -i hosts docker.yml
Verification
ansible -i hosts lab -m command -a 'docker images'
Official Installation Steps
# (Install Docker CE)
## Set up the repository
### Install required packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

## Add the Docker repository
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker CE
yum install -y containerd.io-1.2.13 docker-ce-19.03.11 docker-ce-cli-19.03.11
To install a specific version, first list all available version numbers:
yum list docker-ce --showduplicates | sort -r
docker-ce.x86_64  3:18.09.1-3.el7    docker-ce-stable
docker-ce.x86_64  3:18.09.0-3.el7    docker-ce-stable
docker-ce.x86_64  18.06.1.ce-3.el7   docker-ce-stable
docker-ce.x86_64  18.06.0.ce-3.el7   docker-ce-stable
Then install the chosen version:
yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
Continuing...
## Create /etc/docker
sudo mkdir /etc/docker
# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://pee6w651.mirror.aliyuncs.com", "https://ustc-edu-cn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}
EOF
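A malformed daemon.json will prevent the Docker daemon from starting, so it is worth validating the file before the restart below. This is a minimal sketch that writes a sample to /tmp (so it is side-effect free); on the real host, point the check at /etc/docker/daemon.json instead. It assumes python3 is available.

```shell
# Write a sample daemon.json (illustration only; the real file is /etc/docker/daemon.json)
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://pee6w651.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}
EOF
# json.tool exits non-zero on invalid JSON, so this prints only when the file parses
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json OK"
```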
# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
kubelet, kubeadm, kubectl Installation

Installation Steps

Complete the system configuration:
- Turn off swapping
- Turn off SELinux
- Manage Kernel parameters
[root@clientvm k8s]# ansible-playbook -i hosts tune-os.yml

Install kubeadm, kubelet, kubectl:

[root@clientvm k8s]# ansible-playbook -i hosts kubeadm-kubelet.yml

Command completion:

[root@clientvm k8s]# echo "source <(kubectl completion bash)" >>~/.bashrc
[root@clientvm k8s]# . ~/.bashrc
Official Installation Steps
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
List all versions
yum list kubeadm --showduplicates | sort -r
Install a specific version
setenforce 0
yum install -y kubelet-<VERSION_STRING> kubeadm-<VERSION_STRING> kubectl-<VERSION_STRING>
systemctl enable kubelet && systemctl start kubelet
K8S Cluster Installation

Install the master
Preload the images to save time. To avoid image-import errors caused by disk pressure when multiple VMs read and write the disk at the same time, add the --forks 1 parameter to limit parallelism to 1:
[root@clientvm k8s]# ansible-playbook --forks 1 -i hosts 01-preload-install-Image.yml
[root@clientvm k8s]# ansible-playbook --forks 1 -i hosts 02-preload-other.yml
[root@clientvm k8s]# ansible-playbook --forks 1 -i hosts 03-preload-ingress-storage-metallb-metrics.yml
[root@clientvm k8s]# ansible-playbook --forks 1 -i hosts 08-preload-Exam.yml
If the image import fails, run the following command on each node to delete the images, then re-import:
for i in $(docker images | awk '{print $3}' |grep -v IMAGE); do docker rmi $i ; done
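How the loop above selects image IDs: column 3 of `docker images` output is the IMAGE ID, and `grep -v IMAGE` drops the header row so only real IDs reach `docker rmi`. A sketch of the filter on sample output (no Docker daemon needed):

```shell
# Sample `docker images` output piped through the same awk/grep filter used above;
# the header's third field is "IMAGE", which grep -v removes
printf '%s\n' \
  'REPOSITORY   TAG      IMAGE ID   CREATED   SIZE' \
  'nginx        latest   abc123     2 days    133MB' \
  'busybox      latest   def456     3 days    1.24MB' \
  | awk '{print $3}' | grep -v IMAGE
# → abc123
# → def456
```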
[root@clientvm k8s]# ssh master
Last login: Thu Nov 26 11:53:41 2020 from 192.168.241.132
[root@master ~]#
[root@master ~]# source <(kubeadm completion bash)
Generate the configuration file

Replace the following IP with your own master node's IP:
[root@master ~]# kubeadm config print init-defaults >init.yaml
[root@master ~]# vim init.yaml

## Modify the following lines
advertiseAddress: 192.168.133.129
imageRepository: registry.aliyuncs.com/google_containers
......
networking:
  dnsDomain: example.com
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
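The vim edits can also be made non-interactively with sed, which is convenient for scripting. This is a sketch on a minimal sample file in /tmp (the sample values are illustrative); run the same sed expressions against your real init.yaml:

```shell
# Minimal stand-in for init.yaml (illustrative default values)
cat > /tmp/init-sample.yaml <<'EOF'
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
imageRepository: k8s.gcr.io
EOF
# Rewrite the two fields in place, preserving indentation
sed -i \
  -e 's|advertiseAddress: .*|advertiseAddress: 192.168.133.129|' \
  -e 's|imageRepository: .*|imageRepository: registry.aliyuncs.com/google_containers|' \
  /tmp/init-sample.yaml
grep -E 'advertiseAddress|imageRepository' /tmp/init-sample.yaml
```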
Initialization

You can modify the IP address in this file and use it directly: /resources/yaml/cluster-init.yaml
[root@master ~]# kubeadm init --config /resources/yaml/cluster-init.yaml
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.133.129:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3c2a964155d000ac6950f7bc33f765e937fe2f58fdf4c2fe99792f886a4a84a4
Command to pull the images manually:
kubeadm config images pull --config cluster-init.yaml
Configure kubectl

Configure kubectl on the master
[root@master ~]# mkdir -p ~/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf ~/.kube/config
Configure kubectl on the client VM
[root@clientvm k8s]# mkdir -p ~/.kube
[root@clientvm k8s]# scp master:/root/.kube/config ~/.kube/
[root@clientvm k8s]# kubectl get node
NAME                 STATUS   ROLES    AGE   VERSION
master.example.com   Ready    master   46m   v1.20.0
Add nodes
[root@clientvm k8s]# ssh worker1
Last login: Thu Nov 26 16:27:47 2020 from 192.168.241.132
[root@worker1 ~]#
[root@worker1 ~]#
[root@worker1 ~]# kubeadm join 192.168.133.129:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:00a111079e7d2e367e2b21500c64202a981898cf7e058957cfa5d06e933c2362

[root@clientvm k8s]# ssh worker2
Last login: Thu Nov 26 16:27:44 2020 from 192.168.241.132
[root@worker2 ~]# kubeadm join 192.168.133.129:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:00a111079e7d2e367e2b21500c64202a981898cf7e058957cfa5d06e933c2362
Deploy the calico network component

See the official documentation:
https://docs.projectcalico.org/getting-started/kubernetes/quickstart
Or use the following yaml directly:
https://docs.projectcalico.org/v3.14/manifests/calico.yaml
https://docs.projectcalico.org/v3.17/manifests/calico.yaml
For K8S 1.20, use: calico-v3.14.yaml
For K8S 1.23, use: calico-v3.21.yaml (the images have been preloaded)
[root@master ~]# cd /resources/yaml/
[root@master yaml]# ls
calico.yaml  cluster-init.yaml
[root@master yaml]# kubectl get node
NAME                 STATUS     ROLES    AGE     VERSION
master.example.com   NotReady   master   4m57s   v1.20.0
[root@master yaml]#
[root@master yaml]# kubectl apply -f calico-v3.21.yaml
After the cluster deployment is complete, create a snapshot for each of the 4 VMs.

Other Operations

ComponentStatus resource error

Symptom:
[root@master yaml]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
Solution:
# Edit the following two configuration files and comment out the "- --port=0" line
[root@master yaml]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
[root@master yaml]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
[root@master yaml]# grep 'port=0' /etc/kubernetes/manifests/kube-controller-manager.yaml /etc/kubernetes/manifests/kube-scheduler.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml:#    - --port=0
/etc/kubernetes/manifests/kube-scheduler.yaml:#    - --port=0

## Restart the kubelet.service service
[root@master yaml]# systemctl restart kubelet.service
[root@master yaml]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
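The same fix can be applied without opening vim, using a sed one-liner. A sketch on a sample manifest fragment in /tmp; on the master, run the same sed against the two real files under /etc/kubernetes/manifests/ and then restart kubelet:

```shell
# Stand-in for the relevant part of a static pod manifest (illustrative)
cat > /tmp/kube-scheduler-sample.yaml <<'EOF'
    - --leader-elect=true
    - --port=0
EOF
# Comment out the --port=0 flag while preserving the original indentation
sed -i 's|^\(\s*\)- --port=0|\1#- --port=0|' /tmp/kube-scheduler-sample.yaml
cat /tmp/kube-scheduler-sample.yaml
```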
Delete a node

Run on the node to be deleted:
[root@worker2 ~]# kubeadm reset -f
[root@worker2 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
[root@worker2 ~]# ipvsadm -C
Run on the master:
[root@master yaml]# kubectl delete node worker2.example.com
node "worker2.example.com" deleted
[root@master yaml]# kubectl delete node worker1.example.com
node "worker1.example.com" deleted
[root@master yaml]#
[root@master yaml]# kubectl get node
NAME                 STATUS   ROLES    AGE   VERSION
master.example.com   Ready    master   32m   v1.20.0
Joining a node after the token expires

List the tokens on the master
[root@master yaml]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
abcdef.0123456789abcdef   23h   2020-11-27T16:29:16+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
Generate a permanent token
[root@master yaml]# kubeadm token create --ttl 0
[root@master yaml]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
2kpxk0.3861kgminh7jafrp   <forever>   <never>                     authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
abcdef.0123456789abcdef   23h         2020-11-27T16:29:16+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
Get the discovery-token-ca-cert-hash
[root@master yaml]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
00a111079e7d2e367e2b21500c64202a981898cf7e058957cfa5d06e933c2362
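The pipeline above extracts the CA public key, DER-encodes it, and takes its SHA-256 digest. The same pipeline can be tried on a throwaway self-signed certificate to see the shape of the result (a 64-character hex string); on the real master, the input is /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway self-signed cert (demo only, not the cluster CA)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=demo" -days 1 2>/dev/null
# Same pipeline as above: pubkey -> DER -> sha256 -> strip the "(stdin)= " prefix
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```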
Run the join command on the node to join the cluster
[root@worker1 ~]# kubeadm join 192.168.133.129:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:00a111079e7d2e367e2b21500c64202a981898cf7e058957cfa5d06e933c2362
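For scripting, the join command can be assembled from the token and hash gathered in the previous steps. A sketch using this lab's example values (substitute your own token, hash, and API server address):

```shell
# Token from `kubeadm token list` and hash from the openssl pipeline above
TOKEN=abcdef.0123456789abcdef
CA_HASH=00a111079e7d2e367e2b21500c64202a981898cf7e058957cfa5d06e933c2362
# Print the command to run on each worker (pipe to ssh or run manually)
echo "kubeadm join 192.168.133.129:6443 --token ${TOKEN} --discovery-token-ca-cert-hash sha256:${CA_HASH}"
```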
Containerd Reference

Using Containerd as the container runtime

Containerd installation and configuration reference:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd
Note: to match your kubeadm version, you also need to change the image registry to registry.aliyuncs.com/google_containers and set the pause container to a version compatible with your K8S release.
......
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
......
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    ......
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
Reference configuration file: /resources/playbooks/k8s/config.toml

Deployment steps:
cd /resources/playbooks/k8s
ansible-playbook -i hosts containerd.yaml
ansible-playbook -i hosts tune-os.yml
ansible-playbook -i hosts kubeadm-kubelet.yml
kubeadm init --config /resources/yaml/cluster-init-containerd.yaml
In cluster-init-containerd.yaml, set criSocket to the containerd runtime socket, at the same path as in the containerd configuration:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.126.128
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master.example.com
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: example.com
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
