Kubernetes Architecture Diagram

(image: Kubernetes architecture diagram)

Installing Kubernetes with kubeadm

master & node

Hardware Requirements

Every node needs at least 2 CPU cores and 4 GB of RAM.

Linux Configuration

```shell
# Set a unique hostname on each machine
hostnamectl set-hostname xxxx

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```
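The `sed -ri 's/.*swap.*/#&/' /etc/fstab` line above keeps swap disabled across reboots by commenting out the swap entry. A minimal sketch of what it does, run against a throwaway sample file instead of the real /etc/fstab:

```shell
#!/usr/bin/env bash
# Demo of the swap-disabling sed from the step above, applied to a
# sample fstab copy rather than the real /etc/fstab.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# In the replacement, "#&" re-emits the whole matched line prefixed
# with '#', i.e. it comments the swap line out.
sed -ri 's/.*swap.*/#&/' "$FSTAB"
cat "$FSTAB"
```

Only the line containing `swap` is commented out; the root filesystem entry is left untouched.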

Installing Docker

```shell
# Remove any existing Docker installation
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine

# Install yum utilities
yum install -y yum-utils

# Configure a domestic (Aliyun) mirror for the Docker repo
yum-config-manager \
    --add-repo \
    https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Refresh the yum index
yum makecache fast

# Install docker-ce, the Community Edition (EE is the paid Enterprise Edition)
yum install docker-ce docker-ce-cli containerd.io

# Start Docker and enable it at boot
systemctl enable docker
systemctl start docker
```

Use `docker version` to check that the installation succeeded:

```shell
[root@VM-8-9-centos ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.9
 API version:       1.41
 Go version:        go1.16.8
 Git commit:        c2ea9bc
 Built:             Mon Oct  4 16:08:25 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.9
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.8
  Git commit:       79ea9d3
  Built:            Mon Oct  4 16:06:48 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.11
  GitCommit:        5b46e404f6b9f661a205e28d59c982d3634148f8
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```
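One extra step worth considering here (not part of the original notes): Docker defaults to the `cgroupfs` cgroup driver, while the Kubernetes documentation recommends `systemd` on systemd-based distros, and a driver mismatch between Docker and the kubelet can prevent the kubelet from starting. A commonly used `/etc/docker/daemon.json` for this setup looks like:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
```

After writing the file, restart Docker with `sudo systemctl restart docker`. If you set this, make sure the kubelet is configured for the same driver.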

Pre-pulling the Required Images

```shell
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh
```

Configuring the master Node Hostname Mapping

Pick one machine to be the master node and use the other machines as worker nodes, then look up the master node's IP address.

(image: master node IP address)

Write this address into the hosts file of the master node and every worker node:

```shell
echo "xx.xx.xx.xx cluster-endpoint" >> /etc/hosts
```
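Note that `>>` appends unconditionally, so running the command twice leaves duplicate entries. A hypothetical idempotent variant (the `10.0.0.5` address is a placeholder, and a temp file stands in for /etc/hosts so the sketch can run anywhere):

```shell
#!/usr/bin/env bash
# Idempotent variant of the hosts-file step above. HOSTS points at a
# temp file here; on a real node you would point it at /etc/hosts.
HOSTS=$(mktemp)

add_endpoint() {
  # Only append the mapping if no cluster-endpoint entry exists yet
  grep -q 'cluster-endpoint' "$HOSTS" || echo "$1 cluster-endpoint" >> "$HOSTS"
}

add_endpoint 10.0.0.5
add_endpoint 10.0.0.5   # second call is a no-op
grep 'cluster-endpoint' "$HOSTS"
```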

Installing kubelet, kubeadm, and kubectl

```shell
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
```

master

Installing the Control Plane

Install the control plane on the master node. When running the command, make sure that the ranges of service-cidr, pod-network-cidr, and the docker0 and eth0 networks do not overlap.

```shell
kubeadm init \
  --apiserver-advertise-address=<master node IP> \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
```
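The non-overlap requirement can be sanity-checked mechanically. A hypothetical helper (not part of kubeadm) that tests whether two IPv4 CIDR blocks overlap, using only bash arithmetic:

```shell
#!/usr/bin/env bash
# Check whether two IPv4 CIDR blocks overlap, e.g. to sanity-check
# --service-cidr against --pod-network-cidr before running kubeadm init.

ip_to_int() {
  # Convert dotted-quad IPv4 to a 32-bit integer
  local IFS=.
  read -r a b c d <<<"$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidr_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local i1 i2 m1 m2 s1 e1 s2 e2
  i1=$(ip_to_int "$net1"); i2=$(ip_to_int "$net2")
  m1=$(( 0xFFFFFFFF << (32 - len1) & 0xFFFFFFFF ))
  m2=$(( 0xFFFFFFFF << (32 - len2) & 0xFFFFFFFF ))
  s1=$(( i1 & m1 )); e1=$(( s1 | ~m1 & 0xFFFFFFFF ))   # first/last address of block 1
  s2=$(( i2 & m2 )); e2=$(( s2 | ~m2 & 0xFFFFFFFF ))   # first/last address of block 2
  # Two ranges overlap iff each starts at or before the other ends
  (( s1 <= e2 && s2 <= e1 ))
}

cidr_overlap 10.96.0.0/16 192.168.0.0/16 && echo "overlap" || echo "ok"
```

For the two CIDRs used in the init command above this prints `ok`, confirming the service and pod networks are disjoint.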

After the installation finishes, output like the following is printed.

```shell
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3
```

Just follow the instructions in the output above.

Setting Up the kubeconfig

As a regular user:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Or, if you are root:

```shell
export KUBECONFIG=/etc/kubernetes/admin.conf
```

Installing the Network Add-on

```shell
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
```

Calico's manifest defaults to the 192.168.0.0/16 pod CIDR, which matches the `--pod-network-cidr` passed to `kubeadm init` above; if you used a different pod CIDR, edit `CALICO_IPV4POOL_CIDR` in calico.yaml before applying it.

node

Joining the Cluster

The command below is the last part of the output printed after the control plane was installed; just copy it to each worker node and run it there.

```shell
kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3
```

Getting a New Token

The token printed when the control plane was installed is only valid for a limited time (24 hours by default). If it has expired and a node needs to join the cluster, generate a new join command:

```shell
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join cluster-endpoint:6443 --token 0d4c8q.tjsk2ge3u95070ll --discovery-token-ca-cert-hash sha256:5f7659f39b097ed9312bb5b266d629551d93aaaa087c40076b9cba03988f50f9
```
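For reference, the `--discovery-token-ca-cert-hash` value is just a SHA-256 digest of the cluster CA's public key; the openssl pipeline below is the one documented for kubeadm. In this sketch a throwaway self-signed certificate stands in for `/etc/kubernetes/pki/ca.crt`; on a real master, point at that file instead:

```shell
#!/usr/bin/env bash
# Generate a throwaway self-signed cert as a stand-in for the cluster CA
# (on a real master, skip this and use /etc/kubernetes/pki/ca.crt).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 1 2>/dev/null

# Extract the public key, DER-encode it, and hash it with SHA-256
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```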

Checking Cluster Status

List the cluster's nodes:

```shell
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   44h   v1.20.9
k8s-node1    Ready    <none>                 44h   v1.20.9
k8s-node2    Ready    <none>                 44h   v1.20.9
```

Check whether the cluster is running normally:

```shell
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-659bd7879c-7r9cg   1/1     Running   2          44h
kube-system   calico-node-c5bcv                          1/1     Running   2          44h
kube-system   calico-node-plbqz                          1/1     Running   2          44h
kube-system   calico-node-v9lrm                          1/1     Running   2          44h
kube-system   coredns-5897cd56c4-lb5zl                   1/1     Running   2          44h
kube-system   coredns-5897cd56c4-x9sc9                   1/1     Running   2          44h
kube-system   etcd-k8s-master                            1/1     Running   2          44h
kube-system   kube-apiserver-k8s-master                  1/1     Running   3          44h
kube-system   kube-controller-manager-k8s-master         1/1     Running   2          44h
kube-system   kube-proxy-58mnj                           1/1     Running   2          44h
kube-system   kube-proxy-9wxjw                           1/1     Running   2          44h
kube-system   kube-proxy-jzv5s                           1/1     Running   2          44h
kube-system   kube-scheduler-k8s-master                  1/1     Running   2          44h
```

After creating a new cluster, if the command above shows every pod as READY 1/1 with STATUS Running, the cluster is running normally.
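That readiness check can be scripted. A hypothetical helper that decides, from `kubectl get pods -A --no-headers` style output, whether every pod is fully ready and Running; it is fed captured sample text here so it runs anywhere, but on a real cluster you would pipe the live kubectl output in instead:

```shell
#!/usr/bin/env bash
# Exit 0 if every pod line has READY x/x and STATUS Running, else exit 1.
all_ready() {
  awk '{ split($3, r, "/"); if (r[1] != r[2] || $4 != "Running") bad = 1 }
       END { exit bad }'
}

# Captured sample output (columns: NAMESPACE NAME READY STATUS RESTARTS AGE)
sample='kube-system coredns-5897cd56c4-lb5zl 1/1 Running 2 44h
kube-system etcd-k8s-master 1/1 Running 2 44h'

echo "$sample" | all_ready && echo "cluster looks healthy"
```

On a real master this would be `kubectl get pods -A --no-headers | all_ready`.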