System configuration

  1. Number of machines: 3
  2. Operating system: CentOS 8.1 (the steps also apply to 7.3)
  3. Hostnames: set the three machines to master, node1 and node2 respectively
  4. Name resolution: every machine must be able to resolve the others; add to /etc/hosts:
     192.168.81.30 master
     192.168.81.31 node1
     192.168.81.32 node2
  5. Disable the firewall at boot:
     # systemctl disable firewalld
  6. Permanently disable SELinux (match whatever the current mode is, not only permissive):
     # sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
     SELINUX=disabled
  7. Turn off system swap. Kubernetes requires this since 1.8: with swap enabled, kubelet will not start under the default configuration. Comment out the swap mount in /etc/fstab, turn swap off for the running system as well, and confirm with free -m:
     [root@master /]# sed -i 's/.*swap.*/#&/' /etc/fstab
     [root@master /]# swapoff -a
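The fstab edit above can be rehearsed against a scratch copy before touching the real file. A minimal sketch, using the same sed expression as the step above; the sample fstab content is illustrative only:

```shell
#!/usr/bin/env bash

disable_swap_in_fstab() {
  # Comment out every fstab line that mentions swap (same sed as the guide).
  sed -i 's/.*swap.*/#&/' "$1"
}

# On the real host you would then also run:
#   swapoff -a        # turn swap off for the running system
#   free -m           # confirm the Swap row reads 0
```

After editing the real /etc/fstab, `free -m` should show 0 for swap once `swapoff -a` has run; the fstab change only prevents it from coming back at the next boot.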

Base software installation

  1. Install Docker:
     # yum install -y yum-utils device-mapper-persistent-data lvm2 \
         && yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo \
         && yum makecache \
         && yum install -y docker-ce \
         && systemctl enable docker.service && systemctl start docker
  2. Install kubeadm and kubelet on all machines. First configure the Aliyun yum repo:
     # cat <<EOF > /etc/yum.repos.d/kubernetes.repo
     [kubernetes]
     name=Kubernetes
     baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
     enabled=1
     gpgcheck=1
     repo_gpgcheck=1
     gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
     EOF
  3. Install the latest kubeadm:
     # yum makecache
     # yum install -y kubelet kubeadm kubectl ipvsadm
     Because the mirror is not synced through an official channel, the GPG index check may fail; in that case use:
     # yum install -y --nogpgcheck kubelet kubeadm kubectl
     Note: to pin a specific version of kubeadm (when the default yum version does not match, e.g. v1.17.2):
     # yum install kubelet-1.16.0-0.x86_64 kubeadm-1.16.0-0.x86_64 kubectl-1.16.0-0.x86_64
  4. Configure kernel parameters. Load br_netfilter first, otherwise the bridge sysctl keys do not exist yet:
     # modprobe br_netfilter
     # cat <<EOF > /etc/sysctl.d/k8s.conf
     net.bridge.bridge-nf-call-ip6tables = 1
     net.bridge.bridge-nf-call-iptables = 1
     vm.swappiness=0
     EOF
     # sysctl --system
     # sysctl -p /etc/sysctl.d/k8s.conf
  5. Load the ipvs kernel modules. They must be reloaded after every reboot (this can be automated via /etc/rc.local or /etc/modules-load.d):
     # modprobe ip_vs
     # modprobe ip_vs_rr
     # modprobe ip_vs_wrr
     # modprobe ip_vs_sh
     # modprobe nf_conntrack_ipv4
     Note: on CentOS 8 kernels nf_conntrack_ipv4 has been merged into nf_conntrack; use modprobe nf_conntrack there.
  6. Verify the modules are loaded:
     # lsmod | grep ip_vs
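Instead of re-running the modprobe commands after every reboot (or appending them to /etc/rc.local), the modules can be listed in /etc/modules-load.d, which systemd loads at boot. A sketch; the target path is parameterized so the generator can be checked against a scratch file, and the module list mirrors the five modprobe lines above:

```shell
#!/usr/bin/env bash

write_ipvs_modules_conf() {
  local target=$1   # normally /etc/modules-load.d/ipvs.conf
  # One module name per line, the format systemd-modules-load expects.
  printf '%s\n' ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4 > "$target"
}

# On the real host:
#   write_ipvs_modules_conf /etc/modules-load.d/ipvs.conf
#   for m in $(cat /etc/modules-load.d/ipvs.conf); do modprobe "$m"; done
```

On CentOS 8, substitute nf_conntrack for nf_conntrack_ipv4 in the list, as noted above.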


Pull the images

  1. All three nodes must download the images.
     - When downloading, update the version numbers to the current official release; even the latest download may not match, so follow the versions reported in error messages.
     - Versions change with every deployment; a failed initialization prints the versions it requires.
     - The kubeadm version and the image versions must correspond.
  2. List the image versions the installed kubeadm expects:
     [root@master ~]# kubeadm config images list
     k8s.gcr.io/kube-apiserver:v1.17.2
     k8s.gcr.io/kube-controller-manager:v1.17.2
     k8s.gcr.io/kube-scheduler:v1.17.2
     k8s.gcr.io/kube-proxy:v1.17.2
     k8s.gcr.io/pause:3.1
     k8s.gcr.io/etcd:3.4.3-0
     k8s.gcr.io/coredns:1.6.5
  3. Pull each image from the Aliyun mirror and retag it to the name kubeadm expects:
     docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.2
     docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.2 k8s.gcr.io/kube-apiserver:v1.17.2
     docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2
     docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2 k8s.gcr.io/kube-controller-manager:v1.17.2
     docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.2
     docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.2 k8s.gcr.io/kube-scheduler:v1.17.2
     docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2
     docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2 k8s.gcr.io/kube-proxy:v1.17.2
     docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
     docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
     docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
     docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
     docker pull coredns/coredns:1.6.5
     docker tag coredns/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5
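The repetitive pull/tag pairs above can be generated from `kubeadm config images list` instead of maintained by hand. A sketch, assuming every required image exists under the Aliyun mirror namespace with the same name and tag (coredns sometimes differs; the guide pulls it from Docker Hub instead):

```shell
#!/usr/bin/env bash
set -euo pipefail

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers

# Map an official image name to its assumed mirror counterpart,
# e.g. k8s.gcr.io/kube-apiserver:v1.17.2 -> $MIRROR/kube-apiserver:v1.17.2
mirror_image() {
  local official=$1
  echo "$MIRROR/${official##*/}"   # strip registry+path, keep name:tag
}

# Pull each image kubeadm expects from the mirror, then retag it
# so kubeadm finds it under the official name. Run on all three nodes.
pull_and_tag() {
  local official
  for official in $(kubeadm config images list 2>/dev/null); do
    docker pull "$(mirror_image "$official")"
    docker tag  "$(mirror_image "$official")" "$official"
  done
}
```

Because the list comes from the installed kubeadm, a version bump automatically pulls the matching images, which addresses the "versions must correspond" caveat above.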


Configure and start kubelet on all nodes

  1. Configure kubelet to use the locally tagged pause image.
  2. Get Docker's cgroup driver. The naive grep picks up an extra field on newer Docker releases (the "Cgroup Version" line):
     # DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f4)
     # echo $DOCKER_CGROUPS
     cgroupfs 1
     Use this command instead:
     # DOCKER_CGROUPS=$(docker info |grep cgro|awk -F':| ' '{print $NF}')
     # echo $DOCKER_CGROUPS
     cgroupfs
  3. Configure kubelet's cgroup driver to match:
     [root@master ~]# cat >/etc/sysconfig/kubelet<<EOF
     KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=k8s.gcr.io/pause:3.1"
     EOF
  4. Start kubelet:
     # systemctl daemon-reload
     # systemctl enable kubelet && systemctl start kubelet
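The driver extraction and the sysconfig rendering can be checked offline against canned `docker info` output, without a running Docker daemon. The sample output below is illustrative; `docker info -f '{{.CgroupDriver}}'` is a cleaner alternative when the daemon is up:

```shell
#!/usr/bin/env bash

# Canned fragment of `docker info` output (illustrative sample).
sample=' Cgroup Driver: cgroupfs
 Cgroup Version: 1'

# Same extraction as the guide: split on ':' or ' ', take the last field.
# Only the "Cgroup Driver" line contains a lowercase "cgro" (in cgroupfs),
# so the "Cgroup Version" line is not matched.
DOCKER_CGROUPS=$(printf '%s\n' "$sample" | grep cgro | awk -F':| ' '{print $NF}')

# Render the /etc/sysconfig/kubelet content for a given cgroup driver.
render_kubelet_sysconfig() {
  printf 'KUBELET_EXTRA_ARGS="--cgroup-driver=%s --pod-infra-container-image=k8s.gcr.io/pause:3.1"\n' "$1"
}

# On the real host:
#   render_kubelet_sysconfig "$DOCKER_CGROUPS" > /etc/sysconfig/kubelet
```

A mismatch between Docker's and kubelet's cgroup driver is exactly what makes kubelet crash-loop, which is why this step derives the value instead of hard-coding it.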

Notes

  1. After the steps above, kubelet is still not running; this is normal.
  2. In short, kubelet keeps restarting until kubeadm init generates its configuration.

Initialize the cluster

  1. Run the initialization on the master node. A plain yum install (which pulls the newest kubeadm) typically reports:
     this version of kubeadm only supports deploying clusters with the control plane version >= 1.24.0. Current version: v1.17.2
  2. During initialization, download the image versions the error output asks for. Supplementary pulls:
     docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0
     docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
     docker pull coredns/coredns:1.6.2
     docker tag coredns/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
  3. Important: record the commands printed at the end of a successful initialization, as shown here:
     To start using your cluster, you need to run the following as a regular user:
     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config
     Then you can join any number of worker nodes by running the following on each as root:
     kubeadm join 192.168.81.30:6443 --token dcg8kk.t42rp6f32p2szdda \
         --discovery-token-ca-cert-hash sha256:7b21de9e4be342bd5d971fbc9859169c7606c29d735f057a767ffcdd439bce10
  4. Key phases in the output:
     [kubelet] writes the kubelet configuration file /var/lib/kubelet/config.yaml
     [certificates] generates the various certificates
     [kubeconfig] generates the kubeconfig files
     [bootstraptoken] generates the token; record it, kubeadm join needs it later when adding nodes to the cluster
  5. Configure kubectl on the master node:
     [root@master ~]# mkdir -p $HOME/.kube
     [root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     [root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
  6. Check the nodes:
     [root@master ~]# kubectl get nodes
     NAME     STATUS     ROLES    AGE     VERSION
     master   NotReady   master   6m31s   v1.16.0


Initialization transcript

  [root@master ~]# kubeadm init --kubernetes-version=v1.17.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.81.30 --ignore-preflight-errors=Swap
  [init] Using Kubernetes version: v1.17.2
  [preflight] Running pre-flight checks
  [WARNING KubernetesVersion]: Kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. Kubernetes version: 1.17.2. Kubeadm version: 1.16.x
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Activating the kubelet service
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.81.30]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.81.30 127.0.0.1 ::1]
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.81.30 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [apiclient] All control plane components are healthy after 16.502507 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Skipping phase. Please see --upload-certs
  [mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: dcg8kk.t42rp6f32p2szdda
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy
  Your Kubernetes control-plane has initialized successfully!
  To start using your cluster, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
  Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 192.168.81.30:6443 --token dcg8kk.t42rp6f32p2szdda \
      --discovery-token-ca-cert-hash sha256:7b21de9e4be342bd5d971fbc9859169c7606c29d735f057a767ffcdd439bce10


Configure the network plugin

  1. Download the yaml manifest on the master node. Note: the version changes frequently; fetch the current yaml from
     https://raw.githubusercontent.com/coreos/flannel/master/Documentation/
     [root@master ~]# cd ~ && mkdir flannel && cd flannel
     [root@master flannel]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  2. Inspect kube-flannel.yml and pre-pull the images it references:
     [root@master flannel]# grep install-cni -A 2 kube-flannel.yml
     [root@master flannel]# docker pull docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
     To avoid surprises later, pull every image the file references:
     [root@master flannel]# docker pull docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
     Note: older manifests default to quay.io/coreos/flannel:v0.10.0-amd64. If you can pull that, no change is needed; otherwise replace every flannel image reference in the yml (there are several) with the Aliyun mirror:
     image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
  3. Pin the network interface. Back up the file, then add --iface=<iface-name> to the flanneld startup arguments:
     [root@master flannel]# cp kube-flannel.yml kube-flannel.yml_init
     containers:
     - name: kube-flannel
       image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
       command:
       - /opt/bin/flanneld
       args:
       - --ip-masq
       - --kube-subnet-mgr
       - --iface=ens33    # add this line: the NIC name, e.g. ens33 or eth0
     ⚠️ The value of --iface must be your machine's actual interface; multiple --iface flags can be given for multi-NIC hosts.
  4. Apply it:
     [root@master flannel]# kubectl apply -f ~/flannel/kube-flannel.yml
     namespace/kube-flannel created
     clusterrole.rbac.authorization.k8s.io/flannel created
     clusterrolebinding.rbac.authorization.k8s.io/flannel created
     serviceaccount/flannel created
     configmap/kube-flannel-cfg created
     daemonset.apps/kube-flannel-ds created
  5. Verify:
     [root@master flannel]# kubectl get pods --namespace kube-system
     NAME                             READY   STATUS    RESTARTS   AGE
     coredns-5644d7b6d9-bs4gj         1/1     Running   0          31m
     coredns-5644d7b6d9-xf25l         1/1     Running   0          31m
     etcd-master                      1/1     Running   0          30m
     kube-apiserver-master            1/1     Running   0          30m
     kube-controller-manager-master   1/1     Running   0          30m
     kube-proxy-mp8tf                 1/1     Running   0          31m
     kube-scheduler-master            1/1     Running   0          30m
     [root@master flannel]# kubectl get service
     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
     kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   32m
     [root@master flannel]# kubectl get svc --namespace kube-system
     NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
     kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   32m
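The manual edits above (swapping the image for the mirror and inserting the --iface flag) can also be scripted, so they survive re-downloading the manifest. A sketch using GNU sed; IFACE and MIRROR_IMAGE are assumptions to set for your environment:

```shell
#!/usr/bin/env bash

IFACE=ens33   # your actual NIC name
MIRROR_IMAGE='registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64'

patch_flannel_yaml() {
  local f=$1
  sed -i \
    -e "s|image: quay.io/coreos/flannel:.*|image: $MIRROR_IMAGE|" \
    -e "s|^\( *\)- --kube-subnet-mgr|\1- --kube-subnet-mgr\n\1- --iface=$IFACE|" \
    "$f"
  # First expression: rewrite every quay.io flannel image to the mirror.
  # Second: append an --iface line after --kube-subnet-mgr, reusing the
  # captured indentation so the YAML stays valid.
}

# patch_flannel_yaml kube-flannel.yml
```

Keeping the untouched copy (kube-flannel.yml_init, as in step 3) makes it easy to diff what the patch changed before running kubectl apply.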


Join all node machines to the cluster

  1. On every node machine, run the join command returned by the successful master initialization:
     kubeadm join 192.168.81.30:6443 --token dcg8kk.t42rp6f32p2szdda \
         --discovery-token-ca-cert-hash sha256:7b21de9e4be342bd5d971fbc9859169c7606c29d735f057a767ffcdd439bce10
  2. Then check from the master:
     [root@master flannel]# kubectl get nodes
     NAME     STATUS   ROLES    AGE     VERSION
     master   Ready    master   42m     v1.16.0
     node1    Ready    <none>   5m34s   v1.16.0
     node2    Ready    <none>   5m30s   v1.16.0
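If the join command was not recorded, it can be regenerated on the master with `kubeadm token create --print-join-command`. The discovery hash itself is just the SHA-256 of the cluster CA's public key, so it can also be recomputed by hand from /etc/kubernetes/pki/ca.crt; a sketch of that calculation:

```shell
#!/usr/bin/env bash

# Compute the value for --discovery-token-ca-cert-hash (without the
# "sha256:" prefix) from a CA certificate in PEM format.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | sha256sum | cut -d' ' -f1
}

# On the master:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```

Tokens expire (24h by default), so on a cluster older than a day a fresh `kubeadm token create` is needed before new nodes can join.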
