Deploying a Kubernetes Cluster with kubeadm

Official documentation:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Remember to disable the firewall and SELinux; every machine needs at least 2 CPU cores and 4 GB of RAM.

Official documentation for deploying a highly available cluster with kubeadm:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

Getting the images

Pull the images from Docker Hub or the Aliyun mirror and re-tag them as k8s.gcr.io images:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.1 k8s.gcr.io/kube-apiserver:v1.19.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.1 k8s.gcr.io/kube-controller-manager:v1.19.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.1 k8s.gcr.io/kube-scheduler:v1.19.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.1 k8s.gcr.io/kube-proxy:v1.19.1
docker pull mirrorgooglecontainers/pause:3.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

The image versions change with every release, so adjust the tags above to match your target Kubernetes version. (A screenshot here listed the exact images for v1.15.)

Pulling Google's images from Docker Hub is also painfully slow; in my case the pulls never finished.

Downloading the images directly

docker pull k8s.gcr.io/kube-apiserver:v1.16.1
docker pull k8s.gcr.io/kube-proxy:v1.16.1
docker pull k8s.gcr.io/kube-controller-manager:v1.16.1
docker pull k8s.gcr.io/kube-scheduler:v1.16.1
docker pull k8s.gcr.io/etcd:3.3.15
docker pull k8s.gcr.io/pause:3.1
docker pull k8s.gcr.io/coredns:1.6.2

Because the images are updated with every release, change the version tags to match the release you are deploying.

Downloading from the Aliyun registry

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.4
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[root@k8s-master ~]# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64   # the flannel network-plugin image

After downloading, re-tag every Aliyun image with its k8s.gcr.io name, in the style of k8s.gcr.io/kube-controller-manager:v1.17.4:

[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4 k8s.gcr.io/kube-controller-manager:v1.17.4
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.4 k8s.gcr.io/kube-proxy:v1.17.4
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4 k8s.gcr.io/kube-apiserver:v1.17.4
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4 k8s.gcr.io/kube-scheduler:v1.17.4
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1

The version numbers themselves do not change during re-tagging.

Every machine in the cluster must have these images; a scripted version of the pull-and-retag sequence is sketched below.
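Rather than typing each pull/tag pair by hand, the whole sequence can be scripted. A minimal bash sketch, assuming the v1.17.4 image set from the Aliyun registry above; adjust KUBE_VERSION and the component versions for your release:

#!/bin/bash
# Pull each image from the Aliyun mirror, then re-tag it with its k8s.gcr.io name.
ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
KUBE_VERSION=v1.17.4

for image in kube-apiserver:${KUBE_VERSION} \
             kube-controller-manager:${KUBE_VERSION} \
             kube-scheduler:${KUBE_VERSION} \
             kube-proxy:${KUBE_VERSION} \
             coredns:1.6.5 etcd:3.4.3-0 pause:3.1; do
    docker pull ${ALIYUN}/${image}
    docker tag ${ALIYUN}/${image} k8s.gcr.io/${image}
done

Run it on every machine in the cluster.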

Installing Docker (run on all three machines)

# yum remove docker \
             docker-client \
             docker-client-latest \
             docker-common \
             docker-latest \
             docker-latest-logrotate \
             docker-logrotate \
             docker-selinux \
             docker-engine-selinux \
             docker-engine
# yum install -y yum-utils device-mapper-persistent-data lvm2 git
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum install docker-ce -y

Start Docker and enable it at boot:
# systemctl start docker && systemctl enable docker

Full installation walkthrough

Prepare three machines:

192.168.246.166 kub-k8s-master
192.168.246.167 kub-k8s-node1
192.168.246.169 kub-k8s-node2

Set each machine's hostname and add local name resolution so the hosts can resolve one another (example below):
# vim /etc/hosts
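For reference, the resulting setup could look like this; a sketch based on the three IPs and hostnames above (run the hostnamectl line with the matching name on each machine):

# hostnamectl set-hostname kub-k8s-master    # kub-k8s-node1 / kub-k8s-node2 on the workers
# cat >> /etc/hosts <<EOF
192.168.246.166 kub-k8s-master
192.168.246.167 kub-k8s-node1
192.168.246.169 kub-k8s-node2
EOF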

System configuration on all machines

1. Disable the firewall:
# systemctl stop firewalld
# systemctl disable firewalld

2. Disable SELinux for the current session:
# setenforce 0

3. Edit /etc/selinux/config and set SELINUX to disabled so the change survives a reboot:
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
SELINUX=disabled

Disabling system swap (a requirement in newer releases)

Starting with Kubernetes 1.8, kubelet will not start under the default configuration if swap is enabled. Method 1: relax the restriction with the kubelet startup flag --fail-swap-on=false. Method 2: turn off the system swap. The commands below implement method 2; a sketch of method 1 follows them.

1. Turn swap off immediately:
# swapoff -a

2. Comment out the swap entry in /etc/fstab so it is not re-mounted at boot, then confirm with free -m that swap is off:
[root@localhost /]# sed -i 's/.*swap.*/#&/' /etc/fstab
# free -m
              total    used    free   shared  buff/cache   available
Mem:           3935     144    3415        8         375        3518
Swap:             0       0       0
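For completeness, method 1 (leaving swap enabled) would relax kubelet's check through the --fail-swap-on=false flag mentioned above. A sketch using the same /etc/sysconfig/kubelet drop-in this guide configures later; this walkthrough itself uses method 2:

# cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF

kubeadm init must then also be told to skip its own swap preflight check; the init command used later in this guide already passes --ignore-preflight-errors=Swap.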

Deploying Kubernetes with kubeadm:

Install kubeadm and kubelet on all nodes:

Configure the yum repository:
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
On all nodes:

1. Install:
# yum makecache fast
# yum install -y kubelet kubeadm kubectl ipvsadm
======================================================================
Or pin a specific version:
[root@k8s-master ~]# yum install -y kubelet-1.17.4-0.x86_64 kubeadm-1.17.4-0.x86_64 kubectl-1.17.4-0.x86_64 ipvsadm

2. Load the IPVS kernel modules.
They must be reloaded after every reboot (they can be added to /etc/rc.local to load automatically at boot; see the sketch below):
# modprobe ip_vs
# modprobe ip_vs_rr
# modprobe ip_vs_wrr
# modprobe ip_vs_sh
# modprobe nf_conntrack_ipv4

3. Edit the file to load them at boot and make it executable:
# vim /etc/rc.local
# chmod +x /etc/rc.local
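What gets appended to /etc/rc.local could look like this; a sketch mirroring the modprobe commands above:

# cat >> /etc/rc.local <<EOF
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
EOF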
4. Configure the kernel forwarding parameters, otherwise later steps may fail:
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

5. Apply the configuration:
# sysctl --system

6. If net.bridge.bridge-nf-call-iptables errors out, load the br_netfilter module:
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/k8s.conf

7. Verify that the modules loaded:
# lsmod | grep ip_vs
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 141092  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133387  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack


Configuring and starting kubelet (all nodes)

1. Configure kubelet's pause image and cgroup driver.
Start Docker and read its cgroup driver:
# systemctl start docker && systemctl enable docker
# DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f4)
# echo $DOCKER_CGROUPS
=================================
Alternatively, capture it into a variable with awk:
[root@k8s-master ~]# DOCKER_CGROUPS=`docker info |grep 'Cgroup' | awk '{print $3}'`
[root@k8s-master ~]# echo $DOCKER_CGROUPS
cgroupfs

2. Configure the kubelet cgroup driver with the domestic (Aliyun) pause image. Note: since we are using the Google images in this guide, use step 3 below instead:
# cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF

3. Configure the kubelet cgroup driver with the k8s.gcr.io pause image:
# cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=k8s.gcr.io/pause:3.1"
EOF

Or hard-code the driver value:
# cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs --pod-infra-container-image=k8s.gcr.io/pause:3.1"
EOF
Start kubelet:
# systemctl daemon-reload
# systemctl enable kubelet && systemctl restart kubelet

At this point # systemctl status kubelet will show an error:
10 11 00:26:43 node1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
10 11 00:26:43 node1 systemd[1]: Unit kubelet.service entered failed state.
10 11 00:26:43 node1 systemd[1]: kubelet.service failed.

Only after running # journalctl -xefu kubelet to inspect the systemd journal does the real error appear:
unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
# This error resolves itself once kubeadm init generates the CA certificate, so it can be ignored for now.
# In short, kubelet will keep restarting until kubeadm init has run.

Configuring the master node

Run the initialization as follows.
Before initializing, make sure the firewall and SELinux are disabled and each machine has at least 2 CPU cores:
[root@master ~]# kubeadm init --kubernetes-version=v1.16.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.246.166 --ignore-preflight-errors=Swap

Notes:
--apiserver-advertise-address=192.168.246.166   # the master's IP address
--kubernetes-version=v1.16.1                    # adjust to the version you actually installed
Double-check that the swap partition is really off.
If the command errors with a version hint, a newer release has been published.
[init] Using Kubernetes version: v1.16.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.0-ce. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kub-k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.246.166]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kub-k8s-master localhost] and IPs [192.168.246.166 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kub-k8s-master localhost] and IPs [192.168.246.166 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.575209 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kub-k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kub-k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 93erio.hbn2ti6z50he0lqs
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.246.166:6443 --token 93erio.hbn2ti6z50he0lqs \
    --discovery-token-ca-cert-hash sha256:3bc60f06a19bd09f38f3e05e5cff4299011b7110ca3281796668f4edb29a56d9   # keep this for later
=======================================================================================
The output above records the complete initialization; from it you can see the key steps a manual Kubernetes cluster installation would require.
The key items are:
[kubelet] generated the kubelet configuration file "/var/lib/kubelet/config.yaml"
[certificates] generated the various certificates
[kubeconfig] generated the related kubeconfig files
[bootstraptoken] generated the bootstrap token; record it, since kubeadm join needs it later to add nodes to the cluster

Configure kubectl (run the following on the master node):
[root@kub-k8s-master ~]# rm -rf $HOME/.kube
[root@kub-k8s-master ~]# mkdir -p $HOME/.kube
[root@kub-k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kub-k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Check the nodes:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   2m41s   v1.17.4

Installing and configuring the network plugin

On the master node:
Download the config:
# cd ~ && mkdir flannel && cd flannel
# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Edit the kube-flannel.yml config file:
The Network address here must match the --pod-network-cidr passed to kubeadm above. They already match, so nothing needs changing:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
# Note that kube-flannel.yml references flannel image 0.11.0, quay.io/coreos/flannel:v0.11.0-amd64.
# That is the default image, and it needs to be pulled in advance.


# If a node has multiple network interfaces, see flannel issue 39701:
# https://github.com/kubernetes/kubernetes/issues/39701
# You currently need to use the --iface parameter in kube-flannel.yml to name the host's
# internal NIC, otherwise DNS may fail to resolve and containers may be unable to communicate.
# Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
    containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33
        - --iface=eth0

⚠️⚠️⚠️--iface=ens33 的值,是你当前的网卡,或者可以指定多网卡

# 1.12版本的kubeadm额外给node1节点设置了一个污点(Taint):node.kubernetes.io/not-ready:NoSchedule,
# 很容易理解,即如果节点还没有ready之前,是不接受调度的。可是如果Kubernetes的网络插件还没有部署的话,节点是不会进入ready状态的。
# 因此修改以下kube-flannel.yaml的内容,加入对node.kubernetes.io/not-ready:NoSchedule这个污点的容忍:
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      - key: node.kubernetes.io/not-ready  # add these three lines (around line 261 of the file)
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel


This flannel image, which provides the pod network, must be pulled on every node:

# docker pull quay.io/coreos/flannel:v0.12.0-amd64   # in practice it is also pulled automatically


Start it:
# kubectl apply -f ~/flannel/kube-flannel.yml  # give it a little time after applying
# kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-sm8hs                 1/1     Running   0          9m18s
coredns-5644d7b6d9-vddll                 1/1     Running   0          9m18s
etcd-kub-k8s-master                      1/1     Running   0          8m14s
kube-apiserver-kub-k8s-master            1/1     Running   0          8m17s
kube-controller-manager-kub-k8s-master   1/1     Running   0          8m20s
kube-flannel-ds-amd64-9wgd8              1/1     Running   0          8m42s
kube-proxy-sgphs                         1/1     Running   0          9m18s
kube-scheduler-kub-k8s-master            1/1     Running   0          8m10s

Check:
# kubectl get pods -n kube-system
# kubectl get service
# kubectl get svc --namespace kube-system
Nodes only reach the Ready state once the network plugin is installed and configured.

On all worker nodes

Join the worker nodes to the cluster.
If it fails with an error, enable IP forwarding first:
# sysctl -w net.ipv4.ip_forward=1

On every worker node, run the join command returned by the successful master initialization:
# kubeadm join 192.168.246.166:6443 --token 93erio.hbn2ti6z50he0lqs \
    --discovery-token-ca-cert-hash sha256:3bc60f06a19bd09f38f3e05e5cff4299011b7110ca3281796668f4edb29a56d9


On the master:

Verification checks:
1. List the pods:
[root@kub-k8s-master ~]# kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-sm8hs                 1/1     Running   0          39m
coredns-5644d7b6d9-vddll                 1/1     Running   0          39m
etcd-kub-k8s-master                      1/1     Running   0          37m
kube-apiserver-kub-k8s-master            1/1     Running   0          38m
kube-controller-manager-kub-k8s-master   1/1     Running   0          38m
kube-flannel-ds-amd64-9wgd8              1/1     Running   0          38m
kube-flannel-ds-amd64-lffc8              1/1     Running   0          2m11s
kube-flannel-ds-amd64-m8kk2              1/1     Running   0          2m2s
kube-proxy-dwq9l                         1/1     Running   0          2m2s
kube-proxy-l77lz                         1/1     Running   0          2m11s
kube-proxy-sgphs                         1/1     Running   0          39m
kube-scheduler-kub-k8s-master            1/1     Running   0          37m

2. Inspect an abnormal pod:
[root@kub-k8s-master ~]# kubectl  describe pods kube-flannel-ds-sr6tq -n  kube-system
Name:               kube-flannel-ds-sr6tq
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
...
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Pulling    12m                  kubelet, node2     pulling image "registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64"
  Normal   Pulled     11m                  kubelet, node2     Successfully pulled image "registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64"
  Normal   Created    11m                  kubelet, node2     Created container
  Normal   Started    11m                  kubelet, node2     Started container
  Normal   Created    11m (x4 over 11m)    kubelet, node2     Created container
  Normal   Started    11m (x4 over 11m)    kubelet, node2     Started container
  Normal   Pulled     10m (x5 over 11m)    kubelet, node2     Container image "registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64" already present on machine
  Normal   Scheduled  7m15s                default-scheduler  Successfully assigned kube-system/kube-flannel-ds-sr6tq to node2
  Warning  BackOff    7m6s (x23 over 11m)  kubelet, node2     Back-off restarting failed container

3. When this happens, simply delete the abnormal pod:
[root@kub-k8s-master ~]# kubectl delete pod kube-flannel-ds-sr6tq -n kube-system
pod "kube-flannel-ds-sr6tq" deleted

4. List the pods again:
[root@kub-k8s-master ~]# kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-sm8hs                 1/1     Running   0          44m
coredns-5644d7b6d9-vddll                 1/1     Running   0          44m
etcd-kub-k8s-master                      1/1     Running   0          42m
kube-apiserver-kub-k8s-master            1/1     Running   0          43m
kube-controller-manager-kub-k8s-master   1/1     Running   0          43m
kube-flannel-ds-amd64-9wgd8              1/1     Running   0          43m
kube-flannel-ds-amd64-lffc8              1/1     Running   0          7m10s
kube-flannel-ds-amd64-m8kk2              1/1     Running   0          7m1s
kube-proxy-dwq9l                         1/1     Running   0          7m1s
kube-proxy-l77lz                         1/1     Running   0          7m10s
kube-proxy-sgphs                         1/1     Running   0          44m
kube-scheduler-kub-k8s-master            1/1     Running   0          42m

5. Check the nodes:
[root@kub-k8s-master ~]# kubectl get nodes
NAME             STATUS   ROLES    AGE     VERSION
kub-k8s-master   Ready    master   43m     v1.16.1
kub-k8s-node1    Ready    <none>   6m46s   v1.16.1
kub-k8s-node2    Ready    <none>   6m37s   v1.16.1
The cluster is now fully configured. A quick smoke test is sketched below.
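As a final sanity check (not part of the original steps; a hypothetical example assuming the nodes can pull the public nginx image from Docker Hub), deploy a workload and confirm it gets scheduled onto the workers:

# kubectl create deployment nginx --image=nginx
# kubectl scale deployment nginx --replicas=2
# kubectl get pods -o wide           # both pods should reach Running on the worker nodes
# kubectl delete deployment nginx    # clean up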

Troubleshooting

Errors

Problem 1: servers whose clocks disagree will throw errors.
Check the time on every server.
=====================================
Problem 2: kubeadm init does not finish; it prints the line below and then times out:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

Checking the kubelet status revealed the real problems: the host "master" could not be found, and the image pull was failing. kubelet was trying to pull the pause image from aliyuncs, but the official pause image had already been downloaded, so I re-tagged it with the Aliyun name from the error message (see the sketch after the log below), reset the kubeadm environment, and re-initialized. That resolved the error.
[root@master manifests]# systemctl  status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since 四 2019-01-31 15:20:32 CST; 5min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 23908 (kubelet)
    Tasks: 19
   Memory: 30.8M
   CGroup: /system.slice/kubelet.service
           └─23908 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --cgroup-driver=cgroupfs --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1

1月 31 15:25:41 master kubelet[23908]: E0131 15:25:41.432357   23908 kubelet.go:2266] node "master" not found
1月 31 15:25:41 master kubelet[23908]: E0131 15:25:41.532928   23908 kubelet.go:2266] node "master" not found
1月 31 15:25:41 master kubelet[23908]: E0131 15:25:41.633192   23908 kubelet.go:2266] node "master" not found
1月 31 15:25:41 master kubelet[23908]: I0131 15:25:41.729296   23908 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
1月 31 15:25:41 master kubelet[23908]: E0131 15:25:41.733396   23908 kubelet.go:2266] node "master" not found
1月 31 15:25:41 master kubelet[23908]: E0131 15:25:41.740110   23908 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1": Error response from daemon: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: dial tcp 0.0.0.80:443: connect: invalid argument
1月 31 15:25:41 master kubelet[23908]: E0131 15:25:41.740153   23908 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-controller-manager-master_kube-system(e8f43404e60ae844e375d50b1e39d91e)" failed: rpc error: code = Unknown desc = failed pulling image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1": Error response from daemon: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: dial tcp 0.0.0.80:443: connect: invalid argument
1月 31 15:25:41 master kubelet[23908]: E0131 15:25:41.740166   23908 kuberuntime_manager.go:662] createPodSandbox for pod "kube-controller-manager-master_kube-system(e8f43404e60ae844e375d50b1e39d91e)" failed: rpc error: code = Unknown desc = failed pulling image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1": Error response from daemon: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: dial tcp 0.0.0.80:443: connect: invalid argument
1月 31 15:25:41 master kubelet[23908]: E0131 15:25:41.740207   23908 pod_workers.go:190] Error syncing pod e8f43404e60ae844e375d50b1e39d91e ("kube-controller-manager-master_kube-system(e8f43404e60ae844e375d50b1e39d91e)"), skipping: failed to "CreatePodSandbox" for "kube-controller-manager-master_kube-system(e8f43404e60ae844e375d50b1e39d91e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-controller-manager-master_kube-system(e8f43404e60ae844e375d50b1e39d91e)\" failed: rpc error: code = Unknown desc = failed pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1\": Error response from daemon: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: dial tcp 0.0.0.80:443: connect: invalid argument"
1月 31 15:25:41 master kubelet[23908]: E0131 15:25:41.833981   23908 kubelet.go:2266] node "master" not found
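The fix described above amounts to the following; a sketch, assuming the official pause image is already present locally (the Aliyun image name is taken verbatim from the error message):

# docker tag k8s.gcr.io/pause:3.1 registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1
# kubeadm reset    # then re-run kubeadm init

The reset procedure is detailed in the next section.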

Resolution

Reset the kubeadm environment.
Reset/remove every node in the cluster, the master included.

1. Drain the pods from the kub-k8s-node1 node (on the master):
[root@kub-k8s-master ~]# kubectl drain kub-k8s-node1 --delete-local-data --force --ignore-daemonsets

2. Delete the node (on the master):
[root@kub-k8s-master ~]# kubectl delete node kub-k8s-node1

3. Reset the node (on the node itself, i.e. the machine just deleted):
[root@kub-k8s-node1 ~]# kubeadm reset

Note 1: The master must also be drained, deleted, and reset. This burned me badly: the first time I skipped draining and deleting the master, everything looked normal afterwards, but coredns simply would not work, and it cost me a full day. Do not skip it.

Note 2: After reset, the following files must be removed on the master:
# rm -rf /var/lib/cni/ $HOME/.kube/config

### Note: if the whole cluster was already built, follow all the steps above to reset it. If the failure happened during initialization, only step 3 is needed.

Regenerating the token

Adding nodes to the cluster after the kubeadm-generated token has expired.

After kubeadm initialization, a token for joining nodes is printed:
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.246.166:6443 --token n38l80.y2icehgzsyuzkthi \
    --discovery-token-ca-cert-hash sha256:5fb6576ef82b5655dee285e0c93432aee54d38779bc8488c32f5cbbb90874bac
A token is valid for 24 hours by default; once it expires, it can no longer be used.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Solution:
1. Generate a new token:
[root@node1 flannel]# kubeadm  token create
kiyfhw.xiacqbch8o8fa8qj
[root@node1 flannel]# kubeadm  token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
gvvqwk.hn56nlsgsv11mik6   <invalid>   2018-10-25T14:16:06+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
kiyfhw.xiacqbch8o8fa8qj   23h         2018-10-27T06:39:24+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

2. Get the sha256 hash of the CA certificate:
[root@node1 flannel]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
5417eb1b68bd4e7a4c82aded83abc55ec91bd601e45734d6aba85de8b1ebb057

3. Join the node to the cluster:
  kubeadm join 18.16.202.35:6443 --token kiyfhw.xiacqbch8o8fa8qj --discovery-token-ca-cert-hash sha256:5417eb1b68bd4e7a4c82aded83abc55ec91bd601e45734d6aba85de8b1ebb057
A few seconds later, the node should appear in the output of kubectl get nodes run on the master.

The steps above are tedious; this does it in one shot:
kubeadm token create --print-join-command

Second method (--ttl=0 makes the token never expire):
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0