## Initializing the Master with kubeadm

### Initializing the master

- Run `kubeadm init`:

```
kubeadm init --config=kubeadm-config.yml --upload-certs | tee kubeadm-init.log
```
- Log output of the initialization process:

```
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.233.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.233.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.233.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.507655 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a4c99ea3701d5245d6ee81a04ed4d041340fd7d5cf837f1e53c8536151ae78f1
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.233.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8e523ebfd1ecd1fdb3578d254d94e60067e987ce3e9ad6c71af0245dea04d14b
```

> Note: if the Kubernetes version being installed does not match the versions of the downloaded images, initialization fails with a `timed out waiting for the condition` error. If initialization fails midway, or you want to change the configuration, run `kubeadm reset` to reset the node and then initialize again.
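This document does not show the `kubeadm-config.yml` passed to `kubeadm init` above. A minimal sketch that is consistent with the log output (Kubernetes v1.17.4, API server advertised at 192.168.233.200, bootstrap token abcdef.0123456789abcdef) could look like the following; the actual file used may well contain more settings:

```yaml
# Hypothetical reconstruction, not the author's actual file.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.233.200   # matches the IP in the log output
bootstrapTokens:
  - token: abcdef.0123456789abcdef    # matches the token in the log output
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.4            # matches the version in the log output
```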
### Configuring kubectl

Configure kubectl according to the information in the log output above.

- Create the .kube directory:

```
mkdir -p $HOME/.kube
```

- Copy the admin kubeconfig to the config path:

```
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
```

- Change ownership of the config file to the current user:

```
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
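When working as root, there is a simpler alternative to the copy/chown steps above (a sketch, assuming a root shell on the master node): point kubectl at the admin kubeconfig directly.

```shell
# As root, skip copying admin.conf and reference it via KUBECONFIG instead.
export KUBECONFIG=/etc/kubernetes/admin.conf
```

Note that this only lasts for the current shell session; the copy/chown approach is persistent.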

### Verifying the configuration

- List the nodes:

```
kubectl get node
```

```
[root@k8smaster kubernetes]# kubectl get node
NAME        STATUS     ROLES    AGE     VERSION
k8smaster   NotReady   master   3h37m   v1.17.4
```

The master node is now configured.
### Execution phases of kubeadm init

- init: initialize with the specified version
- preflight: run pre-initialization checks and pull the required Docker images
- kubelet-start: generate the kubelet configuration file `/var/lib/kubelet/config.yaml`; without this file the kubelet cannot start, so any kubelet started before initialization does not actually run successfully
- certificates: generate the certificates used by Kubernetes, stored in the `/etc/kubernetes/pki` directory
- kubeconfig: generate the kubeconfig files, stored in the `/etc/kubernetes` directory; components use the corresponding file to communicate with each other
- control-plane: install the master components from the YAML files in the `/etc/kubernetes/manifests` directory
- etcd: install the etcd service from `/etc/kubernetes/manifests/etcd.yaml`
- wait-control-plane: wait for the master components deployed by control-plane to start
- apiclient: check the status of the master component services
- upload-config: store the configuration used in the cluster
- kubelet: configure the kubelet using a ConfigMap
- patchnode: record CNI information on the node, via annotations
- mark-control-plane: label the current node with the master role and mark it unschedulable, so that by default the master node is not used to run Pods
- bootstrap-token: generate the token; record it, as it is needed later when adding nodes to the cluster with `kubeadm join`
- addons: install the add-ons CoreDNS and kube-proxy
## Configuring slave nodes with kubeadm

Joining a slave node to the cluster is straightforward: install the three tools kubeadm, kubectl, and kubelet on the slave server, then join with the `kubeadm join` command. The steps are as follows:
### Joining a node to the cluster

Run the `kubeadm join` command printed in the log output when the master node was initialized, to join the node to the cluster.

- Run the command:

```
kubeadm join 192.168.233.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8e523ebfd1ecd1fdb3578d254d94e60067e987ce3e9ad6c71af0245dea04d14b
```

Notes:

- token
  - can be found in the log output from installing the master
  - can be printed with the `kubeadm token list` command
  - if the token has expired, create a new one with the `kubeadm token create` command
- discovery-token-ca-cert-hash
  - the sha256 value can be found in the log output from installing the master
  - the sha256 value can be computed with `openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'`
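The hash pipeline above can be exercised without a cluster. The sketch below generates a throwaway self-signed certificate in place of `/etc/kubernetes/pki/ca.crt` (the `/tmp` paths and CN are made up for illustration; only `openssl` is assumed to be installed) and derives a sha256 value of the same form:

```shell
# Generate a throwaway self-signed cert to stand in for the cluster CA.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
    -subj "/CN=demo-ca" -days 1 2>/dev/null

# Same pipeline as above, pointed at the throwaway cert: extract the
# public key, DER-encode it, and take its hex-encoded SHA-256 digest.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"
```

In practice, `kubeadm token create --print-join-command` is often more convenient: it prints a ready-to-use `kubeadm join` command containing both a fresh token and this hash.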
### Verifying the join

- Back on the master, verify with:

```
kubectl get nodes
```

```
[root@k8smaster kubernetes]# kubectl get node
NAME        STATUS     ROLES    AGE     VERSION
k8smaster   NotReady   master   3h37m   v1.17.4
k8snode01   NotReady   node     3h34m   v1.17.4
```

- Check the pod status:

```
kubectl get pod -n kube-system -o wide
```

```
NAME                                READY   STATUS    RESTARTS   AGE     IP                NODE
coredns-9d85f5447-7p478             0/1     Pending   0          6h23m
coredns-9d85f5447-dctn9             0/1     Pending   0          6h23m
etcd-k8smaster                      1/1     Running   0          6h23m   192.168.233.200   k8smaster
kube-apiserver-k8smaster            1/1     Running   0          6h23m   192.168.233.200   k8smaster
kube-controller-manager-k8smaster   1/1     Running   0          6h23m   192.168.233.200   k8smaster
kube-proxy-bn8lx                    1/1     Running   1          6h23m   192.168.233.200   k8smaster
kube-proxy-jsw59                    1/1     Running   1          6h18m   192.168.233.210   k8snode01
kube-proxy-s5q54                    1/1     Running   1          5h3m    192.168.233.211   k8snode02
kube-scheduler-k8smaster            1/1     Running   0          6h23m   192.168.233.200   k8smaster
```

As the output shows, coredns is not yet running; we still need to install a network plugin.
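This document does not say which network plugin to use; any add-on from the list linked in the init log works. A minimal sketch using Flannel as one common choice (the manifest URL below is the commonly used upstream location and may have moved, and flannel's default pod CIDR must match the cluster's):

```shell
# Flannel is one option for the pod network (assumption: the cluster's
# pod CIDR matches flannel's default 10.244.0.0/16).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Watch the coredns pods leave Pending once the network is up.
kubectl get pod -n kube-system -w
```

Once the network plugin is running, the nodes should move from NotReady to Ready.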