Installing Kubernetes with kubeadm

This guide installs Kubernetes with kubeadm. With just two commands, kubeadm init and kubeadm join, it is easy to initialize the master node and join worker nodes to it.
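
In outline, the whole flow is just these two commands (shown here with placeholder arguments; the exact values used in this setup appear in the sections below):

  # On the master
  kubeadm init --kubernetes-version=<version> --pod-network-cidr=<cidr>
  # On each worker, with the token printed by kubeadm init
  kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>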

Environment Preparation

Software     Version
Kubernetes   v1.15.1
Docker       18.06.1-ce
CentOS 7     CentOS Linux release 7.1.1503 (Core)

Node IP         Hostname                   Role
10.32.170.109   dx-ee-releng-webserver01   master
10.21.88.3      gh-ee-plus09               node1
10.4.95.37      yf-ee-releng-plus02        node2

Base Configuration

Apply the following configuration on every node, master and workers alike.

Install Docker

Install Docker with yum:

  # Install docker-ce with yum
  yum install -y docker-ce-18.06.1.ce-3.el7
  # Create the docker daemon configuration file
  mkdir /etc/docker && touch /etc/docker/daemon.json
  # Configure a domestic registry mirror to speed up image pulls
  tee /etc/docker/daemon.json <<-'EOF'
  {
      "registry-mirrors": ["https://hdi5v8p1.mirror.aliyuncs.com"]
  }
  EOF
  # Start docker
  systemctl enable docker && service docker restart
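
As a quick sanity check (optional), confirm that the daemon is running and the mirror took effect:

  # Verify the Docker server version and the configured registry mirror
  docker version --format '{{.Server.Version}}'
  docker info | grep -A1 'Registry Mirrors'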

Disable the firewall

  systemctl stop firewalld
  systemctl disable firewalld

Disable SELinux

  sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
  setenforce 0
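
getenforce confirms the change (Permissive now, Disabled after the next reboot):

  getenforce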

Disable swap

  swapoff -a
  sed -i 's/.*swap.*/#&/' /etc/fstab
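
To verify that swap is fully off, the Swap row should read all zeros:

  free -m | grep -i swap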

Configure forwarding parameters

  cat <<EOF > /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF
  sysctl --system
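
On some CentOS 7 machines sysctl --system complains that the two bridge keys do not exist because the br_netfilter module is not loaded; if that happens (it depends on your kernel setup), load the module first and re-apply:

  modprobe br_netfilter
  sysctl --system
  sysctl net.bridge.bridge-nf-call-iptables   # should print 1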

Configure the Aliyun Kubernetes yum repository

  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  enabled=1
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF

Install the Kubernetes components

  yum install kubelet kubeadm kubectl -y
  systemctl enable kubelet && systemctl start kubelet
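
Note that this installs whatever version is newest in the repository, which can end up ahead of the v1.15.1 images used below (the FAQ runs into exactly this kind of version skew). To pin the versions instead, something like the following should work; treat the exact RPM version strings as an assumption and check them first:

  # List available builds, then pin to 1.15.1
  yum list kubelet --showduplicates | grep 1.15.1
  yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1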

Load the IPVS kernel modules

Load the ipvs kernel modules and add them to the boot-time startup file so they persist across reboots:

  modprobe ip_vs_rr
  modprobe ip_vs_wrr
  modprobe ip_vs_sh
  cat <<EOF >> /etc/rc.local
  modprobe ip_vs_rr
  modprobe ip_vs_wrr
  modprobe ip_vs_sh
  EOF
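
On CentOS 7, /etc/rc.local (a symlink to /etc/rc.d/rc.local) is only executed at boot if it is executable, so it is worth making sure of that; lsmod then confirms the modules are loaded:

  # rc.local must be executable for the modprobe lines to run at boot
  chmod +x /etc/rc.d/rc.local
  # Confirm the ipvs modules are loaded
  lsmod | grep ip_vs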

Install the Master Node

kubeadm init fails

Switch to root on the master node and run kubeadm init. It fails with "failed to pull image": k8s.gcr.io is unreachable from mainland China, so the images have to be fetched another way. The approach taken here is to pull them from Aliyun's mirror and retag them.

  [root@dx-ee-releng-webserver01 docker]# kubeadm init --kubernetes-version=v1.15.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
  W0731 21:25:35.776310 6324 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  W0731 21:25:35.776456 6324 version.go:99] falling back to the local client version: v1.15.1
  [init] Using Kubernetes version: v1.15.1
  [preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  , error: exit status 1
  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Download the images manually

Pull the required images from the Aliyun mirror and retag them after downloading. The exact images and versions can be listed with kubeadm config images list:

  [root@dx-ee-releng-webserver01 ~]# kubeadm config images list
  W0731 21:29:32.916158 14480 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  W0731 21:29:32.916248 14480 version.go:99] falling back to the local client version: v1.15.1
  k8s.gcr.io/kube-apiserver:v1.15.1
  k8s.gcr.io/kube-controller-manager:v1.15.1
  k8s.gcr.io/kube-scheduler:v1.15.1
  k8s.gcr.io/kube-proxy:v1.15.1
  k8s.gcr.io/pause:3.1
  k8s.gcr.io/etcd:3.3.10
  k8s.gcr.io/coredns:1.3.1

The following script downloads the images:

  #!/bin/bash
  kube_version=:v1.15.1
  kube_images=(kube-proxy kube-scheduler kube-controller-manager kube-apiserver)
  addon_images=(etcd-amd64:3.3.10 coredns:1.3.1 pause-amd64:3.1)

  for imageName in ${kube_images[@]} ; do
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
      docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version k8s.gcr.io/$imageName$kube_version
      docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
  done

  for imageName in ${addon_images[@]} ; do
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
      docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
      docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  done

  docker tag k8s.gcr.io/etcd-amd64:3.3.10 k8s.gcr.io/etcd:3.3.10
  docker image rm k8s.gcr.io/etcd-amd64:3.3.10
  docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
  docker image rm k8s.gcr.io/pause-amd64:3.1

After the script completes, docker images confirms that all required images are present:

  [root@dx-ee-releng-webserver01 ~]# docker images
  REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
  k8s.gcr.io/kube-scheduler            v1.15.1   b0b3c4c404da   2 weeks ago     81.1MB
  k8s.gcr.io/kube-controller-manager   v1.15.1   d75082f1d121   2 weeks ago     159MB
  k8s.gcr.io/kube-apiserver            v1.15.1   68c3eb07bfc3   2 weeks ago     207MB
  k8s.gcr.io/kube-proxy                v1.15.1   89a062da739d   2 weeks ago     82.4MB
  k8s.gcr.io/coredns                   1.3.1     eb516548c180   6 months ago    40.3MB
  k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   8 months ago    258MB
  k8s.gcr.io/pause                     3.1       da86e6ba6ca1   19 months ago   742kB

Run kubeadm init

With the images downloaded, initialize the master node by running kubeadm init --kubernetes-version=v1.15.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 on the master:

  [root@dx-ee-releng-webserver01 k8s]# kubeadm init --kubernetes-version=v1.15.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
  [init] Using Kubernetes version: v1.15.1
  [preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Activating the kubelet service
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [dx-ee-releng-webserver01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.32.170.109]
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [dx-ee-releng-webserver01 localhost] and IPs [10.32.170.109 127.0.0.1 ::1]
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [dx-ee-releng-webserver01 localhost] and IPs [10.32.170.109 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [apiclient] All control plane components are healthy after 37.502506 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Skipping phase. Please see --upload-certs
  [mark-control-plane] Marking the node dx-ee-releng-webserver01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node dx-ee-releng-webserver01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: 0gpfb4.ee1nalr0vmzxa9cv
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy
  Your Kubernetes control-plane has initialized successfully!
  To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
  Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 10.32.170.109:6443 --token 0gpfb4.ee1nalr0vmzxa9cv \
      --discovery-token-ca-cert-hash sha256:f7abecbc3c8ff1ab9e91ff601cb16f952f90dfccd2d2ad71ed86c29e7363d699

As the output instructs, run the following after a successful init:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
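
kubectl can now reach the cluster; until a pod network is applied (next step) the master typically reports NotReady:

  kubectl get nodes   # master shows NotReady until the network addon is installed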

Configure the network

After kubeadm init succeeds, a pod network must be configured on the master; otherwise the coredns pods on the master cannot start.

  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
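
Once the flannel pods come up, the coredns pods should move from Pending to Running; an optional way to watch this happen:

  kubectl get pods -n kube-system -w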

Install the Worker Nodes

Download the images

Likewise, download the required images on each worker node ahead of time; the following script pulls what a worker node needs.

  #!/bin/bash
  kube_version=:v1.15.1
  coredns_version=1.3.1
  pause_version=3.1
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version k8s.gcr.io/kube-proxy$kube_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version k8s.gcr.io/pause:$pause_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version k8s.gcr.io/coredns:$coredns_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version

Join the nodes

Use kubeadm join to join each node to the master:

  kubeadm join 10.32.170.109:6443 --token 0gpfb4.ee1nalr0vmzxa9cv \
      --discovery-token-ca-cert-hash sha256:f7abecbc3c8ff1ab9e91ff601cb16f952f90dfccd2d2ad71ed86c29e7363d699
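
If the token from kubeadm init has expired (tokens last 24 hours by default) or the join command was lost, a fresh one can be printed on the master:

  # Run on the master; prints a ready-to-use kubeadm join command
  kubeadm token create --print-join-command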

Once the command succeeds, running kubectl get nodes on the master shows both worker nodes and the master in the Ready state:

  [root@dx-ee-releng-webserver01 ~]# kubectl get nodes
  NAME                          STATUS   ROLES    AGE   VERSION
  dx-ee-releng-webserver01      Ready    master   27h   v1.15.1
  gh-ee-plus09.gh.sankuai.com   Ready    <none>   24h   v1.15.1
  yf-ee-releng-plus02.mt        Ready    <none>   23h   v1.15.1

FAQ

Quite a few pitfalls came up while setting up Kubernetes; they are shared below.

kubeadm init reports that the kubelet is not running

Running kubeadm init reports that the kubelet is not running or is unhealthy:

  [kubelet-check] It seems like the kubelet isn't running or healthy.
  [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
  [kubelet-check] It seems like the kubelet isn't running or healthy.
  [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
  [kubelet-check] It seems like the kubelet isn't running or healthy.
  [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
  [kubelet-check] It seems like the kubelet isn't running or healthy.
  [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
  [kubelet-check] It seems like the kubelet isn't running or healthy.
  [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
  Unfortunately, an error has occurred:
  timed out waiting for the condition
  This error is likely caused by:
  - The kubelet is not running
  - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
  - No internet connection is available so the kubelet cannot pull or find the following control plane images:
  - k8s.gcr.io/kube-apiserver-amd64:v1.11.2
  - k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
  - k8s.gcr.io/kube-scheduler-amd64:v1.11.2
  - k8s.gcr.io/etcd-amd64:3.2.18
  - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
  are downloaded locally and cached.
  If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
  - 'systemctl status kubelet'
  - 'journalctl -xeu kubelet'
  Additionally, a control plane component may have crashed or exited when started by the container runtime.
  To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
  Here is one example how you may list all Kubernetes containers running in docker:
  - 'docker ps -a | grep kube | grep -v pause'
  Once you have found the failing container, you can inspect its logs with:
  - 'docker logs CONTAINERID'
  couldn't initialize a Kubernetes cluster

Inspecting the kubelet logs with journalctl -xeu kubelet turned up a fatal entry. Searching on that message shows the cause is running a Kubernetes version that is too new for this setup; for details see github.com/kubernetes/…

  Aug 06 14:52:09 dx-ee-releng-webserver01 kubelet[29231]: F0806 14:52:09.563107 29231 kubelet.go:1370] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to find subsystem mount for required subsystem: pids

The fix is to edit the kubelet unit drop-in and append --feature-gates SupportPodPidsLimit=false --feature-gates SupportNodePidsLimit=false to ExecStart, then run systemctl daemon-reload && systemctl restart kubelet. After that the kubelet starts successfully.

  [root@dx-ee-releng-webserver01 ~]# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
  # Note: This dropin only works with kubeadm and kubelet v1.11+
  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
  # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
  EnvironmentFile=-/etc/sysconfig/kubelet
  ExecStart=
  ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --feature-gates SupportPodPidsLimit=false --feature-gates SupportNodePidsLimit=false
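
After editing the drop-in, reload and restart, then confirm the unit is up:

  systemctl daemon-reload && systemctl restart kubelet
  systemctl is-active kubelet   # should print "active"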

Unable to get pods

kubectl get pods fails with an error:

  [root@dx-ee-releng-webserver01 ~]# kubectl get pods --all-namespaces
  error: the server doesn't have a resource type "pods"

Solution: delete the http-cache directory under ~/.kube and rerun the command; all pods are then returned:

  [root@dx-ee-releng-webserver01 ~]# cd ~/.kube/
  [root@dx-ee-releng-webserver01 .kube]# rm -rf http-cache/
  [root@dx-ee-releng-webserver01 .kube]# kubectl get pods --all-namespaces
  NAMESPACE     NAME                                               READY   STATUS    RESTARTS   AGE
  kube-system   coredns-5c98db65d4-gxwh6                           0/1     Pending   0          2m48s
  kube-system   coredns-5c98db65d4-mc2p4                           0/1     Pending   0          2m48s
  kube-system   etcd-dx-ee-releng-webserver01                      1/1     Running   0          2m1s
  kube-system   kube-apiserver-dx-ee-releng-webserver01            1/1     Running   0          108s
  kube-system   kube-controller-manager-dx-ee-releng-webserver01   1/1     Running   0          2m4s
  kube-system   kube-proxy-4jtm8                                   1/1     Running   0          2m48s
  kube-system   kube-scheduler-dx-ee-releng-webserver01            1/1     Running   0

K8s v1.18.2

The same procedure works for Kubernetes v1.18.2; only the versions change. The required images:

  $ kubeadm config images list
  k8s.gcr.io/kube-apiserver:v1.18.2
  k8s.gcr.io/kube-controller-manager:v1.18.2
  k8s.gcr.io/kube-scheduler:v1.18.2
  k8s.gcr.io/kube-proxy:v1.18.2
  k8s.gcr.io/pause:3.2
  k8s.gcr.io/etcd:3.4.3-0
  k8s.gcr.io/coredns:1.6.7

The master-node download script, adjusted to the v1.18.2 versions:

  #!/bin/bash
  kube_version=:v1.18.2
  kube_images=(kube-proxy kube-scheduler kube-controller-manager kube-apiserver)
  addon_images=(etcd-amd64:3.4.3-0 coredns:1.6.7 pause-amd64:3.2)

  for imageName in ${kube_images[@]} ; do
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
      docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version k8s.gcr.io/$imageName$kube_version
      docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
  done

  for imageName in ${addon_images[@]} ; do
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
      docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
      docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  done

  docker tag k8s.gcr.io/etcd-amd64:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
  docker image rm k8s.gcr.io/etcd-amd64:3.4.3-0
  docker tag k8s.gcr.io/pause-amd64:3.2 k8s.gcr.io/pause:3.2
  docker image rm k8s.gcr.io/pause-amd64:3.2

And the worker-node download script for v1.18.2:

  #!/bin/bash
  kube_version=:v1.18.2
  coredns_version=1.6.7
  pause_version=3.2
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version k8s.gcr.io/kube-proxy$kube_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version k8s.gcr.io/pause:$pause_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version k8s.gcr.io/coredns:$coredns_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version