https://kubernetes.io/zh/docs/tasks/

1: Prepare the environment

yum install vim wget bash-completion lrzsz nmap nc tree htop iftop net-tools ipvsadm -y

2: Set the hostname and turn off the firewall

hostnamectl set-hostname master

  1. systemctl disable firewalld.service
  2. systemctl stop firewalld.service
  3. setenforce 0

3: Turn off swap

  1. swapoff -a
  2. sed -i 's/.*swap.*/#&/' /etc/fstab
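A quick check that swap really is off (swapon should list no devices and free should report 0 swap):

  swapon -s
  free -m | grep -i swap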

4: Disable SELinux

  1. sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
  2. sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
  3. sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
  4. sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

Note
Docker version requirements:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md

v1.17 Docker requirement:
Docker versions docker-ce-19.03
v1.16 requirement:
Drop the support for docker 1.9.x. Docker versions 1.10.3, 1.11.2, 1.12.6 have been validated.

5: Install dependencies

  1. yum install yum-utils device-mapper-persistent-data lvm2 -y

6: Add the Docker repository

  1. yum-config-manager \
  2. --add-repo \
  3. https://download.docker.com/linux/centos/docker-ce.repo

7: Refresh the repositories and install the required Docker version

  1. yum install docker-ce-18.06.3.ce -y

List the available Docker versions:

  1. yum list docker-ce --showduplicates|sort -r

8: Configure Docker for Kubernetes (systemd cgroup driver) and a registry mirror

  sudo mkdir -p /etc/docker
  cat << EOF > /etc/docker/daemon.json
  {
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["https://0bb06s1q.mirror.aliyuncs.com"],
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "storage-opts": ["overlay2.override_kernel_check=true"]
  }
  EOF
  systemctl daemon-reload && systemctl restart docker && systemctl enable docker.service


Configure the DaoCloud registry mirror
https://www.daocloud.io/mirror#accelerator-doc

  1. curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

9: Install and configure containerd

Newer Docker releases pull in the containerd.io package automatically, so this step can be skipped.

  1. yum install containerd.io -y
  2. mkdir -p /etc/containerd
  3. containerd config default > /etc/containerd/config.toml
  4. systemctl restart containerd

10: Add the Kubernetes repository

Install Kubernetes from the Aliyun mirror repository:

  1. cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  2. [kubernetes]
  3. name=Kubernetes
  4. baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  5. enabled=1
  6. gpgcheck=0
  7. repo_gpgcheck=0
  8. gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  9. EOF
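A quick check that the repository is usable before installing (a sketch):

  yum repolist | grep -i kubernetes
  yum list kubeadm --showduplicates --disableexcludes=kubernetes | tail -n 5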

11: Install the latest version or a specific version

  1. yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
  2. systemctl enable --now kubelet

Specific version:

  1. yum install -y kubeadm-1.18.0-0 kubelet-1.18.0-0 kubectl-1.18.0-0 --disableexcludes=kubernetes
  2. systemctl enable --now kubelet

12: Enable IP forwarding and bridge netfilter settings

  1. cat <<EOF > /etc/sysctl.d/k8s.conf
  2. net.bridge.bridge-nf-call-ip6tables = 1
  3. net.bridge.bridge-nf-call-iptables = 1
  4. net.ipv4.ip_forward = 1
  5. #避免cpu资源长期使用率过高导致系统内核锁
  6. kernel.watchdog_thresh=30
  7. #开启iptables bridge
  8. net.bridge.bridge-nf-call-iptables=1
  9. #调优ARP高速缓存
  10. net.ipv4.neigh.default.gc_thresh1=4096
  11. net.ipv4.neigh.default.gc_thresh2=6144
  12. net.ipv4.neigh.default.gc_thresh3=8192
  13. EOF
  14. sysctl -p
  15. systemctl restart docker
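Note that the net.bridge.* keys only exist while the br_netfilter kernel module is loaded; a small sketch to load it now and at boot (the /etc/modules-load.d mechanism is standard systemd behaviour on CentOS 7, assumed here):

  modprobe br_netfilter
  echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
  sysctl -p /etc/sysctl.d/k8s.conf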

13: Load and check the ip_vs kernel module

Check:
lsmod | grep ip_vs
Load:
modprobe ip_vs
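kube-proxy's IPVS mode usually needs the IPVS scheduler and conntrack modules as well; a sketch that loads them persistently (the module names are an assumption and vary by kernel: nf_conntrack_ipv4 on CentOS 7's 3.10 kernel, nf_conntrack on 4.19 and later):

  cat <<EOF > /etc/modules-load.d/ipvs.conf
  ip_vs
  ip_vs_rr
  ip_vs_wrr
  ip_vs_sh
  nf_conntrack_ipv4
  EOF
  for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
  lsmod | grep -e ip_vs -e nf_conntrack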

Commonly used kubeadm commands

  1. Last login: Sun Apr 12 21:21:17 2020 from 192.168.31.150
  2. [root@master ~]# kubeadm
  3. ┌──────────────────────────────────────────────────────────┐
  4. KUBEADM
  5. Easily bootstrap a secure Kubernetes cluster
  6. Please give us feedback at:
  7. https://github.com/kubernetes/kubeadm/issues
  8. └──────────────────────────────────────────────────────────┘
  9. Example usage:
  10. Create a two-machine cluster with one control-plane node
  11. (which controls the cluster), and one worker node
  12. (where your workloads, like Pods and Deployments run).
  13. ┌──────────────────────────────────────────────────────────┐
  14. On the first machine:
  15. ├──────────────────────────────────────────────────────────┤
  16. control-plane# kubeadm init │
  17. └──────────────────────────────────────────────────────────┘
  18. ┌──────────────────────────────────────────────────────────┐
  19. On the second machine:
  20. ├──────────────────────────────────────────────────────────┤
  21. worker# kubeadm join <arguments-returned-from-init> │
  22. └──────────────────────────────────────────────────────────┘
  23. You can then repeat the second step on as many other machines as you like.
  24. Usage:
  25. kubeadm [command]
  26. Available Commands:
  27. alpha Kubeadm experimental sub-commands
  28. completion Output shell completion code for the specified shell (bash or zsh)
  29. config Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster
  30. help Help about any command
  31. init Run this command in order to set up the Kubernetes control plane
  32. join Run this on any machine you wish to join an existing cluster
  33. reset Performs a best effort revert of changes made to this host by 'kubeadm init' or 'kubeadm join'
  34. token Manage bootstrap tokens
  35. upgrade Upgrade your cluster smoothly to a newer version with this command
  36. version Print the version of kubeadm
  37. Flags:
  38. --add-dir-header If true, adds the file directory to the header
  39. -h, --help help for kubeadm
  40. --log-file string If non-empty, use this log file
  41. --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
  42. --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem.
  43. --skip-headers If true, avoid header prefixes in the log messages
  44. --skip-log-headers If true, avoid headers when opening log files
  45. -v, --v Level number for the log level verbosity
  46. Use "kubeadm [command] --help" for more information about a command.
  1. [root@master ~]# kubeadm init --help
  2. Run this command in order to set up the Kubernetes control plane
  3. The "init" command executes the following phases:

preflight                      Run pre-flight checks
kubelet-start                  Write kubelet settings and (re)start the kubelet
certs                          Certificate generation
  /ca                            Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                     Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client      Generate the certificate for the API server to connect to kubelet
  /front-proxy-ca                Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client            Generate the certificate for the front proxy client
  /etcd-ca                       Generate the self-signed CA to provision identities for etcd
  /etcd-server                   Generate the certificate for serving etcd
  /etcd-peer                     Generate the certificate for etcd nodes to communicate with each other
  /etcd-healthcheck-client       Generate the certificate for liveness probes to healthcheck etcd
  /apiserver-etcd-client         Generate the certificate the apiserver uses to access etcd
  /sa                            Generate a private key for signing service account tokens along with its public key
kubeconfig                     Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                         Generate a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                       Generate a kubeconfig file for the kubelet to use only for cluster bootstrapping purposes
  /controller-manager            Generate a kubeconfig file for the controller manager to use
  /scheduler                     Generate a kubeconfig file for the scheduler to use
control-plane                  Generate all static Pod manifest files necessary to establish the control plane
  /apiserver                     Generates the kube-apiserver static Pod manifest
  /controller-manager            Generates the kube-controller-manager static Pod manifest
  /scheduler                     Generates the kube-scheduler static Pod manifest
etcd                           Generate static Pod manifest file for local etcd
  /local                         Generate the static Pod manifest file for a local, single-node local etcd instance
upload-config                  Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                       Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                       Upload the kubelet component config to a ConfigMap
upload-certs                   Upload certificates to kubeadm-certs
mark-control-plane             Mark a node as a control-plane
bootstrap-token                Generates bootstrap tokens used to join a node to a cluster
kubelet-finalize               Updates settings relevant to the kubelet after TLS bootstrap
  /experimental-cert-rotation    Enable kubelet client certificate rotation
addon                          Install required addons for passing Conformance tests
  /coredns                       Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                    Install the kube-proxy addon to a Kubernetes cluster

  1. Usage:
  2. kubeadm init [flags]
  3. kubeadm init [command]
  4. Available Commands:
  5. phase Use this command to invoke single phase of the init workflow
  6. Flags:
  7. --apiserver-advertise-address string The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
  8. --apiserver-bind-port int32 Port for the API Server to bind to. (default 6443)
  9. --apiserver-cert-extra-sans strings Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
  10. --cert-dir string The path where to save and store the certificates. (default "/etc/kubernetes/pki")
  11. --certificate-key string Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
  12. --config string Path to a kubeadm configuration file.
  13. --control-plane-endpoint string Specify a stable IP address or DNS name for the control plane.
  14. --cri-socket string Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
  15. --dry-run Don't apply any changes; just output what would be done.
  16. -k, --experimental-kustomize string The path where kustomize patches for static pod manifests are stored.
  17. --feature-gates string A set of key=value pairs that describe feature gates for various features. Options are:
  18. IPv6DualStack=true|false (ALPHA - default=false)
  19. PublicKeysECDSA=true|false (ALPHA - default=false)
  20. -h, --help help for init
  21. --ignore-preflight-errors strings A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
  22. --image-repository string Choose a container registry to pull control plane images from (default "k8s.gcr.io")
  23. --kubernetes-version string Choose a specific Kubernetes version for the control plane. (default "stable-1")
  24. --node-name string Specify the node name.
  25. --pod-network-cidr string Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
  26. --service-cidr string Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")
  27. --service-dns-domain string Use alternative domain for services, e.g. "myorg.internal". (default "cluster.local")
  28. --skip-certificate-key-print Don't print the key used to encrypt the control-plane certificates.
  29. --skip-phases strings List of phases to be skipped
  30. --skip-token-print Skip printing of the default bootstrap token generated by 'kubeadm init'.
  31. --token string The token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
  32. --token-ttl duration The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)
  33. --upload-certs Upload control-plane certificates to the kubeadm-certs Secret.
  34. Global Flags:
  35. --add-dir-header If true, adds the file directory to the header
  36. --log-file string If non-empty, use this log file
  37. --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
  38. --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem.
  39. --skip-headers If true, avoid header prefixes in the log messages
  40. --skip-log-headers If true, avoid headers when opening log files
  41. -v, --v Level number for the log level verbosity
  42. Use "kubeadm init [command] --help" for more information about a command.

Custom configuration file
Deploy the master node:

14: Export the default kubeadm cluster configuration

Export the configuration file:

  1. kubeadm config print init-defaults > init.default.yaml

15: Edit the custom configuration file

A: Set the master node IP in advertiseAddress
B: Point imageRepository at the Aliyun mirror for faster pulls in mainland China
C: Set a custom pod address range, e.g. podSubnet: "192.168.0.0/16"
D: Enable IPVS mode for kube-proxy
E: On Alibaba Cloud, use the internal (private) IP address
F: Set the master node IP address and the Kubernetes version to deploy

  1. cat <<EOF > init.default.yaml
  2. apiVersion: kubeadm.k8s.io/v1beta2
  3. bootstrapTokens:
  4. - groups:
  5. - system:bootstrappers:kubeadm:default-node-token
  6. token: abcdef.0123456789abcdef
  7. ttl: 24h0m0s
  8. usages:
  9. - signing
  10. - authentication
  11. kind: InitConfiguration
  12. localAPIEndpoint:
  13. #配置主节点IP信息
  14. advertiseAddress: 192.168.31.147
  15. bindPort: 6443
  16. nodeRegistration:
  17. criSocket: /var/run/dockershim.sock
  18. name: master
  19. taints:
  20. - effect: NoSchedule
  21. key: node-role.kubernetes.io/master
  22. ---
  23. apiServer:
  24. timeoutForControlPlane: 4m0s
  25. apiVersion: kubeadm.k8s.io/v1beta2
  26. certificatesDir: /etc/kubernetes/pki
  27. clusterName: kubernetes
  28. controllerManager: {}
  29. dns:
  30. type: CoreDNS
  31. etcd:
  32. local:
  33. dataDir: /var/lib/etcd
  34. #自定义容器镜像拉取国内仓库地址
  35. imageRepository: registry.aliyuncs.com/google_containers
  36. kind: ClusterConfiguration
  37. kubernetesVersion: v1.18.0
  38. networking:
  39. dnsDomain: cluster.local
  40. #自定义podIP地址段
  41. podSubnet: "192.168.0.0/16"
  42. serviceSubnet: 10.96.0.0/12
  43. scheduler: {}
  44. # 开启 IPVS 模式
  45. ---
  46. apiVersion: kubeproxy.config.k8s.io/v1alpha1
  47. kind: KubeProxyConfiguration
  48. featureGates:
  49. SupportIPVSProxyMode: true
  50. mode: ipvs
  51. EOF
  [root@master ~]# cat init.default.yaml
  apiVersion: kubeadm.k8s.io/v1beta2
  bootstrapTokens:
  - groups:
    - system:bootstrappers:kubeadm:default-node-token
    token: abcdef.0123456789abcdef
    ttl: 24h0m0s
    usages:
    - signing
    - authentication
  kind: InitConfiguration
  localAPIEndpoint:
    advertiseAddress: 192.168.6.111
    bindPort: 6443
  nodeRegistration:
    criSocket: /var/run/dockershim.sock
    name: master
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  ---
  apiServer:
    timeoutForControlPlane: 4m0s
  apiVersion: kubeadm.k8s.io/v1beta2
  certificatesDir: /etc/kubernetes/pki
  clusterName: kubernetes
  controllerManager: {}
  dns:
    type: CoreDNS
  etcd:
    local:
      dataDir: /var/lib/etcd
  imageRepository: registry.aliyuncs.com/google_containers
  kind: ClusterConfiguration
  kubernetesVersion: v1.18.0
  networking:
    dnsDomain: cluster.local
    podSubnet: 192.168.0.0/16
    serviceSubnet: 10.96.0.0/12
  scheduler: {}
  ---
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  kind: KubeProxyConfiguration
  featureGates:
    SupportIPVSProxyMode: true
  mode: ipvs
  [root@master ~]#
  1. cat > /etc/sysconfig/kubelet <<EOF
  2. KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2
  3. EOF

16: Check the images that need to be pulled

  1. kubeadm config images list --config init.default.yaml
  1. [root@riyimei ~]# kubeadm config images list
  2. k8s.gcr.io/kube-apiserver:v1.21.1
  3. k8s.gcr.io/kube-controller-manager:v1.21.1
  4. k8s.gcr.io/kube-scheduler:v1.21.1
  5. k8s.gcr.io/kube-proxy:v1.21.1
  6. k8s.gcr.io/pause:3.4.1
  7. k8s.gcr.io/etcd:3.4.13-0
  8. k8s.gcr.io/coredns/coredns:v1.8.0
  9. [root@riyimei ~]#
  10. [root@riyimei ~]# kubeadm config images list --config init.default.yaml
  11. registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
  12. registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
  13. registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
  14. registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
  15. registry.aliyuncs.com/google_containers/pause:3.4.1
  16. registry.aliyuncs.com/google_containers/etcd:3.4.13-0
  17. registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
  18. [root@riyimei ~]#

17: Pull the Kubernetes images from the Aliyun registry

  1. kubeadm config images pull --config init.default.yaml
  1. [root@master ~]# kubeadm config images list --config init.default.yaml
  2. W0204 00:37:10.879146 30608 validation.go:28] Cannot validate kubelet config - no validator is available
  3. W0204 00:37:10.879175 30608 validation.go:28] Cannot validate kube-proxy config - no validator is available
  4. registry.aliyuncs.com/google_containers/kube-apiserver:v1.17.2
  5. registry.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2
  6. registry.aliyuncs.com/google_containers/kube-scheduler:v1.17.2
  7. registry.aliyuncs.com/google_containers/kube-proxy:v1.17.2
  8. registry.aliyuncs.com/google_containers/pause:3.1
  9. registry.aliyuncs.com/google_containers/etcd:3.4.3-0
  10. registry.aliyuncs.com/google_containers/coredns:1.6.5
  11. [root@master ~]# kubeadm config images pull --config init.default.yaml
  12. W0204 00:37:25.590147 30636 validation.go:28] Cannot validate kube-proxy config - no validator is available
  13. W0204 00:37:25.590179 30636 validation.go:28] Cannot validate kubelet config - no validator is available
  14. [config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.17.2
  15. [config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2
  16. [config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.17.2
  17. [config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.17.2
  18. [config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
  19. [config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.3-0
  20. [config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.5
  21. [root@master ~]#

18: Deploy the Kubernetes cluster

Start the deployment: kubeadm init --config=init.default.yaml

  1. [root@master ~]# kubeadm init --config=init.default.yaml | tee kubeadm-init.log
  2. W0204 00:39:48.825538 30989 validation.go:28] Cannot validate kube-proxy config - no validator is available
  3. W0204 00:39:48.825592 30989 validation.go:28] Cannot validate kubelet config - no validator is available
  4. [init] Using Kubernetes version: v1.17.2
  5. [preflight] Running pre-flight checks
  6. [WARNING Hostname]: hostname "master" could not be reached
  7. [WARNING Hostname]: hostname "master": lookup master on 114.114.114.114:53: no such host
  8. [preflight] Pulling images required for setting up a Kubernetes cluster
  9. [preflight] This might take a minute or two, depending on the speed of your internet connection
  10. [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  11. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  12. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  13. [kubelet-start] Starting the kubelet
  14. [certs] Using certificateDir folder "/etc/kubernetes/pki"
  15. [certs] Generating "ca" certificate and key
  16. [certs] Generating "apiserver" certificate and key
  17. [certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.31.90]
  18. [certs] Generating "apiserver-kubelet-client" certificate and key
  19. [certs] Generating "front-proxy-ca" certificate and key
  20. [certs] Generating "front-proxy-client" certificate and key
  21. [certs] Generating "etcd/ca" certificate and key
  22. [certs] Generating "etcd/server" certificate and key
  23. [certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.31.90 127.0.0.1 ::1]
  24. [certs] Generating "etcd/peer" certificate and key
  25. [certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.31.90 127.0.0.1 ::1]
  26. [certs] Generating "etcd/healthcheck-client" certificate and key
  27. [certs] Generating "apiserver-etcd-client" certificate and key
  28. [certs] Generating "sa" key and public key
  29. [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  30. [kubeconfig] Writing "admin.conf" kubeconfig file
  31. [kubeconfig] Writing "kubelet.conf" kubeconfig file
  32. [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  33. [kubeconfig] Writing "scheduler.conf" kubeconfig file
  34. [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  35. [control-plane] Creating static Pod manifest for "kube-apiserver"
  36. W0204 00:39:51.810550 30989 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  37. [control-plane] Creating static Pod manifest for "kube-controller-manager"
  38. W0204 00:39:51.811109 30989 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
  39. [control-plane] Creating static Pod manifest for "kube-scheduler"
  40. [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  41. [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  42. [apiclient] All control plane components are healthy after 35.006692 seconds
  43. [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  44. [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
  45. [upload-certs] Skipping phase. Please see --upload-certs
  46. [mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
  47. [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  48. [bootstrap-token] Using token: abcdef.0123456789abcdef
  49. [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  50. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  51. [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  52. [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  53. [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  54. [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
  55. [addons] Applied essential addon: CoreDNS
  56. [addons] Applied essential addon: kube-proxy
  57. Your Kubernetes control-plane has initialized successfully!
  58. To start using your cluster, you need to run the following as a regular user:
  59. mkdir -p $HOME/.kube
  60. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  61. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  62. You should now deploy a pod network to the cluster.
  63. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  64. https://kubernetes.io/docs/concepts/cluster-administration/addons/
  65. Then you can join any number of worker nodes by running the following on each as root:
  66. kubeadm join 192.168.31.90:6443 --token abcdef.0123456789abcdef \
  67. --discovery-token-ca-cert-hash sha256:09959f846dba6a855fbbd090e99b4ba1df4e643ec1a1578c28eaf9a9d3ea6a03
  68. [root@master ~]#

19: Configure the user kubeconfig

[root@master ~]#mkdir -p $HOME/.kube
[root@master ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]#sudo chown $(id -u):$(id -g) $HOME/.kube/config
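Alternatively, if you are working as root, the upstream kubeadm documentation also allows pointing kubectl straight at the admin kubeconfig instead of copying it:

  export KUBECONFIG=/etc/kubernetes/admin.conf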

20: Check the cluster status

  1. [root@master ~]#kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. master NotReady master 2m34s v1.17.2
  4. [root@master ~]#
  5. [root@master ~]# kubectl get cs
  6. NAME STATUS MESSAGE ERROR
  7. scheduler Healthy ok
  8. controller-manager Healthy ok
  9. etcd-0 Healthy {"health":"true"}

21: kubectl command auto-completion

  1. source <(kubectl completion bash)
  2. echo "source <(kubectl completion bash)" >> ~/.bashrc

22: Deploy the Kubernetes pod network

Calico
https://www.projectcalico.org/
https://docs.projectcalico.org/getting-started/kubernetes/
Deploy the Calico network:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

  1. [root@master ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  2. configmap/calico-config created
  3. customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
  4. customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
  5. customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
  6. customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
  7. customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
  8. customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
  9. customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
  10. customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
  11. customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
  12. customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
  13. customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
  14. customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
  15. customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
  16. customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
  17. clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
  18. clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
  19. clusterrole.rbac.authorization.k8s.io/calico-node created
  20. clusterrolebinding.rbac.authorization.k8s.io/calico-node created
  21. daemonset.apps/calico-node created
  22. serviceaccount/calico-node created
  23. deployment.apps/calico-kube-controllers created
  24. serviceaccount/calico-kube-controllers created
  25. [root@master ~]#

23: Check the network plugin deployment status

  1. [root@master ~]# kubectl get pod -n kube-system
  2. NAME READY STATUS RESTARTS AGE
  3. calico-kube-controllers-77c4b7448-hfqws 1/1 Running 0 32s
  4. calico-node-59p6f 1/1 Running 0 32s
  5. coredns-9d85f5447-6wgkd 1/1 Running 0 2m5s
  6. coredns-9d85f5447-bkjj8 1/1 Running 0 2m5s
  7. etcd-master 1/1 Running 0 2m2s
  8. kube-apiserver-master 1/1 Running 0 2m2s
  9. kube-controller-manager-master 1/1 Running 0 2m2s
  10. kube-proxy-lwww6 1/1 Running 0 2m5s
  11. kube-scheduler-master 1/1 Running 0 2m2s

24: Join worker nodes

  1. [root@node02 ~]# kubeadm join 192.168.31.90:6443 --token abcdef.0123456789abcdef \
  2. > --discovery-token-ca-cert-hash sha256:09959f846dba6a855fbbd090e99b4ba1df4e643ec1a1578c28eaf9a9d3ea6a03
  3. W0204 00:46:53.878006 30928 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
  4. [preflight] Running pre-flight checks
  5. [WARNING Hostname]: hostname "node02" could not be reached
  6. [WARNING Hostname]: hostname "node02": lookup node02 on 114.114.114.114:53: no such host
  7. [preflight] Reading configuration from the cluster...
  8. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  9. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
  10. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  11. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  12. [kubelet-start] Starting the kubelet
  13. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
  14. This node has joined the cluster:
  15. * Certificate signing request was sent to apiserver and a response was received.
  16. * The Kubelet was informed of the new secure connection details.
  17. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  # If the token has expired, generate a new one first
  kubeadm token create
  # 1. List the tokens
  kubeadm token list | awk -F" " '{print $1}' |tail -n 1
  # 2. Get the SHA-256 hash of the CA public key
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^ .* //'
  # 3. Join the worker node to the cluster
  kubeadm join 192.168.40.8:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
  1. [root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^ .* //'
  2. (stdin)= 5be219290c207c460901a557a1727cbda0cf5638eba06a87ffe99962f6438966
  3. [root@master ~]#
  4. [root@master ~]# kubeadm token list | awk -F" " '{print $1}' |tail -n 1
  5. abcdef.0123456789abcdef
  6. [root@master ~]#
  7. [root@master ~]# kubeadm join 192.168.6.111:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:5be219290c207c460901a557a1727cbda0cf5638eba06a87ffe99962f6438966
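A convenient shortcut on reasonably recent kubeadm releases: mint a fresh token and print the complete join command in one step:

  kubeadm token create --print-join-command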

25: Cluster checks


  1. [root@master ~]# kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. master Ready master 19m v1.17.2
  4. node01 Ready <none> 15m v1.17.2
  5. node02 Ready <none> 13m v1.17.2
  6. [root@master ~]# kubectl get cs
  7. NAME STATUS MESSAGE ERROR
  8. controller-manager Healthy ok
  9. scheduler Healthy ok
  10. etcd-0 Healthy {"health":"true"}
  11. [root@master ~]# kubectl get pod -A
  12. NAMESPACE NAME READY STATUS RESTARTS AGE
  13. kube-system calico-kube-controllers-77c4b7448-zd6dt 0/1 Error 1 119s
  14. kube-system calico-node-2fcbs 1/1 Running 0 119s
  15. kube-system calico-node-56f95 1/1 Running 0 119s
  16. kube-system calico-node-svlg9 1/1 Running 0 119s
  17. kube-system coredns-9d85f5447-4f4hq 0/1 Running 0 19m
  18. kube-system coredns-9d85f5447-n68wd 0/1 Running 0 19m
  19. kube-system etcd-master 1/1 Running 0 19m
  20. kube-system kube-apiserver-master 1/1 Running 0 19m
  21. kube-system kube-controller-manager-master 1/1 Running 0 19m
  22. kube-system kube-proxy-ch4vl 1/1 Running 1 15m
  23. kube-system kube-proxy-fjl5c 1/1 Running 1 19m
  24. kube-system kube-proxy-hhsqc 1/1 Running 1 13m
  25. kube-system kube-scheduler-master 1/1 Running 0 19m
  26. [root@master ~]#

If kubectl get cs reports the scheduler and controller-manager as unhealthy, comment out the --port=0 argument in the kube-controller-manager and kube-scheduler static pod manifests generated by kubeadm (a sketch of the edit follows the listing below).

  1. [root@master ~]# ll /etc/kubernetes/manifests/
  2. total 16
  3. -rw------- 1 root root 2126 Mar 13 14:37 etcd.yaml
  4. -rw------- 1 root root 3186 Mar 13 14:37 kube-apiserver.yaml
  5. -rw------- 1 root root 2860 Mar 13 14:40 kube-controller-manager.yaml
  6. -rw------- 1 root root 1414 Mar 13 14:40 kube-scheduler.yaml
  7. [root@master ~]#
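A hedged sketch of that edit: back up both manifests, comment out --port=0, and let the kubelet recreate the static pods (the exact flag layout can differ between Kubernetes versions):

  cd /etc/kubernetes/manifests/
  cp kube-controller-manager.yaml kube-scheduler.yaml /root/
  sed -i 's/^\(\s*\)- --port=0/\1#- --port=0/' kube-controller-manager.yaml kube-scheduler.yaml
  systemctl restart kubelet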


26: Inspect the certificates

  [root@master ~]# cd /etc/kubernetes/
  [root@master kubernetes]# ll
  total 32
  -rw------- 1 root root 5453 Feb 4 21:26 admin.conf
  -rw------- 1 root root 5485 Feb 4 21:26 controller-manager.conf
  -rw------- 1 root root 1861 Feb 4 21:27 kubelet.conf
  drwxr-xr-x 2 root root 113 Feb 4 21:26 manifests
  drwxr-x--- 3 root root 4096 Feb 4 21:26 pki
  -rw------- 1 root root 5437 Feb 4 21:26 scheduler.conf
  [root@master kubernetes]# tree
  .
  ├── admin.conf
  ├── controller-manager.conf
  ├── kubelet.conf
  ├── manifests
  │   ├── etcd.yaml
  │   ├── kube-apiserver.yaml
  │   ├── kube-controller-manager.yaml
  │   └── kube-scheduler.yaml
  ├── pki
  │   ├── apiserver.crt
  │   ├── apiserver-etcd-client.crt
  │   ├── apiserver-etcd-client.key
  │   ├── apiserver.key
  │   ├── apiserver-kubelet-client.crt
  │   ├── apiserver-kubelet-client.key
  │   ├── ca.crt
  │   ├── ca.key
  │   ├── etcd
  │   │   ├── ca.crt
  │   │   ├── ca.key
  │   │   ├── healthcheck-client.crt
  │   │   ├── healthcheck-client.key
  │   │   ├── peer.crt
  │   │   ├── peer.key
  │   │   ├── server.crt
  │   │   └── server.key
  │   ├── front-proxy-ca.crt
  │   ├── front-proxy-ca.key
  │   ├── front-proxy-client.crt
  │   ├── front-proxy-client.key
  │   ├── sa.key
  │   └── sa.pub
  └── scheduler.conf

  3 directories, 30 files
  [root@master kubernetes]#


27: Check certificate expiration dates

  1. [root@master ~]# kubeadm alpha certs check-expiration
  2. [check-expiration] Reading configuration from the cluster...
  3. [check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  4. CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
  5. admin.conf Mar 03, 2022 13:47 UTC 364d no
  6. apiserver Mar 03, 2022 13:47 UTC 364d ca no
  7. apiserver-etcd-client Mar 03, 2022 13:47 UTC 364d etcd-ca no
  8. apiserver-kubelet-client Mar 03, 2022 13:47 UTC 364d ca no
  9. controller-manager.conf Mar 03, 2022 13:47 UTC 364d no
  10. etcd-healthcheck-client Mar 03, 2022 13:47 UTC 364d etcd-ca no
  11. etcd-peer Mar 03, 2022 13:47 UTC 364d etcd-ca no
  12. etcd-server Mar 03, 2022 13:47 UTC 364d etcd-ca no
  13. front-proxy-client Mar 03, 2022 13:47 UTC 364d front-proxy-ca no
  14. scheduler.conf Mar 03, 2022 13:47 UTC 364d no
  15. CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
  16. ca Mar 01, 2031 13:47 UTC 9y no
  17. etcd-ca Mar 01, 2031 13:47 UTC 9y no
  18. front-proxy-ca Mar 01, 2031 13:47 UTC 9y no
  19. [root@master ~]#
  1. [root@master ~]# cd /etc/kubernetes/pki/
  2. [root@master pki]# ll
  3. total 56
  4. -rw-r----- 1 root root 1216 Feb 4 21:26 apiserver.crt
  5. -rw-r----- 1 root root 1090 Feb 4 21:26 apiserver-etcd-client.crt
  6. -rw------- 1 root root 1679 Feb 4 21:26 apiserver-etcd-client.key
  7. -rw------- 1 root root 1679 Feb 4 21:26 apiserver.key
  8. -rw-r----- 1 root root 1099 Feb 4 21:26 apiserver-kubelet-client.crt
  9. -rw------- 1 root root 1679 Feb 4 21:26 apiserver-kubelet-client.key
  10. -rw-r----- 1 root root 1025 Feb 4 21:26 ca.crt
  11. -rw------- 1 root root 1675 Feb 4 21:26 ca.key
  12. drwxr-x--- 2 root root 162 Feb 4 21:26 etcd
  13. -rw-r----- 1 root root 1038 Feb 4 21:26 front-proxy-ca.crt
  14. -rw------- 1 root root 1679 Feb 4 21:26 front-proxy-ca.key
  15. -rw-r----- 1 root root 1058 Feb 4 21:26 front-proxy-client.crt
  16. -rw------- 1 root root 1679 Feb 4 21:26 front-proxy-client.key
  17. -rw------- 1 root root 1679 Feb 4 21:26 sa.key
  18. -rw------- 1 root root 451 Feb 4 21:26 sa.pub
  19. [root@master pki]# openssl x509 -in apiserver.crt -noout -text |grep Not
  20. Not Before: Feb 4 13:26:53 2020 GMT
  21. Not After : Feb 3 13:26:53 2021 GMT
  22. [root@master pki]#
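To check the expiry of every certificate under pki in one pass (a small sketch reusing the same openssl call):

  for c in $(find /etc/kubernetes/pki -name '*.crt'); do
    echo "== $c"
    openssl x509 -in "$c" -noout -enddate
  done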


28: Renew the certificates

https://kubernetes.io/zh/docs/tasks/tls/certificate-rotation

  1. [root@master ~]# kubeadm config view > /root/kubeadm.yaml
  2. [root@master ~]# ll /root/kubeadm.yaml
  3. -rw-r----- 1 root root 492 Feb 24 14:07 /root/kubeadm.yaml
  4. [root@master ~]# cat /root/kubeadm.yaml
  5. apiServer:
  6. extraArgs:
  7. authorization-mode: Node,RBAC
  8. timeoutForControlPlane: 4m0s
  9. apiVersion: kubeadm.k8s.io/v1beta2
  10. certificatesDir: /etc/kubernetes/pki
  11. clusterName: kubernetes
  12. controllerManager: {}
  13. dns:
  14. type: CoreDNS
  15. etcd:
  16. local:
  17. dataDir: /var/lib/etcd
  18. imageRepository: registry.aliyuncs.com/google_containers
  19. kind: ClusterConfiguration
  20. kubernetesVersion: v1.17.2
  21. networking:
  22. dnsDomain: cluster.local
  23. podSubnet: 192.168.0.0/16
  24. serviceSubnet: 10.96.0.0/12
  25. scheduler: {}
  26. [root@master ~]# kubeadm alpha certs renew all --config=/root/kubeadm.yaml
  27. W0224 14:08:21.385077 47490 validation.go:28] Cannot validate kube-proxy config - no validator is available
  28. W0224 14:08:21.385111 47490 validation.go:28] Cannot validate kubelet config - no validator is available
  29. certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
  30. certificate for serving the Kubernetes API renewed
  31. certificate the apiserver uses to access etcd renewed
  32. certificate for the API server to connect to kubelet renewed
  33. certificate embedded in the kubeconfig file for the controller manager to use renewed
  34. certificate for liveness probes to healthcheck etcd renewed
  35. certificate for etcd nodes to communicate with each other renewed
  36. certificate for serving etcd renewed
  37. certificate for the front proxy client renewed
  38. certificate embedded in the kubeconfig file for the scheduler manager to use renewed
  39. [root@master ~]# kubeadm alpha certs check-expiration
  40. [check-expiration] Reading configuration from the cluster...
  41. [check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  42. CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
  43. admin.conf Feb 23, 2021 06:08 UTC 364d no
  44. apiserver Feb 23, 2021 06:08 UTC 364d ca no
  45. apiserver-etcd-client Feb 23, 2021 06:08 UTC 364d etcd-ca no
  46. apiserver-kubelet-client Feb 23, 2021 06:08 UTC 364d ca no
  47. controller-manager.conf Feb 23, 2021 06:08 UTC 364d no
  48. etcd-healthcheck-client Feb 23, 2021 06:08 UTC 364d etcd-ca no
  49. etcd-peer Feb 23, 2021 06:08 UTC 364d etcd-ca no
  50. etcd-server Feb 23, 2021 06:08 UTC 364d etcd-ca no
  51. front-proxy-client Feb 23, 2021 06:08 UTC 364d front-proxy-ca no
  52. scheduler.conf Feb 23, 2021 06:08 UTC 364d no
  53. CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
  54. ca Feb 01, 2030 13:26 UTC 9y no
  55. etcd-ca Feb 01, 2030 13:26 UTC 9y no
  56. front-proxy-ca Feb 01, 2030 13:26 UTC 9y no
  57. [root@master ~]# docker ps |grep -E 'k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd' | awk -F ' ' '{print $1}' |xargs docker restart
  58. 9d2d1fa8ce88
  59. 4182331c88c3
  60. 1620f3a3a4da
  61. 6904710a5e7d
  62. [root@master ~]# cd /etc/kubernetes/pki/
  63. [root@master pki]# openssl x509 -in apiserver.crt -noout -text |grep Not
  64. Not Before: Feb 4 13:26:53 2020 GMT
  65. Not After : Feb 23 06:08:21 2021 GMT
  66. [root@master pki]#


29: Kubernetes management platform

rancher
https://docs.rancher.cn/rancher2x/

30: Cluster upgrade, v1.17.2 to v1.17.4 (you cannot skip minor versions)

Check for a newer version with kubeadm upgrade plan

  1. [root@master ~]# kubeadm upgrade plan
  2. [upgrade/config] Making sure the configuration is correct:
  3. [upgrade/config] Reading configuration from the cluster...
  4. [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  5. [preflight] Running pre-flight checks.
  6. [upgrade] Making sure the cluster is healthy:
  7. [upgrade] Fetching available versions to upgrade to
  8. [upgrade/versions] Cluster version: v1.17.2
  9. [upgrade/versions] kubeadm version: v1.17.2
  10. I0326 09:28:34.816310 3805 version.go:251] remote version is much newer: v1.18.0; falling back to: stable-1.17
  11. [upgrade/versions] Latest stable version: v1.17.4
  12. [upgrade/versions] Latest version in the v1.17 series: v1.17.4
  13. Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
  14. COMPONENT CURRENT AVAILABLE
  15. Kubelet 3 x v1.17.2 v1.17.4
  16. Upgrade to the latest version in the v1.17 series:
  17. COMPONENT CURRENT AVAILABLE
  18. API Server v1.17.2 v1.17.4
  19. Controller Manager v1.17.2 v1.17.4
  20. Scheduler v1.17.2 v1.17.4
  21. Kube Proxy v1.17.2 v1.17.4
  22. CoreDNS 1.6.5 1.6.5
  23. Etcd 3.4.3 3.4.3-0
  24. You can now apply the upgrade by executing the following command:
  25. kubeadm upgrade apply v1.17.4
  26. Note: Before you can perform this upgrade, you have to update kubeadm to v1.17.4.
  27. _____________________________________________________________________
  28. [root@master ~]#
  1. [root@master ~]# kubectl -n kube-system get cm kubeadm-config -oyaml
  2. apiVersion: v1
  3. data:
  4. ClusterConfiguration: |
  5. apiServer:
  6. extraArgs:
  7. authorization-mode: Node,RBAC
  8. timeoutForControlPlane: 4m0s
  9. apiVersion: kubeadm.k8s.io/v1beta2
  10. certificatesDir: /etc/kubernetes/pki
  11. clusterName: kubernetes
  12. controllerManager: {}
  13. dns:
  14. type: CoreDNS
  15. etcd:
  16. local:
  17. dataDir: /var/lib/etcd
  18. imageRepository: registry.aliyuncs.com/google_containers
  19. kind: ClusterConfiguration
  20. kubernetesVersion: v1.18.0
  21. networking:
  22. dnsDomain: cluster.local
  23. serviceSubnet: 10.96.0.0/12
  24. scheduler: {}
  25. ClusterStatus: |
  26. apiEndpoints:
  27. master:
  28. advertiseAddress: 192.168.11.90
  29. bindPort: 6443
  30. apiVersion: kubeadm.k8s.io/v1beta2
  31. kind: ClusterStatus
  32. kind: ConfigMap
  33. metadata:
  34. creationTimestamp: "2021-02-27T03:08:25Z"
  35. managedFields:
  36. - apiVersion: v1
  37. fieldsType: FieldsV1
  38. fieldsV1:
  39. f:data:
  40. .: {}
  41. f:ClusterConfiguration: {}
  42. f:ClusterStatus: {}
  43. manager: kubeadm
  44. operation: Update
  45. time: "2021-02-27T03:08:25Z"
  46. name: kubeadm-config
  47. namespace: kube-system
  48. resourceVersion: "158"
  49. selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  50. uid: e7ba4014-ba35-47bd-b8fc-c4747c4b4603
  51. [root@master ~]# kubeadm upgrade plan
  52. [upgrade/config] Making sure the configuration is correct:
  53. [upgrade/config] Reading configuration from the cluster...
  54. [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  55. [preflight] Running pre-flight checks.
  56. [upgrade] Running cluster health checks
  57. [upgrade] Fetching available versions to upgrade to
  58. [upgrade/versions] Cluster version: v1.18.0
  59. [upgrade/versions] kubeadm version: v1.18.0
  60. I0328 01:13:38.358116 47799 version.go:252] remote version is much newer: v1.20.5; falling back to: stable-1.18
  61. [upgrade/versions] Latest stable version: v1.18.17
  62. [upgrade/versions] Latest stable version: v1.18.17
  63. [upgrade/versions] Latest version in the v1.18 series: v1.18.17
  64. [upgrade/versions] Latest version in the v1.18 series: v1.18.17
  65. Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
  66. COMPONENT CURRENT AVAILABLE
  67. Kubelet 3 x v1.18.0 v1.18.17
  68. Upgrade to the latest version in the v1.18 series:
  69. COMPONENT CURRENT AVAILABLE
  70. API Server v1.18.0 v1.18.17
  71. Controller Manager v1.18.0 v1.18.17
  72. Scheduler v1.18.0 v1.18.17
  73. Kube Proxy v1.18.0 v1.18.17
  74. CoreDNS 1.6.7 1.6.7
  75. Etcd 3.4.3 3.4.3-0
  76. You can now apply the upgrade by executing the following command:
  77. kubeadm upgrade apply v1.18.17
  78. Note: Before you can perform this upgrade, you have to update kubeadm to v1.18.17.
  79. _____________________________________________________________________
  80. [root@master ~]# kubectl get node
  81. NAME STATUS ROLES AGE VERSION
  82. master Ready,SchedulingDisabled master 28d v1.18.0
  83. node01 Ready <none> 28d v1.18.0
  84. node02 Ready <none> 28d v1.18.0
  85. [root@master ~]#

Upgrade kubeadm, kubelet and kubectl:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

To pin the upgrade to a specific version instead, see the sketch right below, then restart the kubelet:
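A version-pinned variant (a sketch; 1.17.4-0 matches this upgrade example and follows the package naming shown later in these notes):

  yum install -y kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 --disableexcludes=kubernetes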

  1. systemctl daemon-reload
  2. systemctl restart kubelet.service
  1. [root@master ~]# kubectl version
  2. Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
  3. Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
  4. [root@master ~]#

Dry-run the upgrade using the cluster's configuration file:

  1. kubeadm upgrade apply v1.17.4 --config init.default.yaml --dry-run

Apply the upgrade to the specified version:

  1. [root@master ~]# kubeadm upgrade apply v1.17.4 --config init.default.yaml
  2. W0326 09:42:59.690575 25446 validation.go:28] Cannot validate kube-proxy config - no validator is available
  3. W0326 09:42:59.690611 25446 validation.go:28] Cannot validate kubelet config - no validator is available
  4. [upgrade/config] Making sure the configuration is correct:
  5. W0326 09:42:59.701115 25446 common.go:94] WARNING: Usage of the --config flag for reconfiguring the cluster during upgrade is not recommended!
  6. W0326 09:42:59.701862 25446 validation.go:28] Cannot validate kube-proxy config - no validator is available
  7. W0326 09:42:59.701870 25446 validation.go:28] Cannot validate kubelet config - no validator is available
  8. [preflight] Running pre-flight checks.
  9. [upgrade] Making sure the cluster is healthy:
  10. [upgrade/version] You have chosen to change the cluster version to "v1.17.4"
  11. [upgrade/versions] Cluster version: v1.17.2
  12. [upgrade/versions] kubeadm version: v1.17.4
  13. [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
  14. [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
  15. [upgrade/prepull] Prepulling image for component etcd.
  16. [upgrade/prepull] Prepulling image for component kube-apiserver.
  17. [upgrade/prepull] Prepulling image for component kube-controller-manager.
  18. [upgrade/prepull] Prepulling image for component kube-scheduler.
  19. [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
  20. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
  21. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
  22. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
  23. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
  24. [upgrade/prepull] Prepulled image for component etcd.
  25. [upgrade/prepull] Prepulled image for component kube-scheduler.
  26. [upgrade/prepull] Prepulled image for component kube-apiserver.
  27. [upgrade/prepull] Prepulled image for component kube-controller-manager.
  28. [upgrade/prepull] Successfully prepulled the images for all the control plane components
  29. [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.4"...
  30. Static pod: kube-apiserver-master hash: 221485f981f15fee9c0123f34ca082cd
  31. Static pod: kube-controller-manager-master hash: 8d9fdb8447a20709c28d62b361e21c5c
  32. Static pod: kube-scheduler-master hash: 5fd6ddfbc568223e0845f80bd6fd6a1a
  33. [upgrade/etcd] Upgrading to TLS for etcd
  34. [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.17.4" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
  35. [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests668951726"
  36. [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
  37. [upgrade/staticpods] Renewing apiserver certificate
  38. [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
  39. [upgrade/staticpods] Renewing front-proxy-client certificate
  40. [upgrade/staticpods] Renewing apiserver-etcd-client certificate
  41. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-26-09-43-17/kube-apiserver.yaml"
  42. [upgrade/staticpods] Waiting for the kubelet to restart the component
  43. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  44. Static pod: kube-apiserver-master hash: 221485f981f15fee9c0123f34ca082cd
  45. Static pod: kube-apiserver-master hash: 5fc5a9f3b46c1fd494c3e99e0c7d307c
  46. [apiclient] Found 1 Pods for label selector component=kube-apiserver
  47. [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
  48. [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
  49. [upgrade/staticpods] Renewing controller-manager.conf certificate
  50. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-26-09-43-17/kube-controller-manager.yaml"
  51. [upgrade/staticpods] Waiting for the kubelet to restart the component
  52. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  53. Static pod: kube-controller-manager-master hash: 8d9fdb8447a20709c28d62b361e21c5c
  54. Static pod: kube-controller-manager-master hash: 31ffb42eb9357ac50986e0b46ee527f8
  55. [apiclient] Found 1 Pods for label selector component=kube-controller-manager
  56. [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
  57. [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
  58. [upgrade/staticpods] Renewing scheduler.conf certificate
  59. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-26-09-43-17/kube-scheduler.yaml"
  60. [upgrade/staticpods] Waiting for the kubelet to restart the component
  61. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  62. Static pod: kube-scheduler-master hash: 5fd6ddfbc568223e0845f80bd6fd6a1a
  63. Static pod: kube-scheduler-master hash: b265ed564e34d3887fe43a6a6210fbd4
  64. [apiclient] Found 1 Pods for label selector component=kube-scheduler
  65. [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
  66. [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  67. [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
  68. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
  69. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  70. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  71. [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  72. [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  73. [addons]: Migrating CoreDNS Corefile
  74. [addons] Applied essential addon: CoreDNS
  75. [addons] Applied essential addon: kube-proxy
  76. [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.4". Enjoy!
  77. [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
  78. [root@master ~]#

The control-plane components upgraded successfully.
Restart the kubelet on the master node:

  1. systemctl daemon-reload
  2. systemctl restart kubelet

After a short wait, the master components show the completed update:

  1. [root@master ~]# kubectl get pods -n kube-system
  2. NAME READY STATUS RESTARTS AGE
  3. calico-kube-controllers-77c4b7448-9nrx2 1/1 Running 6 50d
  4. calico-node-mc7s7 0/1 Running 6 50d
  5. calico-node-p8rm7 1/1 Running 5 50d
  6. calico-node-wwtkl 1/1 Running 5 50d
  7. coredns-9d85f5447-ht4c4 1/1 Running 6 50d
  8. coredns-9d85f5447-wvjlb 1/1 Running 6 50d
  9. etcd-master 1/1 Running 6 50d
  10. kube-apiserver-master 1/1 Running 0 2m59s
  11. kube-controller-manager-master 1/1 Running 0 2m55s
  12. kube-proxy-5hdvx 1/1 Running 0 2m1s
  13. kube-proxy-7hdzf 1/1 Running 0 2m7s
  14. kube-proxy-v4hjt 1/1 Running 0 2m17s
  15. kube-scheduler-master 1/1 Running 0 2m52s
  16. [root@master ~]# systemctl daemon-reload
  17. [root@master ~]# systemctl restart kubelet
  18. [root@master ~]# kubectl get nodes
  19. NAME STATUS ROLES AGE VERSION
  20. master NotReady master 50d v1.17.2
  21. node01 Ready <none> 50d v1.17.2
  22. node02 Ready <none> 50d v1.17.2
  23. [root@master ~]# kubectl get nodes
  24. NAME STATUS ROLES AGE VERSION
  25. master Ready master 50d v1.17.4
  26. node01 Ready <none> 50d v1.17.2
  27. node02 Ready <none> 50d v1.17.2
  28. [root@master ~]#

Upgrade the kubelet on the worker nodes (a fuller per-node flow, including draining, is sketched after these commands):

  1. yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
  2. systemctl daemon-reload
  3. systemctl restart kubelet.service
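The official upgrade guide linked below also drains each node before upgrading it and uncordons it afterwards; a sketch for node01 (node names follow this cluster's examples, and the pinned version matches this upgrade):

  # on the master
  kubectl drain node01 --ignore-daemonsets
  # on node01
  yum install -y kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 --disableexcludes=kubernetes
  kubeadm upgrade node
  systemctl daemon-reload && systemctl restart kubelet
  # back on the master
  kubectl uncordon node01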

Wait a moment for the node status to refresh, and all nodes report the upgraded version:

  1. [root@master ~]# kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. master Ready master 50d v1.17.4
  4. node01 Ready <none> 50d v1.17.2
  5. node02 Ready <none> 50d v1.17.2
  6. [root@master ~]# kubectl get nodes
  7. NAME STATUS ROLES AGE VERSION
  8. master Ready master 50d v1.17.4
  9. node01 Ready <none> 50d v1.17.4
  10. node02 Ready <none> 50d v1.17.4
  11. [root@master ~]#

Official documentation:
https://kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

  1. yum list --showduplicates kubeadm --disableexcludes=kubernetes
  2. yum install kubeadm-1.18.0-0 --disableexcludes=kubernetes
  3. yum install kubelet-1.18.0-0 kubectl-1.18.0-0 --disableexcludes=kubernetes
  1. [root@master ~]# kubectl get pod -v=9
  2. I0412 19:14:41.572613 118710 loader.go:375] Config loaded from file: /root/.kube/config
  3. I0412 19:14:41.579296 118710 round_trippers.go:423] curl -k -v -XGET -H "User-Agent: kubectl/v1.18.0 (linux/amd64) kubernetes/9e99141" -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" 'https://192.168.31.90:6443/api/v1/namespaces/default/pods?limit=500'
  4. I0412 19:14:41.588188 118710 round_trippers.go:443] GET https://192.168.31.90:6443/api/v1/namespaces/default/pods?limit=500 200 OK in 8 milliseconds
  5. I0412 19:14:41.588216 118710 round_trippers.go:449] Response Headers:
  6. I0412 19:14:41.588220 118710 round_trippers.go:452] Content-Type: application/json
  7. I0412 19:14:41.588222 118710 round_trippers.go:452] Content-Length: 2927
  8. I0412 19:14:41.588224 118710 round_trippers.go:452] Date: Sun, 12 Apr 2020 11:14:41 GMT
  9. I0412 19:14:41.588317 118710 request.go:1068] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"580226"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"integer","format":"","description":"The number of times the containers in this pod have been restarted.","priority":0},{"name":"Age","type":"string","format":"","description":"CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata","priority":0},{"name":"IP","type":"string","format":"","description":"IP address allocated to the pod. Routable at least within the cluster. Empty if not yet allocated.","priority":1},{"name":"Node","type":"string","format":"","description":"NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements.","priority":1},{"name":"Nominated Node","type":"string","format":"","description":"nominatedNodeName is set only when this pod preempts other pods on the node, but it cannot be scheduled right away as preemption victims receive their graceful termination periods. This field does not guarantee that the pod will be scheduled on this node. Scheduler may decide to place the pod elsewhere if other nodes become available sooner. Scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption. As a result, this field may be different than PodSpec.nodeName when the pod is scheduled.","priority":1},{"name":"Readiness Gates","type":"string","format":"","description":"If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to \"True\" More info: https://git.k8s.io/enhancements/keps/sig-network/0007-pod-ready%2B%2B.md","priority":1}],"rows":[]}
  10. No resources found in default namespace.
  11. [root@master ~]#
  1. [root@master ~]# kubeadm upgrade plan
  2. [upgrade/config] Making sure the configuration is correct:
  3. [upgrade/config] Reading configuration from the cluster...
  4. [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  5. [preflight] Running pre-flight checks.
  6. [upgrade] Running cluster health checks
  7. [upgrade] Fetching available versions to upgrade to
  8. [upgrade/versions] Cluster version: v1.18.0
  9. [upgrade/versions] kubeadm version: v1.18.0
  10. [upgrade/versions] Latest stable version: v1.18.1
  11. [upgrade/versions] Latest stable version: v1.18.1
  12. [upgrade/versions] Latest version in the v1.18 series: v1.18.1
  13. [upgrade/versions] Latest version in the v1.18 series: v1.18.1
  14. Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
  15. COMPONENT CURRENT AVAILABLE
  16. Kubelet 3 x v1.18.0 v1.18.1
  17. Upgrade to the latest version in the v1.18 series:
  18. COMPONENT CURRENT AVAILABLE
  19. API Server v1.18.0 v1.18.1
  20. Controller Manager v1.18.0 v1.18.1
  21. Scheduler v1.18.0 v1.18.1
  22. Kube Proxy v1.18.0 v1.18.1
  23. CoreDNS 1.6.7 1.6.7
  24. Etcd 3.4.3 3.4.3-0
  25. You can now apply the upgrade by executing the following command:
  26. kubeadm upgrade apply v1.18.1
  27. Note: Before you can perform this upgrade, you have to update kubeadm to v1.18.1.
  28. _____________________________________________________________________
  29. [root@master ~]#
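The plan output above already spells out the order of operations. As a quick reference, the steps that follow in this document amount to roughly the sequence below (a minimal sketch for this single-master lab cluster; version numbers match the v1.18.0 → v1.18.1 upgrade shown here):

  1. # 1) upgrade the kubeadm package itself
  2. yum install kubeadm-1.18.1-0 --disableexcludes=kubernetes -y
  3. # 2) apply the control-plane upgrade on the master
  4. kubeadm upgrade apply v1.18.1
  5. # 3) upgrade kubelet and kubectl, then restart the kubelet
  6. yum install kubelet-1.18.1-0 kubectl-1.18.1-0 --disableexcludes=kubernetes -y
  7. systemctl daemon-reload && systemctl restart kubelet
  8. # 4) repeat the package upgrade + kubelet restart on each worker node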
  1. [root@master ~]# yum install kubeadm-1.18.1-0 --disableexcludes=kubernetes
  2. Loaded plugins: fastestmirror
  3. Loading mirror speeds from cached hostfile
  4. * base: mirrors.aliyun.com
  5. * extras: mirrors.aliyun.com
  6. * updates: mirrors.aliyun.com
  7. base | 3.6 kB 00:00:00
  8. docker-ce-stable | 3.5 kB 00:00:00
  9. Resolving Dependencies
  10. --> Running transaction check
  11. ---> Package kubeadm.x86_64 0:1.18.0-0 will be updated
  12. ---> Package kubeadm.x86_64 0:1.18.1-0 will be an update
  13. --> Finished Dependency Resolution
  14. Dependencies Resolved
  15. =======================================================================================================================================
  16. Package Arch Version Repository Size
  17. =======================================================================================================================================
  18. Updating:
  19. kubeadm x86_64 1.18.1-0 kubernetes 8.8 M
  20. Transaction Summary
  21. =======================================================================================================================================
  22. Upgrade 1 Package
  23. Total download size: 8.8 M
  24. Is this ok [y/d/N]: y
  25. Downloading packages:
  26. Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
  27. a86b51d48af8df740035f8bc4c0c30d994d5c8ef03388c21061372d5c5b2859d-kubeadm-1.18.1-0.x86_64.rpm | 8.8 MB 00:00:01
  28. Running transaction check
  29. Running transaction test
  30. Transaction test succeeded
  31. Running transaction
  32. Updating : kubeadm-1.18.1-0.x86_64 1/2
  33. Cleanup : kubeadm-1.18.0-0.x86_64 2/2
  34. Verifying : kubeadm-1.18.1-0.x86_64 1/2
  35. Verifying : kubeadm-1.18.0-0.x86_64 2/2
  36. Updated:
  37. kubeadm.x86_64 0:1.18.1-0
  38. Complete!
  39. [root@master ~]#
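Before applying the upgrade, it can be worth confirming that the new kubeadm binary is the one on the PATH (a quick sanity check, not part of the original log):

  1. kubeadm version -o short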
  1. [root@master ~]# yum install kubelet-1.18.1-0 kubectl-1.18.1-0 --disableexcludes=kubernetes -y
  2. Loaded plugins: fastestmirror
  3. Loading mirror speeds from cached hostfile
  4. * base: mirrors.aliyun.com
  5. * extras: mirrors.aliyun.com
  6. * updates: mirrors.aliyun.com
  7. Resolving Dependencies
  8. --> Running transaction check
  9. ---> Package kubectl.x86_64 0:1.18.0-0 will be updated
  10. ---> Package kubectl.x86_64 0:1.18.1-0 will be an update
  11. ---> Package kubelet.x86_64 0:1.18.0-0 will be updated
  12. ---> Package kubelet.x86_64 0:1.18.1-0 will be an update
  13. --> Finished Dependency Resolution
  14. Dependencies Resolved
  15. =======================================================================================================================================
  16. Package Arch Version Repository Size
  17. =======================================================================================================================================
  18. Updating:
  19. kubectl x86_64 1.18.1-0 kubernetes 9.5 M
  20. kubelet x86_64 1.18.1-0 kubernetes 21 M
  21. Transaction Summary
  22. =======================================================================================================================================
  23. Upgrade 2 Packages
  24. Total download size: 30 M
  25. Downloading packages:
  26. Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
  27. (1/2): 9b65a188779e61866501eb4e8a07f38494d40af1454ba9232f98fd4ced4ba935-kubectl-1.18.1-0.x86_64.rpm | 9.5 MB 00:00:02
  28. (2/2): 39b64bb11c6c123dd502af7d970cee95606dbf7fd62905de0412bdac5e875843-kubelet-1.18.1-0.x86_64.rpm | 21 MB 00:00:03
  29. ---------------------------------------------------------------------------------------------------------------------------------------
  30. Total 9.1 MB/s | 30 MB 00:00:03
  31. Running transaction check
  32. Running transaction test
  33. Transaction test succeeded
  34. Running transaction
  35. Updating : kubectl-1.18.1-0.x86_64 1/4
  36. Updating : kubelet-1.18.1-0.x86_64 2/4
  37. Cleanup : kubectl-1.18.0-0.x86_64 3/4
  38. Cleanup : kubelet-1.18.0-0.x86_64 4/4
  39. Verifying : kubelet-1.18.1-0.x86_64 1/4
  40. Verifying : kubectl-1.18.1-0.x86_64 2/4
  41. Verifying : kubectl-1.18.0-0.x86_64 3/4
  42. Verifying : kubelet-1.18.0-0.x86_64 4/4
  43. Updated:
  44. kubectl.x86_64 0:1.18.1-0 kubelet.x86_64 0:1.18.1-0
  45. Complete!
  46. [root@master ~]#
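Similarly, the freshly installed client and node binaries can be checked before moving on (not shown in the original log):

  1. kubelet --version
  2. kubectl version --client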
  1. [root@master ~]# kubeadm upgrade apply v1.18.1 --config init.default.yaml
  2. W0412 19:42:53.251523 28178 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  3. [upgrade/config] Making sure the configuration is correct:
  4. W0412 19:42:53.259480 28178 common.go:94] WARNING: Usage of the --config flag for reconfiguring the cluster during upgrade is not recommended!
  5. W0412 19:42:53.260162 28178 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  6. [preflight] Running pre-flight checks.
  7. [upgrade] Running cluster health checks
  8. [upgrade/version] You have chosen to change the cluster version to "v1.18.1"
  9. [upgrade/versions] Cluster version: v1.18.0
  10. [upgrade/versions] kubeadm version: v1.18.1
  11. [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
  12. [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
  13. [upgrade/prepull] Prepulling image for component etcd.
  14. [upgrade/prepull] Prepulling image for component kube-apiserver.
  15. [upgrade/prepull] Prepulling image for component kube-controller-manager.
  16. [upgrade/prepull] Prepulling image for component kube-scheduler.
  17. [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
  18. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
  19. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
  20. [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
  21. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
  22. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
  23. [upgrade/prepull] Prepulled image for component kube-apiserver.
  24. [upgrade/prepull] Prepulled image for component kube-controller-manager.
  25. [upgrade/prepull] Prepulled image for component kube-scheduler.
  26. [upgrade/prepull] Prepulled image for component etcd.
  27. [upgrade/prepull] Successfully prepulled the images for all the control plane components
  28. [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.1"...
  29. Static pod: kube-apiserver-master hash: b8bcd214f8b135af672715d5d8102bac
  30. Static pod: kube-controller-manager-master hash: cfac61c0d2fc7bcd14585e430d31d4b3
  31. Static pod: kube-scheduler-master hash: ca2aa1b3224c37fa1791ef6c7d883bbe
  32. [upgrade/etcd] Upgrading to TLS for etcd
  33. [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.1" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
  34. [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests936160488"
  35. [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
  36. [upgrade/staticpods] Renewing apiserver certificate
  37. [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
  38. [upgrade/staticpods] Renewing front-proxy-client certificate
  39. [upgrade/staticpods] Renewing apiserver-etcd-client certificate
  40. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-04-12-19-48-15/kube-apiserver.yaml"
  41. [upgrade/staticpods] Waiting for the kubelet to restart the component
  42. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  43. Static pod: kube-apiserver-master hash: b8bcd214f8b135af672715d5d8102bac
  44. [upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: timed out waiting for the condition
  45. To see the stack trace of this error execute with --v=5 or higher
  46. [root@master ~]# kubeadm upgrade apply v1.18.1 --config init.default.yaml
  47. W0412 19:54:26.305065 46592 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  48. [upgrade/config] Making sure the configuration is correct:
  49. W0412 19:54:26.314312 46592 common.go:94] WARNING: Usage of the --config flag for reconfiguring the cluster during upgrade is not recommended!
  50. W0412 19:54:26.315231 46592 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  51. [preflight] Running pre-flight checks.
  52. [upgrade] Running cluster health checks
  53. [upgrade/version] You have chosen to change the cluster version to "v1.18.1"
  54. [upgrade/versions] Cluster version: v1.18.0
  55. [upgrade/versions] kubeadm version: v1.18.1
  56. [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
  57. [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
  58. [upgrade/prepull] Prepulling image for component etcd.
  59. [upgrade/prepull] Prepulling image for component kube-apiserver.
  60. [upgrade/prepull] Prepulling image for component kube-controller-manager.
  61. [upgrade/prepull] Prepulling image for component kube-scheduler.
  62. [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
  63. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
  64. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
  65. [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
  66. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
  67. [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
  68. [upgrade/prepull] Prepulled image for component kube-apiserver.
  69. [upgrade/prepull] Prepulled image for component kube-controller-manager.
  70. [upgrade/prepull] Prepulled image for component etcd.
  71. [upgrade/prepull] Prepulled image for component kube-scheduler.
  72. [upgrade/prepull] Successfully prepulled the images for all the control plane components
  73. [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.1"...
  74. Static pod: kube-apiserver-master hash: b8bcd214f8b135af672715d5d8102bac
  75. Static pod: kube-controller-manager-master hash: cfac61c0d2fc7bcd14585e430d31d4b3
  76. Static pod: kube-scheduler-master hash: ca2aa1b3224c37fa1791ef6c7d883bbe
  77. [upgrade/etcd] Upgrading to TLS for etcd
  78. [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.1" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
  79. [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests721958972"
  80. [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
  81. [upgrade/staticpods] Renewing apiserver certificate
  82. [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
  83. [upgrade/staticpods] Renewing front-proxy-client certificate
  84. [upgrade/staticpods] Renewing apiserver-etcd-client certificate
  85. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-04-12-19-59-44/kube-apiserver.yaml"
  86. [upgrade/staticpods] Waiting for the kubelet to restart the component
  87. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  88. Static pod: kube-apiserver-master hash: b8bcd214f8b135af672715d5d8102bac
  89. Static pod: kube-apiserver-master hash: 3199b786e3649f9b8b3aba73c0e5e082
  90. [apiclient] Found 1 Pods for label selector component=kube-apiserver
  91. [apiclient] Error getting Pods with label selector "component=kube-apiserver" [Get https://192.168.31.90:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver: dial tcp 192.168.31.90:6443: connect: connection refused]
  92. [apiclient] Error getting Pods with label selector "component=kube-apiserver" [Get https://192.168.31.90:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver: dial tcp 192.168.31.90:6443: connect: connection refused]
  93. [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
  94. [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
  95. [upgrade/staticpods] Renewing controller-manager.conf certificate
  96. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-04-12-19-59-44/kube-controller-manager.yaml"
  97. [upgrade/staticpods] Waiting for the kubelet to restart the component
  98. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  99. Static pod: kube-controller-manager-master hash: cfac61c0d2fc7bcd14585e430d31d4b3
  100. Static pod: kube-controller-manager-master hash: fbadd0eb537080d05354f06363a29760
  101. [apiclient] Found 1 Pods for label selector component=kube-controller-manager
  102. [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
  103. [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
  104. [upgrade/staticpods] Renewing scheduler.conf certificate
  105. [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-04-12-19-59-44/kube-scheduler.yaml"
  106. [upgrade/staticpods] Waiting for the kubelet to restart the component
  107. [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
  108. Static pod: kube-scheduler-master hash: ca2aa1b3224c37fa1791ef6c7d883bbe
  109. Static pod: kube-scheduler-master hash: 363a5bee1d59c51a98e345162db75755
  110. [apiclient] Found 1 Pods for label selector component=kube-scheduler
  111. [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
  112. [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  113. [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
  114. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
  115. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  116. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
  117. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  118. [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  119. [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  120. [addons] Applied essential addon: CoreDNS
  121. [addons] Applied essential addon: kube-proxy
  122. [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.1". Enjoy!
  123. [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
  124. [root@master ~]#
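Note that the first `kubeadm upgrade apply` above timed out waiting for the kube-apiserver static pod and rolled back, while the second attempt succeeded. If an apply keeps failing at that point, the usual places to look are the kubelet logs and the apiserver container on the master (a troubleshooting sketch, assuming the Docker runtime used in this document):

  1. # follow the kubelet logs while the static pod is being restarted
  2. journalctl -u kubelet -f
  3. # check whether the new kube-apiserver container actually came up
  4. docker ps | grep kube-apiserver
  5. # the backed-up manifests mentioned in the log live here and can be restored manually if needed
  6. ls /etc/kubernetes/tmp/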
  1. [root@master ~]# kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. master Ready master 67d v1.18.0
  4. node01 Ready <none> 67d v1.18.0
  5. node02 Ready <none> 67d v1.18.0
  6. [root@master ~]# systemctl daemon-reload
  7. [root@master ~]# systemctl restart kubelet
  8. [root@master ~]# kubectl get nodes
  9. NAME STATUS ROLES AGE VERSION
  10. master Ready master 67d v1.18.0
  11. node01 Ready <none> 67d v1.18.0
  12. node02 Ready <none> 67d v1.18.0
  13. [root@master ~]# kubectl get nodes
  14. NAME STATUS ROLES AGE VERSION
  15. master Ready master 67d v1.18.1
  16. node01 Ready <none> 67d v1.18.0
  17. node02 Ready <none> 67d v1.18.0
  18. [root@master ~]#
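Before upgrading the packages on a worker node, production clusters normally drain the node first and uncordon it afterwards. That step is optional on this lab cluster and does not appear in the log below; using node01 as the example, it would look like this:

  1. # run on the master: evict workloads and mark the node unschedulable
  2. kubectl drain node01 --ignore-daemonsets
  3. # ... upgrade packages and restart the kubelet on node01 (next log) ...
  4. # run on the master once the node reports the new version
  5. kubectl uncordon node01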
  1. [root@node01 ~]# yum install kubeadm-1.18.1-0 kubelet-1.18.1-0 kubectl-1.18.1-0 --disableexcludes=kubernetes -y
  2. Loaded plugins: fastestmirror
  3. Determining fastest mirrors
  4. * base: mirrors.aliyun.com
  5. * extras: mirrors.aliyun.com
  6. * updates: mirrors.aliyun.com
  7. base | 3.6 kB 00:00:00
  8. docker-ce-stable | 3.5 kB 00:00:00
  9. epel | 4.7 kB 00:00:00
  10. extras | 2.9 kB 00:00:00
  11. kubernetes/signature | 454 B 00:00:00
  12. kubernetes/signature | 1.4 kB 00:00:00 !!!
  13. updates | 2.9 kB 00:00:00
  14. (1/5): epel/x86_64/group_gz | 95 kB 00:00:00
  15. (2/5): epel/x86_64/updateinfo | 1.0 MB 00:00:00
  16. (3/5): extras/7/x86_64/primary_db | 165 kB 00:00:00
  17. (4/5): kubernetes/primary | 66 kB 00:00:00
  18. (5/5): epel/x86_64/primary_db | 6.8 MB 00:00:00
  19. kubernetes 484/484
  20. Resolving Dependencies
  21. --> Running transaction check
  22. ---> Package kubeadm.x86_64 0:1.18.0-0 will be updated
  23. ---> Package kubeadm.x86_64 0:1.18.1-0 will be an update
  24. ---> Package kubectl.x86_64 0:1.18.0-0 will be updated
  25. ---> Package kubectl.x86_64 0:1.18.1-0 will be an update
  26. ---> Package kubelet.x86_64 0:1.18.0-0 will be updated
  27. ---> Package kubelet.x86_64 0:1.18.1-0 will be an update
  28. --> Finished Dependency Resolution
  29. Dependencies Resolved
  30. =======================================================================================================================================
  31. Package Arch Version Repository Size
  32. =======================================================================================================================================
  33. Updating:
  34. kubeadm x86_64 1.18.1-0 kubernetes 8.8 M
  35. kubectl x86_64 1.18.1-0 kubernetes 9.5 M
  36. kubelet x86_64 1.18.1-0 kubernetes 21 M
  37. Transaction Summary
  38. =======================================================================================================================================
  39. Upgrade 3 Packages
  40. Total download size: 39 M
  41. Downloading packages:
  42. Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
  43. (1/3): a86b51d48af8df740035f8bc4c0c30d994d5c8ef03388c21061372d5c5b2859d-kubeadm-1.18.1-0.x86_64.rpm | 8.8 MB 00:00:01
  44. (2/3): 39b64bb11c6c123dd502af7d970cee95606dbf7fd62905de0412bdac5e875843-kubelet-1.18.1-0.x86_64.rpm | 21 MB 00:00:02
  45. (3/3): 9b65a188779e61866501eb4e8a07f38494d40af1454ba9232f98fd4ced4ba935-kubectl-1.18.1-0.x86_64.rpm | 9.5 MB 00:00:04
  46. ---------------------------------------------------------------------------------------------------------------------------------------
  47. Total 9.0 MB/s | 39 MB 00:00:04
  48. Running transaction check
  49. Running transaction test
  50. Transaction test succeeded
  51. Running transaction
  52. Updating : kubectl-1.18.1-0.x86_64 1/6
  53. Updating : kubelet-1.18.1-0.x86_64 2/6
  54. Updating : kubeadm-1.18.1-0.x86_64 3/6
  55. Cleanup : kubeadm-1.18.0-0.x86_64 4/6
  56. Cleanup : kubectl-1.18.0-0.x86_64 5/6
  57. Cleanup : kubelet-1.18.0-0.x86_64 6/6
  58. Verifying : kubeadm-1.18.1-0.x86_64 1/6
  59. Verifying : kubelet-1.18.1-0.x86_64 2/6
  60. Verifying : kubectl-1.18.1-0.x86_64 3/6
  61. Verifying : kubectl-1.18.0-0.x86_64 4/6
  62. Verifying : kubelet-1.18.0-0.x86_64 5/6
  63. Verifying : kubeadm-1.18.0-0.x86_64 6/6
  64. Updated:
  65. kubeadm.x86_64 0:1.18.1-0 kubectl.x86_64 0:1.18.1-0 kubelet.x86_64 0:1.18.1-0
  66. Complete!
  67. [root@node01 ~]# systemctl daemon-reload
  68. [root@node01 ~]# systemctl restart kubelet
  69. [root@node01 ~]#
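The official upgrade procedure also runs `kubeadm upgrade node` on each worker between installing the new kubeadm package and restarting the kubelet, so that the local kubelet configuration is refreshed from the cluster. It is not captured in the log above, but on node01/node02 it would be:

  1. kubeadm upgrade node
  2. systemctl daemon-reload && systemctl restart kubelet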
  1. [root@master ~]# kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. master Ready master 67d v1.18.1
  4. node01 Ready <none> 67d v1.18.1
  5. node02 Ready <none> 67d v1.18.1
  6. [root@master ~]#

Upgrading across minor versions (v1.18.x → v1.19.x)

  1. yum list kubelet kubeadm kubectl --showduplicates|sort -r
  1. [root@master ~]# kubeadm upgrade plan
  2. [upgrade/config] Making sure the configuration is correct:
  3. [upgrade/config] Reading configuration from the cluster...
  4. [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  5. [preflight] Running pre-flight checks.
  6. [upgrade] Running cluster health checks
  7. [upgrade] Fetching available versions to upgrade to
  8. [upgrade/versions] Cluster version: v1.18.16
  9. [upgrade/versions] kubeadm version: v1.18.16
  10. I0312 10:26:34.624848 24536 version.go:255] remote version is much newer: v1.20.4; falling back to: stable-1.18
  11. [upgrade/versions] Latest stable version: v1.18.16
  12. [upgrade/versions] Latest stable version: v1.18.16
  13. [upgrade/versions] Latest version in the v1.18 series: v1.18.16
  14. [upgrade/versions] Latest version in the v1.18 series: v1.18.16
  15. Awesome, you're up-to-date! Enjoy!
  16. [root@master ~]#
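kubeadm only supports upgrading one minor version at a time, so a cluster at v1.18.16 cannot jump straight to v1.20.x; each hop repeats the same install/plan/apply cycle (a sketch of one hop, using the versions reported in the logs that follow):

  1. # hop v1.18.16 -> v1.19.x: install the target kubeadm first
  2. yum install -y kubeadm-1.19.8-0 --disableexcludes=kubernetes
  3. kubeadm upgrade plan
  4. kubeadm upgrade apply v1.19.8
  5. # then upgrade kubelet/kubectl and restart the kubelet, as before
  6. yum install -y kubelet-1.19.8-0 kubectl-1.19.8-0 --disableexcludes=kubernetes
  7. systemctl daemon-reload && systemctl restart kubelet
  8. # repeat the same pattern with 1.20.x packages for the next hop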
  1. [root@master ~]# yum list kubeadm --showduplicates|sort -r
  2. * updates: mirrors.163.com
  3. Loading mirror speeds from cached hostfile
  4. Loaded plugins: fastestmirror, langpacks
  5. kubeadm.x86_64 1.9.9-0 kubernetes
  6. kubeadm.x86_64 1.9.8-0 kubernetes
  7. kubeadm.x86_64 1.9.7-0 kubernetes
  8. kubeadm.x86_64 1.9.6-0 kubernetes
  9. kubeadm.x86_64 1.9.5-0 kubernetes
  10. kubeadm.x86_64 1.9.4-0 kubernetes
  11. kubeadm.x86_64 1.9.3-0 kubernetes
  12. kubeadm.x86_64 1.9.2-0 kubernetes
  13. kubeadm.x86_64 1.9.11-0 kubernetes
  14. kubeadm.x86_64 1.9.1-0 kubernetes
  15. kubeadm.x86_64 1.9.10-0 kubernetes
  16. kubeadm.x86_64 1.9.0-0 kubernetes
  17. kubeadm.x86_64 1.8.9-0 kubernetes
  18. kubeadm.x86_64 1.8.8-0 kubernetes
  19. kubeadm.x86_64 1.8.7-0 kubernetes
  20. kubeadm.x86_64 1.8.6-0 kubernetes
  21. kubeadm.x86_64 1.8.5-0 kubernetes
  22. kubeadm.x86_64 1.8.4-0 kubernetes
  23. kubeadm.x86_64 1.8.3-0 kubernetes
  24. kubeadm.x86_64 1.8.2-0 kubernetes
  25. kubeadm.x86_64 1.8.15-0 kubernetes
  26. kubeadm.x86_64 1.8.14-0 kubernetes
  27. kubeadm.x86_64 1.8.13-0 kubernetes
  28. kubeadm.x86_64 1.8.12-0 kubernetes
  29. kubeadm.x86_64 1.8.11-0 kubernetes
  30. kubeadm.x86_64 1.8.1-0 kubernetes
  31. kubeadm.x86_64 1.8.10-0 kubernetes
  32. kubeadm.x86_64 1.8.0-1 kubernetes
  33. kubeadm.x86_64 1.8.0-0 kubernetes
  34. kubeadm.x86_64 1.7.9-0 kubernetes
  35. kubeadm.x86_64 1.7.8-1 kubernetes
  36. kubeadm.x86_64 1.7.7-1 kubernetes
  37. kubeadm.x86_64 1.7.6-1 kubernetes
  38. kubeadm.x86_64 1.7.5-0 kubernetes
  39. kubeadm.x86_64 1.7.4-0 kubernetes
  40. kubeadm.x86_64 1.7.3-1 kubernetes
  41. kubeadm.x86_64 1.7.2-0 kubernetes
  42. kubeadm.x86_64 1.7.16-0 kubernetes
  43. kubeadm.x86_64 1.7.15-0 kubernetes
  44. kubeadm.x86_64 1.7.14-0 kubernetes
  45. kubeadm.x86_64 1.7.11-0 kubernetes
  46. kubeadm.x86_64 1.7.1-0 kubernetes
  47. kubeadm.x86_64 1.7.10-0 kubernetes
  48. kubeadm.x86_64 1.7.0-0 kubernetes
  49. kubeadm.x86_64 1.6.9-0 kubernetes
  50. kubeadm.x86_64 1.6.8-0 kubernetes
  51. kubeadm.x86_64 1.6.7-0 kubernetes
  52. kubeadm.x86_64 1.6.6-0 kubernetes
  53. kubeadm.x86_64 1.6.5-0 kubernetes
  54. kubeadm.x86_64 1.6.4-0 kubernetes
  55. kubeadm.x86_64 1.6.3-0 kubernetes
  56. kubeadm.x86_64 1.6.2-0 kubernetes
  57. kubeadm.x86_64 1.6.13-0 kubernetes
  58. kubeadm.x86_64 1.6.12-0 kubernetes
  59. kubeadm.x86_64 1.6.11-0 kubernetes
  60. kubeadm.x86_64 1.6.1-0 kubernetes
  61. kubeadm.x86_64 1.6.10-0 kubernetes
  62. kubeadm.x86_64 1.6.0-0 kubernetes
  63. kubeadm.x86_64 1.20.4-0 kubernetes
  64. kubeadm.x86_64 1.20.2-0 kubernetes
  65. kubeadm.x86_64 1.20.1-0 kubernetes
  66. kubeadm.x86_64 1.20.0-0 kubernetes
  67. kubeadm.x86_64 1.19.8-0 kubernetes
  68. kubeadm.x86_64 1.19.7-0 kubernetes
  69. kubeadm.x86_64 1.19.6-0 kubernetes
  70. kubeadm.x86_64 1.19.5-0 kubernetes
  71. kubeadm.x86_64 1.19.4-0 kubernetes
  72. kubeadm.x86_64 1.19.3-0 kubernetes
  73. kubeadm.x86_64 1.19.2-0 kubernetes
  74. kubeadm.x86_64 1.19.1-0 kubernetes
  75. kubeadm.x86_64 1.19.0-0 kubernetes
  76. kubeadm.x86_64 1.18.9-0 kubernetes
  77. kubeadm.x86_64 1.18.8-0 kubernetes
  78. kubeadm.x86_64 1.18.6-0 kubernetes
  79. kubeadm.x86_64 1.18.5-0 kubernetes
  80. kubeadm.x86_64 1.18.4-1 kubernetes
  81. kubeadm.x86_64 1.18.4-0 kubernetes
  82. kubeadm.x86_64 1.18.3-0 kubernetes
  83. kubeadm.x86_64 1.18.2-0 kubernetes
  84. kubeadm.x86_64 1.18.16-0 kubernetes
  85. kubeadm.x86_64 1.18.16-0 @kubernetes
  86. kubeadm.x86_64 1.18.15-0 kubernetes
  87. kubeadm.x86_64 1.18.14-0 kubernetes
  88. kubeadm.x86_64 1.18.13-0 kubernetes
  89. kubeadm.x86_64 1.18.12-0 kubernetes
  90. kubeadm.x86_64 1.18.1-0 kubernetes
  91. kubeadm.x86_64 1.18.10-0 kubernetes
  92. kubeadm.x86_64 1.18.0-0 kubernetes
  93. kubeadm.x86_64 1.17.9-0 kubernetes
  94. kubeadm.x86_64 1.17.8-0 kubernetes
  95. kubeadm.x86_64 1.17.7-1 kubernetes
  96. kubeadm.x86_64 1.17.7-0 kubernetes
  97. kubeadm.x86_64 1.17.6-0 kubernetes
  98. kubeadm.x86_64 1.17.5-0 kubernetes
  99. kubeadm.x86_64 1.17.4-0 kubernetes
  100. kubeadm.x86_64 1.17.3-0 kubernetes
  101. kubeadm.x86_64 1.17.2-0 kubernetes
  102. kubeadm.x86_64 1.17.17-0 kubernetes
  103. kubeadm.x86_64 1.17.16-0 kubernetes
  104. kubeadm.x86_64 1.17.15-0 kubernetes
  105. kubeadm.x86_64 1.17.14-0 kubernetes
  106. kubeadm.x86_64 1.17.13-0 kubernetes
  107. kubeadm.x86_64 1.17.12-0 kubernetes
  108. kubeadm.x86_64 1.17.11-0 kubernetes
  109. kubeadm.x86_64 1.17.1-0 kubernetes
  110. kubeadm.x86_64 1.17.0-0 kubernetes
  111. kubeadm.x86_64 1.16.9-0 kubernetes
  112. kubeadm.x86_64 1.16.8-0 kubernetes
  113. kubeadm.x86_64 1.16.7-0 kubernetes
  114. kubeadm.x86_64 1.16.6-0 kubernetes
  115. kubeadm.x86_64 1.16.5-0 kubernetes
  116. kubeadm.x86_64 1.16.4-0 kubernetes
  117. kubeadm.x86_64 1.16.3-0 kubernetes
  118. kubeadm.x86_64 1.16.2-0 kubernetes
  119. kubeadm.x86_64 1.16.15-0 kubernetes
  120. kubeadm.x86_64 1.16.14-0 kubernetes
  121. kubeadm.x86_64 1.16.13-0 kubernetes
  122. kubeadm.x86_64 1.16.12-0 kubernetes
  123. kubeadm.x86_64 1.16.11-1 kubernetes
  124. kubeadm.x86_64 1.16.11-0 kubernetes
  125. kubeadm.x86_64 1.16.1-0 kubernetes
  126. kubeadm.x86_64 1.16.10-0 kubernetes
  127. kubeadm.x86_64 1.16.0-0 kubernetes
  128. kubeadm.x86_64 1.15.9-0 kubernetes
  129. kubeadm.x86_64 1.15.8-0 kubernetes
  130. kubeadm.x86_64 1.15.7-0 kubernetes
  131. kubeadm.x86_64 1.15.6-0 kubernetes
  132. kubeadm.x86_64 1.15.5-0 kubernetes
  133. kubeadm.x86_64 1.15.4-0 kubernetes
  134. kubeadm.x86_64 1.15.3-0 kubernetes
  135. kubeadm.x86_64 1.15.2-0 kubernetes
  136. kubeadm.x86_64 1.15.12-0 kubernetes
  137. kubeadm.x86_64 1.15.11-0 kubernetes
  138. kubeadm.x86_64 1.15.1-0 kubernetes
  139. kubeadm.x86_64 1.15.10-0 kubernetes
  140. kubeadm.x86_64 1.15.0-0 kubernetes
  141. kubeadm.x86_64 1.14.9-0 kubernetes
  142. kubeadm.x86_64 1.14.8-0 kubernetes
  143. kubeadm.x86_64 1.14.7-0 kubernetes
  144. kubeadm.x86_64 1.14.6-0 kubernetes
  145. kubeadm.x86_64 1.14.5-0 kubernetes
  146. kubeadm.x86_64 1.14.4-0 kubernetes
  147. kubeadm.x86_64 1.14.3-0 kubernetes
  148. kubeadm.x86_64 1.14.2-0 kubernetes
  149. kubeadm.x86_64 1.14.1-0 kubernetes
  150. kubeadm.x86_64 1.14.10-0 kubernetes
  151. kubeadm.x86_64 1.14.0-0 kubernetes
  152. kubeadm.x86_64 1.13.9-0 kubernetes
  153. kubeadm.x86_64 1.13.8-0 kubernetes
  154. kubeadm.x86_64 1.13.7-0 kubernetes
  155. kubeadm.x86_64 1.13.6-0 kubernetes
  156. kubeadm.x86_64 1.13.5-0 kubernetes
  157. kubeadm.x86_64 1.13.4-0 kubernetes
  158. kubeadm.x86_64 1.13.3-0 kubernetes
  159. kubeadm.x86_64 1.13.2-0 kubernetes
  160. kubeadm.x86_64 1.13.12-0 kubernetes
  161. kubeadm.x86_64 1.13.11-0 kubernetes
  162. kubeadm.x86_64 1.13.1-0 kubernetes
  163. kubeadm.x86_64 1.13.10-0 kubernetes
  164. kubeadm.x86_64 1.13.0-0 kubernetes
  165. kubeadm.x86_64 1.12.9-0 kubernetes
  166. kubeadm.x86_64 1.12.8-0 kubernetes
  167. kubeadm.x86_64 1.12.7-0 kubernetes
  168. kubeadm.x86_64 1.12.6-0 kubernetes
  169. kubeadm.x86_64 1.12.5-0 kubernetes
  170. kubeadm.x86_64 1.12.4-0 kubernetes
  171. kubeadm.x86_64 1.12.3-0 kubernetes
  172. kubeadm.x86_64 1.12.2-0 kubernetes
  173. kubeadm.x86_64 1.12.1-0 kubernetes
  174. kubeadm.x86_64 1.12.10-0 kubernetes
  175. kubeadm.x86_64 1.12.0-0 kubernetes
  176. kubeadm.x86_64 1.11.9-0 kubernetes
  177. kubeadm.x86_64 1.11.8-0 kubernetes
  178. kubeadm.x86_64 1.11.7-0 kubernetes
  179. kubeadm.x86_64 1.11.6-0 kubernetes
  180. kubeadm.x86_64 1.11.5-0 kubernetes
  181. kubeadm.x86_64 1.11.4-0 kubernetes
  182. kubeadm.x86_64 1.11.3-0 kubernetes
  183. kubeadm.x86_64 1.11.2-0 kubernetes
  184. kubeadm.x86_64 1.11.1-0 kubernetes
  185. kubeadm.x86_64 1.11.10-0 kubernetes
  186. kubeadm.x86_64 1.11.0-0 kubernetes
  187. kubeadm.x86_64 1.10.9-0 kubernetes
  188. kubeadm.x86_64 1.10.8-0 kubernetes
  189. kubeadm.x86_64 1.10.7-0 kubernetes
  190. kubeadm.x86_64 1.10.6-0 kubernetes
  191. kubeadm.x86_64 1.10.5-0 kubernetes
  192. kubeadm.x86_64 1.10.4-0 kubernetes
  193. kubeadm.x86_64 1.10.3-0 kubernetes
  194. kubeadm.x86_64 1.10.2-0 kubernetes
  195. kubeadm.x86_64 1.10.13-0 kubernetes
  196. kubeadm.x86_64 1.10.12-0 kubernetes
  197. kubeadm.x86_64 1.10.11-0 kubernetes
  198. kubeadm.x86_64 1.10.1-0 kubernetes
  199. kubeadm.x86_64 1.10.10-0 kubernetes
  200. kubeadm.x86_64 1.10.0-0 kubernetes
  201. Installed Packages
  202. * extras: mirrors.aliyun.com
  203. * base: mirrors.163.com
  204. Available Packages
  205. [root@master ~]#
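The full `--showduplicates` listing is long; it is usually easier to filter it for the minor version you are targeting (a convenience command, not part of the original log):

  1. yum list kubeadm --showduplicates | grep 1.19 | sort -r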
  1. [root@master ~]# yum install -y kubeadm- 1.19.0-0 kubelet- 1.19.0-0 kubectl- 1.19.0-0 --disableexcludes=kubernetes
  2. Loaded plugins: fastestmirror, langpacks
  3. Loading mirror speeds from cached hostfile
  4. * base: mirrors.163.com
  5. * extras: mirrors.aliyun.com
  6. * updates: mirrors.163.com
  7. No package kubeadm- available.
  8. No package 1.19.0-0 available.
  9. No package kubelet- available.
  10. No package 1.19.0-0 available.
  11. No package kubectl- available.
  12. No package 1.19.0-0 available.
  13. Error: Nothing to do
  14. [root@master ~]# yum install -y kubeadm- 1.19.0-0 kubelet- 1.19.0-0 kubectl- 1.19.0-0 --disableexcludes=kubernetes
  15. [root@master ~]# yum install -y kubeadm-1.19.0-0 kubelet-1.19.0-0 kubectl-1.19.0-0 --disableexcludes=kubernetes
  16. Loaded plugins: fastestmirror, langpacks
  17. Loading mirror speeds from cached hostfile
  18. * base: mirrors.163.com
  19. * extras: mirrors.aliyun.com
  20. * updates: mirrors.163.com
  21. Resolving Dependencies
  22. --> Running transaction check
  23. ---> Package kubeadm.x86_64 0:1.18.16-0 will be updated
  24. ---> Package kubeadm.x86_64 0:1.19.0-0 will be an update
  25. ---> Package kubectl.x86_64 0:1.18.16-0 will be updated
  26. ---> Package kubectl.x86_64 0:1.19.0-0 will be an update
  27. ---> Package kubelet.x86_64 0:1.18.16-0 will be updated
  28. ---> Package kubelet.x86_64 0:1.19.0-0 will be an update
  29. --> Finished Dependency Resolution
  30. Dependencies Resolved
  31. ======================================================================================================================================================
  32. Package Arch Version Repository Size
  33. ======================================================================================================================================================
  34. Updating:
  35. kubeadm x86_64 1.19.0-0 kubernetes 8.3 M
  36. kubectl x86_64 1.19.0-0 kubernetes 9.0 M
  37. kubelet x86_64 1.19.0-0 kubernetes 19 M
  38. Transaction Summary
  39. ======================================================================================================================================================
  40. Upgrade 3 Packages
  41. Total download size: 37 M
  42. Downloading packages:
  43. Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
  44. (1/3): db815b934cc7c4fa05c4a3ce4c2f97d638c9d5d6935eb53bfc2c27972c1da9de-kubeadm-1.19.0-0.x86_64.rpm | 8.3 MB 00:00:01
  45. (2/3): a8667839fac47a686e23d4c90111c7ff24227d2021e5309e0f0f08dda0ef1e29-kubelet-1.19.0-0.x86_64.rpm | 19 MB 00:00:03
  46. (3/3): 5780f541fa01a41a449be5dd63e9b5bc4715d25e01f2ad0017bfe6f768057e95-kubectl-1.19.0-0.x86_64.rpm | 9.0 MB 00:00:04
  47. ------------------------------------------------------------------------------------------------------------------------------------------------------
  48. Total 7.5 MB/s | 37 MB 00:00:04
  49. Running transaction check
  50. Running transaction test
  51. Transaction test succeeded
  52. Running transaction
  53. Updating : kubectl-1.19.0-0.x86_64 1/6
  54. Updating : kubelet-1.19.0-0.x86_64 2/6
  55. Updating : kubeadm-1.19.0-0.x86_64 3/6
  56. Cleanup : kubeadm-1.18.16-0.x86_64 4/6
  57. Cleanup : kubectl-1.18.16-0.x86_64 5/6
  58. Cleanup : kubelet-1.18.16-0.x86_64 6/6
  59. Verifying : kubelet-1.19.0-0.x86_64 1/6
  60. Verifying : kubeadm-1.19.0-0.x86_64 2/6
  61. Verifying : kubectl-1.19.0-0.x86_64 3/6
  62. Verifying : kubectl-1.18.16-0.x86_64 4/6
  63. Verifying : kubelet-1.18.16-0.x86_64 5/6
  64. Verifying : kubeadm-1.18.16-0.x86_64 6/6
  65. Updated:
  66. kubeadm.x86_64 0:1.19.0-0 kubectl.x86_64 0:1.19.0-0 kubelet.x86_64 0:1.19.0-0
  67. Complete!
  68. [root@master ~]#
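The first two attempts in the log above fail with "No package kubeadm- available." because of a stray space between the package name and the version, so yum treats `kubeadm-` and `1.19.0-0` as two separate packages. The corrected form, as run on the third attempt, is:

  1. yum install -y kubeadm-1.19.0-0 kubelet-1.19.0-0 kubectl-1.19.0-0 --disableexcludes=kubernetes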
  1. [root@master ~]# kubeadm upgrade plan
  2. [upgrade/config] Making sure the configuration is correct:
  3. [upgrade/config] Reading configuration from the cluster...
  4. [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
  5. [preflight] Running pre-flight checks.
  6. [upgrade] Running cluster health checks
  7. [upgrade] Fetching available versions to upgrade to
  8. [upgrade/versions] Cluster version: v1.18.16
  9. [upgrade/versions] kubeadm version: v1.19.0
  10. I0312 10:36:27.364195 5715 version.go:252] remote version is much newer: v1.20.4; falling back to: stable-1.19
  11. [upgrade/versions] Latest stable version: v1.19.8
  12. [upgrade/versions] Latest stable version: v1.19.8
  13. [upgrade/versions] Latest version in the v1.18 series: v1.18.16
  14. [upgrade/versions] Latest version in the v1.18 series: v1.18.16
  15. Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
  16. COMPONENT CURRENT AVAILABLE
  17. kubelet 3 x v1.18.16 v1.19.8
  18. Upgrade to the latest stable version:
  19. COMPONENT CURRENT AVAILABLE
  20. kube-apiserver v1.18.16 v1.19.8
  21. kube-controller-manager v1.18.16 v1.19.8
  22. kube-scheduler v1.18.16 v1.19.8
  23. kube-proxy v1.18.16 v1.19.8
  24. CoreDNS 1.6.7 1.7.0
  25. etcd 3.4.3-0 3.4.9-1
  26. You can now apply the upgrade by executing the following command:
  27. kubeadm upgrade apply v1.19.8
  28. Note: Before you can perform this upgrade, you have to update kubeadm to v1.19.8.
  29. _____________________________________________________________________
  30. The table below shows the current state of component configs as understood by this version of kubeadm.
  31. Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
  32. resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
  33. upgrade to is denoted in the "PREFERRED VERSION" column.
  34. API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
  35. kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
  36. kubelet.config.k8s.io v1beta1 v1beta1 no
  37. _____________________________________________________________________
  38. [root@master ~]#
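As the plan notes, kubeadm has to be updated to v1.19.8 before the control plane can be moved to that version. The next log only captures the package upgrade and kubelet restart; the control-plane apply step itself is not shown there, but per the plan output it is:

  1. kubeadm upgrade apply v1.19.8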
  1. yum install -y kubeadm-1.19.8 kubelet-1.19.8 kubectl-1.19.8 --disableexcludes=kubernetes
  2. [root@master ~]# systemctl daemon-reload
  3. [root@master ~]# systemctl restart kubelet.service
  4. [root@master ~]# kubectl get nodes
  5. NAME STATUS ROLES AGE VERSION
  6. master Ready master 8d v1.19.0
  7. node01 Ready worker 8d v1.18.16
  8. node02 Ready worker 8d v1.18.16
  9. [root@master ~]#
  10. [root@master ~]# kubectl get nodes
  11. NAME STATUS ROLES AGE VERSION
  12. master Ready master 8d v1.19.8
  13. node01 Ready worker 8d v1.18.16
  14. node02 Ready worker 8d v1.18.16
  15. [root@master ~]#
  1. [root@master ~]# kubectl get pod -n kube-system
  2. NAME READY STATUS RESTARTS AGE
  3. calico-kube-controllers-54658cf6f7-9tq5r 1/1 Running 0 8d
  4. calico-node-7cn4w 1/1 Running 0 8d
  5. calico-node-blcj5 1/1 Running 0 8d
  6. calico-node-zqk2w 1/1 Running 0 8d
  7. coredns-6d56c8448f-62bjv 0/1 Running 0 54s
  8. coredns-6d56c8448f-gtfvm 0/1 Running 0 54s
  9. coredns-7ff77c879f-d8vvj 1/1 Running 0 8d
  10. etcd-master 1/1 Running 0 2m32s
  11. kube-apiserver-master 1/1 Running 0 109s
  12. kube-controller-manager-master 1/1 Running 0 91s
  13. kube-proxy-2mm7p 1/1 Running 0 47s
  14. kube-proxy-mrgj5 1/1 Running 0 13s
  15. kube-proxy-rvfcz 1/1 Running 0 30s
  16. kube-scheduler-master 0/1 Running 0 74s
  17. [root@master ~]#
  18. [root@master ~]#
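Right after the upgrade, some system pods (the new coredns replicas and kube-scheduler above) still show 0/1 while they start up. They can be watched until everything reports Ready (a quick check, not part of the original log):

  1. kubectl get pod -n kube-system -w
  2. # or, for a specific deployment
  3. kubectl -n kube-system rollout status deployment/coredns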

31: Labels

  1. [root@master ~]# kubectl get nodes --show-labels
  2. NAME STATUS ROLES AGE VERSION LABELS
  3. master Ready master 13h v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
  4. node01 Ready <none> 12h v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux
  5. node02 Ready <none> 11h v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux
  6. [root@master ~]#
  7. [root@master ~]# kubectl get nodes
  8. NAME STATUS ROLES AGE VERSION
  9. master Ready master 13h v1.18.0
  10. node01 Ready <none> 12h v1.18.0
  11. node02 Ready <none> 11h v1.18.0
  12. [root@master ~]#
  13. [root@master ~]# kubectl get nodes --show-labels
  14. NAME STATUS ROLES AGE VERSION LABELS
  15. master Ready master 13h v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
  16. node01 Ready <none> 12h v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux
  17. node02 Ready <none> 11h v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux
  18. [root@master ~]#
  19. [root@master ~]# kubectl label nodes node01 node-role.kubernetes.io/worker=true
  20. node/node01 labeled
  21. [root@master ~]# kubectl get nodes
  22. NAME STATUS ROLES AGE VERSION
  23. master Ready master 13h v1.18.0
  24. node01 Ready worker 12h v1.18.0
  25. node02 Ready <none> 11h v1.18.0
  26. [root@master ~]#
  27. [root@master ~]# kubectl label nodes node01 node-role.kubernetes.io/worker-
  28. node/node01 labeled
  29. [root@master ~]# kubectl label nodes node01 node-role.kubernetes.io/worker=
  30. node/node01 labeled
  31. [root@master ~]# kubectl get nodes
  32. NAME STATUS ROLES AGE VERSION
  33. master Ready master 13h v1.18.0
  34. node01 Ready worker 13h v1.18.0
  35. node02 Ready <none> 11h v1.18.0
  36. [root@master ~]# kubectl label nodes node01 node-role.kubernetes.io/worker-
  37. node/node01 labeled
  38. [root@master ~]# kubectl get nodes
  39. NAME STATUS ROLES AGE VERSION
  40. master Ready master 13h v1.18.0
  41. node01 Ready <none> 13h v1.18.0
  42. node02 Ready <none> 11h v1.18.0
  43. [root@master ~]#
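Once a role label such as node-role.kubernetes.io/worker is set, it can also be used as a selector, and the same `kubectl label` syntax works for arbitrary key=value labels (illustrative commands, not from the original log):

  1. # list only the nodes carrying the worker role label
  2. kubectl get nodes -l node-role.kubernetes.io/worker
  3. # add an arbitrary label and select by it
  4. kubectl label nodes node02 disktype=ssd
  5. kubectl get nodes -l disktype=ssd
  6. # remove it again with the trailing minus
  7. kubectl label nodes node02 disktype-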

32: Command-line abbreviations (resource short names)

  1. [root@master ~]# kubectl api-resources
  2. NAME SHORTNAMES APIGROUP NAMESPACED KIND
  3. bindings true Binding
  4. componentstatuses cs false ComponentStatus
  5. configmaps cm true ConfigMap
  6. endpoints ep true Endpoints
  7. events ev true Event
  8. limitranges limits true LimitRange
  9. namespaces ns false Namespace
  10. nodes no false Node
  11. persistentvolumeclaims pvc true PersistentVolumeClaim
  12. persistentvolumes pv false PersistentVolume
  13. pods po true Pod
  14. podtemplates true PodTemplate
  15. replicationcontrollers rc true ReplicationController
  16. resourcequotas quota true ResourceQuota
  17. secrets true Secret
  18. serviceaccounts sa true ServiceAccount
  19. services svc true Service
  20. mutatingwebhookconfigurations admissionregistration.k8s.io false MutatingWebhookConfiguration
  21. validatingwebhookconfigurations admissionregistration.k8s.io false ValidatingWebhookConfiguration
  22. customresourcedefinitions crd,crds apiextensions.k8s.io false CustomResourceDefinition
  23. apiservices apiregistration.k8s.io false APIService
  24. controllerrevisions apps true ControllerRevision
  25. daemonsets ds apps true DaemonSet
  26. deployments deploy apps true Deployment
  27. replicasets rs apps true ReplicaSet
  28. statefulsets sts apps true StatefulSet
  29. tokenreviews authentication.k8s.io false TokenReview
  30. localsubjectaccessreviews authorization.k8s.io true LocalSubjectAccessReview
  31. selfsubjectaccessreviews authorization.k8s.io false SelfSubjectAccessReview
  32. selfsubjectrulesreviews authorization.k8s.io false SelfSubjectRulesReview
  33. subjectaccessreviews authorization.k8s.io false SubjectAccessReview
  34. horizontalpodautoscalers hpa autoscaling true HorizontalPodAutoscaler
  35. cronjobs cj batch true CronJob
  36. jobs batch true Job
  37. certificatesigningrequests csr certificates.k8s.io false CertificateSigningRequest
  38. leases coordination.k8s.io true Lease
  39. bgpconfigurations crd.projectcalico.org false BGPConfiguration
  40. bgppeers crd.projectcalico.org false BGPPeer
  41. blockaffinities crd.projectcalico.org false BlockAffinity
  42. clusterinformations crd.projectcalico.org false ClusterInformation
  43. felixconfigurations crd.projectcalico.org false FelixConfiguration
  44. globalnetworkpolicies crd.projectcalico.org false GlobalNetworkPolicy
  45. globalnetworksets crd.projectcalico.org false GlobalNetworkSet
  46. hostendpoints crd.projectcalico.org false HostEndpoint
  47. ipamblocks crd.projectcalico.org false IPAMBlock
  48. ipamconfigs crd.projectcalico.org false IPAMConfig
  49. ipamhandles crd.projectcalico.org false IPAMHandle
  50. ippools crd.projectcalico.org false IPPool
  51. kubecontrollersconfigurations crd.projectcalico.org false KubeControllersConfiguration
  52. networkpolicies crd.projectcalico.org true NetworkPolicy
  53. networksets crd.projectcalico.org true NetworkSet
  54. endpointslices discovery.k8s.io true EndpointSlice
  55. events ev events.k8s.io true Event
  56. ingresses ing extensions true Ingress
  57. ingressclasses networking.k8s.io false IngressClass
  58. ingresses ing networking.k8s.io true Ingress
  59. networkpolicies netpol networking.k8s.io true NetworkPolicy
  60. runtimeclasses node.k8s.io false RuntimeClass
  61. poddisruptionbudgets pdb policy true PodDisruptionBudget
  62. podsecuritypolicies psp policy false PodSecurityPolicy
  63. clusterrolebindings rbac.authorization.k8s.io false ClusterRoleBinding
  64. clusterroles rbac.authorization.k8s.io false ClusterRole
  65. rolebindings rbac.authorization.k8s.io true RoleBinding
  66. roles rbac.authorization.k8s.io true Role
  67. priorityclasses pc scheduling.k8s.io false PriorityClass
  68. csidrivers storage.k8s.io false CSIDriver
  69. csinodes storage.k8s.io false CSINode
  70. storageclasses sc storage.k8s.io false StorageClass
  71. volumeattachments storage.k8s.io false VolumeAttachment
  72. [root@master ~]#
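The SHORTNAMES column is what makes everyday commands terse; a few typical combinations (illustrative, not from the original log):

  1. kubectl get po -A                        # pods in all namespaces
  2. kubectl get no -o wide                   # nodes with extra columns
  3. kubectl get deploy,rs,svc -n kube-system
  4. kubectl explain po.spec                  # explain also accepts short names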

Resetting and cleaning up the cluster

  1. rm -rf /var/lib/cni/
  2. rm -rf /var/lib/kubelet/*
  3. rm -rf /etc/cni/
  4. rm -rf /var/lib/kubelet/kubeadm-flags.env
  5. rm -rf /var/lib/kubelet/config.yaml
  6. rm -rf /etc/kubernetes/manifests
  7. rm -rf $HOME/.kube/config
  8. systemctl restart kubelet.service
  9. systemctl restart docker.service
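The rm commands above only clear local state left behind on a node. Typically `kubeadm reset` is run first so that kubeadm itself tears down the control plane / node registration, and the iptables and IPVS rules created by kube-proxy are flushed as well (a sketch; ipvsadm was installed back in step 1):

  1. kubeadm reset -f
  2. # flush the iptables rules left by kube-proxy and the CNI plugin
  3. iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
  4. # clear the IPVS tables if kube-proxy ran in ipvs mode
  5. ipvsadm -C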