Offline Deployment of a k8s v1.20.2 Cluster (Part 1)

Overview

Due to network restrictions, none of the servers can reach the public internet, so the cluster has to be deployed offline. The machines used are listed below:

ip hostname
192.168.26.71 master01
192.168.26.72 master02
192.168.26.73 node01
192.168.26.74 node02
192.168.26.75 node03

None of the machines above can reach the public internet, and yum can only reach the base repository, so all of the packages k8s needs have to be obtained another way. A machine with internet access is therefore needed to download the required packages. Its OS should preferably be a fresh install that has never been used for anything else, because yum will be used to download (and cache) the packages. The internet-connected machine must also run the same kernel version as the cluster machines; here everything is CentOS 7.9.
Work on vm10:

  1. [root@vm10 ~]# uname -r
  2. 3.10.0-1160.el7.x86_64
  3. [root@vm10 ~]# cat /etc/redhat-release
  4. CentOS Linux release 7.9.2009 (Core)
  5. ## configure yum to keep the rpm packages it downloads
  6. [root@vm10 ~]# vi /etc/yum.conf
  7. set:
  8. keepcache=1
  9. ~]# cat << EOF >> /etc/hosts
  10. 192.168.26.71 master01
  11. 192.168.26.72 master02
  12. 192.168.26.73 node01
  13. 192.168.26.74 node02
  14. 192.168.26.75 node03
  15. EOF
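To confirm that yum really keeps the downloaded rpm files, a quick check like the following can be used (a minimal sketch; the cache path is the CentOS 7 default):

```shell
# verify that keepcache is enabled
grep ^keepcache /etc/yum.conf
# after the installs below, the cached rpm files accumulate here
find /var/cache/yum -name '*.rpm' | head
```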

Downloading the packages needed for cluster deployment

On vm10, install docker-ce, keepalived, and the k8s components kubelet, kubectl, and kubeadm. Steps:

  1. # install docker
  2. ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 bash-completion
  3. ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  4. ~]# yum install docker-ce docker-ce-cli containerd.io -y
  5. # install the k8s packages
  6. ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  7. [kubernetes]
  8. name=Kubernetes
  9. baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  10. enabled=1
  11. gpgcheck=1
  12. repo_gpgcheck=1
  13. gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  14. EOF

To install a specific version, run yum list kubelet --showduplicates to find the available versions.

  1. ~]# yum install -y kubelet kubeadm kubectl ## this installs the latest version, v1.20.2

Installing the supporting tools, yum repositories, and docker

  1. [root@vm10 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 bash-completion keepalived
  2. ...
  3. [root@vm10 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  4. ...
  5. [root@vm10 ~]# yum install docker-ce docker-ce-cli containerd.io -y
  6. ...

Installing the k8s packages

  1. [root@vm10 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  2. [kubernetes]
  3. name=Kubernetes
  4. baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  5. enabled=1
  6. gpgcheck=1
  7. repo_gpgcheck=1
  8. gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  9. EOF
  10. [root@vm10 ~]# yum install -y kubelet kubeadm kubectl

To install a specific version, use yum list kubelet --showduplicates to find the version you want. Here the latest version, v1.20.2, is installed.
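If a version other than the latest is wanted, the packages can be pinned explicitly. A sketch, assuming the usual name-version-release package naming of the Kubernetes el7 repository:

```shell
# list every kubelet build available in the repository
yum list kubelet --showduplicates | sort -r | head
# install a specific version of all three components together
yum install -y kubelet-1.20.2-0 kubeadm-1.20.2-0 kubectl-1.20.2-0
```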

Because starting the k8s cluster requires docker images, the relevant images have to be downloaded on the internet-connected machine and then transferred to the deployment servers.

Configuring and starting docker

  1. [root@vm10 ~]# mkdir -p /etc/docker /data/docker
  2. [root@vm10 ~]# cat <<EOF > /etc/docker/daemon.json
  3. {
  4. "graph": "/data/docker",
  5. "storage-driver": "overlay2",
  6. "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
  7. "bip": "172.26.10.1/24",
  8. "exec-opts": ["native.cgroupdriver=systemd"],
  9. "live-restore": true
  10. }
  11. EOF
  1. [root@vm10 ~]# systemctl start docker ; systemctl enable docker
  2. [root@vm10 ~]# ip a
  3. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  4. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  5. inet 127.0.0.1/8 scope host lo
  6. valid_lft forever preferred_lft forever
  7. inet6 ::1/128 scope host
  8. valid_lft forever preferred_lft forever
  9. 2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  10. link/ether 00:0c:29:33:18:da brd ff:ff:ff:ff:ff:ff
  11. inet 192.168.26.10/24 brd 192.168.26.255 scope global noprefixroute ens32
  12. valid_lft forever preferred_lft forever
  13. inet6 fe80::20c:29ff:fe33:18da/64 scope link
  14. valid_lft forever preferred_lft forever
  15. 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
  16. link/ether 02:42:22:21:ef:8b brd ff:ff:ff:ff:ff:ff
  17. inet 172.26.10.1/24 brd 172.26.10.255 scope global docker0
  18. valid_lft forever preferred_lft forever

Pulling the images

Pull the images from the Aliyun mirror registry; this can be done with a script. Below, the script is also broken down into individual pulls for practice.

  1. ~]# vim images.sh
  1. #!/bin/bash
  2. url=registry.cn-hangzhou.aliyuncs.com/google_containers
  3. version=v1.20.2
  4. images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
  5. for imagename in ${images[@]} ; do
  6. docker pull $url/$imagename
  7. docker tag $url/$imagename k8s.gcr.io/$imagename
  8. docker rmi -f $url/$imagename
  9. done

After the script finishes, check the images:

  1. ~]# docker images
  2. REPOSITORY TAG IMAGE ID CREATED SIZE
  3. k8s.gcr.io/kube-proxy v1.20.2 43154ddb57a8 2 weeks ago 118MB
  4. k8s.gcr.io/kube-apiserver v1.20.2 a8c2fdb8bf76 2 weeks ago 122MB
  5. k8s.gcr.io/kube-controller-manager v1.20.2 a27166429d98 2 weeks ago 116MB
  6. k8s.gcr.io/kube-scheduler v1.20.2 ed2c44fbdd78 2 weeks ago 46.4MB
  7. k8s.gcr.io/etcd 3.4.13-0 0369cf4303ff 5 months ago 253MB
  8. k8s.gcr.io/coredns 1.7.0 bfe3a36ebd25 7 months ago 45.2MB
  9. k8s.gcr.io/pause 3.2 80d28bedfe5d 11 months ago 683kB
  10. ## bring up a single-node k8s on test-node for a trial run
  11. ~]# kubeadm init
  12. ## the manifest matching the flannel image also needs to be downloaded
  13. ## the flannel manifest lives on GitHub: https://github.com/coreos/flannel/tree/master/Documentation
  14. ~]# kubectl apply -f kube-flannel.yml

Manifest matching the flannel:v0.13.1-rc1 image (open it in a browser and copy the contents):
https://github.com/coreos/flannel/blob/v0.13.1-rc1/Documentation/kube-flannel.yml

  • List the images the k8s installation uses

    1. [root@vm10 ~]# kubeadm config images list --kubernetes-version=v1.20.2
    2. k8s.gcr.io/kube-apiserver:v1.20.2
    3. k8s.gcr.io/kube-controller-manager:v1.20.2
    4. k8s.gcr.io/kube-scheduler:v1.20.2
    5. k8s.gcr.io/kube-proxy:v1.20.2
    6. k8s.gcr.io/pause:3.2
    7. k8s.gcr.io/etcd:3.4.13-0
    8. k8s.gcr.io/coredns:1.7.0
  • Pull the images from Aliyun, then re-tag them with the names the installation expects

    1. [root@vm10 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.2
    2. ...
    3. [root@vm10 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2
    4. [root@vm10 ~]# docker rmi -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.2
    5. ...
    6. [root@vm10 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.2
    7. ...
    8. [root@vm10 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2
    9. [root@vm10 ~]# docker rmi -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.2
    10. ...
    11. [root@vm10 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.2
    12. ...
    13. [root@vm10 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2
    14. [root@vm10 ~]# docker rmi -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.2
    15. ...
    16. [root@vm10 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.2
    17. ...
    18. [root@vm10 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.2 k8s.gcr.io/kube-proxy:v1.20.2
    19. [root@vm10 ~]# docker rmi -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.2
    20. ...
    21. [root@vm10 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
    22. ...
    23. [root@vm10 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
    24. [root@vm10 ~]# docker rmi -f registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
    25. ...
    26. [root@vm10 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
    27. ...
    28. [root@vm10 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
    29. [root@vm10 ~]# docker rmi -f registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
    30. ...
    31. [root@vm10 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
    32. ...
    33. [root@vm10 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
    34. [root@vm10 ~]# docker rmi -f registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
    35. ...
    36. [root@vm10 ~]# docker pull quay.io/coreos/flannel:v0.13.1-rc1
    37. ...
  • Check the images

    1. [root@vm10 ~]# docker images
    2. REPOSITORY TAG IMAGE ID CREATED SIZE
    3. k8s.gcr.io/kube-proxy v1.20.2 43154ddb57a8 3 weeks ago 118MB
    4. k8s.gcr.io/kube-apiserver v1.20.2 a8c2fdb8bf76 3 weeks ago 122MB
    5. k8s.gcr.io/kube-controller-manager v1.20.2 a27166429d98 3 weeks ago 116MB
    6. k8s.gcr.io/kube-scheduler v1.20.2 ed2c44fbdd78 3 weeks ago 46.4MB
    7. quay.io/coreos/flannel v0.13.1-rc1 f03a23d55e57 2 months ago 64.6MB
    8. k8s.gcr.io/etcd 3.4.13-0 0369cf4303ff 5 months ago 253MB
    9. k8s.gcr.io/coredns 1.7.0 bfe3a36ebd25 7 months ago 45.2MB
    10. k8s.gcr.io/pause 3.2 80d28bedfe5d 11 months ago 683kB

    Packaging

    Package the downloaded yum packages and docker images, then transfer them to the target servers.

  1. # package the yum cache
  2. ~]# cd /var/cache/
  3. ~]# tar zcvf yum.tar.gz yum
  4. # save the docker images (always save by repository:tag; if you save by image ID, the exported image loses its tag)
  5. ~]# docker save -o kube-proxy.tar k8s.gcr.io/kube-proxy:v1.20.2
  6. ~]# docker save -o kube-apiserver.tar k8s.gcr.io/kube-apiserver:v1.20.2
  7. ~]# docker save -o kube-controller-manager.tar k8s.gcr.io/kube-controller-manager:v1.20.2
  8. ~]# docker save -o kube-scheduler.tar k8s.gcr.io/kube-scheduler:v1.20.2
  9. ~]# docker save -o etcd.tar k8s.gcr.io/etcd:3.4.13-0
  10. ~]# docker save -o coredns.tar k8s.gcr.io/coredns:1.7.0
  11. ~]# docker save -o pause.tar k8s.gcr.io/pause:3.2
  12. ~]# docker save -o flannel.tar quay.io/coreos/flannel:v0.13.1-rc1
  13. # package the image tarballs
  14. ~]# tar zcvf images.tar.gz images
  • Package the yum cache

    1. [root@vm10 ~]# cd /var/cache/
    2. [root@vm10 cache]# tar zcvf yum.tar.gz yum
    3. ...
    4. [root@vm10 cache]# mkdir /opt/tar
    5. [root@vm10 cache]# mv yum.tar.gz /opt/tar/.
  • Export the docker images

    1. [root@vm10 cache]# cd /opt/tar/
    2. [root@vm10 tar]# mkdir images
    3. [root@vm10 tar]# cd images/
    4. [root@vm10 images]# docker save -o kube-proxy.tar k8s.gcr.io/kube-proxy:v1.20.2
    5. [root@vm10 images]# docker save -o kube-apiserver.tar k8s.gcr.io/kube-apiserver:v1.20.2
    6. [root@vm10 images]# docker save -o kube-controller-manager.tar k8s.gcr.io/kube-controller-manager:v1.20.2
    7. [root@vm10 images]# docker save -o kube-scheduler.tar k8s.gcr.io/kube-scheduler:v1.20.2
    8. [root@vm10 images]# docker save -o etcd.tar k8s.gcr.io/etcd:3.4.13-0
    9. [root@vm10 images]# docker save -o coredns.tar k8s.gcr.io/coredns:1.7.0
    10. [root@vm10 images]# docker save -o pause.tar k8s.gcr.io/pause:3.2
    11. [root@vm10 images]# docker save -o flannel.tar quay.io/coreos/flannel:v0.13.1-rc1
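Instead of saving each image by hand, a small loop can export everything docker currently holds. This is only a sketch of the same docker save step; the file name is derived from the repository name:

```shell
cd /opt/tar/images
# export every local image as <name>.tar, keeping the repository:tag metadata
for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
  docker save -o "$(basename "${img%%:*}").tar" "$img"
done
```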

    Package the images:

    1. [root@vm10 ~]# cd /opt/tar
    2. [root@vm10 tar]# tar zcvf images.tar.gz images
    3. ...
  • Copy yum.tar.gz and images.tar.gz to the target servers

    1. [root@vm10 tar]# scp /opt/tar/yum.tar.gz master01:/usr/local/src/.
    2. [root@vm10 tar]# scp /opt/tar/images.tar.gz master01:/usr/local/src/.
    3. [root@vm10 tar]# scp /opt/tar/yum.tar.gz master02:/usr/local/src/.
    4. [root@vm10 tar]# scp /opt/tar/images.tar.gz master02:/usr/local/src/.
    5. ...
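The same copy can be scripted for all five machines; a sketch assuming the hostnames from /etc/hosts resolve and root SSH access is available:

```shell
for host in master01 master02 node01 node02 node03; do
  scp /opt/tar/yum.tar.gz /opt/tar/images.tar.gz "${host}":/usr/local/src/
done
```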

    Offline Deployment of a k8s v1.20.2 Cluster (Part 2)

    The previous part uploaded the required docker images and rpm packages to the target servers. Machine list:

ip hostname
192.168.26.71 master01
192.168.26.72 master02
192.168.26.73 node01
192.168.26.74 node02
192.168.26.75 node03
192.168.26.11 vip

Installing and configuring the k8s cluster

Initialize all servers

1. Set the hostnames

  • Set each machine's hostname as planned; taking master01 as an example:

    1. [root@vm71 ~]# hostnamectl set-hostname master01
  • Also write the other machines' entries into /etc/hosts

    1. ~]# cat << EOF >> /etc/hosts
    2. 192.168.26.71 master01
    3. 192.168.26.72 master02
    4. 192.168.26.73 node01
    5. 192.168.26.74 node02
    6. 192.168.26.75 node03
    7. EOF
  • Verify that each node's MAC address and product UUID are unique

    1. ~]# cat /sys/class/net/ens32/address
    2. ~]# cat /sys/class/dmi/id/product_uuid

    2. Disable swap, turn off selinux, disable NetworkManager, and enable the required kernel modules

    1. ~]# cat << 'INIT' > init.sh
    2. #!/bin/bash
    3. ### initialization script ###
    4. # stop and disable firewalld
    5. systemctl stop firewalld && systemctl disable firewalld
    6. # disable selinux
    7. setenforce 0
    8. sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
    9. # disable swap
    10. swapoff -a
    11. sed -i '/ swap / s/^/#/' /etc/fstab
    12. # stop and disable NetworkManager
    13. systemctl stop NetworkManager && systemctl disable NetworkManager
    14. # adjust kernel parameters
    15. modprobe br_netfilter
    16. cat << EOF >> /etc/sysctl.conf
    17. net.ipv4.ip_forward = 1
    18. net.ipv4.ip_nonlocal_bind = 1
    19. net.bridge.bridge-nf-call-iptables = 1
    20. net.bridge.bridge-nf-call-ip6tables = 1
    21. EOF
    22. # apply the changes
    23. sysctl -p
    24. echo "----init finish-----"
    25. INIT

    Note: the outer here-document uses the quoted delimiter 'INIT' so that the inner EOF (which ends the sysctl.conf section) does not terminate the script prematurely.

    Every machine needs to run this script.
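Distributing and running the script can also be done in one loop from any machine that can reach the others; a sketch assuming password-less root SSH has been set up:

```shell
for host in master01 master02 node01 node02 node03; do
  scp init.sh "${host}":/root/ && ssh "${host}" 'bash /root/init.sh'
done
```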
    Install the yum packages

    1. ~]# cd /usr/local/src
    2. src]# tar -zxvf yum.tar.gz
    3. src]# cd yum/x86_64/7/
    4. drwxr-xr-x 4 base
    5. drwxr-xr-x 4 docker-ce-stable
    6. drwxr-xr-x 4 extras
    7. drwxr-xr-x 4 kubernetes
    8. -rw-r--r-- 1 timedhosts
    9. -rw-r--r-- 1 timedhosts.txt
    10. drwxr-xr-x 4 updates
  • Install all of the rpms under base, extras, docker-ce-stable, and kubernetes

    1. base]# rpm -ivh packages/*.rpm --force --nodeps
    2. extras]# rpm -ivh packages/*.rpm --force --nodeps
    3. docker-ce-stable]# rpm -ivh packages/*.rpm --force --nodeps
    4. kubernetes]# rpm -ivh packages/*.rpm --force --nodeps

    Install them with rpm -ivh <package> or rpm -Uvh <package>:

    1. src]# rpm -Uvh yum/x86_64/7/base/packages/*.rpm --nodeps --force
    2. src]# rpm -Uvh yum/x86_64/7/extras/packages/*.rpm --nodeps --force
    3. src]# rpm -Uvh yum/x86_64/7/docker-ce-stable/packages/*.rpm --nodeps --force
    4. src]# rpm -Uvh yum/x86_64/7/kubernetes/packages/*.rpm --nodeps --force
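A quick way to confirm the key packages landed after the bulk rpm install (a sketch; the package names are the ones installed above):

```shell
rpm -q docker-ce docker-ce-cli containerd.io kubelet kubeadm kubectl keepalived
# kubelet should also be enabled so it starts on boot
systemctl enable kubelet
```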

    Configure and start docker

  • After all of the packages are installed, configure docker

    1. ~]# mkdir -p /etc/docker /data/docker
    2. ~]# cat <<EOF >/etc/docker/daemon.json
    3. {
    4. "graph": "/data/docker",
    5. "storage-driver": "overlay2",
    6. "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
    7. "bip": "172.26.71.1/24",
    8. "exec-opts": ["native.cgroupdriver=systemd"],
    9. "live-restore": true
    10. }
    11. EOF

    The bip matches the host address; for example, 192.168.26.71 maps to 172.26.71.1.
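Because the bip only differs in the third octet, the daemon.json can be generated per host instead of edited by hand. A sketch, assuming the first address reported by hostname -I is the 192.168.26.x address (the registry mirror is omitted since the hosts are offline):

```shell
# derive the host octet (71, 72, ...) from the primary IP address
OCTET=$(hostname -I | awk '{print $1}' | awk -F. '{print $4}')
mkdir -p /etc/docker /data/docker
cat <<EOF > /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "bip": "172.26.${OCTET}.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
EOF
```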

  • Start docker

    1. ~]# systemctl start docker ; systemctl enable docker
  • Check the docker0 address

    1. ~]# ip a
  • Check a container's IP

    1. ~]# docker pull busybox
    2. ~]# docker run -it --rm busybox
    3. / # ip a
    4. ...
    5. 11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    6. link/ether 02:42:ac:1a:49:02 brd ff:ff:ff:ff:ff:ff
    7. inet 172.26.71.2/24 brd 172.26.73.255 scope global eth0
    8. valid_lft forever preferred_lft forever
    9. / # exit

    Configure keepalived

  • Configure master01

    1. [root@vm71 ~]# vi /etc/keepalived/keepalived.conf
    1. ! Configuration File for keepalived
    2. global_defs {
    3. router_id master01
    4. }
    5. vrrp_instance VI_1 {
    6. state MASTER
    7. interface ens32
    8. virtual_router_id 51
    9. priority 100
    10. advert_int 1
    11. authentication {
    12. auth_type PASS
    13. auth_pass 1111
    14. }
    15. virtual_ipaddress {
    16. 192.168.26.11/24
    17. }
    18. }
  • Configure master02

    1. [root@vm72 src]# vi /etc/keepalived/keepalived.conf
    1. ! Configuration File for keepalived
    2. global_defs {
    3. router_id master02
    4. }
    5. vrrp_instance VI_1 {
    6. state BACKUP
    7. interface ens32
    8. virtual_router_id 51
    9. priority 90
    10. advert_int 1
    11. authentication {
    12. auth_type PASS
    13. auth_pass 1111
    14. }
    15. virtual_ipaddress {
    16. 192.168.26.11/24
    17. }
    18. }
  • Once both are configured, start keepalived on master01 and master02 (virtual_router_id must be identical on both nodes so they share one VRRP instance)

    1. systemctl enable keepalived && systemctl start keepalived
  • Handling a startup error

    Checking with systemctl status keepalived shows the following error: …keepalived[1928]: /usr/sbin/keepalived: error while loading shared libraries: libnetsnmpmibs.so.31: …

Fix: on the machine where keepalived was originally installed (vm10), locate libnetsnmpmibs.so.31

  1. ~]# find / -name libnetsnmpmibs.so.31
  2. /usr/lib64/libnetsnmpmibs.so.31
  3. ~]# ls /usr/lib64/libnetsnmp*
  4. /usr/lib64/libnetsnmpagent.so.31 /usr/lib64/libnetsnmpmibs.so.31 /usr/lib64/libnetsnmp.so.31
  5. ~]# scp /usr/lib64/libnetsnmp* master01:/usr/lib64/.

Copy these three files: libnetsnmpagent.so.31, libnetsnmpmibs.so.31, and libnetsnmp.so.31.
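To check whether any other shared libraries are still missing after copying these files, ldd can be run against the keepalived binary (a sketch):

```shell
# any remaining unresolved libraries show up as "not found"
ldd /usr/sbin/keepalived | grep 'not found'
systemctl restart keepalived && systemctl status keepalived
```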

  • Verify

    Check the keepalived process and IP addresses on the master

  1. ~]# ps -ef|grep keepalive
  2. ...
  3. ~]# ip addr
  4. ...


Testing keepalived failover

Log in to 192.168.26.11 and check which host you are on; then stop keepalived with systemctl stop keepalived; log in to 192.168.26.11 again and you will see it has switched to the other master.
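The same failover can be observed directly from the two masters; a sketch of the check:

```shell
# on master01: the VIP should be on ens32 initially
ip addr show ens32 | grep 192.168.26.11
# stop keepalived on master01 to force a failover
systemctl stop keepalived
# on master02: the VIP should now appear here
ip addr show ens32 | grep 192.168.26.11
# restart keepalived on master01 when the test is done
systemctl start keepalived
```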

Importing the images

  1. ]# tar -zxvf images.tar.gz
  2. images/
  3. images/kube-proxy.tar
  4. images/kube-apiserver.tar
  5. images/kube-controller-manager.tar
  6. images/kube-scheduler.tar
  7. images/etcd.tar
  8. images/coredns.tar
  9. images/pause.tar
  10. images/flannel.tar
  11. ]# cd images
  12. ]# docker load < coredns.tar
  13. ]# docker load < etcd.tar
  14. ]# docker load < flannel.tar
  15. ]# docker load < kube-apiserver.tar
  16. ]# docker load < kube-controller-manager.tar
  17. ]# docker load < kube-proxy.tar
  18. ]# docker load < kube-scheduler.tar
  19. ]# docker load < pause.tar
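The individual docker load commands above can equally be replaced by a loop over the extracted tarballs (a sketch, assuming images.tar.gz was extracted under /usr/local/src):

```shell
cd /usr/local/src/images
for t in *.tar; do docker load -i "$t"; done
```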

After the import completes, check:

  1. ]# docker images
  2. REPOSITORY TAG IMAGE ID CREATED SIZE
  3. k8s.gcr.io/kube-proxy v1.20.2 43154ddb57a8 3 weeks ago 118MB
  4. k8s.gcr.io/kube-apiserver v1.20.2 a8c2fdb8bf76 3 weeks ago 122MB
  5. k8s.gcr.io/kube-controller-manager v1.20.2 a27166429d98 3 weeks ago 116MB
  6. k8s.gcr.io/kube-scheduler v1.20.2 ed2c44fbdd78 3 weeks ago 46.4MB
  7. quay.io/coreos/flannel v0.13.1-rc1 f03a23d55e57 2 months ago 64.6MB
  8. k8s.gcr.io/etcd 3.4.13-0 0369cf4303ff 5 months ago 253MB
  9. k8s.gcr.io/coredns 1.7.0 bfe3a36ebd25 7 months ago 45.2MB
  10. k8s.gcr.io/pause 3.2 80d28bedfe5d 11 months ago 683kB

Initializing the k8s control plane

master01: initialize the cluster with a kubeadm-config.yaml file

  1. [root@master01 ~]# mkdir /opt/kubernetes
  2. [root@master01 ~]# cd /opt/kubernetes
  3. [root@master01 kubernetes]# vi kubeadm-config.yaml
  1. apiVersion: kubeadm.k8s.io/v1beta2
  2. kind: ClusterConfiguration
  3. kubernetesVersion: v1.20.2
  4. apiServer:
  5. certSANs: # list the hostnames, IPs, and VIP of every kube-apiserver node
  6. - master01
  7. - master02
  8. - master03
  9. - node01
  10. - node02
  11. - node03
  12. - 192.168.26.71
  13. - 192.168.26.72
  14. - 192.168.26.73
  15. - 192.168.26.74
  16. - 192.168.26.75
  17. - 192.168.26.76
  18. - 192.168.26.77
  19. - 192.168.26.78
  20. - 192.168.26.11
  21. controlPlaneEndpoint: "192.168.26.11:6443"
  22. networking:
  23. podSubnet: "10.26.0.0/16"

One thing to pay attention to here is the networking section: it must follow the network plan exactly, otherwise all kinds of problems will appear.
Run the initialization:

  1. [root@master01 kubernetes]# kubeadm init --config=kubeadm-config.yaml
  1. ...
  2. Your Kubernetes control-plane has initialized successfully!
  3. To start using your cluster, you need to run the following as a regular user:
  4. mkdir -p $HOME/.kube
  5. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  6. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  7. Alternatively, if you are the root user, you can run:
  8. export KUBECONFIG=/etc/kubernetes/admin.conf
  9. You should now deploy a pod network to the cluster.
  10. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  11. https://kubernetes.io/docs/concepts/cluster-administration/addons/
  12. You can now join any number of control-plane nodes by copying certificate authorities
  13. and service account keys on each node and then running the following as root:
  14. kubeadm join 192.168.26.11:6443 --token twhj2u.0mqgy41h6dk12j34 \
  15. --discovery-token-ca-cert-hash sha256:a2f32400830bbfe678c98ed082c4bf9d6429f4e3f4e6bd9731a667bf64bfccdb \
  16. --control-plane
  17. Then you can join any number of worker nodes by running the following on each as root:
  18. kubeadm join 192.168.26.11:6443 --token twhj2u.0mqgy41h6dk12j34 \
  19. --discovery-token-ca-cert-hash sha256:a2f32400830bbfe678c98ed082c4bf9d6429f4e3f4e6bd9731a667bf64bfccdb

If the initialization fails, run the following (and delete the files and directories it mentions):

  1. kubeadm reset

If the command above reports an error, add the --force flag.
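kubeadm reset leaves a few things behind that it only warns about; a sketch of the extra cleanup it suggests (adjust it to whatever the reset output actually lists):

```shell
kubeadm reset --force
# kubeadm reset does not remove CNI configuration, iptables rules, or the kubeconfig copy
rm -rf /etc/cni/net.d "$HOME/.kube/config"
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```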

Load the environment variables

  1. [root@master01 kubernetes]# mkdir -p $HOME/.kube
  2. [root@master01 kubernetes]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  3. [root@master01 kubernetes]# chown $(id -u):$(id -g) $HOME/.kube/config
  4. [root@master01 kubernetes]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
  5. [root@master01 kubernetes]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
  6. [root@master01 kubernetes]# source ~/.bash_profile

Adding the flannel network

Use the kube-flannel.yml file downloaded earlier (the image pull policy and the Pod network CIDR need to be adjusted).
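The two edits can be made before applying the manifest. A sketch, assuming the stock manifest still uses flannel's default Pod network 10.244.0.0/16 and references quay.io/coreos/flannel:v0.13.1-rc1:

```shell
cd /opt/kubernetes
# match the Network in net-conf.json to the podSubnet from kubeadm-config.yaml
sed -i 's#10.244.0.0/16#10.26.0.0/16#g' kube-flannel.yml
# also set imagePullPolicy: IfNotPresent on the flannel containers (by hand or with sed)
# so that the locally imported quay.io/coreos/flannel:v0.13.1-rc1 image is used
```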

  1. [root@master01 kubernetes]# kubectl apply -f kube-flannel.yml

Check the node and pods

  1. [root@master01 kubernetes]# kubectl get node
  2. NAME STATUS ROLES AGE VERSION
  3. master01 Ready control-plane,master 2m16s v1.20.2
  4. [root@master01 kubernetes]# kubectl get po -n kube-system
  5. NAME READY STATUS RESTARTS AGE
  6. coredns-74ff55c5b-br454 0/1 Running 0 109s
  7. coredns-74ff55c5b-gjmr4 1/1 Running 0 109s
  8. etcd-master01 1/1 Running 0 117s
  9. kube-apiserver-master01 1/1 Running 0 117s
  10. kube-controller-manager-master01 1/1 Running 0 117s
  11. kube-flannel-ds-nwv5s 1/1 Running 0 14s
  12. kube-proxy-hfhs9 1/1 Running 0 109s
  13. kube-scheduler-master01 1/1 Running 0 117s

Joining the other control-plane nodes

The most important step is distributing the certificates: copy the relevant certificate files from master01 to master02 and master03 (if it will also be a master).

  1. vi cert-main-master.sh
  1. USER=root # customizable
  2. CONTROL_PLANE_IPS="192.168.26.72 192.168.26.73"
  3. for host in ${CONTROL_PLANE_IPS}; do
  4. scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
  5. scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
  6. scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
  7. scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
  8. scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
  9. scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
  10. scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
  11. # Quote this line if you are using external etcd
  12. scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
  13. done
  14. ## run the following script on master02 and master03:
  15. vi cert-other-master.sh
  16. USER=root # customizable
  17. mkdir -p /etc/kubernetes/pki/etcd
  18. mv /${USER}/ca.crt /etc/kubernetes/pki/
  19. mv /${USER}/ca.key /etc/kubernetes/pki/
  20. mv /${USER}/sa.pub /etc/kubernetes/pki/
  21. mv /${USER}/sa.key /etc/kubernetes/pki/
  22. mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
  23. mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
  24. mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
  25. # Quote this line if you are using external etcd
  26. mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key

In this exercise only master01 and master02 serve as masters; on master02 the certificates were copied as follows instead:

  1. [root@master02 ~]# mkdir -p /etc/kubernetes/pki/etcd
  2. [root@master02 ~]# cd /etc/kubernetes/pki/
  3. [root@master02 pki]# scp master01:/etc/kubernetes/pki/ca.* .
  4. [root@master02 pki]# scp master01:/etc/kubernetes/pki/sa.* .
  5. [root@master02 pki]# scp master01:/etc/kubernetes/pki/front-proxy-ca.* .
  6. [root@master02 pki]# scp master01:/etc/kubernetes/pki/etcd/ca.* ./etcd/.

master02 and master03 join as control-plane nodes

Note that master02 and master03 must have docker, kubelet, kubectl and the other yum packages installed, and the docker images imported, before running the following.

  1. ]# kubeadm join 192.168.26.11:6443 --token twhj2u.0mqgy41h6dk12j34 \
  2. --discovery-token-ca-cert-hash sha256:a2f32400830bbfe678c98ed082c4bf9d6429f4e3f4e6bd9731a667bf64bfccdb \
  3. --control-plane
  1. ...
  2. This node has joined the cluster and a new control plane instance was created:
  3. * Certificate signing request was sent to apiserver and approval was received.
  4. * The Kubelet was informed of the new secure connection details.
  5. * Control plane (master) label and taint were applied to the new node.
  6. * The Kubernetes control plane instances scaled up.
  7. * A new etcd member was added to the local/stacked etcd cluster.
  8. To start administering your cluster from this node, you need to run the following as a regular user:
  9. mkdir -p $HOME/.kube
  10. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  11. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  12. Run 'kubectl get nodes' to see this node join the cluster.
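If the bootstrap token from the original kubeadm init has expired by the time another node joins, a fresh join command can be generated on master01. These are standard kubeadm subcommands, shown here only as a sketch since the printed values will differ:

```shell
# print a new worker join command with a fresh token
kubeadm token create --print-join-command
# for a control-plane join without manually copied certificates, re-upload the certs
# and append --control-plane --certificate-key <key> to the join command
kubeadm init phase upload-certs --upload-certs
```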

Also set up the environment variables:

  1. ]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
  2. ]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
  3. ]# source ~/.bash_profile

Verify with the following commands:

  1. [root@master02 pki]# kubectl get nodes
  2. NAME STATUS ROLES AGE VERSION
  3. master01 Ready control-plane,master 11m v1.20.2
  4. master02 Ready control-plane,master 4m53s v1.20.2
  5. [root@master02 pki]# kubectl get pod -n kube-system -o wide
  6. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  7. coredns-74ff55c5b-br454 1/1 Running 0 14m 10.26.0.2 master01 <none> <none>
  8. coredns-74ff55c5b-gjmr4 1/1 Running 0 14m 10.26.0.3 master01 <none> <none>
  9. etcd-master01 1/1 Running 1 15m 192.168.26.71 master01 <none> <none>
  10. etcd-master02 1/1 Running 0 5m15s 192.168.26.72 master02 <none> <none>
  11. kube-apiserver-master01 1/1 Running 3 15m 192.168.26.71 master01 <none> <none>
  12. kube-apiserver-master02 1/1 Running 4 8m49s 192.168.26.72 master02 <none> <none>
  13. kube-controller-manager-master01 1/1 Running 1 15m 192.168.26.71 master01 <none> <none>
  14. kube-controller-manager-master02 1/1 Running 0 8m49s 192.168.26.72 master02 <none> <none>
  15. kube-flannel-ds-9lr7q 1/1 Running 6 8m50s 192.168.26.72 master02 <none> <none>
  16. kube-flannel-ds-nwv5s 1/1 Running 0 13m 192.168.26.71 master01 <none> <none>
  17. kube-proxy-hfhs9 1/1 Running 0 14m 192.168.26.71 master01 <none> <none>
  18. kube-proxy-p9p2g 1/1 Running 0 8m50s 192.168.26.72 master02 <none> <none>
  19. kube-scheduler-master01 1/1 Running 1 15m 192.168.26.71 master01 <none> <none>
  20. kube-scheduler-master02 1/1 Running 0 8m49s 192.168.26.72 master02 <none> <none>

Joining the worker nodes

Do this on node01, node02, and node03 (only node01 is added here). These nodes likewise need the node initialization done, the docker, kubelet, kubeadm and related packages installed, and the docker images imported.

  1. ]# rpm -Uvh yum/x86_64/7/base/packages/*.rpm --nodeps --force
  2. ...
  3. ]# rpm -Uvh yum/x86_64/7/extras/packages/*.rpm --nodeps --force
  4. ...
  5. ]# rpm -Uvh yum/x86_64/7/docker-ce-stable/packages/*.rpm --nodeps --force
  6. ...
  7. ]# rpm -Uvh yum/x86_64/7/kubernetes/packages/*.rpm --nodeps --force
  8. ...

Run:

  1. ]# kubeadm join 192.168.26.11:6443 --token twhj2u.0mqgy41h6dk12j34 \
  2. --discovery-token-ca-cert-hash sha256:a2f32400830bbfe678c98ed082c4bf9d6429f4e3f4e6bd9731a667bf64bfccdb
  1. ...
  2. This node has joined the cluster:
  3. * Certificate signing request was sent to apiserver and a response was received.
  4. * The Kubelet was informed of the new secure connection details.
  5. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Once everything has run without errors, verify on a master node that the join succeeded.

  1. [root@master02 ~]# kubectl get node -o wide
  2. NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
  3. master01 Ready control-plane,master 4h43m v1.20.2 192.168.26.71 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.3
  4. master02 Ready control-plane,master 4h37m v1.20.2 192.168.26.72 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.3
  5. node01 Ready <none> 10m v1.20.2 192.168.26.73 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.3

Verification: deploy nginx

  • Create the deployment, pods, and service

    1. [root@master02 ~]# kubectl create deployment nginx-dep --image=nginx --replicas=2
    2. deployment.apps/nginx-dep created
    3. [root@master02 ~]# kubectl expose deployment nginx-dep --port=80 --target-port=80 --type=NodePort
    4. service/nginx-dep exposed
    5. [root@master02 ~]# kubectl get svc | grep nginx
    6. nginx-dep NodePort 10.98.67.230 <none> 80:30517/TCP 13s
  • Open http://192.168.26.11:30517/ in a browser.
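The NodePort can also be verified from the command line; a sketch that checks the service through the VIP and a node IP (port 30517 is the one assigned above and will differ per cluster):

```shell
curl -I http://192.168.26.11:30517/
curl -I http://192.168.26.73:30517/
```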


  • Check the pods
    1. [root@master02 ~]# kubectl get po -o wide
    2. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    3. nginx-dep-5c5477cb4-85d6x 1/1 Running 0 11m 10.26.2.2 node01 <none> <none>
    4. nginx-dep-5c5477cb4-vm2g9 1/1 Running 0 11m 10.26.2.3 node01 <none> <none>
    At this point, k8s v1.20.2 has been successfully deployed!

    Offline Deployment of a k8s v1.20.2 Cluster (Part 3)

    Deploying the Kubernetes Dashboard

    Download the images and the yaml file

    GitHub download page: https://github.com/kubernetes/dashboard/releases
    Selected release: https://github.com/kubernetes/dashboard/releases/tag/v2.1.0
    1. ~]# docker pull kubernetesui/dashboard:v2.1.0
    2. ...
    3. ~]# docker pull kubernetesui/metrics-scraper:v1.0.6
    4. ...
    5. ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
    The recommended.yaml file is hard to download directly with wget; it can instead be copied from the browser:
    https://github.com/kubernetes/dashboard/blob/v2.1.0/aio/deploy/recommended.yaml
    1. [root@master01 kubernetes]# vi dashboard.yaml
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30123
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
```
  • By default the Dashboard is only reachable from inside the cluster; the `Service` is changed to type `NodePort` with `nodePort: 30123` to expose it externally.
  • The image pull policy is set to `imagePullPolicy: IfNotPresent` so the locally imported images are used.

    Apply the configuration file

  1. [root@master01 kubernetes]# kubectl apply -f dashboard.yaml
  2. ...
  3. [root@master01 kubernetes]# kubectl get pods,svc -n kubernetes-dashboard
  4. NAME READY STATUS RESTARTS AGE
  5. pod/dashboard-metrics-scraper-79c5968bdc-zjwcb 1/1 Running 0 83s
  6. pod/kubernetes-dashboard-b8995f9f8-sp22l 1/1 Running 0 83s
  7. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  8. service/dashboard-metrics-scraper ClusterIP 10.105.36.135 <none> 8000/TCP 83s
  9. service/kubernetes-dashboard NodePort 10.107.175.193 <none> 443:30123/TCP 84s
Access URL: https://NodeIP:30123, i.e. https://192.168.26.11:30123

Logging in

  • Create a service account and bind it to the default cluster-admin cluster role

    The token in the kubernetes-dashboard namespace has very limited permissions; using the admin role here makes every resource object in the cluster visible.

[root@master01 kubernetes]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@master01 kubernetes]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

  • Get the login token

[root@master01 kubernetes]# kubectl -n kube-system get secret | grep dashboard
dashboard-admin-token-g6tg7 kubernetes.io/service-account-token 3 26s
[root@master01 kubernetes]# kubectl describe secrets -n kube-system dashboard-admin-token-g6tg7
Name: dashboard-admin-token-g6tg7
Namespace: kube-system
Labels:
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: f6e73447-5f15-46a4-a578-12e815a998f5
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImFjVm9aWHZ5eFc2M2VTbFdZUXdpZXFhWm5UQXZZY05zZFVGdm8xcFhqSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZzZ0ZzciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjZlNzM0NDctNWYxNS00NmE0LWE1NzgtMTJlODE1YTk5OGY1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.F5ayMbfCA5H0-W9FU211WpE74w0kzPfH0Sj-sPCK-EGeSSnZ6KqtO2HD7H01f5rv45IIIyguj1_0Z6b3qVtDsR9EwlJVEKxBvaV7aoU-MopuUL4W-ZoXqndgGHLsZp3G9tdcw-MdTsIBXa5Kx7BVMmNTMQasSO4RkKHA03GASxbhcS5NbMYIsBe9ZAJ1X4wE-SMix31c0l2Wf6fIWJOb4KFBxUWzYH0gSZrUhDsKkKSF4GNEXi99AccopbOYFuY_xZ0BEqmanhQKU2NqtttaVm01XC9B1WuGsDUyq_CBzB9sewERpaL06MDGsd6FUPv8_Fa8yKMXEIRjUqBrgQZ-uA
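The two lookups can be combined into a single command; a sketch that prints only the token value:

```shell
kubectl -n kube-system describe secret \
  "$(kubectl -n kube-system get secret | awk '/dashboard-admin-token/{print $1}')" \
  | awk '/^token:/{print $2}'
```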


  • View the nginx deployment created earlier

At this point, the Kubernetes Dashboard has been successfully deployed.

2021/2/9, Guangzhou

Related reference:
https://gitee.com/cloudlove2007/k8s-center