5.4 Images

| Image | Image name | Purpose |
| --- | --- | --- |
| metrics-server:v0.6.1 | registry.aliyuncs.com/google_containers/metrics-server:v0.6.1 | deployment |
| coredns:1.8.6 | registry.aliyuncs.com/google_containers/coredns:1.8.6 | deployment |
| dashboard:v2.0.1 | kubernetesui/dashboard:v2.0.1 | deployment |
| metrics-scraper:v1.0.4 | kubernetesui/metrics-scraper:v1.0.4 | deployment |
| pause:3.2 | registry.aliyuncs.com/google_containers/pause:3.2 | deployment |
| centos:latest | centos:latest | verification |
| busybox:1.28.4 | busybox:1.28.4 | verification |
| infoblox/dnstools:latest | infoblox/dnstools:latest | verification |
| nginx:latest | nginx:latest | verification |

II. Planning

1. Deployment steps

Steps 1 to 4 are deployed from binaries; steps 5 to 7 are deployed inside the k8s cluster from YAML manifests and images (cloud-native deployment).

  • 1. Deploy the etcd cluster (3 nodes, an odd number). flannel needs etcd, so etcd is deployed first.
  • 2. Deploy docker and flannel on every node; verify container-to-container and container-to-host access.
  • 3. On the master nodes, deploy kube-apiserver, kube-controller-manager and kube-scheduler in turn.
  • 4. On the worker nodes, deploy kubelet and kube-proxy.
  • 5. Deploy coredns on the master node (it can also be pinned to a specific node).
  • 6. Deploy dashboard on the master node (it can also be pinned to a specific node).
  • 7. Deploy metrics-server.

    If the k8s cluster breaks and can no longer be used, it can be reset and redeployed. The related steps (a hedged cleanup sketch follows this list):

  • 1. First confirm that docker, flannel and etcd are healthy; if any of them has a problem, fix it first.

  • 2. On each node in turn, stop kubelet, kube-proxy, kube-scheduler, kube-controller-manager and kube-apiserver.
  • 3. Check the configuration and delete the configuration and certificate files generated automatically during deployment.
  • 4. Clear the records stored in etcd, stop etcd, then delete the etcd data directory.
  • 5. Remember to clear the /root/.kube directory.
    Then go through the deployment steps again one by one, checking the configuration, starting each component and verifying every small step.
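A minimal cleanup sketch for the reset above, assuming the directory layout and unit names used throughout this document (/opt/kubernetes, /opt/etcd, /data/etcd); check every path before running it on a node.

```shell
# Hedged reset helper (assumption: unit names and paths from this document).
for svc in kubelet kube-proxy kube-scheduler kube-controller-manager kube-apiserver; do
    systemctl stop $svc                              # step 2: stop the k8s components
done
rm -rf /root/.kube                                   # step 5
rm -f /opt/kubernetes/cfg/*.kubeconfig               # step 3: auto-generated configs only
systemctl stop etcd && rm -rf /data/etcd/*           # step 4: etcd data directory
```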

    2. Servers

    | hostname | host ip | docker ip | role |
    | --- | --- | --- | --- |
    | k8s-7 | 192.168.26.7 | 172.26.7.1/24 | master&worker, etcd, docker, flannel |
    | k8s-8 | 192.168.26.8 | 172.26.8.1/24 | master&worker, etcd, docker, flannel |
    | k8s-9 | 192.168.26.9 | 172.26.9.1/24 | master&worker, etcd, docker, flannel |

3. Software list

| Name | Version | Download link |
| --- | --- | --- |
| centos | 7.9.2009-Minimal | http://ftp.sjtu.edu.cn/centos/7.9.2009/isos/x86_64/CentOS-7-x86_64-Minimal-2009.iso |
| Kubernetes | v1.23.6 | https://dl.k8s.io/v1.23.6/kubernetes-server-linux-amd64.tar.gz |
| docker | 20.10.14 | https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz |
| etcd | v3.5.4 | https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz |
| flannel | v0.17.0 | https://github.com/flannel-io/flannel/releases/download/v0.17.0/flannel-v0.17.0-linux-amd64.tar.gz |
| cfssl | R1.2 | https://pkg.cfssl.org/R1.2/cfssl_linux-amd64<br>https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64<br>https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 |

4. Cluster network planning

  • Docker container IPs map to host IPs, so a container IP immediately tells you which host it runs on: the 2nd and 3rd octets of the container IP are the 3rd and 4th octets of the host IP.

    172.<host octet 3>.<host octet 4>.1/24, configured differently in docker on each host. This makes it easy to locate a container's host when troubleshooting (a small sketch deriving the value follows this block).
    k8s-7 container IP: 172.26.7.1/24
    k8s-8 container IP: 172.26.8.1/24
    k8s-9 container IP: 172.26.9.1/24
    
  • IP addressing in the cluster

    1. Node IP: the IP address of a Node, i.e. the NIC address of the physical (host) machine.
    2. Pod IP: the IP address of a Pod, allocated by the docker0 bridge.
    3. Cluster IP (also called the Service IP): the IP address of a Service.
apiserver:
--service-cluster-ip-range 10.168.0.0/16

controller:
--cluster-cidr 172.26.0.0/16
--service-cluster-ip-range 10.168.0.0/16

kubelet:
--cluster-dns 10.168.0.2

proxy:
--cluster-cidr 172.26.0.0/16

flannel:
FLANNEL_NETWORK=172.26.0.0/16
FLANNEL_SUBNET=172.26.[7/8/9].1/24  ## matches the docker bip on each host
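The host-IP to container-subnet mapping above can be derived mechanically. A minimal sketch (an illustration, not part of the original steps), assuming the host IP sits on ens32:

```shell
# Derive the per-host docker "bip" / FLANNEL_SUBNET from the host IP on ens32
# (rule from above: container octets 2 and 3 = host octets 3 and 4).
host_ip=$(ip -4 addr show ens32 | awk '/inet /{split($2, a, "/"); print a[1]}')
o3=$(echo "$host_ip" | cut -d. -f3)
o4=$(echo "$host_ip" | cut -d. -f4)
echo "bip / FLANNEL_SUBNET for this host: 172.${o3}.${o4}.1/24"
```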

5. Deployment directory layout

  • /opt/src: deployment packages and the original manifest/config files
  • /opt/bin: symlinks to the binaries under /opt/apps
  • /opt/apps: unpacked software packages, directory names carry the version
  • /opt/etcd: three subdirectories: cfg (configuration), ssl (certificates) and logs
  • /opt/kubernetes: three subdirectories: cfg (configuration), ssl (certificates) and logs
  • /opt/certs: two subdirectories: etcd (for generating the etcd certificates) and k8s (for generating the k8s component certificates)
  • /var/log/kubernetes: unified log directory
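A one-shot sketch that creates this layout on a node (the subdirectory names simply follow the list above; adjust to taste):

```shell
# Create the deployment directory layout described above (run as root on every node).
mkdir -p /opt/{src,bin,apps}
mkdir -p /opt/etcd/{cfg,ssl,logs} /opt/kubernetes/{cfg,ssl,logs}
mkdir -p /opt/certs/{etcd,k8s}
mkdir -p /var/log/kubernetes
```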

    6. Install CentOS and prepare the environment

    6.1 Install CentOS

  • Install a VM template: see https://www.cnblogs.com/sw-blog/p/14394949.html


  • Configure the VM network: menu [Edit] -> [Virtual Network Editor…]


  • Boot the template and remove the NIC-specific configuration: IP, MAC, UUID

The template must not be cloned as-is (the clones would share the same IP, MAC and so on; the machine-specific data has to be removed first).
Check whether the /etc/sysconfig/network-scripts directory contains an ifcfg-ens* file (on a Windows host it is usually ifcfg-ens32):

ls /etc/sysconfig/network-scripts
vi /etc/sysconfig/network-scripts/ifcfg-ens32

Keep only the following entries:

TYPE=Ethernet
BOOTPROTO=dhcp
NAME=ens32
DEVICE=ens32
ONBOOT=yes
  • Delete the ssh host key files: /etc/ssh/ssh_host_*

    rm -rf  /etc/ssh/ssh_host_*
    
  • Empty /etc/machine-id

    cat /dev/null > /etc/machine-id
    
  • set.sh: change the hostname and the IP address

```shell
#!/bin/bash
if [ $# -eq 0 ]; then
    echo "usage: $(basename $0) num"
    exit 1
fi
[[ $1 =~ ^[0-9]+$ ]]
if [ $? -ne 0 ]; then
    echo "usage: $(basename $0) 10~240"
    exit 1
fi

cat > /etc/sysconfig/network-scripts/ifcfg-ens32 <<EOF
TYPE=Ethernet
BOOTPROTO=none
NAME=ens32
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.26.${1}
NETMASK=255.255.255.0
GATEWAY=192.168.26.2
DNS1=192.168.26.2
EOF

systemctl restart network &> /dev/null
ip=$(ifconfig ens32 | awk '/inet /{print $2}')
sed -i '/192/d' /etc/issue
echo $ip
echo $ip >> /etc/issue
hostnamectl set-hostname k8s-${1}
echo "192.168.26.$1 k8s-${1}.example.com k8s-${1}" >> /etc/hosts
```

Because this is a minimal installation, ifconfig may be missing; install it first: yum install net-tools -y

- poweroff: shut down the template VM and do not start it again afterwards, otherwise the cleanup has to be redone.

#### 6.2 Environment preparation

- Disable selinux, swap and firewalld

Disable selinux:

```
~]# setenforce 0
~]# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
```

Disable swap:

```
~]# swapoff -a
~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```

Disable the firewall:

```
~]# systemctl stop firewalld && systemctl disable firewalld
~]# systemctl mask firewalld.service
```


- /etc/hosts

```
192.168.26.7 k8s-7.example.com k8s-7 etcd_node1
192.168.26.8 k8s-8.example.com k8s-8 etcd_node2
192.168.26.9 k8s-9.example.com k8s-9 etcd_node3
```


- Kernel parameters for k8s

```
]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
```

```
]# sysctl -p /etc/sysctl.d/k8s.conf
```
If this fails with `sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory` (and the same for bridge-nf-call-iptables), run `]# modprobe br_netfilter` and then re-run `]# sysctl -p /etc/sysctl.d/k8s.conf`.

#### 6.3 Create the deployment directories

~]# mkdir /opt/{src,bin,apps,etcd,kubernetes,certs} -p

Put the downloaded software into /opt/src.

#### 6.4 Environment variables

```
ip add | grep 192.168 | awk -F " " '{print $2}' | awk -F "/" '{print "node_ip="$1}' > /tmp/env
echo 'etcd_node1=192.168.26.7' >> /tmp/env
echo 'etcd_node2=192.168.26.8' >> /tmp/env
echo 'etcd_node3=192.168.26.9' >> /tmp/env
echo 'etcd_node1_name=k8s-7' >> /tmp/env
echo 'etcd_node2_name=k8s-8' >> /tmp/env
echo 'etcd_node3_name=k8s-9' >> /tmp/env
```
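Later steps source this file to pick up node-specific values; a quick sanity check, assuming /tmp/env was generated as above:

```shell
source /tmp/env
echo "node_ip=${node_ip} etcd_node1=${etcd_node1} (${etcd_node1_name})"   # should echo this node's IP and the first etcd node
```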

#### 6.5 Install the cfssl tools and create the CA certificate
```shell
~]# cd /opt/src
src]# ls cfssl* -l
-rw-r--r--. 1 root root  6595195 4月  30 23:12 cfssl-certinfo_linux-amd64
-rw-r--r--. 1 root root  2277873 4月  30 23:12 cfssljson_linux-amd64
-rw-r--r--. 1 root root 10376657 4月  30 23:12 cfssl_linux-amd64
src]# chmod +x cfssl*
src]# ls cfssl* -l
-rwxr-xr-x. 1 root root  6595195 4月  30 23:12 cfssl-certinfo_linux-amd64
-rwxr-xr-x. 1 root root  2277873 4月  30 23:12 cfssljson_linux-amd64
-rwxr-xr-x. 1 root root 10376657 4月  30 23:12 cfssl_linux-amd64
src]# mv cfssl* /opt/apps/.
src]# ln -s /opt/apps/cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
src]# ln -s /opt/apps/cfssljson_linux-amd64 /usr/bin/cfssljson
src]# ln -s /opt/apps/cfssl_linux-amd64 /usr/bin/cfssl
src]# cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6
src]# cfssl
No command is given.
Usage:
Available commands:
        ocspdump
        ocsprefresh
        ocspsign
        scan
        version
        genkey
        info
        sign
        gencrl
        ocspserve
        print-defaults
        revoke
        bundle
        gencert
        selfsign
        certinfo
        serve
...
```

  • Default CSR request template

    src]# cfssl print-defaults
    not enough arguments are supplied --- please refer to the usage
    [root@k8s-7 src]# cfssl print-defaults csr
    {
      "CN": "example.net",
      "hosts": [
          "example.net",
          "www.example.net"
      ],
      "key": {
          "algo": "ecdsa",
          "size": 256
      },
      "names": [
          {
              "C": "US",
              "L": "CA",
              "ST": "San Francisco"
          }
      ]
    }
    

    CN: Common Name. kube-apiserver extracts this field from the certificate and uses it as the requesting user name (User Name).
    O: Organization. kube-apiserver extracts this field and uses it as the group (Group) the requesting user belongs to.
    CN is also what browsers use to check whether a site is legitimate, so it usually holds the domain name and is very important.
    C: Country
    L: Locality, city
    O: Organization Name, company name
    OU: Organization Unit Name, department
    ST: State, province

  • CA request file: /opt/certs/ca-csr.json

    {
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "k8s",
        "OU": "System"
      }
    ],
    "ca": {
      "expiry": "87600h"
    }
    }
    

    Create the CA certificate:

    ]# cd /opt/certs
    certs]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    2022/05/01 00:12:51 [INFO] generating a new CA key and certificate from CSR
    2022/05/01 00:12:51 [INFO] generate received request
    2022/05/01 00:12:51 [INFO] received CSR
    2022/05/01 00:12:51 [INFO] generating key: rsa-2048
    2022/05/01 00:12:51 [INFO] encoded CSR
    2022/05/01 00:12:51 [INFO] signed certificate with serial number 666505233292673571095777597686635125020926347700
    [root@k8s-7 certs]# ll
    总用量 16
    -rw-r--r--. 1 root root 1001 5月   1 00:12 ca.csr
    -rw-r--r--. 1 root root  246 5月   1 00:12 ca-csr.json
    -rw-------. 1 root root 1675 5月   1 00:12 ca-key.pem
    -rw-r--r--. 1 root root 1359 5月   1 00:12 ca.pem
    
  • Default certificate signing policy template

    certs]# cfssl print-defaults config
    {
      "signing": {
          "default": {
              "expiry": "168h"
          },
          "profiles": {
              "www": {
                  "expiry": "8760h",
                  "usages": [
                      "signing",
                      "key encipherment",
                      "server auth"
                  ]
              },
              "client": {
                  "expiry": "8760h",
                  "usages": [
                      "signing",
                      "key encipherment",
                      "client auth"
                  ]
              }
          }
      }
    }
    

    Modify the template above to produce /opt/certs/ca-config.json:

    {
    "signing": {
      "default": {
        "expiry": "87600h"
      },
      "profiles": {
        "kubernetes": {
          "expiry": "87600h",
          "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
          ]
        }
      }
    }
    }
    

    III. Deploy etcd

    1. Create the etcd CSR request file

  • /opt/certs/etcd-csr.json

    {
      "CN": "etcd",
      "hosts": [
      "127.0.0.1",
      "192.168.26.7",
      "192.168.26.8",
      "192.168.26.9"
      ],
      "key": {
          "algo": "rsa",
          "size": 2048
      },
      "names": [
          {
              "C": "CN",
              "L": "BeiJing",
              "ST": "BeiJing",
              "O": "k8s",
              "OU": "System"
          }
      ]
    }
    
    certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd
    certs]# ls -l etcd*
    -rw-r--r--. 1 root root 1062 5月   1 00:27 etcd.csr
    -rw-r--r--. 1 root root  356 5月   1 00:26 etcd-csr.json
    -rw-------. 1 root root 1675 5月   1 00:27 etcd-key.pem
    -rw-r--r--. 1 root root 1436 5月   1 00:27 etcd.pem
    

    2. Unpack and create symlinks

    ]# cd /opt/src
    src]# tar zxvf etcd-v3.5.4-linux-amd64.tar.gz
    ...
    src]# mv etcd-v3.5.4-linux-amd64 /opt/apps/.
    src]# cd /opt/apps/etcd-v3.5.4-linux-amd64/
    etcd-v3.5.4-linux-amd64]# ln -s /opt/apps/etcd-v3.5.4-linux-amd64/etcd /opt/bin/etcd
    etcd-v3.5.4-linux-amd64]# ln -s /opt/apps/etcd-v3.5.4-linux-amd64/etcdctl /usr/bin/etcdctl
    

    3. Create directories and copy the certificates

    ]# mkdir -p /data/etcd  ## data directory
    ]# mkdir /opt/etcd/ssl  ## certificate directory
    ]# cd /opt/certs/
    certs]# cp ca.pem etcd.pem etcd-key.pem /opt/etcd/ssl/.
    certs]# ls -l /opt/etcd/ssl/
    总用量 12
    -rw-r--r--. 1 root root 1359 5月   1 00:58 ca.pem
    -rw-------. 1 root root 1675 5月   1 00:58 etcd-key.pem
    -rw-r--r--. 1 root root 1436 5月   1 00:58 etcd.pem
    

    4. Create the configuration file /opt/etcd/etcd.yaml

    ]# source /tmp/env
    ]# cat > /opt/etcd/etcd.yaml <<EOF
    name: ${HOSTNAME}-etcd
    data-dir: /data/etcd
    listen-peer-urls: https://${etcd_node1}:2380
    listen-client-urls: https://${etcd_node1}:2379,https://127.0.0.1:2379
    advertise-client-urls: https://${etcd_node1}:2379,https://127.0.0.1:2379
    initial-advertise-peer-urls: https://${etcd_node1}:2380
    initial-cluster: ${etcd_node1_name}-etcd=https://${etcd_node1}:2380,${etcd_node2_name}-etcd=https://${etcd_node2}:2380,${etcd_node3_name}-etcd=https://${etcd_node3}:2380
    initial-cluster-token: etcd-cluster-token
    initial-cluster-state: new
    client-transport-security:
      cert-file: /opt/etcd/ssl/etcd.pem
      key-file: /opt/etcd/ssl/etcd-key.pem
      client-cert-auth: true
      trusted-ca-file: /opt/etcd/ssl/ca.pem
    peer-transport-security:
      cert-file: /opt/etcd/ssl/etcd.pem
      key-file: /opt/etcd/ssl/etcd-key.pem
      client-cert-auth: true
      trusted-ca-file: /opt/etcd/ssl/ca.pem
    #log-level: debug
    enable-v2: true
    EOF
    

    Note the node-specific variable in the listen/advertise URLs: on k8s-7 use ${etcd_node1}, on k8s-8 use ${etcd_node2}, on k8s-9 use ${etcd_node3} (a hedged alternative that avoids hand editing is sketched below).
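As referenced in the note above, one option (an illustration, not the original author's procedure) is to use the per-host node_ip from /tmp/env instead of picking etcd_node1/2/3 by hand; the node-specific lines of etcd.yaml then become:

```shell
# Sketch: emit only the node-specific lines using this host's own IP from /tmp/env.
source /tmp/env
cat <<EOF
listen-peer-urls: https://${node_ip}:2380
listen-client-urls: https://${node_ip}:2379,https://127.0.0.1:2379
advertise-client-urls: https://${node_ip}:2379,https://127.0.0.1:2379
initial-advertise-peer-urls: https://${node_ip}:2380
EOF
```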

    5. Create the systemd unit file

```
cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data/etcd
ExecStart=/opt/bin/etcd --config-file=/opt/etcd/etcd.yaml
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

### 6. Copy to k8s-8 and k8s-9

```
]# scp -r /opt/apps/etcd-v3.5.4-linux-amd64/ root@k8s-8:/opt/apps/.
]# scp -r /opt/apps/etcd-v3.5.4-linux-amd64/ root@k8s-9:/opt/apps/.
]# scp -r /opt/etcd root@k8s-8:/opt/.
]# scp -r /opt/etcd root@k8s-9:/opt/.
]# scp /usr/lib/systemd/system/etcd.service root@k8s-8:/usr/lib/systemd/system/.
]# scp /usr/lib/systemd/system/etcd.service root@k8s-9:/usr/lib/systemd/system/.
```

### 7. Create symlinks and the data directory on k8s-8 and k8s-9

```
ln -s /opt/apps/etcd-v3.5.4-linux-amd64/etcd /opt/bin/etcd
ln -s /opt/apps/etcd-v3.5.4-linux-amd64/etcdctl /usr/bin/etcdctl
mkdir -p /data/etcd    ## data directory
```

### 8. Adjust the configuration file /opt/etcd/etcd.yaml on k8s-8 and k8s-9

- k8s-8

```
name: k8s-8-etcd
data-dir: /data/etcd
listen-peer-urls: https://192.168.26.8:2380
listen-client-urls: https://192.168.26.8:2379,https://127.0.0.1:2379
advertise-client-urls: https://192.168.26.8:2379,https://127.0.0.1:2379
initial-advertise-peer-urls: https://192.168.26.8:2380
```


- k8s-9

```
name: k8s-9-etcd
data-dir: /data/etcd
listen-peer-urls: https://192.168.26.9:2380
listen-client-urls: https://192.168.26.9:2379,https://127.0.0.1:2379
advertise-client-urls: https://192.168.26.9:2379,https://127.0.0.1:2379
initial-advertise-peer-urls: https://192.168.26.9:2380
```

### 9. Start and check

```
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
```

Check:
```properties
]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since 日 2022-05-01 01:36:37 CST; 1min 28s ago
 Main PID: 8852 (etcd)
   CGroup: /system.slice/etcd.service
           └─8852 /opt/bin/etcd --config-file=/opt/etcd/etcd.yaml
...
]# ETCDCTL_API=3 etcdctl --write-out=table --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints=192.168.26.7:2379,192.168.26.8:2379,192.168.26.9:2379 endpoint health
]# etcdctl --write-out=table --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints=192.168.26.7:2379,192.168.26.8:2379,192.168.26.9:2379 endpoint health
+-------------------+--------+-------------+-------+
|     ENDPOINT      | HEALTH |    TOOK     | ERROR |
+-------------------+--------+-------------+-------+
| 192.168.26.7:2379 |   true | 30.131678ms |       |
| 192.168.26.9:2379 |   true | 30.974601ms |       |
| 192.168.26.8:2379 |   true |  38.56016ms |       |
+-------------------+--------+-------------+-------+
]# etcdctl --write-out=table --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem check perf
 59 / 60 Booooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooom  !  98.33%PASS: Throughput is 150 writes/s
PASS: Slowest request took 0.086543s
PASS: Stddev is 0.002367s
PASS
]# etcdctl --write-out=table --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem member list
+------------------+---------+------------+---------------------------+--------------------------------------------------+------------+
|        ID        | STATUS  |    NAME    |        PEER ADDRS         |                   CLIENT ADDRS                   | IS LEARNER |
+------------------+---------+------------+---------------------------+--------------------------------------------------+------------+
|  f608b7c4a6934f6 | started | k8s-7-etcd | https://192.168.26.7:2380 | https://127.0.0.1:2379,https://192.168.26.7:2379 |      false |
| 36a4b324a72c13c0 | started | k8s-8-etcd | https://192.168.26.8:2380 | https://127.0.0.1:2379,https://192.168.26.8:2379 |      false |
| bc05853beb4b841a | started | k8s-9-etcd | https://192.168.26.9:2380 | https://127.0.0.1:2379,https://192.168.26.9:2379 |      false |
+------------------+---------+------------+---------------------------+--------------------------------------------------+------------+
]# etcdctl --write-out=table --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints=192.168.26.7:2379,192.168.26.8:2379,192.168.26.9:2379 endpoint status
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.26.7:2379 |  f608b7c4a6934f6 |   3.5.4 |   22 MB |      true |      false |         2 |       8973 |               8973 |        |
| 192.168.26.8:2379 | 36a4b324a72c13c0 |   3.5.4 |   22 MB |     false |      false |         2 |       8973 |               8973 |        |
| 192.168.26.9:2379 | bc05853beb4b841a |   3.5.4 |   22 MB |     false |      false |         2 |       8973 |               8973 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

```

IV. Deploy Docker

1. Unpack and create symlinks

]# cd /opt/src
src]# tar zxvf docker-20.10.14.tgz
src]# mv docker /opt/apps/docker-20.10.14
src]# ln -s /opt/apps/docker-20.10.14/docker /usr/bin/docker
src]# ln -s /opt/apps/docker-20.10.14/dockerd /usr/bin/dockerd
src]# ln -s /opt/apps/docker-20.10.14/docker-init /usr/bin/docker-init
src]# ln -s /opt/apps/docker-20.10.14/docker-proxy /usr/bin/docker-proxy
src]# ln -s /opt/apps/docker-20.10.14/runc /usr/bin/runc
src]# ln -s /opt/apps/docker-20.10.14/ctr /usr/bin/ctr
src]# ln -s /opt/apps/docker-20.10.14/containerd /usr/bin/containerd
src]# ln -s /opt/apps/docker-20.10.14/containerd-shim /usr/bin/containerd-shim
src]# ln -s /opt/apps/docker-20.10.14/containerd-shim-runc-v2 /usr/bin/containerd-shim-runc-v2
src]# docker -v
Docker version 20.10.14, build a224086

## copy to k8s-8 and k8s-9, then create the symlinks there as well
src]# scp -r /opt/apps/docker-20.10.14/ root@k8s-8:/opt/apps/.
src]# scp -r /opt/apps/docker-20.10.14/ root@k8s-9:/opt/apps/.

2. Create directories and configuration files

  • On all 3 hosts:

```
src]# mkdir -p /data/docker /etc/docker
```

k8s-7

```
]# cat /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["harbor.oss.com:32105"],
  "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
  "bip": "172.26.7.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
```

k8s-8

```
]# cat /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["harbor.oss.com:32105"],
  "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
  "bip": "172.26.8.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
```

k8s-9

```
]# cat /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["harbor.oss.com:32105"],
  "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
  "bip": "172.26.9.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
```

### 3. Create the systemd unit file

```
]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
```

### 4. Start and check

```
]# systemctl daemon-reload
]# systemctl start docker; systemctl enable docker
```

```
]# docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.14
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 3df54a852345ae127d1fa3092b95168e4a88e2f8
 runc version: v1.0.3-0-gf46b6ba2
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-1160.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 972.4MiB
 Name: k8s-7
 ID: XRJ2:2FSH:7WGZ:GBPQ:FNNI:GLOG:7MDJ:7FRC:DP3F:HFBB:4DKS:5BAA
 Docker Root Dir: /data/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  harbor.oss.com:32105
  127.0.0.0/8
 Registry Mirrors:
  https://5gce61mx.mirror.aliyuncs.com/
 Live Restore Enabled: true
 Product License: Community Engine
]# docker version
Client:
 Version:           20.10.14
 API version:       1.41
 Go version:        go1.16.15
 Git commit:        a224086
 Built:             Thu Mar 24 01:45:09 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.14
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.15
  Git commit:       87a90dc
  Built:            Thu Mar 24 01:49:54 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.5.11
  GitCommit:        3df54a852345ae127d1fa3092b95168e4a88e2f8
 runc:
  Version:          1.0.3
  GitCommit:        v1.0.3-0-gf46b6ba2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:bf:10:37 brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.7/24 brd 192.168.26.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:febf:1037/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:04:3b:65:ca brd ff:ff:ff:ff:ff:ff
    inet 172.26.7.1/24 brd 172.26.7.255 scope global docker0
       valid_lft forever preferred_lft forever
]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:42:83:d2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.8/24 brd 192.168.26.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe42:83d2/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:80:6c:d8:e6 brd ff:ff:ff:ff:ff:ff
    inet 172.26.8.1/24 brd 172.26.8.255 scope global docker0
       valid_lft forever preferred_lft forever
~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:a7:5d:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.9/24 brd 192.168.26.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea7:5d3d/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:8f:f4:e1:d8 brd ff:ff:ff:ff:ff:ff
    inet 172.26.9.1/24 brd 172.26.9.255 scope global docker0
       valid_lft forever preferred_lft forever
```

  • Pull an image
    ~]# docker pull centos
    Using default tag: latest
    latest: Pulling from library/centos
    a1d0c7532777: Pull complete
    Digest: sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177
    Status: Downloaded newer image for centos:latest
    docker.io/library/centos:latest
    [root@k8s-9 ~]# docker images
    REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
    centos       latest    5d0da3dc9764   7 months ago   231MB
    
```
]# docker run -i -t --name test centos /bin/bash
[root@47b8a504d9bf /]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if5: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:1a:08:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.26.8.2/24 brd 172.26.8.255 scope global eth0
       valid_lft forever preferred_lft forever

]# docker ps -a
CONTAINER ID   IMAGE     COMMAND       CREATED         STATUS                     PORTS     NAMES
47b8a504d9bf   centos    "/bin/bash"   4 minutes ago   Exited (0) 9 seconds ago             test
]# docker rm test
```

## V. Deploy flannel
### 1. Confirm etcd is healthy
### 2. Check the routes

k8s-7:
```
]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
172.26.7.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32
```

k8s-8:
```
]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
172.26.8.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32
```

k8s-9:
```
]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
172.26.9.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32
```

Remove any other leftover routes, e.g.: `route del -net 172.26.57.0 netmask 255.255.255.0 gw 192.168.26.9`
### 3. Write the flannel network config into etcd

- Run on any one etcd node
```properties
ETCDCTL_API=2 etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/etcd.pem --key-file=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.26.7:2379,https://192.168.26.8:2379,https://192.168.26.9:2379" set /coreos.com/network/config '{"Network": "172.26.0.0/16", "Backend": {"Type": "host-gw"}}'
```

  • Check

    ]# ETCDCTL_API=2 etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/etcd.pem --key-file=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.26.7:2379,https://192.168.26.8:2379,https://192.168.26.9:2379" get /coreos.com/network/config
    {"Network": "172.26.0.0/16", "Backend": {"Type": "host-gw"}}
    
  • Delete the configuration (when you need to start over)

    ]# ETCDCTL_API=2 etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/etcd.pem --key-file=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.26.7:2379,https://192.168.26.8:2379,https://192.168.26.9:2379" rm /coreos.com/network/config
    ## check the parent key; if other data has been generated under it, delete that as well
    ]# ETCDCTL_API=2 etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/etcd.pem --key-file=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.26.7:2379,https://192.168.26.8:2379,https://192.168.26.9:2379" ls /coreos.com/network
    ...
    ]# ETCDCTL_API=2 etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/etcd.pem --key-file=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.26.7:2379,https://192.168.26.8:2379,https://192.168.26.9:2379" rmdir /coreos.com/network/subnets
    

    4. Unpack flannel and create symlinks

    k8s-7:

]# cd /opt/src
src]# mkdir /opt/apps/flannel-v0.17.0-linux-amd64
src]# tar zxvf flannel-v0.17.0-linux-amd64.tar.gz -C /opt/apps/flannel-v0.17.0-linux-amd64
src]# scp -r /opt/apps/flannel-v0.17.0-linux-amd64 root@k8s-8:/opt/apps/.
src]# scp -r /opt/apps/flannel-v0.17.0-linux-amd64 root@k8s-9:/opt/apps/.
src]# cd /opt/apps/flannel-v0.17.0-linux-amd64
## note: create the symlinks on k8s-8 and k8s-9 as well
flannel-v0.17.0-linux-amd64]# ln -s /opt/apps/flannel-v0.17.0-linux-amd64/flanneld /usr/bin/flanneld
flannel-v0.17.0-linux-amd64]# ln -s /opt/apps/flannel-v0.17.0-linux-amd64/mk-docker-opts.sh /usr/bin/mk-docker-opts.sh

5. Add the configuration files

Because docker on each node uses a different container subnet, use the following per-node configuration:

]# vi /opt/flannel/etc/subnet.env

k8s-7:
FLANNEL_NETWORK=172.26.0.0/16
FLANNEL_SUBNET=172.26.7.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false

k8s-8:
FLANNEL_NETWORK=172.26.0.0/16
FLANNEL_SUBNET=172.26.8.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false

k8s-9:
FLANNEL_NETWORK=172.26.0.0/16
FLANNEL_SUBNET=172.26.9.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
cat > /opt/flannel/etc/kube-flanneld.conf << EOF
KUBE_FLANNELD_OPTS="--public-ip=192.168.26.7 \\
--etcd-endpoints=https://192.168.26.7:2379,https://192.168.26.8:2379,https://192.168.26.9:2379 \\
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \\
--etcd-certfile=/opt/etcd/ssl/etcd.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--kube-subnet-mgr=false \\
--iface=ens32 \\
--iptables-resync=5 \\
--subnet-file=/opt/flannel/etc/subnet.env \\
--healthz-port=2401"
EOF

Note the per-node value of --public-ip: k8s-7 uses `--public-ip=192.168.26.7`, k8s-8 uses `--public-ip=192.168.26.8`, k8s-9 uses `--public-ip=192.168.26.9` (a hedged substitution sketch follows).
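As mentioned above, one hedged way to avoid editing kube-flanneld.conf by hand on every node (an illustration, not the original procedure) is to substitute this host's IP from /tmp/env after copying the file from k8s-7:

```shell
# Replace the --public-ip value with this node's own IP (node_ip comes from /tmp/env).
source /tmp/env
sed -i "s/--public-ip=[0-9.]*/--public-ip=${node_ip}/" /opt/flannel/etc/kube-flanneld.conf
grep 'public-ip' /opt/flannel/etc/kube-flanneld.conf    # verify the substitution
```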

6. Add the systemd unit file

cat > /usr/lib/systemd/system/kube-flanneld.service << EOF
[Unit]
Description=Kubernetes flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
EnvironmentFile=/opt/flannel/etc/kube-flanneld.conf
ExecStartPost=/usr/bin/mk-docker-opts.sh
ExecStart=/usr/bin/flanneld \$KUBE_FLANNELD_OPTS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

7. Start and check

]# systemctl daemon-reload && systemctl start kube-flanneld && systemctl enable kube-flanneld
]# systemctl status kube-flanneld
  • k8s-7

    ]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
    172.26.7.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
    172.26.8.0      192.168.26.8    255.255.255.0   UG    0      0        0 ens32
    172.26.9.0      192.168.26.9    255.255.255.0   UG    0      0        0 ens32
    192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32
    
  • k8s-8

    ]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
    172.26.7.0      192.168.26.7    255.255.255.0   UG    0      0        0 ens32
    172.26.8.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
    172.26.9.0      192.168.26.9    255.255.255.0   UG    0      0        0 ens32
    192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32
    
  • k8s-9

    ]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
    172.26.7.0      192.168.26.7    255.255.255.0   UG    0      0        0 ens32
    172.26.8.0      192.168.26.8    255.255.255.0   UG    0      0        0 ens32
    172.26.9.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
    192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32
    

    8. Verify container-to-container and container-to-host access

    8.1 Cross-host container access: from the k8s-7 container to containers on the other hosts

```properties
[root@k8s-7 ~]# docker run -i -t --name node1 centos /bin/bash
[root@aef133d543aa /]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:1a:07:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.26.7.2/24 brd 172.26.7.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@aef133d543aa /]#

## cross-host container access: from this container to containers on the other hosts
[root@aef133d543aa /]# ping 172.26.8.2 -c 3    ## ping the container on k8s-8
PING 172.26.8.2 (172.26.8.2) 56(84) bytes of data.
64 bytes from 172.26.8.2: icmp_seq=1 ttl=62 time=0.786 ms
64 bytes from 172.26.8.2: icmp_seq=2 ttl=62 time=1.08 ms
64 bytes from 172.26.8.2: icmp_seq=3 ttl=62 time=0.811 ms

--- 172.26.8.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2008ms
rtt min/avg/max/mdev = 0.786/0.893/1.084/0.139 ms
[root@aef133d543aa /]# ping 172.26.9.2 -c 3    ## ping the container on k8s-9
PING 172.26.9.2 (172.26.9.2) 56(84) bytes of data.
64 bytes from 172.26.9.2: icmp_seq=1 ttl=62 time=0.742 ms
64 bytes from 172.26.9.2: icmp_seq=2 ttl=62 time=1.13 ms
64 bytes from 172.26.9.2: icmp_seq=3 ttl=62 time=0.836 ms

--- 172.26.9.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 0.742/0.903/1.133/0.170 ms
[root@aef133d543aa /]#
```

#### 8.2 Container-to-host access: from the k8s-7 container to all hosts
```properties
## container-to-host access: from the container to the hosts
[root@aef133d543aa /]# ping 192.168.26.7 -c 3
PING 192.168.26.7 (192.168.26.7) 56(84) bytes of data.
64 bytes from 192.168.26.7: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 192.168.26.7: icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from 192.168.26.7: icmp_seq=3 ttl=64 time=0.111 ms

--- 192.168.26.7 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.052/0.078/0.111/0.026 ms
[root@aef133d543aa /]# ping 192.168.26.8 -c 3
PING 192.168.26.8 (192.168.26.8) 56(84) bytes of data.
64 bytes from 192.168.26.8: icmp_seq=1 ttl=63 time=0.744 ms
64 bytes from 192.168.26.8: icmp_seq=2 ttl=63 time=0.999 ms
64 bytes from 192.168.26.8: icmp_seq=3 ttl=63 time=0.925 ms

--- 192.168.26.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 0.744/0.889/0.999/0.109 ms
[root@aef133d543aa /]# ping 192.168.26.9 -c 3
PING 192.168.26.9 (192.168.26.9) 56(84) bytes of data.
64 bytes from 192.168.26.9: icmp_seq=1 ttl=63 time=0.273 ms
64 bytes from 192.168.26.9: icmp_seq=2 ttl=63 time=0.632 ms
64 bytes from 192.168.26.9: icmp_seq=3 ttl=63 time=0.684 ms

--- 192.168.26.9 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.273/0.529/0.684/0.184 ms
[root@aef133d543aa /]#

```

8.3 Host-to-container access: from the k8s-7 host to the containers on all hosts

```properties
## host-to-container access: from the host to the containers
[root@k8s-7 ~]# ping 172.26.9.2 -c 3
PING 172.26.9.2 (172.26.9.2) 56(84) bytes of data.
64 bytes from 172.26.9.2: icmp_seq=1 ttl=63 time=0.550 ms
64 bytes from 172.26.9.2: icmp_seq=2 ttl=63 time=0.811 ms
64 bytes from 172.26.9.2: icmp_seq=3 ttl=63 time=0.808 ms

--- 172.26.9.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.550/0.723/0.811/0.122 ms
[root@k8s-7 ~]# ping 172.26.8.2 -c 3
PING 172.26.8.2 (172.26.8.2) 56(84) bytes of data.
64 bytes from 172.26.8.2: icmp_seq=1 ttl=63 time=0.650 ms
64 bytes from 172.26.8.2: icmp_seq=2 ttl=63 time=0.900 ms
64 bytes from 172.26.8.2: icmp_seq=3 ttl=63 time=0.682 ms

--- 172.26.8.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.650/0.744/0.900/0.111 ms
[root@k8s-7 ~]# ping 172.26.7.2 -c 3
PING 172.26.7.2 (172.26.7.2) 56(84) bytes of data.
64 bytes from 172.26.7.2: icmp_seq=1 ttl=64 time=0.105 ms
64 bytes from 172.26.7.2: icmp_seq=2 ttl=64 time=0.058 ms
64 bytes from 172.26.7.2: icmp_seq=3 ttl=64 time=0.168 ms

--- 172.26.7.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.058/0.110/0.168/0.045 ms

```

8.4 Access Baidu from a container

```properties
## access Baidu from inside the container
[root@aef133d543aa /]# ping www.baidu.com -c 3
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from www.baidu.com (14.215.177.39): icmp_seq=1 ttl=127 time=10.2 ms
64 bytes from www.baidu.com (14.215.177.39): icmp_seq=2 ttl=127 time=10.4 ms
64 bytes from www.baidu.com (14.215.177.39): icmp_seq=3 ttl=127 time=9.44 ms

--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 9.444/9.998/10.396/0.404 ms

```

8.5 The same tests on k8s-8 and k8s-9

  • k8s-8

```properties
[root@k8s-8 ~]# docker run -i -t --name node2 centos /bin/bash
[root@aed6f2cc2da9 /]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:1a:08:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.26.8.2/24 brd 172.26.8.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@aed6f2cc2da9 /]#
[root@aed6f2cc2da9 /]# ping 172.26.7.2 -c 3    ## ping the container on k8s-7
PING 172.26.7.2 (172.26.7.2) 56(84) bytes of data.
64 bytes from 172.26.7.2: icmp_seq=1 ttl=62 time=0.572 ms
64 bytes from 172.26.7.2: icmp_seq=2 ttl=62 time=0.859 ms
64 bytes from 172.26.7.2: icmp_seq=3 ttl=62 time=0.828 ms

--- 172.26.7.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 0.572/0.753/0.859/0.128 ms
[root@aed6f2cc2da9 /]# ping 172.26.9.2 -c 3    ## ping the container on k8s-9
PING 172.26.9.2 (172.26.9.2) 56(84) bytes of data.
64 bytes from 172.26.9.2: icmp_seq=1 ttl=62 time=0.629 ms
64 bytes from 172.26.9.2: icmp_seq=2 ttl=62 time=1.84 ms
64 bytes from 172.26.9.2: icmp_seq=3 ttl=62 time=1.10 ms

--- 172.26.9.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 0.629/1.189/1.841/0.499 ms
[root@aed6f2cc2da9 /]#
```

```properties
[root@aed6f2cc2da9 /]# ping 192.168.26.7 -c 3
PING 192.168.26.7 (192.168.26.7) 56(84) bytes of data.
64 bytes from 192.168.26.7: icmp_seq=1 ttl=63 time=0.486 ms
64 bytes from 192.168.26.7: icmp_seq=2 ttl=63 time=0.759 ms
64 bytes from 192.168.26.7: icmp_seq=3 ttl=63 time=0.400 ms

--- 192.168.26.7 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 0.400/0.548/0.759/0.154 ms
[root@aed6f2cc2da9 /]# ping 192.168.26.8 -c 3
PING 192.168.26.8 (192.168.26.8) 56(84) bytes of data.
64 bytes from 192.168.26.8: icmp_seq=1 ttl=64 time=0.043 ms
64 bytes from 192.168.26.8: icmp_seq=2 ttl=64 time=0.062 ms
64 bytes from 192.168.26.8: icmp_seq=3 ttl=64 time=0.137 ms

--- 192.168.26.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.043/0.080/0.137/0.041 ms
[root@aed6f2cc2da9 /]# ping 192.168.26.9 -c 3
PING 192.168.26.9 (192.168.26.9) 56(84) bytes of data.
64 bytes from 192.168.26.9: icmp_seq=1 ttl=63 time=0.604 ms
64 bytes from 192.168.26.9: icmp_seq=2 ttl=63 time=0.854 ms
64 bytes from 192.168.26.9: icmp_seq=3 ttl=63 time=0.662 ms

--- 192.168.26.9 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 0.604/0.706/0.854/0.111 ms
```

  • k8s-9

```properties
[root@k8s-9 ~]# docker run -i -t --name node3 centos /bin/bash
[root@33d5e4bf410c /]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:1a:09:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.26.9.2/24 brd 172.26.9.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@33d5e4bf410c /]#
[root@33d5e4bf410c /]# ping 172.26.7.2 -c 3    ## ping the container on k8s-7
PING 172.26.7.2 (172.26.7.2) 56(84) bytes of data.
64 bytes from 172.26.7.2: icmp_seq=1 ttl=62 time=0.517 ms
64 bytes from 172.26.7.2: icmp_seq=2 ttl=62 time=1.11 ms
64 bytes from 172.26.7.2: icmp_seq=3 ttl=62 time=0.995 ms

--- 172.26.7.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 0.517/0.874/1.111/0.258 ms
[root@33d5e4bf410c /]# ping 172.26.8.2 -c 3    ## ping the container on k8s-8
PING 172.26.8.2 (172.26.8.2) 56(84) bytes of data.
64 bytes from 172.26.8.2: icmp_seq=1 ttl=62 time=0.637 ms
64 bytes from 172.26.8.2: icmp_seq=2 ttl=62 time=0.929 ms
64 bytes from 172.26.8.2: icmp_seq=3 ttl=62 time=0.908 ms

--- 172.26.8.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 0.637/0.824/0.929/0.137 ms
[root@33d5e4bf410c /]#
```

```properties
[root@33d5e4bf410c /]# ping 192.168.26.7 -c 3
PING 192.168.26.7 (192.168.26.7) 56(84) bytes of data.
64 bytes from 192.168.26.7: icmp_seq=1 ttl=63 time=0.555 ms
64 bytes from 192.168.26.7: icmp_seq=2 ttl=63 time=0.508 ms
64 bytes from 192.168.26.7: icmp_seq=3 ttl=63 time=0.448 ms

--- 192.168.26.7 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.448/0.503/0.555/0.050 ms
[root@33d5e4bf410c /]# ping 192.168.26.8 -c 3
PING 192.168.26.8 (192.168.26.8) 56(84) bytes of data.
64 bytes from 192.168.26.8: icmp_seq=1 ttl=63 time=0.598 ms
64 bytes from 192.168.26.8: icmp_seq=2 ttl=63 time=0.904 ms
64 bytes from 192.168.26.8: icmp_seq=3 ttl=63 time=0.998 ms

--- 192.168.26.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 0.598/0.833/0.998/0.172 ms
[root@33d5e4bf410c /]# ping 192.168.26.9 -c 3
PING 192.168.26.9 (192.168.26.9) 56(84) bytes of data.
64 bytes from 192.168.26.9: icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from 192.168.26.9: icmp_seq=2 ttl=64 time=0.058 ms
64 bytes from 192.168.26.9: icmp_seq=3 ttl=64 time=0.235 ms

--- 192.168.26.9 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.058/0.133/0.235/0.075 ms

```

VI. Deploy the Kubernetes core components

  • Unpack the package

    ## k8s-7:
    ]# cd /opt/src
    [root@k8s-7 src]# tar zxvf kubernetes-server-linux-amd64.tar.gz
    [root@k8s-7 src]# mkdir /opt/apps/kubernetes1.23.6
    [root@k8s-7 src]# mv /opt/src/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,kubectl} /opt/apps/kubernetes1.23.6/.
    [root@k8s-7 src]# ls /opt/apps/kubernetes1.23.6 -l
    总用量 505232
    -rwxr-xr-x 1 root root 131301376 4月  14 17:00 kube-apiserver
    -rwxr-xr-x 1 root root 121122816 4月  14 17:00 kube-controller-manager
    -rwxr-xr-x 1 root root  46596096 4月  14 17:00 kubectl
    -rwxr-xr-x 1 root root 124542016 4月  14 17:00 kubelet
    -rwxr-xr-x 1 root root  44167168 4月  14 17:00 kube-proxy
    -rwxr-xr-x 1 root root  49627136 4月  14 17:00 kube-scheduler
    
  • Copy to k8s-8 and k8s-9

    [root@k8s-7 src]# scp -r /opt/apps/kubernetes1.23.6 root@k8s-8:/opt/apps/.
    [root@k8s-7 src]# scp -r /opt/apps/kubernetes1.23.6 root@k8s-9:/opt/apps/.
    
  • Create the symlinks on k8s-7, k8s-8 and k8s-9

    ln -s /opt/apps/kubernetes1.23.6/kube-apiserver /opt/bin/kube-apiserver
    ln -s /opt/apps/kubernetes1.23.6/kube-controller-manager /opt/bin/kube-controller-manager
    ln -s /opt/apps/kubernetes1.23.6/kubelet /opt/bin/kubelet
    ln -s /opt/apps/kubernetes1.23.6/kube-proxy /opt/bin/kube-proxy
    ln -s /opt/apps/kubernetes1.23.6/kube-scheduler /opt/bin/kube-scheduler
    ln -s /opt/apps/kubernetes1.23.6/kubectl /usr/bin/kubectl
    
  • Create directories

    mkdir -p /opt/kubernetes/ssl
    mkdir -p /var/log/kubernetes
    

    1. kube-apiserver

    1.1 Create and distribute the certificates

  • Create kube-apiserver-csr.json

    ]# cd /opt/certs/
    certs]# cp etcd-csr.json kube-apiserver-csr.json
    cat kube-apiserver-csr.json
    {
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "192.168.26.7",
      "192.168.26.8",
      "192.168.26.9",
      "10.168.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "k8s",
        "OU": "System"
      }
    ]
    }
    

    10.168.0.1: the cluster (Service) IP of the kubernetes service, i.e. the first address of --service-cluster-ip-range

  • Generate the certificate

    certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
    2022/05/01 16:31:52 [INFO] generate received request
    2022/05/01 16:31:52 [INFO] received CSR
    2022/05/01 16:31:52 [INFO] generating key: rsa-2048
    2022/05/01 16:31:52 [INFO] encoded CSR
    2022/05/01 16:31:52 [INFO] signed certificate with serial number 689818571768179104575825291535796442205188252767
    
  • Distribute the certificates

    certs]# cp /opt/certs/{kube-apiserver.pem,kube-apiserver-key.pem,ca-key.pem,ca.pem} /opt/kubernetes/ssl/.
    certs]# scp /opt/kubernetes/ssl/{kube-apiserver.pem,kube-apiserver-key.pem,ca-key.pem,ca.pem} root@k8s-8:/opt/kubernetes/ssl/.
    certs]# scp /opt/kubernetes/ssl/{kube-apiserver.pem,kube-apiserver-key.pem,ca-key.pem,ca.pem} root@k8s-9:/opt/kubernetes/ssl/.
    

    1.2 Create token.csv

```
~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
ecc4e710c716c75aeaefcfbb29e31b56
~]# cat > /opt/kubernetes/cfg/token.csv << EOF
ecc4e710c716c75aeaefcfbb29e31b56,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

]# scp /opt/kubernetes/cfg/token.csv root@k8s-8:/opt/kubernetes/cfg/token.csv
]# scp /opt/kubernetes/cfg/token.csv root@k8s-9:/opt/kubernetes/cfg/token.csv
```

The fields of token.csv are: token, user name, user UID, group.

#### 1.3 Create the systemd unit file

```
source /tmp/env
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
ExecStart=/opt/bin/kube-apiserver \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --anonymous-auth=false \\
  --bind-address=${etcd_node1} \\
  --secure-port=6443 \\
  --advertise-address=${etcd_node1} \\
  --insecure-port=0 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-bootstrap-token-auth=true \\
  --token-auth-file=/opt/kubernetes/cfg/token.csv \\
  --service-cluster-ip-range=10.168.0.0/16 \\
  --service-node-port-range=30000-50000 \\
  --tls-cert-file=/opt/kubernetes/ssl/kube-apiserver.pem \\
  --tls-private-key-file=/opt/kubernetes/ssl/kube-apiserver-key.pem \\
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \\
  --kubelet-client-certificate=/opt/kubernetes/ssl/kube-apiserver.pem \\
  --kubelet-client-key=/opt/kubernetes/ssl/kube-apiserver-key.pem \\
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  --service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  --service-account-issuer=api \\
  --etcd-cafile=/opt/etcd/ssl/ca.pem \\
  --etcd-certfile=/opt/etcd/ssl/etcd.pem \\
  --etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \\
  --etcd-servers=https://${etcd_node1}:2379,https://${etcd_node2}:2379,https://${etcd_node3}:2379 \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/kube-apiserver-audit.log \\
  --event-ttl=1h \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=4
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Note: on k8s-8 set --bind-address=${etcd_node2} and --advertise-address=${etcd_node2}; on k8s-9 set --bind-address=${etcd_node3} and --advertise-address=${etcd_node3}. This yields multiple kube-apiserver instances.

#### 1.4 Start and check

```
systemctl daemon-reload ; systemctl start kube-apiserver.service
systemctl status kube-apiserver
systemctl enable kube-apiserver
```
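A quick hedged check (not from the original write-up) that the apiserver actually came up, before kubectl is configured in the next step:

```shell
ss -lntp | grep 6443                                   # the apiserver should be listening on the secure port
journalctl -u kube-apiserver --no-pager | tail -n 20   # look for certificate or etcd connection errors
```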

### 2. kubectl
#### 2.1 Create and distribute the certificates

```
certs]# vi admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
```

```
certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

certs]# cp /opt/certs/{admin.pem,admin-key.pem} /opt/kubernetes/ssl/.
certs]# scp /opt/certs/{admin.pem,admin-key.pem} root@k8s-8:/opt/kubernetes/ssl/.
certs]# scp /opt/certs/{admin.pem,admin-key.pem} root@k8s-9:/opt/kubernetes/ssl/.

```

2.2 kubeconfig configuration

kube.config is the configuration file for kubectl; it contains everything needed to access the apiserver: the apiserver address, the CA certificate and the client's own certificate.

cert]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://${etcd_node1}:6443 --kubeconfig=kube.config
Cluster "kubernetes" set.
## on k8s-8 use ${etcd_node2}:6443; on k8s-9 use ${etcd_node3}:6443
cert]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
User "admin" set.

cert]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Context "kubernetes" created.

cert]# kubectl config use-context kubernetes --kubeconfig=kube.config
Switched to context "kubernetes".

cert]# mkdir ~/.kube

cert]# cp kube.config ~/.kube/config

cert]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes 
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created
  • Check the cluster status

```
]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
controller-manager   Unhealthy   Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
etcd-2               Healthy     {"health":"true","reason":""}
etcd-1               Healthy     {"health":"true","reason":""}
etcd-0               Healthy     {"health":"true","reason":""}

]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.168.0.1                 443/TCP   21m
```

#### 2.3 Configure kubectl command completion

echo 'source <(kubectl completion bash)' >> ~/.bashrc

Prerequisite: the bash-completion.noarch package must be installed first.

- yum list bash*
- yum install bash-completion.noarch -y   ## does not take effect immediately; log in again or run: source ~/.bashrc
### 3. kube-controller-manager
#### 3.1 Create and distribute the certificates

cat kube-controller-manager-csr.json

```
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "192.168.26.7",
    "192.168.26.8",
    "192.168.26.9"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
```

```
certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

certs]# ls kube-controller-manager*.pem
kube-controller-manager-key.pem  kube-controller-manager.pem

certs]# cp kube-controller-manager*.pem /opt/kubernetes/ssl/.
certs]# scp kube-controller-manager*.pem root@k8s-8:/opt/kubernetes/ssl/.
certs]# scp kube-controller-manager*.pem root@k8s-9:/opt/kubernetes/ssl/.

```

3.2 Create kube-controller-manager.kubeconfig

cert]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://${etcd_node1}:6443 --kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.
## on k8s-8 use ${etcd_node2}:6443; on k8s-9 use ${etcd_node3}:6443
cert]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set.

cert]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager" created.

cert]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager".

]# cp kube-controller-manager.kubeconfig /opt/kubernetes/.

3.3 Create the systemd unit file

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/opt/bin/kube-controller-manager \\
  --secure-port=10257 \\
  --bind-address=127.0.0.1 \\
  --kubeconfig=/opt/kubernetes/kube-controller-manager.kubeconfig \\
  --service-cluster-ip-range=10.168.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=172.26.0.0/16 \\
  --experimental-cluster-signing-duration=87600h \\
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \\
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  --leader-elect=true \\
  --feature-gates=RotateKubeletServerCertificate=true \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --tls-cert-file=/opt/kubernetes/ssl/kube-controller-manager.pem \\
  --tls-private-key-file=/opt/kubernetes/ssl/kube-controller-manager-key.pem \\
  --use-service-account-credentials=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

3.4 Start and check

systemctl daemon-reload
systemctl start kube-controller-manager.service
systemctl status kube-controller-manager
systemctl enable kube-controller-manager.service
]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
etcd-1               Healthy     {"health":"true","reason":""}
etcd-0               Healthy     {"health":"true","reason":""}
etcd-2               Healthy     {"health":"true","reason":""}
controller-manager   Healthy     ok

4. kube-scheduler

4.1 Create and distribute the certificates

cat kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.26.7",
    "192.168.26.8",
    "192.168.26.9"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

certs]# ls kube-scheduler*.pem
kube-scheduler-key.pem  kube-scheduler.pem
certs]# cp kube-scheduler*.pem /opt/kubernetes/ssl/.
certs]# scp kube-scheduler*.pem root@k8s-8:/opt/kubernetes/ssl/.
certs]# scp kube-scheduler*.pem root@k8s-9:/opt/kubernetes/ssl/.

4.2 Create kube-scheduler.kubeconfig

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://${etcd_node1}:6443 --kubeconfig=kube-scheduler.kubeconfig
## on k8s-8 use ${etcd_node2}:6443; on k8s-9 use ${etcd_node3}:6443
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

certs]# cp kube-scheduler.kubeconfig /opt/kubernetes/.

4.3 Create the systemd unit file

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/opt/bin/kube-scheduler --address=127.0.0.1 \\
--kubeconfig=/opt/kubernetes/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

4.4 Start and check

# systemctl start kube-scheduler.service
# systemctl status kube-scheduler
# systemctl enable kube-scheduler.service
]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}

5、kubelet

5.1 Create and distribute certificates

cat kubelet-csr.json
{
  "CN": "system:kubelet",
  "hosts": [
    "127.0.0.1",
    "192.168.26.7",
    "192.168.26.8",
    "192.168.26.9"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kubelet",
      "OU": "System"
    }
  ]
}
certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubelet-csr.json | cfssljson -bare kubelet

certs]# ls kubele*pem
kubelet-key.pem  kubelet.pem
certs]# cp kubele*pem /opt/kubernetes/ssl/.
certs]# scp kubele*pem root@k8s-8:/opt/kubernetes/ssl/.
certs]# scp kubele*pem root@k8s-9:/opt/kubernetes/ssl/.

5.2 Create bootstrap.kubeconfig

  • Generate bootstrap.kubeconfig

```shell
]# KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
]# KUBE_APISERVER="https://192.168.26.[7/8/9]:6443"      # apiserver IP:PORT
]# TOKEN="ecc4e710c716c75aeaefcfbb29e31b56"               # must match token.csv

## Generate the kubelet bootstrap kubeconfig file
]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
]# kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
]# kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

## With multiple masters there is no need to copy (each master generates its own)
]# scp /opt/kubernetes/cfg/bootstrap.kubeconfig root@k8s-8:/opt/kubernetes/cfg/bootstrap.kubeconfig
]# scp /opt/kubernetes/cfg/bootstrap.kubeconfig root@k8s-9:/opt/kubernetes/cfg/bootstrap.kubeconfig
```
Grant the kubelet-bootstrap user permission to request certificates:
]# kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
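Before starting kubelet it is worth confirming that the binding exists and that the token embedded in bootstrap.kubeconfig matches token.csv (a sketch; the token.csv path below is an assumption, use whatever path was configured for kube-apiserver):

```shell
]# kubectl get clusterrolebinding kubelet-bootstrap
]# grep token: /opt/kubernetes/cfg/bootstrap.kubeconfig
]# cat /opt/kubernetes/cfg/token.csv    # assumed location of the bootstrap token file
```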

5.3 Create the configuration file

cat <<EOF > /opt/kubernetes/cfg/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
- 10.168.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
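The cgroupDriver: systemd setting above has to match the cgroup driver docker was configured with earlier; a quick check (a sketch):

```shell
]# docker info 2>/dev/null | grep -i "cgroup driver"
## must report: Cgroup Driver: systemd (otherwise adjust docker's native.cgroupdriver or this kubelet setting)
```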

5.4 Create the systemd unit file

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \\
  --config=/opt/kubernetes/cfg/kubelet-config.yaml \\
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
  --hostname-override=192.168.26.7 \\
  --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
  --cert-dir=/opt/kubernetes/ssl \\
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 \\
  --log-dir=/var/log/kubernetes \\
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Note: adjust --hostname-override per node: on k8s-8 use 192.168.26.8, on k8s-9 use 192.168.26.9.
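For example, on k8s-8 the unit file can be adjusted with a one-line sed before reloading systemd (a sketch; use 192.168.26.9 on k8s-9):

```shell
# run on k8s-8
sed -i 's/--hostname-override=192.168.26.7/--hostname-override=192.168.26.8/' /usr/lib/systemd/system/kubelet.service
systemctl daemon-reload
```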

5.5 Start and check

systemctl daemon-reload
systemctl start kubelet
systemctl status kubelet
systemctl enable kubelet
]# kubectl get node
NAME           STATUS     ROLES    AGE   VERSION
192.168.26.7   NotReady   <none>   20s   v1.23.6
## check pending kubelet certificate requests
]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-r6P-91U3klSvxhiMBFIm_ZjLFW5aB5pe5Mh8K3ex0bs   2m49s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
# approve the request
]# kubectl certificate approve node-csr-r6P-91U3klSvxhiMBFIm_ZjLFW5aB5pe5Mh8K3ex0bs
certificatesigningrequest.certificates.k8s.io/node-csr-r6P-91U3klSvxhiMBFIm_ZjLFW5aB5pe5Mh8K3ex0bs approved
]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
192.168.26.7   Ready    <none>   121m   v1.23.6
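When the remaining nodes join they each submit their own CSR; these can be approved one by one as above, or all pending requests at once (a sketch):

```shell
]# kubectl get csr -o name | xargs kubectl certificate approve
]# kubectl get node
```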

6、kube-proxy

6.1 Create and distribute certificates

cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [
    "127.0.0.1",
    "192.168.26.7",
    "192.168.26.8",
    "192.168.26.9"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kubelet",
      "OU": "System"
    }
  ]
}
certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

certs]# ls kube-proxy*.pem
kube-proxy-key.pem  kube-proxy.pem
certs]# cp kube-proxy*.pem /opt/kubernetes/ssl/.
certs]# scp kube-proxy*.pem root@k8s-8:/opt/kubernetes/ssl/.
certs]# scp kube-proxy*.pem root@k8s-9:/opt/kubernetes/ssl/.

6.2 Create kube-proxy.kubeconfig

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://${etcd_node1}:6443 --kubeconfig=kube-proxy.kubeconfig
## on k8s-8 use ${etcd_node2}:6443; on k8s-9 use ${etcd_node3}:6443
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

certs]# cp kube-proxy.kubeconfig /opt/kubernetes/
## with multiple masters, skip the copies below (each master generates its own)
certs]# scp kube-proxy.kubeconfig root@k8s-8:/opt/kubernetes/
certs]# scp kube-proxy.kubeconfig root@k8s-9:/opt/kubernetes/

6.3 Create the configuration file

cat >/opt/kubernetes/cfg/kube-proxy.yaml<<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: "/opt/kubernetes/kube-proxy.kubeconfig"
clusterCIDR: "172.26.0.0/16"
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "${HOSTNAME}"
metricsBindAddress: 0.0.0.0:10249
mode: "ipvs"
EOF

6.4 Create the systemd unit file

cat >  /usr/lib/systemd/system/kube-proxy.service << "EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]

ExecStart=/opt/bin/kube-proxy \
  --config=/opt/kubernetes/cfg/kube-proxy.yaml 
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

6.5 Start and check

systemctl start kube-proxy.service
systemctl status kube-proxy
systemctl enable kube-proxy.service
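Since the configuration sets mode: "ipvs", it is worth confirming that the ip_vs kernel modules are loaded and that kube-proxy actually created IPVS virtual servers (a sketch; ipvsadm needs to be installed, e.g. yum install -y ipvsadm):

```shell
]# lsmod | grep ip_vs
]# ipvsadm -Ln | head
```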

七、Core add-ons

1、Deploy coredns and nodelocaldns

GitHub: https://github.com/kubernetes/kubernetes/tree/v1.23.6/cluster/addons/dns

image.png

1.1 coredns

  • Pull the image

    cfg]# grep image coredns.yaml
          image: k8s.gcr.io/coredns/coredns:v1.8.6
          imagePullPolicy: IfNotPresent
    cfg]# docker pull registry.aliyuncs.com/google_containers/coredns:1.8.6
    
  • Edit the manifest (placeholder substitutions below; a one-pass sed sketch is given at the end of this subsection)

__DNS__DOMAIN__ change to: cluster.local
__DNS__MEMORY__LIMIT__ change to: 150Mi

     hosts {
         192.168.26.7 k8s-7.example.com  k8s-7 etcd_node1
         192.168.26.8 k8s-8.example.com  k8s-8 etcd_node2
         192.168.26.9 k8s-9.example.com  k8s-9 etcd_node3
         fallthrough
     } # optionally add host entries; not added in this lab
     ready
     kubernetes cluster.local 10.168.0.0/16 # could replace the block below; not replaced in this lab
    # kubernetes cluster.local in-addr.arpa ip6.arpa {
    #        pods insecure
    #        fallthrough in-addr.arpa ip6.arpa
    #        ttl 30
    #    }
    ...
        image: registry.aliyuncs.com/google_containers/coredns:1.8.6 # change the image

__DNS__SERVER__ change to: 10.168.0.2

  • Create

    cfg]# kubectl apply -f coredns.yaml
    serviceaccount/coredns created
    clusterrole.rbac.authorization.k8s.io/system:coredns created
    clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
    configmap/coredns created
    deployment.apps/coredns created
    service/kube-dns created
    
    cfg]# kubectl get pods -n kube-system
    NAME                      READY   STATUS    RESTARTS   AGE
    coredns-bf4fcb984-7hwbd   1/1     Running   0          75s
    cfg]# kubectl get svc -A -o wide
    NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
    default       kubernetes   ClusterIP   10.168.0.1   <none>        443/TCP                  5h57m   <none>
    kube-system   kube-dns     ClusterIP   10.168.0.2   <none>        53/UDP,53/TCP,9153/TCP   103s    k8s-app=kube-dns
    cfg]# kubectl get svc -o wide -n=kube-system
    NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE    SELECTOR
    kube-dns   ClusterIP   10.168.0.2   <none>        53/UDP,53/TCP,9153/TCP   112s   k8s-app=kube-dns
    
  • DNS resolution test

```shell
]# docker pull busybox:1.28.4
...
]# kubectl run pod-busybox --image=busybox:1.28.4 -- sh -c "sleep 100000"   # create a test pod
]# kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.

/ # nslookup kubernetes
Server:    10.168.0.2
Address 1: 10.168.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.168.0.1 kubernetes.default.svc.cluster.local
```

```shell
]# docker pull infoblox/dnstools
...
]# kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
dnstools# nslookup www.baidu.com
Server:         10.168.0.2
Address:        10.168.0.2#53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com.
Name:   www.a.shifen.com
Address: 14.215.177.39
Name:   www.a.shifen.com
Address: 14.215.177.38

## create an nginx pod to test DNS resolution
]# kubectl run nginx --image=nginx
]# kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
dnstools      1/1     Running   0          19s
nginx         1/1     Running   0          4m47s
pod-busybox   1/1     Running   0          8m50s
]# kubectl expose pod nginx --port=88 --target-port=80 --type=NodePort
]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.168.0.1      <none>        443/TCP        6h9m
nginx        NodePort    10.168.209.42   <none>        88:39621/TCP   108s
## test resolving the nginx service
dnstools# nslookup nginx
Server:         10.168.0.2
Address:        10.168.0.2#53

Name:   nginx.default.svc.cluster.local
Address: 10.168.209.42   ## resolved successfully
```

With the DNS service deployed, other Services in the same namespace can be resolved by name; the full form is [SVC-name].[namespace].svc.cluster.local.
Verify with busybox:1.28.4 (other busybox versions may fail to resolve):

]# kubectl run -it --rm --restart=Never --image=busybox:1.28.4 busybox
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.168.0.2
Address 1: 10.168.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.168.0.1 kubernetes.default.svc.cluster.local
/ #
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ #
/ #  wget nginx:88
Connecting to nginx:88 (10.168.209.42:88)
index.html           100% |******************************************************************************************************|   615   0:00:00 ETA
/ # ls
bin         dev         etc         home        index.html  proc        root        sys         tmp         usr         var
/ # cat index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ # rm index.html
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
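The placeholder substitutions listed under "Edit the manifest" above can also be applied to the original coredns.yaml.base in one pass (a sketch; assumes the base file has been copied into the working directory as coredns.yaml):

```shell
cfg]# cp coredns.yaml.base coredns.yaml
cfg]# sed -i \
  -e 's/__DNS__DOMAIN__/cluster.local/g' \
  -e 's/__DNS__MEMORY__LIMIT__/150Mi/g' \
  -e 's/__DNS__SERVER__/10.168.0.2/g' \
  -e 's#k8s.gcr.io/coredns/coredns:v1.8.6#registry.aliyuncs.com/google_containers/coredns:1.8.6#' \
  coredns.yaml
```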

1.2 nodelocaldns

NodeLocal DNSCache runs as a DaemonSet on every worker node and acts as a DNS caching proxy for the pods on that node. This avoids the iptables DNAT rules and connection tracking otherwise involved and significantly improves DNS performance.

# Set the coredns cluster IP (the kube-dns Service IP, 10.168.0.2 in this cluster)
$ COREDNS_CLUSTER_IP=10.168.0.2
# Download the all-in-one nodelocaldns manifest (addons/nodelocaldns.yaml)
# Replace the cluster IP placeholder
$ sed -i "s/\${COREDNS_CLUSTER_IP}/${COREDNS_CLUSTER_IP}/g" nodelocaldns.yaml
# Create nodelocaldns
$ kubectl apply -f nodelocaldns.yaml
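The upstream addons/nodelocaldns.yaml (appendix 3) uses __PILLAR__ placeholders rather than ${COREDNS_CLUSTER_IP}; a fuller substitution along the lines of the official NodeLocal DNSCache instructions would look like the sketch below (169.254.20.10 is the conventional link-local listen address and is an assumption here; with kube-proxy in ipvs mode the official docs handle the __PILLAR__DNS__SERVER__ placeholder slightly differently):

```shell
kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}')   # 10.168.0.2 in this cluster
domain=cluster.local
localdns=169.254.20.10
sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml
kubectl apply -f nodelocaldns.yaml
```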

… omitted …

2、Deploy dashboard

GitHub: https://github.com/kubernetes/kubernetes/tree/v1.23.6/cluster/addons/dashboard

image.png

  • Pull the images

    ]# grep image dashboard.yaml
            image: kubernetesui/dashboard:v2.0.1
            imagePullPolicy: Always
            image: kubernetesui/metrics-scraper:v1.0.4
    ]# docker pull kubernetesui/dashboard:v2.0.1
    ...
    ]# docker pull kubernetesui/metrics-scraper:v1.0.4
    ...
    
  • Change the image pull policy to: imagePullPolicy: IfNotPresent

  • Change the Service to type NodePort to expose it externally

    ...
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: NodePort
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 31230
      selector:
        k8s-app: kubernetes-dashboard
    ...
    
  • Create

```shell
]# docker pull kubernetesui/metrics-scraper:v1.0.4
...
]# kubectl get po -o wide -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-546d6779cb-r556m   1/1     Running   0          40s   172.26.8.3   192.168.26.8   <none>           <none>
kubernetes-dashboard-d5988f6dd-9jpcm         1/1     Running   0          40s   172.26.9.3   192.168.26.9   <none>           <none>

cfg]# kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-546d6779cb-r556m   1/1     Running   0          50s
pod/kubernetes-dashboard-d5988f6dd-9jpcm         1/1     Running   0          50s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.168.172.92   <none>        8000/TCP        50s
service/kubernetes-dashboard        NodePort    10.168.205.60   <none>        443:31230/TCP   50s
```


- Access: [https://192.168.26.9:31230/](https://192.168.26.9:31230/)

image.png
> Create a service account and bind it to the default cluster-admin cluster role:

```shell
]# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard

]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

]# kubectl -n kubernetes-dashboard get secrets | grep dashboard-admin
dashboard-admin-token-t8kw4        kubernetes.io/service-account-token   3      86s

]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin-token-t8kw4
Name:         dashboard-admin-token-t8kw4
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: e53bb67d-fb95-40b2-9a2d-92f3a460742e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkxRVEJJRTg1RGxiemZlZWlkM24xdVBXOUpyQURnOUd1MmlGWVdRcXFKN2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdDhrdzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTUzYmI2N2QtZmI5NS00MGIyLTlhMmQtOTJmM2E0NjA3NDJlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.ww5Uc6aCfYvc5_oNK51xt1wPW-L8drkFOeJ4ZBCbp40lWrqPNK-ZJP12NjJSStG1mZzJxU3I4pYCInJ71Ih0h5D9jxTVpB95BUTaiHvto9S683P150mITpEUlk5xi_JfONKQQEd_JqoDOA_KwzJodTiZ4c_KSHax4sEciqbCFf1LHtwjDfk2y4hHB2YJ56yg6iDoStt-FTOIPxzzwK-M2v2Ua2Kx1r3a5VCcVCzrvLHBsW51O2Du6sMGNtu1htX1OXUruD6n-a-rf3XDXDiiZQdjdZQ2pq1Py9Qi9t-oOrlpmq_iib4xci1qs5LbcHbqg74A5OXt-0ya8vegAEPnJQ
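## The same token can also be extracted in one step (a sketch; uses the
## dashboard-admin secret name queried above and assumes base64 is available):
]# kubectl -n kubernetes-dashboard get secret dashboard-admin-token-t8kw4 \
     -o jsonpath='{.data.token}' | base64 -d
```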

image.png

3、metrics server

metrics-server is a component that collects resource metrics from the nodes and pods in the cluster.

3.1 Get the manifest

GitHub: https://github.com/kubernetes-sigs/metrics-server

https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

3.2 Pull the image

metrics-server]# grep image components.yaml
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent

metrics-server]# docker pull registry.aliyuncs.com/google_containers/metrics-server:v0.6.1

3.3 Edit components.yaml

  • Add - --kubelet-insecure-tls and change the image name
        containers:
        - args:
          - --cert-dir=/tmp
          - --kubelet-insecure-tls
          - --secure-port=4443
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
          - --kubelet-use-node-status-port
          - --metric-resolution=15s
          image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
          imagePullPolicy: IfNotPresent
    

3.4 Create

    metrics-server]# kubectl apply -f components.yaml
    

3.5 Check

    ]# kubectl get pod -n kube-system | grep metrics
    metrics-server-689945b6f8-smb7j   1/1     Running   0             40s
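A further check is whether the aggregated metrics API has been registered and reports Available (a sketch):

```shell
]# kubectl get apiservice v1beta1.metrics.k8s.io
## AVAILABLE should show True once metrics-server is serving
```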
    
If errors occur, handle them as follows:

Modify the kube-apiserver startup parameters by appending at the end:

#vi /usr/lib/systemd/system/kube-apiserver.service
  --requestheader-allowed-names=aggregator,metrics-server \
  --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --proxy-client-cert-file=/opt/kubernetes/ssl/kube-proxy.pem \
  --proxy-client-key-file=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --enable-aggregator-routing=true \
]# systemctl daemon-reload
]# systemctl restart kube-apiserver.service
]# systemctl status kube-apiserver.service

Modify the kubelet startup parameters by appending at the end: --authentication-token-webhook=true \

# vi /usr/lib/systemd/system/kubelet.service
--authentication-token-webhook=true \
]# systemctl daemon-reload
]# systemctl restart kubelet.service
]# systemctl status kubelet.service

]# kubectl top node
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "system:kube-proxy" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope

The error is an RBAC permission problem; grant the user the required access as follows (reference: https://blog.csdn.net/heian_99/article/details/119305545):

metrics-server]# cat rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: metrics-reader
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["metrics.k8s.io"]
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: system:kube-proxy  # user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: metrics-reader
  apiGroup: rbac.authorization.k8s.io


---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: metrics-reader
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["metrics.k8s.io"]
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-reader
subjects:
- kind: User
  name: system:kube-proxy # user name
  apiGroup: rbac.authorization.k8s.io
metrics-server]# kubectl apply -f rbac.yaml
[root@k8s-9 ~]# kubectl top pod
NAME          CPU(cores)   MEMORY(bytes)
nginx         0m           3Mi
pod-busybox   0m           0Mi
[root@k8s-9 ~]#
[root@k8s-9 ~]# kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
192.168.26.7   357m         17%    1033Mi          28%
192.168.26.8   284m         14%    802Mi           21%
192.168.26.9   272m         13%    806Mi           21%
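Whether the RBAC fix took effect can also be checked directly with impersonation (a sketch):

```shell
]# kubectl auth can-i list nodes.metrics.k8s.io --as=system:kube-proxy
## should now return: yes
```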

image.png

2022/5/2, Guangzhou

Appendix

1、Modified coredns.yaml manifest

# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: registry.aliyuncs.com/google_containers/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 150Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

2、Original coredns.yaml.base manifest

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes __DNS__DOMAIN__ in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: k8s.gcr.io/coredns/coredns:v1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: __DNS__MEMORY__LIMIT__
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: __DNS__SERVER__
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

3、Original nodelocaldns.yaml manifest

# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-upstream
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNSUpstream"
spec:
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  Corefile: |
    __PILLAR__DNS__DOMAIN__:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
        prometheus :9253
        health __PILLAR__LOCAL__DNS__:8080
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . __PILLAR__UPSTREAM__SERVERS__
        prometheus :9253
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
    k8s-app: node-local-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
  selector:
    matchLabels:
      k8s-app: node-local-dns
  template:
    metadata:
      labels:
        k8s-app: node-local-dns
      annotations:
        prometheus.io/port: "9253"
        prometheus.io/scrape: "true"
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: node-local-dns
      hostNetwork: true
      dnsPolicy: Default  # Don't use cluster DNS.
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - effect: "NoExecute"
        operator: "Exists"
      - effect: "NoSchedule"
        operator: "Exists"
      containers:
      - name: node-cache
        image: k8s.gcr.io/dns/k8s-dns-node-cache:1.21.1
        resources:
          requests:
            cpu: 25m
            memory: 5Mi
        args: [ "-localip", "__PILLAR__LOCAL__DNS__,__PILLAR__DNS__SERVER__", "-conf", "/etc/Corefile", "-upstreamsvc", "kube-dns-upstream" ]
        securityContext:
          privileged: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9253
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            host: __PILLAR__LOCAL__DNS__
            path: /health
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /run/xtables.lock
          name: xtables-lock
          readOnly: false
        - name: config-volume
          mountPath: /etc/coredns
        - name: kube-dns-config
          mountPath: /etc/kube-dns
      volumes:
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      - name: config-volume
        configMap:
          name: node-local-dns
          items:
            - key: Corefile
              path: Corefile.base
---
# A headless service is a service with a service IP but instead of load-balancing it will return the IPs of our associated Pods.
# We use this to expose metrics to Prometheus.
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9253"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: node-local-dns
  name: node-local-dns
  namespace: kube-system
spec:
  clusterIP: None
  ports:
  - name: metrics
    port: 9253
    targetPort: 9253
  selector:
    k8s-app: node-local-dns

4、Original dashboard.yaml manifest

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard


---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

5、Modified components.yaml manifest

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100