Background

Server configuration

Node   Internal IP    Public IP        Spec
ren    10.0.4.17      1.15.230.38      4C8G
yan    10.0.4.15      101.34.64.205    4C8G
bai    192.168.0.4    106.12.145.172   2C8G

Software versions

Software   Version
CentOS     7.6
Docker     20.10.7
kubelet    1.20.9
kubeadm    1.20.9
kubectl    1.20.9

Image versions

Image                                 Version
k8s.gcr.io/kube-apiserver             1.20.9
k8s.gcr.io/kube-controller-manager    1.20.9
k8s.gcr.io/kube-scheduler             1.20.9
k8s.gcr.io/kube-proxy                 1.20.9
k8s.gcr.io/pause                      3.2
k8s.gcr.io/etcd                       3.4.13-0
k8s.gcr.io/coredns                    1.7.0

Preparing the base cluster environment

Run on: all nodes. Working directory: /root/

Configure the hostnames and SSH connectivity between the nodes; see:

Passwordless SSH login on CentOS hosts
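
A minimal sketch of that linked procedure, assuming root SSH access and that /etc/hosts on every node already maps the hostnames ren, yan and bai to the IPs above:

  ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
  for host in ren yan bai; do ssh-copy-id root@$host; done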

Load the kernel module automatically on reboot

Temporary setting

  modprobe br_netfilter
  sysctl -p /etc/sysctl.conf

Permanent setting

Check

  lsmod |grep br_netfilter


Create rc.sysinit (note the quoted 'EOF': it keeps $file from being expanded while the file is written)

  cat > /etc/rc.sysinit <<'EOF'
  #!/bin/bash
  for file in /etc/sysconfig/modules/*.modules ; do
    [ -x $file ] && $file
  done
  EOF

Create br_netfilter.modules

  cat > /etc/sysconfig/modules/br_netfilter.modules <<EOF
  modprobe br_netfilter
  EOF

Make br_netfilter.modules executable

  chmod 755 /etc/sysconfig/modules/br_netfilter.modules
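
On a systemd-based CentOS 7, an alternative to the rc.sysinit hook is systemd-modules-load, which loads every module listed under /etc/modules-load.d/ at boot; a one-line sketch:

  echo br_netfilter > /etc/modules-load.d/br_netfilter.conf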

After a reboot, check the loaded modules again

  lsmod |grep br_netfilter


Create the k8s bridge/sysctl config file

  cat > /root/k8s.conf <<EOF
  # let bridged traffic pass through iptables
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  # enable IPv4 forwarding
  net.ipv4.ip_forward = 1
  # disable IPv6
  net.ipv6.conf.all.disable_ipv6=1
  EOF

Copy the file into the system directory and apply it

  cp k8s.conf /etc/sysctl.d/k8s.conf
  sysctl -p /etc/sysctl.d/k8s.conf

Set the time zone

  # set the system time zone to Asia/Shanghai
  timedatectl set-timezone Asia/Shanghai
  # keep the hardware clock in UTC
  timedatectl set-local-rtc 0
  # restart services that depend on the system time
  systemctl restart rsyslog
  systemctl restart crond
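
Running timedatectl with no arguments prints the current time zone and RTC setting, which makes it easy to confirm the change took effect:

  timedatectl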

Disable the mail service

  systemctl stop postfix && systemctl disable postfix

Configure rsyslogd and systemd-journald

  mkdir /var/log/journal # directory for persistent logs
  mkdir /etc/systemd/journald.conf.d


Create the journald config file

  cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
  [Journal]
  # persist logs to disk
  Storage=persistent
  # compress historical logs
  Compress=yes
  SyncIntervalSec=5m
  RateLimitInterval=30s
  RateLimitBurst=1000
  # cap total disk usage at 10G
  SystemMaxUse=10G
  # cap individual log files at 200M
  SystemMaxFileSize=200M
  # keep logs for 2 weeks
  MaxRetentionSec=2week
  # do not forward to syslog
  ForwardToSyslog=no
  EOF

Restart journald to apply the changes

  systemctl restart systemd-journald

ipvs prerequisites

  modprobe br_netfilter

Create the ipvs.modules file

  cat > /etc/sysconfig/modules/ipvs.modules <<EOF
  #!/bin/bash
  modprobe -- ip_vs
  modprobe -- ip_vs_rr
  modprobe -- ip_vs_wrr
  modprobe -- ip_vs_sh
  modprobe -- nf_conntrack_ipv4
  EOF

Make ipvs.modules executable and run it

  chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

Check the loaded modules

  lsmod | grep -e ip_vs -e nf_conntrack_ipv4
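
kube-proxy's ipvs mode also relies on the ipset userland tools, and installing ipvsadm makes the ipvs rule table inspectable. An optional extra step, not part of the original write-up:

  yum install -y ipset ipvsadm
  # once kube-proxy is running in ipvs mode, list the virtual servers:
  ipvsadm -Ln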


Disable the swap partition

Turn off all swap

  swapoff -a

Edit /etc/fstab

  vi /etc/fstab
  # delete or comment out the line: /mnt/swap swap swap defaults 0 0

Edit /etc/sysctl.conf

  echo vm.swappiness=0 >> /etc/sysctl.conf

Apply

  sysctl -p

Verify

  free -lh


Enable IPv4 forwarding

Skipping this step makes kubeadm init on the master fail with: [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

  cat /proc/sys/net/ipv4/ip_forward
  echo "1" > /proc/sys/net/ipv4/ip_forward

Restarting the network service resets this value; the cause is unclear.

  #service network restart
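
Because /etc/sysctl.d/k8s.conf created earlier already contains net.ipv4.ip_forward = 1, reloading all sysctl drop-ins should restore the value after a network restart:

  sysctl --system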

Preparing the container runtime (Docker)

See:
Installing Docker
Adjusting Docker's cgroup driver for Kubernetes

Upload the files

  # upload the release files to /opt/software/docker
  cd /opt/software/docker
  tar xzvf docker-20.10.7.tgz
  chmod +x docker/*
  mv docker/* /usr/local/bin/

Create the systemd unit file (written with > rather than >>, so re-running does not duplicate the unit)

  echo '[Unit]
  Description=Docker Application Container Engine
  Documentation=http://docs.docker.io
  After=network.target
  [Service]
  Environment="PATH=/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
  ExecStart=/usr/local/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
  ExecReload=/bin/kill -s HUP $MAINPID
  Restart=always
  RestartSec=5
  TimeoutSec=0
  LimitNOFILE=infinity
  LimitNPROC=infinity
  LimitCORE=infinity
  Delegate=yes
  KillMode=process
  [Install]
  WantedBy=multi-user.target
  ' > /etc/systemd/system/docker.service

Reload the configuration and start Docker

  cd /usr/local/bin
  # reload the systemd configuration
  systemctl daemon-reload
  # enable at boot
  systemctl enable docker.service
  # start
  systemctl start docker.service
  # restart
  systemctl daemon-reload
  systemctl restart docker

Configure the Docker daemon (registry mirror and cgroup driver)

  # docker daemon configuration
  mkdir -p /etc/docker/
  touch /etc/docker/daemon.json
  cat > /etc/docker/daemon.json <<EOF
  {
    "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn/"],
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  EOF

Restart

  systemctl daemon-reload
  systemctl restart docker

Verify

  docker info
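
It is worth confirming in the docker info output that the daemon.json changes took effect; for example:

  docker info | grep -i cgroup
  # expected to report: Cgroup Driver: systemd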

Installing kubeadm, kubelet and kubectl

Run on: all nodes. Working directory: /root/. Network: online installation via yum

Add the yum repository

  cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  enabled=1
  gpgcheck=0
  repo_gpgcheck=0
  gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
         http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  exclude=kubelet kubeadm kubectl
  EOF


Disable SELinux

Check the current SELinux status

  getenforce
  sestatus

Temporarily disable

  setenforce 0

Permanently disable

  vi /etc/sysconfig/selinux
  # change SELINUX=enforcing to SELINUX=disabled
  # save, then reboot CentOS

Install kubelet, kubeadm and kubectl with yum

  sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes


Enable kubelet at boot

  sudo systemctl enable --now kubelet

Check the kubelet service

  systemctl status kubelet

kubelet now restarts every few seconds: it is stuck in a crash loop, waiting for instructions from kubeadm.
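
The crash loop can be watched in the unit's journal:

  journalctl -u kubelet -f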


Creating a virtual network interface

Run on each node separately, substituting that node's public IP:

Node   Internal IP    Public IP        Spec
ren    10.0.4.17      1.15.230.38      4C8G
yan    10.0.4.15      101.34.64.205    4C8G
bai    192.168.0.4    106.12.145.172   2C8G

Check the current interface information

  ip addr


Node ren

  cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<EOF
  BOOTPROTO=static
  DEVICE=eth0:1
  IPADDR=1.15.230.38
  PREFIX=32
  TYPE=Ethernet
  USERCTL=no
  ONBOOT=yes
  EOF

Node yan

  cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<EOF
  BOOTPROTO=static
  DEVICE=eth0:1
  IPADDR=101.34.64.205
  PREFIX=32
  TYPE=Ethernet
  USERCTL=no
  ONBOOT=yes
  EOF

Node bai

  cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<EOF
  BOOTPROTO=static
  DEVICE=eth0:1
  IPADDR=106.12.145.172
  PREFIX=32
  TYPE=Ethernet
  USERCTL=no
  ONBOOT=yes
  EOF

Restart the network

  systemctl restart network

Check that the new IP took effect

  ip addr


Re-enable IPv4 forwarding

Skipping this step makes kubeadm init on the master fail with: [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

  cat /proc/sys/net/ipv4/ip_forward
  echo "1" > /proc/sys/net/ipv4/ip_forward

Modify the kubelet start-up parameters

Note: this step is essential. Without it, the nodes will still register with the cluster using their internal IPs.

  vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
  # append --node-ip=<public IP> to the ExecStart line

Full file for node ren (the heredoc delimiter is quoted so the $KUBELET_* variables are written literally instead of being expanded by the shell)

  cat > /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
  # Note: This dropin only works with kubeadm and kubelet v1.11+
  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
  # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
  EnvironmentFile=-/etc/sysconfig/kubelet
  ExecStart=
  ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=1.15.230.38
  EOF

Full file for node yan

  cat > /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
  # Note: This dropin only works with kubeadm and kubelet v1.11+
  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
  # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
  EnvironmentFile=-/etc/sysconfig/kubelet
  ExecStart=
  ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=101.34.64.205
  EOF

Full file for node bai

  cat > /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
  # Note: This dropin only works with kubeadm and kubelet v1.11+
  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
  # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
  EnvironmentFile=-/etc/sysconfig/kubelet
  ExecStart=
  ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=106.12.145.172
  EOF
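
After writing the drop-in on each node, systemd must re-read it and kubelet must be restarted for --node-ip to take effect:

  systemctl daemon-reload
  systemctl restart kubelet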

Preparing the kubeadm init environment

Write the kubeadm-config.yaml file, in preparation for initializing the master

Run on: all nodes. Working directory: /root/

Create the config file; note the IPs that must be replaced:

  cat > /root/kubeadm-config.yaml <<EOF
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  kubernetesVersion: v1.20.9
  apiServer:
    certSANs:       # list the hostname, IPs and any VIP of every kube-apiserver node
    - ren           # replace with the hostname
    - 1.15.230.38   # replace with the public IP
    - 10.0.4.17     # replace with the internal IP
    - 10.96.0.1     # do not replace: this is the in-cluster API address, some services depend on it
  controlPlaneEndpoint: 1.15.230.38:6443   # replace with the public IP
  networking:
    podSubnet: 10.244.0.0/16
    serviceSubnet: 10.96.0.0/12
  ---
  # switch the default proxy mode to ipvs
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  kind: KubeProxyConfiguration
  featureGates:
    SupportIPVSProxyMode: true
  mode: ipvs
  EOF

kubeadm init cannot be run yet: the images below must be pulled from Google's registry, which is unreachable from mainland China.

Pull the images in advance

List the required images

  kubeadm config images list

  k8s.gcr.io/kube-apiserver:v1.20.9
  k8s.gcr.io/kube-controller-manager:v1.20.9
  k8s.gcr.io/kube-scheduler:v1.20.9
  k8s.gcr.io/kube-proxy:v1.20.9
  k8s.gcr.io/pause:3.2
  k8s.gcr.io/etcd:3.4.13-0
  k8s.gcr.io/coredns:1.7.0

Write the image-pull script, to be distributed to and run on every node

File path: /root/

The complete file:

pull_k8s_images.sh

Or write it by hand:

  cat >/root/pull_k8s_images.sh << "EOF"
  set -o errexit
  set -o nounset
  set -o pipefail
  # versions to download
  KUBE_VERSION=v1.20.9
  KUBE_PAUSE_VERSION=3.2
  ETCD_VERSION=3.4.13-0
  DNS_VERSION=1.7.0
  # the original registry, unreachable from mainland China
  GCR_URL=k8s.gcr.io
  # the mirror registry to pull from instead (gotok8s also works)
  DOCKERHUB_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
  # image list
  images=(
  kube-proxy:${KUBE_VERSION}
  kube-scheduler:${KUBE_VERSION}
  kube-controller-manager:${KUBE_VERSION}
  kube-apiserver:${KUBE_VERSION}
  pause:${KUBE_PAUSE_VERSION}
  etcd:${ETCD_VERSION}
  coredns:${DNS_VERSION}
  )
  # pull each image from the mirror, re-tag it to its k8s.gcr.io name, then drop the mirror tag
  for imageName in ${images[@]} ; do
    docker pull $DOCKERHUB_URL/$imageName
    docker tag $DOCKERHUB_URL/$imageName $GCR_URL/$imageName
    docker rmi $DOCKERHUB_URL/$imageName
  done
  EOF

Push the script to the other nodes

  scp /root/pull_k8s_images.sh root@yan:/root/
  scp /root/pull_k8s_images.sh root@bai:/root/

Pull the required images on every node

Run on: all nodes. Working directory: /root/

Make the pull script executable

  chmod +x /root/pull_k8s_images.sh

Run the pull script

  bash /root/pull_k8s_images.sh


Verify

  docker images


Initializing the master node

Run on: the master node. Working directory: /root/

On machines with a single core or only 1 GB of memory, append --ignore-preflight-errors=all to the command below, or initialization will fail. Also note: when this step succeeds it prints two important pieces of information; save them.

Run kubeadm init

  kubeadm init --config=kubeadm-config.yaml


During init, the preflight check failed again with:

[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

The ip_forward setting had inexplicably reverted to 0 again; re-enable it as above, re-run kubeadm init, and it succeeds.

The init log output is as follows.

After successful initialization, a kubeconfig file is generated for talking to the API server.

Follow the printed instructions:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  # or, as the root user:
  export KUBECONFIG=/etc/kubernetes/admin.conf

Copy this command; the worker nodes will use it later to join the master:

  kubeadm join 1.15.230.38:6443 --token uifpif.fyr2s4f5gtemqgnn \
      --discovery-token-ca-cert-hash sha256:80accd0bf78574cd8e0df8b3d276e2a8c1453277b510eb02507f8e5a0675676e
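
If the printed hash is ever lost, it can be recomputed on the master from the cluster CA certificate (the standard recipe from the kubeadm documentation):

  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'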

Modify the kube-apiserver parameters

File path: /etc/kubernetes/manifests/kube-apiserver.yaml

The complete file:

kube-apiserver.yaml

  cp kube-apiserver.yaml /etc/kubernetes/manifests/

Or edit it by hand

Make three changes: update the advertise-address.endpoint annotation, add --bind-address, and update --advertise-address.

  vim /etc/kubernetes/manifests/kube-apiserver.yaml

  apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 1.15.230.38:6443
    creationTimestamp: null
    labels:
      component: kube-apiserver
      tier: control-plane
    name: kube-apiserver
    namespace: kube-system
  spec:
    containers:
    - command:
      - kube-apiserver
      - --advertise-address=1.15.230.38
      - --bind-address=0.0.0.0
      - --allow-privileged=true
      - --authorization-mode=Node,RBAC
      - --client-ca-file=/etc/kubernetes/pki/ca.crt
      - --enable-admission-plugins=NodeRestriction
      - --enable-bootstrap-token-auth=true
      - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
      - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
      - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      - --etcd-servers=https://127.0.0.1:2379
      - --insecure-port=0
      - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      - --requestheader-allowed-names=front-proxy-client
      - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      - --requestheader-extra-headers-prefix=X-Remote-Extra-
      - --requestheader-group-headers=X-Remote-Group
      - --requestheader-username-headers=X-Remote-User
      - --secure-port=6443
      - --service-account-issuer=https://kubernetes.default.svc.cluster.local
      - --service-account-key-file=/etc/kubernetes/pki/sa.pub
      - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
      - --service-cluster-ip-range=10.96.0.0/12
      - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
      image: k8s.gcr.io/kube-apiserver:v1.20.9
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 8
        httpGet:
          host: 10.0.4.17
          path: /livez
          port: 6443
          scheme: HTTPS
        initialDelaySeconds: 10
        periodSeconds: 10
        timeoutSeconds: 15
      name: kube-apiserver
      readinessProbe:
        failureThreshold: 3
        httpGet:
          host: 10.0.4.17
          path: /readyz
          port: 6443
          scheme: HTTPS
        periodSeconds: 1
        timeoutSeconds: 15
      resources:
        requests:
          cpu: 250m
      startupProbe:
        failureThreshold: 24
        httpGet:
          host: 10.0.4.17
          path: /livez
          port: 6443
          scheme: HTTPS
        initialDelaySeconds: 10
        periodSeconds: 10
        timeoutSeconds: 15
      volumeMounts:
      - mountPath: /etc/ssl/certs
        name: ca-certs
        readOnly: true
      - mountPath: /etc/pki
        name: etc-pki
        readOnly: true
      - mountPath: /etc/kubernetes/pki
        name: k8s-certs
        readOnly: true
    hostNetwork: true
    priorityClassName: system-node-critical
    volumes:
    - hostPath:
        path: /etc/ssl/certs
        type: DirectoryOrCreate
      name: ca-certs
    - hostPath:
        path: /etc/pki
        type: DirectoryOrCreate
      name: etc-pki
    - hostPath:
        path: /etc/kubernetes/pki
        type: DirectoryOrCreate
      name: k8s-certs
  status: {}

Joining worker nodes to the cluster

Run on: the worker nodes. Working directory: /root/. Be sure to wait until the master's pods are ready before joining any worker.

Run the kubeadm join command

Use the command saved earlier in "Copy this command; the worker nodes will use it later to join the master":

  kubeadm join 1.15.230.38:6443 --token uifpif.fyr2s4f5gtemqgnn \
      --discovery-token-ca-cert-hash sha256:80accd0bf78574cd8e0df8b3d276e2a8c1453277b510eb02507f8e5a0675676e

Check node status

  kubectl get nodes -o wide

The nodes report NotReady because no network plugin has been installed yet.
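
kubectl describe on a NotReady node typically shows the reason in its Ready condition (the node name here is just an example):

  kubectl describe node yan | grep -i networkplugin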


  # watch the pods come up
  kubectl get pod -A -w
  # or
  watch -n 1 kubectl get pod -A
  # check pod placement and status across nodes
  kubectl get pods -o wide --all-namespaces
  # or
  watch -n 1 kubectl get pods -o wide --all-namespaces

Installing the flannel network plugin

Run on: the master node. Working directory: /opt/software/k8s/

Download the flannel YAML manifest

The complete file:

kube-flannel.yml

Or edit it by hand

Download

  cd /root/
  wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Modify

  # Two changes. First, under the flanneld container's args, add:
  args:
  - --public-ip=$(PUBLIC_IP)   # declare the public IP
  - --iface=eth0               # bind flannel to this interface
  # Second, under env, add:
  env:
  - name: PUBLIC_IP            # the environment variable referenced above
    valueFrom:
      fieldRef:
        fieldPath: status.podIP

Deploy flannel


  kubectl apply -f kube-flannel.yml


Check the pod status again and wait for initialization to finish

  # watch the pods come up
  kubectl get pod -A -w
  # or
  watch -n 1 kubectl get pod -A
  # check pod placement and status across nodes
  kubectl get pods -o wide --all-namespaces
  # or
  watch -n 1 kubectl get pods -o wide --all-namespaces


If a pod is unhealthy, investigate:

  kubectl describe pod coredns-74ff55c5b-654lm -n kube-system

Cause: the master has a /run/flannel/subnet.env file but the worker nodes do not, so pod networking fails.

  cat /run/flannel/subnet.env
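
For reference, subnet.env is a small env file; illustrative contents for this cluster's 10.244.0.0/16 pod CIDR (the FLANNEL_SUBNET value is normally per-node):

  FLANNEL_NETWORK=10.244.0.0/16
  FLANNEL_SUBNET=10.244.0.1/24
  FLANNEL_MTU=1450
  FLANNEL_IPMASQ=true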

Fix: copy the file from the master to the workers

  scp /run/flannel/subnet.env root@yan:/run/flannel/
  scp /run/flannel/subnet.env root@bai:/run/flannel/

Delete the failing pods

  kubectl delete pod coredns-74ff55c5b-654lm -n kube-system
  kubectl delete pod coredns-74ff55c5b-fxg87 -n kube-system

Recreate the flannel pods

  kubectl replace --force -f kube-flannel.yml

Again: wait until the master's pods are ready before joining workers.

If the cluster token has expired and a new worker needs to join, regenerate it

Run on the master. For a highly-available deployment, this is also the step where the corresponding control-plane join command would be used.

  kubeadm token create --print-join-command


Check that the network is connected

  # check that all pods are Ready
  kubectl get pods -o wide --all-namespaces
  ...
  # create a test pod by hand
  kubectl create deployment nginx --image=nginx
  # find the pod's IP
  kubectl get pods -o wide
  # from the master or any other node, ping this IP to confirm connectivity


Deploying the dashboard

Run on: the master node. Working directory: /root/

Install from YAML

Kubernetes' official web UI:
https://github.com/kubernetes/dashboard

  kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

If the download fails, install offline
The complete file:
dashboard.yaml
Or write it by hand:

  # Copyright 2017 The Kubernetes Authors.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  # http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.

  apiVersion: v1
  kind: Namespace
  metadata:
    name: kubernetes-dashboard

  ---

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

  ---

  kind: Service
  apiVersion: v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
  spec:
    ports:
      - port: 443
        targetPort: 8443
    selector:
      k8s-app: kubernetes-dashboard

  ---

  apiVersion: v1
  kind: Secret
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-certs
    namespace: kubernetes-dashboard
  type: Opaque

  ---

  apiVersion: v1
  kind: Secret
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-csrf
    namespace: kubernetes-dashboard
  type: Opaque
  data:
    csrf: ""

  ---

  apiVersion: v1
  kind: Secret
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-key-holder
    namespace: kubernetes-dashboard
  type: Opaque

  ---

  kind: ConfigMap
  apiVersion: v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-settings
    namespace: kubernetes-dashboard

  ---

  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
  rules:
    # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
    - apiGroups: [""]
      resources: ["secrets"]
      resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
      verbs: ["get", "update", "delete"]
      # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
    - apiGroups: [""]
      resources: ["configmaps"]
      resourceNames: ["kubernetes-dashboard-settings"]
      verbs: ["get", "update"]
      # Allow Dashboard to get metrics.
    - apiGroups: [""]
      resources: ["services"]
      resourceNames: ["heapster", "dashboard-metrics-scraper"]
      verbs: ["proxy"]
    - apiGroups: [""]
      resources: ["services/proxy"]
      resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
      verbs: ["get"]

  ---

  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
  rules:
    # Allow Metrics Scraper to get metrics from the Metrics server
    - apiGroups: ["metrics.k8s.io"]
      resources: ["pods", "nodes"]
      verbs: ["get", "list", "watch"]

  ---

  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: kubernetes-dashboard
  subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard

  ---

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: kubernetes-dashboard
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: kubernetes-dashboard
  subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard

  ---

  kind: Deployment
  apiVersion: apps/v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        k8s-app: kubernetes-dashboard
    template:
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
      spec:
        containers:
          - name: kubernetes-dashboard
            image: kubernetesui/dashboard:v2.3.1
            imagePullPolicy: Always
            ports:
              - containerPort: 8443
                protocol: TCP
            args:
              - --auto-generate-certificates
              - --namespace=kubernetes-dashboard
              # Uncomment the following line to manually specify Kubernetes API server Host
              # If not specified, Dashboard will attempt to auto discover the API server and connect
              # to it. Uncomment only if the default does not work.
              # - --apiserver-host=http://my-address:port
            volumeMounts:
              - name: kubernetes-dashboard-certs
                mountPath: /certs
                # Create on-disk volume to store exec logs
              - mountPath: /tmp
                name: tmp-volume
            livenessProbe:
              httpGet:
                scheme: HTTPS
                path: /
                port: 8443
              initialDelaySeconds: 30
              timeoutSeconds: 30
            securityContext:
              allowPrivilegeEscalation: false
              readOnlyRootFilesystem: true
              runAsUser: 1001
              runAsGroup: 2001
        volumes:
          - name: kubernetes-dashboard-certs
            secret:
              secretName: kubernetes-dashboard-certs
          - name: tmp-volume
            emptyDir: {}
        serviceAccountName: kubernetes-dashboard
        nodeSelector:
          "kubernetes.io/os": linux
        # Comment the following tolerations if Dashboard must not be deployed on master
        tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule

  ---

  kind: Service
  apiVersion: v1
  metadata:
    labels:
      k8s-app: dashboard-metrics-scraper
    name: dashboard-metrics-scraper
    namespace: kubernetes-dashboard
  spec:
    ports:
      - port: 8000
        targetPort: 8000
    selector:
      k8s-app: dashboard-metrics-scraper

  ---

  kind: Deployment
  apiVersion: apps/v1
  metadata:
    labels:
      k8s-app: dashboard-metrics-scraper
    name: dashboard-metrics-scraper
    namespace: kubernetes-dashboard
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        k8s-app: dashboard-metrics-scraper
    template:
      metadata:
        labels:
          k8s-app: dashboard-metrics-scraper
        annotations:
          seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
      spec:
        containers:
          - name: dashboard-metrics-scraper
            image: kubernetesui/metrics-scraper:v1.0.6
            ports:
              - containerPort: 8000
                protocol: TCP
            livenessProbe:
              httpGet:
                scheme: HTTP
                path: /
                port: 8000
              initialDelaySeconds: 30
              timeoutSeconds: 30
            volumeMounts:
              - mountPath: /tmp
                name: tmp-volume
            securityContext:
              allowPrivilegeEscalation: false
              readOnlyRootFilesystem: true
              runAsUser: 1001
              runAsGroup: 2001
        serviceAccountName: kubernetes-dashboard
        nodeSelector:
          "kubernetes.io/os": linux
        # Comment the following tolerations if Dashboard must not be deployed on master
        tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
        volumes:
          - name: tmp-volume
            emptyDir: {}

Run the install

  kubectl apply -f dashboard.yaml

Wait until the pods are Running

  kubectl get pod -A


Expose the port

  kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Set the service type

Change type: ClusterIP to type: NodePort.
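
Equivalently, the same change can be made non-interactively with a patch:

  kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'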

Verify: confirm the NodePort mapping, so the security group can allow the port

  kubectl get svc -A |grep kubernetes-dashboard

The mapped NodePort here is 30770.

Browse to any node, for example:
https://yan:30770

Get a login token

Create an access account
The complete file:
dash.yaml
Or write it by hand:

  # create the access account; vi dash.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: admin-user
    namespace: kubernetes-dashboard
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: admin-user
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard

Apply it

  kubectl apply -f dash.yaml

Get the token

  # fetch the access token
  kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

The token looks like this:

  eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9ma3lPV0NSZGVHeUtsU0N6cU1mdGdsQTJaR0RMREp4Y2g4djV5SEN2WmsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLW1tbDd3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2MzZjOTkwNS0yNjA5LTQzMDItOGQzYS05ZDM1ZTNmM2UzODEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.drsFRwwSwjYTahaeTN8Kou3bBnbx_RIQLIPnsx7eyBHRn78XTTq3xOhAFtquWBUZv2LJhfpq4zf8z1zlMizD_9Ys7warApuj5SH6pJy-IDrk2pbOfwOX5M89tswCgG85qofXhSVqUmfGn5avkpq81wb4bT6TeQaY-5OHaWZGeHc7sUvrv9NR5wEUo5FSZmx3aStZum-lir5tp64MYvbosVhNkOMlEUc1-j5OhEr6UHcMPkhkFCCyU7Y8JZitpT6oHY32Kl51Yqj2eJMYeA5wtBk2yeXXZ00EvEwMlgdaPPWgzVGx8GMjbfA2ACfYx1bWaNmdEMLrQfAnkpVBVHcG3Q

Paste the token into the login page, then switch namespaces as needed.

Done.