# Kubernetes

## 1. Deployment Preparation

Deploy a minimal K8s cluster: master + node1 + node2.
Ubuntu is a desktop-oriented operating system based on Debian Linux, covering word processing, e-mail, software development tools, web services, and more; it is free for users to download, use, and share.

```bash
➜ vgs
Current machine states:

master                    running (virtualbox)
node1                     running (virtualbox)
node2                     running (virtualbox)
```
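For reference, a minimal sketch of what a three-node Vagrantfile behind these machines might look like. The box name and VM sizing are assumptions; the IPs match the /etc/hosts entries shown below:

```bash
# Hypothetical sketch: three-node Vagrantfile matching this guide's IPs.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  {
    "master" => "192.168.30.30",
    "node1"  => "192.168.30.31",
    "node2"  => "192.168.30.32",
  }.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = "k8s-#{name}"
      node.vm.network "private_network", ip: ip
      node.vm.provider "virtualbox" do |vb|
        vb.cpus   = 2      # kubeadm requires at least 2 CPUs on the control plane
        vb.memory = 2048
      end
    end
  end
end
EOF
```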

### 1.1 Basic Environment

Set the hostname on every node and make sure the nodes resolve each other through the hosts file.

- The three machines brought up by this Vagrantfile already have this configured.
- The master node is used for the demonstration below; the steps on the other nodes are identical.

```bash
# hostnamectl
vagrant@k8s-master:~$ hostnamectl
   Static hostname: k8s-master

# hosts
vagrant@k8s-master:~$ cat /etc/hosts
127.0.0.1      localhost
127.0.1.1      vagrant.vm vagrant
192.168.30.30  k8s-master
192.168.30.31  k8s-node1
192.168.30.32  k8s-node2

# ping
vagrant@k8s-master:~$ ping k8s-node1
PING k8s-node1 (192.168.30.31) 56(84) bytes of data.
64 bytes from k8s-node1 (192.168.30.31): icmp_seq=1 ttl=64 time=0.689 ms
```

  1. <a name="rZRJ4"></a>
  2. ### 1.2 阿里源配置
  3. 配置 Ubuntu 的阿里源来加速安装速度
  4. - 阿里源镜像地址
  5. ```bash
  6. # 登录服务器
  7. ➜ vgssh master/node1/nod2
  8. Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-50-generic x86_64)
  9. # 设置阿里云Ubuntu镜像
  10. $ sudo cp /etc/apt/sources.list{,.bak}
  11. $ sudo vim /etc/apt/sources.list
  12. # 配置kubeadm的阿里云镜像源
  13. $ sudo vim /etc/apt/sources.list
  14. deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
  15. $ sudo gpg --keyserver keyserver.ubuntu.com --recv-keys BA07F4FB
  16. $ sudo gpg --export --armor BA07F4FB | sudo apt-key add -
  17. # 配置docker安装
  18. $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  19. $ sudo apt-key fingerprint 0EBFCD88
  20. $ sudo vim /etc/apt/sources.list
  21. deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
  22. # 更新仓库
  23. $ sudo apt update
  24. $ sudo apt dist-upgrade
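The base Ubuntu mirror entries themselves are not shown above; for Ubuntu 18.04 (bionic) the Aliyun sources typically look like the sketch below (standard Aliyun mirror layout; verify against the mirror documentation before use):

```bash
# Sketch: base Aliyun entries for Ubuntu 18.04 (bionic); the kubernetes and
# docker lines above are then appended to this same file.
sudo tee /etc/apt/sources.list <<'EOF'
deb https://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
EOF
```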

### 1.3 Installing Base Tools

Install the tools needed during the deployment phase:

- Container runtime: docker
- Deployment tool: kubeadm
- IPVS routing rules: ipvsadm
- Time synchronization: ntp

```bash
# Install the base tools
$ sudo apt install -y \
    docker-ce docker-ce-cli containerd.io \
    kubeadm ipvsadm \
    ntp ntpdate \
    nginx supervisor

# Add the current user to the docker group (requires re-login)
$ sudo usermod -a -G docker $USER

# Enable and start the services
$ sudo systemctl enable docker.service
$ sudo systemctl start docker.service
$ sudo systemctl enable kubelet.service
$ sudo systemctl start kubelet.service
```
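One pitfall worth noting: kubeadm expects the kubelet and the container runtime to agree on a cgroup driver, and the Kubernetes documentation recommends systemd for Docker. If `kubeadm init` later warns about the cgroup driver, the usual fix is the sketch below (adjust if you already manage daemon.json):

```bash
# Sketch: switch Docker to the systemd cgroup driver (recommended by the K8s docs)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
# Verify which driver is active
docker info | grep -i 'cgroup driver'
```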

  1. <a name="Jwamm"></a>
  2. ### 1.4 操作系统配置
  3. 操作系统相关配置
  4. - 关闭缓存
  5. - 配置内核参数
  6. - 调整系统时区
  7. - 升级内核版本(默认为4.15.0的版本)
  8. ```bash
  9. # 关闭缓存
  10. $ sudo swapoff -a
  11. # 为K8S来调整内核参数
  12. $ sudo touch /etc/sysctl.d/kubernetes.conf
  13. $ sudo cat > /etc/sysctl.d/kubernetes.conf <<EOF
  14. net.bridge.bridge-nf-call-iptables = 1 # 开启网桥模式(必须)
  15. net.bridge.bridge-nf-call-ip6tables = 1 # 开启网桥模式(必须)
  16. net.ipv6.conf.all.disable_ipv6 = 1 # 关闭IPv6协议(必须)
  17. net.ipv4.ip_forward = 1 # 转发模式(默认开启)
  18. vm.panic_on_oom=0 # 开启OOM(默认开启)
  19. vm.swappiness = 0 # 禁止使用swap空间
  20. vm.overcommit_memory=1 # 不检查物理内存是否够用
  21. fs.inotify.max_user_instances=8192
  22. fs.inotify.max_user_watches=1048576
  23. fs.file-max = 52706963 # 设置文件句柄数量
  24. fs.nr_open = 52706963 # 设置文件的最大打开数量
  25. net.netfilter.nf_conntrack_max = 2310720
  26. EOF
  27. # 查看系统内核参数的方式
  28. $ sudo sysctl -a | grep xxx
  29. # 使内核参数配置文件生效
  30. $ sudo sysctl -p /etc/sysctl.d/kubernetes.conf
  31. # 设置系统时区为中国/上海
  32. $ sudo timedatectl set-timezone Asia/Shanghai
  33. # 将当前的UTC时间写入硬件时钟
  34. $ sudo timedatectl set-local-rtc 0

### 1.5 Enabling IPVS

Enable the IPVS kernel modules:

- This is a prerequisite for running kube-proxy in ipvs mode

```bash
# Load the bridge netfilter module
$ sudo modprobe br_netfilter

# Write the module list
$ sudo mkdir -p /etc/sysconfig/modules
$ sudo tee /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Load the modules
$ sudo chmod 755 /etc/sysconfig/modules/ipvs.modules \
    && sudo bash /etc/sysconfig/modules/ipvs.modules \
    && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```
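Note that /etc/sysconfig/modules is a RHEL convention and nothing on Ubuntu loads it automatically, so the script above only takes effect for the current boot. On Ubuntu the same modules can be persisted through systemd's modules-load.d (a sketch, assuming the module set above):

```bash
# Sketch: persist the IPVS modules across reboots on Ubuntu
sudo tee /etc/modules-load.d/ipvs.conf <<'EOF'
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
sudo systemctl restart systemd-modules-load.service
```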

  1. <a name="mpsSv"></a>
  2. ## 2、部署 Master 节点
  3. 节点最低配置: 2C+2G 内存;从节点资源尽量充足<br />kubeadm 工具的 init 命令,即可初始化以单节点部署的 master。为了避免翻墙,这里可以使用阿里云的谷歌源来代替。在执行 kubeadm 部署命令的时候,指定对应地址即可。当然,可以将其加入本地的镜像库之中,更易维护。<br />注意事项
  4. - 阿里云谷歌源地址
  5. - 使用 kubeadm 定制控制平面配置
  6. ```bash
  7. # 登录服务器
  8. ➜ vgssh master
  9. Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-50-generic x86_64)
  10. # 部署节点(命令行)
  11. # 注意pod和service的地址需要不同(否则会报错)
  12. $ sudo kubeadm init \
  13. --kubernetes-version=1.20.2 \
  14. --image-repository registry.aliyuncs.com/google_containers \
  15. --apiserver-advertise-address=192.168.30.30 \
  16. --pod-network-cidr=10.244.0.0/16 \
  17. --service-cidr=10.245.0.0/16
  18. # 部署镜像配置(配置文件)
  19. $ sudo kubeadm init --config ./kubeadm-config.yaml
  20. Your Kubernetes control-plane has initialized successfully!
  21. # 查看IP段是否生效(iptable)
  22. $ ip route show
  23. 10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
  24. 10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
  25. 10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
  26. # # 查看IP段是否生效(ipvs)
  27. $ ipvsadm -L -n
  28. IP Virtual Server version 1.2.1 (size=4096)
  29. Prot LocalAddress:Port Scheduler Flags
  30. -> RemoteAddress:Port Forward Weight ActiveConn InActConn

The config file is defined as follows:

- Uses the v1beta2 API
- Sets the master node address to 192.168.30.30
- Allocates the 10.244.0.0/16 range to flannel
- Targets Kubernetes 1.20.2, the latest release at the time of writing
- Adds horizontal-scaling options to the controllerManager

```yaml
# kubeadm-config.yaml
# sudo kubeadm config print init-defaults > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.2
apiServer:
  extraArgs:
    advertise-address: 192.168.30.30
networking:
  podSubnet: 10.244.0.0/16
controllerManager:
  extraArgs:
    horizontal-pod-autoscaler-use-rest-clients: "true"
    horizontal-pod-autoscaler-sync-period: "10s"
    node-monitor-grace-period: "10s"
```
On success, kubeadm prints the output below; work through it step by step.

Step 1: when kubectl controls and operates the cluster it needs the CA credentials, with TLS securing the transport. The three commands copy those credentials into your home directory; kubectl looks in `.kube` first and uses them to access the cluster.

Step 2: adding worker nodes later requires the bootstrap token, so save the printed `kubeadm join` line for future use.

```bash
# master setting, step one
To start cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

# master setting, step two
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.30.30:6443 \
    --token lebbdi.p9lzoy2a16tmr6hq \
    --discovery-token-ca-cert-hash \
    sha256:6c79fd83825d7b2b0c3bed9e10c428acf8ffcd615a1d7b258e9b500848c20cae
```

Join the worker nodes to the master:

```bash
$ kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   62m   v1.20.2
k8s-node1    NotReady   <none>                 82m   v1.20.2
k8s-node2    NotReady   <none>                 82m   v1.20.2

# List the existing tokens
$ sudo kubeadm token list
# Create a token
$ sudo kubeadm token create
# Recover the CA cert hash if you forgot it
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
# Generate a new token (simpler than the above)
$ kubeadm token generate
# Print a ready-made join command (simpler still)
$ kubeadm token create <token_generate> --print-join-command --ttl=0
```
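The commands above all run on the master; on each worker, the join itself is then executed as root. For example (the token and hash are the sample values from the init output above; substitute your own):

```bash
# Run on node1 and node2 to join them to the cluster
sudo kubeadm join 192.168.30.30:6443 \
    --token lebbdi.p9lzoy2a16tmr6hq \
    --discovery-token-ca-cert-hash sha256:6c79fd83825d7b2b0c3bed9e10c428acf8ffcd615a1d7b258e9b500848c20cae
```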

Once that is done, the master can be inspected with the commands below.

Four namespaces exist by default:

- default, kube-system, kube-public, kube-node-lease

The core services deployed in kube-system are:

- coredns, etcd
- kube-apiserver, kube-scheduler
- kube-controller-manager, kube-proxy

At this point the master is still not in the Ready state because no network plugin is installed; flannel is installed below.

```bash
# Namespaces
$ kubectl get namespace
NAME              STATUS   AGE
default           Active   19m
kube-node-lease   Active   19m
kube-public       Active   19m
kube-system       Active   19m

# Core services
$ kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-bh42f             1/1     Running   0          19m
coredns-7f89b7bc75-dvzpl             1/1     Running   0          19m
etcd-k8s-master                      1/1     Running   0          19m
kube-apiserver-k8s-master            1/1     Running   0          19m
kube-controller-manager-k8s-master   1/1     Running   0          19m
kube-proxy-5rlpv                     1/1     Running   0          19m
kube-scheduler-k8s-master            1/1     Running   0          19m
```

## 3. Deploying the flannel Network

The network add-on manages the service network inside the K8s cluster.
flannel must be given an IP range: the 10.244.0.0/16 configured through the manifest in the previous step. The service could be deployed directly from the flannel project (or with Helm), but the upstream URL is unreachable from mainland China without a proxy, so save its contents into the config file below and change the image addresses accordingly.
Official download address for the flannel manifest

```bash
# Deploy flannel
# 1. Change the image address (if the default cannot be pulled)
# 2. Make sure Network matches the --pod-network-cidr range
$ kubectl apply -f ./kube-flannel.yml

# If the deployment misbehaves, inspect it with:
$ kubectl logs kube-flannel-ds-6xxs5 --namespace=kube-system
$ kubectl describe pod kube-flannel-ds-6xxs5 --namespace=kube-system
```
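Step 1 can be scripted. A sketch, assuming you mirror the flannel image to a registry you can reach (the target path below is an example, not a published repository):

```bash
# Sketch: rewrite the flannel image references to a reachable mirror
# before applying the manifest; the target registry is an assumption.
sed -i 's#quay.io/coreos/flannel#registry.example.com/mirror/flannel#g' kube-flannel.yml
grep 'image:' kube-flannel.yml   # verify the rewrite
```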

If you run into problems, the official troubleshooting guide is worth consulting.
Because the machines here are virtualized with Vagrant, applying the manifest as-is fails. The logs show that flannel binds eth0 by default, but that NIC belongs to Vagrant; it should bind eth1 instead.
Vagrant typically assigns two interfaces to every VM: the first gets the address 10.0.2.15 on every host and carries NAT'd outbound traffic, which breaks the flannel deployment. Per the official issue discussion, the `--iface=eth1` flag selects the second interface.
You can add the flag as described in the "flannel use --iface=eth1" answer; here the startup manifest was edited directly, passing the flag through the container args as shown below.

```bash
$ kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-bh42f             1/1     Running   0          61m
coredns-7f89b7bc75-dvzpl             1/1     Running   0          61m
etcd-k8s-master                      1/1     Running   0          62m
kube-apiserver-k8s-master            1/1     Running   0          62m
kube-controller-manager-k8s-master   1/1     Running   0          62m
kube-flannel-ds-zl148                1/1     Running   0          44s
kube-flannel-ds-ll523                1/1     Running   0          44s
kube-flannel-ds-wpmhw                1/1     Running   0          44s
kube-proxy-5rlpv                     1/1     Running   0          61m
kube-scheduler-k8s-master            1/1     Running   0          62m
```

The manifest is shown below:

```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ["NET_ADMIN", "NET_RAW"]
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: "RunAsAny"
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
    resourceNames: ["psp.flannel.unprivileged"]
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.13.1-rc1
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.13.1-rc1
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
            # Bind the second NIC; eth0 is Vagrant's NAT interface
            - --iface=eth1
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
```

At this point the cluster is deployed successfully! If a parameter needs fixing, you can also `reset` and then re-run `init`.

```bash
$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   62m   v1.20.2
k8s-node1    Ready    <none>                 82m   v1.20.2
k8s-node2    Ready    <none>                 82m   v1.20.2

# Rebuild the cluster
$ sudo kubeadm reset
$ sudo kubeadm init
```
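`kubeadm reset` does not remove everything; in particular the CNI configuration, IPVS rules, and your kubeconfig survive. A commonly used cleanup between `reset` and a fresh `init` (a sketch; adapt to what you actually deployed):

```bash
# Extra cleanup after `kubeadm reset`, before the next `kubeadm init`
sudo rm -rf /etc/cni/net.d     # stale flannel CNI config
sudo ipvsadm --clear           # leftover IPVS virtual servers
rm -f $HOME/.kube/config       # old admin credentials
```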

## 4. Deploying the Dashboard Service

The dashboard is a web UI for visually monitoring cluster state.
Downloading its startup manifest runs into the same proxy problem; use the address below, then upload the file to the server before deploying.
Official download address for the dashboard manifest

```bash
# Deploy the dashboard
$ kubectl apply -f ./kube-dashboard.yaml

# If the deployment misbehaves, inspect it with:
$ kubectl logs \
    kubernetes-dashboard-c9fb67ffc-nknpj \
    --namespace=kubernetes-dashboard
$ kubectl describe pod \
    kubernetes-dashboard-c9fb67ffc-nknpj \
    --namespace=kubernetes-dashboard

$ kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.245.214.11    <none>        8000/TCP   26s
kubernetes-dashboard        ClusterIP   10.245.161.146   <none>        443/TCP    26s
```

Note that the dashboard does not allow external access by default, even when `kubectl proxy` is told to accept external connections. It also serves HTTPS only, and the CA certificate self-signed during `kubeadm init` is not trusted by browsers.
The approach taken here is Nginx as a reverse proxy: it serves a valid Let's Encrypt certificate externally and forwards via `proxy_pass` to `kubectl proxy`, as shown below. The dashboard is then reachable locally on port 8888, and externally through Nginx.

```bash
# Proxy (can be managed by supervisor)
$ kubectl proxy --accept-hosts='^*$'
$ kubectl proxy --port=8888 --accept-hosts='^*$'

# Test the proxy (listens on port 8001 by default)
$ curl -X GET -L http://localhost:8001

# Local reverse proxy (can use nginx)
proxy_pass http://localhost:8001;
proxy_pass http://localhost:8888;

# External URL
https://mydomain/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
```

The config files, tidied up:

- nginx
- supervisor

```nginx
# k8s.conf
client_max_body_size 80M;
client_body_buffer_size 128k;
proxy_connect_timeout 600;
proxy_read_timeout 600;
proxy_send_timeout 600;

server {
    listen 8080 ssl;
    server_name _;

    ssl_certificate /etc/kubernetes/pki/ca.crt;
    ssl_certificate_key /etc/kubernetes/pki/ca.key;
    access_log /var/log/nginx/k8s.access.log;
    error_log /var/log/nginx/k8s.error.log error;

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/;
    }
}
```

```ini
# k8s.conf
[program:k8s-master]
command=kubectl proxy --accept-hosts='^*$'
user=vagrant
environment=KUBECONFIG="/home/vagrant/.kube/config"
stopasgroup=true
autostart=true
autorestart=unexpected
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_logfile=/var/log/supervisor/k8s-stderr.log
stdout_logfile=/var/log/supervisor/k8s-stdout.log
```
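To activate the supervisor program (assuming the file lives at /etc/supervisor/conf.d/k8s.conf):

```bash
# Pick up the new program definition and verify it is running
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status k8s-master
```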

The dashboard manifest is as follows:

```yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames:
      [
        "kubernetes-dashboard-key-holder",
        "kubernetes-dashboard-certs",
        "kubernetes-dashboard-csrf",
      ]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames:
      [
        "heapster",
        "http:heapster:",
        "https:heapster:",
        "dashboard-metrics-scraper",
        "http:dashboard-metrics-scraper",
      ]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: registry.cn-shanghai.aliyuncs.com/jieee/dashboard:v2.0.4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: "runtime/default"
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: registry.cn-shanghai.aliyuncs.com/jieee/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
```

Method 1: logging in to the dashboard (with a manifest)

- Using a token
- Using a kubeconfig file


```bash
# Create an admin account (dashboard)
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Bind the account to the existing cluster-admin role
$ cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
EOF

# Fetch the token used to log in
$ kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret \
    | grep admin-user | awk '{print $1}')
```

For Chrome, click anywhere on the blank warning page and type `thisisunsafe` to proceed.
For Firefox, when the certificate is rejected, add a security exception.

Method 2: granting dashboard permissions (without a manifest)

- If the dashboard reports permission errors after login, run the following
- Bind the serviceaccount to cluster-admin
- This grants the serviceaccount admin access to the entire cluster

```bash
# Create the serviceaccount
$ kubectl create serviceaccount dashboard-admin -n kube-system

# Bind the serviceaccount to cluster-admin, granting it
# admin access to the entire cluster
$ kubectl create clusterrolebinding dashboard-cluster-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:dashboard-admin

# List the secrets in kube-system; one of them holds the token
$ kubectl get secret -n kube-system

# Describe the dashboard-admin-token-slfcr secret found above
$ kubectl describe secret -n kube-system

# Open the URL below in a browser and paste in the token to log in
# https://192.168.30.30:8080/

# Shortcut for printing the token
$ kubectl describe secrets -n kube-system \
    $(kubectl -n kube-system get secret | awk '/admin/{print $1}')
```