Prerequisites

  • OS: CentOS 8
  • 3 GB of memory or more is recommended (this matters: with less, cluster initialization may fail)
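Before starting, a quick pre-flight check can confirm the machine meets these requirements (a sketch using standard tools; note that `kubeadm init` also requires at least 2 CPUs):

```shell
# Pre-flight check: total memory (MB) and CPU count
mem_mb=$(free -m | awk '/^Mem:/ {print $2}')
cpus=$(nproc)
echo "Memory: ${mem_mb} MB, CPUs: ${cpus}"
[ "$mem_mb" -ge 3000 ] || echo "WARN: less than 3 GB RAM; kubeadm init may fail"
[ "$cpus" -ge 2 ] || echo "WARN: kubeadm requires at least 2 CPUs"
```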

1. Check the system version

  [root@localhost ~]# cat /etc/centos-release
  CentOS Linux release 8.2.2004 (Core)

2. Configure a static IP address

  [root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
  TYPE="Ethernet"
  PROXY_METHOD="none"
  BROWSER_ONLY="no"
  BOOTPROTO="static"
  DEFROUTE="yes"
  IPV4_FAILURE_FATAL="no"
  IPV6INIT="yes"
  IPV6_AUTOCONF="yes"
  IPV6_DEFROUTE="yes"
  IPV6_FAILURE_FATAL="no"
  IPV6_ADDR_GEN_MODE="stable-privacy"
  NAME="ens33"
  UUID="bce8c979-9f30-4b67-819e-cae1ef0b70c0"
  DEVICE="ens33"
  ONBOOT="yes"
  IPADDR="192.168.0.127"
  NETMASK="255.255.255.0"
  GATEWAY="192.168.0.1"
  DNS1="8.8.8.8"

3. Add the Aliyun yum repository

  [root@localhost ~]# rm -rfv /etc/yum.repos.d/*
  [root@localhost ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

Note: the Aliyun mirror address is required here; without it, package and image downloads may fail.
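After swapping in the Aliyun repo, it is worth rebuilding the yum cache so that subsequent installs hit the new mirror (a sketch; these are standard yum/dnf commands and need root):

```shell
# Rebuild the metadata cache against the newly added Aliyun mirror
yum clean all
yum makecache
# Sanity check: the enabled repos should now point at mirrors.aliyun.com
yum repolist
```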

4. Update the hosts file

  [root@localhost ~]# vim /etc/hosts
  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  192.168.0.127 master01.paas.com master01

5. Disable swap and comment out the swap partition in /etc/fstab

  [root@localhost ~]# swapoff -a
  [root@localhost ~]# cat /etc/fstab
  #
  # /etc/fstab
  # Created by anaconda on Thu May 13 00:46:53 2021
  #
  # Accessible filesystems, by reference, are maintained under '/dev/disk/'.
  # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
  #
  # After editing this file, run 'systemctl daemon-reload' to update systemd
  # units generated from this file.
  #
  UUID=d5aed907-ab30-47d1-af47-f76abea61f07 / xfs defaults 0 0
  UUID=0e24ccd7-4b02-4873-bfa8-83ba1f1e676b /boot ext4 defaults 1 2
  #UUID=dedd03a8-0500-4383-b868-cec55f4dd8bd swap swap defaults 0 0
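Instead of commenting out the swap line by hand, the edit can be scripted (a sketch; the `sed` pattern simply prefixes any active fstab line that mentions swap with `#`):

```shell
# Turn swap off for the running system
swapoff -a
# Comment out every uncommented fstab line that mounts swap
sed -ri 's@^[^#].*\bswap\b.*@#&@' /etc/fstab
# Verify: Swap total should now be 0
free -m | grep -i '^Swap'
```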

6. Configure kernel parameters so bridged IPv4 traffic is passed to the iptables chains

  [root@localhost ~]# cat > /etc/sysctl.d/k8s.conf << EOF
  > net.bridge.bridge-nf-call-ip6tables = 1
  > net.bridge.bridge-nf-call-iptables = 1
  > EOF
  [root@localhost ~]# sysctl --system
  * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
  kernel.yama.ptrace_scope = 0
  * Applying /usr/lib/sysctl.d/50-coredump.conf ...
  kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
  * Applying /usr/lib/sysctl.d/50-default.conf ...
  kernel.sysrq = 16
  kernel.core_uses_pid = 1
  kernel.kptr_restrict = 1
  net.ipv4.conf.all.rp_filter = 1
  net.ipv4.conf.all.accept_source_route = 0
  net.ipv4.conf.all.promote_secondaries = 1
  net.core.default_qdisc = fq_codel
  fs.protected_hardlinks = 1
  fs.protected_symlinks = 1
  * Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
  net.core.optmem_max = 81920
  * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
  kernel.pid_max = 4194304
  * Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
  fs.aio-max-nr = 1048576
  * Applying /etc/sysctl.d/99-sysctl.conf ...
  * Applying /etc/sysctl.d/k8s.conf ...
  * Applying /etc/sysctl.conf ...
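If `sysctl --system` reports that the `net.bridge.*` keys do not exist, the `br_netfilter` kernel module is probably not loaded; it is not loaded by default on CentOS 8. A sketch of loading it now and on every boot:

```shell
# Load the bridge netfilter module so the net.bridge.* sysctls exist
modprobe br_netfilter
# Make the module load persist across reboots
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# Confirm the sysctl is now visible and set to 1
sysctl net.bridge.bridge-nf-call-iptables
```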

7. Install common packages

  [root@localhost ~]# yum install vim bash-completion net-tools gcc -y

8. Install docker-ce from the Aliyun repository

  [root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
  [root@localhost ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  [root@localhost ~]# yum -y install docker-ce

If installing docker-ce fails with the following error:

  [root@localhost ~]# yum -y install docker-ce
  CentOS-8 - Base - mirrors.aliyun.com 14 kB/s | 3.8 kB 00:00
  CentOS-8 - Extras - mirrors.aliyun.com 6.4 kB/s | 1.5 kB 00:00
  CentOS-8 - AppStream - mirrors.aliyun.com 16 kB/s | 4.3 kB 00:00
  Docker CE Stable - x86_64 40 kB/s | 22 kB 00:00
  Error:
  Problem: package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
  - package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
  (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Workaround: install a compatible containerd.io package directly, then retry:

  [root@localhost ~]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
  [root@localhost ~]# yum install containerd.io-1.2.6-3.3.el7.x86_64.rpm

With docker-ce installed successfully, start it and enable it at boot:

  [root@localhost ~]# systemctl start docker
  [root@localhost ~]# systemctl enable docker

Add the Aliyun Docker registry accelerator.
Log in to your Aliyun account to get your personal mirror URL at https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors.
This applies to Docker clients newer than 1.10.0.

Enable the accelerator by editing the daemon configuration file /etc/docker/daemon.json:

  sudo mkdir -p /etc/docker
  sudo tee /etc/docker/daemon.json <<-'EOF'
  {
    "registry-mirrors": ["https://lso20XXX.mirror.aliyuncs.com"]
  }
  EOF
  sudo systemctl daemon-reload
  sudo systemctl restart docker
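After restarting Docker, you can confirm the accelerator took effect (a sketch; `docker info` lists configured mirrors under "Registry Mirrors"):

```shell
# The configured mirror URL should appear in the output
docker info | grep -A 1 'Registry Mirrors'
```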

9. Add the Kubernetes yum repository for kubectl, kubelet and kubeadm

  [root@localhost ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  > [kubernetes]
  > name=Kubernetes
  > baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  > enabled=1
  > gpgcheck=1
  > repo_gpgcheck=1
  > gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  > EOF

10. Install kubelet, kubeadm and kubectl

  [root@localhost ~]# yum -y install kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
  [root@localhost ~]# systemctl enable kubelet && systemctl start kubelet

11. Initialize the cluster (this step takes a while; be patient)

  [root@localhost ~]# kubeadm init --kubernetes-version=1.18.0 \
  > --apiserver-advertise-address=192.168.0.127 \
  > --image-repository registry.aliyuncs.com/google_containers \
  > --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

The pod network CIDR is 10.122.0.0/16, and the API server address is the master's own IP.

This step is critical: by default kubeadm pulls its images from k8s.gcr.io, which is not reachable from mainland China, so --image-repository is used to point at the Aliyun image repository instead.
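Optionally, the required images can be pulled ahead of time so that `kubeadm init` itself does not stall on downloads (a sketch using the same repository and version as the init command above):

```shell
# Pre-pull all control-plane images from the Aliyun mirror
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0
```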

  [apiclient] All control plane components are healthy after 20.502515 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Skipping phase. Please see --upload-certs
  [mark-control-plane] Marking the node master01.paas.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node master01.paas.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: fvdmel.61fjcb4ej591sujj
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy
  Your Kubernetes control-plane has initialized successfully!
  To start using your cluster, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
  Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 192.168.0.127:6443 --token fvdmel.61fjcb4ej591sujj \
      --discovery-token-ca-cert-hash sha256:f36a0ec6acd67259e8f86a6a882bdf445685341a4c2b52cebc7e9651d3de7ec6

Output like the above means the installation succeeded.

Save the final part of the output (the kubeadm join command); it must be run on each node that joins the Kubernetes cluster.
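If the join command is lost (bootstrap tokens also expire after 24 hours by default), a fresh one can be generated on the master at any time:

```shell
# Print a complete, ready-to-run join command with a new token
kubeadm token create --print-join-command
```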

12. Set up kubectl as the output instructs

  [root@localhost ~]# mkdir -p $HOME/.kube
  [root@localhost ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  [root@localhost ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run the following to enable kubectl auto-completion:

  [root@localhost ~]# source <(kubectl completion bash)
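To make completion survive new shells, the same line can be appended to the shell profile (a sketch for bash):

```shell
# Persist kubectl completion for future login shells
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```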

13. Check the nodes and pods

  [root@localhost ~]# kubectl get node
  NAME                STATUS     ROLES    AGE    VERSION
  master01.paas.com   NotReady   master   5m4s   v1.18.0
  [root@localhost ~]# kubectl get pod --all-namespaces
  NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
  kube-system   coredns-7ff77c879f-g9cqf                    0/1     Pending   0          7m30s
  kube-system   coredns-7ff77c879f-st5h7                    0/1     Pending   0          7m30s
  kube-system   etcd-master01.paas.com                      1/1     Running   0          7m41s
  kube-system   kube-apiserver-master01.paas.com            1/1     Running   0          7m41s
  kube-system   kube-controller-manager-master01.paas.com   1/1     Running   0          7m41s
  kube-system   kube-proxy-bb58h                            1/1     Running   0          7m30s
  kube-system   kube-scheduler-master01.paas.com            1/1     Running   0          7m41s

14. The node shows NotReady because the CoreDNS pods are Pending: no network plugin is installed yet. Install Calico:

  [root@localhost ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  configmap/calico-config created
  customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
  clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrole.rbac.authorization.k8s.io/calico-node created
  clusterrolebinding.rbac.authorization.k8s.io/calico-node created
  daemonset.apps/calico-node created
  serviceaccount/calico-node created
  deployment.apps/calico-kube-controllers created
  serviceaccount/calico-kube-controllers created
  poddisruptionbudget.policy/calico-kube-controllers created

Check the pods and nodes again:

  [root@localhost ~]# kubectl get pod --all-namespaces
  NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
  kube-system   calico-kube-controllers-6566c5b7d8-hcm8j    1/1     Running   0          2m32s
  kube-system   calico-node-hv6wl                           1/1     Running   0          2m32s
  kube-system   coredns-7ff77c879f-g9cqf                    1/1     Running   0          10m
  kube-system   coredns-7ff77c879f-st5h7                    1/1     Running   0          10m
  kube-system   etcd-master01.paas.com                      1/1     Running   0          11m
  kube-system   kube-apiserver-master01.paas.com            1/1     Running   0          11m
  kube-system   kube-controller-manager-master01.paas.com   1/1     Running   0          11m
  kube-system   kube-proxy-bb58h                            1/1     Running   0          10m
  kube-system   kube-scheduler-master01.paas.com            1/1     Running   0          11m

If some pods have not started successfully, restart kubelet:

  [root@localhost ~]# systemctl restart kubelet

Then query again:

  [root@localhost ~]# kubectl get pod --all-namespaces

Repeat until everything is Running.

15. Install kubernetes-dashboard

The official dashboard manifest does not expose the service via NodePort. Download the yaml file locally and edit the kubernetes-dashboard Service:
Around line 40, add
type: NodePort
Around line 44, add
nodePort: 30000
The complete manifest:

  # Copyright 2018 The Kubernetes Authors.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #     http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.

  apiVersion: v1
  kind: Namespace
  metadata:
    name: kubernetes-dashboard

  ---

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

  ---

  kind: Service
  apiVersion: v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
  spec:
    type: NodePort
    ports:
      - port: 443
        targetPort: 8443
        nodePort: 30000
    selector:
      k8s-app: kubernetes-dashboard

  ---

  apiVersion: v1
  kind: Secret
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-certs
    namespace: kubernetes-dashboard
  type: Opaque

  ---

  apiVersion: v1
  kind: Secret
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-csrf
    namespace: kubernetes-dashboard
  type: Opaque
  data:
    csrf: ""

  ---

  apiVersion: v1
  kind: Secret
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-key-holder
    namespace: kubernetes-dashboard
  type: Opaque

  ---

  kind: ConfigMap
  apiVersion: v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard-settings
    namespace: kubernetes-dashboard

  ---

  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
  rules:
    # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
    - apiGroups: [""]
      resources: ["secrets"]
      resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
      verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
    - apiGroups: [""]
      resources: ["configmaps"]
      resourceNames: ["kubernetes-dashboard-settings"]
      verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
    - apiGroups: [""]
      resources: ["services"]
      resourceNames: ["heapster", "dashboard-metrics-scraper"]
      verbs: ["proxy"]
    - apiGroups: [""]
      resources: ["services/proxy"]
      resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
      verbs: ["get"]

  ---

  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
  rules:
    # Allow Metrics Scraper to get metrics from the Metrics server
    - apiGroups: ["metrics.k8s.io"]
      resources: ["pods", "nodes"]
      verbs: ["get", "list", "watch"]

  ---

  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: kubernetes-dashboard
  subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard

  ---

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: kubernetes-dashboard
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: kubernetes-dashboard
  subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard

  ---

  kind: Deployment
  apiVersion: apps/v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        k8s-app: kubernetes-dashboard
    template:
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
      spec:
        containers:
          - name: kubernetes-dashboard
            image: kubernetesui/dashboard:v2.0.0-rc7
            imagePullPolicy: Always
            ports:
              - containerPort: 8443
                protocol: TCP
            args:
              - --auto-generate-certificates
              - --namespace=kubernetes-dashboard
              # Uncomment the following line to manually specify Kubernetes API server Host
              # If not specified, Dashboard will attempt to auto discover the API server and connect
              # to it. Uncomment only if the default does not work.
              # - --apiserver-host=http://my-address:port
            volumeMounts:
              - name: kubernetes-dashboard-certs
                mountPath: /certs
              # Create on-disk volume to store exec logs
              - mountPath: /tmp
                name: tmp-volume
            livenessProbe:
              httpGet:
                scheme: HTTPS
                path: /
                port: 8443
              initialDelaySeconds: 30
              timeoutSeconds: 30
            securityContext:
              allowPrivilegeEscalation: false
              readOnlyRootFilesystem: true
              runAsUser: 1001
              runAsGroup: 2001
        volumes:
          - name: kubernetes-dashboard-certs
            secret:
              secretName: kubernetes-dashboard-certs
          - name: tmp-volume
            emptyDir: {}
        serviceAccountName: kubernetes-dashboard
        nodeSelector:
          "beta.kubernetes.io/os": linux
        # Comment the following tolerations if Dashboard must not be deployed on master
        tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule

  ---

  kind: Service
  apiVersion: v1
  metadata:
    labels:
      k8s-app: dashboard-metrics-scraper
    name: dashboard-metrics-scraper
    namespace: kubernetes-dashboard
  spec:
    ports:
      - port: 8000
        targetPort: 8000
    selector:
      k8s-app: dashboard-metrics-scraper

  ---

  kind: Deployment
  apiVersion: apps/v1
  metadata:
    labels:
      k8s-app: dashboard-metrics-scraper
    name: dashboard-metrics-scraper
    namespace: kubernetes-dashboard
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        k8s-app: dashboard-metrics-scraper
    template:
      metadata:
        labels:
          k8s-app: dashboard-metrics-scraper
        annotations:
          seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
      spec:
        containers:
          - name: dashboard-metrics-scraper
            image: kubernetesui/metrics-scraper:v1.0.4
            ports:
              - containerPort: 8000
                protocol: TCP
            livenessProbe:
              httpGet:
                scheme: HTTP
                path: /
                port: 8000
              initialDelaySeconds: 30
              timeoutSeconds: 30
            volumeMounts:
              - mountPath: /tmp
                name: tmp-volume
            securityContext:
              allowPrivilegeEscalation: false
              readOnlyRootFilesystem: true
              runAsUser: 1001
              runAsGroup: 2001
        serviceAccountName: kubernetes-dashboard
        nodeSelector:
          "beta.kubernetes.io/os": linux
        # Comment the following tolerations if Dashboard must not be deployed on master
        tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
        volumes:
          - name: tmp-volume
            emptyDir: {}
  [root@localhost soft]# kubectl create -f recommended.yaml
  namespace/kubernetes-dashboard created
  serviceaccount/kubernetes-dashboard created
  service/kubernetes-dashboard created
  secret/kubernetes-dashboard-certs created
  secret/kubernetes-dashboard-csrf created
  secret/kubernetes-dashboard-key-holder created
  configmap/kubernetes-dashboard-settings created
  role.rbac.authorization.k8s.io/kubernetes-dashboard created
  clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
  rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
  clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
  deployment.apps/kubernetes-dashboard created
  service/dashboard-metrics-scraper created
  deployment.apps/dashboard-metrics-scraper created

Check the pods and services:

  [root@localhost soft]# kubectl get pod --all-namespaces
  NAMESPACE              NAME                                        READY   STATUS    RESTARTS   AGE
  kube-system            calico-kube-controllers-6566c5b7d8-hcm8j    1/1     Running   0          17m
  kube-system            calico-node-hv6wl                           1/1     Running   0          17m
  kube-system            coredns-7ff77c879f-g9cqf                    1/1     Running   0          25m
  kube-system            coredns-7ff77c879f-st5h7                    1/1     Running   0          25m
  kube-system            etcd-master01.paas.com                      1/1     Running   0          25m
  kube-system            kube-apiserver-master01.paas.com            1/1     Running   0          25m
  kube-system            kube-controller-manager-master01.paas.com   1/1     Running   0          25m
  kube-system            kube-proxy-bb58h                            1/1     Running   0          25m
  kube-system            kube-scheduler-master01.paas.com            1/1     Running   0          25m
  kubernetes-dashboard   dashboard-metrics-scraper-dc6947fbf-qhh7s   1/1     Running   0          2m27s
  kubernetes-dashboard   kubernetes-dashboard-5d4dc8b976-nklpk       1/1     Running   0          2m27s

As before, if a service has not started, restart kubelet:

  systemctl restart kubelet

Open the dashboard in a browser:
https://192.168.0.127:30000/
Note that the request must be HTTPS.
The browser will warn that the connection is risky; ignore it, click Advanced and proceed to the site. The login page asks for a token, which can be found with:

  [root@localhost ~]# find / -name kubernetes-dashboard-token*
  /var/lib/kubelet/pods/5c6451f1-94c3-4061-be65-267467a24b8c/volumes/kubernetes.io~secret/kubernetes-dashboard-token-njb8k
  /var/lib/kubelet/pods/521c3913-a71b-498e-b7e3-8fa9c5ffe282/volumes/kubernetes.io~secret/kubernetes-dashboard-token-njb8k
  [root@localhost ~]# kubectl describe secrets -n kubernetes-dashboard kubernetes-dashboard-token-njb8k | grep token | awk 'NR==3{print $2}'
  eyJhbGciOiJSUzI1NiIsImtpZCI6IlQyU3l3Z09PWnZ6ajJwdzNJTUlISTJrSHZmUkE0ckhuSnMxMnBpNDVDV1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uamI4ayIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjE3MTk0ZmNlLTM1YWYtNGY3MC1iYWI5LWUzZTBkMzRiOTMwZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.HtGK9YDQlS4dBalBbBhydQzmInyGiYFhGPi8AJxGpVeU5kap_NLU4PKDA3vvd2xaQd8g6KtFl75fL9AgMcDetzzTLOJwWWNDxMkq9qeSQojLN9380XP4XQhkIFu5GxSLYEnGNdjUAS_Y9D7WVNJjJBjL-vEQKsxX6Gj7ybNVIJk82T4E0cc-YBydyfWzSRVYDu6YoFSx_GtdjBYknHM2VsZeimS7_2ojdrWptS4QoBhF1QgtvYRP1ggwm3i8l_7lT3-P6Efh-YVDLW3TXtnlKpZtRYz2XbrUkGrGIev-ihxSKEvsYREKL28SR0geDq3vxWMq3RNLRPYak4Q_XtxKsQ
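As an alternative to searching the filesystem, the same token can be read entirely through kubectl (a sketch; the secret name ends in a generated suffix, hence the grep):

```shell
# Locate the dashboard ServiceAccount token secret and print its token
kubectl -n kubernetes-dashboard describe secret \
  "$(kubectl -n kubernetes-dashboard get secret | grep kubernetes-dashboard-token | awk '{print $1}')" \
  | grep '^token:'
```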

Find and copy the token above and paste it into the login field in the browser.

Click Sign in, and the dashboard opens.