Software versions

Software   Version
CentOS     7.6
Docker     20.10.7

Note: make sure all three machines are on the same LAN, otherwise the network plugin installation may fail.

Basic cluster environment setup

Provision the cloud hosts
Configure passwordless SSH across the cluster
See: configuring passwordless SSH login on CentOS hosts
Set SELinux to permissive mode (effectively disabling it)

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
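
A quick sanity check (not in the original steps): SELinux should now report permissive mode.

getenforce   # expect: Permissive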

Disable swap

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
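
To verify swap is fully off (an optional check):

free -h   # the Swap line should show 0B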

Allow iptables to see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
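
The modules-load.d file above only loads br_netfilter at the next boot. To load it immediately without rebooting (a standard companion step, added here for completeness):

sudo modprobe br_netfilter
lsmod | grep br_netfilter   # confirm the module is loaded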

Make the configuration above take effect:

sudo sysctl --system

Set up time synchronization

Copy the rpm files over in advance:

# ntp
cd /opt/software/ntp   # directory holding the pre-copied rpm files
rpm -ivh *.rpm
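
Assuming the rpms above provide the classic ntpd service (an assumption; adjust to whatever your packages actually install), enable it and check that it syncs:

systemctl enable --now ntpd   # assumes the ntp rpms installed ntpd
ntpq -p                       # list the time sources ntpd is using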

Creating the cluster with kubeadm

Installing kubelet, kubeadm, and kubectl offline

Run the following on all nodes.

Official downloads: https://www.downloadkubernetes.com/

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

sudo systemctl enable --now kubelet
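
A quick way to confirm all three tools landed at the pinned version (an optional check):

kubeadm version -o short           # expect v1.20.9
kubectl version --client --short   # expect v1.20.9
kubelet --version                  # expect v1.20.9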

Check the kubelet service:

systemctl status kubelet


kubelet now restarts every few seconds: it is stuck in a crash loop, waiting for instructions from kubeadm.
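
You can watch that loop in the kubelet logs (an optional check):

journalctl -u kubelet -f   # follow the logs; the restarts stop once kubeadm init/join runs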

Bootstrapping the cluster with kubeadm

Run the following on all nodes.

Pull the images on all nodes:

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh

Seven images in total.
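
To confirm all seven were pulled (a quick check, not in the original):

docker images | grep lfy_k8s_images | wc -l   # expect 7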

Initialize the master node

Master node (hostname ren): private IP 10.0.4.17, public IP 1.15.230.38

Run on the master node:

# Every machine needs a hosts entry for the master; change the IP to your own
echo "10.0.4.17 cluster-endpoint" >> /etc/hosts

Run on the other nodes:

# Every machine needs a hosts entry for the master; change the IP to your own
echo "1.15.230.38 cluster-endpoint" >> /etc/hosts

Verify with ping.

Problem: ping fails. Solution: allow the ICMP protocol in the cloud hosts' security rules.

Run on the master node.

--apiserver-advertise-address uses the public IP.

kubeadm init \
  --apiserver-advertise-address=1.15.230.38 \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16

Error: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
Cause: Docker's default cgroup driver is cgroupfs.
Fix: switch Docker's cgroup driver to systemd.

The detailed fix is covered in a separate note:
Adjusting Docker's cgroup driver to work with k8s
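
For reference, a sketch of the usual fix (see the linked note for the full steps; if /etc/docker/daemon.json already has content, merge rather than overwrite):

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
docker info | grep -i "cgroup driver"   # should now report systemd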
Then run kubeadm init again. The complete log:

[root@ren ~]# kubeadm init \
> --apiserver-advertise-address=10.0.4.17 \
> --control-plane-endpoint=cluster-endpoint \
> --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
> --kubernetes-version v1.20.9 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.20.9
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [cluster-endpoint kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ren] and IPs [10.96.0.1 10.0.4.17]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ren] and IPs [10.0.4.17 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ren] and IPs [10.0.4.17 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.002908 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ren as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node ren as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: co20qa.8557cvja5f3g0vs5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token co20qa.8557cvja5f3g0vs5 \
    --discovery-token-ca-cert-hash sha256:76627665f22a96c465792de8c9e4947b34a658c0d5be3fd66aef5cb375ac2937 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token co20qa.8557cvja5f3g0vs5 \
  --discovery-token-ca-cert-hash sha256:76627665f22a96c465792de8c9e4947b34a658c0d5be3fd66aef5cb375ac2937
[root@ren ~]#

Follow the prompts from the output.

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

Verify the master node:

kubectl get nodes

STATUS is NotReady because no network plugin has been deployed yet.

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Calico is used here.
Download the manifest:

curl https://docs.projectcalico.org/v3.18/manifests/calico.yaml -O

The complete file can be kept locally and copied over for offline environments.

Apply calico.yaml to install the components:

kubectl apply -f calico.yaml

Verify by listing what is deployed in the cluster:

kubectl get pods -A

Wait until every pod is Running before moving on.
Check that the master node is Ready:

kubectl get nodes


Join the worker nodes once the master is Ready

Copy the worker join command from the init log above.

This command contains a token, which is valid for 24 hours.

Run the following on each worker node:

kubeadm join cluster-endpoint:6443 --token co20qa.8557cvja5f3g0vs5 \
  --discovery-token-ca-cert-hash sha256:76627665f22a96c465792de8c9e4947b34a658c0d5be3fd66aef5cb375ac2937

Verify the worker nodes joined successfully. On the master, run:

kubectl get nodes

NotReady here just means some components are still initializing.

Watch the progress:

# Watch pods start up
kubectl get pod -A -w
# or
watch -n 1 kubectl get pod -A
# Check pod status and placement across the nodes
kubectl get pods -o wide --all-namespaces
# or
watch -n 1 kubectl get pods -o wide --all-namespaces

If the install errored, apply it again:

kubectl apply -f calico.yaml

Error: error: unable to recognize "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1". Cause: this Calico release is not supported by the installed Kubernetes version; the Calico/Kubernetes version compatibility matrix is on the Calico site.

Download again:

curl https://docs.projectcalico.org/v3.18/manifests/calico.yaml -O

Apply again:

kubectl apply -f calico.yaml

If it still does not work, the hosts may not be on the same LAN.

If the cluster token has expired and new worker nodes need to join, generate a new token

Run on the master. In a highly available deployment, this is also the step where additional control-plane nodes join, using the control-plane variant of the join command.

kubeadm token create --print-join-command
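
Two related kubeadm subcommands that can help here:

kubeadm token list                                  # inspect existing tokens and their TTLs
kubeadm token create --ttl 0 --print-join-command   # a token that never expires (convenient but less secure)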


Deploying the Dashboard

Installing from YAML


The official Kubernetes web UI:
https://github.com/kubernetes/dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

If the download is blocked, install offline with the manifest below:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Run the install:

kubectl apply -f dashboard.yaml

Wait for the pods to reach Running:

kubectl get pod -A


Expose the port

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Set the service type:

Change type: ClusterIP to type: NodePort.
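
Equivalently, the same change can be made non-interactively with kubectl patch (an alternative to editing, not in the original):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'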

Verify and note the mapped port, so it can be opened in the security group:

kubectl get svc -A | grep kubernetes-dashboard

The mapped NodePort here is 32335.
Open any node's address in a browser:
https://139.198.163.196:32335

Get a token

Create an access account

# Create the access account; save the following as a yaml file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply it:

kubectl apply -f dash.yaml

Retrieve the token:

# Get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

The token looks like this:

eyJhbGciOiJSUzI1NiIsImtpZCI6Ikd1bEQxbnZ1Z2JYMlZpNG1ZTUdRMUkwd29SX2xXb1ktbkRtNHh1Z2VtSVkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLW1sem02Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyYTEwOTAyMy05ZjAyLTQ3OGUtYmZlNS1jZmMwYjlhZjA1ZmEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.hO2xd5Mtp9CXXo8O7j9A_LCeemh6HgsypjOiS9QYItXfMd6gb33wtzS0W8rmKdGAmhN9_m1KhXimsxwyVYXcqxqYqXENEskeR2Eo7IIwS70Lv6ezEKV5y9pg9CNB2HVvNir634QCpmpbWGfF1Ry6Cb_dFx4EYxHx9jcpV2twkXG6vlBYevZf1V2q9Lszpeqj75EJxPnrqLF2xKILGO2F3seAThk7wXJBzEamtZBss4v2LR_d7LliHx0pzqNV4FEbQO4OXyQp0bayagkhhDtQGC3pnvS5FeJs0cvXgqvK_VYa6qATsLht0doRQQUjbWTH4x4rWXrzFdeVU2VeXAAMtw

Paste the token into the Dashboard login page.
Switch namespaces as needed.

Done.