Preface

- Kubernetes ships with several built-in role kinds:
  - Role: grants access within a specific namespace
  - ClusterRole: grants access across all namespaces
- Role bindings:
  - RoleBinding: binds a Role to subjects
  - ClusterRoleBinding: binds a ClusterRole to subjects
- Subjects:
  - User: a user
  - Group: a group of users
  - ServiceAccount: a service account
- A role can be bound to subjects in two ways (both appear below):
  - binding to kind ServiceAccount
  - binding to kind User
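These objects can be inspected straight from kubectl; a quick sketch (the subject name in the last command is only an example):

```shell
# List the built-in and custom ClusterRoles (admin, edit, view, cluster-admin, ...)
kubectl get clusterroles
# List the Roles and RoleBindings in a namespace
kubectl get roles,rolebindings -n kube-system
# Ask whether a subject may perform an action
kubectl auth can-i list pods --as=system:serviceaccount:default:default
```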

## 1. Obtaining the Token from Inside the Cluster

### 1.1 Create the namespace sademo

```shell
# Create the namespace
kubectl create ns sademo
```
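An optional quick check that the namespace exists:

```shell
kubectl get ns sademo
```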

### 1.2 Create the role sa-role

```shell
# Create the role:
# write the Role YAML file,
vim sa-role.yaml
# then apply it
kubectl apply -f sa-role.yaml
```

sa-role.yaml:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: sademo # namespace name, change as needed
  name: sa-role     # role name, change as needed
rules:
- apiGroups: [""]   # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```
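As an aside, the same Role manifest can be generated imperatively instead of hand-written; a sketch using kubectl's client-side dry-run:

```shell
# Print an equivalent Role manifest without creating anything
kubectl create role sa-role --verb=get,watch,list --resource=pods \
  -n sademo --dry-run=client -o yaml
```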
### 1.3 Create the role binding user-rolebinding

```shell
# Create the ServiceAccount
kubectl create serviceaccount sa-sa
# Write the RoleBinding YAML file (user-rolebinding.yaml is listed in the appendix)
vim user-rolebinding.yaml
kubectl apply -f user-rolebinding.yaml
```
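A quick sketch for verifying that the ServiceAccount and the binding were created:

```shell
kubectl get serviceaccount sa-sa
kubectl get rolebinding -n sademo
```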

### 1.5 Bind the ServiceAccount (kind is ServiceAccount)

(Section 1.4, the User-based binding, is recorded in the appendix as a failed attempt.)

The YAML for the ServiceAccount binding, sa-rolebinding.yaml:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sa-rolebinding
  namespace: sademo
subjects:
- kind: ServiceAccount
  name: sa-sa # Name is case sensitive
  namespace: sademo
roleRef:
  kind: Role    # this must be Role or ClusterRole
  name: sa-role # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
```
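For reference, the same binding can be created imperatively, and the grant verified with an impersonated access check; a sketch matching the subject in the YAML above:

```shell
# One-line equivalent of sa-rolebinding.yaml
kubectl create rolebinding sa-rolebinding --role=sa-role \
  --serviceaccount=sademo:sa-sa -n sademo
# Verify: may the service account list pods in sademo?
kubectl auth can-i list pods -n sademo --as=system:serviceaccount:sademo:sa-sa
```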

#### 1.5.1 Use the ServiceAccount in a pod

```shell
# Create the pod:
vim sa-nginx-pod.yaml
# apply it,
kubectl apply -f sa-nginx-pod.yaml
# and inspect it
kubectl describe pod sa-nginx-pod
# Enter the pod
# (sa-nginx-pod is the name of the newly created pod; list pods with kubectl get pods)
kubectl exec -it sa-nginx-pod -- /bin/bash
# Set the CA bundle and the token
root@sa-nginx-pod:/# export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
root@sa-nginx-pod:/# TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Use curl to list the pods
root@sa-nginx-pod:/# curl -H "Authorization: Bearer $TOKEN" https://10.4.104.169:6443/api/v1/namespaces/default/pods
# The token can also be read directly from the mounted secret
kubectl exec -it sa-nginx-pod -- /bin/sh
cd /var/run/secrets/kubernetes.io/serviceaccount
cat token
```
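The hard-coded API server IP in the curl above can be avoided: the API server address is injected into every pod as environment variables. A sketch, run inside the pod:

```shell
# KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT are injected into every pod
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods"
```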

sa-nginx-pod.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-nginx-pod
spec:
  serviceAccountName: sa-sa
  containers:
  - image: nginx:latest
    name: nginx
```
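An optional check that the pod was really admitted with the sa-sa account:

```shell
kubectl get pod sa-nginx-pod -o jsonpath='{.spec.serviceAccountName}'
```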

## 2. Verifying the Token Can Be Obtained from Outside the Cluster

```shell
# List the service accounts
kubectl get serviceaccount
# Inspect the service account (this shows the name of its token secret)
kubectl describe sa sa-sa
# Dump the secret as JSON; secretname is the token secret name from the step above
kubectl get secret secretname -o json
# Decode the token field from the JSON
echo "<base64-encoded token>" | base64 --decode
```
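The lookup and decode above can be collapsed into one pipeline; a sketch that assumes the cluster still auto-creates token secrets for service accounts (as it does here):

```shell
# Resolve the token secret name from the service account, then decode its token
SECRET=$(kubectl get sa sa-sa -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
```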


## 3. Verifying the Tokens Are the Same

The token decoded outside the cluster is identical to the one read inside the pod. ✓

```
"eyJhbGciOiJSUzI1NiIsImtpZCI6IlhGbHRva3gyMDB6RnJoYlBhRjJ0XzdvdVZZZjNBLU5ONHhEMFVGdmNiN0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InNhLXNhLXRva2VuLXZ0YjJ0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InNhLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiODljOGVlYWEtZGY5Mi00NjI2LWJkMzUtOTczZmRlODlhYWUyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6c2Etc2EifQ.DdhVDUO6azpa9bY-803SvFm-YAyz4pSOspFBLts91mnxaQEH9AY-yTJt1DR8QBMVz4XnIY0bckkq3SJoIq_ZgOdcBl31EM7srvPl67hkjYF1562i3YmOfDj3D7b3X7hrB3xKdpbK9youlqAtalfR9DnvP3c9H4n5asI2Nu37IoxBuEPLv7Ke_mizfb638sV4rNyhKzx7WWeyLIxmebk-_C7_F6rZlGLgG3245OwLwvYu_HSiZtcRMeK9-pM417PXLqAVOJ5g1VKgn30jXXs6U0ht0DMylziv2en1q_iQfT3YAPfVenqjtaBxmlnZlGuiA9rXGFuCuBxiCWe7viP6Qw"
```

## 4. Testing with Code

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func getConfig() *rest.Config {
	config := rest.Config{
		// Host: "https://10.2.238.171:6443",
		Host: "https://10.4.104.169:6443",
		// ContentConfig: rest.ContentConfig{
		//	GroupVersion:         &v1.SchemeGroupVersion,
		//	NegotiatedSerializer: scheme.Codecs.WithoutConversion(),
		// },
		TLSClientConfig: rest.TLSClientConfig{
			// Skip server certificate verification; for testing only
			Insecure: true,
		},
		// The sa-sa token obtained above
		// BearerToken: "eyJhbGciOiJSUzI1NiIsImtpZCI6IlhlazJtR3QxTGQ0OEhrN1ZoZWE2d2NuenZEbEc4WjNSeXV6RnlvUEpobEUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImN2YmFja3VwLXRva2VuLTg5OHQ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImN2YmFja3VwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZDRjYTE0NzQtNTM5Yy00Y2FhLTllNDMtNmM3OWUwNDQwYTExIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y3ZiYWNrdXAifQ.GY4UVEtXufBvDGuqhxZAzqTcdGh-wUpJ9bHYsCu5ds0uFqMNmu4iQBueW6aJJRnAl1ziiRFYANvMIg5PMGlszg1H7EuDTg79dUD0j1PNE7JgDCeP0yIzqEKXJlWQKK7W8WIa-KSMK3JyHjfN3h2tOMgilWbxweKBc5lzxHWJlyXI2caoV_ihx6pWRBIyadUBl1ptgc7GrkMnlEcstgbLtUcq7z5pgptTFXFRFi-4_SwsSo9QiCkDi0QxiuziESnbbzPUGkdqb87uL_yKhT0SvWTFAJ3N4gLq4Zi76hQvgcUT5i2zSV8T7BPgMA_cSOodRqyJqJyfHqqhMYuQU-sGlg",
		BearerToken: "eyJhbGciOiJSUzI1NiIsImtpZCI6IlhGbHRva3gyMDB6RnJoYlBhRjJ0XzdvdVZZZjNBLU5ONHhEMFVGdmNiN0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InNhLXNhLXRva2VuLXZ0YjJ0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InNhLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiODljOGVlYWEtZGY5Mi00NjI2LWJkMzUtOTczZmRlODlhYWUyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6c2Etc2EifQ.DdhVDUO6azpa9bY-803SvFm-YAyz4pSOspFBLts91mnxaQEH9AY-yTJt1DR8QBMVz4XnIY0bckkq3SJoIq_ZgOdcBl31EM7srvPl67hkjYF1562i3YmOfDj3D7b3X7hrB3xKdpbK9youlqAtalfR9DnvP3c9H4n5asI2Nu37IoxBuEPLv7Ke_mizfb638sV4rNyhKzx7WWeyLIxmebk-_C7_F6rZlGLgG3245OwLwvYu_HSiZtcRMeK9-pM417PXLqAVOJ5g1VKgn30jXXs6U0ht0DMylziv2en1q_iQfT3YAPfVenqjtaBxmlnZlGuiA9rXGFuCuBxiCWe7viP6Qw",
	}
	return &config
}

func main() {
	// The group/version/resource to query
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	// Build the rest.Config
	config := getConfig()
	// Create a dynamic client
	dynamicClient, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Fetch the mysql deployment in the default namespace
	resStruct, err := dynamicClient.Resource(gvr).Namespace("default").Get(context.TODO(), "mysql", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Serialize the unstructured object to JSON
	j, err := resStruct.MarshalJSON()
	if err != nil {
		panic(err)
	}
	// Print the JSON
	fmt.Println(string(j))
}
```
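For comparison, the same read the Go program performs can be issued with plain curl against the REST path the dynamic client builds (`-k` mirrors `Insecure: true`); a sketch:

```shell
curl -k -H "Authorization: Bearer <the sa-sa token above>" \
  https://10.4.104.169:6443/apis/apps/v1/namespaces/default/deployments/mysql
```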

Result: the JSON of the mysql deployment is printed.

## Appendix (Records of Failed Attempts)

### Attempt 1: Creating a user (not a Linux user)

Binding by username (kind is User).

The role-binding YAML, user-rolebinding.yaml:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: user-rolebinding # RoleBinding name, change as needed
  namespace: sademo
subjects:
- kind: User
  name: lucy # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role    # this must be Role or ClusterRole
  name: sa-role # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
```

#### 1.4.1 Create a directory for the user

```shell
mkdir -p /usr/local/k8s/lucy
vim lucy-csr.json
```

lucy-csr.json ("O" is the organization, which Kubernetes maps to the group):

```json
{
  "CN": "lucy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
```

#### 1.4.2 Download the certificate tools

```shell
mkdir -p /usr/local/bin
cd /usr/local/bin
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# Rename so the tools can be invoked as cfssl / cfssljson below
mv cfssl_linux-amd64 cfssl
mv cfssljson_linux-amd64 cfssljson
mv cfssl-certinfo_linux-amd64 cfssl-certinfo
chmod a+x cfssl cfssljson cfssl-certinfo
```

Check the Kubernetes PKI files under lucy's directory:

```shell
ls /usr/local/k8s/lucy/etc/kubernetes/pki
```

#### 1.4.3 Generate the certificate and set the parameters

```shell
# Generate the client certificate
cfssl gencert -ca=ca.crt -ca-key=ca.key -profile=kubernetes /usr/local/k8s/lucy/lucy-csr.json | cfssljson -bare lucy
# ls now shows lucy-key.pem and lucy.pem
# Enter the lucy directory and set the cluster parameters
cd /usr/local/k8s/lucy
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://10.4.104.169:6443 \
  --kubeconfig=lucy.kubeconfig
# Set the client credentials
kubectl config set-credentials lucy \
  --client-key=/etc/kubernetes/pki/lucy-key.pem \
  --client-certificate=/etc/kubernetes/pki/lucy.pem \
  --embed-certs=true \
  --kubeconfig=lucy.kubeconfig
# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=lucy \
  --namespace=sademo \
  --kubeconfig=lucy.kubeconfig
# Set the default context (optional)
kubectl config use-context default --kubeconfig=lucy.kubeconfig
# Bind the role in the sademo namespace
kubectl create rolebinding devuser-admin-binding \
  --clusterrole=admin \
  --user=lucy \
  --namespace=sademo
# Create a .kube directory for lucy
mkdir /home/lucy/.kube
# Copy the kubeconfig into it
cp lucy.kubeconfig /home/lucy/.kube
# Make lucy the owner of the file
chown lucy:lucy /home/lucy/.kube/lucy.kubeconfig
```
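Before handing the kubeconfig over, it can be exercised directly to see what lucy is allowed to do; a sketch:

```shell
# Act as lucy via the generated kubeconfig
kubectl get pods -n sademo --kubeconfig=/usr/local/k8s/lucy/lucy.kubeconfig
# Or ask the API server via impersonation
kubectl auth can-i list pods -n sademo --as=lucy
```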

Result: invalid user!

### Attempt 2: Test token provided by CV

The same operations as above, but more concise; used for testing.

```shell
# (Test commands provided by CV)
# Create a Kubernetes service account, e.g. cvbackup
kubectl create serviceaccount cvbackup
# To ensure the service account has sufficient privileges for data-protection
# operations, add it to the default-sa-crb cluster role binding
kubectl create clusterrolebinding default-sa-crb \
  --clusterrole=cluster-admin \
  --serviceaccount=default:cvbackup
# Extract the service account token needed to configure the Kubernetes cluster for data protection
kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='cvbackup')].data.token}" | base64 --decode
```
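An optional check that the cvbackup account really holds cluster-admin:

```shell
kubectl auth can-i '*' '*' --as=system:serviceaccount:default:cvbackup
```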