7.0 Mind Map


7.1 K8s Storage Types

emptyDir

  • Temporary storage volume used to share data between containers within a Pod

Example:

```yaml
# cat emptyDir-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ed-pod
  name: ed-pod
spec:
  containers:
  - image: nginx
    name: ed-pod
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```
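The bullet above describes emptyDir as a way to share data between containers in the same Pod, which the single-container example does not show. A minimal hedged sketch with two containers (the Pod name ed-share-pod and the writer/reader container names are illustrative) sharing one emptyDir:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ed-share-pod
spec:
  containers:
  - name: writer
    image: busybox
    # Writes a file into the shared volume, then stays alive
    command: ["sh", "-c", "echo hello > /cache/data.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - name: reader
    image: busybox
    # Reads the file written by the other container from the same volume
    command: ["sh", "-c", "sleep 5; cat /cache/data.txt; sleep 3600"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```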

hostPath

  • Maps a directory or file on the host into the container

```yaml
# cat hP-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hp-pod
  name: hp-pod
spec:
  containers:
  - image: nginx
    name: hp-pod
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    hostPath:
      path: /tmp
      type: Directory
```
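A hedged verification sketch for the hostPath example above (file and Pod names follow the manifest):

```bash
kubectl apply -f hP-pod.yaml
# The container's /cache should show the contents of /tmp on the node running the Pod
kubectl exec hp-pod -- ls /cache
```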

NFS

https://www.yuque.com/docs/share/9e24efba-f47b-438b-946b-a6565ec50fe6?# ("16. File Server")

```bash
# On the NFS server:
yum install nfs-utils -y
vi /etc/exports          # add the line: /nfs/kubernetes *(rw,no_root_squash)
mkdir -p /nfs/kubernetes
systemctl start nfs
systemctl enable nfs

# On a client, verify that the export can be mounted:
mount -t nfs 192.168.6.22:/nfs/kubernetes /tmp/test
```
  • no_root_squash: with this option the NFS server does not map root requests from clients to an anonymous or unprivileged user. By default NFS maps client root requests to an anonymous user (root_squash) for security. If you trust the clients and want root to have full access as on a local filesystem, use no_root_squash.
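A quick hedged check that the export is visible (server IP taken from the commands above; showmount is part of nfs-utils):

```bash
# List the directories exported by the NFS server
showmount -e 192.168.6.22
```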
```yaml
# cat nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nfs-pod
  name: nfs-pod
spec:
  containers:
  - image: nginx
    name: nfs-pod
    volumeMounts:
    - mountPath: /cache
      name: nfs-client-root
  volumes:
  - name: nfs-client-root
    nfs:
      server: 192.168.6.22
      path: /nfs/kubernetes
```

PV and PVC

```yaml
# cat pvc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pvc-pod
  name: pvc-pod
spec:
  containers:
  - image: nginx
    name: pvc-pod
    volumeMounts:
    - mountPath: /cache
      name: www-pvc
  volumes:
  - name: www-pvc
    persistentVolumeClaim:
      claimName: mypvc001
```

```yaml
# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc001
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```
```yaml
# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.6.22
    path: /nfs/kubernetes/pv001
```
  • A PVC can reference a specific static PV via the volumeName field (setting storageClassName: "" prevents the PVC from automatically binding to the default StorageClass)
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  volumeName: pv001
  accessModes:
  - ReadWriteMany
  storageClassName: "" # empty storage class here prevents the PVC from binding to the default StorageClass
  resources:
    requests:
      storage: 2Gi
```
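A quick, hedged way to confirm the static binding (file names follow the examples above):

```bash
kubectl apply -f pv.yaml -f pvc.yaml -f pvc-pod.yaml
# Both objects should report STATUS "Bound" once the claim matches the PV's capacity and access modes
kubectl get pv,pvc
```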

storageClass

  • Dynamic provisioning of PVs


K8s does not support NFS dynamic provisioning out of the box; you need to deploy the community-developed plugin separately.

Project: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

Deploying the storageClass

```bash
# Deploy:
cd deploy
kubectl apply -f rbac.yaml        # grant the provisioner access to the apiserver
kubectl apply -f deployment.yaml  # deploy the plugin; edit the NFS server address and export path inside
kubectl apply -f class.yaml       # create the StorageClass
kubectl get sc                    # list StorageClasses

# To make it the default StorageClass ("local" here is the SC name)
kubectl patch sc local -p '{"metadata": {"annotations": {"storageclass.beta.kubernetes.io/is-default-class": "true"}}}'
```
```yaml
# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
```yaml
# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-subdir-external-provisioner:v4.0.1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.6.22    # NFS server address
            - name: NFS_PATH
              value: /nfs/kubernetes # exported path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.6.22    # NFS server address
            path: /nfs/kubernetes   # exported path
```
  • The NFS export path here must have 777 permissions; with stricter permissions provisioning fails with an error like:

```
I0329 10:20:33.637376 1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mypvc-sc", UID:"2651dc3c-e1b2-468e-a549-cc4ee799b1e3", APIVersion:"v1", ResourceVersion:"17446", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "managed-nfs-storage": unable to create directory to provision new pv: mkdir /persistentvolumes/default-mypvc-sc-pvc-2651dc3c-e1b2-468e-a549-cc4ee799b1e3: permission denied
```
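A minimal hedged fix on the NFS server (the path follows the examples above):

```bash
# Open up permissions on the export so the provisioner can create per-PVC directories
chmod -R 777 /nfs/kubernetes
```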

```yaml
# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
```

Troubleshooting

```
Mounting command: mount
Mounting arguments: -t nfs 10.27.0.30:/kube/nfs/kubernetes-mysql /app/kubelet/pods/50c30d8b-c52b-4c94-a26f-0a1f35e7398c/volumes/kubernetes.io~nfs/nfs-client-root
Output: mount: /app/kubelet/pods/50c30d8b-c52b-4c94-a26f-0a1f35e7398c/volumes/kubernetes.io~nfs/nfs-client-root: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
```


  • Cause: the node does not have the NFS client installed; fix it with yum install nfs-utils

Test

  • A folder is created in the NFS directory: --
```yaml
# cat pvc-sc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: sc-pod
  name: sc-pod
spec:
  containers:
  - image: nginx
    name: sc-pod
    volumeMounts:
    - mountPath: /cache
      name: www-sc
  volumes:
  - name: www-sc
    persistentVolumeClaim:
      claimName: mypvc-sc
```

```yaml
# cat pvc-sc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-sc
spec:
  storageClassName: managed-nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
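A hedged verification sketch (names follow the manifests above):

```bash
kubectl apply -f pvc-sc.yaml -f pvc-sc-pod.yaml
kubectl get pvc mypvc-sc   # should become Bound once the provisioner creates a PV
kubectl get pv             # a dynamically provisioned PV appears
# On the NFS server, a directory such as default-mypvc-sc-pvc-<uid> is created under the export
ls /nfs/kubernetes
```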

7.2 StatefulSet


Test

  • Prerequisite: a storageClass has already been deployed
```yaml
# cat web.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: yyy
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  namespace: yyy
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: managed-nfs-storage
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```
  • Stable, unique network identity

```bash
for i in 0 1; do kubectl -n yyy exec "web-$i" -- sh -c 'hostname'; done
for i in 0 1; do kubectl -n yyy exec "web-$i" -- sh -c 'hostname -i'; done
```

  • Check their in-cluster DNS records
```bash
k run -i --tty --image=busybox:1.28.4 dns-test --restart=Never --rm -n yyy
If you don't see a command prompt, try pressing enter.
/ # nslookup web-0
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'web-0'
/ # nslookup web-0.nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      web-0.nginx
Address 1: 10.244.169.133 web-0.nginx.yyy.svc.cluster.local
/ # nslookup web-1.nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      web-1.nginx
Address 1: 10.244.36.123 web-1.nginx.yyy.svc.cluster.local
```

Delete the Pods and watch how they are recreated

```bash
watch -n 1 kubectl -n yyy get pod,sts
kubectl -n yyy delete pods -l app=nginx
```

You will notice that the Pod IPs have changed!

Do not reference a StatefulSet Pod's IP address from other applications (very important)

Serving content by hostname

```bash
for i in 0 1; do kubectl -n yyy exec "web-$i" -- sh -c 'echo "$(hostname)" > /usr/share/nginx/html/index.html'; done
for i in 0 1; do kubectl -n yyy exec -i -t "web-$i" -- curl http://localhost/; done
```
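Other workloads should reach the replicas through their stable DNS names rather than Pod IPs. A hedged illustration, reusing the names from the manifests above and curl as already used inside the web Pods:

```bash
# From any Pod in namespace yyy, a specific replica is reachable by its stable name
kubectl -n yyy exec web-1 -- curl -s http://web-0.nginx/
```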

Persistent storage

Delete the Pods; once they are recreated and become Ready, running the curl commands above again still returns each Pod's hostname.

Although web-0 and web-1 were rescheduled, they keep serving their own hostnames, because the PersistentVolumes associated with their PersistentVolumeClaims are remounted at their volumeMounts. No matter which nodes web-0 and web-1 are scheduled onto, their PersistentVolumes are mounted at the correct mount points.

Scaling a StatefulSet

  • Use two terminals

Scale up

```bash
$ kubectl get pod -n yyy -w -l app=nginx
NAME    READY   STATUS              RESTARTS   AGE
web-0   1/1     Running             1          18h
web-1   1/1     Running             1          18h
web-2   0/1     Pending             0          0s
web-2   0/1     Pending             0          0s
web-2   0/1     Pending             0          2s
web-2   0/1     ContainerCreating   0          2s
web-2   0/1     ContainerCreating   0          3s
web-2   1/1     Running             0          20s
web-3   0/1     Pending             0          0s
web-3   0/1     Pending             0          0s
web-3   0/1     Pending             0          2s
web-3   0/1     ContainerCreating   0          2s
web-3   0/1     ContainerCreating   0          3s
web-3   1/1     Running             0          20s
web-4   0/1     Pending             0          0s
web-4   0/1     Pending             0          0s
web-4   0/1     Pending             0          2s
web-4   0/1     ContainerCreating   0          2s
web-4   0/1     ContainerCreating   0          3s
web-4   1/1     Running             0          20s
```
```bash
$ kubectl -n yyy scale sts web --replicas=5
```

Scale down

```bash
$ kubectl -n yyy get pod -w -l app=nginx
NAME    READY   STATUS        RESTARTS   AGE
web-0   1/1     Running       1          18h
web-1   1/1     Running       1          18h
web-2   1/1     Running       0          5m30s
web-3   1/1     Running       0          5m10s
web-4   1/1     Running       0          4m50s
web-4   1/1     Terminating   0          7m10s
web-4   1/1     Terminating   0          7m11s
web-4   0/1     Terminating   0          7m12s
web-4   0/1     Terminating   0          7m22s
web-4   0/1     Terminating   0          7m22s
web-3   1/1     Terminating   0          7m42s
web-3   1/1     Terminating   0          7m42s
web-3   0/1     Terminating   0          7m43s
web-3   0/1     Terminating   0          7m46s
web-3   0/1     Terminating   0          7m46s
```

```bash
$ kubectl -n yyy patch sts web -p '{"spec":{"replicas":3}}'
$ kubectl -n yyy get pvc -l app=nginx
$ kubectl get pv -l app=nginx
```
  • Ordered termination: the controller deletes one Pod at a time, in reverse order of the Pod ordinal index, and waits for each Pod to shut down completely before deleting the next one.
  • The five PersistentVolumeClaims and five PersistentVolumes still exist. Looking at the Pods' stable storage, we can see that deleting a StatefulSet's Pods does not delete the PersistentVolumes that were mounted by those Pods.
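If the claims left behind by the scale-down are no longer needed, they can be removed by hand. A hedged sketch; the claim names follow the volumeClaimTemplates pattern <template>-<sts>-<ordinal>:

```bash
# Remove the PVCs of the deleted replicas (their PVs follow the reclaim policy of the StorageClass)
kubectl -n yyy delete pvc www-web-3 www-web-4
```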

Updates (upgrades)

The update strategy is determined by the spec.updateStrategy field of the StatefulSet API object. This feature can be used to update the container images, resource requests and limits, labels, and annotations of the Pods in a StatefulSet. RollingUpdate is the default strategy for StatefulSets.
```bash
kubectl edit sts web -n yyy
# or
kubectl patch sts web -n yyy --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "nginx:1.17"}]'
```
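RollingUpdate also supports a partition for canary-style rollouts. A hedged sketch, with an illustrative partition value:

```bash
# Only Pods with an ordinal >= the partition are updated; web-0 and web-1 keep the old image
kubectl -n yyy patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
```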
```bash
$ k get po -w -l app=nginx -n yyy
NAME    READY   STATUS              RESTARTS   AGE
web-0   1/1     Running             1          18h
web-1   1/1     Running             1          18h
web-2   1/1     Running             0          29m
web-2   1/1     Terminating         0          30m
web-2   1/1     Terminating         0          30m
web-2   0/1     Terminating         0          30m
web-2   0/1     Terminating         0          30m
web-2   0/1     Terminating         0          30m
web-2   0/1     Pending             0          0s
web-2   0/1     Pending             0          0s
web-2   0/1     ContainerCreating   0          0s
web-2   0/1     ContainerCreating   0          0s
web-2   1/1     Running             0          18s
web-1   1/1     Terminating         1          18h
web-1   1/1     Terminating         1          18h
web-1   0/1     Terminating         1          18h
web-1   0/1     Terminating         1          18h
web-1   0/1     Terminating         1          18h
web-1   0/1     Pending             0          0s
web-1   0/1     Pending             0          0s
web-1   0/1     ContainerCreating   0          0s
web-1   0/1     ContainerCreating   0          0s
web-1   1/1     Running             0          22s
web-0   1/1     Terminating         1          18h
web-0   1/1     Terminating         1          18h
web-0   0/1     Terminating         1          18h
web-0   0/1     Terminating         1          18h
web-0   0/1     Terminating         1          18h
web-0   0/1     Pending             0          0s
web-0   0/1     Pending             0          0s
web-0   0/1     ContainerCreating   0          0s
web-0   0/1     ContainerCreating   0          1s
web-0   1/1     Running             0          18s
```

Check the images

```bash
for p in 0 1 2; do kubectl -n yyy get pod "web-$p" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done

$ kubectl rollout status sts/web -n yyy
partitioned roll out complete: 3 new pods have been updated...
```

7.3 configMap

Application configuration

There are two ways to consume ConfigMap data:

  • Environment variable injection
  • Volume mount (a sketch follows the env-injection example below)
```bash
k create configmap myconfigmap --from-literal=admin=weblogic --from-literal=password=welcome1 -o yaml --dry-run=client > myconfigmap.yaml
cat myconfigmap.yaml
```

```yaml
apiVersion: v1
data:
  admin: weblogic
  password: welcome1
kind: ConfigMap
metadata:
  name: myconfigmap
```
```yaml
# cat mycm-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mycm-pod
  name: mycm-pod
spec:
  containers:
  - image: nginx
    name: mycm-pod
    env:
    - name: USERNAME
      valueFrom:
        configMapKeyRef:
          name: myconfigmap
          key: admin
    - name: PASSWORD
      valueFrom:
        configMapKeyRef:
          name: myconfigmap
          key: password
```
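The second way listed above is mounting the ConfigMap as a volume. A minimal hedged sketch reusing myconfigmap (the Pod name, volume name, and mount path are illustrative); each key becomes a file under the mount path:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mycm-vol-pod
spec:
  containers:
  - image: nginx
    name: mycm-vol-pod
    volumeMounts:
    - name: config-volume
      mountPath: /etc/myconfig   # files "admin" and "password" appear here
  volumes:
  - name: config-volume
    configMap:
      name: myconfigmap
```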

7.4 Secret

```bash
k create secret generic mysecret --from-literal=username=root --from-literal=password=gsdx_123 -o yaml --dry-run=client > mysecret.yaml
cat mysecret.yaml
```

```yaml
apiVersion: v1
data:
  password: Z3NkeF8xMjM=
  username: cm9vdA==
kind: Secret
metadata:
  name: mysecret
```
```yaml
# cat mysecret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mysecret-pod
  name: mysecret-pod
spec:
  containers:
  - image: nginx
    name: mycm-pod
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: root
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
```
  • Exercise: find and fix the error in this manifest yourself
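While troubleshooting, remember that Secret data is only base64-encoded, not encrypted. A hedged sketch for inspecting the values stored above:

```bash
# Decode the values from mysecret.yaml
echo cm9vdA== | base64 -d        # root
echo Z3NkeF8xMjM= | base64 -d    # gsdx_123
```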