[toc]

Overview of PV and PVC

Managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To do this, two API resources are introduced: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins like Volumes, but they have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the storage implementation, be it NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources, while PVCs consume storage resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes.
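
To make the PV/PVC relationship concrete, here is a minimal illustrative pair for static provisioning; the NFS server address, export path, and object names are placeholders, not from this article:

```yaml
# Hypothetical statically provisioned PV backed by NFS (server/path are placeholders).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.10   # placeholder NFS server
    path: /exports/demo    # placeholder export path
---
# A PVC requesting storage; Kubernetes binds it to a matching PV such as the one above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```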

While PersistentVolumeClaims let users consume abstract storage resources, users commonly need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access mode, without exposing users to the details of how those volumes are implemented. For these needs there is the StorageClass resource.

A StorageClass gives administrators a way to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what the classes represent. This concept is sometimes called "profiles" in other storage systems.

Dynamic provisioning

Dynamic provisioning means PVs are created for you automatically: whatever capacity a claim requests, a PV of that size is created. Kubernetes creates the PV itself; when you create a PVC, the API invokes the storage class to provision a matching PV.

With static provisioning, PVs have to be created by hand, and if there is no suitable PV with enough capacity, the pod stays in the Pending state. Dynamic provisioning is implemented by the StorageClass object: it declares which storage backend to use, handles the connection to it, and creates PVs automatically, as sketched below.
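
A sketch of the flow (the provisioner value here is a placeholder; the real ones used in this article are ceph.com/rbd and ceph.com/cephfs): a StorageClass names a provisioner, and a PVC that references the class triggers automatic PV creation.

```yaml
# Illustrative StorageClass; the provisioner below is a placeholder.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: example.com/some-provisioner   # placeholder; see ceph.com/rbd below
---
# A PVC referencing the class; the provisioner creates a matching PV on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-demo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 1Gi
```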

Using RBD as a persistent data volume for Pods

Installation and configuration

RBD supports the ReadWriteOnce and ReadOnlyMany access modes.
1. Deploy the rbd-provisioner

```bash
cat >external-storage-rbd-provisioner.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: rbd-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "quay.io/external_storage/rbd-provisioner:v2.0.0-k8s1.11"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
EOF
kubectl apply -f external-storage-rbd-provisioner.yaml
```
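
Before moving on, it helps to confirm the provisioner pod is up; the label selector matches the Deployment above:

```bash
kubectl -n kube-system get pods -l app=rbd-provisioner
```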

2. Prepare the Ceph client, pool, user, and secrets

```bash
# 1. When creating a pod, kubelet uses the rbd command to detect and map the Ceph
#    image backing the PV, so install the Ceph client (ceph-common) on every worker
#    node, and copy Ceph's ceph.client.admin.keyring and ceph.conf files into
#    /etc/ceph on the master.
yum -y install ceph-common

# 2. Create the osd pool (on a Ceph mon or admin node)
ceph osd pool create kube 128 128
ceph osd pool ls

# 3. Create the user k8s will use to access Ceph (on a Ceph mon or admin node)
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring

# 4. View the keys (on a Ceph mon or admin node)
ceph auth get-key client.admin
ceph auth get-key client.kube

# 5. Create the admin secret
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key=AQCtovZdgFEhARAAoKhLtquAyM8ROvmBv55Jig== \
  --namespace=kube-system

# 6. Create the user secret in the default namespace, used by PVCs to access Ceph
kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
  --from-literal=key=AQAM9PxdEFi3AhAAzvvhuyk1AfN5twlY+4zNMA== \
  --namespace=default
```
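
The two key values above come from the author's cluster; substitute your own output of `ceph auth get-key`. A sketch that reads the live key directly, assuming kubectl and the ceph CLI are usable on the same node:

```bash
# Same admin secret, but reading the key straight from Ceph instead of a pasted literal.
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)" \
  --namespace=kube-system
```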

3. Define the StorageClass

```bash
cat >storageclass-ceph-rdb.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-ceph-rdb
provisioner: ceph.com/rbd
parameters:
  monitors: 10.151.30.125:6789,10.151.30.126:6789,10.151.30.127:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
EOF
```

4. Apply the manifest

```bash
kubectl apply -f storageclass-ceph-rdb.yaml
```

5. Check the StorageClass

```bash
kubectl get storageclasses
```

Testing

1. Create a test PVC

```bash
cat >ceph-rdb-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rdb-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: dynamic-ceph-rdb
  resources:
    requests:
      storage: 2Gi
EOF
kubectl apply -f ceph-rdb-pvc-test.yaml
```

2. Check

```bash
kubectl get pvc
kubectl get pv
```

3. Create an nginx pod to test the mount

```bash
cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
    - name: nginx-pod1
      image: nginx:alpine
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: ceph-rdb
          mountPath: /usr/share/nginx/html
  volumes:
    - name: ceph-rdb
      persistentVolumeClaim:
        claimName: ceph-rdb-claim
EOF
kubectl apply -f nginx-pod.yaml
```

4. Check

```bash
kubectl get pods -o wide
```

5. Write a test file

```bash
kubectl exec -ti nginx-pod1 -- /bin/sh -c 'echo this is from Ceph RBD!!! > /usr/share/nginx/html/index.html'
```

6. Access test

```bash
curl http://$podip
```
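
`$podip` stands for the pod IP reported by `kubectl get pods -o wide`; a small sketch that captures it via jsonpath instead of copying it by hand:

```bash
# Look up the pod IP and store it for the curl test.
podip=$(kubectl get pod nginx-pod1 -o jsonpath='{.status.podIP}')
curl http://$podip
```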

7. Clean up

```bash
kubectl delete -f nginx-pod.yaml
kubectl delete -f ceph-rdb-pvc-test.yaml
```
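
Dynamically provisioned PVs are created with the Delete reclaim policy by default, so removing the PVC should also remove the PV and its backing RBD image; you can confirm with:

```bash
kubectl get pv
```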

Using CephFS as a persistent data volume for Pods

CephFS supports all three Kubernetes PV access modes: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

Create the CephFS pools on the Ceph side

1. Run the following on a Ceph mon or admin node.
CephFS needs two pools, one for data and one for metadata:

```bash
ceph osd pool create fs_data 128
ceph osd pool create fs_metadata 128
ceph osd lspools
```

2. Create a CephFS

```bash
ceph fs new cephfs fs_metadata fs_data
```

3. Check

```bash
ceph fs ls
```

Deploy cephfs-provisioner

1. Use the community-provided cephfs-provisioner

```bash
cat >external-storage-cephfs-provisioner.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cephfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: "quay.io/external_storage/cephfs-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
EOF
kubectl apply -f external-storage-cephfs-provisioner.yaml
```

2. Check the status, and wait until the pod is Running before continuing

```bash
kubectl get pod -n kube-system
```

Configure the StorageClass

1. View the key (on a Ceph mon or admin node)

```bash
ceph auth get-key client.admin
```

2. Create the admin secret (if ceph-secret already exists from the RBD section above, reuse it and skip this step)

```bash
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key=AQCtovZdgFEhARAAoKhLtquAyM8ROvmBv55Jig== \
  --namespace=kube-system
```

3. Check the secret

```bash
kubectl get secret ceph-secret -n kube-system -o yaml
```

4. Define the StorageClass

```bash
cat >storageclass-cephfs.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.151.30.125:6789,10.151.30.126:6789,10.151.30.127:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: "kube-system"
  claimRoot: /volumes/kubernetes
EOF
```

5. Apply

```bash
kubectl apply -f storageclass-cephfs.yaml
```

6. Check

```bash
kubectl get sc
```

Testing

1. Create a test PVC

```bash
cat >cephfs-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 2Gi
EOF
kubectl apply -f cephfs-pvc-test.yaml
```

2. Check

```bash
kubectl get pvc
kubectl get pv
```

3. Create an nginx pod to test the mount

```bash
cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod2
  labels:
    name: nginx-pod2
spec:
  containers:
    - name: nginx-pod2
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: cephfs
          mountPath: /usr/share/nginx/html
  volumes:
    - name: cephfs
      persistentVolumeClaim:
        claimName: cephfs-claim
EOF
kubectl apply -f nginx-pod.yaml
```
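
Because the claim is ReadWriteMany, a second pod can mount the same PVC at the same time, which is what distinguishes CephFS from RBD here. A minimal sketch (the pod name and file name are hypothetical):

```bash
# Hypothetical second pod sharing the same CephFS-backed claim (RWX).
cat >nginx-pod3.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod3
spec:
  containers:
    - name: nginx-pod3
      image: nginx
      volumeMounts:
        - name: cephfs
          mountPath: /usr/share/nginx/html
  volumes:
    - name: cephfs
      persistentVolumeClaim:
        claimName: cephfs-claim
EOF
kubectl apply -f nginx-pod3.yaml
```

If you create it, remember to delete nginx-pod3.yaml along with the others in the cleanup step.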

4. Check

```bash
kubectl get pods -o wide
```

5. Write a test file

```bash
kubectl exec -ti nginx-pod2 -- /bin/sh -c 'echo This is from CephFS!!! > /usr/share/nginx/html/index.html'
```

6. Access the pod to test (get $podip as in the RBD section)

```bash
curl http://$podip
```

7. Clean up

```bash
kubectl delete -f nginx-pod.yaml
kubectl delete -f cephfs-pvc-test.yaml
```