Deploying a StorageClass backed by NFS

01-nfs-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # namespace: default  # set according to your environment
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
02-nfs-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage
# parameters:
#   archiveOnDelete: "true"
# reclaimPolicy: Delete
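
As a quick sanity check of the class, a standalone PVC can request storage from it through the `storageClassName` field (the claim name `test-claim` and the 1Mi size below are illustrative, not part of the deployment above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim        # illustrative name
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany       # NFS supports shared read-write access
  resources:
    requests:
      storage: 1Mi
```

If the provisioner is healthy, the claim should go to Bound shortly after `kubectl apply`, and a matching directory should appear under the NFS export.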
03-nfs-provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage  # must match the provisioner field in the StorageClass
            - name: NFS_SERVER
              value: NFS_SERVER_IP
            - name: NFS_PATH
              value: /yunwei/prometheus
      volumes:
        - name: nfs-client-root
          nfs:
            server: NFS_SERVER_IP
            path: /yunwei/prometheus
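
The Deployment above assumes the NFS server already exports /yunwei/prometheus to the cluster nodes. A typical export entry might look like this (the client network 192.168.0.0/16 and the mount options are assumptions; adjust them to your environment):

```
# /etc/exports on the NFS server
/yunwei/prometheus 192.168.0.0/16(rw,sync,no_root_squash)
```

After editing /etc/exports, re-export with `exportfs -arv`; `no_root_squash` is commonly needed because the provisioner creates and deletes directories as root.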
Usage example: nginx-statefulset.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx-headless  # must match the headless Service name above
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: ikubernetes/myapp:v1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1G
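
volumeClaimTemplates creates one PVC per replica, named `<template-name>-<statefulset-name>-<ordinal>`. A small sketch of that naming rule (the helper function is illustrative, not Kubernetes code):

```python
def pvc_names(template: str, statefulset: str, replicas: int) -> list:
    """PVC names a StatefulSet derives from a volumeClaimTemplate:
    <template-name>-<statefulset-name>-<ordinal>, one per replica."""
    return [f"{template}-{statefulset}-{i}" for i in range(replicas)]

# For the manifest above: template "www", StatefulSet "web", replicas 2.
names = pvc_names("www", "web", 2)  # ['www-web-0', 'www-web-1']
```

These generated PVCs are what `kubectl get pvc` shows after the StatefulSet starts, and each binds a dynamically provisioned PV backed by its own directory on the NFS export.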
Effect of the StorageClass reclaim policy on data

• Configuration 1:
  archiveOnDelete: "false"
  reclaimPolicy: Delete  # Delete is the default

  1. After a Pod is deleted and recreated, the data still exists; the old Pod's name and data are reused by the new Pod.
  2. After the StorageClass is deleted and recreated, the data still exists; the old Pod's name and data are reused by the new Pod.
  3. After the PVC is deleted, the PV is deleted and the corresponding data on the NFS server is removed as well.

• Configuration 2:
  archiveOnDelete: "false"
  reclaimPolicy: Retain

  1. After a Pod is deleted and recreated, the data still exists; the old Pod's name and data are reused by the new Pod.
  2. After the StorageClass is deleted and recreated, the data still exists; the old Pod's name and data are reused by the new Pod.
  3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the data on the NFS server is retained.
  4. After the StorageClass is recreated, a new PVC binds to a new PV; the old data can be recovered by copying it into the new PV.

• Configuration 3:
  archiveOnDelete: "true"
  reclaimPolicy: Retain

  1. After a Pod is deleted and recreated, the data still exists; the old Pod's name and data are reused by the new Pod.
  2. After the StorageClass is deleted and recreated, the data still exists; the old Pod's name and data are reused by the new Pod.
  3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the data on the NFS server is retained.
  4. After the StorageClass is recreated, a new PVC binds to a new PV; the old data can be recovered by copying it into the new PV.
    
• Configuration 4:
  archiveOnDelete: "true"
  reclaimPolicy: Delete

  1. After a Pod is deleted and recreated, the data still exists; the old Pod's name and data are reused by the new Pod.
  2. After the StorageClass is deleted and recreated, the data still exists; the old Pod's name and data are reused by the new Pod.
  3. After the PVC is deleted, the PV is deleted, but because archiveOnDelete is "true" the backing directory on the NFS server is not removed: the provisioner renames it with an archived- prefix, so the data is preserved.
  4. After the StorageClass is recreated, a new PVC binds to a new PV; the old data can be recovered by copying it out of the archived directory into the new PV.
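
The archiveOnDelete behavior above boils down to a choice between removing and renaming the volume's backing directory when the PV is reclaimed. A minimal Python sketch of that decision, run against a throwaway directory standing in for the NFS export (function and directory names are illustrative, not the provisioner's actual code):

```python
import shutil
import tempfile
from pathlib import Path
from typing import Optional

def reclaim(volume_dir: Path, archive_on_delete: bool) -> Optional[Path]:
    """Sketch of the provisioner's delete path: archive the directory
    (rename it with an 'archived-' prefix) or remove it outright."""
    if archive_on_delete:
        archived = volume_dir.with_name("archived-" + volume_dir.name)
        volume_dir.rename(archived)
        return archived
    shutil.rmtree(volume_dir)
    return None

# Demo on a temporary directory standing in for the NFS export root.
root = Path(tempfile.mkdtemp())
(root / "pvc-1234").mkdir()
archived = reclaim(root / "pvc-1234", archive_on_delete=True)   # renamed

(root / "pvc-5678").mkdir()
removed = reclaim(root / "pvc-5678", archive_on_delete=False)   # deleted
```

With archiveOnDelete "true" the data survives under the archived- name and must be copied back by hand if a new PV should reuse it; with "false" it is gone as soon as the PV is reclaimed.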