Deploying an NFS-backed StorageClass
- 01-nfs-rbac.yaml
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # namespace: default  # set according to your environment
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```
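Note: the upstream nfs-client-provisioner deploy manifests also ship a namespaced Role/RoleBinding used for leader election. A sketch, with the `leader-locking-nfs-client-provisioner` name taken from the upstream example (verify against the version you deploy):

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # namespace: default  # same namespace as the ServiceAccount
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```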
- 02-nfs-storageclass.yaml
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage
# parameters:
#   archiveOnDelete: "true"
# reclaimPolicy: Delete
```
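Besides the StatefulSet `volumeClaimTemplates` usage shown later, a standalone PVC can consume this class directly via `storageClassName`. A minimal sketch (the PVC name `test-claim` is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim  # illustrative name
spec:
  storageClassName: managed-nfs-storage  # must match the StorageClass above
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
```

Once the provisioner is running, creating this PVC should result in a dynamically provisioned PV and a subdirectory under the exported NFS path.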
- 03-nfs-provisioner.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage  # must match the provisioner name in the StorageClass
            - name: NFS_SERVER
              value: NFS_SERVER_IP
            - name: NFS_PATH
              value: /yunwei/prometheus
      volumes:
        - name: nfs-client-root
          nfs:
            server: NFS_SERVER_IP
            path: /yunwei/prometheus
```
- Usage example: nginx-statefulset.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx-headless  # must match the headless Service name above
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: ikubernetes/myapp:v1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1G
```
- How the StorageClass reclaim policy affects data
- Configuration 1:
```yaml
archiveOnDelete: "false"
reclaimPolicy: Delete  # Delete is the default
```
1. After a Pod is deleted and recreated, the data survives; the old Pod name and its data are reused by the new Pod.
2. After the StorageClass is deleted and recreated, the data likewise survives and is reused.
3. After the PVC is deleted, the PV is deleted and the corresponding data on the NFS server is removed.
- Configuration 2:
```yaml
archiveOnDelete: "false"
reclaimPolicy: Retain
```
1. After a Pod is deleted and recreated, the data survives; the old Pod name and its data are reused by the new Pod.
2. After the StorageClass is deleted and recreated, the data likewise survives and is reused.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the data on the NFS server is retained.
4. After the StorageClass is recreated, a new PVC binds a new PV; the old data can be copied into the new PV.
- Configuration 3:
```yaml
archiveOnDelete: "true"
reclaimPolicy: Retain
```
1. After a Pod is deleted and recreated, the data survives; the old Pod name and its data are reused by the new Pod.
2. After the StorageClass is deleted and recreated, the data likewise survives and is reused.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the data on the NFS server is retained.
4. After the StorageClass is recreated, a new PVC binds a new PV; the old data can be copied into the new PV.
- Configuration 4:
```yaml
archiveOnDelete: "true"
reclaimPolicy: Delete
```
1. After a Pod is deleted and recreated, the data survives; the old Pod name and its data are reused by the new Pod.
2. After the StorageClass is deleted and recreated, the data likewise survives and is reused.
3. After the PVC is deleted, the PV is deleted (the reclaim policy is Delete), but the data directory on the NFS server is preserved, renamed with an `archived-` prefix instead of being removed.
4. A subsequently created PVC binds a newly provisioned PV; the old data can be copied from the archived directory into the new PV.
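The `archiveOnDelete` / `reclaimPolicy` pairs above are set on the StorageClass manifest itself. For instance, configuration 4 written out in full (reusing the `managed-nfs-storage` class from earlier):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage  # must match PROVISIONER_NAME in the Deployment
parameters:
  archiveOnDelete: "true"  # archive, rather than delete, data on the NFS server
reclaimPolicy: Delete
```

Note that `parameters` is interpreted by the provisioner, while `reclaimPolicy` is a field of the StorageClass itself and is stamped onto each PV it provisions.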
