Environment

    $ kubectl get node
    NAME             STATUS   ROLES    AGE   VERSION
    docker-desktop   Ready    master   50m   v1.19.7

Installing NFS

Stop the firewall and disable it from starting at boot

    $ sudo systemctl stop firewalld.service
    $ sudo systemctl disable firewalld.service
    $ sudo systemctl status firewalld
    firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
       Active: inactive (dead)
         Docs: man:firewalld(1)

Install the NFS packages

    $ sudo yum -y install nfs-utils rpcbind

Create the /data/k8s/ directory

    $ sudo mkdir -p /data/k8s/
    $ sudo chmod 755 /data/k8s/

Configure NFS. The default NFS configuration lives in /etc/exports; add the following entry to that file:

    $ sudo vim /etc/exports
    /data/k8s *(rw,sync,no_root_squash)

Option notes:

  • /data/k8s: the shared data directory
  • *: anyone may connect; this can also be a subnet, a single IP, or a hostname
  • rw: read-write access
  • sync: the server replies to requests only after changes have been committed to disk, not just cached in memory
  • no_root_squash: a root user on an NFS client keeps root privileges on the share; without this option, root would be squashed to the anonymous user, whose UID and GID usually map to nobody
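If you edit /etc/exports later, while the NFS server is already running, the export table can be reloaded without restarting the service (standard nfs-utils commands):

    $ sudo exportfs -ra   # re-read /etc/exports and re-export everything
    $ sudo exportfs -v    # list the active exports and their options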

Start the NFS service

The NFS service registers itself with rpcbind when it starts. If rpcbind is restarted, those registrations are lost and every service registered with it must be restarted too, so mind the start order: rpcbind first, then NFS.

Start rpcbind

    $ sudo systemctl start rpcbind.service
    $ sudo systemctl enable rpcbind
    $ sudo systemctl status rpcbind
    rpcbind.service - RPC bind service
       Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
       Active: active (running) since Sun 2021-07-11 22:31:08 CST; 33s ago
     Main PID: 4392 (rpcbind)
       CGroup: /system.slice/rpcbind.service
               └─4392 /sbin/rpcbind -w
    Jul 11 22:31:08 VM-8-5-centos systemd[1]: Starting RPC bind service...
    Jul 11 22:31:08 VM-8-5-centos systemd[1]: Started RPC bind service.

Start NFS

    $ sudo systemctl start nfs.service
    $ sudo systemctl enable nfs
    $ sudo systemctl status nfs
    nfs-server.service - NFS server and services
       Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
      Drop-In: /run/systemd/generator/nfs-server.service.d
               └─order-with-mounts.conf
       Active: active (exited) since Sun 2021-07-11 22:32:35 CST; 53s ago
     Main PID: 4642 (code=exited, status=0/SUCCESS)
       CGroup: /system.slice/nfs-server.service
    Jul 11 22:32:35 VM-8-5-centos systemd[1]: Starting NFS server and services...
    Jul 11 22:32:35 VM-8-5-centos systemd[1]: Started NFS server and services.

Check the export permissions

    $ cat /var/lib/nfs/etab
    /data/k8s *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)

Check that the NFS server exposes the shared directory:

    $ showmount -e 10.0.8.5
    Export list for 10.0.8.5:
    /data/k8s *
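If showmount hangs or errors out, it is usually rpcbind that is unreachable; in that case, list the registered RPC services first:

    $ rpcinfo -p 10.0.8.5    # should show portmapper, mountd and nfs entries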

Mount the share at $HOME/k8s/data on a local client

    $ mkdir -p $HOME/k8s/data
    $ sudo mount -t nfs 10.0.8.5:/data/k8s $HOME/k8s/data
    $ cd $HOME/k8s/data/
    $ echo "hello world" | sudo tee -a test.txt

Note that sudo echo "hello world" >> test.txt would not work here: the output redirection is performed by the unprivileged shell, not by sudo, so tee -a is used to append the file as root.
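To confirm the write really travelled over NFS, read the file back from the exported directory on the server side:

    # on the NFS server
    $ cat /data/k8s/test.txt
    hello world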

Create a PV

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-nfs
    spec:
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      nfs:
        path: /data/k8s
        server: 10.0.8.5
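Apply the manifest (assuming it is saved as pv-nfs.yaml; the filename is a choice, not fixed):

    $ kubectl apply -f pv-nfs.yaml
    persistentvolume/pv-nfs created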

Capacity

Generally every PV object specifies a storage capacity through its capacity attribute. At present only the storage size can be set, which is the storage: 2Gi in the manifest above; settings such as IOPS and throughput may be added in the future.

AccessModes

AccessModes sets the access mode of a PV and describes what access user applications have to the storage resource. The following modes exist:

  • ReadWriteOnce (RWO): read-write, but mountable by a single node only
  • ReadOnlyMany (ROX): read-only, mountable by many nodes
  • ReadWriteMany (RWX): read-write, mountable by many nodes

Note: a PV may support several access modes, but it can be mounted with only one of them at a time; multiple modes do not take effect simultaneously.
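To double-check which modes a given PV advertises, describe it (pv-nfs is the PV created above):

    $ kubectl describe pv pv-nfs | grep -i 'access modes'
    Access Modes:  RWO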

persistentVolumeReclaimPolicy (reclaim policy)

The PV above uses the Recycle reclaim policy. Three policies are currently supported:

  • Retain - keep the data; an administrator has to clean it up manually
  • Recycle - scrub the data in the PV, equivalent to running rm -rf /thevolume/*
  • Delete - the backing storage deletes the volume together with the PV; this is typical of cloud providers' storage services, e.g. AWS EBS

Note, however, that only the NFS and HostPath volume types currently support Recycle. In general, Retain is the safer choice.
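If a PV was created with Recycle and you would rather keep the data, the policy can be changed in place with a patch (standard kubectl usage):

    $ kubectl patch pv pv-nfs -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
    persistentvolume/pv-nfs patched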

Status

During its lifecycle a PV can be in one of four different phases:

  • Available: available, not yet bound by any PVC
  • Bound: the PV is bound to a PVC
  • Released: the PVC was deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation of the PV failed

For example:

    $ kubectl get pv
    NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    pv1    1Gi        RWO            Recycle          Available                                   3m
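The phase alone can be extracted with jsonpath, which is handy in scripts (pv1 as in the output above):

    $ kubectl get pv pv1 -o jsonpath='{.status.phase}'
    Available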

StorageClass

The setup below follows https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy

Create the ServiceAccount and related permissions

Create rbac-nfs.yaml:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      namespace: default
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io

Apply it:

    $ kubectl apply -f rbac-nfs.yaml
    serviceaccount/nfs-client-provisioner created
    clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
    clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
    role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
    rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

Create the NFS provisioner

Create the provisioner manifest provisioner-nfs.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default  # must match the namespace in the RBAC file
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nfs-client-provisioner
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: nfs-storage  # provisioner name; must match the provisioner in sc-nfs.yaml
                - name: NFS_SERVER
                  value: 81.71.154.47  # NFS server IP address
                - name: NFS_PATH
                  value: /data/k8s  # NFS export path
          volumes:
            - name: nfs-client-root
              nfs:
                server: 81.71.154.47  # NFS server IP address
                path: /data/k8s  # NFS export path

Apply it:

    $ kubectl apply -f provisioner-nfs.yaml
    deployment.apps/nfs-client-provisioner created
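Before moving on, confirm the provisioner pod is actually running (the app=nfs-client-provisioner label comes from the Deployment above); its logs are also the first place to look if PVCs later get stuck in Pending:

    $ kubectl get pods -l app=nfs-client-provisioner
    $ kubectl logs -l app=nfs-client-provisioner --tail=20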

Create a StorageClass for the NFS resources

Create the StorageClass file sc-nfs.yaml. The provisioner name here must match the PROVISIONER_NAME environment variable in the provisioner manifest:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
    provisioner: nfs-storage
    parameters:
      archiveOnDelete: "false"

Apply it:

    $ kubectl apply -f sc-nfs.yaml
    storageclass.storage.k8s.io/managed-nfs-storage created

Verification

Declare the PVC file test-claim.yaml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-claim
    spec:
      storageClassName: managed-nfs-storage
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Mi

Some guides set the class through the annotation volume.beta.kubernetes.io/storage-class instead. It is better to stop using that annotation in favor of the spec.storageClassName field, since the official documentation has deprecated it: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class

Check the StorageClass list and make managed-nfs-storage the default:

    $ kubectl get sc
    NAME                  PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    hostpath (default)    docker.io/hostpath   Delete          Immediate           false                  19m
    managed-nfs-storage   nfs-storage          Delete          Immediate           false                  11s
    $ kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    storageclass.storage.k8s.io/managed-nfs-storage patched
    $ kubectl patch storageclass hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    storageclass.storage.k8s.io/hostpath patched

Apply the PVC:

    $ kubectl apply -f test-claim.yaml
    persistentvolumeclaim/test-claim created
    $ kubectl get pvc
    NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
    test-claim   Bound    pvc-050e4039-935c-4f18-8623-8580d1295e3a   1Mi        RWX            managed-nfs-storage   7s
    $ kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
    pvc-050e4039-935c-4f18-8623-8580d1295e3a   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            32s

Create the pod file test-pod.yaml:

    kind: Pod
    apiVersion: v1
    metadata:
      name: test-pod
    spec:
      containers:
        - name: test-pod
          image: busybox
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "touch /mnt/SUCCESS && exit 0 || exit 1"  # create a SUCCESS file, then exit
          volumeMounts:
            - name: nfs-pvc
              mountPath: "/mnt"
      restartPolicy: "Never"
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: test-claim  # must match the PVC name
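Apply the pod and verify the result on the NFS server. Per the external-storage docs, the nfs-client provisioner creates a per-claim subdirectory named ${namespace}-${pvcName}-${pvName} under the export, so the SUCCESS file should show up there:

    $ kubectl apply -f test-pod.yaml
    pod/test-pod created
    $ kubectl get pod test-pod     # wait until STATUS shows Completed
    # on the NFS server:
    $ ls /data/k8s/default-test-claim-pvc-*/
    SUCCESS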

PVC

Declare the pvc-nfs.yaml manifest:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-nfs
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

Create the PVC:

    $ kubectl apply -f pvc-nfs.yaml
    persistentvolumeclaim/pvc-nfs created

Check the PVC:

    $ kubectl get pvc
    NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pvc-nfs   Bound    pvc-aec6f5cf-c3a6-422f-9063-82ce0cdbf53a   1Gi        RWO            hostpath       13s

Check the PVs:

    $ kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
    pv-nfs                                     1Gi        RWO            Recycle          Available                                             85s
    pvc-b9de43c1-ecd7-4e58-94d7-f5c24092ad3c   1Gi        RWO            Delete           Bound       default/pvc-nfs   hostpath               61s

Because pvc-nfs does not request a storageClassName, it was dynamically provisioned by the hostpath StorageClass in this output instead of binding to the static pv-nfs PV, which is why pv-nfs remains Available.

Troubleshooting

On Kubernetes 1.20 and later, the provisioner may log the following error, because the RemoveSelfLink feature gate now defaults to true:

    controller.go:1004] provision "default/test-claim" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
The workaround is to re-enable selfLink by adding the RemoveSelfLink=false feature gate to the kube-apiserver static pod manifest:

    [root@master ~]# grep -B 5 'feature-gates' /etc/kubernetes/manifests/kube-apiserver.yaml
        - --service-account-key-file=/etc/kubernetes/pki/sa.pub
        - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
        - --service-cluster-ip-range=10.96.0.0/12
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
        - --feature-gates=RemoveSelfLink=false   # the line to add
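kube-apiserver runs as a static pod, so kubelet restarts it automatically once the manifest is saved. You can confirm the flag is active (assuming a kubeadm-style cluster where the apiserver runs on the node):

    $ ps -ef | grep kube-apiserver | grep -o 'feature-gates=[^ ]*'
    feature-gates=RemoveSelfLink=false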

References

  • https://www.jianshu.com/p/b860d26f2951
  • https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner
  • https://www.kococ.cn/20210119/cid=670.html
  • https://blog.csdn.net/ag1942/article/details/115371793