Environment
$ kubectl get node
NAME STATUS ROLES AGE VERSION
docker-desktop Ready master 50m v1.19.7
Install NFS
Stop the firewall and disable it from starting on boot:
$ sudo systemctl stop firewalld.service
$ sudo systemctl disable firewalld.service
$ sudo systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
Install nfs-utils and rpcbind:
$ sudo yum -y install nfs-utils rpcbind
Create the /data/k8s/ directory:
$ sudo mkdir -p /data/k8s/
$ sudo chmod 755 /data/k8s/
Configure NFS. The default export configuration lives in /etc/exports; add the following line to that file:
$ sudo vim /etc/exports
/data/k8s *(rw,sync,no_root_squash)
Option details:
- /data/k8s: the shared data directory
- *: anyone may connect; this can also be a subnet, a single IP, or a domain name
- rw: read-write access
- sync: writes are committed to disk as well as memory
- no_root_squash: when the client user accessing the share is root, root privileges are preserved on the share (root is not mapped to the anonymous nobody UID/GID). This is convenient for testing but risky on production servers.
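For example, to restrict access to a private subnet instead of allowing everyone, the export line could look like this (10.0.8.0/24 is a placeholder; substitute your client network, and note root_squash is used here as the safer default):

```
/data/k8s 10.0.8.0/24(rw,sync,root_squash)
```

After editing /etc/exports, run `sudo exportfs -ra` to reload the export table without restarting the NFS service.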
Start the NFS service
The NFS service registers itself with rpcbind. If rpcbind restarts, all registrations are lost and every service registered with it must be restarted, so mind the startup order: start rpcbind first, then nfs.
Start rpcbind:
$ sudo systemctl start rpcbind.service
$ sudo systemctl enable rpcbind
$ sudo systemctl status rpcbind
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-07-11 22:31:08 CST; 33s ago
Main PID: 4392 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─4392 /sbin/rpcbind -w
Jul 11 22:31:08 VM-8-5-centos systemd[1]: Starting RPC bind service...
Jul 11 22:31:08 VM-8-5-centos systemd[1]: Started RPC bind service.
Start nfs:
$ sudo systemctl start nfs.service
$ sudo systemctl enable nfs
$ sudo systemctl status nfs
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Sun 2021-07-11 22:32:35 CST; 53s ago
Main PID: 4642 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Jul 11 22:32:35 VM-8-5-centos systemd[1]: Starting NFS server and services...
Jul 11 22:32:35 VM-8-5-centos systemd[1]: Started NFS server and services.
Check the export permissions:
$ cat /var/lib/nfs/etab
/data/k8s *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
Verify that NFS is exporting the shared directory:
$ showmount -e 10.0.8.5
Export list for 10.0.8.5:
/data/k8s *
Mount on a local client at $HOME/k8s/data
$ mkdir -p $HOME/k8s/data
$ mount -t nfs 10.0.8.5:/data/k8s $HOME/k8s/data
$ cd $HOME/k8s/data/
$ echo "hello world" | sudo tee -a test.txt
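To make this mount survive reboots, a matching /etc/fstab entry can be added. This is a sketch; /root/k8s/data is an assumed absolute expansion of $HOME/k8s/data, since fstab does not expand shell variables:

```
10.0.8.5:/data/k8s  /root/k8s/data  nfs  defaults,_netdev  0  0
```

The _netdev option delays mounting until the network is up, which avoids boot-time failures for network filesystems.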
Create a PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/k8s
    server: 10.0.8.5
Capacity
Generally a PV must declare its storage capacity, set through the PV's capacity attribute. Currently only storage size can be configured, which is the storage: 2Gi here; attributes such as IOPS and throughput may be added in the future.
AccessModes
AccessModes describes how a user application may access the storage resource. The supported modes are:
- ReadWriteOnce (RWO): read-write, but mountable by only a single node
- ReadOnlyMany (ROX): read-only, mountable by multiple nodes
- ReadWriteMany (RWX): read-write, mountable by multiple nodes
Note: a PV may support several access modes, but only one can be used at mount time; the others do not take effect.
persistentVolumeReclaimPolicy
The PV here uses the Recycle reclaim policy. Three policies are currently supported:
- Retain: keep the data; an administrator must clean it up manually
- Recycle: scrub the data in the PV, equivalent to running rm -rf /thevolume/*
- Delete: the backing storage deletes the volume, which is common with cloud-provider storage such as AWS EBS
Note that currently only NFS and HostPath support the Recycle policy. In general, Retain is the safer choice.
Status
During its lifecycle a PV can be in one of four phases:
- Available: not yet bound to any PVC
- Bound: bound to a PVC
- Released: the PVC was deleted, but the resource has not yet been reclaimed by the cluster
- Failed: automatic reclamation of the PV failed
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs 2Gi RWO Recycle Available 3m
StorageClass
Create it following https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy (note: this repository has been retired; its successor is the kubernetes-sigs/nfs-subdir-external-provisioner project).
Create the ServiceAccount and related RBAC permissions
Create rbac-nfs.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Apply it:
$ kubectl apply -f rbac-nfs.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
Create the NFS provisioner
Create the provisioner manifest provisioner-nfs.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default # must match the namespace in the RBAC file
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage # provisioner name; must match the provisioner in sc-nfs.yaml
            - name: NFS_SERVER
              value: 81.71.154.47 # NFS server IP address
            - name: NFS_PATH
              value: /data/k8s # NFS exported path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 81.71.154.47 # NFS server IP address
            path: /data/k8s # NFS exported path
Apply it:
$ kubectl apply -f provisioner-nfs.yaml
deployment.apps/nfs-client-provisioner created
Create a StorageClass for NFS
Create the StorageClass manifest sc-nfs.yaml. The provisioner name here must match the PROVISIONER_NAME environment variable in the provisioner manifest:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage
parameters:
  archiveOnDelete: "false"
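If you want this StorageClass to be the cluster default from the start, the is-default-class annotation can be set directly in the manifest instead of patching it afterwards. A sketch based on the sc-nfs.yaml above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # marks this class as the cluster default
provisioner: nfs-storage
parameters:
  archiveOnDelete: "false"
```

Only one StorageClass should carry this annotation with value "true" at a time.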
Apply it:
$ kubectl apply -f sc-nfs.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
Verification
Declare a PVC in test-claim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
Some guides specify the StorageClass through the volume.beta.kubernetes.io/storage-class annotation instead; it is best to stop using that form, as the official documentation has deprecated it: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class
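For recognition in older manifests only, the deprecated annotation form looked like the sketch below; do not use it for new claims, prefer spec.storageClassName:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: managed-nfs-storage # deprecated form
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```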
Make the new StorageClass the default:
$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
hostpath (default) docker.io/hostpath Delete Immediate false 19m
managed-nfs-storage nfs-storage Delete Immediate false 11s
$ kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/managed-nfs-storage patched
$ kubectl patch storageclass hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
storageclass.storage.k8s.io/hostpath patched
Apply the claim:
$ kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim created
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-050e4039-935c-4f18-8623-8580d1295e3a 1Mi RWX managed-nfs-storage 7s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-050e4039-935c-4f18-8623-8580d1295e3a 1Mi RWX Delete Bound default/test-claim managed-nfs-storage 32s
Create a Pod in test-pod.yaml:
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1" # create a SUCCESS file, then exit
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim # must match the PVC name
PVC
Declare the pvc-nfs.yaml manifest:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Create the PVC:
$ kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/pvc-nfs created
Check the PVC:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs Bound pvc-aec6f5cf-c3a6-422f-9063-82ce0cdbf53a 1Gi RWO hostpath 13s
Check the PV:
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs 1Gi RWO Recycle Available 85s
pvc-b9de43c1-ecd7-4e58-94d7-f5c24092ad3c 1Gi RWO Delete Bound default/pvc-nfs hostpath 61s
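Note that in the output above pvc-nfs was dynamically provisioned by the default hostpath StorageClass rather than binding the static pv-nfs PV, because the claim omitted storageClassName and a default class exists. To force binding to a static PV, set storageClassName to the empty string in the claim. A sketch of an adjusted pvc-nfs.yaml:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  storageClassName: "" # empty string disables dynamic provisioning; the claim binds only static PVs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```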
Troubleshooting
If the provisioner logs an error like the following:
controller.go:1004] provision "default/test-claim" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
this is because Kubernetes 1.20+ removed selfLink, which the old nfs-client-provisioner depends on. On 1.20 through 1.23 you can temporarily restore it by adding a feature gate to the kube-apiserver static pod manifest (the RemoveSelfLink gate was removed entirely in 1.24; on newer clusters switch to nfs-subdir-external-provisioner, which does not need selfLink):
[root@master ~]# grep -B 5 'feature-gates' /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --feature-gates=RemoveSelfLink=false # add this line