PV and PVC Concepts

A PersistentVolume (PV) is an administrator-facing resource that maps directly to the underlying storage. Its lifecycle is independent of any individual Pod that uses it.
A PersistentVolumeClaim (PVC) is a user-facing request for storage, specifying the required capacity and access modes.

As a storage resource, a PV mainly defines capacity, access modes, volume type, reclaim policy, and the backing storage.
As a user's request for storage, a PVC mainly defines the requested capacity, access modes, PV selection criteria, and storage class.

Kubernetes 1.0 introduced the PersistentVolume (PV) and PersistentVolumeClaim (PVC) resource objects, which form the storage management subsystem.
Kubernetes 1.4 added the StorageClass resource object, used to describe the characteristics and performance class of a storage backend.
Kubernetes 1.7 introduced local volume management: local disks and partitions on a node can be exposed as local persistent volumes and scheduled by Kubernetes, following the PV/PVC model.
Kubernetes 1.9 introduced the Container Storage Interface (CSI), whose goal is a standard storage management interface between Kubernetes and external storage systems, analogous to CRI (Container Runtime Interface) and CNI (Container Network Interface).
Kubernetes 1.13 added the volume mode setting (volumeMode=...), with the options Filesystem (the default) and Block (a raw block device, e.g. RBD, the Ceph Block Device).
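A minimal sketch of a raw-block PV and a matching claim (the Fibre Channel backend, names, and sizes here are illustrative assumptions, not taken from this document):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv                  # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Block               # expose a raw block device instead of a filesystem
  persistentVolumeReclaimPolicy: Retain
  fc:                             # assumed block-capable backend; RBD or others work the same way
    targetWWNs: ["50060e801049cfd1"]
    lun: 0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block               # must match the PV's volumeMode to bind
  resources:
    requests:
      storage: 10Gi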

PV access modes:
ReadWriteOnce (RWO) – the volume can be mounted as read-write by a single node
ReadOnlyMany (ROX) – the volume can be mounted read-only by many nodes
ReadWriteMany (RWX) – the volume can be mounted as read-write by many nodes

A PV can have one of three reclaim policies: Retain, Recycle, or Delete.
Retain: keeps the volume and its data for manual handling (the default for manually created PVs).
Recycle: performs a basic scrub so the volume can be claimed again. Note: the Recycle policy is deprecated; dynamic provisioning is the recommended approach.
Delete: deletes both the PV and the associated external storage asset; requires plugin support.
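The reclaim policy of an existing PV can also be changed in place with kubectl patch; for example, switching the task-pv-volume PV created later in this document to Retain:

kubectl patch pv task-pv-volume -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'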

PV phases:
Available – the volume is free and not yet bound to a claim
Bound – the volume is bound to a claim
Released – the claim has been deleted, but the volume has not yet been reclaimed by the cluster
Failed – automatic reclamation of the volume failed

PVs are provisioned for PVCs in one of two ways:
Static
The cluster administrator creates a number of PVs in advance; users then submit PVCs, which are bound to matching PVs. If none of the administrator-created PVs matches a user's PVC, the PVC stays in the Pending state.
Dynamic

When no static PV matches a user's PVC, a PV can be created automatically from a StorageClass and bound to the claim. This requires the administrator to have created and configured a StorageClass beforehand, as sketched below.
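A minimal sketch of what dynamic provisioning looks like (the class name and provisioner here are illustrative; the NFS provisioner configured later in this document is one concrete example):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage               # illustrative class name
provisioner: example.com/fast      # assumed provisioner; must match a provisioner deployed in the cluster
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-fast
spec:
  storageClassName: fast-storage   # requesting this class triggers automatic PV creation
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi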

PV Lifecycle


CSI Storage Interface


Shared Storage for the Cluster
The first approach is to integrate the Kubernetes cluster with traditional storage infrastructure over Samba, NFS, or GlusterFS. This extends easily to cloud-based shared file systems such as Amazon EFS, Azure Files, and Google Cloud Filestore.

In this architecture, the storage layer is completely decoupled from the compute layer managed by Kubernetes. A Pod can consume the shared storage in one of two ways:

Native provisioning: most shared file systems either ship an in-tree volume plugin in upstream Kubernetes or provide a Container Storage Interface (CSI) driver. This lets cluster administrators declaratively define Persistent Volumes with parameters specific to the shared file system or managed service.

Host-based provisioning: a startup script runs on every node and is responsible for mounting the shared storage, so each node in the cluster exposes a consistent, well-known mount point to workloads. A Persistent Volume then points at that host directory via hostPath or a Local PV, as sketched below.
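As a hedged sketch of this second approach (the path, capacity, and node name are illustrative assumptions), a Local PV pinned to the node that carries the mount might look like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-local-pv            # illustrative name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/shared              # assumed host mount point prepared by the startup script
  nodeAffinity:                    # required for local volumes: pin the PV to the node that has the mount
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node01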

Because durability and persistence are handled by the underlying storage, workloads are fully decoupled from it: Pods can be scheduled on any node without defining node affinity, since every node exposes the same mount point.
However, this approach is not ideal for stateful workloads that need high I/O throughput: shared file systems are not designed to deliver the IOPS required by relational databases, NoSQL databases, and other write-intensive workloads.
Storage choices: GlusterFS, Samba, NFS, Amazon EFS, Azure Files, Google Cloud Filestore.
Typical workloads: Content Management Systems, Machine Learning training/inference jobs, and Digital Asset Management systems.

Kubernetes maintains the desired configuration state through controllers; Deployment, ReplicaSet, DaemonSet, and StatefulSet are some commonly used ones.
A StatefulSet is a special kind of controller that makes it easier to run clustered workloads in Kubernetes. Clustered workloads typically have one or more masters and multiple slaves, and most databases are designed to run in this clustered mode to provide high availability and fault tolerance.
Stateful clustered workloads continuously replicate data between masters and slaves. The cluster infrastructure therefore expects the participating entities (masters and slaves) to have consistent, well-known endpoints so state can be synchronized reliably. In Kubernetes, however, Pods are designed to be short-lived and are not guaranteed to keep the same name or IP address.
Another requirement of stateful clustered workloads is durable backend storage that is fault tolerant and can sustain the required IOPS.
StatefulSets were introduced to make running such workloads in Kubernetes practical. Pods in a StatefulSet are guaranteed stable, unique identifiers; they follow a predictable naming convention and support ordered, graceful deployment and scaling.
Each Pod participating in a StatefulSet has a corresponding PersistentVolumeClaim (PVC) that follows a similar naming convention. When a Pod is terminated and rescheduled on a different node, the Kubernetes controller re-associates it with the same PVC, so its state remains intact.
Since every Pod in a StatefulSet gets its own dedicated PVC and PV, there is no hard requirement to use shared storage; the StatefulSet is, however, expected to be backed by fast, reliable, durable storage such as SSD-based block devices. Regular backups and snapshots can be taken on the block devices once writes are fully committed to disk.
Storage choices: SSDs and block storage devices such as Amazon EBS, Azure Disks, GCE PD.
Typical workloads: Apache ZooKeeper, Apache Kafka, Percona Server for MySQL, PostgreSQL Automatic Failover, and JupyterHub.
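The StatefulSet examples later in this document set serviceName: "nginx" but do not show the governing headless Service that gives each Pod its stable DNS name; a minimal sketch matching those examples would be:

apiVersion: v1
kind: Service
metadata:
  name: nginx                      # must match the StatefulSet's serviceName
  labels:
    app: nginx
spec:
  clusterIP: None                  # headless: no virtual IP, per-Pod DNS records instead
  selector:
    app: nginx
  ports:
  - port: 80
    name: web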

Data stored inside a container is ephemeral, which causes problems for applications running in containers. First, when a container crashes, the kubelet restarts it, but the files are lost: the container starts from a clean state. Second, when a Pod runs multiple containers, those containers often need to share files. Kubernetes Volumes solve both problems.

Creating a Pod that uses a Volume

cat << EOF > test-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-volume
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx-web
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
EOF
  1. [liwm@rmaster01 liwm]$ kubectl create -f test-volume.yaml
  2. pod/test-volume created
  3. [liwm@rmaster01 liwm]$ kubectl get pod
  4. NAME READY STATUS RESTARTS AGE
  5. test-volume 1/1 Running 0 2m46s
  6. [liwm@rmaster01 liwm]$
  7. [liwm@rmaster01 liwm]$
  8. [liwm@rmaster01 liwm]$ kubectl get pod -o wide
  9. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  10. test-volume 1/1 Running 0 2m51s 10.42.4.37 node01 <none> <none>
  11. [root@node01 data]# echo nginx-404 > 50x.html
  12. [root@node01 data]# echo nginx-webserver > index.html
  13. [root@node01 data]#
  14. [liwm@rmaster01 liwm]$ curl 10.42.4.37
  15. nginx-webserver
  16. [liwm@rmaster01 liwm]$ kubectl exec -it test-volume bash
  17. root@test-volume:/# cat /usr/share/nginx/html/50x.html
  18. nginx-404
  19. root@test-volume:/#
  20. [liwm@rmaster01 liwm]$ kubectl describe pod test-volume
  21. .....
  22. Volumes:
  23. test-volume:
  24. Type: HostPath (bare host directory volume)
  25. Path: /data
  26. HostPathType:

PV and PVC

Static provisioning
The storageClassName "manual" is just a custom class name here; a PersistentVolumeClaim that requests the same class name binds to this PV.

cat << EOF > pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/storage/pv1"
EOF

kubectl apply -f pv.yaml

[root@master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Available manual 7s


PVC
A PVC does not have to request the full size of a PV: a claim for 3Gi can bind to the 10Gi PV above, and the claim then gets the whole 10Gi volume.
PVs are cluster-scoped and do not belong to any namespace; PVCs are namespaced.

cat << EOF > pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
EOF

Create a Pod that uses the PVC

cat << EOF > pod-pvc.yaml
kind: Pod
apiVersion: v1
metadata:
  name: task-pvc-pod
spec:
  volumes:
  - name: task-pv-volume
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pvc-container
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-volume
EOF
  1. [liwm@rmaster01 liwm]$ kubectl create -f pv.yaml
  2. persistentvolume/task-pv-volume created
  3. [liwm@rmaster01 liwm]$ kubectl get pv
  4. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  5. task-pv-volume 10Gi RWO Retain Available manual 4s
  6. [liwm@rmaster01 liwm]$ kubectl create -f pvc.yaml
  7. persistentvolumeclaim/task-pv-claim created
  8. [liwm@rmaster01 liwm]$ kubectl get pvc
  9. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  10. task-pv-claim Bound task-pv-volume 10Gi RWO manual 3s
  11. [liwm@rmaster01 liwm]$ kubectl get pv
  12. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  13. task-pv-volume 10Gi RWO Retain Bound default/task-pv-claim manual 26s
  14. [liwm@rmaster01 liwm]$ kubectl create -f pod-pvc.yaml
  15. pod/task-pvc-pod created
  16. [liwm@rmaster01 liwm]$ kubectl get pod
  17. NAME READY STATUS RESTARTS AGE
  18. task-pvc-pod 1/1 Running 0 3s
  19. [liwm@rmaster01 liwm]$ kubectl get pod -o wide
  20. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  21. task-pvc-pod 1/1 Running 0 10s 10.42.4.41 node01 <none> <none>
  22. [liwm@rmaster01 liwm]$ kubectl exec -it task-pvc-pod bash
  23. root@task-pvc-pod:/# cd /usr/share/nginx/html/
  24. root@task-pvc-pod:/usr/share/nginx/html# ls
  25. index.html
  26. root@task-pvc-pod:/usr/share/nginx/html# cat index.html
  27. 111
  28. [root@node01 /]# cd storage/
  29. [root@node01 storage]#
  30. [root@node01 storage]# ll
  31. total 0
  32. drwxr-xr-x 2 root root 24 Mar 14 13:59 pv1
  33. [root@node01 storage]# cd pv1/
  34. [root@node01 pv1]#
  35. [root@node01 pv1]# ls
  36. index.html
  37. [root@node01 pv1]# echo 222 > index.html
  38. [root@node01 pv1]#
  39. [root@node01 pv1]# cat index.html
  40. 222
  41. [root@node01 pv1]#
  42. root@task-pvc-pod:/usr/share/nginx/html# cat index.html
  43. 222
  44. root@task-pvc-pod:/usr/share/nginx/html#

On the master node:
[root@master01 pv1]# kubectl exec -it task-pv-pod bash
root@task-pv-pod:/# cd /usr/share/nginx/html/
root@task-pv-pod:/usr/share/nginx/html# ls
root@task-pv-pod:/usr/share/nginx/html# touch index.html
root@task-pv-pod:/usr/share/nginx/html# echo 11 > index.html
root@task-pv-pod:/usr/share/nginx/html# exit
exit
[root@master01 pv1]# curl 192.168.1.41
11

The Pod runs on node01, so check the hostPath directory on node01:
[root@node01 ~]# cd /storage/
[root@node01 storage]# ls
pv1
[root@node01 storage]# cd pv1/
[root@node01 pv1]# ls
index.html
[root@node01 pv1]

Reclaim policy
A PV's reclaim policy is one of Retain, Recycle, or Delete.

cat << EOF > pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  persistentVolumeReclaimPolicy: Recycle
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/storage/pv1"
EOF
  1. [liwm@rmaster01 liwm]$ kubectl get pod
  2. NAME READY STATUS RESTARTS AGE
  3. task-pvc-pod 1/1 Running 0 5s
  4. [liwm@rmaster01 liwm]$ kubectl get pv
  5. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  6. task-pv-volume 10Gi RWO Recycle Bound default/task-pv-claim manual 54s
  7. [liwm@rmaster01 liwm]$ kubectl get pvc
  8. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  9. task-pv-claim Bound task-pv-volume 10Gi RWO manual 33s
  10. [liwm@rmaster01 liwm]$ kubectl delete pod task-pvc-pod
  11. pod "task-pvc-pod" deleted
  12. [liwm@rmaster01 liwm]$ kubectl delete pvc task-pv-claim
  13. persistentvolumeclaim "task-pv-claim" deleted
  14. [liwm@rmaster01 liwm]$ kubectl get pvc
  15. No resources found in default namespace.
  16. [liwm@rmaster01 liwm]$ kubectl get pv
  17. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  18. task-pv-volume 10Gi RWO Recycle Available manual 3m7s
  19. [liwm@rmaster01 liwm]$

StorageClass

Preparing the NFS environment

Install the NFS server
yum -y install nfs-utils
# Start the service and enable it at boot
systemctl enable --now nfs
# Create the shared directory
mkdir /storage
# Edit the NFS exports file
vim /etc/exports
/storage *(rw,sync,no_root_squash)
# Restart the service
systemctl restart nfs
# Install the NFS client on every Kubernetes worker node
yum -y install nfs-utils
# Test the mount from a worker node
mkdir /test
mount.nfs 172.17.224.182:/storage /test
touch /test/123

Deploying the StorageClass provisioner

# Download the provisioner (nfs-client external provisioner):
yum -y install git
https://github.com/kubernetes-incubator/external-storage
git clone https://github.com/kubernetes-incubator/external-storage.git

Edit the YAML under the nfs-client deploy directory:
[liwm@rmaster01 deploy]$ pwd
/home/liwm/yaml/nfs-client/deploy

vim deployment.yaml
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.31.130
        - name: NFS_PATH
          value: /storage
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.31.130
          path: /storage

[liwm@rmaster01 deploy]$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: jmgao1983/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.31.130
        - name: NFS_PATH
          value: /storage
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.31.130
          path: /storage
[liwm@rmaster01 deploy]$ cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[liwm@rmaster01 deploy]$ cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
[liwm@rmaster01 deploy]$

reclaimPolicy: there are two options, Delete and Retain; the default is Delete.
fuseim.pri/ifs is the PROVISIONER_NAME defined in the deployment above.
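If dynamically provisioned PVs should be kept after their claims are deleted, set the policy on the StorageClass itself; a hedged variant of the class above (the name is illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-retain  # illustrative name
provisioner: fuseim.pri/ifs         # must match the deployment's PROVISIONER_NAME
reclaimPolicy: Retain               # PVs created from this class keep their data when the PVC is deleted
parameters:
  archiveOnDelete: "false"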
# Deploy the provisioner

  1. [liwm@rmaster01 liwm]$ kubectl describe storageclasses.storage.k8s.io managed-nfs-storage
  2. Name: managed-nfs-storage
  3. IsDefaultClass: No
  4. Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"managed-nfs-storage"},"parameters":{"archiveOnDelete":"false"},"provisioner":"fuseim.pri/ifs"}
  5. Provisioner: fuseim.pri/ifs
  6. Parameters: archiveOnDelete=false
  7. AllowVolumeExpansion: <unset>
  8. MountOptions: <none>
  9. ReclaimPolicy: Delete
  10. VolumeBindingMode: Immediate
  11. Events: <none>
  12. [liwm@rmaster01 liwm]$
  1. kubectl apply -f rbac.yaml
  2. kubectl apply -f deployment.yaml
  3. kubectl apply -f class.yaml
  4. [liwm@rmaster01 deploy]$ kubectl get pod
  5. NAME READY STATUS RESTARTS AGE
  6. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 25s
  7. [liwm@rmaster01 deploy]$
  8. [liwm@rmaster01 deploy]$ kubectl get storageclasses.storage.k8s.io
  9. NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
  10. managed-nfs-storage fuseim.pri/ifs Delete Immediate false 13d
  11. [liwm@rmaster01 deploy]$

mount.nfs 192.168.31.130:/storage /test

Testing

Test 1: creating a PVC automatically creates a PV and binds it

cat << EOF > pvc-nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi
EOF
  1. [liwm@rmaster01 liwm]$ kubectl create -f pvc-nfs.yaml
  2. persistentvolumeclaim/nginx-test created
  3. [liwm@rmaster01 liwm]$ kubectl get pvc
  4. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  5. nginx-test Bound pvc-aa1b584b-850c-49a7-834d-77cb56a2f6e1 1Gi RWX managed-nfs-storage 7s
  6. [liwm@rmaster01 liwm]$ kubectl get pv
  7. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  8. pvc-aa1b584b-850c-49a7-834d-77cb56a2f6e1 1Gi RWX Delete Bound default/nginx-test managed-nfs-storage 9s
  9. [liwm@rmaster01 liwm]$

Test 2: creating Pods (a StatefulSet with volumeClaimTemplates) automatically creates the PVCs and PVs

cat << EOF > statefulset-pvc-nfs.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
EOF

Access mode: ReadWriteMany allows the volume to be mounted read-write by Pods on multiple nodes.

  1. [liwm@rmaster01 liwm]$ kubectl create -f statefulset-pvc-nfs.yaml
  2. [liwm@rmaster01 liwm]$ kubectl get pod
  3. NAME READY STATUS RESTARTS AGE
  4. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 6m
  5. web-0 1/1 Running 0 73s
  6. web-1 1/1 Running 0 29s
  7. web-2 1/1 Running 0 14s
  8. [liwm@rmaster01 liwm]$ kubectl get pvc
  9. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  10. www-web-0 Bound pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3 1Gi RWX managed-nfs-storage 79s
  11. www-web-1 Bound pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924 1Gi RWX managed-nfs-storage 35s
  12. www-web-2 Bound pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970 1Gi RWX managed-nfs-storage 20s
  13. [liwm@rmaster01 liwm]$ kubectl get pv
  14. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  15. pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924 1Gi RWX Delete Bound default/www-web-1 managed-nfs-storage 36s
  16. pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3 1Gi RWX Delete Bound default/www-web-0 managed-nfs-storage 79s
  17. pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970 1Gi RWX Delete Bound default/www-web-2 managed-nfs-storage 21s
  18. [liwm@rmaster01 liwm]$ kubectl scale statefulset --replicas=1 web
  19. statefulset.apps/web scaled
  20. [liwm@rmaster01 liwm]$ kubectl get pod
  21. NAME READY STATUS RESTARTS AGE
  22. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 6m56s
  23. web-0 1/1 Running 0 2m9s
  24. [liwm@rmaster01 liwm]$ kubectl get pvc
  25. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  26. www-web-0 Bound pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3 1Gi RWX managed-nfs-storage 2m15s
  27. www-web-1 Bound pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924 1Gi RWX managed-nfs-storage 91s
  28. www-web-2 Bound pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970 1Gi RWX managed-nfs-storage 76s
  29. [liwm@rmaster01 liwm]$ kubectl get pv
  30. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  31. pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924 1Gi RWX Delete Bound default/www-web-1 managed-nfs-storage 93s
  32. pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3 1Gi RWX Delete Bound default/www-web-0 managed-nfs-storage 2m16s
  33. pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970 1Gi RWX Delete Bound default/www-web-2 managed-nfs-storage 78s
  34. [liwm@rmaster01 liwm]$ kubectl get pod -o wide
  35. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  36. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 9m2s 10.42.4.44 node01 <none> <none>
  37. web-0 1/1 Running 0 4m15s 10.42.4.46 node01 <none> <none>
  38. [liwm@rmaster01 liwm]$
  39. [liwm@rmaster01 liwm]$ kubectl get pvc
  40. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  41. www-web-0 Bound pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3 1Gi RWX managed-nfs-storage 5m16s
  42. www-web-1 Bound pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924 1Gi RWX managed-nfs-storage 4m32s
  43. www-web-2 Bound pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970 1Gi RWX managed-nfs-storage 4m17s
  44. [liwm@rmaster01 liwm]$ kubectl get pod
  45. NAME READY STATUS RESTARTS AGE
  46. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 10m
  47. web-0 1/1 Running 0 5m26s
  48. [liwm@rmaster01 liwm]$ kubectl delete pvc www-web-1
  49. persistentvolumeclaim "www-web-1" deleted
  50. [liwm@rmaster01 liwm]$ kubectl delete pvc www-web-2
  51. persistentvolumeclaim "www-web-2" deleted
  52. [liwm@rmaster01 liwm]$ kubectl get pvc
  53. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  54. www-web-0 Bound pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3 1Gi RWX managed-nfs-storage 6m9s
  55. [liwm@rmaster01 liwm]$ kubectl exec -it web-0 bash
  56. root@web-0:/# cd /usr/share/nginx/html/
  57. root@web-0:/usr/share/nginx/html# ls
  58. root@web-0:/usr/share/nginx/html# echo nfs-server > index.html
  59. root@web-0:/usr/share/nginx/html# exit
  60. exit
  61. [liwm@rmaster01 liwm]$ cd /storage/
  62. [liwm@rmaster01 storage]$ ls
  63. archived-default-nginx-test-pvc-aa1b584b-850c-49a7-834d-77cb56a2f6e1 default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3
  64. archived-default-www-web-1-pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924 pvc-nfs.yaml
  65. archived-default-www-web-2-pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970
  66. [liwm@rmaster01 storage]$ cd default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3
  67. [liwm@rmaster01 default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$ ls
  68. index.html
  69. [liwm@rmaster01 default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$ cat index.html
  70. nfs-server
  71. [liwm@rmaster01 default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$
  1. [liwm@rmaster01 ~]$ kubectl get pod
  2. NAME READY STATUS RESTARTS AGE
  3. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 15m
  4. web-0 1/1 Running 0 11m
  5. [liwm@rmaster01 ~]$ kubectl delete -f statefulset.yaml
  6. statefulset.apps "web" deleted
  7. [liwm@rmaster01 ~]$ kubectl get pod
  8. NAME READY STATUS RESTARTS AGE
  9. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 16m
  10. web-0 0/1 Terminating 0 11m
  11. [liwm@rmaster01 ~]$ kubectl get pvc
  12. No resources found in default namespace.
  13. [liwm@rmaster01 ~]$ kubectl get pv
  14. No resources found in default namespace.
  15. [liwm@rmaster01 ~]$
  16. [liwm@rmaster01 ~]$ cd /storage/
  17. [liwm@rmaster01 storage]$ ls
  18. archived-default-nginx-test-pvc-aa1b584b-850c-49a7-834d-77cb56a2f6e1
  19. archived-default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3
  20. archived-default-www-web-1-pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924
  21. archived-default-www-web-2-pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970
  22. pvc-nfs.yaml
  23. [liwm@rmaster01 storage]$ cd archived-default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3
  24. [liwm@rmaster01 archived-default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$ ls
  25. index.html
  26. [liwm@rmaster01 archived-default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$ cat index.html
  27. nfs-server
  28. [liwm@rmaster01 archived-default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$

Test 3: set the NFS StorageClass as the default, then create a workload whose PVCs do not specify a storageClassName and check that the claims are still provisioned.
# Mark managed-nfs-storage as the default class
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
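To verify, the default class is marked with a (default) suffix in kubectl output (the columns shown here are abbreviated):

kubectl get storageclass
# NAME                            PROVISIONER      RECLAIMPOLICY   ...
# managed-nfs-storage (default)   fuseim.pri/ifs   Delete          ...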

For the test, write a manifest that does not specify a storageClassName:

cat <<EOF> statefulset2.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 2 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: html
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF

kubectl apply -f statefulset2.yaml


ConfigMap

The ConfigMap resource decouples configuration files from the container image and provides a way to inject configuration data into Pods, keeping containerized applications portable.

Decoupling configuration from the image keeps images portable and reusable.
A ConfigMap is simply a collection of configuration data that can be injected into the containers of a Pod.
There are two injection methods:
1: mount the ConfigMap as a volume
2: inject ConfigMap keys into container environment variables via env/configMapKeyRef (or envFrom)
ConfigMap data is stored as key-value pairs (a quick imperative example follows).
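As a quick sketch (the names and values mirror the YAML below), the same kind of key-value ConfigMap can also be created imperatively:

# Create a ConfigMap from literal key-value pairs instead of a manifest
kubectl create configmap test-config --from-literal=username=damon --from-literal=password=redhat
# Inspect the stored key-value pairs
kubectl get configmap test-config -o yaml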

Using a ConfigMap as environment variables
# Create the ConfigMap from a YAML manifest

cat << EOF > configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
data:
  username: damon
  password: redhat
EOF


  1. [liwm@rmaster01 liwm]$ kubectl create -f configmap.yaml
  2. configmap/test-config created
  3. [liwm@rmaster01 liwm]$ kubectl get configmaps test-config
  4. NAME DATA AGE
  5. test-config 2 17s
  6. [liwm@rmaster01 liwm]$
  7. [liwm@rmaster01 liwm]$ kubectl describe configmaps test-config
  8. Name: test-config
  9. Namespace: default
  10. Labels: <none>
  11. Annotations: <none>
  12. Data
  13. ====
  14. password:
  15. ----
  16. redhat
  17. username:
  18. ----
  19. damon
  20. Events: <none>
  21. [liwm@rmaster01 liwm]$

# Inject the whole ConfigMap into a Pod's environment with envFrom

cat << EOF > config-pod-env1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-env-pod
spec:
  containers:
  - name: test-container
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    command: [ "/bin/sh", "-c", "sleep 1000000" ]
    envFrom:
    - configMapRef:
        name: test-config
EOF
  1. [liwm@rmaster01 liwm]$ kubectl get pod
  2. NAME READY STATUS RESTARTS AGE
  3. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 84m
  4. test-configmap-env-pod 1/1 Running 0 89s
  5. [liwm@rmaster01 liwm]$ kubectl exec -it test-configmap-env-pod -- env
  6. PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  7. HOSTNAME=test-configmap-env-pod
  8. TERM=xterm
  9. username=damon
  10. password=redhat
  11. TEST1_PORT_8081_TCP_PROTO=tcp
  12. TEST1_PORT_8081_TCP_PORT=8081
  13. TEST2_SERVICE_PORT=8081
  14. TEST2_PORT_8081_TCP_ADDR=10.43.34.138
  15. KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
  16. TEST1_SERVICE_HOST=10.43.67.146
  17. TEST2_SERVICE_HOST=10.43.34.138
  18. TEST2_PORT=tcp://10.43.34.138:8081
  19. TEST2_PORT_8081_TCP=tcp://10.43.34.138:8081
  20. KUBERNETES_SERVICE_PORT=443
  21. KUBERNETES_SERVICE_PORT_HTTPS=443
  22. KUBERNETES_PORT_443_TCP_PROTO=tcp
  23. KUBERNETES_PORT_443_TCP_PORT=443
  24. TEST1_SERVICE_PORT=8081
  25. TEST1_PORT_8081_TCP=tcp://10.43.67.146:8081
  26. TEST2_PORT_8081_TCP_PORT=8081
  27. KUBERNETES_SERVICE_HOST=10.43.0.1
  28. KUBERNETES_PORT=tcp://10.43.0.1:443
  29. KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
  30. TEST1_PORT=tcp://10.43.67.146:8081
  31. TEST1_PORT_8081_TCP_ADDR=10.43.67.146
  32. TEST2_PORT_8081_TCP_PROTO=tcp
  33. HOME=/
  34. [liwm@rmaster01 liwm]$

Using ConfigMap values in a Pod's command line via environment variables

cat << EOF > config-pod-env2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-command-env-pod
spec:
  containers:
  - name: test-container
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    command: [ "/bin/sh", "-c", "echo \$(MYSQLUSER) \$(MYSQLPASSWD); sleep 1000000" ]
    env:
    - name: MYSQLUSER
      valueFrom:
        configMapKeyRef:
          name: test-config
          key: username
    - name: MYSQLPASSWD
      valueFrom:
        configMapKeyRef:
          name: test-config
          key: password
EOF
  1. [liwm@rmaster01 liwm]$ kubectl create -f config-pod-env2.yaml
  2. pod/test-configmap-command-env-pod created
  3. [liwm@rmaster01 liwm]$ kubectl get pod
  4. NAME READY STATUS RESTARTS AGE
  5. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 92m
  6. test-configmap-command-env-pod 1/1 Running 0 9s
  7. [liwm@rmaster01 liwm]$ kubectl exec -it test-configmap-command-env-pod -- env
  8. PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  9. HOSTNAME=test-configmap-command-env-pod
  10. TERM=xterm
  11. MYSQLUSER=damon
  12. MYSQLPASSWD=redhat
  13. KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
  14. TEST1_PORT=tcp://10.43.67.146:8081
  15. TEST2_SERVICE_PORT=8081
  16. TEST2_PORT_8081_TCP_PORT=8081
  17. KUBERNETES_PORT_443_TCP_PROTO=tcp
  18. KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
  19. TEST1_SERVICE_PORT=8081
  20. TEST1_PORT_8081_TCP_PORT=8081
  21. TEST1_PORT_8081_TCP_ADDR=10.43.67.146
  22. TEST2_PORT_8081_TCP_PROTO=tcp
  23. TEST2_PORT_8081_TCP_ADDR=10.43.34.138
  24. KUBERNETES_SERVICE_HOST=10.43.0.1
  25. KUBERNETES_PORT=tcp://10.43.0.1:443
  26. TEST1_SERVICE_HOST=10.43.67.146
  27. TEST1_PORT_8081_TCP=tcp://10.43.67.146:8081
  28. TEST2_SERVICE_HOST=10.43.34.138
  29. TEST2_PORT=tcp://10.43.34.138:8081
  30. TEST2_PORT_8081_TCP=tcp://10.43.34.138:8081
  31. KUBERNETES_SERVICE_PORT=443
  32. KUBERNETES_SERVICE_PORT_HTTPS=443
  33. KUBERNETES_PORT_443_TCP_PORT=443
  34. TEST1_PORT_8081_TCP_PROTO=tcp
  35. HOME=/
  36. [liwm@rmaster01 liwm]$

# Using a ConfigMap as a volume mount

# Create a ConfigMap from a configuration file
echo 123 > index.html
kubectl create configmap web-config --from-file=index.html
# Mount it into a Pod as a volume

  1. [liwm@rmaster01 liwm]$ echo 123 > index.html
  2. [liwm@rmaster01 liwm]$ kubectl create configmap web-config --from-file=index.html
  3. configmap/web-config created
  4. [liwm@rmaster01 liwm]$
  5. [liwm@rmaster01 liwm]$ kubectl describe configmaps web-config
  6. Name: web-config
  7. Namespace: default
  8. Labels: <none>
  9. Annotations: <none>
  10. Data
  11. ====
  12. index.html:
  13. ----
  14. 123
  15. Events: <none>
  16. [liwm@rmaster01 liwm]$
cat << EOF > test-configmap-volume-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-volume-pod
spec:
  volumes:
  - name: config-volume
    configMap:
      name: web-config
  containers:
  - name: test-container
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: config-volume
      mountPath: /usr/share/nginx/html
EOF
  1. [liwm@rmaster01 liwm]$ kubectl create -f test-configmap-volume-pod.yaml
  2. pod/test-configmap-volume-pod created
  3. [liwm@rmaster01 liwm]$ kubectl get pod
  4. NAME READY STATUS RESTARTS AGE
  5. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 98m
  6. test-configmap-volume-pod 1/1 Running 0 8s
  7. [liwm@rmaster01 liwm]$ kubectl exec -it test-configmap-volume-pod bash
  8. root@test-configmap-volume-pod:/# cd /usr/share/nginx/html/
  9. root@test-configmap-volume-pod:/usr/share/nginx/html# cat index.html
  10. 123
  11. root@test-configmap-volume-pod:/usr/share/nginx/html#

# Using subPath

cat << EOF > test-configmap-subpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-pod01
spec:
  volumes:
  - name: config-volume
    configMap:
      name: web-config
  containers:
  - name: test-container
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: config-volume
      mountPath: /usr/share/nginx/html/index.html
      subPath: index.html
EOF
  1. [liwm@rmaster01 liwm]$ kubectl create -f test-configmap-subpath.yaml
  2. pod/test-configmap-volume-pod created
  3. [liwm@rmaster01 liwm]$ kubectl get pod
  4. NAME READY STATUS RESTARTS AGE
  5. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 102m
  6. test-configmap-volume-pod 1/1 Running 0 4s
  7. [liwm@rmaster01 liwm]$ kubectl exec -it test-configmap-volume-pod bash
  8. root@test-configmap-volume-pod:/# cd /usr/share/nginx/html/
  9. root@test-configmap-volume-pod:/usr/share/nginx/html# ls
  10. 50x.html index.html
  11. root@test-configmap-volume-pod:/usr/share/nginx/html# cat index.html
  12. 123
  13. root@test-configmap-volume-pod:/usr/share/nginx/html#

# Automatic updates with a mountPath directory mount (the transcript below shows the directory mount picking up the ConfigMap edit while the subPath mount does not)

cat << EOF > test-configmap-mountpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-pod02
spec:
  volumes:
  - name: config-volume
    configMap:
      name: web-config
  containers:
  - name: test-container
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: config-volume
      mountPath: /usr/share/nginx/html
EOF
  1. [liwm@rmaster01 liwm]$ kubectl create -f .
  2. pod/test-configmap-pod02 created
  3. pod/test-configmap-pod01 created
  4. [liwm@rmaster01 liwm]$
  5. [liwm@rmaster01 liwm]$ kubectl get pod -o wide
  6. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  7. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 135m 10.42.4.44 node01 <none> <none>
  8. test-configmap-pod01 1/1 Running 0 21s 10.42.4.68 node01 <none> <none>
  9. test-configmap-pod02 1/1 Running 0 21s 10.42.4.67 node01 <none> <none>
  10. [liwm@rmaster01 liwm]$ curl 10.42.4.68
  11. test
  12. [liwm@rmaster01 liwm]$ curl 10.42.4.67
  13. test
  14. [liwm@rmaster01 liwm]$ kubectl edit configmaps web-config
  15. configmap/web-config edited
  16. [liwm@rmaster01 liwm]$ curl 10.42.4.68
  17. test
  18. [liwm@rmaster01 liwm]$ curl 10.42.4.67
  19. uat
  20. [liwm@rmaster01 liwm]$

Secret

The Secret resource passes sensitive data such as passwords, tokens, and keys to Pods. Secret volumes are backed by tmpfs (a RAM-based filesystem), so Secret data is never written to persistent storage on the node.

There are three common Secret types (see the commands below):
generic: general-purpose key-value data (environment variables, plain text), typically used for passwords.
tls: stores a private key and its certificate.
docker-registry: stores credentials for authenticating to a Docker image registry.

[liwm@rmaster01 liwm]$ kubectl create secret
docker-registry generic tls
[liwm@rmaster01 liwm]$
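A hedged sketch of creating each type imperatively (the names, files, and credentials are illustrative):

# generic: key-value data from literals
kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=redhat
# tls: a certificate/key pair (tls.crt and tls.key are assumed to exist)
kubectl create secret tls web-tls --cert=tls.crt --key=tls.key
# docker-registry: registry credentials (the same form is used for Harbor later in this document)
kubectl create secret docker-registry registry-cred --docker-server=registry.example.com --docker-username=user --docker-password=pass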


Using a Secret as environment variables
# Encode the values by hand
echo -n 'admin' | base64
YWRtaW4=

echo -n 'redhat' | base64
cmVkaGF0
# Decode
echo 'YWRtaW4=' | base64 --decode
# returns: admin

echo 'cmVkaGF0' | base64 --decode
# returns: redhat

Create the Secret manifest

cat << EOF > secret-env.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret-env
type: Opaque
data:
  username: YWRtaW4=
  password: cmVkaGF0
EOF
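A minimal alternative sketch: the stringData field accepts plain text and the API server stores it base64-encoded under data, which avoids the manual base64 step above:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret-env
type: Opaque
stringData:                         # plain-text input; stored base64-encoded in .data
  username: admin
  password: redhat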
  1. [liwm@rmaster01 liwm]$ kubectl create -f secret-env.yaml secret/mysecret-env created
  2. [liwm@rmaster01 liwm]$ kubectl get configmaps
  3. No resources found in default namespace.
  4. [liwm@rmaster01 liwm]$ kubectl get secrets
  5. NAME TYPE DATA AGE
  6. default-token-qxwg5 kubernetes.io/service-account-token 3 20d
  7. istio.default istio.io/key-and-cert 3 15d
  8. mysecret-env Opaque 2 24s
  9. nfs-client-provisioner-token-797f6 kubernetes.io/service-account-token 3 13d
  10. [liwm@rmaster01 liwm]$
  11. [liwm@rmaster01 liwm]$ kubectl describe secrets mysecret-env
  12. Name: mysecret-env
  13. Namespace: default
  14. Labels: <none>
  15. Annotations: <none>
  16. Type: Opaque
  17. Data
  18. ====
  19. password: 6 bytes
  20. username: 5 bytes
  21. [liwm@rmaster01 liwm]$

Consuming the Secret in a Pod via envFrom

cat << EOF > secret-pod-env1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-secret
spec:
  containers:
  - name: envars-test-container
    image: nginx
    imagePullPolicy: IfNotPresent
    envFrom:
    - secretRef:
        name: mysecret-env
EOF
  1. [liwm@rmaster01 liwm]$ kubectl create -f secret-pod-env1.yaml
  2. pod/envfrom-secret created
  3. [liwm@rmaster01 liwm]$
  4. [liwm@rmaster01 liwm]$ kubectl get pod
  5. NAME READY STATUS RESTARTS AGE
  6. envfrom-secret 1/1 Running 0 111s
  7. nfs-client-provisioner-7757d98f8c-m66g5 1/1 Running 0 176m
  8. [liwm@rmaster01 liwm]$ kubectl exec -it envfrom-secret -- env
  9. PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  10. HOSTNAME=envfrom-secret
  11. TERM=xterm
  12. password=redhat
  13. username=admin
  14. KUBERNETES_PORT_443_TCP_PORT=443
  15. TEST1_PORT_8081_TCP=tcp://10.43.67.146:8081
  16. TEST2_PORT_8081_TCP=tcp://10.43.34.138:8081
  17. TEST2_PORT_8081_TCP_ADDR=10.43.34.138
  18. KUBERNETES_SERVICE_PORT=443
  19. TEST2_PORT_8081_TCP_PROTO=tcp
  20. KUBERNETES_SERVICE_HOST=10.43.0.1
  21. TEST1_PORT=tcp://10.43.67.146:8081
  22. TEST1_PORT_8081_TCP_ADDR=10.43.67.146
  23. TEST2_SERVICE_PORT=8081
  24. TEST2_PORT=tcp://10.43.34.138:8081
  25. KUBERNETES_PORT_443_TCP_PROTO=tcp
  26. TEST1_SERVICE_PORT=8081
  27. TEST1_PORT_8081_TCP_PROTO=tcp
  28. TEST1_PORT_8081_TCP_PORT=8081
  29. KUBERNETES_SERVICE_PORT_HTTPS=443
  30. KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
  31. KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
  32. TEST1_SERVICE_HOST=10.43.67.146
  33. TEST2_SERVICE_HOST=10.43.34.138
  34. TEST2_PORT_8081_TCP_PORT=8081
  35. KUBERNETES_PORT=tcp://10.43.0.1:443
  36. NGINX_VERSION=1.17.9
  37. NJS_VERSION=0.3.9
  38. PKG_RELEASE=1~buster
  39. HOME=/root
  40. [liwm@rmaster01 liwm]$

Custom environment variable names: SECRET_USERNAME and SECRET_PASSWORD

cat << EOF > secret-pod-env2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-env-secret
spec:
  containers:
  - name: mycontainer
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    command: [ "/bin/sh", "-c", "echo \$(SECRET_USERNAME) \$(SECRET_PASSWORD); sleep 1000000" ]
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret-env
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret-env
          key: password
EOF

# Mounting a Secret as a volume
# Create a Secret from a configuration file
kubectl create secret generic web-secret --from-file=index.html

Mount the Secret into a Pod as a volume

cat << EOF > pod-volume-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-volume-secret
spec:
  containers:
  - name: pod-volume-secret
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: test-web
      mountPath: "/usr/share/nginx/html"
      readOnly: true
  volumes:
  - name: test-web
    secret:
      secretName: web-secret
EOF

Use case: a docker-registry Secret for a private Harbor registry
http://192.168.31.131:8080/

Download the offline installer
wget https://github.com/goharbor/harbor/releases/download/v1.10.0/harbor-offline-installer-v1.10.0.tgz

Download docker-compose
wget https://docs.rancher.cn/download/compose/v1.25.4-docker-compose-Linux-x86_64
chmod +x v1.25.4-docker-compose-Linux-x86_64 && mv v1.25.4-docker-compose-Linux-x86_64 /usr/local/bin/docker-compose

Edit the Docker daemon.json and add the insecure private registry

[root@rmaster02 harbor]# cat /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.31.131:8080"],
  "max-concurrent-downloads": 3,
  "max-concurrent-uploads": 5,
  "registry-mirrors": ["https://0bb06s1q.mirror.aliyuncs.com"],
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}

[root@rmaster01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.31.130 rmaster01
192.168.31.131 rmaster02
192.168.31.132 rmaster03
192.168.31.133 node01
192.168.31.134 node02
192.168.31.133 node01 riyimei.cn
[root@rmaster01 ~]#
[root@rmaster01 ~]# scp /etc/docker/daemon.json node01:/etc/docker/
[root@rmaster01 ~]# scp /etc/docker/daemon.json node02:/etc/docker/
systemctl restart docker

Before installing Harbor, edit harbor.yml in the Harbor installation directory, then run:
./install.sh
./install.sh --with-trivy --with-clair

Log in to the web UI, create a user, set a password, and create a project
user: liwm
password: AAbb0101

Log in to the private registry with docker login
docker login 192.168.31.131:8080

Push an image to the private project
docker tag nginx:latest 192.168.31.131:8080/test/nginx:latest
docker push 192.168.31.131:8080/test/nginx:latest

Create the Secret

kubectl create secret docker-registry harbor-secret --docker-server=192.168.31.131:8080 --docker-username=liwm --docker-password=AAbb0101 --docker-email=liweiming0611@163.com

Create a Pod that pulls the private image via imagePullSecrets

cat << EOF > harbor-sc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx-demo
    image: 192.168.31.131:8080/test/nginx:latest
  imagePullSecrets:
  - name: harbor-secret
EOF

kubectl create -f harbor-sc.yaml

  1. [liwm@rmaster01 liwm]$ kubectl create -f harbor-sc.yaml
  2. pod/nginx created
  3. [liwm@rmaster01 liwm]$ kubectl get pod -o wide
  4. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  5. nfs-client-provisioner-7757d98f8c-97cjt 1/1 Running 0 4m42s 10.42.4.85 node01 <none> <none>
  6. nginx 0/1 ImagePullBackOff 0 5s 10.42.4.87 node01 <none> <none>
  7. [liwm@rmaster01 liwm]$ kubectl create secret docker-registry harbor-secret --docker-server=192.168.31.131:8080 --docker-username=liwm --docker-password=AAbb0101 --docker-email=liweiming0611@163.com
  8. secret/harbor-secret created
  9. [liwm@rmaster01 liwm]$ kubectl apply -f harbor-sc.yaml
  10. Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
  11. pod/nginx configured
  12. [liwm@rmaster01 liwm]$ kubectl get pod -o wide
  13. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  14. nfs-client-provisioner-7757d98f8c-97cjt 1/1 Running 0 5m28s 10.42.4.85 node01 <none> <none>
  15. nginx 1/1 Running 0 51s 10.42.4.87 node01 <none> <none>
  16. [liwm@rmaster01 liwm]$ kubectl describe pod nginx
  17. Name: nginx
  18. Namespace: default
  19. Priority: 0
  20. Node: node01/192.168.31.133
  21. Start Time: Sat, 28 Mar 2020 00:13:43 +0800
  22. Labels: <none>
  23. Annotations: cni.projectcalico.org/podIP: 10.42.4.87/32
  24. kubectl.kubernetes.io/last-applied-configuration:
  25. {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"containers":[{"image":"192.168...
  26. Status: Running
  27. IP: 10.42.4.87
  28. IPs:
  29. IP: 10.42.4.87
  30. Containers:
  31. secret-pod:
  32. Container ID: docker://53700ed416c3d549c69e6269af09549ac81ef286ae0d02a765e6e1ba3444fc77
  33. Image: 192.168.31.131:8080/test/nginx:latest
  34. Image ID: docker-pullable://192.168.31.131:8080/test/nginx@sha256:3936fb3946790d711a68c58be93628e43cbca72439079e16d154b5db216b58da
  35. Port: <none>
  36. Host Port: <none>
  37. State: Running
  38. Started: Sat, 28 Mar 2020 00:14:25 +0800
  39. Ready: True
  40. Restart Count: 0
  41. Environment: <none>
  42. Mounts:
  43. /var/run/secrets/kubernetes.io/serviceaccount from default-token-qxwg5 (ro)
  44. Conditions:
  45. Type Status
  46. Initialized True
  47. Ready True
  48. ContainersReady True
  49. PodScheduled True
  50. Volumes:
  51. default-token-qxwg5:
  52. Type: Secret (a volume populated by a Secret)
  53. SecretName: default-token-qxwg5
  54. Optional: false
  55. QoS Class: BestEffort
  56. Node-Selectors: <none>
  57. Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
  58. node.kubernetes.io/unreachable:NoExecute for 300s
  59. Events:
  60. Type Reason Age From Message
  61. ---- ------ ---- ---- -------
  62. Normal Scheduled <unknown> default-scheduler Successfully assigned default/nginx to node01
  63. Warning Failed 44s (x2 over 58s) kubelet, node01 Failed to pull image "192.168.31.131:8080/test/nginx:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for 192.168.31.131:8080/test/nginx, repository does not exist or may require 'docker login'
  64. Warning Failed 44s (x2 over 58s) kubelet, node01 Error: ErrImagePull
  65. Normal BackOff 33s (x3 over 58s) kubelet, node01 Back-off pulling image "192.168.31.131:8080/test/nginx:latest"
  66. Warning Failed 33s (x3 over 58s) kubelet, node01 Error: ImagePullBackOff
  67. Normal Pulling 19s (x3 over 59s) kubelet, node01 Pulling image "192.168.31.131:8080/test/nginx:latest"
  68. Normal Pulled 19s kubelet, node01 Successfully pulled image "192.168.31.131:8080/test/nginx:latest"
  69. Normal Created 19s kubelet, node01 Created container secret-pod
  70. Normal Started 18s kubelet, node01 Started container secret-pod
  71. [liwm@rmaster01 liwm]$

emptyDir

An emptyDir volume is created when a Pod is assigned to a node and exists for as long as that Pod keeps running on that node. All containers in the Pod can read and write the same files in the emptyDir volume, even though it may be mounted at the same or different paths in each container. If the Pod is removed from the node for any reason, the data in the emptyDir is deleted permanently.
Note: a container crash does not remove the Pod from its node, so data in an emptyDir is safe across container crashes.
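A hedged variant: setting medium: Memory backs the emptyDir with tmpfs (RAM), which is faster but counts against the container's memory usage and is cleared on node reboot (the Pod and paths here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-memory-pod         # illustrative name
spec:
  containers:
  - name: app
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'sleep 3600000']
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory                # tmpfs-backed scratch space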

cat << EOF > emptydir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-pod
  labels:
    app: myapp
spec:
  volumes:
  - name: storage
    emptyDir: {}
  containers:
  - name: myapp1
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: storage
      mountPath: /storage
    command: ['sh', '-c', 'sleep 3600000']
  - name: myapp2
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: storage
      mountPath: /storage
    command: ['sh', '-c', 'sleep 10000000']
EOF

kubectl apply -f emptydir.yaml

The kubelet maintains the containers: removing the containers directly with docker rm does not delete the emptyDir data, as shown below.

[liwm@rmaster01 liwm]$ kubectl create -f emptydir.yaml 
pod/emptydir-pod created
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS              RESTARTS   AGE
emptydir-pod                              0/2     ContainerCreating   0          4s
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running             0          4m6s
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
emptydir-pod                              2/2     Running   0          15s
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running   0          4m17s
[liwm@rmaster01 liwm]$ kubectl exec -it emptydir-pod -c myapp1 sh
/ # cd /storage/
/storage # touch 222
/storage # exit
[liwm@rmaster01 liwm]$ kubectl exec -it emptydir-pod -c myapp2 sh

/ # 
/ # cd /storage/
/storage # ls
222
/storage # exit
[liwm@rmaster01 liwm]$ 
[liwm@rmaster01 liwm]$ kubectl get pod -o wide 
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
emptydir-pod                              2/2     Running   0          2m10s   10.42.4.90   node01   <none>           <none>
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running   0          6m12s   10.42.4.89   node01   <none>           <none>

[root@node01 ~]# docker ps |grep emptydir-pod
6b3623beca2c        fffcfdfce622                         "sh -c 'sleep 100000…"   4 minutes ago       Up 4 minutes                            k8s_myapp2_emptydir-pod_default_1b5bdd0d-346c-42eb-9151-025a1366b4e5_0
d07d098747a2        fffcfdfce622                         "sh -c 'sleep 360000…"   4 minutes ago       Up 4 minutes                            k8s_myapp1_emptydir-pod_default_1b5bdd0d-346c-42eb-9151-025a1366b4e5_0
c0aef4d253fb        rancher/pause:3.1                    "/pause"                 4 minutes ago       Up 4 minutes                            k8s_POD_emptydir-pod_default_1b5bdd0d-346c-42eb-9151-025a1366b4e5_0
[root@node01 ~]# docker inspect 6b3623beca2c|grep storage
                "/var/lib/kubelet/pods/1b5bdd0d-346c-42eb-9151-025a1366b4e5/volumes/kubernetes.io~empty-dir/storage:/storage",
                "Source": "/var/lib/kubelet/pods/1b5bdd0d-346c-42eb-9151-025a1366b4e5/volumes/kubernetes.io~empty-dir/storage",
                "Destination": "/storage",

[root@node01 ~]# cd /var/lib/kubelet/pods/1b5bdd0d-346c-42eb-9151-025a1366b4e5/volumes/kubernetes.io~empty-dir/storage
[root@node01 storage]# ls
222
[root@node01 storage]# 
[root@node01 storage]# docker rm -f 6b3623beca2c d07d098747a2
6b3623beca2c
d07d098747a2
[root@node01 storage]# ls
222
[root@node01 storage]#


[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
emptydir-pod                              2/2     Running   0          11m
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running   0          15m
[liwm@rmaster01 liwm]$ kubectl exec -it emptydir-pod -c myapp1 sh
/ # cd /storage/
/storage # ls
222
/storage # exit
[liwm@rmaster01 liwm]$ kubectl delete pod emptydir-pod 
pod "emptydir-pod" deleted
[liwm@rmaster01 liwm]$ 

[root@node01 storage]# ls
222
[root@node01 storage]# ls
[root@node01 storage]#


emptyDir + init-containers

cat << EOF > initcontainers.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  volumes:
  - name: storage
    emptyDir: {}
  containers:
  - name: myapp-containers
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: storage
      mountPath: /storage
    command: ['sh', '-c', 'if [ -f /storage/testfile ] ; then sleep 3600000 ; fi']
  initContainers:
  - name: init-containers
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: storage
      mountPath: /storage
    command: ['sh', '-c', 'touch /storage/testfile && sleep 10']
EOF
[liwm@rmaster01 liwm]$ kubectl create -f initcontainers.yaml 
pod/myapp-pod created
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS     RESTARTS   AGE
myapp-pod                                 0/1     Init:0/1   0          4s
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running    0          20m
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
myapp-pod                                 1/1     Running   0          36s
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running   0          20m
[liwm@rmaster01 liwm]$ kubectl exec -it myapp-pod -- ls /storage/
testfile
[liwm@rmaster01 liwm]$