PV and PVC Concepts

A PersistentVolume (PV) is an administrator-facing resource that maps directly to the underlying storage. Its lifecycle is independent of any individual Pod that uses it.
A PersistentVolumeClaim (PVC) is a user-facing request for storage; it specifies the required capacity and access mode.

A PV, as the storage resource,
mainly defines capacity, access modes, volume type, reclaim policy and the backend storage details.
A PVC, as the user's request for storage,
mainly defines the requested size, access modes, PV selection criteria and the storage class.

Kubernetes 1.0 introduced the PersistentVolume (PV) and PersistentVolumeClaim (PVC) resource objects to implement the storage management subsystem.
Kubernetes 1.4 introduced a new resource object, StorageClass, used to describe the characteristics and performance of a class of storage.
Kubernetes 1.7 added local volume management: partitions other than the system partition can be exposed as local persistent volumes for Kubernetes to schedule, following the PV/PVC model.
Kubernetes 1.9 introduced the Container Storage Interface (CSI) mechanism. Its goal is a standard storage management interface between Kubernetes and external storage systems, through which storage is provided to containers, similar to CRI (Container Runtime Interface) and CNI (Container Network Interface).
Kubernetes 1.13 added the volume mode setting (volumeMode=xxx), with the options Filesystem (the default) and Block for raw block devices such as RBD (Ceph Block Device).
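A minimal sketch of consuming a raw block volume (volumeMode: Block): the PVC asks for Block mode and the Pod attaches it with volumeDevices instead of volumeMounts. The storage class name fast-rbd and the device path are placeholders; the backing plugin must actually support block mode (for example RBD or a CSI driver).

cat << EOF > block-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block              # default is Filesystem
  storageClassName: fast-rbd     # placeholder class name
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer
spec:
  containers:
  - name: app
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeDevices:               # raw device, not a mounted filesystem
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc
EOF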

PV access modes:
ReadWriteOnce (RWO) – the volume can be mounted as read-write by a single node
ReadOnlyMany (ROX) – the volume can be mounted read-only by many nodes
ReadWriteMany (RWX) – the volume can be mounted as read-write by many nodes

A PV can have one of three reclaim policies:
Retain, Recycle and Delete.
Retain: keeps the volume and its data for manual handling. (default)
Recycle: scrubs the volume so it can be claimed again by a new PVC. Note: the Recycle policy is deprecated; dynamic provisioning is the recommended approach.
Delete: deletes the PV and the associated external storage; requires plugin support.

PV phases:
Available – the volume is not yet bound to a claim
Bound – the volume is bound to a claim
Released – the claim has been deleted, but the volume has not yet been reclaimed by the cluster
Failed – automatic reclamation of the volume failed

PV and PVC can be used in two ways:
Static
The cluster administrator creates a number of PVs in advance; users submit PVCs that bind to them. If none of the administrator-created PVs matches a user's PVC, the PVC stays in Pending.
Dynamic
When no static PV matches the user's PVC, a PV is created automatically from a StorageClass and bound to the claim. This requires the administrator to have created and configured a StorageClass beforehand.

PV Lifecycle


CSI Storage Interface


Shared storage for the cluster
The first approach is to integrate the Kubernetes cluster with traditional storage infrastructure via Samba, NFS or GlusterFS. It extends easily to cloud-based shared file systems such as Amazon EFS, Azure Files and Google Cloud Filestore.

In this architecture the storage layer is fully decoupled from the compute layer that Kubernetes manages. There are two ways for Pods to consume the shared storage:

Native provisioning: most shared file systems either have a volume plugin built into upstream Kubernetes or provide a Container Storage Interface (CSI) driver. This lets cluster administrators define Persistent Volumes declaratively, using parameters specific to the shared file system or managed service.

Host-based provisioning: a startup script runs on every node and mounts the shared storage there. Each node in the cluster then exposes a consistent, well-known mount point to workloads, and a Persistent Volume points at that host directory via hostPath or a Local PV (a Local PV sketch follows).
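For the Local PV variant just mentioned, the PV must pin itself to the node that owns the directory via nodeAffinity; hostPath PVs (shown later in this document) skip that step. A sketch, where the node name node01 and the path /mnt/disks/vol1 are assumptions for illustration:

cat << EOF > local-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1        # directory or mounted disk on the node
  nodeAffinity:                  # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node01
EOF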

Because durability and persistence are the responsibility of the underlying storage, workloads are fully decoupled from it. Pods can be scheduled on any node, and no node affinity needs to be defined for them to reach their storage.
However, this approach is not ideal for stateful workloads that need high I/O throughput. Shared file systems are not designed to deliver the IOPS required by relational databases, NoSQL databases and other write-heavy workloads.
Storage options: GlusterFS, Samba, NFS, Amazon EFS, Azure Files, Google Cloud Filestore.
Typical workloads: Content Management Systems, Machine Learning training/inference jobs, and Digital Asset Management systems.

Kubernetes maintains the desired state through controllers; Deployment, ReplicaSet, DaemonSet and StatefulSet are the commonly used ones.
StatefulSet is a special kind of controller that makes it easy to run clustered workloads in Kubernetes. Clustered workloads usually have one or more masters and several slaves, and most databases are designed to run in such a cluster mode to provide high availability and fault tolerance.
Stateful clustered workloads continuously replicate data between masters and slaves. For this, the cluster software expects the participating members (masters and slaves) to have consistent, well-known endpoints so they can reliably synchronize state. But Pods are designed to be short-lived and are not guaranteed to keep the same name or IP address.
Another requirement of stateful clustered workloads is a persistent backend that is fault tolerant and can handle the required IOPS.
StatefulSets were introduced to make running such workloads easier. Pods in a StatefulSet are guaranteed stable, unique identifiers; they follow a predictable naming convention and support ordered, graceful deployment and scaling.
Each Pod in a StatefulSet has a corresponding PersistentVolumeClaim (PVC) that follows a similar naming convention. When a Pod is terminated and rescheduled on a different node, the Kubernetes controller re-attaches it to the same PVC, so its state stays intact.
Because every Pod in a StatefulSet gets its own PVC and PV, there is no hard requirement to use shared storage. Still, a StatefulSet is expected to be backed by fast, reliable, durable storage such as SSD-based block devices. Once writes are fully committed to disk, regular backups and snapshots can be taken from the block device.
Storage options: SSDs and block storage devices such as Amazon EBS, Azure Disks, GCE PD.
Typical workloads: Apache ZooKeeper, Apache Kafka, Percona Server for MySQL, PostgreSQL Automatic Failover, and JupyterHub.

Storage inside a container is ephemeral, which causes problems for applications running in containers. First, when a container crashes the kubelet restarts it, but the files are gone: the container starts from a clean state. Second, when a Pod runs several containers, those containers often need to share files. Kubernetes Volumes solve both problems.

Create a Pod that uses a Volume

cat << EOF > test-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-volume
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx-web
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
EOF
[liwm@rmaster01 liwm]$ kubectl create -f test-volume.yaml 
pod/test-volume created
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME          READY   STATUS    RESTARTS   AGE
test-volume   1/1     Running   0          2m46s
[liwm@rmaster01 liwm]$ 
[liwm@rmaster01 liwm]$ 
[liwm@rmaster01 liwm]$ kubectl get pod -o wide 
NAME          READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
test-volume   1/1     Running   0          2m51s   10.42.4.37   node01   <none>           <none>

[root@node01 data]# echo nginx-404 > 50x.html 
[root@node01 data]# echo nginx-webserver > index.html 
[root@node01 data]#

[liwm@rmaster01 liwm]$ curl 10.42.4.37
nginx-webserver
[liwm@rmaster01 liwm]$ kubectl exec -it test-volume bash
root@test-volume:/# cat /usr/share/nginx/html/50x.html 
nginx-404
root@test-volume:/#


[liwm@rmaster01 liwm]$ kubectl describe pod test-volume
.....
Volumes:
  test-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /data
    HostPathType:

PV and PVC

Static provisioning
manual – this storageClassName will be used to bind PersistentVolumeClaim requests to this (hand-made) PV.

cat << EOF > pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual 
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/storage/pv1"
EOF

kubectl apply -f pv.yaml

[root@master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Available manual 7s


PVC
A PVC requesting 3Gi can bind to the 10Gi PV above; the claim gets the whole 10Gi volume.
PVs are cluster-scoped and do not belong to any namespace; PVCs are namespaced.

cat << EOF > pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
EOF

Create a Pod that uses the PVC

cat << EOF > pod-pvc.yaml
kind: Pod
apiVersion: v1
metadata:
  name: task-pvc-pod
spec:
  volumes:
    - name: task-pv-volume
      persistentVolumeClaim:
       claimName: task-pv-claim
  containers:
    - name: task-pvc-container
      image: nginx
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-volume
EOF
[liwm@rmaster01 liwm]$ kubectl create -f pv.yaml 
persistentvolume/task-pv-volume created
[liwm@rmaster01 liwm]$ kubectl get pv 
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO            Retain           Available           manual                  4s
[liwm@rmaster01 liwm]$ kubectl create -f pvc.yaml 
persistentvolumeclaim/task-pv-claim created
[liwm@rmaster01 liwm]$ kubectl get pvc
NAME            STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO            manual         3s
[liwm@rmaster01 liwm]$ kubectl get pv 
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO            Retain           Bound    default/task-pv-claim   manual                  26s

[liwm@rmaster01 liwm]$ kubectl create -f pod-pvc.yaml 
pod/task-pvc-pod created
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME           READY   STATUS    RESTARTS   AGE
task-pvc-pod   1/1     Running   0          3s
[liwm@rmaster01 liwm]$ kubectl get pod -o wide 
NAME           READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
task-pvc-pod   1/1     Running   0          10s   10.42.4.41   node01   <none>           <none>
[liwm@rmaster01 liwm]$ kubectl exec -it task-pvc-pod bash
root@task-pvc-pod:/# cd  /usr/share/nginx/html/
root@task-pvc-pod:/usr/share/nginx/html# ls
index.html
root@task-pvc-pod:/usr/share/nginx/html# cat index.html 
111

[root@node01 /]# cd storage/
[root@node01 storage]# 
[root@node01 storage]# ll
total 0
drwxr-xr-x 2 root root 24 Mar 14 13:59 pv1
[root@node01 storage]# cd pv1/
[root@node01 pv1]# 
[root@node01 pv1]# ls
index.html
[root@node01 pv1]# echo 222 > index.html 
[root@node01 pv1]# 
[root@node01 pv1]# cat index.html 
222
[root@node01 pv1]# 
root@task-pvc-pod:/usr/share/nginx/html# cat index.html 
222
root@task-pvc-pod:/usr/share/nginx/html#

On the master node:
[root@master01 pv1]# kubectl exec -it task-pv-pod bash
root@task-pv-pod:/# cd /usr/share/nginx/html/
root@task-pv-pod:/usr/share/nginx/html# ls
root@task-pv-pod:/usr/share/nginx/html# touch index.html
root@task-pv-pod:/usr/share/nginx/html# echo 11 > index.html
root@task-pv-pod:/usr/share/nginx/html# exit
exit
[root@master01 pv1]# curl 192.168.1.41
11

The Pod runs on node01, so check the hostPath on node01:
[root@node01 ~]# cd /storage/
[root@node01 storage]# ls
pv1
[root@node01 storage]# cd pv1/
[root@node01 pv1]# ls
index.html
[root@node01 pv1]

Reclaim policy
A PV's reclaim policy is one of Retain, Recycle or Delete.

cat << EOF > pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual 
  persistentVolumeReclaimPolicy: Recycle
  capacity:
    storage: 10Gi 
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/storage/pv1"
EOF
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME           READY   STATUS    RESTARTS   AGE
task-pvc-pod   1/1     Running   0          5s
[liwm@rmaster01 liwm]$ kubectl get pv 
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO            Recycle          Bound    default/task-pv-claim   manual                  54s
[liwm@rmaster01 liwm]$ kubectl get pvc
NAME            STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO            manual         33s
[liwm@rmaster01 liwm]$ kubectl delete pod task-pvc-pod 
pod "task-pvc-pod" deleted

[liwm@rmaster01 liwm]$ kubectl delete pvc task-pv-claim 
persistentvolumeclaim "task-pv-claim" deleted
[liwm@rmaster01 liwm]$ kubectl get pvc
No resources found in default namespace.
[liwm@rmaster01 liwm]$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO            Recycle          Available           manual                  3m7s
[liwm@rmaster01 liwm]$

Re-binding a PV
Delete the stale claimRef fields from the PV (shown below).

http://www.huilog.com/?p=1371

The PVC ends up in the Lost state; its events show the error: Two claims are bound to the same volume, this one is bound incorrectly.

Following the error into the Kubernetes source, the check lives in pkg/controller/volume/persistentvolume/pv_controller.go.

From the source it is clear the problem is that the PV's claimRef exists but claimRef.UID no longer matches the PVC's UID (the if / else-if branch).
The root cause: the YAML carries a uid, but when it is applied to a new cluster the PVC gets a freshly generated UID that no longer matches the one recorded in the PV.
So we remove the UID and the claimRef.
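As an alternative to the interactive kubectl edit shown below, the stale binding can be cleared non-interactively with kubectl patch; a sketch against the PV named mydata from this example:

# Drop the whole claimRef so the PV goes back to Available and can re-bind:
kubectl patch pv mydata -p '{"spec":{"claimRef": null}}'
# Or remove only the stale uid/resourceVersion so it re-binds to the PVC of the same name:
kubectl patch pv mydata --type json \
  -p '[{"op":"remove","path":"/spec/claimRef/uid"},{"op":"remove","path":"/spec/claimRef/resourceVersion"}]'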

[rancher@rmaster01 ~]$ kubectl edit pv mydata 

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    field.cattle.io/creatorId: user-vztn2
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2021-04-02T02:36:37Z"
  finalizers:
  - kubernetes.io/pv-protection
  - external-attacher/driver-longhorn-io
  name: mydata
  resourceVersion: "8008552"
  selfLink: /api/v1/persistentvolumes/mydata
  uid: 6cde640e-f0c0-4c31-8593-cb053f51976e
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: myweb
    namespace: default
    resourceVersion: "8008550"
    uid: 37fa727e-b0b1-4809-8890-f3180d6eb5c9
  csi:
    driver: driver.longhorn.io
    volumeAttributes:
      numberOfReplicas: "3"
      size: 2Gi
      staleReplicaTimeout: "20"
    volumeHandle: mydata
  persistentVolumeReclaimPolicy: Retain
  storageClassName: myweb
  volumeMode: Filesystem
status:
  phase: Bound
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: myweb
    namespace: default
    resourceVersion: "8008550"
    uid: 37fa727e-b0b1-4809-8890-f3180d6eb5c9


StorageClass

Preparing the NFS environment

Install the NFS server
yum -y install nfs-utils
# start the service and enable it at boot
systemctl enable --now nfs
# create the shared directory
mkdir /storage
# edit the NFS exports file
vim /etc/exports
/storage (rw,sync,no_root_squash)
# restart the service
systemctl restart nfs
# install the NFS client on the Kubernetes worker nodes
yum -y install nfs-utils
# test from a worker node
mkdir /test
mount.nfs 172.17.224.182:/storage /test
touch /test/123

Deploying the StorageClass provisioner

# Download the provisioner:
yum -y install git
https://github.com/kubernetes-incubator/external-storage
git clone https://github.com/kubernetes-incubator/external-storage.git

Edit the YAML (the fields to change):
[liwm@rmaster01 deploy]$ pwd
/home/liwm/yaml/nfs-client/deploy

vim deployment.yaml
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.31.130
            - name: NFS_PATH
              value: /storage
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.31.130
            path: /storage

[liwm@rmaster01 deploy]$ cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: jmgao1983/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.31.130
            - name: NFS_PATH
              value: /storage
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.31.130
            path: /storage
[liwm@rmaster01 deploy]$ cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[liwm@rmaster01 deploy]$ cat class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "false"
[liwm@rmaster01 deploy]$

reclaimPolicy: two options, Delete and Retain; the default is Delete.
fuseim.pri/ifs is the PROVISIONER_NAME defined in the deployment above (an explicit reclaimPolicy example follows).
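If the Delete default is not what you want, reclaimPolicy can be set on the StorageClass itself; a sketch based on the class.yaml above:

cat << EOF > class-retain.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # must match the deployment's PROVISIONER_NAME
reclaimPolicy: Retain         # Delete is the default when this field is omitted
parameters:
  archiveOnDelete: "false"
EOF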
# Deploy the provisioner

[liwm@rmaster01 liwm]$ kubectl describe storageclasses.storage.k8s.io managed-nfs-storage 
Name:            managed-nfs-storage
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"managed-nfs-storage"},"parameters":{"archiveOnDelete":"false"},"provisioner":"fuseim.pri/ifs"}
Provisioner:           fuseim.pri/ifs
Parameters:            archiveOnDelete=false
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
[liwm@rmaster01 liwm]$
kubectl apply -f rbac.yaml
kubectl apply -f deployment.yaml
kubectl apply -f class.yaml
[liwm@rmaster01 deploy]$ kubectl get pod  
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          25s
[liwm@rmaster01 deploy]$
[liwm@rmaster01 deploy]$ kubectl get storageclasses.storage.k8s.io  
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  13d
[liwm@rmaster01 deploy]$

mount.nfs 192.168.31.130:/storage /test

Testing

Test 1: after creating a PVC, a PV is created automatically and bound

cat << EOF > pvc-nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi
EOF
[liwm@rmaster01 liwm]$ kubectl create -f pvc-nfs.yaml 
persistentvolumeclaim/nginx-test created
[liwm@rmaster01 liwm]$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
nginx-test   Bound    pvc-aa1b584b-850c-49a7-834d-77cb56a2f6e1   1Gi        RWX            managed-nfs-storage   7s
[liwm@rmaster01 liwm]$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-aa1b584b-850c-49a7-834d-77cb56a2f6e1   1Gi        RWX            Delete           Bound    default/nginx-test   managed-nfs-storage            9s
[liwm@rmaster01 liwm]$

Test 2: create Pods (a StatefulSet); the PVCs and PVs are created automatically

cat << EOF > statefulset-pvc-nfs.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
EOF

Access mode:
ReadWriteMany – the volume can be mounted by multiple nodes

[liwm@rmaster01 liwm]$ kubectl create -f statefulset-pvc-nfs.yaml 
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          6m
web-0                                     1/1     Running   0          73s
web-1                                     1/1     Running   0          29s
web-2                                     1/1     Running   0          14s
[liwm@rmaster01 liwm]$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-web-0   Bound    pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3   1Gi        RWX            managed-nfs-storage   79s
www-web-1   Bound    pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924   1Gi        RWX            managed-nfs-storage   35s
www-web-2   Bound    pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970   1Gi        RWX            managed-nfs-storage   20s
[liwm@rmaster01 liwm]$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS          REASON   AGE
pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924   1Gi        RWX            Delete           Bound    default/www-web-1   managed-nfs-storage            36s
pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3   1Gi        RWX            Delete           Bound    default/www-web-0   managed-nfs-storage            79s
pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970   1Gi        RWX            Delete           Bound    default/www-web-2   managed-nfs-storage            21s
[liwm@rmaster01 liwm]$ kubectl scale statefulset --replicas=1 web 
statefulset.apps/web scaled
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          6m56s
web-0                                     1/1     Running   0          2m9s
[liwm@rmaster01 liwm]$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-web-0   Bound    pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3   1Gi        RWX            managed-nfs-storage   2m15s
www-web-1   Bound    pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924   1Gi        RWX            managed-nfs-storage   91s
www-web-2   Bound    pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970   1Gi        RWX            managed-nfs-storage   76s
[liwm@rmaster01 liwm]$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS          REASON   AGE
pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924   1Gi        RWX            Delete           Bound    default/www-web-1   managed-nfs-storage            93s
pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3   1Gi        RWX            Delete           Bound    default/www-web-0   managed-nfs-storage            2m16s
pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970   1Gi        RWX            Delete           Bound    default/www-web-2   managed-nfs-storage            78s

[liwm@rmaster01 liwm]$ kubectl get pod -o wide 
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          9m2s    10.42.4.44   node01   <none>           <none>
web-0                                     1/1     Running   0          4m15s   10.42.4.46   node01   <none>           <none>
[liwm@rmaster01 liwm]$ 
[liwm@rmaster01 liwm]$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-web-0   Bound    pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3   1Gi        RWX            managed-nfs-storage   5m16s
www-web-1   Bound    pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924   1Gi        RWX            managed-nfs-storage   4m32s
www-web-2   Bound    pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970   1Gi        RWX            managed-nfs-storage   4m17s
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          10m
web-0                                     1/1     Running   0          5m26s
[liwm@rmaster01 liwm]$ kubectl delete pvc www-web-1
persistentvolumeclaim "www-web-1" deleted
[liwm@rmaster01 liwm]$ kubectl delete pvc www-web-2
persistentvolumeclaim "www-web-2" deleted
[liwm@rmaster01 liwm]$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-web-0   Bound    pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3   1Gi        RWX            managed-nfs-storage   6m9s
[liwm@rmaster01 liwm]$ kubectl exec -it web-0 bash
root@web-0:/# cd  /usr/share/nginx/html/
root@web-0:/usr/share/nginx/html# ls
root@web-0:/usr/share/nginx/html# echo nfs-server > index.html
root@web-0:/usr/share/nginx/html# exit
exit
[liwm@rmaster01 liwm]$ cd /storage/
[liwm@rmaster01 storage]$ ls
archived-default-nginx-test-pvc-aa1b584b-850c-49a7-834d-77cb56a2f6e1  default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3
archived-default-www-web-1-pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924   pvc-nfs.yaml
archived-default-www-web-2-pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970
[liwm@rmaster01 storage]$ cd default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3
[liwm@rmaster01 default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$ ls
index.html
[liwm@rmaster01 default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$ cat index.html 
nfs-server
[liwm@rmaster01 default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$
[liwm@rmaster01 ~]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          15m
web-0                                     1/1     Running   0          11m
[liwm@rmaster01 ~]$ kubectl delete -f statefulset.yaml 
statefulset.apps "web" deleted
[liwm@rmaster01 ~]$ kubectl get pod 
NAME                                      READY   STATUS        RESTARTS   AGE
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running       0          16m
web-0                                     0/1     Terminating   0          11m
[liwm@rmaster01 ~]$ kubectl get pvc
No resources found in default namespace.
[liwm@rmaster01 ~]$ kubectl get pv
No resources found in default namespace.
[liwm@rmaster01 ~]$
[liwm@rmaster01 ~]$ cd /storage/
[liwm@rmaster01 storage]$ ls
archived-default-nginx-test-pvc-aa1b584b-850c-49a7-834d-77cb56a2f6e1
archived-default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3
archived-default-www-web-1-pvc-0da53d10-6b52-41b0-b11c-bbb0f440e924
archived-default-www-web-2-pvc-c7380539-7aa9-49c5-9cc5-69c0b9ee8970
pvc-nfs.yaml
[liwm@rmaster01 storage]$ cd archived-default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3
[liwm@rmaster01 archived-default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$ ls
index.html
[liwm@rmaster01 archived-default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$ cat index.html 
nfs-server
[liwm@rmaster01 archived-default-www-web-0-pvc-172176c7-e498-4e12-8e59-d5e78f5b1ef3]$

Test 3: mark the NFS StorageClass as the default, create a Pod without specifying a storageClass, and check whether the PVC request succeeds
# set managed-nfs-storage as the default class
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
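A quick check that the patch took effect: the default class is marked in the listing (output shape approximate):

kubectl get storageclass
# NAME                            PROVISIONER      RECLAIMPOLICY   ...
# managed-nfs-storage (default)   fuseim.pri/ifs   Delete          ...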

Test: write a YAML file that does not specify a storageClass

cat <<EOF> statefulset2.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 2 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: html
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF

kubectl apply -f statefulset2.yaml







ConfigMap

The ConfigMap resource separates configuration files from the image and provides a way to inject configuration data into Pods, keeping containerized applications portable.

The point is to decouple the image from its configuration so the image stays portable and reusable.
A ConfigMap is simply a collection of configuration data that can later be injected into the containers of a Pod.
There are two injection methods:
1: mount the ConfigMap as a volume
2: inject the ConfigMap into the container as environment variables via configMapKeyRef under env
A ConfigMap stores its data as key-value pairs.

Using it as environment variables
# create the env ConfigMap from a YAML file

cat << EOF > configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
data:
  username: damon
  password: redhat
EOF


[liwm@rmaster01 liwm]$ kubectl create -f configmap.yaml 
configmap/test-config created
[liwm@rmaster01 liwm]$ kubectl get configmaps test-config 
NAME          DATA   AGE
test-config   2      17s
[liwm@rmaster01 liwm]$
[liwm@rmaster01 liwm]$ kubectl describe configmaps test-config 
Name:         test-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
password:
----
redhat
username:
----
damon
Events:  <none>
[liwm@rmaster01 liwm]$

# Pod consuming the ConfigMap as environment variables

cat << EOF > config-pod-env1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-env-pod
spec:
  containers:
    - name: test-container
      image: radial/busyboxplus
      imagePullPolicy: IfNotPresent
      command: [ "/bin/sh", "-c", "sleep 1000000" ]
      envFrom:
      - configMapRef:
          name: test-config
EOF
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          84m
test-configmap-env-pod                    1/1     Running   0          89s
[liwm@rmaster01 liwm]$ kubectl exec -it test-configmap-env-pod -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=test-configmap-env-pod
TERM=xterm
username=damon
password=redhat
TEST1_PORT_8081_TCP_PROTO=tcp
TEST1_PORT_8081_TCP_PORT=8081
TEST2_SERVICE_PORT=8081
TEST2_PORT_8081_TCP_ADDR=10.43.34.138
KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
TEST1_SERVICE_HOST=10.43.67.146
TEST2_SERVICE_HOST=10.43.34.138
TEST2_PORT=tcp://10.43.34.138:8081
TEST2_PORT_8081_TCP=tcp://10.43.34.138:8081
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
TEST1_SERVICE_PORT=8081
TEST1_PORT_8081_TCP=tcp://10.43.67.146:8081
TEST2_PORT_8081_TCP_PORT=8081
KUBERNETES_SERVICE_HOST=10.43.0.1
KUBERNETES_PORT=tcp://10.43.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
TEST1_PORT=tcp://10.43.67.146:8081
TEST1_PORT_8081_TCP_ADDR=10.43.67.146
TEST2_PORT_8081_TCP_PROTO=tcp
HOME=/
[liwm@rmaster01 liwm]$

Using the ConfigMap env variables in the Pod's command line

cat << EOF > config-pod-env2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-command-env-pod
spec:
  containers:
    - name: test-container
      image: radial/busyboxplus
      imagePullPolicy: IfNotPresent
      command: [ "/bin/sh", "-c", "echo \$(MYSQLUSER) \$(MYSQLPASSWD); sleep 1000000" ]
      env:
        - name: MYSQLUSER
          valueFrom: 
            configMapKeyRef: 
              name: test-config
              key: username
        - name: MYSQLPASSWD
          valueFrom:
            configMapKeyRef: 
              name: test-config
              key: password
EOF
[liwm@rmaster01 liwm]$ kubectl create -f config-pod-env2.yaml 
pod/test-configmap-command-env-pod created
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          92m
test-configmap-command-env-pod            1/1     Running   0          9s
[liwm@rmaster01 liwm]$ kubectl exec -it test-configmap-command-env-pod -- env 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=test-configmap-command-env-pod
TERM=xterm
MYSQLUSER=damon
MYSQLPASSWD=redhat
KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
TEST1_PORT=tcp://10.43.67.146:8081
TEST2_SERVICE_PORT=8081
TEST2_PORT_8081_TCP_PORT=8081
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
TEST1_SERVICE_PORT=8081
TEST1_PORT_8081_TCP_PORT=8081
TEST1_PORT_8081_TCP_ADDR=10.43.67.146
TEST2_PORT_8081_TCP_PROTO=tcp
TEST2_PORT_8081_TCP_ADDR=10.43.34.138
KUBERNETES_SERVICE_HOST=10.43.0.1
KUBERNETES_PORT=tcp://10.43.0.1:443
TEST1_SERVICE_HOST=10.43.67.146
TEST1_PORT_8081_TCP=tcp://10.43.67.146:8081
TEST2_SERVICE_HOST=10.43.34.138
TEST2_PORT=tcp://10.43.34.138:8081
TEST2_PORT_8081_TCP=tcp://10.43.34.138:8081
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PORT=443
TEST1_PORT_8081_TCP_PROTO=tcp
HOME=/
[liwm@rmaster01 liwm]$

# Using it as a mounted Volume

# create a ConfigMap from a configuration file
echo 123 > index.html
kubectl create configmap web-config --from-file=index.html
# mount it into the Pod as a volume

[liwm@rmaster01 liwm]$ echo 123 > index.html
[liwm@rmaster01 liwm]$ kubectl create configmap web-config --from-file=index.html
configmap/web-config created
[liwm@rmaster01 liwm]$
[liwm@rmaster01 liwm]$ kubectl describe configmaps web-config 
Name:         web-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
index.html:
----
123

Events:  <none>
[liwm@rmaster01 liwm]$
cat << EOF > test-configmap-volume-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-volume-pod
spec:
  volumes:
    - name: config-volume
      configMap:
        name: web-config
  containers:
    - name: test-container
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
      - name: config-volume
        mountPath: /usr/share/nginx/html
EOF
[liwm@rmaster01 liwm]$ kubectl create -f test-configmap-volume-pod.yaml 
pod/test-configmap-volume-pod created
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          98m
test-configmap-volume-pod                 1/1     Running   0          8s
[liwm@rmaster01 liwm]$ kubectl exec -it test-configmap-volume-pod bash
root@test-configmap-volume-pod:/# cd /usr/share/nginx/html/
root@test-configmap-volume-pod:/usr/share/nginx/html# cat index.html 
123
root@test-configmap-volume-pod:/usr/share/nginx/html#

# Using subPath

cat << EOF > test-configmap-subpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-pod01
spec:
  volumes:
    - name: config-volume
      configMap:
        name: web-config
  containers:
    - name: test-container
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
      - name: config-volume
        mountPath: /usr/share/nginx/html/index.html
        subPath: index.html
EOF
[liwm@rmaster01 liwm]$ kubectl create -f test-configmap-subpath.yaml 
pod/test-configmap-volume-pod created
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          102m
test-configmap-volume-pod                 1/1     Running   0          4s
[liwm@rmaster01 liwm]$ kubectl exec -it test-configmap-volume-pod bash
root@test-configmap-volume-pod:/# cd /usr/share/nginx/html/
root@test-configmap-volume-pod:/usr/share/nginx/html# ls
50x.html  index.html
root@test-configmap-volume-pod:/usr/share/nginx/html# cat index.html 
123
root@test-configmap-volume-pod:/usr/share/nginx/html#

# mountPath auto-update (a ConfigMap edit propagates to directory mounts, but not to subPath mounts, as the test below shows)

cat << EOF > test-configmap-mountpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-configmap-pod02
spec:
  volumes:
    - name: config-volume
      configMap:
        name: web-config
  containers:
    - name: test-container
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
      - name: config-volume
        mountPath: /usr/share/nginx/html
EOF
[liwm@rmaster01 liwm]$ kubectl create -f .
pod/test-configmap-pod02 created
pod/test-configmap-pod01 created
[liwm@rmaster01 liwm]$ 
[liwm@rmaster01 liwm]$ kubectl get pod -o wide 
NAME                                      READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          135m   10.42.4.44   node01   <none>           <none>
test-configmap-pod01                      1/1     Running   0          21s    10.42.4.68   node01   <none>           <none>
test-configmap-pod02                      1/1     Running   0          21s    10.42.4.67   node01   <none>           <none>
[liwm@rmaster01 liwm]$ curl 10.42.4.68
test
[liwm@rmaster01 liwm]$ curl 10.42.4.67
test
[liwm@rmaster01 liwm]$ kubectl edit configmaps web-config 
configmap/web-config edited
[liwm@rmaster01 liwm]$ curl 10.42.4.68
test
[liwm@rmaster01 liwm]$ curl 10.42.4.67
uat
[liwm@rmaster01 liwm]$

Secret

The Secret resource passes sensitive information such as passwords, tokens and keys to Pods. Secret volumes are backed by tmpfs (a RAM-based filesystem), so secret data is never written to persistent storage.

There are three secret types (creation sketches for tls and docker-registry follow the command below):
generic: the general-purpose type, used for environment variables and plain text, typically passwords.
tls: used only to store a private key and certificate.
docker-registry: required when storing credentials for a Docker registry.

[liwm@rmaster01 liwm]$ kubectl create secret 
docker-registry  generic          tls              
[liwm@rmaster01 liwm]$
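Hedged one-liners for the other two types (certificate file names and registry address are placeholders; a real docker-registry example against Harbor appears later in this section):

# tls: store a certificate/key pair
kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key
# docker-registry: registry credentials for pulling private images
kubectl create secret docker-registry my-registry-secret \
  --docker-server=registry.example.com --docker-username=user --docker-password=pass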


Using it as environment variables
# encode the values by hand
echo -n 'admin' | base64
YWRtaW4=

echo -n 'redhat' | base64
cmVkaGF0
# decode
echo 'YWRtaW4=' | base64 --decode
# returns: admin

echo 'cmVkaGF0' | base64 --decode
# returns: redhat

Create the Secret YAML

cat << EOF > secret-env.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret-env
type: Opaque
data:
  username: YWRtaW4=
  password: cmVkaGF0
EOF
[liwm@rmaster01 liwm]$ kubectl create -f secret-env.yaml 
secret/mysecret-env created
[liwm@rmaster01 liwm]$ kubectl get configmaps 
No resources found in default namespace.
[liwm@rmaster01 liwm]$ kubectl get secrets 
NAME                                 TYPE                                  DATA   AGE
default-token-qxwg5                  kubernetes.io/service-account-token   3      20d
istio.default                        istio.io/key-and-cert                 3      15d
mysecret-env                         Opaque                                2      24s
nfs-client-provisioner-token-797f6   kubernetes.io/service-account-token   3      13d
[liwm@rmaster01 liwm]$

[liwm@rmaster01 liwm]$ kubectl describe secrets mysecret-env 
Name:         mysecret-env
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  6 bytes
username:  5 bytes
[liwm@rmaster01 liwm]$

Pod using the Secret via env

cat << EOF > secret-pod-env1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-secret
spec:
  containers:
  - name: envars-test-container
    image: nginx
    imagePullPolicy: IfNotPresent
    envFrom:
    - secretRef:
        name: mysecret-env
EOF
[liwm@rmaster01 liwm]$ kubectl create -f secret-pod-env1.yaml 
pod/envfrom-secret created
[liwm@rmaster01 liwm]$
[liwm@rmaster01 liwm]$ kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
envfrom-secret                            1/1     Running   0          111s
nfs-client-provisioner-7757d98f8c-m66g5   1/1     Running   0          176m
[liwm@rmaster01 liwm]$ kubectl exec -it envfrom-secret -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=envfrom-secret
TERM=xterm
password=redhat
username=admin
KUBERNETES_PORT_443_TCP_PORT=443
TEST1_PORT_8081_TCP=tcp://10.43.67.146:8081
TEST2_PORT_8081_TCP=tcp://10.43.34.138:8081
TEST2_PORT_8081_TCP_ADDR=10.43.34.138
KUBERNETES_SERVICE_PORT=443
TEST2_PORT_8081_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.43.0.1
TEST1_PORT=tcp://10.43.67.146:8081
TEST1_PORT_8081_TCP_ADDR=10.43.67.146
TEST2_SERVICE_PORT=8081
TEST2_PORT=tcp://10.43.34.138:8081
KUBERNETES_PORT_443_TCP_PROTO=tcp
TEST1_SERVICE_PORT=8081
TEST1_PORT_8081_TCP_PROTO=tcp
TEST1_PORT_8081_TCP_PORT=8081
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
TEST1_SERVICE_HOST=10.43.67.146
TEST2_SERVICE_HOST=10.43.34.138
TEST2_PORT_8081_TCP_PORT=8081
KUBERNETES_PORT=tcp://10.43.0.1:443
NGINX_VERSION=1.17.9
NJS_VERSION=0.3.9
PKG_RELEASE=1~buster
HOME=/root
[liwm@rmaster01 liwm]$

Custom environment variable names: SECRET_USERNAME and SECRET_PASSWORD

cat << EOF > secret-pod-env2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-env-secret
spec:
  containers:
  - name: mycontainer
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    command: [ "/bin/sh", "-c", "echo \$(SECRET_USERNAME) \$(SECRET_PASSWORD); sleep 1000000" ]
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret-env
            key: username
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysecret-env
            key: password
EOF

# Mounting the Secret as a Volume
# create a secret from a configuration file
kubectl create secret generic web-secret --from-file=index.html

Mount the secret as a volume

cat << EOF > pod-volume-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-volume-secret
spec:
  containers:
  - name: pod-volume-secret
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: test-web
      mountPath: "/usr/share/nginx/html"
      readOnly: true
  volumes:
  - name: test-web
    secret:
      secretName: web-secret
EOF

Use case:
a docker-registry secret for pulling from a private Harbor registry
http://192.168.31.131:8080/

Download the offline installer
wget https://github.com/goharbor/harbor/releases/download/v1.10.0/harbor-offline-installer-v1.10.0.tgz

Download docker-compose
wget https://docs.rancher.cn/download/compose/v1.25.4-docker-compose-Linux-x86_64
chmod +x v1.25.4-docker-compose-Linux-x86_64 && mv v1.25.4-docker-compose-Linux-x86_64 /usr/local/bin/docker-compose

Edit the Docker daemon.json and add the insecure private registry

[root@rmaster02 harbor]# cat /etc/docker/daemon.json 
{
"insecure-registries":["192.168.31.131:8080"],
"max-concurrent-downloads": 3,
"max-concurrent-uploads": 5,
"registry-mirrors": ["https://0bb06s1q.mirror.aliyuncs.com"],
"storage-driver": "overlay2",
"storage-opts": ["overlay2.override_kernel_check=true"],
"log-driver": "json-file",
"log-opts": {
  "max-size": "100m",
  "max-file": "3"
}
}

[root@rmaster01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.31.130 rmaster01
192.168.31.131 rmaster02
192.168.31.132 rmaster03
192.168.31.133 node01
192.168.31.134 node02
192.168.31.133 node01 riyimei.cn
[root@rmaster01 ~]#
[root@rmaster01 ~]# scp /etc/docker/daemon.json node01:/etc/docker/
[root@rmaster01 ~]# scp /etc/docker/daemon.json node02:/etc/docker/
systemctl restart docker

Before installing Harbor, edit the harbor.yml file in the Harbor installation directory, then run:
./install.sh
./install.sh --with-trivy --with-clair

Log in to the web UI, create a user, set a password, and create a project
user: liwm
password:AAbb0101

Log in to the private registry with docker login
docker login 192.168.31.131:8080

Push an image into the user's private project
docker tag nginx:latest 192.168.31.131:8080/test/nginx:latest
docker push 192.168.31.131:8080/test/nginx:latest

Create the secret

kubectl create secret docker-registry harbor-secret --docker-server=192.168.31.131:8080 --docker-username=liwm --docker-password=AAbb0101 --docker-email=liweiming0611@163.com

Create a Pod that references the secret via imagePullSecrets

cat << EOF > harbor-sc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx-demo
    image: 192.168.31.131:8080/test/nginx:latest
  imagePullSecrets:
  - name: harbor-secret
EOF

kubectl create -f harbor-sc.yaml

[liwm@rmaster01 liwm]$ kubectl create -f harbor-sc.yaml 
pod/nginx created
[liwm@rmaster01 liwm]$ kubectl get pod -o wide 
NAME                                      READY   STATUS             RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-7757d98f8c-97cjt   1/1     Running            0          4m42s   10.42.4.85   node01   <none>           <none>
nginx                                     0/1     ImagePullBackOff   0          5s      10.42.4.87   node01   <none>           <none>

[liwm@rmaster01 liwm]$ kubectl create secret docker-registry harbor-secret --docker-server=192.168.31.131:8080 --docker-username=liwm --docker-password=AAbb0101 --docker-email=liweiming0611@163.com
secret/harbor-secret created
[liwm@rmaster01 liwm]$ kubectl apply -f harbor-sc.yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/nginx configured
[liwm@rmaster01 liwm]$ kubectl get pod -o wide 
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-7757d98f8c-97cjt   1/1     Running   0          5m28s   10.42.4.85   node01   <none>           <none>
nginx                                     1/1     Running   0          51s     10.42.4.87   node01   <none>           <none>
[liwm@rmaster01 liwm]$ kubectl describe pod nginx 
Name:         nginx
Namespace:    default
Priority:     0
Node:         node01/192.168.31.133
Start Time:   Sat, 28 Mar 2020 00:13:43 +0800
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 10.42.4.87/32
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"containers":[{"image":"192.168...
Status:       Running
IP:           10.42.4.87
IPs:
  IP:  10.42.4.87
Containers:
  secret-pod:
    Container ID:   docker://53700ed416c3d549c69e6269af09549ac81ef286ae0d02a765e6e1ba3444fc77
    Image:          192.168.31.131:8080/test/nginx:latest
    Image ID:       docker-pullable://192.168.31.131:8080/test/nginx@sha256:3936fb3946790d711a68c58be93628e43cbca72439079e16d154b5db216b58da
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 28 Mar 2020 00:14:25 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qxwg5 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-qxwg5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qxwg5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  <unknown>          default-scheduler  Successfully assigned default/nginx to node01
  Warning  Failed     44s (x2 over 58s)  kubelet, node01    Failed to pull image "192.168.31.131:8080/test/nginx:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for 192.168.31.131:8080/test/nginx, repository does not exist or may require 'docker login'
  Warning  Failed     44s (x2 over 58s)  kubelet, node01    Error: ErrImagePull
  Normal   BackOff    33s (x3 over 58s)  kubelet, node01    Back-off pulling image "192.168.31.131:8080/test/nginx:latest"
  Warning  Failed     33s (x3 over 58s)  kubelet, node01    Error: ImagePullBackOff
  Normal   Pulling    19s (x3 over 59s)  kubelet, node01    Pulling image "192.168.31.131:8080/test/nginx:latest"
  Normal   Pulled     19s                kubelet, node01    Successfully pulled image "192.168.31.131:8080/test/nginx:latest"
  Normal   Created    19s                kubelet, node01    Created container secret-pod
  Normal   Started    18s                kubelet, node01    Started container secret-pod
[liwm@rmaster01 liwm]$

emptyDir

An emptyDir volume is created when the Pod is assigned to a node, and it exists as long as that Pod keeps running on the node where it was first created. All containers in the Pod can read and write the same files in the emptyDir volume, even if it is mounted at different paths in each container. When the Pod is removed from the node for any reason, the data in the emptyDir is deleted permanently.
Note: a container crash does not remove the Pod, so data in the emptyDir survives container crashes.

cat << EOF > emptydir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-pod
  labels:
    app: myapp
spec:
  volumes:
  - name: storage
    emptyDir: {}
  containers:
  - name: myapp1
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: storage
      mountPath: /storage
    command: ['sh', '-c', 'sleep 3600000']
  - name: myapp2
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: storage
      mountPath: /storage
    command: ['sh', '-c', 'sleep 10000000']
EOF

kubectl apply -f emptydir.yaml

The kubelet keeps the containers running: containers removed by hand are recreated, and the emptyDir data survives, as shown below.

[liwm@rmaster01 liwm]$ kubectl create -f emptydir.yaml 
pod/emptydir-pod created
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS              RESTARTS   AGE
emptydir-pod                              0/2     ContainerCreating   0          4s
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running             0          4m6s
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
emptydir-pod                              2/2     Running   0          15s
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running   0          4m17s
[liwm@rmaster01 liwm]$ kubectl exec -it emptydir-pod -c myapp1 sh
/ # cd /storage/
/storage # touch 222
/storage # exit
[liwm@rmaster01 liwm]$ kubectl exec -it emptydir-pod -c myapp2 sh

/ # 
/ # cd /storage/
/storage # ls
222
/storage # exit
[liwm@rmaster01 liwm]$ 
[liwm@rmaster01 liwm]$ kubectl get pod -o wide 
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
emptydir-pod                              2/2     Running   0          2m10s   10.42.4.90   node01   <none>           <none>
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running   0          6m12s   10.42.4.89   node01   <none>           <none>

[root@node01 ~]# docker ps |grep emptydir-pod
6b3623beca2c        fffcfdfce622                         "sh -c 'sleep 100000…"   4 minutes ago       Up 4 minutes                            k8s_myapp2_emptydir-pod_default_1b5bdd0d-346c-42eb-9151-025a1366b4e5_0
d07d098747a2        fffcfdfce622                         "sh -c 'sleep 360000…"   4 minutes ago       Up 4 minutes                            k8s_myapp1_emptydir-pod_default_1b5bdd0d-346c-42eb-9151-025a1366b4e5_0
c0aef4d253fb        rancher/pause:3.1                    "/pause"                 4 minutes ago       Up 4 minutes                            k8s_POD_emptydir-pod_default_1b5bdd0d-346c-42eb-9151-025a1366b4e5_0
[root@node01 ~]# docker inspect 6b3623beca2c|grep storage
                "/var/lib/kubelet/pods/1b5bdd0d-346c-42eb-9151-025a1366b4e5/volumes/kubernetes.io~empty-dir/storage:/storage",
                "Source": "/var/lib/kubelet/pods/1b5bdd0d-346c-42eb-9151-025a1366b4e5/volumes/kubernetes.io~empty-dir/storage",
                "Destination": "/storage",

[root@node01 ~]# cd /var/lib/kubelet/pods/1b5bdd0d-346c-42eb-9151-025a1366b4e5/volumes/kubernetes.io~empty-dir/storage
[root@node01 storage]# ls
222
[root@node01 storage]# 
[root@node01 storage]# docker rm -f 6b3623beca2c d07d098747a2
6b3623beca2c
d07d098747a2
[root@node01 storage]# ls
222
[root@node01 storage]#


[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
emptydir-pod                              2/2     Running   0          11m
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running   0          15m
[liwm@rmaster01 liwm]$ kubectl exec -it emptydir-pod -c myapp1 sh
/ # cd /storage/
/storage # ls
222
/storage # exit
[liwm@rmaster01 liwm]$ kubectl delete pod emptydir-pod 
pod "emptydir-pod" deleted
[liwm@rmaster01 liwm]$ 

[root@node01 storage]# ls
222
[root@node01 storage]# ls
[root@node01 storage]#


emptyDir + init-containers

cat << EOF > initcontainers.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  volumes:
  - name: storage
    emptyDir: {}
  containers:
  - name: myapp-containers
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: storage
      mountPath: /storage
    command: ['sh', '-c', 'if [ -f /storage/testfile ] ; then sleep 3600000 ; fi']
  initContainers:
  - name: init-containers
    image: radial/busyboxplus
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: storage
      mountPath: /storage
    command: ['sh', '-c', 'touch /storage/testfile && sleep 10']
EOF
[liwm@rmaster01 liwm]$ kubectl create -f initcontainers.yaml 
pod/myapp-pod created
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS     RESTARTS   AGE
myapp-pod                                 0/1     Init:0/1   0          4s
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running    0          20m
[liwm@rmaster01 liwm]$ kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
myapp-pod                                 1/1     Running   0          36s
nfs-client-provisioner-7757d98f8c-qwjrl   1/1     Running   0          20m
[liwm@rmaster01 liwm]$ kubectl exec -it myapp-pod -- ls /storage/
testfile
[liwm@rmaster01 liwm]$