Kubernetes introduces many new concepts that differ considerably from traditional operations knowledge.


1. Pod

A Pod is the most basic unit in Kubernetes. A Pod is made up of a group of containers, and its main job is to manage that group of containers, providing them with networking, storage, monitoring, and related services.

1.1. Creating the First Pod

In Kubernetes, Pods are usually created from a configuration manifest.

1. Creating a Pod with a command

[root@kubernetes-master-01 ~]# kubectl run test --image=nginx
pod/test created
[root@kubernetes-master-01 ~]# kubectl get pods
NAME   READY   STATUS              RESTARTS   AGE
test   0/1     ContainerCreating   0          9s
Parameters:

| Parameter | Description |
| --- | --- |
| --image | Specifies the image |
| --port | Specifies the container port |
| --rm | Deletes the Pod automatically when the container exits |
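
For example, combining these flags into a throwaway Pod (a sketch; --rm only takes effect for attached sessions, hence the added -it and --restart=Never):

[root@kubernetes-master-01 ~]# kubectl run tmp --image=busybox:1.28.3 --port=80 --rm -it --restart=Never -- sh -c 'echo hello'
hello
pod "tmp" deleted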

2. Creating a Pod from a configuration manifest

# API version
apiVersion: v1
# Resource type
kind: Pod
# Basic information
metadata:
  # Resource name
  name: testv1
# Container spec
spec:
  # Define the containers
  containers:
    - name: nginx # container name
      image: nginx

[root@kubernetes-master-01 k8s]# kubectl apply -f test.yaml
pod/testv1 created

1.2. Viewing Pods

[root@kubernetes-master-01 k8s]# kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
test     1/1     Running   0          15m
test1    1/1     Running   0          12m
testv1   1/1     Running   0          39s

NAME: resource name
READY: container readiness (ready containers / total containers)
STATUS: the resource's running state
RESTARTS: number of restarts
AGE: time since the resource started


Parameters:
-o : output format

  wide : show more detailed information
  yaml : print the Pod as YAML
  json : print the Pod as JSON

-w : watch the resources continuously
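
For example, against the Pods created above:

[root@kubernetes-master-01 k8s]# kubectl get pods -o wide
[root@kubernetes-master-01 k8s]# kubectl get pod test -o yaml
[root@kubernetes-master-01 k8s]# kubectl get pods -w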

1.3. Pod Details

[root@kubernetes-master-01 k8s]# kubectl describe pod test
Name:         test
Namespace:    default
Priority:     0
Node:         kubernetes-master-03/192.168.13.53
Start Time:   Mon, 28 Mar 2022 11:53:24 +0800
Labels:       run=test
Annotations:  <none>
Status:       Running
IP:           10.240.136.2
IPs:
  IP:  10.240.136.2
Containers:
  test:
    Container ID:   docker://b33398b141b4e25b3bb4aeb62a90ee1d59f143c2fa3077304127fed5db9d28dc
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:4ed64c2e0857ad21c38b98345ebb5edb01791a0a10b0e9e3d9ddde185cdbd31a
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 28 Mar 2022 11:53:46 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gwkgt (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-gwkgt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gwkgt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 360s
                 node.kubernetes.io/unreachable:NoExecute for 360s
Events:          <none>

1.4. Pod Status

| Status | Description |
| --- | --- |
| Running | The Pod is running |
| Terminating | The Pod is shutting down |
| Pending | The Pod is waiting to be scheduled |
| ContainerCreating | The containers are being created |
| ErrImagePull | Pulling the image failed |
| ImagePullBackOff | Backing off before retrying the image pull |
| CrashLoopBackOff | The container keeps crashing after starting |
| Completed | Finished (the container's process exited normally) |

1.5. Pod Restart Policy (restartPolicy)

When a Pod runs into trouble, it is normally restarted automatically; the exact behavior is controlled by its restart policy.

| Policy | Description |
| --- | --- |
| Always | Always restart, regardless of how the container exited |
| OnFailure | Restart only when the container exits with a failure |
| Never | Never restart |
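
A minimal sketch to watch the policy in action (the Pod name and failing command are illustrative): the container always exits non-zero, so with restartPolicy: OnFailure the kubelet keeps restarting it and the RESTARTS count climbs:

[root@kubernetes-master-01 k8s]# kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo        # illustrative name
spec:
  restartPolicy: OnFailure
  containers:
    - name: fail
      image: busybox:1.28.3
      command: ['/bin/sh', '-c', 'exit 1']   # always fails
EOF
[root@kubernetes-master-01 k8s]# kubectl get pod restart-demo -w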

1.6. Pod Lifecycle

Everything that happens to a Pod between the moment it is created and the moment it is deleted.

(Diagram: the Pod lifecycle.)

1.7. Startup Hook (lifecycle)

A postStart hook is a task that runs right after the container has been created.

# API version
apiVersion: v1
# Resource type
kind: Pod
# Basic information
metadata:
  # Resource name
  name: testv1
  # Annotations
  annotations:
    xxx: 'This is Nginx Pod'
# Container spec
spec:
  restartPolicy: Always
  # Define the containers
  containers:
    - name: nginx # container name
      image: nginx
      lifecycle:
        postStart:
          # A hook must define exactly ONE handler: exec, httpGet,
          # or tcpSocket. exec is active here; the other two are
          # shown commented out as alternatives.
          exec:
            command:
              - '/bin/sh'
              - '-c'
              - 'echo "HelloWorld" > /usr/share/nginx/html/index.html'

          # httpGet:       # issue an HTTP GET right after the container starts
          #   port: 80
          #   host: www.baidu.com

          # tcpSocket:     # check that TCP port 80 accepts connections
          #   port: 80
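
Assuming the manifest above has been applied, the effect of the postStart hook can be verified by reading the file it wrote:

[root@kubernetes-master-01 k8s]# kubectl exec testv1 -- cat /usr/share/nginx/html/index.html
HelloWorld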

1.8. Liveness Probes (livenessProbe)

A liveness probe checks whether the container is actually running properly. If the probe fails (i.e. the container is considered unhealthy), the container is restarted immediately.

# API version
apiVersion: v1
# Resource type
kind: Pod
# Basic information
metadata:
  # Resource name
  name: testv1
  # Annotations
  annotations:
    xxx: 'This is Nginx Pod'
# Container spec
spec:
  restartPolicy: Always
  # Define the containers
  containers:
    - name: nginx # container name
      image: nginx
      livenessProbe:
        exec:
          command:
            - '/bin/sh'
            - '-c'
            - 'xxx'
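
Here 'xxx' is a deliberately invalid command, so the probe always fails and the container is restarted again and again; watch the RESTARTS column climb with:

[root@kubernetes-master-01 k8s]# kubectl get pod testv1 -w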

1.9. Readiness Probes (readinessProbe)

Even once a container is running, that alone does not guarantee it can serve traffic. A readiness probe checks whether the container is actually ready to serve; if the probe fails, the Pod is immediately taken out of rotation (removed from the Service's endpoints). Related to this, Kubernetes has a namespace-scoped resource, Service, which behaves like a load balancer.

# API version
apiVersion: v1
# Resource type
kind: Pod
# Basic information
metadata:
  # Resource name
  name: testv1
  labels:
    app: test
  # Annotations
  annotations:
    xxx: 'This is Nginx Pod'
# Container spec
spec:
  restartPolicy: Always
  # Define the containers
  containers:
    - name: nginx # container name
      image: nginx
      readinessProbe:
        # A probe must define exactly ONE handler: exec, tcpSocket,
        # or httpGet. exec is active here; the other two are shown
        # commented out as alternatives.
        exec:
          command:
            - '/bin/sh'
            - '-c'
            - 'xxx'
        # tcpSocket:
        #   port: 80
        # httpGet:
        #   port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: testv1
spec:
  selector:
    app: test
  ports:
    - port: 80
      targetPort: 80
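
Because the exec probe above always fails, the Pod never becomes Ready, and so it is never added to the Service's endpoints:

[root@kubernetes-master-01 k8s]# kubectl get pods
[root@kubernetes-master-01 k8s]# kubectl get endpoints testv1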

1.10. Environment Variables

Sets environment variables inside the container.

---
kind: Pod
apiVersion: v1
metadata:
  name: django
spec:
  containers:
    - name: django
      image: registry.cn-hangzhou.aliyuncs.com/alvinos/django:v1
      env:
        - name: DJANGO_NAME
          value: 'Django==2.2.2'
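
The variable can be checked from inside the running container:

[root@kubernetes-master-01 k8s]# kubectl exec django -- env | grep DJANGO_NAME
DJANGO_NAME=Django==2.2.2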

2. Deployment

A Deployment is Kubernetes' controller for stateless workloads. Its main functions:
1. Control the number of Pods (replicas).
2. Monitor the Pods it manages.

2.1. A First Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
spec:
  # Selector
  selector:
    matchLabels:
      app: django
  # Pod template
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: registry.cn-hangzhou.aliyuncs.com/alvinos/django:v1
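
Assuming the manifest is saved as django.yaml (the filename is illustrative), apply it and check the result:

[root@kubernetes-master-01 k8s]# kubectl apply -f django.yaml
[root@kubernetes-master-01 k8s]# kubectl get deployments
[root@kubernetes-master-01 k8s]# kubectl get pods -l app=django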

2.2. Controlling the Number of Pods

  • Via command:

    [root@kubernetes-master-01 k8s]# kubectl scale deployment django --replicas=10
    deployment.apps/django scaled
    
  • Via YAML:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: django
    spec:
      replicas: 5
      # Selector
      selector:
        matchLabels:
          app: django
      # Pod template
      template:
        metadata:
          labels:
            app: django
        spec:
          containers:
            - name: django
              image: registry.cn-hangzhou.aliyuncs.com/alvinos/django:v1

  • Via kubectl patch:

    [root@kubernetes-master-01 k8s]# kubectl patch deployments.apps django -p '{"spec":{"replicas": 5}}'
    deployment.apps/django patched

3. Labels

Labels act as the bridge between the various components in Kubernetes.

3.1. Viewing Labels

[root@kubernetes-master-01 k8s]# kubectl get pods --show-labels 
NAME                      READY   STATUS    RESTARTS   AGE   LABELS
django-657f99699c-cg78n   1/1     Running   0          16m   app=django,pod-template-hash=657f99699c

3.2. Adding a Label

[root@kubernetes-master-01 k8s]# kubectl label pod django-657f99699c-cg78n deploy=test
pod/django-657f99699c-cg78n labeled

3.3. Deleting a Label

[root@kubernetes-master-01 k8s]# kubectl label pod django-657f99699c-cg78n deploy-
pod/django-657f99699c-cg78n labeled
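
Selectors are the other half of labels: most kubectl commands accept -l to filter by label, which is exactly how controllers and Services find their Pods:

[root@kubernetes-master-01 k8s]# kubectl get pods -l app=django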

3.4. Using a Deployment to Run Nginx as a Proxy for Django

  • Prepare the Django image:

    FROM python:3.6

    RUN pip install django==2.2.2 -i https://pypi.tuna.tsinghua.edu.cn/simple/

    ADD ./ /opt

    WORKDIR /opt

    EXPOSE 8000

    CMD python manage.py runserver 0.0.0.0:8000
  • Prepare the Nginx image:

    FROM nginx

    ADD django.conf /etc/nginx/conf.d/default.conf

    EXPOSE 80 443

    CMD nginx -g 'daemon off;'
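
The django.conf baked into the image is not shown in the original notes; a minimal sketch (mirroring the ConfigMap in section 5.1) would proxy to the Django process on port 8000:

cat > django.conf <<'EOF'
server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
EOF

Proxying to 127.0.0.1 works here because both containers run in the same Pod and therefore share its network namespace.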
  • Deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: django
    spec:
      replicas: 5
      # Selector
      selector:
        matchLabels:
          app: django
      # Pod template
      template:
        metadata:
          labels:
            app: django
        spec:
          containers:
            - name: django
              image: registry.cn-hangzhou.aliyuncs.com/alvinos/django:v2
            - name: nginx
              image: registry.cn-hangzhou.aliyuncs.com/alvinos/django:nginxv2

4. Service (svc)

A Service can be thought of as Kubernetes' built-in load balancer.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
spec:
  replicas: 5
  # Selector
  selector:
    matchLabels:
      app: django
  # Pod template
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: alvinos/django:v1
          readinessProbe:
            httpGet:
              port: 80
              path: /index
---
kind: Service
apiVersion: v1
metadata:
  name: django
spec:
  selector:
    app: django
  ports:
    - port: 80        # the Service (load balancer) port
      targetPort: 80  # the backend container port
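
After applying, the Service's VIP can be read and tested from any cluster node:

[root@kubernetes-master-01 k8s]# kubectl get svc django
[root@kubernetes-master-01 k8s]# curl http://$(kubectl get svc django -o jsonpath='{.spec.clusterIP}')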

4.1. Service Types

A Service has four main types: ClusterIP, NodePort, LoadBalancer, and ExternalName.

4.1.1. ClusterIP

ClusterIP is the default Service type in Kubernetes. It gives the Service a VIP that is reachable only from inside the cluster.

kind: Service
apiVersion: v1
metadata:
  name: django
spec:
  selector:
    app: django
  ports:
    - port: 80        # the Service (load balancer) port
      targetPort: 80  # the backend container port
  type: ClusterIP
  clusterIP: 10.96.12.60

4.1.2. NodePort

Exposes the Service on a port of every node's host IP, proxying outside traffic into the cluster.

kind: Service
apiVersion: v1
metadata:
  name: django
spec:
  selector:
    app: django
  ports:
    - port: 80        # the Service (load balancer) port
      targetPort: 80  # the backend container port
      nodePort: 30080
  type: NodePort
  clusterIP: 10.96.12.60
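
The service is now reachable on port 30080 of any node's IP, e.g. the master-03 address seen in section 1.3:

[root@kubernetes-master-01 k8s]# curl http://192.168.13.53:30080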

4.1.3. LoadBalancer

Uses an external (typically cloud-provided) load balancer on the public network to expose the Kubernetes Service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
spec:
  replicas: 5
  # Selector
  selector:
    matchLabels:
      app: django
  # Pod template
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: alvinos/django:v1
          readinessProbe:
            httpGet:
              port: 80
              path: /index
---
kind: Service
apiVersion: v1
metadata:
  name: django
spec:
  selector:
    app: django
  ports:
    - port: 80        # the Service (load balancer) port
      targetPort: 80  # the backend container port
  type: LoadBalancer
  clusterIP: 10.96.12.60

4.1.4. ExternalName

ExternalName's main purpose is to bring an external service into the cluster so it can be consumed like an internal one (via a DNS CNAME record).

---
kind: Service
apiVersion: v1
metadata:
  name: baidu
spec:
  type: ExternalName
  externalName: www.baidu.com
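
Inside the cluster, the Service name now resolves to the external host, which can be checked with the same busybox trick used in section 4.3 (the Pod name dns-test is illustrative):

[root@kubernetes-master-01 k8s]# kubectl run --rm -it dns-test --image=busybox:1.28.3
/ # nslookup baidu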

4.1.5. Deploying Nginx + Django Behind a Service

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testv3
spec:
  selector:
    matchLabels:
      app: testv3
  template:
    metadata:
      labels:
        app: testv3
    spec:
      containers:
        - name: django
          image: registry.cn-hangzhou.aliyuncs.com/alvinos/django:v2
          readinessProbe:
            tcpSocket:
              port: 8000
        - name: nginx
          image: registry.cn-hangzhou.aliyuncs.com/alvinos/django:nginxv2
---
kind: Service
apiVersion: v1
metadata:
  name: testv3
spec:
  selector:
    app: testv3
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30081
  type: NodePort

4.2. Endpoints

In a Kubernetes cluster, what a Service actually uses to proxy Pods is an Endpoints object.

  • Endpoints is responsible for locating the Pods in the cluster.
  • The Service is responsible for providing the VIP.

Note: when a Service and an Endpoints object have the same name, Kubernetes associates them automatically.

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: testv6
spec:
  replicas: 5
  selector:
    matchLabels:
      app: testv6
  template:
    metadata:
      labels:
        app: testv6
    spec:
      containers:
        - name: nginx
          image: nginx
          readinessProbe:
            exec:
              command:
                - '/bin/sh'
                - '-c'
                - 'cat /usr/share/nginx/html/index.html'
---
kind: Service
apiVersion: v1
metadata:
  name: testv6
spec:
  selector:
    app: testv6
  ports:
    - port: 80
      targetPort: 80
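
The automatically created Endpoints object (same name as the Service) lists the IPs of the Ready Pods:

[root@kubernetes-master-01 k8s]# kubectl get endpoints testv6
[root@kubernetes-master-01 k8s]# kubectl describe svc testv6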

Note: Endpoints can also be used to bring an external service into the cluster; once attached, it can be managed just like an in-cluster resource.

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: '123456'

---
kind: Endpoints
apiVersion: v1
metadata:
  name: mysql
subsets:
  - addresses:
      - ip: 106.13.81.75
    ports:
      - port: 49154
---
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 49154

4.3. Service DNS Names

Every Service in the cluster gets a VIP, but that VIP changes whenever the Service is recreated; in other words, a Service's IP is not stable. Kubernetes therefore provides DNS resolution for Service names automatically.

[root@kubernetes-master-01 k8s]# kubectl run --rm -it testv7 --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup django
Server:    10.96.0.2
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local

Name:      django
Address 1: 10.96.12.59 django.default.svc.cluster.local

Here, django.default.svc.cluster.local is the Service's DNS name.

  • Syntax: <service-name>.<namespace>.svc.cluster.local

    root@mysql-5b7f695f-qkkj5:/# mysql -uroot -p123456 -hmysql.default.svc.cluster.local

5. Volumes

In a Kubernetes cluster, much as in Docker, data inside a container does not survive on its own; Kubernetes therefore provides volumes.

5.1. ConfigMap

Typically used to store configuration files.

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: django
data:
  default.conf: |
    server {
        listen 80;
        server_name _;
        location / {
            proxy_pass http://127.0.0.1:8000;
        }
    }
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: django
spec:
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: /etc/nginx/conf.d/
              name: django-conf
      volumes:
        - name: django-conf
          configMap:
            name: django
            items:
              - key: default.conf
                path: default.conf  # path relative to the mount directory
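
Rather than writing the ConfigMap by hand, it can also be generated from the file itself (on a reasonably recent kubectl):

[root@kubernetes-master-01 k8s]# kubectl create configmap django --from-file=default.conf --dry-run=client -o yaml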

5.1.1. Implementing Nginx + Django with a ConfigMap

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: django
data:
  default.conf: |
    server {
        listen 80;
        server_name _;
        location / {
            proxy_pass http://127.0.0.1:8000;
        }
    }
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: django
spec:
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: registry.cn-hangzhou.aliyuncs.com/alvinos/django:v2
          readinessProbe:
            tcpSocket:
              port: 8000
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: /etc/nginx/conf.d/
              name: django-conf
      volumes:
        - name: django-conf
          configMap:
            name: django
            items:
              - key: default.conf
                path: default.conf  # path relative to the mount directory
---
kind: Service
apiVersion: v1
metadata:
  name: django
spec:
  selector:
    app: django
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30082
  type: NodePort

5.1.2. Using a ConfigMap for Environment Variables

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: test
data:
  MYSQL_ROOT_PASSWORD: '123456'
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          envFrom:
            - configMapRef:
                name: test

5.2. Secret

A Secret addresses the configuration of sensitive data such as passwords, tokens, and keys without exposing them in the image or the Pod spec. A Secret can be consumed as a volume or as environment variables. Data stored in a Secret is normally base64-encoded (note: base64 is an encoding, not encryption).

---
kind: Secret
apiVersion: v1
metadata:
  name: test
data:
  MYSQL_ROOT_PASSWORD: MTIzNDU2Cg==
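
The base64 value is produced (and decoded back) like this; note that echo appends a newline, which is why the encoded value ends in Cg==:

[root@kubernetes-master-01 k8s]# echo '123456' | base64
MTIzNDU2Cg==
[root@kubernetes-master-01 k8s]# echo 'MTIzNDU2Cg==' | base64 -d
123456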

5.2.1. Opaque

Opaque is mainly used for passwords, keys, and similar data; it is also the default Secret type.

1. As environment variables

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-secret
spec:
  selector:
    matchLabels:
      app: test-secret
  template:
    metadata:
      labels:
        app: test-secret
    spec:
      containers:
        - name: nginx
          image: nginx
          envFrom:
            - secretRef:
                name: test

2. Mounted as a volume

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-secret
spec:
  selector:
    matchLabels:
      app: test-secret
  template:
    metadata:
      labels:
        app: test-secret
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: /tmp/
              name: test
      volumes:
        - name: test
          secret:
            secretName: test
            items:
              - key: MYSQL_ROOT_PASSWORD
                path: password

Note: the environment-variable approach injects the values straight into the container's environment, while the volume approach mounts each key as a file.
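
Either variant can be checked quickly (kubectl exec against a Deployment picks one of its Pods; both example Deployments share the name test-secret, so run whichever one is currently applied):

[root@kubernetes-master-01 k8s]# kubectl exec deploy/test-secret -- printenv MYSQL_ROOT_PASSWORD
[root@kubernetes-master-01 k8s]# kubectl exec deploy/test-secret -- cat /tmp/password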

5.2.2. dockerconfigjson

# Suppose a private (unpublished) image has to be pulled into a container; how do we deploy it with Kubernetes?
export DOCKER_REGISTRY_SERVER=106.13.81.75
export DOCKER_USER=admin
export DOCKER_PASSWORD=Harbor12345
export DOCKER_EMAIL=root@123.com

kubectl create secret docker-registry harbor106 --docker-server=$DOCKER_REGISTRY_SERVER --docker-username=$DOCKER_USER --docker-password=$DOCKER_PASSWORD --docker-email=$DOCKER_EMAIL

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
        - name: harbor106
      containers:
        - name: nginx
          image: 106.13.81.75/os/nginx:latest

5.3. emptyDir

Mainly used to share temporary data between the containers of a Pod. When the Pod's lifecycle ends, the data in the emptyDir is deleted immediately.

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: emptydir
spec:
  selector:
    matchLabels:
      app: emptydir
  template:
    metadata:
      labels:
        app: emptydir
    spec:
      containers:
        - name: test
          image: busybox:1.28.3
          command:
            - '/bin/sh'
            - '-c'
            - 'while true; do echo `date` > /tmp/1.txt; sleep 1; done'
          volumeMounts:
            - mountPath: /tmp/
              name: emptydir
        - name: test2
          image: busybox:1.28.3
          command:
            - '/bin/sh'
            - '-c'
            - 'while true; do echo `date "+%F %H:%M:%S"` > /tmp/1.txt; sleep 1; done'
          volumeMounts:
            - mountPath: /tmp/
              name: emptydir
      volumes:
        - name: emptydir
          emptyDir: {}
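
Both containers mount the same emptyDir at /tmp, so each sees the file the other writes:

[root@kubernetes-master-01 k8s]# kubectl exec deploy/emptydir -c test -- cat /tmp/1.txt
[root@kubernetes-master-01 k8s]# kubectl exec deploy/emptydir -c test2 -- cat /tmp/1.txt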

5.4. hostPath

Shares storage with the host machine: whichever node the Pod is scheduled onto, it shares that node's filesystem at the given path.

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hostpath
spec:
  selector:
    matchLabels:
      app: hostpath
  template:
    metadata:
      labels:
        app: hostpath
    spec:
      containers:
        - name: test
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html/
              name: hostpath
      volumes:
        - name: hostpath
          hostPath:
            path: /tmp # the path on the host
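
To see the pass-through, write a file into /tmp on whichever node the Pod landed on (the node name below is illustrative), then read it back through the container:

[root@kubernetes-master-01 k8s]# kubectl get pods -o wide      # find the node the Pod runs on
[root@kubernetes-node-01 ~]# echo hostpath-test > /tmp/index.html
[root@kubernetes-master-01 k8s]# kubectl exec deploy/hostpath -- cat /usr/share/nginx/html/index.html
hostpath-test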

(Aside: a related hostPath use case is running Docker inside a container, e.g. by mounting the host's /var/run/docker.sock.)

5.5. Network Storage

Kubernetes can also use network storage as its backing store; the example below uses NFS.

[root@kubernetes-master-01 k8s]# for i in m1 m2 m3 n1 n2 n3 n4 n5; do ssh root@$i 'yum install nfs-utils -y';  done
[root@kubernetes-master-01 k8s]# for i in m1 m2 m3 n1 n2 n3 n4 n5; do ssh root@$i 'groupadd www -g 666 && useradd www -u 666 -g 666';  done
[root@kubernetes-master-01 k8s]# vim /etc/exports
[root@kubernetes-master-01 k8s]# cat /etc/exports
/backup 192.168.13.0/24(rw,sync,all_squash,anonuid=666,anongid=666)
[root@kubernetes-master-01 k8s]# systemctl enable --now nfs-server rpcbind
[root@kubernetes-master-01 k8s]# mkdir /backup
[root@kubernetes-master-01 k8s]# chown www.www /backup
  • YAML:

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: nfs
    spec:
      selector:
        matchLabels:
          app: nfs
      template:
        metadata:
          labels:
            app: nfs
        spec:
          containers:
            - name: nfs
              image: nginx
              volumeMounts:
                - mountPath: /usr/share/nginx/html
                  name: nfs
          volumes:
            - name: nfs
              nfs:
                path: /backup
                server: 192.168.13.51

5.6. PV/PVC

PVs and PVCs are matched to each other dynamically.

5.6.1. PV

A PV (PersistentVolume) represents the storage itself.

• PV access modes

| Mode | Description |
| --- | --- |
| ReadWriteOnce (RWO) | Read-write, but mountable by a single node only |
| ReadOnlyMany (ROX) | Read-only, mountable by multiple nodes |
| ReadWriteMany (RWX) | Read-write, shared by multiple nodes |

Not every storage backend supports all three modes; shared (RWX) access in particular is still rare, NFS being the most common option. When a PVC binds to a PV, the match is usually made on two conditions: the storage size and the access mode.

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: v0001
spec:
  accessModes:
    - ReadOnlyMany
  capacity:
    storage: 20Gi
  nfs:
    path: /backup/v10001
    server: 192.168.13.51

• PV reclaim policies

| Policy | Description |
| --- | --- |
| Retain | Keep the volume without cleaning it up (manual cleanup required) |
| Recycle | Delete the data, i.e. rm -rf /thevolume/* (only NFS and HostPath support this) |
| Delete | Delete the underlying storage resource, e.g. an AWS EBS volume (only AWS EBS, GCE PD, Azure Disk, and Cinder support this) |

kind: PersistentVolume
apiVersion: v1
metadata:
  name: v0001
spec:
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 20Gi
  nfs:
    path: /backup/v10001
    server: 192.168.13.51

• PV states

| State | Description |
| --- | --- |
| Available | Free, not yet bound to a PVC |
| Bound | Bound to a PVC |
| Released | The PVC was deleted, but the reclaim policy has not run yet |
| Failed | An error occurred |

5.6.2. PVC

A PVC (PersistentVolumeClaim) is a request for storage.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: v0001
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: "10Gi"

Example:

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: v0001
spec:
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 20Gi
  nfs:
    path: /backup/v10001
    server: 192.168.13.51
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: v0001
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: "10Gi"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: v0001
spec:
  selector:
    matchLabels:
      app: v0001
  template:
    metadata:
      labels:
        app: v0001
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: v0001
      volumes:
        - name: v0001
          persistentVolumeClaim:
            claimName: v0001

5.7. StorageClass

A StorageClass is a storage class that provisions PVs dynamically.

5.7.1. Deploying a StorageClass

A StorageClass is usually deployed with Helm.

# Helm is to Kubernetes roughly what yum is to CentOS: a package manager
[root@kubernetes-master-01 k8s]# wget https://get.helm.sh/helm-v3.8.1-linux-amd64.tar.gz

[root@kubernetes-master-01 k8s]# tar -xf helm-v3.8.1-linux-amd64.tar.gz

[root@kubernetes-master-01 k8s]# cd linux-amd64/
[root@kubernetes-master-01 linux-amd64]# mv helm /usr/local/bin/


# Add a Helm chart repository
[root@kubernetes-master-01 k8s]# helm repo add moikot https://moikot.github.io/helm-charts

# Pull the nfs-client-provisioner chart (note: `helm pull rimusz/...` requires the rimusz repo to have been added the same way)
[root@kubernetes-master-01 k8s]# helm pull rimusz/nfs-client-provisioner

[root@kubernetes-master-01 k8s]# tar -xf nfs-client-provisioner-0.1.6.tgz
[root@kubernetes-master-01 k8s]# cd nfs-client-provisioner/
[root@kubernetes-master-01 k8s]# vim values.yaml
## Cloud Filestore instance
nfs:
  ## Set IP address
  server: "192.168.13.51"
  ## Set file share name
  path: "/backup/v10003"
storageClass:
  accessModes: ReadWriteMany

[root@kubernetes-master-01 k8s]# helm install nfs-client ./
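
Check that the provisioner is running and that the StorageClass exists:

[root@kubernetes-master-01 k8s]# helm list
[root@kubernetes-master-01 k8s]# kubectl get pods
[root@kubernetes-master-01 k8s]# kubectl get storageclass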

5.7.2. Creating PVs with a StorageClass

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sc
spec:
  selector:
    matchLabels:
      app: sc
  template:
    metadata:
      labels:
        app: sc
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: sc
      volumes:
        - name: sc
          persistentVolumeClaim:
            claimName: sc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: "10Gi"