hostPath
- running a Container that needs access to Docker internals; use a hostPath of /var/lib/docker
- running cAdvisor in a Container; use a hostPath of /sys
- allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
The supported values for the `type` field are:
| Value | Behavior |
|---|---|
| (empty string) | The default, kept for backward compatibility: no checks are performed before mounting the hostPath volume. |
| DirectoryOrCreate | If nothing exists at the given path, an empty directory is created there as needed, with permissions set to 0755 and the same group and ownership as the kubelet. |
| Directory | A directory must exist at the given path |
| FileOrCreate | If nothing exists at the given path, an empty file is created there as needed, with permissions set to 0644 and the same group and ownership as the kubelet. |
| File | A file must exist at the given path |
| Socket | A UNIX socket must exist at the given path |
| CharDevice | A character device must exist at the given path |
| BlockDevice | A block device must exist at the given path |
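As an illustration of the `type` check, a pod that mounts the Docker socket and asserts it really is a UNIX socket might look like this (the pod name and image are illustrative, not from the text above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example   # hypothetical name
spec:
  containers:
    - image: nginx
      name: test
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
        type: Socket   # mount fails unless a UNIX socket exists at this path
```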
PersistentVolume
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  namespace: jolimall-dev
  labels:
    pv: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/mysql
    server: 192.168.3.126
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: jolimall-dev
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: mysql-pv
```
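A pod then consumes the claim by name. A minimal sketch (the pod name, image tag, and password are placeholders, not from the manifests above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod   # hypothetical name
  namespace: jolimall-dev
spec:
  containers:
    - image: mysql:5.7
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme   # placeholder
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql-pvc
```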
RBD
Ceph storage clusters enable cephx authentication by default; here a Kubernetes Secret is used to authenticate against the Ceph cluster.
```shell
# Run this on a node that has been given admin keys via `ceph-deploy admin`
# and that can also run kubectl
ceph_admin_secret=$(ceph auth get-key client.admin | base64)
ceph_admin_secret_namespace=jolimall-dev
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: ${ceph_admin_secret_namespace}
type: "kubernetes.io/rbd"
data:
  key: ${ceph_admin_secret}
EOF
```
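The stored key can then be checked against the Ceph cluster; assuming the namespace used above:

```shell
# Should print the same key that `ceph auth get-key client.admin` returns
kubectl get secret ceph-admin-secret -n jolimall-dev \
  -o jsonpath='{.data.key}' | base64 -d
```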
Mounting an RBD image created in advance on Ceph in a pod
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd-pod
spec:
  containers:
    - image: nginx
      name: rbd-rw
      volumeMounts:
        - name: rbdpd
          mountPath: /mnt/rbd
  volumes:
    - name: rbdpd
      rbd:
        monitors:
          - '192.168.3.169:6789'
          - '192.168.3.180:6789'
          - '192.168.3.218:6789'
        pool: rbd
        image: foo
        fsType: ext4
        # An unformatted block device cannot be mounted readOnly; set this to false
        readOnly: true
        # Uses the admin user
        user: admin
        secretRef:
          name: ceph-admin-secret
```
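The image `foo` referenced above has to exist before the pod starts. Assuming the default `rbd` pool, creating it could look like this (size and feature flags are illustrative):

```shell
# Create a 1 GiB image named foo in the rbd pool (--size is in MiB)
rbd create foo --size 1024 --pool rbd
# Older kernel clients cannot map images with newer features enabled;
# disabling the extras is often needed before the kubelet can map the device
rbd feature disable rbd/foo object-map fast-diff deep-flatten
```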
Dynamically creating images and PVs

Deploy the third-party storage class provisioner
```shell
git clone https://github.com/kubernetes-incubator/external-storage.git
cd external-storage/ceph/rbd/deploy/
kubectl apply -f rbac/
```
Create a StorageClass named rbd-sc
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd-sc
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.3.169:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  # Must be the namespace the secret was created in (jolimall-dev above)
  adminSecretNamespace: jolimall-dev
  pool: rbd
  userId: admin
  userSecretName: ceph-admin-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
```
Create a PersistentVolumeClaim
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-sc-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rbd-sc
  resources:
    requests:
      storage: 1Gi
```
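Once the claim is bound, the provisioner has created both the rbd image and the PV, and a pod can mount it like any other PVC (pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd-sc-test-pod   # hypothetical name
spec:
  containers:
    - image: nginx
      name: web
      volumeMounts:
        - name: data
          mountPath: /mnt/rbd
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rbd-sc-test-pvc
```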
After manually resizing the rbd image (`rbd resize`):
- SSH to the node the pod is running on and find which `/dev/rbd?` device it is
- `blockdev --getsize64 /dev/rbd0` to check that the new size is visible before running resize2fs
- `resize2fs /dev/rbd0` to grow the filesystem
- `df | grep /dev/rbd0` to verify the new size
- `kubectl edit pv` and manually update the PV's size

CephFS
Single-pod example; the directory must be created in advance.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-pod
spec:
  containers:
    - image: nginx
      name: cephfs-rw
      volumeMounts:
        - name: cephfs
          mountPath: /mnt/cephfs
  volumes:
    - name: cephfs
      cephfs:
        monitors:
          - '192.168.3.169:6789'
          - '192.168.3.180:6789'
          - '192.168.3.218:6789'
        readOnly: false
        # Uses the admin user
        user: admin
        secretRef:
          name: ceph-admin-secret
```

Dynamically creating images and PVs
Deploy the third-party storage class provisioner; by default it is deployed in the cephfs namespace.

```shell
git clone https://github.com/kubernetes-incubator/external-storage.git
cd external-storage/ceph/cephfs/deploy/
kubectl create namespace cephfs
kubectl apply -f rbac/
```

Create a secret usable by CephFS in the cephfs namespace:

```shell
ceph_admin_secret=$(ceph auth get-key client.admin | base64)
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: cephfs-admin-secret
  namespace: cephfs
type: "kubernetes.io/cephfs"
data:
  key: ${ceph_admin_secret}
EOF
```

Create a StorageClass named cephfs-sc:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs-sc
provisioner: ceph.com/cephfs
parameters:
  monitors: 192.168.3.169:6789,192.168.3.180:6789,192.168.3.218:6789
  adminId: admin
  # Must match the name of the secret created above
  adminSecretName: cephfs-admin-secret
  adminSecretNamespace: "cephfs"
  # No need to create this directory in CephFS; it is created automatically
  claimRoot: /k8s-pvc-volumes
```

Create a PVC to test it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-sc-test-pvc
  # Any namespace works
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cephfs-sc
  resources:
    requests:
      storage: 1Gi
```
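Because CephFS supports ReadWriteMany, the test claim can be mounted by several pods at once. A single-pod sketch (pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-sc-test-pod   # hypothetical name
  namespace: default
spec:
  containers:
    - image: nginx
      name: web
      volumeMounts:
        - name: data
          mountPath: /mnt/cephfs
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: cephfs-sc-test-pvc
```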
