本文所使用的文件在这里 附件.zip

第一章 存储相关概念

  • 在容器世界中,无状态是一个核心原则,然而我们始终需要保存数据,并提供给他人进行访问,所以就需要一个方案用于保持数据,以备重启之需。
  • 在 Kubernetes 中,PVC 是管理有状态应用的一个推荐方案。有了 PVC 的帮助,Pod 可以申请并连接到存储卷,这些存储卷在 Pod 生命周期结束之后,还能独立存在。
  • PVC 在存储方面让开发和运维的职责得以分离。运维人员负责供应存储,而开发人员则可以在不知后端细节的情况下,申请使用这些存储卷。
  • Kubernetes 的持久化存储由一系列组件构成(本章末尾附有一个最小的 PVC 使用示例):
    • PVC:是 Pod 对存储的请求,会以存储卷的形式挂载到 Pod 中。
    • PV:持久卷,可以由运维手工创建,也可以通过 StorageClass 动态创建。PV 由 Kubernetes 管理,并不与特定的 Pod 直接绑定。
    • StorageClass:由管理员创建,可以用来动态地创建存储卷和 PV。
    • 物理存储:实际连接和挂载的存储设备。
  • 分布式存储系统是解决有状态工作负载高可用问题的一个有效方案。Ceph 就是一个分布式存储系统,近年来其影响逐渐扩大。Rook 是一个编排器,能够支持包括 Ceph 在内的多种存储方案。Rook 简化了 Ceph 在 Kubernetes 集群中的部署过程。
  • 在生产环境中使用 Rook + Ceph 组合的用户正在日益增加,尤其是自建数据中心的用户,CENGN、Gini、GPR 等很多组织都在进行评估。
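  • 下面是一个最小的 PVC 使用示例(仅作示意:其中的存储类 rook-ceph-block 会在第五章由 Rook 创建,demo-pvc、demo-pod 为假设的名字):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                      # 假设的名字,仅作示意
spec:
  accessModes:
    - ReadWriteOnce                   # 单节点读写
  storageClassName: rook-ceph-block   # 第五章中由 Rook 创建的存储类
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                      # 假设的名字,仅作示意
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc           # Pod 通过 PVC 申请并挂载存储卷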

1.png

第二章 Ceph

  • Ceph 可以有如下的功能:
    • Ceph 对象存储:键值存储,其接口就是简单的 GET、PUT、DEL 等,如七牛云、阿里云 OSS 等。
    • Ceph 块设备:如 AWS 的 EBS、青云的云硬盘、阿里云的盘古系统,还有 Ceph 的 RBD(RBD 是 Ceph 面向块存储的接口)。
    • Ceph 文件系统:它比块存储具有更丰富的接口,需要考虑目录、文件属性等支持,实现一个支持并行化的文件存储应该是最困难的。
  • 一个 Ceph 存储集群需要
    • 至少一个 Ceph 监视器、Ceph 管理器、Ceph OSD(对象存储守护程序)
    • 如果要运行 Ceph 文件系统客户端,还需要部署 Ceph Metadata Server(MDS)。

2.png

  • Monitors: Ceph Monitor (ceph-mon) 监视器:维护集群状态信息
    • 维护集群状态的映射,包括监视器映射,管理器映射,OSD 映射,MDS 映射和 CRUSH 映射。
    • 这些映射是 Ceph 守护程序相互协调所必需的关键群集状态。
    • 监视器还负责管理守护程序和客户端之间的身份验证。
    • 通常至少需要三个监视器(这也是本文要求至少三个工作节点的原因)才能实现冗余和高可用性。
  • Managers: Ceph Manager 守护进程(ceph-mgr) : 负责跟踪运行时指标和 Ceph 集群的当前状态
    • 包括存储利用率,当前性能指标和系统负载。
    • Ceph Manager 守护程序还托管基于 Python 的模块,以管理和公开 Ceph 集群信息,包括基于 Web 的 Ceph Dashboard 和 REST API。
    • 通常,至少需要两个管理器才能实现高可用性。
  • Ceph OSDs: Ceph OSD (对象存储守护进程, ceph-osd) 【存储数据】
    • 存储数据,处理数据复制、恢复和再平衡,并通过检查其他 Ceph OSD 守护程序的心跳,向 Ceph 监视器和管理器提供一些监控信息。
    • 通常至少需要 3 个 Ceph OSD 才能实现冗余和高可用性。
  • MDSs: Ceph Metadata Server(MDS,ceph-mds,即 Ceph 元数据服务器)
    • 代表 Ceph 文件系统(CephFS)存储元数据(Ceph 块设备和 Ceph 对象存储不使用 MDS)。
    • Ceph 元数据服务器允许 POSIX 文件系统用户执行基本命令(如 ls、find 等),而不会给 Ceph 存储集群带来巨大负担。
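  • 部署完成后(见第四章),可以用类似下面的命令查看这些守护进程对应的 Pod(示意命令,Pod 标签基于 Rook 的默认部署,请以实际环境为准):

kubectl -n rook-ceph get pods -l app=rook-ceph-mon   # 监视器
kubectl -n rook-ceph get pods -l app=rook-ceph-mgr   # 管理器
kubectl -n rook-ceph get pods -l app=rook-ceph-osd   # OSD
kubectl -n rook-ceph get pods -l app=rook-ceph-mds   # 元数据服务器(部署 CephFS 后才有)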

第三章 Rook

3.1 基本概念

  • Rook 是云原生平台的存储编排工具。
  • Rook 工作原理如下:

3.png

  • Rook 的架构图如下:

4.png

3.2 Operator 是什么?

  • k8s 中的 Operator + CRD(CustomResourceDefinition【k8s 自定义资源类型】),可以帮助我们快速部署一些有状态应用集群,如 Redis、MySQL、ZooKeeper 等。
  • Rook 的 Operator 是 k8s 集群和存储集群之间进行交互的控制器:它监听 CRD 中声明的期望状态,并把这些声明转换为对 Ceph 集群的实际操作。

很多时候,我们在对 k8s 进行二次开发时,就需要用到 CRD + Operator。
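  • 以 Rook 为例,可以用下面的命令查看它注册的 CRD,以及由这些 CRD 声明出来的自定义资源(示意命令,需在第四章部署完成后执行):

# 查看 Rook 注册的自定义资源类型
kubectl get crd | grep ceph.rook.io
# 查看由 CRD 声明出来的 Ceph 集群对象,Operator 会据此创建真正的存储集群
kubectl -n rook-ceph get cephcluster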

第四章 部署

4.1 前提条件

  • Kubernetes 的版本必须是 v1.17 及以上。
  • Kubernetes 集群各个节点主机需要安装 lvm2:
yum -y install lvm2

5.gif

  • Kubernetes 集群各节点主机内核版本不低于 4.17 。
  • Kubernetes 集群至少有 3 个工作节点。为了配置 Ceph 存储集群,至少需要以下本地存储选项之一:
    • 原始设备(无分区或格式化文件系统)。
    • 原始分区(无格式化文件系统)。
    • 存储类(StorageClass)中以 block 模式提供的 PV。
  • 我们可以使用以下命令确认分区或设备上是否已经存在文件系统:
lsblk -f

NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT
vda
└─vda1                LVM2_member       >eSO50t-GkUV-YKTH-WsGq-hNJY-eKNf-3i07IB
  ├─ubuntu--vg-root   ext4              c2366f76-6e21-4f10-a8f3-6776212e2fe4   /
  └─ubuntu--vg-swap_1 swap              9492a3dc-ad75-47cd-9596-678e8cf17ff9   [SWAP]
vdb
  • 如果 FSTYPE 字段不为空,则说明该设备上已经存在文件系统。在上面的例子中,可以将 vdb 用于 Ceph,而不能使用 vda 及其分区。

注意:如果使用云厂商提供的磁盘,需要先把磁盘清零。假设 lsblk -f 命令查询到的目标设备是 vdc,那么就需要执行 dd if=/dev/zero of=/dev/vdc bs=1M status=progress 命令。
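  • 如果磁盘上还残留着旧的分区表或 Ceph 数据,也可以参考下面的清理思路(示意脚本:DISK 变量请换成实际设备名,sgdisk 需要先安装 gdisk 包):

DISK="/dev/vdc"                                    # 换成 lsblk -f 查到的目标设备
sgdisk --zap-all $DISK                             # 清除分区表(需要 gdisk 包)
dd if=/dev/zero of=$DISK bs=1M status=progress     # 将磁盘清零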

  • 在 VMware 中为 k8s-node1、k8s-node2 以及 k8s-node3 节点添加硬盘,本次只以 k8s-node1 为例,其余节点依次类推即可:

6.PNG

7.PNG

8.PNG

9.PNG

10.PNG

11.PNG

12.PNG

13.PNG

注意:添加硬盘后,将 k8s-node1、k8s-node2 以及 k8s-node3 节点重启一下,让系统识别新磁盘。

  • 查看 k8s-node1、k8s-node2 和 k8s-node3 节点的分区或设备上是否已经存在文件系统:
lsblk -f

14.gif

  • 对 k8s-node1、k8s-node2 和 k8s-node3 节点上新增的无文件系统磁盘(本例为 /dev/sdb)进行清零:
dd if=/dev/zero of=/dev/sdb bs=1M status=progress

15.gif

4.2 部署 & 修改 Operator

  • 下载 rook 的源码:
wget https://github.com/rook/rook/archive/refs/tags/v1.9.2.zip

16.gif

  • 解压:
unzip v1.9.2.zip

17.gif

  • 进入 examples 目录:
cd rook-1.9.2/deploy/examples

18.gif

  • 修改 operator.yaml 文件,主要是把 CSI 相关镜像替换为国内可访问的镜像仓库地址(修改好的文件在这里 operator.yaml):
vi operator.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  ROOK_LOG_LEVEL: "INFO"
  ROOK_CSI_ENABLE_CEPHFS: "true"
  ROOK_CSI_ENABLE_RBD: "true"
  ROOK_CSI_ENABLE_NFS: "false"
  ROOK_CSI_ENABLE_GRPC_METRICS: "false"
  CSI_ENABLE_ENCRYPTION: "false"
  CSI_PROVISIONER_REPLICAS: "2"
  CSI_ENABLE_CEPHFS_SNAPSHOTTER: "true"
  CSI_ENABLE_RBD_SNAPSHOTTER: "true"
  CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
  CSI_RBD_FSGROUPPOLICY: "ReadWriteOnceWithFSType"
  CSI_CEPHFS_FSGROUPPOLICY: "ReadWriteOnceWithFSType"
  CSI_NFS_FSGROUPPOLICY: "ReadWriteOnceWithFSType"
  ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"
  CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT: "false"
  # 原来
  # these images to the desired release of the CSI driver.
  # ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.6.1"
  # ROOK_CSI_REGISTRAR_IMAGE: "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0"
  # ROOK_CSI_RESIZER_IMAGE: "k8s.gcr.io/sig-storage/csi-resizer:v1.4.0"
  # ROOK_CSI_PROVISIONER_IMAGE: "k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0"
  # ROOK_CSI_SNAPSHOTTER_IMAGE: "k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1"
  # ROOK_CSI_ATTACHER_IMAGE: "k8s.gcr.io/sig-storage/csi-attacher:v3.4.0"
  # ROOK_CSI_NFS_IMAGE: "k8s.gcr.io/sig-storage/nfsplugin:v3.1.0"
  # 打开注释,进行修改
  # ------------ 修改 -------------
  ROOK_CSI_CEPH_IMAGE: "ccr.ccs.tencentyun.com/cephcsi/cephcsi:v3.6.1"
  ROOK_CSI_REGISTRAR_IMAGE: "ccr.ccs.tencentyun.com/sig-storage/csi-node-driver-registrar:v2.5.0"
  ROOK_CSI_RESIZER_IMAGE: "ccr.ccs.tencentyun.com/sig-storage/csi-resizer:v1.4.0"
  ROOK_CSI_PROVISIONER_IMAGE: "ccr.ccs.tencentyun.com/sig-storage/csi-provisioner:v3.1.0"
  ROOK_CSI_SNAPSHOTTER_IMAGE: "ccr.ccs.tencentyun.com/sig-storage/csi-snapshotter:v5.0.1"
  ROOK_CSI_ATTACHER_IMAGE: "ccr.ccs.tencentyun.com/sig-storage/csi-attacher:v3.4.0"
  ROOK_CSI_NFS_IMAGE: "ccr.ccs.tencentyun.com/sig-storage/nfsplugin:v3.1.0"
  # ------------ 修改 -------------
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph
  labels:
    operator: rook
    storage-backend: ceph
    app.kubernetes.io/name: rook-ceph
    app.kubernetes.io/instance: rook-ceph
    app.kubernetes.io/component: rook-ceph-operator
    app.kubernetes.io/part-of: rook-ceph-operator
spec:
  selector:
    matchLabels:
      app: rook-ceph-operator
  replicas: 1
  template:
    metadata:
      labels:
        app: rook-ceph-operator
    spec:
      serviceAccountName: rook-ceph-system
      containers:
        - name: rook-ceph-operator
          # ------------ 修改 -------------
          image: ccr.ccs.tencentyun.com/k8s-rook/ceph:v1.9.2 # 原来 rook/ceph:v1.9.2
          # ------------ 修改 -------------
          args: ["ceph", "operator"]
          securityContext:
            runAsNonRoot: true
            runAsUser: 2016
            runAsGroup: 2016

19.gif

  • 安装:
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

20.gif
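  • 可以用下面的命令确认 Operator 是否就绪(示意命令):

kubectl -n rook-ceph get pods -l app=rook-ceph-operator
# 等待 STATUS 变为 Running 后再继续部署集群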

4.3 部署集群

  • 修改 cluster.yaml,使用我们指定的节点和磁盘作为存储(修改好的文件在这里 cluster.yaml):
vi cluster.yaml

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # ------------ 修改 -------------
    image: ccr.ccs.tencentyun.com/k8s-ceph/ceph:v16.2.7 # 原来是 quay.io/ceph/ceph:v16.2.7
    # ------------ 修改 -------------
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  waitTimeoutForHealthyOSDInMinutes: 10
  mon:
    count: 3
    allowMultiplePerNode: false
  mgr:
    count: 2
    allowMultiplePerNode: false
    modules:
      - name: pg_autoscaler
        enabled: true
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
  network:
    connections:
      encryption:
        enabled: false
      compression:
        enabled: false
  crashCollector:
    disable: false
  cleanupPolicy:
    confirmation: ""
    sanitizeDisks:
      method: quick
      dataSource: zero
      iteration: 1
    allowUninstallWithVolumes: false
  annotations:
  labels:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  priorityClassNames:
    mon: system-node-critical
    osd: system-node-critical
    mgr: system-cluster-critical
  storage: # cluster level storage configuration and selection
    # ------------ 修改 -------------
    useAllNodes: false
    useAllDevices: false
    config:
      osdsPerDevice: '3' # 每个设备的 osd 数量
    nodes:
      - name: "k8s-node1" # 必须符合 kubernetes.io/hostname,通过 kubectl get nodes --show-labels 查看
        devices:
          - name: "sdb"
      - name: "k8s-node2"
        devices:
          - name: "sdb"
      - name: "k8s-node3"
        devices:
          - name: "sdb"
    # ------------ 修改 -------------
    onlyApplyOSDPlacement: false

21.gif

  • 安装:
kubectl create -f cluster.yaml

22.gif

  • 出现如下结果,就说明已经安装完毕:

23.gif
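  • 也可以用下面的命令从命令行确认集群状态(示意命令,toolbox.yaml 同样位于 examples 目录):

# 查看 rook-ceph 命名空间下的所有 Pod,mon、mgr、osd 等应处于 Running 状态
kubectl -n rook-ceph get pods
# 查看 CephCluster 的整体状态,PHASE 为 Ready、HEALTH 为 HEALTH_OK 即为正常
kubectl -n rook-ceph get cephcluster rook-ceph
# 可选:部署 toolbox(在 deploy/examples 目录下执行)后进入其中执行 ceph 命令
kubectl create -f toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status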

4.4 部署 dashboard

  • 默认情况下,dashboard 其实已经随集群一起安装好了。

24.gif

  • 当然,我们也可以通过 NodePort 的形式把 dashboard 暴露到集群外:
kubectl apply -f dashboard-external-https.yaml

25.gif

  • 获取 dashboard 的密码:
# 用户名是 admin,密码(本例中为 1/L!SSnCgwj1=P#/iB@x)通过下面的命令获取
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 -d

26.gif
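  • NodePort 端口可以通过下面的命令查询(示意命令,Service 名来自 dashboard-external-https.yaml):

kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https
# 输出中 PORT(S) 一列类似 8443:30446/TCP,其中 30446 即为浏览器访问时使用的 NodePort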

  • 浏览器访问:https://192.168.65.100:30446(192.168.65.100 为集群任一节点的 IP,30446 为上一步查到的 NodePort)。

27.gif

第五章 实战

5.1 块存储和共享存储

  • 在 Kubernetes 存储场景中,Ceph 常用的是块存储(RBD)和共享文件存储(CephFS)两类。
    • Ceph 中的块存储(RBD)一般用于单节点读写(RWO,ReadWriteOnce),适用于有状态应用。
    • Ceph 中的共享存储(CephFS)一般用于多节点读写(RWX,ReadWriteMany),适用于无状态应用。
  • Rook 可以帮助我们创建好 StorageClass,我们只需要在 PVC 中指定存储类,Rook 就会调用 StorageClass 里面指定的 Provisioner(供应商),进而对 Ceph 集群进行操作。

5.2 配置块存储(RBD)

  • 官网

  • 块存储使用 RWO 模式;删除 StatefulSet 后,PVC 不会被自动删除,需要自己手动维护。

  • 安装块存储的 StorageClass:

# 此文件就在 rook-1.9.2/deploy/examples/csi/rbd/storageclass.yaml
vi rook-ceph-block.yaml

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph # namespace:cluster
spec:
  failureDomain: host
  replicated:
    size: 3
    # Disallow setting pool with replica 1, this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: true
    # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
    # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #targetSizeRatio: .5
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph # namespace:cluster
  # If you want to use erasure coded pool with RBD, you need to create
  # two pools. one erasure coded and one replicated.
  # You need to specify the replicated pool here in the `pool` parameter, it is
  # used for the metadata of the images.
  # The erasure coded pool must be set as the `dataPool` parameter below.
  #dataPool: ec-data-pool
  pool: replicapool
  # (optional) mapOptions is a comma-separated list of map options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # mapOptions: lock_on_read,queue_depth=1024
  # (optional) unmapOptions is a comma-separated list of unmap options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # unmapOptions: force
  # (optional) Set it to true to encrypt each volume with encryption keys
  # from a key management system (KMS)
  # encrypted: "true"
  # (optional) Use external key management system (KMS) for encryption key by
  # specifying a unique ID matching a KMS ConfigMap. The ID is only used for
  # correlation to configmap entry.
  # encryptionKMSID: <kms-config-id>
  # RBD image format. Defaults to "2".
  imageFormat: "2"
  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering
  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
  # in hyperconverged settings where the volume is mounted on the same node as the osds.
  csi.storage.k8s.io/fstype: ext4
# uncomment the following to use rbd-nbd as mounter on supported nodes
# **IMPORTANT**: CephCSI v3.4.0 onwards a volume healer functionality is added to reattach
# the PVC to application pod if nodeplugin pod restart.
# Its still in Alpha support. Therefore, this option is not recommended for production use.
#mounter: rbd-nbd
allowVolumeExpansion: true
reclaimPolicy: Delete

kubectl apply -f rook-ceph-block.yaml

28.gif
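  • 可以用下面的命令确认块存储池和存储类已经创建(示意命令):

kubectl -n rook-ceph get cephblockpool replicapool   # PHASE 为 Ready 即为正常
kubectl get storageclass rook-ceph-block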

  • 测试(创建一个使用 rook-ceph-block 存储类的 StatefulSet 和 Service):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: sts-nginx # has to match .spec.template.metadata.labels
  serviceName: "sts-nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: sts-nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: sts-nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "rook-ceph-block"
        resources:
          requests:
            storage: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: sts-nginx
  namespace: default
spec:
  selector:
    app: sts-nginx
  type: ClusterIP
  ports:
    - name: sts-nginx
      port: 80
      targetPort: 80
      protocol: TCP
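
  • 把上面的内容保存为文件后应用,并观察每个副本都会绑定一块独立的 RWO 卷(示意命令,文件名 sts-nginx.yaml 为假设):

kubectl apply -f sts-nginx.yaml   # 文件名为假设,请换成实际保存的文件名
# 每个 Pod 都会通过 volumeClaimTemplates 得到一个独立的 PVC(www-sts-nginx-0、www-sts-nginx-1、www-sts-nginx-2)
kubectl get pvc
kubectl get pv
# 删除 StatefulSet 后 PVC 不会被自动删除,需要手动清理,例如:
# kubectl delete pvc www-sts-nginx-0 www-sts-nginx-1 www-sts-nginx-2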

5.3 配置文件存储(CephFS)

  • 官网

  • 文件存储(CephFS)使用 RWX 模式,多个 Pod 可以共同读写同一份数据。

  • 安装:

vi rook-cephfs.yaml

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph # namespace:cluster
spec:
  # The metadata pool spec. Must use replication.
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode:
        none
      # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # The list of data pool specs. Can use replication or erasure coding.
  dataPools:
    - name: replicated
      failureDomain: host
      replicated:
        size: 3
        # Disallow setting pool with replica 1, this could lead to data loss without recovery.
        # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
        requireSafeReplicaSize: true
      parameters:
        # Inline compression mode for the data pool
        # Further reference: https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#inline-compression
        compression_mode:
          none
        # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
        # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
        #target_size_ratio: ".5"
  # Whether to preserve filesystem after CephFilesystem CRD deletion
  preserveFilesystemOnDelete: true
  # The metadata service (mds) configuration
  metadataServer:
    # The number of active MDS instances
    activeCount: 1
    # Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover.
    # If false, standbys will be available, but will not have a warm cache.
    activeStandby: true
    # The affinity rules to apply to the mds deployment
    placement:
      # nodeAffinity:
      #   requiredDuringSchedulingIgnoredDuringExecution:
      #     nodeSelectorTerms:
      #       - matchExpressions:
      #           - key: role
      #             operator: In
      #             values:
      #               - mds-node
      # topologySpreadConstraints:
      # tolerations:
      #   - key: mds-node
      #     operator: Exists
      # podAffinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-mds
            # topologyKey: kubernetes.io/hostname will place MDS across different hosts
            topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-mds
              # topologyKey: */zone can be used to spread MDS across different AZ
              # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> in k8s cluster if your cluster is v1.16 or lower
              # Use <topologyKey: topology.kubernetes.io/zone> in k8s cluster is v1.17 or upper
              topologyKey: topology.kubernetes.io/zone
    # A key/value list of annotations
    # annotations:
    #   key: value
    # A key/value list of labels
    # labels:
    #   key: value
    # resources:
    # The requests and limits set here, allow the filesystem MDS Pod(s) to use half of one CPU core and 1 gigabyte of memory
    #   limits:
    #     cpu: "500m"
    #     memory: "1024Mi"
    #   requests:
    #     cpu: "500m"
    #     memory: "1024Mi"
    priorityClassName: system-cluster-critical
    livenessProbe:
      disabled: false
    startupProbe:
      disabled: false
  # Filesystem mirroring settings
  # mirroring:
  #   enabled: true
  #   list of Kubernetes Secrets containing the peer token
  #   for more details see: https://docs.ceph.com/en/latest/dev/cephfs-mirroring/#bootstrap-peers
  #   peers:
  #     secretNames:
  #       - secondary-cluster-peer
  #   specify the schedule(s) on which snapshots should be taken
  #   see the official syntax here https://docs.ceph.com/en/latest/cephfs/snap-schedule/#add-and-remove-schedules
  #   snapshotSchedules:
  #     - path: /
  #       interval: 24h # daily snapshots
  #       startTime: 11:55
  #   manage retention policies
  #   see syntax duration here https://docs.ceph.com/en/latest/cephfs/snap-schedule/#add-and-remove-retention-policies
  #   snapshotRetention:
  #     - path: /
  #       duration: "h 24"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com # driver:namespace:operator
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph # namespace:cluster
  # CephFS filesystem name into which the volume shall be created
  fsName: myfs
  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-replicated
  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  # (optional) The driver can use either ceph-fuse (fuse) or ceph kernel client (kernel)
  # If omitted, default volume mounter will be used - this is determined by probing for ceph-fuse
  # or by setting the default mounter explicitly via --volumemounter command-line argument.
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  # uncomment the following line for debugging
  #- debug

kubectl apply -f rook-cephfs.yaml

29.gif
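  • 可以用下面的命令确认文件系统、MDS 和存储类已经就绪(示意命令):

kubectl -n rook-ceph get cephfilesystem myfs          # PHASE 为 Ready 即为正常
kubectl -n rook-ceph get pods -l app=rook-ceph-mds    # 应看到 myfs 对应的 MDS Pod
kubectl get storageclass rook-cephfs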

  • 测试(创建一个使用 rook-cephfs 存储类的 Deployment 和 PVC):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
  labels:
    app: nginx-deploy
spec:
  selector:
    matchLabels:
      app: nginx-deploy
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
        - name: nginx-deploy
          image: nginx
          volumeMounts:
            - name: localtime
              mountPath: /etc/localtime
            - name: nginx-html-storage
              mountPath: /usr/share/nginx/html
      volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - name: nginx-html-storage
          persistentVolumeClaim:
            claimName: nginx-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pv-claim
  labels:
    app: nginx-deploy
spec:
  storageClassName: rook-cephfs
  accessModes:
    - ReadWriteMany # 如果是 ReadWriteOnce 将会是什么效果?
  resources:
    requests:
      storage: 10Mi
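
  • 把上面的内容保存为文件后应用,三个副本会共享同一个 RWX 卷,可以在一个 Pod 中写入文件、在另一个 Pod 中读取来验证共享效果(示意命令,文件名 nginx-deploy.yaml 为假设):

kubectl apply -f nginx-deploy.yaml    # 文件名为假设,请换成实际保存的文件名
kubectl get pvc nginx-pv-claim        # STATUS 为 Bound 即表示 CephFS 卷已绑定
# 在其中一个副本里写入一个页面
kubectl exec deploy/nginx-deploy -- sh -c 'echo hello-cephfs > /usr/share/nginx/html/index.html'
# 在任意副本里读取(把 <pod-name> 换成 kubectl get pods -l app=nginx-deploy 列出的任一 Pod 名),
# 都能看到相同内容,说明多个 Pod 共享同一份数据
kubectl exec <pod-name> -- cat /usr/share/nginx/html/index.html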