## 1. Environment

| Hostname | IP Address | Role |
| --- | --- | --- |
| k8s-master01 | 192.168.1.8 | k8s-master, glusterfs, heketi |
| k8s-node01 | 192.168.1.9 | k8s-node, glusterfs |
| k8s-node02 | 192.168.1.10 | k8s-node, glusterfs |

## 2. Notes

Because this is a lab environment, a new disk was added to each of the three virtual machines to serve as the GlusterFS storage device. After attaching the disk, do not format it; you only need to identify its device name.

- In my case the new disk shows up as /dev/sdb:

```bash
[root@k8s-master01 ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009b911

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   104857599    51379200   8e  Linux LVM

Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

… … …

[root@k8s-master01 ~]#
```
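If the fdisk listing is hard to scan, `lsblk` gives a quicker overview of the attached block devices (an optional check; the exact output depends on your environment):

```bash
# List whole disks only (no partitions), showing name, size, and type;
# the newly attached, unformatted disk should appear as sdb with no children.
[root@k8s-master01 ~]# lsblk -d -o NAME,SIZE,TYPE
```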

## 3. Install GlusterFS

- Install the GlusterFS packages on all nodes:

```bash
[root@k8s-master01 ~]# yum -y install centos-release-gluster
[root@k8s-master01 ~]# yum -y install glusterfs glusterfs-server glusterfs-fuse
[root@k8s-master01 ~]# systemctl enable glusterfsd
[root@k8s-master01 ~]# systemctl start glusterfsd
[root@k8s-master01 ~]# systemctl enable glusterd
[root@k8s-master01 ~]# systemctl start glusterd
```
- Add the nodes to the trusted storage pool
  - On any one node, run the following commands to probe the other nodes and form the GlusterFS cluster:

```bash
[root@k8s-master01 ~]# gluster peer probe 192.168.1.9
[root@k8s-master01 ~]# gluster peer probe 192.168.1.10
[root@k8s-master01 ~]# gluster peer status
Number of Peers: 2

Hostname: 192.168.1.9
Uuid: 368760d3-c0be-4670-8616-0d4fef3ffc25
State: Peer in Cluster (Connected)

Hostname: 192.168.1.10
Uuid: 017fb361-cb4a-4a40-9f3c-62a94208d626
State: Peer in Cluster (Connected)
[root@k8s-master01 ~]#
```
- Test

  Create a test volume. Use a directory on the existing system partition for this test rather than the newly attached raw disk, because heketi (used later) only works with raw, unformatted disks.

```bash
[root@k8s-master01 ~]# gluster volume create test-volume replica 2 192.168.1.9:/home/gluster 192.168.1.10:/home/gluster force
volume create: test-volume: success: please start the volume to access data
[root@k8s-master01 ~]#
```

Start the test volume

```bash
[root@k8s-master01 ~]# gluster volume start test-volume
volume start: test-volume: success
[root@k8s-master01 ~]#
```
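To confirm the volume actually serves data, it can be mounted on any node with the GlusterFS FUSE client (an optional check; the mount point `/mnt/test` is just an example):

```bash
# Mount the test volume via the FUSE client, write a test file, then unmount
[root@k8s-master01 ~]# mkdir -p /mnt/test
[root@k8s-master01 ~]# mount -t glusterfs 192.168.1.9:/test-volume /mnt/test
[root@k8s-master01 ~]# echo hello > /mnt/test/hello.txt
[root@k8s-master01 ~]# umount /mnt/test
```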

Delete the test volume

```bash
[root@k8s-master01 ~]# gluster volume stop test-volume
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: test-volume: success
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# gluster volume delete test-volume
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: test-volume: success
[root@k8s-master01 ~]#
```

## 4. Deploy Heketi

- Heketi provides a RESTful management interface for managing the lifecycle of GlusterFS volumes. With Heketi, GlusterFS volumes can be requested and provisioned dynamically, in the same way they are consumed by OpenStack Manila, Kubernetes, and OpenShift. Heketi automatically picks bricks across the cluster to build the requested volumes, ensuring that data replicas are spread across different failure domains. It can also manage any number of GlusterFS clusters, so the connected cloud servers are not limited to a single GlusterFS cluster.
- With heketi, storage administrators no longer have to manage or configure bricks, disks, or trusted storage pools; the heketi service manages all of the hardware and allocates storage on demand. Note, however, that any disk registered with heketi must be provided in raw form: a partition that already carries a filesystem cannot be used.
- Heketi project: https://github.com/heketi/heketi

- Install heketi

```bash
[root@k8s-master01 ~]# yum -y install heketi heketi-client
[root@k8s-master01 ~]# systemctl enable heketi
[root@k8s-master01 ~]# systemctl start heketi
```

- Configure SSH trust

> Heketi connects to each GlusterFS node over SSH and needs administrative privileges on them.

```bash
[root@k8s-master01 ~]# ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
[root@k8s-master01 ~]# chown heketi.heketi /etc/heketi/heketi_key*
[root@k8s-master01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.1.8
[root@k8s-master01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.1.9
[root@k8s-master01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.1.10
# Verify that passwordless SSH works
ssh -i /etc/heketi/heketi_key root@192.168.1.8
ssh -i /etc/heketi/heketi_key root@192.168.1.9
ssh -i /etc/heketi/heketi_key root@192.168.1.10
```
- Configure heketi's main configuration file, /etc/heketi/heketi.json

  To enable heketi authentication, set "use_auth" to "true" and define a password for each user in the "jwt {}" section; the user names and passwords are up to you. The "glusterfs {}" section specifies how heketi authenticates to the GlusterFS storage cluster and the related credentials.

```bash
[root@k8s-master01 ~]# vim /etc/heketi/heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  # Change the port to avoid conflicts
  "port": "18080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  # Enable authentication
  "use_auth": true,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    # Set the admin key
    "admin": {
      "key": "adminkey"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "userkey"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    # Use the ssh executor
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },

    # Location of heketi's database file
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    # Log output level
    "loglevel" : "warning"
  }
}
```
- Start the heketi service

```bash
[root@k8s-master01 ~]# systemctl restart heketi
[root@k8s-master01 ~]# systemctl status heketi
● heketi.service - Heketi Server
   Loaded: loaded (/usr/lib/systemd/system/heketi.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-11-22 15:02:13 CST; 5h 38min ago
 Main PID: 850 (heketi)
    Tasks: 11
   Memory: 31.8M
   CGroup: /system.slice/heketi.service
           └─850 /usr/bin/heketi --config=/etc/heketi/heketi.json

Nov 22 20:38:17 k8s-master01 heketi[850]: Tasks: 64
Nov 22 20:38:17 k8s-master01 heketi[850]: Memory: 123.6M
Nov 22 20:38:17 k8s-master01 heketi[850]: CGroup: /system.slice/glusterd.service
Nov 22 20:38:17 k8s-master01 heketi[850]: ├─ 1134 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Nov 22 20:38:17 k8s-master01 heketi[850]: ├─ 1261 /usr/sbin/glusterfsd -s 192.168.1.9 --volfile-id vol_03a6177cb7510b79e1081150d691e676.192.168.1.9.var-lib-heketi-mounts-vg_161c983bc2712439ed7a4a207c7ea4be-brick_573d2c3c…6/192.168.1.9-var-l
Nov 22 20:38:17 k8s-master01 heketi[850]: ├─ 1272 /usr/sbin/glusterfs -s localhost --volfile-id shd/vol_03a6177cb7510b79e1081150d691e676 -p /var/run/gluster/shd/vol_03a6177cb7510b79e1081150d691e676/vol_03a6177cb7510b79e1081150d691e676-sh…-6
Nov 22 20:38:17 k8s-master01 heketi[850]: └─126830 /usr/sbin/glusterfsd -s 192.168.1.9 --volfile-id vol_13187107acad247c37022bd116fce4b5.192.168.1.9.var-lib-heketi-mounts-vg_161c983bc2712439ed7a4a207c7ea4be-brick_a97025d8…5/192.168.1.9-var-l
Nov 22 20:38:17 k8s-master01 heketi[850]: Nov 22 15:03:55 k8s-node01 systemd[1]: Starting GlusterFS, a clustered file-system server…
Nov 22 20:38:17 k8s-master01 heketi[850]: Nov 22 15:03:57 k8s-node01 systemd[1]: Started GlusterFS, a clustered file-system server.
Nov 22 20:38:17 k8s-master01 heketi[850]: ]: Stderr []
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master01 ~]#
```
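Since a non-default port and authentication were configured, a quick way to confirm the service is answering is heketi's /hello endpoint (a simple sanity check; as far as I know this endpoint does not require the JWT credentials):

```bash
# Expect a short greeting such as "Hello from Heketi" if the service is reachable on port 18080
[root@k8s-master01 ~]# curl http://192.168.1.8:18080/hello
```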

### 4.1 Create a cluster with heketi

#### 4.1.1 Method 1

> This is the manual approach. In practice the second method below, which loads a topology file, is usually preferred.

- Create the cluster:

```bash
heketi-cli --user admin --server http://192.168.1.8:18080 --secret adminkey --json cluster create
```

Add the nodes to the cluster. Because heketi authentication is enabled, every heketi-cli invocation would otherwise need the authentication flags, which is tedious, so define an alias first:

```bash
alias heketi-cli='heketi-cli --server "http://192.168.1.8:18080" --user "admin" --secret "adminkey"'
```

```bash
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.1.8 --storage-host-name 192.168.1.8 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.1.9 --storage-host-name 192.168.1.9 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.1.10 --storage-host-name 192.168.1.10 --zone 1
```
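The node IDs generated here are needed in the next step when devices are added; if they were not saved from the JSON output, they can be listed again at any time (assuming the alias above is active):

```bash
# List all registered nodes with their IDs and the cluster they belong to
[root@k8s-master01 ~]# heketi-cli node list
```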

Some documents note that when deploying on CentOS you must comment out `Defaults requiretty` in /etc/sudoers on every GlusterFS node, otherwise adding the second node keeps failing; only after raising the log level does the log show a sudo "require tty" message. I did not run into this problem in this deployment, but if you do, that is the fix to apply.
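If you do hit it, something along these lines on each GlusterFS node comments the offending line out (a sketch; editing by hand with visudo is the safer route):

```bash
# Comment out "Defaults requiretty" so commands run over ssh without a TTY are not rejected
[root@k8s-master01 ~]# sed -ri 's/^(Defaults\s+requiretty)/#\1/' /etc/sudoers
```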

- Add devices

  Note that heketi only accepts raw partitions or raw disks as devices; devices that already carry a filesystem are not supported.

```bash
# The id passed to --node is the one generated when the node was added in the previous step.
# Only one example is shown here; in a real setup, add every storage disk of every node.
heketi-cli --json device add --name="/dev/sdb1" --node "c3638f57b5c5302c6f7cd5136c8fdc5e"
```
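To verify the device was accepted and to see how much space heketi detected on it, query the node (the node ID is the same one used above):

```bash
# Shows the node's cluster, zone, hostnames, and its registered devices
[root@k8s-master01 ~]# heketi-cli node info "c3638f57b5c5302c6f7cd5136c8fdc5e"
```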

#### 4.1.2 Method 2 (recommended)

- Define the heketi topology in a configuration file

  The topology tells heketi which nodes, disks, and clusters are available; the administrator must decide on node failure domains and cluster membership. A failure domain is an integer value assigned to a group of nodes that share the same switch, power supply, or anything else that would cause them to fail together. The following example (/etc/heketi/topology_demo.json) matches our GlusterFS storage cluster: it puts the three nodes into one cluster and lists the disk device on each node that provides storage space.

```bash
[root@k8s-master01 ~]# vim /etc/heketi/topology_demo.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.1.8"
              ],
              "storage": [
                "192.168.1.8"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.1.9"
              ],
              "storage": [
                "192.168.1.9"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.1.10"
              ],
              "storage": [
                "192.168.1.10"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}
```

Load the topology with the following command to complete the cluster setup. It creates a cluster and assigns a randomly generated ID to each node it adds:

```bash
[root@k8s-master01 ~]# heketi-cli topology load --json=/etc/heketi/topology_demo.json
```
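Once the topology is loaded, the generated cluster, node, and device IDs can all be reviewed in one place:

```bash
# Dump everything heketi now manages: clusters, their nodes, devices, and volumes
[root@k8s-master01 ~]# heketi-cli topology info
```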
- Create a 5 GB test volume

```bash
[root@k8s-master01 ~]# heketi-cli volume create --size=5
Name: vol_11e37e5de2b6010f36cbcccb462c2de2
Size: 5
Volume Id: 11e37e5de2b6010f36cbcccb462c2de2
Cluster Id: 8c12f6bfe29f894693770b7ee28d3d7d
Mount: 192.168.1.8:vol_11e37e5de2b6010f36cbcccb462c2de2
Mount Options: backup-volfile-servers=192.168.1.10,192.168.1.9
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
[root@k8s-master01 ~]#
# List existing volumes
[root@k8s-master01 ~]# heketi-cli volume list
Id:03a6177cb7510b79e1081150d691e676 Cluster:8c12f6bfe29f894693770b7ee28d3d7d Name:vol_03a6177cb7510b79e1081150d691e676
Id:11e37e5de2b6010f36cbcccb462c2de2 Cluster:8c12f6bfe29f894693770b7ee28d3d7d Name:vol_11e37e5de2b6010f36cbcccb462c2de2
Id:13187107acad247c37022bd116fce4b5 Cluster:8c12f6bfe29f894693770b7ee28d3d7d Name:vol_13187107acad247c37022bd116fce4b5
[root@k8s-master01 ~]#
# Show a volume's details
[root@k8s-master01 ~]# heketi-cli volume info 11e37e5de2b6010f36cbcccb462c2de2
Name: vol_11e37e5de2b6010f36cbcccb462c2de2
Size: 5
Volume Id: 11e37e5de2b6010f36cbcccb462c2de2
Cluster Id: 8c12f6bfe29f894693770b7ee28d3d7d
Mount: 192.168.1.8:vol_11e37e5de2b6010f36cbcccb462c2de2
Mount Options: backup-volfile-servers=192.168.1.10,192.168.1.9
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
[root@k8s-master01 ~]#
```
- Delete a volume

```bash
[root@k8s-master01 ~]# heketi-cli volume delete 11e37e5de2b6010f36cbcccb462c2de2
Volume 11e37e5de2b6010f36cbcccb462c2de2 deleted
[root@k8s-master01 ~]#
```
- View the cluster and node information

```bash
[root@k8s-master01 ~]# heketi-cli cluster list
Clusters:
Id:8c12f6bfe29f894693770b7ee28d3d7d [file][block]
[root@k8s-master01 ~]#

[root@k8s-master01 ~]# heketi-cli cluster info 8c12f6bfe29f894693770b7ee28d3d7d
Cluster id: 8c12f6bfe29f894693770b7ee28d3d7d
Nodes:
1eacec0cf601a9047a16db12476a168f
56703f998bfaea94384e2d8b8c70e9f8
7a23f906da56a56f51822549c9574c81
Volumes:
03a6177cb7510b79e1081150d691e676
13187107acad247c37022bd116fce4b5
Block: true
File: true
[root@k8s-master01 ~]#
```

## 5. Use GlusterFS as a Kubernetes StorageClass

- Official documentation: [https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs)
- Install glusterfs-fuse on all Kubernetes nodes, otherwise Pods will not be able to mount the storage
- Create the storage-glusterfs.yaml manifest and create the StorageClass from it:

```yaml
[root@k8s-master01 ~]# yum -y install centos-release-gluster
[root@k8s-master01 ~]# yum -y install glusterfs-fuse
[root@k8s-master01 ~]# cd /etc/kubernetes/storage/
[root@k8s-master01 storage]# vim storage-glusterfs.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.1.8:18080"
  clusterid: "857bfb93a4ed8e917e47cf5a970d0182"
  restauthenabled: "true"
  restuser: "admin"
  #secretNamespace: "default"
  #secretName: "heketi-secret"
  restuserkey: "jiang110110!"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
[root@k8s-master01 storage]# kubectl apply -f storage-glusterfs.yaml
storageclass.storage.k8s.io/glusterfs created
[root@k8s-master01 storage]#
[root@k8s-master01 storage]# kubectl get sc
NAME        PROVISIONER               AGE
glusterfs   kubernetes.io/glusterfs   82m
[root@k8s-master01 storage]#
```
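The commented-out secretName/secretNamespace fields point at the alternative the Kubernetes documentation recommends: keep the heketi key in a Secret of type kubernetes.io/glusterfs and reference it from the StorageClass instead of embedding restuserkey. A minimal sketch, assuming the key should match the admin key configured in /etc/heketi/heketi.json:

```bash
# Create the Secret referenced by secretName/secretNamespace;
# the "key" value must match heketi's admin key
[root@k8s-master01 storage]# kubectl create secret generic heketi-secret \
    --type="kubernetes.io/glusterfs" \
    --from-literal=key='adminkey' \
    --namespace=default
```

With the Secret in place, restuserkey can be dropped and the two commented lines enabled.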
- Create a PVC

```yaml
[root@k8s-master01 storage]# vim pvc-nginx.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-www-a
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: glusterfs
  volumeMode: Filesystem
[root@k8s-master01 storage]# kubectl apply -f pvc-nginx.yaml
persistentvolumeclaim/glusterfs-nginx created
[root@k8s-master01 storage]#

[root@k8s-master01 storage]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
glusterfs-nginx   Bound    pvc-f1805f35-adc6-46ad-8e7f-38c958a5e4d9   5Gi        RWX            glusterfs      21s
[root@k8s-master01 storage]#
[root@k8s-master01 storage]# kubectl describe pvc glusterfs-nginx
Name:          glusterfs-nginx
Namespace:     default
StorageClass:  glusterfs
Status:        Bound
Volume:        pvc-f1805f35-adc6-46ad-8e7f-38c958a5e4d9
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"glusterfs"},"name"…
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: glusterfs
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  95s   persistentvolume-controller  Successfully provisioned volume pvc-f1805f35-adc6-46ad-8e7f-38c958a5e4d9 using kubernetes.io/glusterfs
[root@k8s-master01 storage]#
```

- Use the PVC in a Pod

```yaml
[root@k8s-master01 storage]# vim my-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: daocloud.io/library/nginx:1.13.0-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: glusterfs-www
          mountPath: /var/www/html/
      volumes:
      - name: glusterfs-www
        persistentVolumeClaim:
          claimName: glusterfs-nginx-www-a
[root@k8s-master01 storage]#
[root@k8s-master01 storage]# kubectl apply -f my-nginx.yaml
deployment.apps/my-nginx created
[root@k8s-master01 storage]#
[root@k8s-master01 storage]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-6c496dbb6f-4sjk6   1/1     Running   0          3m10s
[root@k8s-master01 storage]#
[root@k8s-master01 storage]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-f1805f35-adc6-46ad-8e7f-38c958a5e4d9   5Gi        RWX            Delete           Bound    default/glusterfs-nginx   glusterfs               14h
[root@k8s-master01 storage]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
glusterfs-nginx   Bound    pvc-f1805f35-adc6-46ad-8e7f-38c958a5e4d9   5Gi        RWX            glusterfs      14h
[root@k8s-master01 storage]#
```
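As a final check that the GlusterFS volume is really mounted inside the container (the Pod name is taken from the output above):

```bash
# The mount for /var/www/html should show a fuse.glusterfs filesystem backed by the provisioned volume
[root@k8s-master01 storage]# kubectl exec my-nginx-6c496dbb6f-4sjk6 -- df -h /var/www/html
```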