We often use NFS as backend storage for testing because it is so easy to deploy, but it is rarely the choice for production, where Ceph, GlusterFS and similar systems are more common. This article walks through using GlusterFS as persistent storage in Kubernetes.

1. Installing GlusterFS

1.1 Planning

  Hostname            IP
  glusterfs-master    10.1.10.128
  glusterfs-node01    10.1.10.129
  glusterfs-node02    10.1.10.130

1.2 Installation

We install via YUM here; if you prefer, other methods such as building from source work as well.

(1) Configure hosts (/etc/hosts)

  10.1.10.129 glusterfs-node01
  10.1.10.130 glusterfs-node02
  10.1.10.128 glusterfs-master

(2) Install via YUM

  # yum install centos-release-gluster -y
  # yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma

(3) Start the service and enable it at boot

  # systemctl start glusterd.service && systemctl enable glusterd.service

(4) Configure the firewall if it is enabled

  # Add iptables rules if needed
  # iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
  # iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
  # iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
  # iptables -I INPUT -p tcp -m state --state NEW -m multiport --dports 49152:49251 -j ACCEPT

(5) Add the nodes to the cluster

  # Run from any one node that is already in the cluster; probing the local host itself is a harmless no-op
  # gluster peer probe glusterfs-master
  # gluster peer probe glusterfs-node01
  # gluster peer probe glusterfs-node02

(6) Check the cluster status

  # gluster peer status
  Number of Peers: 2

  Hostname: glusterfs-node01
  Uuid: bb59f0ee-1901-443c-b721-1fe3a1edebb4
  State: Peer in Cluster (Connected)
  Other names:
  glusterfs-node01
  10.1.10.129

  Hostname: glusterfs-node02
  Uuid: a0d1448a-d0f2-432a-bb45-b10650db106c
  State: Peer in Cluster (Connected)
  Other names:
  10.1.10.130

1.3 Testing

(1) Create a volume

  # Create the data directory (run on every node)
  # mkdir /data/gluster/data -p
  # gluster volume create glusterfs_volume replica 3 glusterfs-master:/data/gluster/data glusterfs-node01:/data/gluster/data glusterfs-node02:/data/gluster/data force

(2) Inspect the volume

  # gluster volume info

  Volume Name: glusterfs_volume
  Type: Replicate
  Volume ID: 53bdad7b-d40f-4160-bd42-4b70c8278506
  Status: Created
  Snapshot Count: 0
  Number of Bricks: 1 x 3 = 3
  Transport-type: tcp
  Bricks:
  Brick1: glusterfs-master:/data/gluster/data
  Brick2: glusterfs-node01:/data/gluster/data
  Brick3: glusterfs-node02:/data/gluster/data
  Options Reconfigured:
  transport.address-family: inet
  storage.fips-mode-rchecksum: on
  nfs.disable: on
  performance.client-io-threads: off

(3) Start the volume

  # gluster volume start glusterfs_volume
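Before moving on, it is worth confirming that all three brick processes actually came up; a quick check, assuming the volume name glusterfs_volume from above:

  # All three bricks should be listed as online
  gluster volume status glusterfs_volume
  # Status should now read "Started"
  gluster volume info glusterfs_volume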

(4) Install the client

  # yum install -y glusterfs glusterfs-fuse

(5) Mount the volume

  # mount -t glusterfs glusterfs-master:glusterfs_volume /mnt
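A quick end-to-end check of the replica-3 volume: write a file through the mount point and verify it shows up in the brick directory on every node. A minimal sketch using the paths defined above:

  # On the client, write a test file through the FUSE mount
  echo "hello gluster" > /mnt/test.txt

  # On each of the three nodes, the file should appear in the brick directory
  ls -l /data/gluster/data/test.txt
  cat /data/gluster/data/test.txt    # should print: hello gluster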

1.4 Tuning

  # The commands below use a volume named k8s-volume as an example; substitute your own volume name (e.g. glusterfs_volume)
  # Enable quotas on the volume
  $ gluster volume quota k8s-volume enable
  # Limit the quota on / of the volume to 1TB
  $ gluster volume quota k8s-volume limit-usage / 1TB
  # Set the cache size (default 32MB)
  $ gluster volume set k8s-volume performance.cache-size 4GB
  # Set the IO thread count (too large a value can crash the process)
  $ gluster volume set k8s-volume performance.io-thread-count 16
  # Set the network ping timeout (default 42s)
  $ gluster volume set k8s-volume network.ping-timeout 10
  # Set the write-behind window size (default 1MB)
  $ gluster volume set k8s-volume performance.write-behind-window-size 1024MB
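You can confirm that the options took effect with `gluster volume get` and the quota listing; a short check, again using the example volume name k8s-volume:

  # List the tuned options on the volume
  gluster volume get k8s-volume all | grep -E 'cache-size|io-thread-count|ping-timeout|write-behind-window'
  # Show the quota that was set on /
  gluster volume quota k8s-volume list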

2. Testing in Kubernetes

2.1 Basic test

(1) Configure the Endpoints

  # curl -O https://raw.githubusercontent.com/kubernetes/examples/master/volumes/glusterfs/glusterfs-endpoints.json

Edit glusterfs-endpoints.json and fill in the GlusterFS cluster information:

  {
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
      "name": "glusterfs-cluster"
    },
    "subsets": [
      {
        "addresses": [
          {
            "ip": "10.1.10.128"
          }
        ],
        "ports": [
          {
            "port": 2020
          }
        ]
      }
    ]
  }

The port can be any value; the ip is the address of a GlusterFS server. For high availability you can list every GlusterFS node under addresses, each paired with the same port.

Create the Endpoints object from the file:

  # kubectl apply -f glusterfs-endpoints.json
  # kubectl get ep
  NAME                ENDPOINTS          AGE
  glusterfs-cluster   10.1.10.128:2020   7m26s
  kubernetes          10.1.10.128:6443   27d

(2) Configure the Service

  curl -O https://raw.githubusercontent.com/kubernetes/examples/master/volumes/glusterfs/glusterfs-service.json

Edit the file; here only the port is changed:

  {
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
      "name": "glusterfs-cluster"
    },
    "spec": {
      "ports": [
        {"port": 2020}
      ]
    }
  }

Create the Service object:

  # kubectl apply -f glusterfs-service.json
  # kubectl get svc
  NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
  glusterfs-cluster   ClusterIP   10.254.44.189   <none>        2020/TCP   10m
  kubernetes          ClusterIP   10.254.0.1      <none>        443/TCP    27d

(3) Create a Pod to test

  curl -O https://raw.githubusercontent.com/kubernetes/examples/master/volumes/glusterfs/glusterfs-pod.json

Edit the file, setting path under volumes to the name of the volume created above:

  {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
      "name": "glusterfs"
    },
    "spec": {
      "containers": [
        {
          "name": "glusterfs",
          "image": "nginx",
          "volumeMounts": [
            {
              "mountPath": "/mnt/glusterfs",
              "name": "glusterfsvol"
            }
          ]
        }
      ],
      "volumes": [
        {
          "name": "glusterfsvol",
          "glusterfs": {
            "endpoints": "glusterfs-cluster",
            "path": "glusterfs_volume",
            "readOnly": true
          }
        }
      ]
    }
  }

Create the Pod object:

  # kubectl apply -f glusterfs-pod.json
  # kubectl get pod
  NAME        READY   STATUS    RESTARTS   AGE
  glusterfs   1/1     Running   0          51s
  pod-demo    1/1     Running   8          25h
  # kubectl exec -it glusterfs -- df -h
  Filesystem                     Size  Used  Avail  Use%  Mounted on
  overlay                        17G   2.5G  15G    15%   /
  tmpfs                          64M   0     64M    0%    /dev
  tmpfs                          910M  0     910M   0%    /sys/fs/cgroup
  /dev/mapper/centos-root        17G   2.5G  15G    15%   /etc/hosts
  10.1.10.128:glusterfs_volume   17G   5.3G  12G    31%   /mnt/glusterfs
  shm                            64M   0     64M    0%    /dev/shm
  tmpfs                          910M  12K   910M   1%    /run/secrets/kubernetes.io/serviceaccount
  tmpfs                          910M  0     910M   0%    /proc/acpi
  tmpfs                          910M  0     910M   0%    /proc/scsi
  tmpfs                          910M  0     910M   0%    /sys/firmware

The df output shows that the GlusterFS volume is mounted successfully at /mnt/glusterfs.
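Since the Pod mounts the volume readOnly, a simple way to go one step further is to write a file on a GlusterFS node through the host mount from section 1.3 and read it back from the Pod; a minimal sketch (the file name is illustrative):

  # On a GlusterFS node, using the host mount from section 1.3
  echo "from-gluster" > /mnt/hello.txt

  # From the Kubernetes side, read it inside the Pod (read-only mount, so we only read)
  kubectl exec glusterfs -- cat /mnt/glusterfs/hello.txt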

2.2 Static PV test

(1) Create the PV and PVC (glusterfs-pv.yaml)

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: glusterfs-pv
  spec:
    capacity:
      storage: 5Mi
    accessModes:
      - ReadWriteMany
    glusterfs:
      endpoints: glusterfs-cluster
      path: glusterfs_volume
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: glusterfs-pvc
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 5Mi

Create the PV and PVC objects:

  # kubectl apply -f glusterfs-pv.yaml
  # kubectl get pv
  NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
  glusterfs-pv   5Mi        RWX            Retain           Bound    default/glusterfs-pvc                           15s
  # kubectl get pvc
  NAME            STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  glusterfs-pvc   Bound    glusterfs-pv   5Mi        RWX                           18s

The output shows the PVC is bound; you can write a Pod yourself to test it, as in the sketch below.
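For example, a minimal test Pod (a hypothetical glusterfs-pvc-pod.yaml; the Pod, container and volume names are illustrative) that mounts the claim:

  apiVersion: v1
  kind: Pod
  metadata:
    name: glusterfs-pvc-pod
  spec:
    containers:
      - name: web
        image: nginx
        volumeMounts:
          - name: data                        # hypothetical volume name
            mountPath: /usr/share/nginx/html
    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: glusterfs-pvc            # the PVC created above

Apply it and confirm the GlusterFS volume is mounted inside the container:

  # kubectl apply -f glusterfs-pvc-pod.yaml
  # kubectl exec glusterfs-pvc-pod -- df -h /usr/share/nginx/html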

2.3 Dynamic PV test

For dynamic provisioning we bring in Heketi to manage GlusterFS.

Heketi provides a rich RESTful API for managing GlusterFS volumes. One Heketi instance can manage multiple clusters; each cluster consists of multiple nodes, each node is a physical machine that contributes raw disks (devices), and each device is carved into bricks. A volume is assembled from multiple bricks, but a single volume cannot span clusters. The relationship is illustrated below.

[Figure 1: Heketi hierarchy: cluster → nodes → devices → bricks → volume]

Reference: https://blog.csdn.net/DevOps008/article/details/80757974
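Once Heketi is installed and loaded with a topology (section 2.3.1 below), this hierarchy can be inspected directly with standard heketi-cli commands; the IDs below are placeholders:

  # Show the full cluster -> node -> device -> brick hierarchy that Heketi manages
  heketi-cli topology info
  # Drill into individual objects by ID
  heketi-cli cluster list
  heketi-cli node info <node-id>
  heketi-cli volume info <volume-id>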

2.3.1 Install Heketi

(1) Install

  # yum -y install heketi heketi-client

(2) Configure Heketi (/etc/heketi/heketi.json)

  {
    "_port_comment": "Heketi Server Port Number",
    "port": "48080",                          # listening port; the default is 8080

    "_use_auth": "Enable JWT authorization. Please enable for deployment",
    "use_auth": false,

    "_jwt": "Private keys for access",
    "jwt": {
      "_admin": "Admin has access to all APIs",
      "admin": {
        "key": "admin@P@ssW0rd"               # admin password
      },
      "_user": "User only has access to /volumes endpoint",
      "user": {
        "key": "user@P@ssW0rd"                # regular user password
      }
    },

    "_glusterfs_comment": "GlusterFS Configuration",
    "glusterfs": {
      "_executor_comment": [
        "Execute plugin. Possible choices: mock, ssh",
        "mock: This setting is used for testing and development.",
        "      It will not send commands to any node.",
        "ssh:  This setting will notify Heketi to ssh to the nodes.",
        "      It will need the values in sshexec to be configured.",
        "kubernetes: Communicate with GlusterFS containers over",
        "            Kubernetes exec api."
      ],
      "executor": "ssh",

      "_sshexec_comment": "SSH username and private key file information",
      "sshexec": {
        "keyfile": "/etc/heketi/private_key", # path to the ssh private key
        "user": "root",                       # ssh user
        "port": "22",                         # ssh port
        "fstab": "/etc/fstab"
      },

      "_kubeexec_comment": "Kubernetes configuration",
      "kubeexec": {
        "host" :"https://kubernetes.host:8443",
        "cert" : "/path/to/crt.file",
        "insecure": false,
        "user": "kubernetes username",
        "password": "password for kubernetes user",
        "namespace": "OpenShift project or Kubernetes namespace",
        "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
      },

      "_db_comment": "Database file name",
      "db": "/var/lib/heketi/heketi.db",

      "_loglevel_comment": [
        "Set log level. Choices are:",
        "  none, critical, error, warning, info, debug",
        "Default is warning"
      ],
      "loglevel" : "debug"
    }
  }

(The # annotations above are explanatory only; JSON does not allow comments, so remove them from the actual file.)

Note: Heketi manages the cluster through the executor setting, which supports three modes:

  • mock
  • ssh
  • kubernetes

mock, as the name implies, is for testing: it lets you validate your configuration, and nodes will appear to be added and volumes to be created successfully, but those volumes are unusable and cannot be mounted. For SVT or production environments you must use the ssh or kubernetes mode. We use ssh mode here.

(3) Set up passwordless SSH

  # ssh-keygen -t rsa -q -f /etc/heketi/private_key -N ""
  # ssh-copy-id -i /etc/heketi/private_key.pub root@10.1.10.128
  # ssh-copy-id -i /etc/heketi/private_key.pub root@10.1.10.129
  # ssh-copy-id -i /etc/heketi/private_key.pub root@10.1.10.130
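Before starting Heketi it is worth confirming that this key actually works against every node, since the ssh executor will use exactly this key; a minimal check:

  # Each command should run the remote gluster binary without prompting for a password
  ssh -i /etc/heketi/private_key root@10.1.10.128 "gluster --version | head -1"
  ssh -i /etc/heketi/private_key root@10.1.10.129 "gluster --version | head -1"
  ssh -i /etc/heketi/private_key root@10.1.10.130 "gluster --version | head -1"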

(4) Start Heketi

  # Give the heketi user ownership of the config directory
  # chown heketi:heketi /etc/heketi/ -R
  # systemctl enable heketi.service && systemctl start heketi.service
  # Test
  # curl http://10.1.10.128:48080/hello
  Hello from Heketi

(5) Configure the topology

The topology tells Heketi which storage nodes, disks and clusters it may use, and you must decide the failure domain for each node yourself. A failure domain is an integer assigned to a group of nodes that share the same switch, power supply or any other component whose failure would take them down together. You must also decide which nodes make up a cluster; Heketi uses this information to place replicas across failure domains and thereby provide data redundancy. Heketi supports multiple Gluster storage clusters.

Keep the following points in mind when configuring the Heketi topology:

  • The GlusterFS cluster to build is defined in the topology.json file;
  • The topology describes the hierarchy clusters —> nodes —> node/devices —> hostnames/zone;
  • For node/hostnames, manage should preferably be the host IP; it is the management channel. You cannot use a hostname here if the Heketi server cannot resolve the GlusterFS nodes by hostname;
  • For node/hostnames, storage should preferably be the host IP; it is the data channel and may differ from manage. In production it is recommended to separate the management and storage networks;
  • node/zone assigns each node to a failure domain. Heketi places replicas across failure domains to improve data availability; for example, you can derive zone values from rack placement to create rack-level failure domains;
  • devices lists each GlusterFS node's disks (multiple disks are allowed); they must be raw devices with no filesystem on them.

Source of the above: https://www.cnblogs.com/itzgr/p/11913342.html#_labelTop

The configuration file looks like this (/etc/heketi/topology.json):

  {
    "clusters": [
      {
        "nodes": [
          {
            "node": {
              "hostnames": {
                "manage": [
                  "10.1.10.128"
                ],
                "storage": [
                  "10.1.10.128"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/sdb1"        # must be a raw device with no filesystem
            ]
          },
          {
            "node": {
              "hostnames": {
                "manage": [
                  "10.1.10.129"
                ],
                "storage": [
                  "10.1.10.129"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/sdb1"
            ]
          },
          {
            "node": {
              "hostnames": {
                "manage": [
                  "10.1.10.130"
                ],
                "storage": [
                  "10.1.10.130"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/sdb1"
            ]
          }
        ]
      }
    ]
  }

Important: the devices field lists each GlusterFS node's disks (multiple disks are allowed), and they must be raw devices with no filesystem; a quick way to check is shown below.
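A minimal sketch for verifying (and, if necessary, preparing) the devices before loading the topology; note that wipefs -a destroys existing signatures, so only run it on disks you intend to hand over to Heketi:

  # FSTYPE should be empty for a raw device
  lsblk -f /dev/sdb1
  # If an old filesystem or LVM signature is present, wipe it (destructive!)
  wipefs -a /dev/sdb1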

Since every heketi-cli call would otherwise need the server URL, user name and password, we put them into an environment variable and an alias for convenience:

  # echo "export HEKETI_CLI_SERVER=http://10.1.10.128:48080" >> /etc/profile.d/heketi.sh
  # echo "alias heketi-cli='heketi-cli --user admin --secret admin@P@ssW0rd'" >> ~/.bashrc
  # source /etc/profile.d/heketi.sh
  # source ~/.bashrc
  # echo $HEKETI_CLI_SERVER
  http://10.1.10.128:48080

(6) Create the cluster

  # heketi-cli --server $HEKETI_CLI_SERVER --user admin --secret admin@P@ssW0rd topology load --json=/etc/heketi/topology.json
  Creating cluster ... ID: cca360f44db482f03297a151886eea19
      Allowing file volumes on cluster.
      Allowing block volumes on cluster.
      Creating node 10.1.10.128 ... ID: 5216dafba986a087d7c3b1e11fa36c05
          Adding device /dev/sdb1 ... OK
      Creating node 10.1.10.129 ... ID: e384286825957b60213cc9b2cb604744
          Adding device /dev/sdb1 ... OK
      Creating node 10.1.10.130 ... ID: 178a8c6fcfb8ccb02b1b871db01254c2
          Adding device /dev/sdb1 ... OK

(7) View cluster information

  # List clusters
  # heketi-cli cluster list
  Clusters:
  Id:cca360f44db482f03297a151886eea19 [file][block]
  # Show details for a cluster
  # heketi-cli cluster info cca360f44db482f03297a151886eea19
  # List nodes
  # heketi-cli node list
  # Show details for a node
  # heketi-cli node info 68f16b2d54acf1c18e354ec46aa736ad

2.3.2 Create a test volume

  # heketi-cli volume create --size=2 --replica=2
  Name: vol_4f1a171ab06adf80460c84f2132e96e0
  Size: 2
  Volume Id: 4f1a171ab06adf80460c84f2132e96e0
  Cluster Id: cca360f44db482f03297a151886eea19
  Mount: 10.1.10.129:vol_4f1a171ab06adf80460c84f2132e96e0
  Mount Options: backup-volfile-servers=10.1.10.130,10.1.10.128
  Block: false
  Free Size: 0
  Reserved Size: 0
  Block Hosting Restriction: (none)
  Block Volumes: []
  Durability Type: replicate
  Distribute Count: 1
  Replica Count: 2

  # heketi-cli volume list
  Id:4f1a171ab06adf80460c84f2132e96e0    Cluster:cca360f44db482f03297a151886eea19    Name:vol_4f1a171ab06adf80460c84f2132e96e0

  # heketi-cli volume info 4f1a171ab06adf80460c84f2132e96e0
  Name: vol_4f1a171ab06adf80460c84f2132e96e0
  Size: 2
  Volume Id: 4f1a171ab06adf80460c84f2132e96e0
  Cluster Id: cca360f44db482f03297a151886eea19
  Mount: 10.1.10.129:vol_4f1a171ab06adf80460c84f2132e96e0
  Mount Options: backup-volfile-servers=10.1.10.130,10.1.10.128
  Block: false
  Free Size: 0
  Reserved Size: 0
  Block Hosting Restriction: (none)
  Block Volumes: []
  Durability Type: replicate
  Distribute Count: 1
  Replica Count: 2

  # Mount the volume
  # mount -t glusterfs 10.1.10.129:vol_4f1a171ab06adf80460c84f2132e96e0 /mnt

  # Delete the volume
  # heketi-cli volume delete 4f1a171ab06adf80460c84f2132e96e0

2.3.3 Test in Kubernetes

(1) Create the required Secret (heketi-secret.yaml)

  apiVersion: v1
  kind: Secret
  metadata:
    name: heketi-secret
  data:
    key: YWRtaW5AUEBzc1cwcmQ=
  type: kubernetes.io/glusterfs

The key must be base64-encoded; generate it as follows:

  echo -n "admin@P@ssW0rd" | base64

(2) Create the StorageClass (heketi-storageclass.yaml)

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: heketi-storageclass
  parameters:
    resturl: "http://10.1.10.128:48080"
    clusterid: "cca360f44db482f03297a151886eea19"
    restauthenabled: "true"        # must be "true" when Heketi has authentication enabled
    restuser: "admin"
    secretName: "heketi-secret"    # name/namespace must match the Secret resource
    secretNamespace: "default"
    volumetype: "replicate:3"
  provisioner: kubernetes.io/glusterfs
  reclaimPolicy: Delete

Notes:

  • provisioner: the storage provisioner, which varies with the backend storage;
  • reclaimPolicy: defaults to "Delete", meaning that when the PVC is deleted the corresponding PV and the backend volume, bricks (LVs) and so on are deleted with it; set it to "Retain" to keep the data, in which case cleanup must be done manually;
  • resturl: URL of the Heketi REST API service;
  • restauthenabled: optional, defaults to "false"; must be set to "true" when the Heketi service has authentication enabled;
  • restuser: optional; the user name to use when authentication is enabled;
  • secretNamespace: optional; when authentication is enabled it can be set to the namespace that uses the persistent storage;
  • secretName: optional; when authentication is enabled, the Heketi password must be stored in a Secret resource;
  • clusterid: optional; the cluster ID to use, which can also be a list of IDs in the form "id1,id2";
  • volumetype: optional; the volume type and its parameters. If unset, the provisioner decides. For example, "volumetype: replicate:3" is a 3-replica replicate volume, "volumetype: disperse:4:2" is a disperse volume with 4 data and 2 redundancy bricks, and "volumetype: none" is a distribute volume.

(3) Create the PVC (heketi-pvc.yaml)

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: heketi-pvc
    annotations:
      volume.beta.kubernetes.io/storage-class: heketi-storageclass
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi

(4) Check the StorageClass and PVC

  # kubectl get sc
  NAME                  PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
  heketi-storageclass   kubernetes.io/glusterfs   Delete          Immediate           false                  6m53s
  # kubectl get pvc
  NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
  glusterfs-pvc   Bound    glusterfs-pv                               5Mi        RWX                                  26h
  heketi-pvc      Bound    pvc-0feb8666-6e7f-451d-ae6f-7f205206b225   1Gi        RWO            heketi-storageclass   82s
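Behind the scenes the provisioner asked Heketi to create a new GlusterFS volume and generated a matching PV; you can cross-check both sides with a quick sketch:

  # The dynamically provisioned PV bound to heketi-pvc
  kubectl get pv | grep heketi-pvc
  # The corresponding volume that Heketi created on the GlusterFS cluster
  heketi-cli volume list
  gluster volume list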

(5) Create a Pod that mounts the PVC (heketi-pod.yaml)

  kind: Pod
  apiVersion: v1
  metadata:
    name: heketi-pod
  spec:
    containers:
      - name: heketi-container
        image: busybox
        command:
          - sleep
          - "3600"
        volumeMounts:
          - name: heketi-volume
            mountPath: "/pv-data"
            readOnly: false
    volumes:
      - name: heketi-volume
        persistentVolumeClaim:
          claimName: heketi-pvc

Create the Pod object and check the result:

  # kubectl apply -f heketi-pod.yaml
  # kubectl get pod
  NAME         READY   STATUS    RESTARTS   AGE
  glusterfs    1/1     Running   0          26h
  heketi-pod   1/1     Running   0          2m55s

Write a file inside the Pod to test:

  # kubectl exec -it heketi-pod -- /bin/sh
  / # cd /pv-data/
  /pv-data # echo "text" > 1111.txt
  /pv-data # ls
  1111.txt

On the storage node, check whether the file written in the Pod is there:

  # cd /var/lib/heketi/mounts/vg_bffb11849513dded78f671f64e76750c/brick_6ff640a2d45a7f146a296473e7145ee7
  [root@k8s-master brick_6ff640a2d45a7f146a296473e7145ee7]# ll
  total 0
  drwxrwsr-x 3 root 2000 40 Feb  7 14:27 brick
  [root@k8s-master brick_6ff640a2d45a7f146a296473e7145ee7]# cd brick/
  [root@k8s-master brick]# ll
  total 4
  -rw-r--r-- 2 root 2000 5 Feb  7 14:27 1111.txt
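The file is there, so dynamic provisioning works end to end. Because the StorageClass uses reclaimPolicy: Delete, removing the test Pod and PVC is enough to have Heketi tear down the backing volume and bricks as well; a rough cleanup sketch:

  kubectl delete pod heketi-pod
  kubectl delete pvc heketi-pvc
  # The dynamically created PV and the backing Gluster volume should disappear shortly afterwards
  kubectl get pv
  heketi-cli volume list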