## 1. Environment
| Hostname | IP Address | Role |
| --- | --- | --- |
| k8s-master01 | 192.168.1.8 | k8s-master, glusterfs, heketi |
| k8s-node01 | 192.168.1.9 | k8s-node, glusterfs |
| k8s-node02 | 192.168.1.10 | k8s-node, glusterfs |
## 2. Notes
Since this is a lab environment, each of the three VMs gets one additional disk to serve as GlusterFS storage. After adding the disk, do not format it; you only need to find out which device name it got.
- On my machines the new disk shows up as /dev/sdb:

```bash
[root@k8s-master01 ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009b911

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   104857599    51379200   8e  Linux LVM

Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
… … …
[root@k8s-master01 ~]#
```
<a name="IZLDI"></a>
## 3.安装glustefs软件
- 所有节点全部安装glustefs软件
```bash
[root@k8s-master01 ~]# yum -y install centos-release-gluster
[root@k8s-master01 ~]# yum -y install glusterfs glusterfs-server glusterfs-fuse
[root@k8s-master01 ~]# systemctl enable glusterfsd
[root@k8s-master01 ~]# systemctl start glusterfsd
[root@k8s-master01 ~]# systemctl enable glusterd
[root@k8s-master01 ~]# systemctl start glusterd
```

- Add the nodes to the trusted storage pool
- On any one node, probe the other nodes with the following commands to form the GlusterFS cluster:

```bash
[root@k8s-master01 ~]# gluster peer probe 192.168.1.9
[root@k8s-master01 ~]# gluster peer probe 192.168.1.10
[root@k8s-master01 ~]# gluster peer status
Number of Peers: 2

Hostname: 192.168.1.9
Uuid: 368760d3-c0be-4670-8616-0d4fef3ffc25
State: Peer in Cluster (Connected)

Hostname: 192.168.1.10
Uuid: 017fb361-cb4a-4a40-9f3c-62a94208d626
State: Peer in Cluster (Connected)
[root@k8s-master01 ~]#
```
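To script this check, a small helper can count how many peers report as connected. This is a minimal sketch that parses the `gluster peer status` output shown above (the exact state string is an assumption taken from that output):

```shell
#!/bin/sh
# Count peers that `gluster peer status` reports as connected.
# Reads the status text from stdin so the parsing can be tested offline.
count_connected_peers() {
  grep -c 'State: Peer in Cluster (Connected)'
}

# On a real node (expects 2 in this three-node setup):
#   gluster peer status | count_connected_peers
```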
- Test

Create a test volume. Use directories on the existing system partitions rather than the raw disk we just added, because heketi (used later) only supports unformatted raw disks:

```bash
[root@k8s-master01 ~]# gluster volume create test-volume replica 2 192.168.1.9:/home/gluster 192.168.1.10:/home/gluster force
volume create: test-volume: success: please start the volume to access data
[root@k8s-master01 ~]#
```

Activate the test volume:

```bash
[root@k8s-master01 ~]# gluster volume start test-volume
volume start: test-volume: success
[root@k8s-master01 ~]#
```

Delete the test volume:

```bash
[root@k8s-master01 ~]# gluster volume stop test-volume
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: test-volume: success
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# gluster volume delete test-volume
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: test-volume: success
[root@k8s-master01 ~]#
```
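Before deleting a test volume like this, you can also verify it end to end by mounting it with the FUSE client and writing a file. A minimal sketch (the mount point /mnt/gluster is an arbitrary choice, not part of the original setup):

```bash
# Mount the replicated volume from any gluster node and do a quick write test.
mkdir -p /mnt/gluster
mount -t glusterfs 192.168.1.9:/test-volume /mnt/gluster
echo ok > /mnt/gluster/hello.txt
cat /mnt/gluster/hello.txt
umount /mnt/gluster
```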
## 4. Deploy heketi
- Heketi provides a RESTful management interface for managing the lifecycle of GlusterFS volumes. With Heketi, platforms such as OpenStack Manila, Kubernetes, and OpenShift can dynamically provision GlusterFS volumes. Heketi automatically picks bricks across the cluster to build the requested volumes, ensuring that data replicas end up in different failure domains. Heketi also supports any number of GlusterFS clusters, so the consuming servers are not tied to a single GlusterFS cluster.
- With heketi, storage administrators no longer manage or configure bricks, disks, or trusted storage pools by hand; the heketi service manages the hardware for them and allocates storage on demand. Note, however, that any disk registered with heketi must be provided in raw format; partitions that already carry a file system are not accepted.
Heketi project: https://github.com/heketi/heketi
Install heketi:

```bash
[root@k8s-master01 ~]# yum -y install heketi heketi-client
[root@k8s-master01 ~]# systemctl enable heketi
[root@k8s-master01 ~]# systemctl start heketi
```
- Set up SSH trust

> heketi connects to each node of the GlusterFS cluster over SSH and needs administrative privileges on them
```bash
[root@k8s-master01 ~]# ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
[root@k8s-master01 ~]# chown heketi.heketi /etc/heketi/heketi_key*
[root@k8s-master01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.1.8
[root@k8s-master01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.1.9
[root@k8s-master01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.1.10
#Test that passwordless SSH works
ssh -i /etc/heketi/heketi_key root@192.168.1.8
ssh -i /etc/heketi/heketi_key root@192.168.1.9
ssh -i /etc/heketi/heketi_key root@192.168.1.10
```
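The three manual `ssh -i` checks can be wrapped in a loop. The helper below is a sketch in which the connect command is injectable, so the reporting logic can be tested without a live cluster:

```shell
#!/bin/sh
# Report OK/FAILED for key-based SSH to one host.
# $1 = host; remaining args = the command used to connect
# (e.g. `ssh -i /etc/heketi/heketi_key -o BatchMode=yes`), injectable for testing.
check_node() {
  host=$1; shift
  if "$@" "root@${host}" true >/dev/null 2>&1; then
    echo "${host}: OK"
  else
    echo "${host}: FAILED"
  fi
}

# Real usage:
#   for h in 192.168.1.8 192.168.1.9 192.168.1.10; do
#     check_node "$h" ssh -i /etc/heketi/heketi_key -o BatchMode=yes
#   done
```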
- Edit heketi's main configuration file, /etc/heketi/heketi.json
To enable heketi's authentication, set "use_auth" to "true" and define a key for each user in the "jwt" section; user names and keys are arbitrary. The "glusterfs" section specifies how heketi connects to and authenticates against the GlusterFS storage cluster.

```bash
[root@k8s-master01 ~]# vim /etc/heketi/heketi.json
```

```json
{
  "_port_comment": "Heketi Server Port Number",
  #change the port to avoid conflicts
  "port": "18080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  #enable authentication
  "use_auth": true,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    #set the admin user's key
    "admin": {
      "key": "adminkey"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "userkey"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    #use the ssh executor
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },

    #location of the heketi database file
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    #log output level
    "loglevel" : "warning"
  }
}
```
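Before restarting the service it is worth sanity-checking the edits. The sketch below only greps for the handful of settings changed above; it is not a full JSON validation:

```shell
#!/bin/sh
# Check the settings this guide changes in heketi.json ($1 = path to the file).
check_heketi_cfg() {
  grep -q '"port": "18080"' "$1" \
    && grep -q '"use_auth": true' "$1" \
    && grep -q '"executor": "ssh"' "$1"
}

# Real usage:
#   check_heketi_cfg /etc/heketi/heketi.json && echo "config looks sane"
```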
- Restart heketi and check that the service is running:

```bash
[root@k8s-master01 ~]# systemctl restart heketi
[root@k8s-master01 ~]# systemctl status heketi
● heketi.service - Heketi Server
   Loaded: loaded (/usr/lib/systemd/system/heketi.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-11-22 15:02:13 CST; 5h 38min ago
 Main PID: 850 (heketi)
    Tasks: 11
   Memory: 31.8M
   CGroup: /system.slice/heketi.service
           └─850 /usr/bin/heketi --config=/etc/heketi/heketi.json

Nov 22 20:38:17 k8s-master01 heketi[850]: Tasks: 64
Nov 22 20:38:17 k8s-master01 heketi[850]: Memory: 123.6M
Nov 22 20:38:17 k8s-master01 heketi[850]: CGroup: /system.slice/glusterd.service
Nov 22 20:38:17 k8s-master01 heketi[850]: ├─ 1134 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Nov 22 20:38:17 k8s-master01 heketi[850]: ├─ 1261 /usr/sbin/glusterfsd -s 192.168.1.9 --volfile-id vol_03a6177cb7510b79e1081150d691e676.192.168.1.9.var-lib-heketi-mounts-vg_161c983bc2712439ed7a4a207c7ea4be-brick_573d2c3c…6/192.168.1.9-var-l
Nov 22 20:38:17 k8s-master01 heketi[850]: ├─ 1272 /usr/sbin/glusterfs -s localhost --volfile-id shd/vol_03a6177cb7510b79e1081150d691e676 -p /var/run/gluster/shd/vol_03a6177cb7510b79e1081150d691e676/vol_03a6177cb7510b79e1081150d691e676-sh…-6
Nov 22 20:38:17 k8s-master01 heketi[850]: └─126830 /usr/sbin/glusterfsd -s 192.168.1.9 --volfile-id vol_13187107acad247c37022bd116fce4b5.192.168.1.9.var-lib-heketi-mounts-vg_161c983bc2712439ed7a4a207c7ea4be-brick_a97025d8…5/192.168.1.9-var-l
Nov 22 20:38:17 k8s-master01 heketi[850]: Nov 22 15:03:55 k8s-node01 systemd[1]: Starting GlusterFS, a clustered file-system server…
Nov 22 20:38:17 k8s-master01 heketi[850]: Nov 22 15:03:57 k8s-node01 systemd[1]: Started GlusterFS, a clustered file-system server.
Nov 22 20:38:17 k8s-master01 heketi[850]: ]: Stderr []
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master01 ~]#
```
<a name="riaXa"></a>
### 4.1 使用heketi创建集群
<a name="W7CoQ"></a>
#### 4.1.1方法1
> The following is the manual approach; normally we use the second method below, which builds everything from a topology file.

- Create the cluster:
```bash
heketi-cli --user admin --server http://192.168.1.8:18080 --secret adminkey --json cluster create
```

- Add the nodes to the cluster

Because heketi authentication is enabled, every heketi-cli invocation must carry the auth parameters, which is tedious; define an alias instead:

```bash
alias heketi-cli='heketi-cli --server "http://192.168.1.8:18080" --user "admin" --secret "adminkey"'
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.1.8 --storage-host-name 192.168.1.8 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.1.9 --storage-host-name 192.168.1.9 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.1.10 --storage-host-name 192.168.1.10 --zone 1
```

Some documents note that on CentOS you must comment out `Defaults requiretty` in /etc/sudoers on every GlusterFS node, otherwise adding the second node keeps failing; only after raising the log level does the log show a "require tty" message from sudo. I deployed directly on Ubuntu, where this problem does not occur, but if you run into it, apply that fix.
- Add devices

Note that heketi currently only accepts raw partitions or raw disks as devices; devices with a file system are not supported.

```bash
# The id passed to --node is the one generated in the previous step when the
# node was added. This is a single example; in practice, add every storage
# disk of every node.
heketi-cli --json device add --name="/dev/sdb1" --node "c3638f57b5c5302c6f7cd5136c8fdc5e"
```
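To avoid copying node IDs by hand, they can be parsed out of `heketi-cli node list`. The helper below is a sketch that assumes the `Id:<id> Cluster:<id>` line format printed by these heketi-cli versions:

```shell
#!/bin/sh
# Extract node IDs from `heketi-cli node list` output read on stdin.
# Lines look like: "Id:c3638f57b5c5302c Cluster:d102a74079dd".
node_ids() {
  awk '{ split($1, a, ":"); print a[2] }'
}

# Real usage: add /dev/sdb on every registered node.
#   heketi-cli node list | node_ids | while read -r id; do
#     heketi-cli device add --name=/dev/sdb --node "$id"
#   done
```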
#### 4.1.2 Method 2 (recommended)
- Define the heketi topology in a configuration file

The topology tells heketi which nodes, disks, and clusters are available; the administrator must decide on node failure domains and cluster membership. A failure domain is an integer assigned to a group of nodes that share the same switch, power supply, or any other component that would cause them to fail together. A configuration matching our environment is shown below (/etc/heketi/topology_demo.json); it defines the three nodes of the GlusterFS storage cluster as a single cluster and lists the disk device on each node that provides storage space.
```bash
[root@k8s-master01 ~]# vim /etc/heketi/topology_demo.json
```

```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.1.8"
              ],
              "storage": [
                "192.168.1.8"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.1.9"
              ],
              "storage": [
                "192.168.1.9"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.1.10"
              ],
              "storage": [
                "192.168.1.10"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}
```
Load the topology with the following command to complete the cluster configuration. This creates a cluster and generates random IDs for it and for each node it adds:

```bash
[root@k8s-master01 ~]# heketi-cli topology load --json=/etc/heketi/topology_demo.json
```

Create a 5 GB test volume:

```bash
[root@k8s-master01 ~]# heketi-cli volume create --size=5
Name: vol_11e37e5de2b6010f36cbcccb462c2de2
Size: 5
Volume Id: 11e37e5de2b6010f36cbcccb462c2de2
Cluster Id: 8c12f6bfe29f894693770b7ee28d3d7d
Mount: 192.168.1.8:vol_11e37e5de2b6010f36cbcccb462c2de2
Mount Options: backup-volfile-servers=192.168.1.10,192.168.1.9
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
[root@k8s-master01 ~]#
#List existing volumes
[root@k8s-master01 ~]# heketi-cli volume list
Id:03a6177cb7510b79e1081150d691e676 Cluster:8c12f6bfe29f894693770b7ee28d3d7d Name:vol_03a6177cb7510b79e1081150d691e676
Id:11e37e5de2b6010f36cbcccb462c2de2 Cluster:8c12f6bfe29f894693770b7ee28d3d7d Name:vol_11e37e5de2b6010f36cbcccb462c2de2
Id:13187107acad247c37022bd116fce4b5 Cluster:8c12f6bfe29f894693770b7ee28d3d7d Name:vol_13187107acad247c37022bd116fce4b5
[root@k8s-master01 ~]#
#Show a volume's detailed information
[root@k8s-master01 ~]# heketi-cli volume info 11e37e5de2b6010f36cbcccb462c2de2
Name: vol_11e37e5de2b6010f36cbcccb462c2de2
Size: 5
Volume Id: 11e37e5de2b6010f36cbcccb462c2de2
Cluster Id: 8c12f6bfe29f894693770b7ee28d3d7d
Mount: 192.168.1.8:vol_11e37e5de2b6010f36cbcccb462c2de2
Mount Options: backup-volfile-servers=192.168.1.10,192.168.1.9
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
[root@k8s-master01 ~]#
```

Delete a volume:

```bash
[root@k8s-master01 ~]# heketi-cli volume delete 11e37e5de2b6010f36cbcccb462c2de2
Volume 11e37e5de2b6010f36cbcccb462c2de2 deleted
[root@k8s-master01 ~]#
```
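The `Id:` fields in the volume listing above can also be parsed in a script, e.g. to inspect every volume in one pass. A sketch assuming the `Id:<id> Cluster:<id> Name:<name>` line format shown above:

```shell
#!/bin/sh
# Extract volume IDs from `heketi-cli volume list` output read on stdin.
volume_ids() {
  sed -n 's/^Id:\([^ ]*\).*/\1/p'
}

# Real usage:
#   heketi-cli volume list | volume_ids | while read -r id; do
#     heketi-cli volume info "$id"
#   done
```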
Show cluster and node information:

```bash
[root@k8s-master01 ~]# heketi-cli cluster list
Clusters:
Id:8c12f6bfe29f894693770b7ee28d3d7d [file][block]
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# heketi-cli cluster info 8c12f6bfe29f894693770b7ee28d3d7d
Cluster id: 8c12f6bfe29f894693770b7ee28d3d7d
Nodes:
1eacec0cf601a9047a16db12476a168f
56703f998bfaea94384e2d8b8c70e9f8
7a23f906da56a56f51822549c9574c81
Volumes:
03a6177cb7510b79e1081150d691e676
13187107acad247c37022bd116fce4b5
Block: true
File: true
[root@k8s-master01 ~]#
```
<a name="siQzH"></a>
## 利用glusterfs作为k8s的存储类(StorageClass)
- Official documentation: [https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs)
- Install glusterfs-fuse on every Kubernetes node, otherwise Pods will not be able to mount the storage
- Create the storage-glusterfs.yaml file
- Create the StorageClass
```bash
[root@k8s-master01 ~]# yum -y install centos-release-gluster
[root@k8s-master01 ~]# yum -y install glusterfs-fuse
[root@k8s-master01 ~]# cd /etc/kubernetes/storage/
[root@k8s-master01 storage]# vim storage-glusterfs.yaml
```

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.1.8:18080"
  clusterid: "857bfb93a4ed8e917e47cf5a970d0182"
  restauthenabled: "true"
  restuser: "admin"
  #secretNamespace: "default"
  #secretName: "heketi-secret"
  restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
```

```bash
[root@k8s-master01 storage]# kubectl apply -f storage-glusterfs.yaml
storageclass.storage.k8s.io/glusterfs created
[root@k8s-master01 storage]#
[root@k8s-master01 storage]# kubectl get sc
NAME        PROVISIONER               AGE
glusterfs   kubernetes.io/glusterfs   82m
[root@k8s-master01 storage]#
```
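The commented-out `secretNamespace`/`secretName` fields above are the more common way to pass the admin key, instead of putting `restuserkey` in plain text. A sketch (the secret name `heketi-secret` matches the commented-out fields; the key value must equal `admin.key` in heketi.json):

```bash
# Store the heketi admin key in a Secret of type kubernetes.io/glusterfs.
kubectl create secret generic heketi-secret \
  --namespace default \
  --type kubernetes.io/glusterfs \
  --from-literal=key=adminkey
# Then, in the StorageClass parameters, replace restuserkey with:
#   secretNamespace: "default"
#   secretName: "heketi-secret"
```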
- Create a PVC
```bash
[root@k8s-master01 storage]# vim pvc-nginx.yaml
```

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: glusterfs
  volumeMode: Filesystem
```

```bash
[root@k8s-master01 storage]# kubectl apply -f pvc-nginx.yaml
persistentvolumeclaim/glusterfs-nginx created
[root@k8s-master01 storage]#
```
```bash
[root@k8s-master01 storage]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
glusterfs-nginx   Bound    pvc-f1805f35-adc6-46ad-8e7f-38c958a5e4d9   5Gi        RWX            glusterfs      21s
[root@k8s-master01 storage]#
[root@k8s-master01 storage]# kubectl describe pvc glusterfs-nginx
Name:          glusterfs-nginx
Namespace:     default
StorageClass:  glusterfs
Status:        Bound
Volume:        pvc-f1805f35-adc6-46ad-8e7f-38c958a5e4d9
Labels:
…
Normal  ProvisioningSucceeded  95s  persistentvolume-controller  Successfully provisioned volume pvc-f1805f35-adc6-46ad-8e7f-38c958a5e4d9 using kubernetes.io/glusterfs
[root@k8s-master01 storage]#
```
- Use the PVC in a Pod
```bash
[root@k8s-master01 storage]# vim my-nginx.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: daocloud.io/library/nginx:1.13.0-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: glusterfs-www
          mountPath: /var/www/html/
      volumes:
      - name: glusterfs-www
        persistentVolumeClaim:
          claimName: glusterfs-nginx
```
```bash
[root@k8s-master01 storage]# kubectl apply -f my-nginx.yaml
deployment.apps/my-nginx created
[root@k8s-master01 storage]#
[root@k8s-master01 storage]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-6c496dbb6f-4sjk6   1/1     Running   0          3m10s
[root@k8s-master01 storage]#
[root@k8s-master01 storage]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-f1805f35-adc6-46ad-8e7f-38c958a5e4d9   5Gi        RWX            Delete           Bound    default/glusterfs-nginx   glusterfs               14h
[root@k8s-master01 storage]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
glusterfs-nginx   Bound    pvc-f1805f35-adc6-46ad-8e7f-38c958a5e4d9   5Gi        RWX            glusterfs      14h
[root@k8s-master01 storage]#
```
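As a final check, you can confirm from inside the Pod that the GlusterFS volume is actually mounted. A sketch using the pod name from the output above (the generated name will differ in your environment):

```bash
# The mountPath should show up as a fuse.glusterfs filesystem.
kubectl exec my-nginx-6c496dbb6f-4sjk6 -- mount | grep glusterfs
kubectl exec my-nginx-6c496dbb6f-4sjk6 -- df -h /var/www/html
# A file written here is replicated across the gluster bricks.
kubectl exec my-nginx-6c496dbb6f-4sjk6 -- sh -c 'echo hello > /var/www/html/test.txt'
```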