1. Ceph Resource Objects
The Ceph components include:
mon (monitor): manages the cluster and maintains quorum
mgr (manager): monitoring and management
mds (metadata server): manages CephFS metadata
rgw (RADOS gateway): object storage
osd (object storage daemon): data storage
The monitors, mgr, OSDs, and CSI provisioners are deployed as Deployments:
[root@master1 ceph]# kubectl get deployment -n rook-ceph
NAME READY UP-TO-DATE AVAILABLE AGE
csi-cephfsplugin-provisioner 2/2 2 2 15h
csi-rbdplugin-provisioner 2/2 2 2 15h
rook-ceph-crashcollector-master1 1/1 1 1 31m
rook-ceph-crashcollector-master2 1/1 1 1 15h
rook-ceph-crashcollector-node1 1/1 1 1 15h
rook-ceph-crashcollector-node2 1/1 1 1 15h
rook-ceph-mgr-a 1/1 1 1 15h
rook-ceph-mon-a 1/1 1 1 15h
rook-ceph-mon-b 1/1 1 1 15h
rook-ceph-mon-c 1/1 1 1 15h
rook-ceph-operator 1/1 1 1 16h
rook-ceph-osd-0 1/1 1 1 15h
rook-ceph-osd-1 1/1 1 1 15h
rook-ceph-osd-2 1/1 1 1 15h
rook-ceph-osd-3 1/1 1 1 32m
The CSI CephFS and RBD driver plugins are deployed as DaemonSets:
[root@master1 ceph]# kubectl get daemonset -n rook-ceph
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
csi-cephfsplugin 3 3 3 3 3
csi-rbdplugin 3 3 3 3 3
All external access is exposed through Service objects:
[root@master1 manifests]# kubectl get svc -n rook-ceph
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
csi-cephfsplugin-metrics ClusterIP 10.104.93.216
csi-rbdplugin-metrics ClusterIP 10.101.87.204
rook-ceph-mgr ClusterIP 10.111.12.252
rook-ceph-mgr-dashboard ClusterIP 10.99.8.95
rook-ceph-mon-a ClusterIP 10.108.154.141
rook-ceph-mon-b ClusterIP 10.102.28.81
rook-ceph-mon-c ClusterIP 10.109.23.216
OSD initialization is handled by Job controllers:
[root@master1 manifests]# kubectl get job -n rook-ceph
NAME COMPLETIONS DURATION AGE
rook-ceph-osd-prepare-master1 1/1 3s 33m
rook-ceph-osd-prepare-master2 1/1 3s 33m
rook-ceph-osd-prepare-node1 1/1 4s 33m
rook-ceph-osd-prepare-node2 1/1 5s 33m
2. Rook Toolbox (the Rook client)
How can we connect to the Ceph cluster and operate it as if it were a local cluster? The community provides the toolbox client.
The Rook toolbox is a container with common tools used for rook debugging and testing. The toolbox is based on CentOS, so more tools of your choosing can be easily installed with yum.
The toolbox can be run in two modes:
- Interactive: Start a toolbox pod where you can connect and execute Ceph commands from a shell
- One-time job: Run a script with Ceph commands and collect the results from the job log
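The one-time job mode can be sketched as a Kubernetes Job that runs a single Ceph command and exits. The manifest below is a simplified sketch modeled on the upstream toolbox-job.yaml; the image tag is an assumption and should be matched to your Rook version:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: rook-ceph-toolbox-job
  namespace: rook-ceph
spec:
  template:
    spec:
      containers:
        - name: script
          # image tag is an assumption; match it to your Rook operator version
          image: rook/ceph:v1.7.2
          command: ["/bin/bash", "-c"]
          # any one-off Ceph command; collect the results from the Job log
          # with: kubectl -n rook-ceph logs job/rook-ceph-toolbox-job
          args: ["ceph status"]
      restartPolicy: Never
# NOTE: the upstream toolbox-job.yaml additionally wires up the mon endpoints
# and admin keyring (from the rook-ceph-mon secret) before running the command.
```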
Install the Rook toolbox client:
[root@master1 ceph]# kubectl apply -f toolbox.yaml
[root@master1 ceph]# kubectl exec -it rook-ceph-tools-54fc95f4f4-vx4qw -n rook-ceph -- /bin/bash
[root@rook-ceph-tools-54fc95f4f4-vx4qw /]# ceph -s
cluster:
id: 826845d1-c1be-43e7-af18-86f250bb73ef
health: HEALTH_OK
services:
mon: 3 daemons, quorum a,b,c (age 52m)
mgr: a(active, since 51m)
osd: 4 osds: 4 up (since 49m), 4 in (since 50m)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 23 MiB used, 2.0 TiB / 2.0 TiB avail
pgs: 1 active+clean
3. Accessing Ceph from Kubernetes
How does Kubernetes access the Ceph cluster? It needs the configuration file and the authentication (keyring) file.
1. Install ceph-common
[root@rook-ceph-tools-54fc95f4f4-vx4qw yum.repos.d]# yum -y install ceph-common
Failed to set locale, defaulting to C
CentOS Linux 8 - AppStream 7.0 MB/s | 8.8 MB 00:01
CentOS Linux 8 - BaseOS 5.5 MB/s | 6.5 MB 00:01
CentOS Linux 8 - Extras 15 kB/s | 10 kB 00:00
Copr repo for python-scikit-learn owned by tchaikov 3.7 kB/s | 20 kB 00:05
ceph-iscsi noarch packages 2.2 kB/s | 2.9 kB 00:01
Ceph packages for x86_64 47 kB/s | 74 kB 00:01
Ceph noarch packages 15 kB/s | 15 kB 00:01
Ceph source packages 1.7 kB/s | 1.7 kB 00:01
Extra Packages for Enterprise Linux Modular 8 - x86_64 525 kB/s | 939 kB 00:01
Extra Packages for Enterprise Linux 8 - x86_64 6.1 MB/s | 10 MB 00:01
ganesha 6.2 kB/s | 13 kB 00:02
ganesha-noarch 334 B/s | 1.0 kB 00:03
tcmu-runner packages for x86_64 1.2 kB/s | 2.9 kB 00:02
tcmu-runner noarch packages 100 B/s | 257 B 00:02
tcmu-runner source packages 122 B/s | 257 B 00:02
Package ceph-common-2:16.2.5-0.el8.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
2. Copy the ceph.conf and keyring files (they are found inside the toolbox container)
[root@master1 ~]# kubectl exec rook-ceph-tools-54fc95f4f4-vx4qw -n rook-ceph -- cat /etc/ceph/ceph.conf | tee /etc/ceph/ceph.conf
[global]
mon_host = 10.109.23.216:6789,10.108.154.141:6789,10.102.28.81:6789
[client.admin]
keyring = /etc/ceph/keyring
[root@master1 ~]# ls /etc/ceph/ceph.conf
/etc/ceph/ceph.conf
[root@master1 ~]# cat /etc/ceph/ceph.conf
[global]
mon_host = 10.109.23.216:6789,10.108.154.141:6789,10.102.28.81:6789
[client.admin]
keyring = /etc/ceph/keyring
# keyring authentication file
[root@master1 ~]# kubectl exec rook-ceph-tools-54fc95f4f4-vx4qw -n rook-ceph -- cat /etc/ceph/keyring | tee /etc/ceph/keyring
[client.admin]
key = AQCQhUBhg7f8NBAAlz2opGzrWcS1JMd2my+arw==
[root@master1 ~]# cat /etc/ceph/keyring
[client.admin]
key = AQCQhUBhg7f8NBAAlz2opGzrWcS1JMd2my+arw==
Note: perform the steps above on every Kubernetes node.
3. Run the ceph command locally to check the cluster status
[root@rook-ceph-tools-54fc95f4f4-vx4qw ~]# ceph -s
cluster:
id: 826845d1-c1be-43e7-af18-86f250bb73ef
health: HEALTH_OK
services:
mon: 3 daemons, quorum a,b,c (age 2h)
mgr: a(active, since 2h)
osd: 4 osds: 4 up (since 2h), 4 in (since 2h)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 23 MiB used, 2.0 TiB / 2.0 TiB avail
pgs: 1 active+clean
[root@rook-ceph-tools-54fc95f4f4-vx4qw ~]#
4. Accessing RBD Block Storage
4.1 Create a pool
[root@rook-ceph-tools-54fc95f4f4-qczp8 /]# ceph osd pool create rook 16 16
pool 'rook' created
[root@rook-ceph-tools-54fc95f4f4-qczp8 /]# ceph osd lspools
1 device_health_metrics
2 rook
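The two 16s in `ceph osd pool create rook 16 16` are pg_num and pgp_num. A common rule of thumb is (OSDs x 100) / replica count, taken to a nearby power of two. A sketch of that arithmetic, assuming the default replica count of 3 (this sketch simply rounds down to the lower power of two):

```shell
# Rule-of-thumb placement-group count: (osds * 100) / replicas,
# rounded down to the nearest power of two.
osds=4        # from `ceph -s`: 4 osds
replicas=3    # assumed default pool size
target=$(( osds * 100 / replicas ))   # 133
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"    # 128
```

For this 4-OSD cluster that suggests 128 PGs in total across all pools; the 16 used here is a deliberately small value for a single test pool.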
4.2 Create an RBD image in the pool
[root@rook-ceph-tools-54fc95f4f4-qczp8 /]# rbd create -p rook --image rook-rbd.img --size 10G
[root@rook-ceph-tools-54fc95f4f4-qczp8 /]# rbd ls -p rook
rook-rbd.img
[root@rook-ceph-tools-54fc95f4f4-qczp8 /]# rbd info rook/rook-rbd.img
rbd image 'rook-rbd.img':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: d17c6e602240
block_name_prefix: rbd_data.d17c6e602240
format: 2
features: layering
op_features:
flags:
create_timestamp: Wed Sep 15 06:51:35 2021
access_timestamp: Wed Sep 15 06:51:35 2021
modify_timestamp: Wed Sep 15 06:51:35 2021
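The `rbd info` output is internally consistent: with order 22 the object size is 2^22 bytes (4 MiB), so a 10 GiB image spans 10 GiB / 4 MiB = 2560 objects. A quick check of that arithmetic:

```shell
# Verify the object count reported by `rbd info`:
# a 10 GiB image striped into order-22 (2^22 = 4 MiB) objects
size_bytes=$(( 10 * 1024 * 1024 * 1024 ))
obj_bytes=$(( 1 << 22 ))
objects=$(( size_bytes / obj_bytes ))
echo "$objects"   # 2560, matching the rbd info output above
```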
4.3 Map and mount the RBD image on the client
# Map the image to a local block device (requires the rbd kernel module)
rbd map rook/rook-rbd.img
# Confirm the mapping (typically /dev/rbd0)
rbd showmapped
# Create a filesystem on the mapped device and mount it
# (/mnt/rook-rbd is an example mount point)
mkfs.xfs /dev/rbd0
mkdir -p /mnt/rook-rbd
mount /dev/rbd0 /mnt/rook-rbd
