---
date: 2021-12-07
title: Seamless Integration of Ceph with OpenStack   # title
tags: ceph                                           # tags
categories: openstack                                # categories
---
Integrating Ceph with OpenStack means integrating it with each OpenStack component individually. The OpenStack components that can use Ceph as a storage backend are glance (the image service), nova (the compute service), cinder (the block storage service), and cinder-backup (the backup service).
The diagram below shows how Ceph and OpenStack work together after integration:

Prepare the test environment yourself: you need a Ceph cluster and an OpenStack environment. It is not recommended to install Ceph and OpenStack on the same machines, because OpenStack modifies the system's iptables rules, which may break the Ceph cluster (if you are comfortable with iptables, you can open the ports used by the Ceph cluster yourself, as sketched below).
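As a minimal sketch (assuming the default Nautilus ports: 3300 and 6789 for the monitors, 6800-7300 for OSDs and MGRs; adjust the ports to your deployment), opening those ports could look like this. Note that these rules are not persisted across reboots:

```
# Allow Ceph monitor traffic (msgr2 on 3300, legacy msgr1 on 6789)
$ iptables -I INPUT -p tcp -m multiport --dports 3300,6789 -j ACCEPT
# Allow the OSD/MGR port range
$ iptables -I INPUT -p tcp --dport 6800:7300 -j ACCEPT
```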
Environment preparation
For deploying the Ceph and OpenStack clusters themselves, see my earlier articles:
Machine list
| OS | IP | Hostname | Role |
|---|---|---|---|
| CentOS 7.5 | 192.168.20.2 | controller | OpenStack controller node |
| CentOS 7.5 | 192.168.20.3 | compute01 | OpenStack compute node |
| CentOS 7.5 | 192.168.20.4 | compute02 | OpenStack compute node |
| CentOS 7.5 | 192.168.20.5 | centos-20-5 | Ceph cluster node 1 |
| CentOS 7.5 | 192.168.20.6 | centos-20-6 | Ceph cluster node 2 |
| CentOS 7.5 | 192.168.20.10 | centos-20-10 | Ceph cluster node 3 |
Install the Ceph client on the OpenStack cluster
```
# Run on every node in the OpenStack cluster
$ cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph-norch]
name=ceph-norch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0
[ceph-x86_64]
name=ceph-x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
EOF
$ yum -y install ceph-common python-rbd
```
Configure the Ceph cluster
Unless otherwise noted, the Ceph cluster configuration steps below are performed on the Ceph admin node.
Create the pools
```
# Create the pools
$ ceph osd pool create volumes 16 16
ceph osd pool create images 16 16
ceph osd pool create backups 16 16
ceph osd pool create vms 16 16
# A newly created pool must be initialized before use
$ rbd pool init volumes
rbd pool init images
rbd pool init backups
rbd pool init vms
```
Copy the Ceph configuration file to the OpenStack nodes
Every node in your OpenStack cluster that runs glance-api, cinder-volume, nova-compute, or cinder-backup needs a copy of the Ceph cluster's /etc/ceph/ceph.conf configuration file.
```
# Distribute the Ceph cluster configuration file to the OpenStack cluster
$ for i in 2 3 4;do rsync -az /etc/ceph/ceph.conf 192.168.20.${i}:/etc/ceph/ceph.conf;done
```
Create the corresponding users in the Ceph cluster
```
$ ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' mgr 'profile rbd pool=volumes, profile rbd pool=vms'
ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups' mgr 'profile rbd pool=backups'
```
Copy the user keyrings to the OpenStack nodes
```
# Define the OpenStack component IPs (the variable names tell you which component each IP belongs to)
$ glance_api_server=192.168.20.2
volume_server=192.168.20.2
cinder_backup_server=192.168.20.2
# Define the compute node list
OS_computes=(
192.168.20.3
192.168.20.4
)
# Distribute the client keyrings and set the appropriate ownership
ceph auth get-or-create client.glance | ssh ${glance_api_server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh ${glance_api_server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh ${volume_server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh ${volume_server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh ${cinder_backup_server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh ${cinder_backup_server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
for i in ${OS_computes[@]};do ceph auth get-or-create client.cinder | ssh ${i} sudo tee /etc/ceph/ceph.client.cinder.keyring;done
# Create a temporary copy of the key on every compute node
$ for i in ${OS_computes[@]};do ceph auth get-key client.cinder | ssh ${i} tee /tmp/client.cinder.key;done
```
Add the key on the compute nodes and delete the temporary copy
Note: unless otherwise stated, these steps are performed on all OpenStack compute nodes.
```
# Generate a UUID on any node
$ uuidgen
dd21e41a-0b19-4ae7-a8b6-8edb3ae7971a
# Generate the secret.xml file on every compute node
$ cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>dd21e41a-0b19-4ae7-a8b6-8edb3ae7971a</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
# Define the virsh secret
$ virsh secret-define --file secret.xml
# Set the secret's value; replace the UUID below with yours
virsh secret-set-value --secret dd21e41a-0b19-4ae7-a8b6-8edb3ae7971a --base64 $(cat /tmp/client.cinder.key) && rm -f /tmp/client.cinder.key secret.xml
# Confirm the secret was set
$ virsh secret-list
UUID                                  Usage
--------------------------------------------------------------------------------
dd21e41a-0b19-4ae7-a8b6-8edb3ae7971a  ceph client.cinder secret
# The value returned below is the key of the client.cinder user in the Ceph cluster
$ virsh secret-get-value dd21e41a-0b19-4ae7-a8b6-8edb3ae7971a
AQD4E4ZgD3UzOxAASinKtfoNo2+yCxbcQ/Yqhg==
```
At this point the preparation is done; now we connect each component to Ceph.
Integrating glance with Ceph
Run the following on the OpenStack glance-api node.
Modify the glance configuration file
```
# Edit the glance-api configuration file
$ vim /etc/glance/glance-api.conf
[DEFAULT]
show_image_direct_url = True                           # Enable copy-on-write cloning
[glance_store]
stores = file,http,rbd                                 # Add rbd to the supported stores
default_store = rbd                                    # Change the default store to rbd
#filesystem_store_datadir = /var/lib/glance/images/    # Comment out the local storage path
# The next four options are: which Ceph pool to use, which user connects to Ceph,
# the location of the local Ceph configuration file, and the chunk size
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
```
Restart and verify the glance service
```
$ systemctl restart openstack-glance-api
# After the restart, check the glance-api log yourself; no errors means it is healthy
$ tailf -200 /var/log/glance/api.log
```
Test the glance integration with Ceph
Note: Ceph upstream recommends storing images in raw format rather than QCOW2; details below:

```
# Download a source image (very small, made for testing)
$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
# Convert it to raw format
$ qemu-img convert -f qcow2 -O raw cirros-0.4.0-x86_64-disk.img cirros-0.4.0-x86_64-disk.img.raw
# Upload it to glance
$ openstack image create "cirros_ceph_test" \
--file cirros-0.4.0-x86_64-disk.img.raw --disk-format raw \
--container-format bare --public
# Confirm the image was uploaded
$ openstack image list
+--------------------------------------+------------------+--------+
| ID                                   | Name             | Status |
+--------------------------------------+------------------+--------+
| 2d7fbc08-4d62-4f2e-9279-fb3880b90798 | cirros_ceph_test | active |
+--------------------------------------+------------------+--------+
# On the Ceph cluster, check the images pool
$ rbd -p images ls    # There is a block image with the same ID as the OpenStack image
2d7fbc08-4d62-4f2e-9279-fb3880b90798
# Show the details of this block image
$ rbd -p images info 2d7fbc08-4d62-4f2e-9279-fb3880b90798
rbd image '2d7fbc08-4d62-4f2e-9279-fb3880b90798':
    size 12 MiB in 2 objects
    order 23 (8 MiB objects)
    snapshot_count: 1
    id: 1982a7365a1d5
    block_name_prefix: rbd_data.1982a7365a1d5
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Mon Apr 26 11:15:13 2021
    access_timestamp: Mon Apr 26 11:15:13 2021
    modify_timestamp: Mon Apr 26 11:15:13 2021
# List the snapshots of this block image (one snapshot exists by default)
$ rbd snap list images/2d7fbc08-4d62-4f2e-9279-fb3880b90798
SNAPID NAME SIZE   PROTECTED TIMESTAMP
     6 snap 12 MiB yes       Mon Apr 26 11:15:13 2021
# Show the details of the snapshot
$ rbd info images/2d7fbc08-4d62-4f2e-9279-fb3880b90798@snap
rbd image '2d7fbc08-4d62-4f2e-9279-fb3880b90798':
    size 12 MiB in 2 objects
    order 23 (8 MiB objects)
    snapshot_count: 1
    id: 1982a7365a1d5
    block_name_prefix: rbd_data.1982a7365a1d5
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Mon Apr 26 11:15:13 2021
    access_timestamp: Mon Apr 26 11:15:13 2021
    modify_timestamp: Mon Apr 26 11:15:13 2021
    protected: True    # The snapshot is protected
```
Integrating cinder with Ceph
Perform these steps on the node where the cinder service runs.
Modify the cinder configuration file
```
$ vim /etc/cinder/cinder.conf
# Add the following
[DEFAULT]
...
enabled_backends = ceph
glance_api_version = 2
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = dd21e41a-0b19-4ae7-a8b6-8edb3ae7971a    # Replace this UUID with the one shown by `virsh secret-list` on your compute nodes
```
Restart the cinder service
```
# Restart
$ systemctl restart openstack-cinder-volume
# Check the volume service list
$ openstack volume service list
+------------------+-----------------+------+---------+-------+----------------------------+
| Binary           | Host            | Zone | Status  | State | Updated At                 |
+------------------+-----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller      | nova | enabled | up    | 2021-04-26T06:20:00.000000 |
| cinder-volume    | compute01@ssd   | nova | enabled | up    | 2021-04-26T06:20:09.000000 |
| cinder-volume    | compute01@sata  | nova | enabled | up    | 2021-04-26T06:20:01.000000 |
| cinder-volume    | controller@ceph | nova | enabled | up    | 2021-04-26T06:20:05.000000 |
+------------------+-----------------+------+---------+-------+----------------------------+
```
Create a volume type
To store volumes on Ceph, you also need to create a volume type for it.
```
# Create the volume type
$ cinder type-create ceph
# Set the volume type's backend name
# The key=value pair after `set` is the volume_backend_name we specified on the cinder storage node
$ cinder type-key ceph set volume_backend_name=ceph
```
Create a volume for testing
Log in to the OpenStack dashboard and create a volume as prompted:

Make sure its type is ceph and its status is Available (you can extend the volume, snapshot it, and so on via the dropdown arrow at the right of the volume row; verify this yourself):

Check the volume ID:

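If you prefer the command line over the dashboard, a rough equivalent of the steps above is sketched here; the name ceph_test and the 11 GiB size simply mirror the volume that appears later in this article:

```
# Create an 11 GiB volume of type ceph, then list volumes to confirm it is available
$ openstack volume create --type ceph --size 11 ceph_test
$ openstack volume list
```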
```
# On the Ceph cluster, check the volumes pool; a new block image with the same ID as the volume has appeared
$ rbd -p volumes ls
volume-216091d7-fcb9-4fbe-a54b-830ec8a7c3b7
```
Integrating cinder-backup with Ceph
Configure this on the node where the cinder service is installed.
Modify the cinder configuration file
```
$ vim /etc/cinder/cinder.conf
[DEFAULT]
...    # some content omitted
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf = /etc/ceph/ceph.conf    # Path to the Ceph configuration file
backup_ceph_user = cinder-backup          # User used to connect to Ceph
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups                # Which Ceph pool to use
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
```
Restart the cinder-backup service
```
$ systemctl enable openstack-cinder-backup.service
systemctl restart openstack-cinder-backup.service
# Check the log yourself and make sure there are no error messages
$ tailf /var/log/cinder/backup.log
```
The log output from restarting the service looks like this:

Confirm that the cinder-backup service is in the UP state:

Verify the cinder-backup service
1. Enable the cinder backup feature in horizon (the OpenStack dashboard service). This is optional; the command line works just as well:
```
$ vim /etc/openstack-dashboard/local_settings    # Edit the configuration file
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}
# Restart httpd
$ systemctl restart httpd
```
2. Create the first volume backup from the dashboard. (The volume must not be in use when the backup is created, otherwise the backup fails; I have not looked into how to get around this, but see the hedged note below.)


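As an unverified aside, the cinder CLI has a --force flag on backup-create that is intended to allow backing up an in-use (attached) volume; something along these lines might work:

```
# Hypothetical: force a backup of a volume that is currently attached to an instance
$ cinder backup-create --force --name ceph_backup_inuse 216091d7-fcb9-4fbe-a54b-830ec8a7c3b7
```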
3. Create a second volume backup from the command line
```
# Look up the volume ID
$ cinder list
+--------------------------------------+--------+-----------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name      | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+-----------+------+-------------+----------+--------------------------------------+
| 216091d7-fcb9-4fbe-a54b-830ec8a7c3b7 | in-use | ceph_test | 11   | ceph        | false    | 8b17de01-c161-4df1-8c03-d1deb2c025c0 |
+--------------------------------------+--------+-----------+------+-------------+----------+--------------------------------------+
# Create the backup
$ cinder backup-create 216091d7-fcb9-4fbe-a54b-830ec8a7c3b7 --name ceph_backup_2
+-----------+--------------------------------------+
| Property  | Value                                |
+-----------+--------------------------------------+
| id        | b6d61f9e-9c58-43c4-9779-a5083637ccef |
| name      | ceph_backup_2                        |
| volume_id | 216091d7-fcb9-4fbe-a54b-830ec8a7c3b7 |
+-----------+--------------------------------------+
# List the backups
$ cinder backup-list
+--------------------------------------+--------------------------------------+-----------+---------------+------+--------------+-----------+
| ID                                   | Volume ID                            | Status    | Name          | Size | Object Count | Container |
+--------------------------------------+--------------------------------------+-----------+---------------+------+--------------+-----------+
| b6d61f9e-9c58-43c4-9779-a5083637ccef | 216091d7-fcb9-4fbe-a54b-830ec8a7c3b7 | available | ceph_backup_2 | 11   | 0            | backups   |
| b80d39fe-d84c-4e94-9c4f-105b97d33c16 | 216091d7-fcb9-4fbe-a54b-830ec8a7c3b7 | available | ceph_backup_1 | 11   | 0            | backups   |
+--------------------------------------+--------------------------------------+-----------+---------------+------+--------------+-----------+
# On the Ceph cluster, check the backups pool
$ rbd -p backups ls    # Both backups are of the same volume, so only one block image is visible
volume-216091d7-fcb9-4fbe-a54b-830ec8a7c3b7.backup.base
# Show the details of the block image
$ rbd -p backups info volume-216091d7-fcb9-4fbe-a54b-830ec8a7c3b7.backup.base
rbd image 'volume-216091d7-fcb9-4fbe-a54b-830ec8a7c3b7.backup.base':
    size 11 GiB in 2816 objects
    order 22 (4 MiB objects)
    snapshot_count: 2    # Note the two snapshots
    id: 197593a368629
    block_name_prefix: rbd_data.197593a368629
    format: 2
    features: layering
    op_features:
    flags:
    create_timestamp: Mon Apr 26 18:03:34 2021
    access_timestamp: Mon Apr 26 18:03:34 2021
    modify_timestamp: Mon Apr 26 18:03:34 2021
# List the snapshots of the block image
$ rbd snap list backups/volume-216091d7-fcb9-4fbe-a54b-830ec8a7c3b7.backup.base
SNAPID NAME                                                            SIZE   PROTECTED TIMESTAMP
     4 backup.b80d39fe-d84c-4e94-9c4f-105b97d33c16.snap.1619431414.32  11 GiB           Mon Apr 26 18:03:38 2021
     9 backup.b6d61f9e-9c58-43c4-9779-a5083637ccef.snap.1619432100.13  11 GiB           Mon Apr 26 18:15:02 2021
```
The backup list in the OpenStack dashboard finally looks like this:

Note: when the same volume is backed up again, Ceph only takes another snapshot of the base block image, i.e. an incremental backup. To delete backups you must delete the newest snapshot first and then the older ones; because each newer snapshot is an increment on top of the older one, the older snapshot cannot be deleted first. A sketch of the deletion order follows.
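For example, with the two backups created above, deleting them newest-first from the OpenStack side would look roughly like this (the IDs come from the cinder backup-list output above; this is a sketch, not output captured from the test environment):

```
# Delete the newer backup first, then the older one
$ cinder backup-delete b6d61f9e-9c58-43c4-9779-a5083637ccef
$ cinder backup-delete b80d39fe-d84c-4e94-9c4f-105b97d33c16
```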
Integrating nova with Ceph
Modify the nova configuration file
This must be done on every compute node in the OpenStack cluster.
```
$ vim /etc/nova/nova.conf
[libvirt]
...    # Add or change the following
# From top to bottom: the image type, the Ceph pool to use, the Ceph configuration file,
# and the user name and secret used to connect to Ceph
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = dd21e41a-0b19-4ae7-a8b6-8edb3ae7971a
```
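If I remember the upstream Ceph/OpenStack documentation correctly, it also suggests a libvirt disk cache mode for RBD-backed images; treat the extra line below as an optional, unverified addition to the same [libvirt] section:

```
disk_cachemodes = "network=writeback"    # optional; suggested by the upstream Ceph docs for RBD-backed disks
```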
Restart the nova service
```
$ systemctl restart openstack-nova-compute
```
Verify the Ceph and OpenStack integration
With all of the work above, the integration is now complete. Let's create a virtual machine to test whether each component is correctly connected to Ceph.


Select a flavor:

Select a network and create the instance:

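The same launch can be done from the command line; a rough sketch, using the image, flavor, and network names that appear in the output below (adjust the network name to your environment):

```
# Boot an instance from the Ceph-backed glance image (CLI equivalent of the dashboard steps above)
$ openstack server create --image cirros_ceph_test --flavor m1.tiny --network vpc-1 ceph_test
```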
Check the OpenStack cluster and Ceph information
```
# Run on the OpenStack controller node
$ openstack server list
+--------------------------------------+-----------+--------+---------------------+------------------+---------+
| ID                                   | Name      | Status | Networks            | Image            | Flavor  |
+--------------------------------------+-----------+--------+---------------------+------------------+---------+
| 1bbad26f-a9ca-4b65-9db7-1ae5b2c285d1 | ceph_test | ACTIVE | vpc-1=10.252.201.26 | cirros_ceph_test | m1.tiny |
+--------------------------------------+-----------+--------+---------------------+------------------+---------+
# On the Ceph cluster
$ rbd -p vms ls    # There is a block image with the same ID as the instance
1bbad26f-a9ca-4b65-9db7-1ae5b2c285d1_disk
```
You can attach the volume created earlier to the instance and use it as a block device inside the VM, as shown below (a CLI sketch follows the screenshots):



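From the command line, the attachment would look roughly like this, using the instance and volume IDs shown earlier (a sketch, not output captured from the test environment):

```
# Attach the ceph_test volume to the ceph_test instance, then confirm the attachment
$ openstack server add volume 1bbad26f-a9ca-4b65-9db7-1ae5b2c285d1 216091d7-fcb9-4fbe-a54b-830ec8a7c3b7
$ openstack volume list    # The volume should now show as attached to the instance
```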