1. Cluster status
1.1 Health check
]$ ceph health detail
HEALTH_OK
1.2 Cluster status
]$ ceph -s
  cluster:
    id:     597abcf5-e6ce-4c68-b47b-ec4e33d66694
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 24m)
    mgr: ceph-mon1(active, since 25m), standbys: ceph-mon2
    osd: 9 osds: 9 up (since 24m), 9 in (since 15h)
    rgw: 3 daemons active (ceph-mon1, ceph-mon2, ceph-mon3)

  task status:

  data:
    pools:   6 pools, 224 pgs
    objects: 2.14M objects, 784 GiB
    usage:   1.6 TiB used, 7.4 TiB / 9.0 TiB avail
    pgs:     224 active+clean
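For scripting these checks, `ceph -s` (like most `ceph` subcommands) also accepts `--format json`. A minimal Python sketch of consuming that output; the sample document below is abbreviated and illustrative (field names follow recent Ceph releases, not output captured from this cluster):

```python
import json

# Abbreviated, illustrative sample of `ceph -s --format json` output.
sample = '''
{
  "health": {"status": "HEALTH_OK"},
  "osdmap": {"num_osds": 9, "num_up_osds": 9, "num_in_osds": 9}
}
'''

status = json.loads(sample)

# Alert on anything other than HEALTH_OK, and on down OSDs.
if status["health"]["status"] != "HEALTH_OK":
    print("WARNING: cluster not healthy:", status["health"]["status"])

osd = status["osdmap"]
if osd["num_up_osds"] < osd["num_osds"]:
    print("WARNING: %d OSD(s) down" % (osd["num_osds"] - osd["num_up_osds"]))
else:
    print("all OSDs up")
```

In practice the sample string would be replaced by the captured output of `ceph -s --format json`.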
1.3 Quorum (election) status
]$ ceph quorum_status --format json-pretty
{
    "election_epoch": 1126,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "ceph-mon1",
        "ceph-mon2",
        "ceph-mon3"
    ],
    "quorum_leader_name": "ceph-mon1",
    "quorum_age": 1686,
    "monmap": ....
1.4 Monitor status
]$ ceph mon stat
e1: 3 mons at {ceph-mon1=[v2:10.10.5.27:3300/0,v1:10.10.5.27:6789/0],ceph-mon2=[v2:10.10.5.28:3300/0,v1:10.10.5.28:6789/0],ceph-mon3=[v2:10.10.5.29:3300/0,v1:10.10.5.29:6789/0]}, election epoch 1132, leader 0 ceph-mon1, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
1.5 OSD status
]$ ceph osd stat
9 osds: 9 up (since 36m), 9 in (since 36m); epoch: e1395

]$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
-1       9.00000 root default
-3       3.00000     host ceph-mon1
 0   hdd 1.00000         osd.0          up  1.00000 1.00000
 1   hdd 1.00000         osd.1          up  1.00000 1.00000
 2   hdd 1.00000         osd.2          up  1.00000 1.00000
1.6 List pools
]$ ceph osd pool ls
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
default.rgw.buckets.index
default.rgw.buckets.data
1.7 Pool quotas
]$ ceph osd pool get-quota default.rgw.buckets.data
quotas for pool 'default.rgw.buckets.data':
  max objects: N/A
  max bytes  : N/A
1.8 PG status
]$ ceph pg stat
256 pgs: 256 active+clean; 794 GiB data, 1.6 TiB used, 7.4 TiB / 9.0 TiB avail
1.9 Users and permissions
]$ ceph auth list
osd.0
        key: AQCDxBBhEcvjKBAAXued0MU9tW1VP/zdyopIwA==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
2. RGW status
2.1 List buckets
]$ radosgw-admin bucket list
[
    "new-bucket-ef602ac4",
    "vp_images"
]
2.2 Bucket stats
]$ radosgw-admin bucket stats --bucket=vp_images
{
    "bucket": "vp_images",
    "num_shards": 512,
    "tenant": "",
    "id": "9860c854-9a7c-46c2-b048-469688d2fae8.44746.7",   # bucket ID
2.3 Bucket index shard objects
]$ rados -p default.rgw.buckets.index ls - | grep "9860c854-9a7c-46c2-b048-469688d2fae8.44746.7"
.dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.358
.dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.31
.dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.441
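The trailing number on each `.dir.<bucket_id>.<n>` object is the shard index. RGW chooses the shard for an object key with a simple string hash. A Python sketch ported from Ceph's `ceph_str_hash_linux()` (src/common/ceph_hash.cc) and the shard-selection logic in `rgw_common.h`; the 7877 prime modulus matches recent Ceph sources for shard counts up to 7877, but treat exact shard numbers as version-dependent:

```python
def ceph_str_hash_linux(s: bytes) -> int:
    """Port of Ceph's ceph_str_hash_linux() (32-bit unsigned arithmetic)."""
    h = 0
    for c in s:
        h = (h + (c << 4) + (c >> 4)) * 11
        h &= 0xFFFFFFFF  # emulate 32-bit unsigned overflow
    return h

def rgw_bucket_shard_index(key: str, num_shards: int) -> int:
    """Shard holding a given object key's index entry.
    7877 is RGW's shard prime for num_shards <= 7877 in recent sources;
    larger shard counts use a different prime (assumption: verify against
    your Ceph version)."""
    sid = ceph_str_hash_linux(key.encode())
    return (sid % 7877) % num_shards

# vp_images has num_shards = 512 (see `radosgw-admin bucket stats` above)
print(rgw_bucket_shard_index("1100011731_1.jpg", 512))
```

This explains why listing one shard object in the next section returns only a subset of the bucket's keys.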
2.4 Keys in a bucket index shard
]$ rados -p default.rgw.buckets.index listomapkeys .dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.330 | more
./vpimageServer/2018112713245806_01074322.jpg
1100011731_1.jpg
1100013322_9.jpg
1100018116_5.jpg
1100018640_2.jpg
2.5 Locate the OSDs holding an index shard
]$ ceph osd map default.rgw.buckets.index .dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.26
osdmap e1395 pool 'default.rgw.buckets.index' (6) object '.dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.26' -> pg 6.fbc7f63f (6.1f) -> up ([2,6,3], p2) acting ([2,6,3], p2)
2.6 Locate the OSDs holding object data
]$ ceph osd map default.rgw.buckets.data 1100383814_1.jpg
osdmap e1395 pool 'default.rgw.buckets.data' (10) object '1100383814_1.jpg' -> pg 10.982916a2 (10.22) -> up ([5,3,2,1,7,8], p5) acting ([5,3,2,1,7,8], p5)
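In the `ceph osd map` output, the raw hex after `pg` (e.g. `10.982916a2`) is the hash of the object name, and the PG id in parentheses (`10.22`) is that hash folded onto the pool's `pg_num` with Ceph's `ceph_stable_mod()` (src/include/rados.h). A Python sketch; the `pg_num` values of 32 and 64 are inferred from the masks visible in the two outputs above, and the six-OSD acting set on the data pool suggests it is erasure-coded rather than 3-way replicated:

```python
def ceph_stable_mod(x: int, b: int, bmask: int) -> int:
    """Port of ceph_stable_mod(): fold a raw placement hash onto pg_num (b),
    where bmask is the next power of two minus one. Stable while pg_num grows,
    so most objects keep their PG across small pg_num changes."""
    if (x & bmask) < b:
        return x & bmask
    return x & (bmask >> 1)

# default.rgw.buckets.index: raw hash 0xfbc7f63f -> pg 6.1f (pg_num = 32)
print(hex(ceph_stable_mod(0xfbc7f63f, 32, 31)))   # 0x1f
# default.rgw.buckets.data: raw hash 0x982916a2 -> pg 10.22 (pg_num = 64)
print(hex(ceph_stable_mod(0x982916a2, 64, 63)))   # 0x22
```

The PG id is then mapped to the `up`/`acting` OSD sets by CRUSH, which is what the `-> up ([...], p..)` part of the output reports.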