1. Cluster status

1.1 Health check

  ]$ ceph health detail
  HEALTH_OK
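
For scripting, the same check can be reduced to a one-liner that only prints the detail when the cluster is not healthy (a minimal sketch, assuming it runs on a node with a valid admin keyring):

  ]$ ceph health | grep -q HEALTH_OK || ceph health detail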

1.2 Cluster status overview

  ]$ ceph -s
    cluster:
      id:     597abcf5-e6ce-4c68-b47b-ec4e33d66694
      health: HEALTH_OK

    services:
      mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 24m)
      mgr: ceph-mon1(active, since 25m), standbys: ceph-mon2
      osd: 9 osds: 9 up (since 24m), 9 in (since 15h)
      rgw: 3 daemons active (ceph-mon1, ceph-mon2, ceph-mon3)

    task status:

    data:
      pools:   6 pools, 224 pgs
      objects: 2.14M objects, 784 GiB
      usage:   1.6 TiB used, 7.4 TiB / 9.0 TiB avail
      pgs:     224 active+clean
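
The aggregate usage shown under `data:` can be broken down per pool with `ceph df` (the exact columns vary by Ceph release):

  ]$ ceph df      # RAW STORAGE section plus per-pool STORED / OBJECTS / USED / MAX AVAIL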

1.3 Election (quorum) status

  ]$ ceph quorum_status --format json-pretty
  {
      "election_epoch": 1126,
      "quorum": [
          0,
          1,
          2
      ],
      "quorum_names": [
          "ceph-mon1",
          "ceph-mon2",
          "ceph-mon3"
      ],
      "quorum_leader_name": "ceph-mon1",
      "quorum_age": 1686,
      "monmap":
          ....
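
If only one field is needed, for example in a monitoring probe, the JSON output can be filtered directly; the sketch below assumes `jq` is installed on the node:

  ]$ ceph quorum_status --format json | jq -r '.quorum_leader_name'
  ]$ ceph quorum_status --format json | jq -r '.quorum_names | length'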

1.4 MON status

  ]$ ceph mon stat
  e1: 3 mons at {ceph-mon1=[v2:10.10.5.27:3300/0,v1:10.10.5.27:6789/0],ceph-mon2=[v2:10.10.5.28:3300/0,v1:10.10.5.28:6789/0],ceph-mon3=[v2:10.10.5.29:3300/0,v1:10.10.5.29:6789/0]}, election epoch 1132, leader 0 ceph-mon1, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
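
For the full monmap, including its epoch, creation time and the v1/v2 address of each monitor, `ceph mon dump` can be used as well:

  ]$ ceph mon dump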

1.5 OSD status

  ]$ ceph osd stat
  9 osds: 9 up (since 36m), 9 in (since 36m); epoch: e1395
  ]$ ceph osd tree
  ID  CLASS  WEIGHT   TYPE NAME           STATUS  REWEIGHT  PRI-AFF
  -1         9.00000  root default
  -3         3.00000      host ceph-mon1
   0    hdd  1.00000          osd.0           up   1.00000  1.00000
   1    hdd  1.00000          osd.1           up   1.00000  1.00000
   2    hdd  1.00000          osd.2           up   1.00000  1.00000
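
Two related commands are often useful here: `ceph osd df tree` adds per-OSD utilization to the same tree layout, and `ceph osd find <id>` reports where an OSD daemon is running (the id 0 below is simply the first OSD from the tree above):

  ]$ ceph osd df tree
  ]$ ceph osd find 0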

1.6 List storage pools

  ]$ ceph osd pool ls
  .rgw.root
  default.rgw.control
  default.rgw.meta
  default.rgw.log
  default.rgw.buckets.index
  default.rgw.buckets.data
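
`ceph osd pool ls detail` lists the same pools together with their replica count, pg_num and application tags, which is usually more informative than the bare names:

  ]$ ceph osd pool ls detail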

1.7 View pool quotas

  ]$ ceph osd pool get-quota default.rgw.buckets.data
  quotas for pool 'default.rgw.buckets.data':
    max objects: N/A
    max bytes  : N/A
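
Quotas can be set with the matching `set-quota` subcommand; the limits below are placeholder values used only for illustration:

  ]$ ceph osd pool set-quota default.rgw.buckets.data max_objects 10000000      # example value
  ]$ ceph osd pool set-quota default.rgw.buckets.data max_bytes 1099511627776   # example value, 1 TiB in bytes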

1.8 PG status

  ]$ ceph pg stat
  256 pgs: 256 active+clean; 794 GiB data, 1.6 TiB used, 7.4 TiB / 9.0 TiB avail
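
When the summary is not all `active+clean`, the problem PGs can be listed directly; `ceph pg dump_stuck` and a per-pool listing are common starting points:

  ]$ ceph pg dump_stuck unclean
  ]$ ceph pg ls-by-pool default.rgw.buckets.data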

1.9 List users and capabilities

  ]$ ceph auth list
  osd.0
          key: AQCDxBBhEcvjKBAAXued0MU9tW1VP/zdyopIwA==
          caps: [mgr] allow profile osd
          caps: [mon] allow profile osd
          caps: [osd] allow *
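
A single entity can be fetched instead of the whole list, which is handy when only one key or capability set is needed:

  ]$ ceph auth get osd.0
  ]$ ceph auth get-key client.admin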

2. RGW status

2.1 List buckets

  ]$ radosgw-admin bucket list
  [
      "new-bucket-ef602ac4",
      "vp_images"
  ]
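
The RGW users that own these buckets can be listed in the same way; the uid passed to `user info` is just a placeholder here:

  ]$ radosgw-admin user list
  ]$ radosgw-admin user info --uid=<uid>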

2.2 View bucket attributes

  ]$ radosgw-admin bucket stats --bucket=vp_images
  {
      "bucket": "vp_images",
      "num_shards": 512,
      "tenant": "",
      "id": "9860c854-9a7c-46c2-b048-469688d2fae8.44746.7",    # bucket ID

2.3 View bucket index shards

  ]$ rados -p default.rgw.buckets.index ls - | grep "9860c854-9a7c-46c2-b048-469688d2fae8.44746.7"
  .dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.358
  .dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.31
  .dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.441
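
Counting the matching index objects is a quick sanity check: for this bucket it should normally report 512, matching the `num_shards` value shown in section 2.2:

  ]$ rados -p default.rgw.buckets.index ls | grep -c "9860c854-9a7c-46c2-b048-469688d2fae8.44746.7"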

2.4 List the keys in a bucket index shard

  ]$ rados -p default.rgw.buckets.index listomapkeys .dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.330 | more
  ./vpimageServer/2018112713245806_01074322.jpg
  1100011731_1.jpg
  1100013322_9.jpg
  1100018116_5.jpg
  1100018640_2.jpg
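
Counting the keys per shard is a simple way to see how evenly objects are spread across the index shards; the shard object names are the ones found in section 2.3:

  ]$ rados -p default.rgw.buckets.index listomapkeys .dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.330 | wc -l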

2.5 Find which OSDs hold a given index object

  ]$ ceph osd map default.rgw.buckets.index .dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.26
  osdmap e1395 pool 'default.rgw.buckets.index' (6) object '.dir.9860c854-9a7c-46c2-b048-469688d2fae8.44746.7.26' -> pg 6.fbc7f63f (6.1f) -> up ([2,6,3], p2) acting ([2,6,3], p2)
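
The output reads as: in osdmap epoch e1395 the object hashes to PG 6.1f, whose up and acting sets are OSDs 2, 6 and 3, with osd.2 ("p2") as the primary. To see which host the primary OSD runs on:

  ]$ ceph osd find 2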

2.6 Find which OSDs hold an object's data

  ]$ ceph osd map default.rgw.buckets.data 1100383814_1.jpg
  osdmap e1395 pool 'default.rgw.buckets.data' (10) object '1100383814_1.jpg' -> pg 10.982916a2 (10.22) -> up ([5,3,2,1,7,8], p5) acting ([5,3,2,1,7,8], p5)
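
Note that `ceph osd map` computes the placement from the object name via CRUSH even if no such object exists in the pool; to confirm the object is actually present (and see its size and mtime), `rados stat` can be run against the same name:

  ]$ rados -p default.rgw.buckets.data stat 1100383814_1.jpg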