1. OSD Health

The rook-ceph-tools pod provides a simple environment to run Ceph tools. The ceph commands mentioned in this document should be run from the toolbox.
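For example, assuming the toolbox was deployed with the standard rook-ceph-tools manifest into the default rook-ceph namespace (adjust -n if your cluster uses a different one), you can open a shell in the toolbox with kubectl:

# Open an interactive shell in the toolbox pod
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash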
Once connected to the pod, you can execute ceph commands to analyze the health of the cluster, in particular the OSDs and placement groups (PGs). Some common commands to analyze OSDs include:
ceph status
[root@rook-ceph-tools-54fc95f4f4-mg67d /]# ceph status
  cluster:
    id:     2d792034-41f1-4ce2-bdc0-3951bc09cab0
    health: HEALTH_WARN
            clock skew detected on mon.e

  services:
    mon: 4 daemons, quorum a,b,c,e (age 4h)
    mgr: a(active, since 7d)
    mds: 2/2 daemons up, 2 hot standby
    osd: 5 osds: 5 up (since 4d), 5 in (since 4d)
    rgw: 2 daemons active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 177 pgs
    objects: 883 objects, 1.3 GiB
    usage:   5.9 GiB used, 1.7 TiB / 1.7 TiB avail
    pgs:     177 active+clean

  io:
    client:   2.6 KiB/s rd, 170 B/s wr, 5 op/s rd, 0 op/s wr
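The status above reports HEALTH_WARN because clock skew was detected on mon.e. To print the full text of every active warning, run the following from the toolbox:

ceph health detail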

ceph osd tree
[root@rook-ceph-tools-54fc95f4f4-mg67d /]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME         STATUS  REWEIGHT  PRI-AFF
-1         1.70898  root default
-5         0.48830      host master2
 1    hdd  0.48830          osd.1         up   1.00000  1.00000
-7         0.53709      host node1
 2    hdd  0.48830          osd.2         up   1.00000  1.00000
 3    hdd  0.04880          osd.3         up   1.00000  1.00000
-3         0.48830      host node2
 0    hdd  0.48830          osd.0         up   1.00000  1.00000
-9         0.19530      host node3
 4    hdd  0.19530          osd.4         up   1.00000  1.00000
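If a single OSD in the tree looks suspicious, it can be queried directly; osd.3 below is only an example ID taken from the tree above:

ceph osd find 3        # report the host and CRUSH location of osd.3
ceph osd metadata 3    # show hostname, devices, and daemon metadata for osd.3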

ceph osd status
[root@rook-ceph-tools-54fc95f4f4-mg67d /]# ceph osd status
ID  HOST     USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  node2    1465M   498G       0        0       1        0  exists,up
 1  master2  1228M   498G       0        0       2      105  exists,up
 2  node1    1290M   498G       0        0       2       12  exists,up
 3  node1     342M  49.6G       0        0       0        0  exists,up
 4  node3    1731M   198G       0        0       1       89  exists,up
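The status output covers per-OSD usage and client I/O counters. To also check the per-OSD commit and apply latencies, which often reveal a slow or failing disk, run:

ceph osd perf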

ceph osd df
[root@rook-ceph-tools-54fc95f4f4-mg67d /]# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL    %USE  VAR   PGS  STATUS
 1    hdd  0.48830   1.00000  500 GiB  1.2 GiB  882 MiB   8 KiB  346 MiB  499 GiB  0.24  0.71  146      up
 2    hdd  0.48830   1.00000  500 GiB  1.3 GiB  949 MiB   8 KiB  341 MiB  499 GiB  0.25  0.75  135      up
 3    hdd  0.04880   1.00000   50 GiB  343 MiB  143 MiB     0 B  200 MiB   50 GiB  0.67  1.98   18      up
 0    hdd  0.48830   1.00000  500 GiB  1.4 GiB  1.1 GiB   8 KiB  337 MiB  499 GiB  0.29  0.85  142      up
 4    hdd  0.19530   1.00000  200 GiB  1.7 GiB  784 MiB     0 B  948 MiB  198 GiB  0.85  2.50   90      up
                       TOTAL  1.7 TiB  5.9 GiB  3.8 GiB  24 KiB  2.1 GiB  1.7 TiB  0.34

MIN/MAX VAR: 0.71/2.50  STDDEV: 0.28
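The same utilization can also be broken down along the CRUSH hierarchy (hosts and root) with the tree variant of the command:

ceph osd df tree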

ceph osd utilization
[root@rook-ceph-tools-54fc95f4f4-mg67d /]# ceph osd utilization
avg 106.2
stddev 48.4496 (expected baseline 9.21737)
min osd.3 with 18 pgs (0.169492 mean)
max osd.1 with 146 pgs (1.37476 mean)
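The gap between the minimum (osd.3 with 18 PGs) and the maximum (osd.1 with 146 PGs) largely reflects the very different OSD sizes in this cluster (50 GiB to 500 GiB), so smaller OSDs naturally carry fewer PGs. If the distribution still looks skewed relative to capacity, the balancer manager module (on by default in recent Ceph releases) can even it out; check its state with:

ceph balancer status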