[root@hci4 ceph]# ceph -s
  cluster:
    id:     eff6581b-bbd9-4b08-800a-1114808ad183
    health: HEALTH_ERR
            1 backfillfull osd(s)
            1 nearfull osd(s)
            5 pool(s) backfillfull
            368831/3471036 objects misplaced (10.626%)
            Degraded data redundancy: 5824/3471036 objects degraded (0.168%), 4 pgs degraded, 4 pgs undersized
            Degraded data redundancy (low space): 26 pgs backfill_toofull

  services:
    mon: 3 daemons, quorum hci1,hci2,hci3
    mgr: hci2(active), standbys: hci3, hci1
    osd: 12 osds: 12 up, 12 in; 211 remapped pgs

  data:
    pools:   5 pools, 1280 pgs
    objects: 1129k objects, 2461 GB
    usage:   7457 GB used, 2589 GB / 10047 GB avail
    pgs:     5824/3471036 objects degraded (0.168%)
             368831/3471036 objects misplaced (10.626%)
             1069 active+clean
             173  active+remapped+backfill_wait
             22   active+remapped+backfill_wait+backfill_toofull
             12   active+remapped+backfilling
             4    active+undersized+degraded+remapped+backfill_wait+backfill_toofull

  io:
    client:   9891 kB/s rd, 6057 kB/s wr, 779 op/s rd, 954 op/s wr
    recovery: 79386 kB/s, 5 keys/s, 38 objects/s

[root@hci4 ceph]#
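The HEALTH_ERR above is driven by one backfillfull OSD, which leaves 26 PGs stuck in backfill_toofull while data is remapped. As a hedged next step (these are standard Ceph CLI commands, not part of the original capture, and no osd id is assumed), one would typically identify the full OSD and inspect per-OSD utilization:

  [root@hci4 ceph]# ceph health detail    # names the exact osd(s) flagged backfillfull / nearfull
  [root@hci4 ceph]# ceph osd df           # per-OSD usage and variance, to gauge how unbalanced the cluster is

If utilization is badly skewed across the 12 OSDs, ceph osd reweight-by-utilization is one common way to shift PGs toward emptier OSDs so the stalled backfills can proceed; whether that is appropriate depends on the cluster's CRUSH layout.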