1- Scaling cluster roles

1.1- Scaling out mon

A Ceph storage cluster requires at least one Ceph Monitor and one Ceph Manager to run. For high availability, a Ceph storage cluster usually runs multiple Ceph Monitors, so that the failure of a single Monitor does not take the cluster down. Ceph uses the Paxos algorithm, which requires a majority of Monitors (more than N/2, where N is the number of Monitors) to form a quorum. An odd number of Monitors tends to work better, although it is not required.
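The majority rule above can be sketched numerically: a quorum needs floor(N/2) + 1 monitors alive, so the cluster tolerates N minus that many failures. A minimal shell illustration (plain arithmetic, not a Ceph command):

```shell
# Quorum needs a strict majority: floor(N/2) + 1 monitors up.
majority() { echo $(( $1 / 2 + 1 )); }

for n in 3 4 5; do
  echo "$n mons -> quorum of $(majority $n), tolerates $(( n - $(majority $n) )) failure(s)"
done
```

Note that 4 monitors tolerate no more failures than 3 (both survive one loss), which is why odd counts are preferred.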

  • Scale out mon

    ceph-deploy mon add mon3
    ceph-deploy mon add mon4
    ceph-deploy mon add mon5
  • Verify

    ceph mon stat
  • Check the quorum election status: ceph quorum_status --format json-pretty

Once the new Ceph Monitors have been added, Ceph begins syncing them and forming a quorum. The quorum state can be checked with the following commands. (The `clock skew detected` warning in the sample below means the monitors' clocks are not synchronized; fixing NTP/chrony on the nodes clears it.)

  [root@mon1 ceph]# ceph -s
    cluster:
      id:     4f79f6df-abf6-4461-a5be-9e1aa6adb9a9
      health: HEALTH_WARN
              clock skew detected on mon.mon2

    services:
      mon: 2 daemons, quorum mon1,mon2 (age 10m)
      mgr: mon1(active, since 16s), standbys: mon2
      osd: 0 osds: 0 up, 0 in

    data:
      pools:   0 pools, 0 pgs
      objects: 0 objects, 0 B
      usage:   0 B used, 0 B / 0 B avail
      pgs:

  [root@mon1 ceph]# ceph quorum_status --format json-pretty
  {
      "election_epoch": 4,
      "quorum": [
          0,
          1
      ],
      "quorum_names": [
          "mon1",
          "mon2"
      ],
      "quorum_leader_name": "mon1",
      "quorum_age": 980,
      "monmap": {
          "epoch": 1,
          "fsid": "4f79f6df-abf6-4461-a5be-9e1aa6adb9a9",
          "modified": "2020-12-28 07:53:10.709467",
          "created": "2020-12-28 07:53:10.709467",
          "min_mon_release": 14,
          "min_mon_release_name": "nautilus",
          "features": {
              "persistent": [
                  "kraken",
                  "luminous",
                  "mimic",
                  "osdmap-prune",
                  "nautilus"
              ],
              "optional": []
          },
          "mons": [
              {
                  "rank": 0,
                  "name": "mon1",
                  "public_addrs": {
                      "addrvec": [
                          {
                              "type": "v2",
                              "addr": "10.68.3.121:3300",
                              "nonce": 0
                          },
                          {
                              "type": "v1",
                              "addr": "10.68.3.121:6789",
                              "nonce": 0
                          }
                      ]
                  },
                  "addr": "10.68.3.121:6789/0",
                  "public_addr": "10.68.3.121:6789/0"
              },
              {
                  "rank": 1,
                  "name": "mon2",
                  "public_addrs": {
                      "addrvec": [
                          {
                              "type": "v2",
                              "addr": "10.68.3.122:3300",
                              "nonce": 0
                          },
                          {
                              "type": "v1",
                              "addr": "10.68.3.122:6789",
                              "nonce": 0
                          }
                      ]
                  },
                  "addr": "10.68.3.122:6789/0",
                  "public_addr": "10.68.3.122:6789/0"
              }
          ]
      }
  }
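For scripting, the quorum leader can be pulled out of the `quorum_status` JSON without extra tooling. A hedged sketch using `sed` against a trimmed sample of the output above (a real script would feed in live `ceph quorum_status --format json-pretty` output instead):

```shell
# Trimmed sample of `ceph quorum_status --format json-pretty` output, not a live query.
quorum_json='{
    "quorum_names": [
        "mon1",
        "mon2"
    ],
    "quorum_leader_name": "mon1"
}'

# Extract the value of the "quorum_leader_name" field.
leader=$(printf '%s\n' "$quorum_json" | sed -n 's/.*"quorum_leader_name": "\([^"]*\)".*/\1/p')
echo "quorum leader: $leader"
```

Where available, `jq -r .quorum_leader_name` is the more robust choice; the `sed` form only assumes the pretty-printed one-field-per-line layout shown above.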
  • Check mon status: ceph mon stat / ceph mon dump

    [root@mon1 ceph]# ceph mon stat
    e1: 2 mons at {mon1=[v2:10.68.3.121:3300/0,v1:10.68.3.121:6789/0],mon2=[v2:10.68.3.122:3300/0,v1:10.68.3.122:6789/0]}, election epoch 4, leader 0 mon1, quorum 0,1 mon1,mon2

    [root@mon1 ceph]# ceph mon dump
    dumped monmap epoch 4
    epoch 4
    fsid 4f79f6df-abf6-4461-a5be-9e1aa6adb9a9
    last_changed 2020-12-28 16:36:15.204720
    created 2020-12-28 07:53:10.709467
    min_mon_release 14 (nautilus)
    0: [v2:10.68.3.121:3300/0,v1:10.68.3.121:6789/0] mon.mon1
    1: [v2:10.68.3.122:3300/0,v1:10.68.3.122:6789/0] mon.mon2
    2: [v2:10.68.3.123:3300/0,v1:10.68.3.123:6789/0] mon.mon3
    3: [v2:10.68.3.124:3300/0,v1:10.68.3.124:6789/0] mon.mon4
    4: [v2:10.68.3.125:3300/0,v1:10.68.3.125:6789/0] mon.mon5

    (The `ceph mon stat` output above was captured before the new Monitors joined; after the scale-out, `ceph mon dump` lists all five.)
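A quick way to confirm the scale-out took effect is to check that every expected monitor appears in the `ceph mon dump` listing. A sketch against the captured dump lines above (a live check would substitute real `ceph mon dump` output):

```shell
# Captured `ceph mon dump` address lines from the cluster above (not a live query).
mon_dump='0: [v2:10.68.3.121:3300/0,v1:10.68.3.121:6789/0] mon.mon1
1: [v2:10.68.3.122:3300/0,v1:10.68.3.122:6789/0] mon.mon2
2: [v2:10.68.3.123:3300/0,v1:10.68.3.123:6789/0] mon.mon3
3: [v2:10.68.3.124:3300/0,v1:10.68.3.124:6789/0] mon.mon4
4: [v2:10.68.3.125:3300/0,v1:10.68.3.125:6789/0] mon.mon5'

# Collect any expected monitor that is absent from the dump.
missing=""
for m in mon1 mon2 mon3 mon4 mon5; do
  printf '%s\n' "$mon_dump" | grep -q "mon\.$m\$" || missing="$missing $m"
done
echo "missing monitors:${missing:- none}"
```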

1.2- Scaling out mgr nodes

Ceph Manager daemons run in active/standby mode. Deploying additional manager daemons ensures that if one daemon or host fails, another can take over without interrupting service.

  • Add the mon3, mon4, and mon5 nodes

    ceph-deploy mgr create mon3 mon4 mon5
  • Verify

    ceph mgr dump
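To verify the active/standby layout in a script, the `active_name` field and the `standbys` entries of `ceph mgr dump` can be extracted. A sketch against a trimmed, hypothetical sample of the JSON (field names as Nautilus prints them; a live check would use real `ceph mgr dump` output):

```shell
# Trimmed, hypothetical sample of `ceph mgr dump` JSON, not live output.
mgr_dump='{
    "active_name": "mon1",
    "standbys": [
        { "name": "mon2" },
        { "name": "mon3" }
    ]
}'

# Extract the active mgr and count the standby entries.
active=$(printf '%s\n' "$mgr_dump" | sed -n 's/.*"active_name": "\([^"]*\)".*/\1/p')
standby_count=$(printf '%s\n' "$mgr_dump" | grep -c '"name"')
echo "active=$active standbys=$standby_count"
```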

1.3- Scaling out rgw

    ceph-deploy rgw create mon3 mon4 mon5
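By default radosgw listens on port 7480. If a different port is needed, the frontend can be pinned per instance in ceph.conf; a hedged sketch, assuming the `rgw.mon3` instance name that ceph-deploy generates and the `beast` frontend that Nautilus uses by default (older releases used `civetweb`):

```ini
# ceph.conf fragment; the port value 8080 here is a hypothetical example.
[client.rgw.mon3]
rgw_frontends = "beast port=8080"
```

After pushing the updated config, restart the instance, e.g. `systemctl restart ceph-radosgw@rgw.mon3`.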

1.4- Scaling out OSD

  • Add the remaining OSD disks on the node

    If an LVM volume is used, the volume is passed as vg/lv (here `journal/sdd`).

    ```bash
    ceph-deploy osd create --bluestore --fs-type xfs --data /dev/sdd --journal journal/sdd mon3
    ```
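When a node carries several remaining data disks, the per-disk invocations can be generated in a loop. The disk names `sde`/`sdf` and the `journal/<disk>` LV naming are assumptions extrapolated from the single-disk example above; the loop only prints the commands rather than running them:

```shell
# Hypothetical sketch: emit one ceph-deploy command per remaining data disk.
# Disk list and journal LV names are assumptions, not taken from a real node.
node=mon3
for disk in sdd sde sdf; do
  cmd="ceph-deploy osd create --bluestore --fs-type xfs --data /dev/$disk --journal journal/$disk $node"
  echo "$cmd"
done
```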