1. Documentation



2. Related issues


2.1 ceph-deploy new ceph-node-1 fails

    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
    [ceph_deploy.cli][INFO ] Invoked (1.5.25): /usr/bin/ceph-deploy new --public-network 192.168.189.0/24 --cluster-network 192.168.190.0/24 ceph-node-1
    [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
    [ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
    [ceph_deploy][ERROR ] Traceback (most recent call last):
    [ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
    [ceph_deploy][ERROR ] return f(*a, **kw)
    [ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 162, in _main
    [ceph_deploy][ERROR ] return args.func(args)
    [ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/new.py", line 141, in new
    [ceph_deploy][ERROR ] ssh_copy_keys(host, args.username)
    [ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/new.py", line 35, in ssh_copy_keys
    [ceph_deploy][ERROR ] if ssh.can_connect_passwordless(hostname):
    [ceph_deploy][ERROR ] File "/usr/lib/python2.7/site-packages/ceph_deploy/util/ssh.py", line 15, in can_connect_passwordless
    [ceph_deploy][ERROR ] if not remoto.connection.needs_ssh(hostname):
    [ceph_deploy][ERROR ] AttributeError: 'module' object has no attribute 'needs_ssh'
    [ceph_deploy][ERROR ]

Fix: the yum repository configured in ceph.repo is probably wrong, so an outdated ceph-deploy was installed (the log above shows 1.5.25 being invoked); Ceph Nautilus requires ceph-deploy 2.0.1.
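
A quick shell sketch of guarding against this before running `ceph-deploy new`. The 2.0.1 minimum is the requirement noted above; the version string is hard-coded here for illustration, but in practice it would come from `ceph-deploy --version`:

```shell
# Compare the installed ceph-deploy version against the minimum required
# for Nautilus. "1.5.25" stands in for the output of `ceph-deploy --version`.
ver="1.5.25"
required="2.0.1"

# sort -V orders version strings numerically; if the smallest of the two
# is not $required, then $ver is older than $required.
msg="ok"
if [ "$(printf '%s\n%s\n' "$required" "$ver" | sort -V | head -n1)" != "$required" ]; then
    msg="ceph-deploy $ver is too old; install $required or newer"
fi
echo "$msg"
```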

2.2 umount /var/lib/ceph/osd/ceph-0 fails with "target is busy"

    [root@ceph-node-1 ceph]# umount /var/lib/ceph/osd/ceph-0
    umount: /var/lib/ceph/osd/ceph-0: target is busy.
            (In some cases useful info about processes that use
             the device is found by lsof(8) or fuser(1))

Fix: use fuser to see which process is holding the mount point, kill that process, and the umount then succeeds.

    [root@ceph-node-1 ceph]# fuser -mv /var/lib/ceph/osd/ceph-0
                         USER        PID ACCESS COMMAND
    /var/lib/ceph/osd/ceph-0:
                         root     kernel mount /var/lib/ceph/osd/ceph-0
                         ceph       1289 F.... ceph-osd
    [root@ceph-node-1 ceph]# kill -9 1289
    [root@ceph-node-1 ceph]# umount /var/lib/ceph/osd/ceph-0
    [root@ceph-node-1 ceph]#
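
As a side note, `kill -9` on a ceph-osd daemon works, but `systemctl stop ceph-osd@0` is the gentler route when systemd manages the daemons. Either way, the PID to act on can be pulled out of the `fuser -mv` output; a sketch against the sample output above, captured as a string for illustration:

```shell
# Sample `fuser -mv` output (from the session above) held in a variable.
out="/var/lib/ceph/osd/ceph-0:
     root     kernel mount /var/lib/ceph/osd/ceph-0
     ceph       1289 F.... ceph-osd"

# Column 4 is the command name, column 2 the PID; pick the ceph-osd line.
pid=$(echo "$out" | awk '$4 == "ceph-osd" { print $2 }')
echo "$pid"   # 1289
```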

2.3 Degraded data redundancy: 73/219 objects degraded (33.333%), 43 pgs degraded, 64 pgs undersized

    # restart the OSD daemons
    systemctl restart ceph-osd.target
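
The 33.333% figure is worth a second look before restarting anything: 73 of 219 object copies is exactly one third, so with a pool replica size of 3 this matches one replica of every object being unavailable, i.e. the OSDs on a single node being down. Arithmetic check:

```shell
# 73 degraded out of 219 total object copies is exactly one third.
pct=$(awk 'BEGIN { printf "%.3f", 73 / 219 * 100 }')
echo "$pct%"   # 33.333%
```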

2.4 application not enabled on 1 pool(s)

    ceph health detail
    # enable the rbd application on the pool
    ceph osd pool application enable ceph-pool rbd
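
`ceph health detail` names the offending pool, so the pool name for the enable command can be scripted out of the warning. A sketch against sample output (the exact wording of the warning is assumed here for illustration):

```shell
# Sample `ceph health detail` output for POOL_APP_NOT_ENABLED
# (wording assumed for illustration).
detail="POOL_APP_NOT_ENABLED: application not enabled on 1 pool(s)
    application not enabled on pool 'ceph-pool'"

# Extract the pool name from between the single quotes.
pool=$(echo "$detail" | sed -n "s/.*pool '\([^']*\)'.*/\1/p")
echo "$pool"   # ceph-pool
```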

2.5 clock skew detected on mon.hcs2, mon.hcs3

Fix: the clocks on the monitor nodes are out of sync; synchronize time across all nodes (e.g. with chrony or ntpd).
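
Monitors raise this warning when the skew exceeds `mon_clock_drift_allowed`, which defaults to 0.05 s (an assumption here; check your cluster's config). A sketch of the comparison, with an illustrative skew value:

```shell
skew="0.73"      # seconds; sample value as `ceph health detail` would report it
allowed="0.05"   # assumed default of mon_clock_drift_allowed

# Numeric comparison via awk, since plain [ ] only compares integers.
state=$(awk -v s="$skew" -v a="$allowed" \
    'BEGIN { r = (s + 0 > a + 0) ? "skewed" : "in sync"; print r }')
echo "$state"   # skewed
```

On the nodes themselves, enabling chronyd on every monitor host (`systemctl enable --now chronyd`), all pointed at the same time source, is the usual remedy.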