http://www.51niux.com/?id=161

    ceph详细中文文档(1).pdf  (detailed Ceph documentation in Chinese)

    [root@node01 ~]# ceph -h
    General usage:
    ==============
    usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
                [--setuser SETUSER] [--setgroup SETGROUP] [--id CLIENT_ID]
                [--name CLIENT_NAME] [--cluster CLUSTER]
                [--admin-daemon ADMIN_SOCKET] [-s] [-w] [--watch-debug]
                [--watch-info] [--watch-sec] [--watch-warn] [--watch-error]
                [-W WATCH_CHANNEL] [--version] [--verbose] [--concise]
                [-f {json,json-pretty,xml,xml-pretty,plain,yaml}]
                [--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]

    Ceph administration tool

    optional arguments:
      -h, --help            request mon help
      -c CEPHCONF, --conf CEPHCONF
                            ceph configuration file
      -i INPUT_FILE, --in-file INPUT_FILE
                            input file, or "-" for stdin
      -o OUTPUT_FILE, --out-file OUTPUT_FILE
                            output file, or "-" for stdout
      --setuser SETUSER     set user file permission
      --setgroup SETGROUP   set group file permission
      --id CLIENT_ID, --user CLIENT_ID
                            client id for authentication
      --name CLIENT_NAME, -n CLIENT_NAME
                            client name for authentication
      --cluster CLUSTER     cluster name
      --admin-daemon ADMIN_SOCKET
                            submit admin-socket commands ("help" for help
      -s, --status          show cluster status
      -w, --watch           watch live cluster changes
      --watch-debug         watch debug events
      --watch-info          watch info events
      --watch-sec           watch security events
      --watch-warn          watch warn events
      --watch-error         watch error events
      -W WATCH_CHANNEL, --watch-channel WATCH_CHANNEL
                            watch live cluster changes on a specific channel
                            (e.g., cluster, audit, cephadm, or '*' for all)
      --version, -v         display version
      --verbose             make verbose
      --concise             make less verbose
      -f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format {json,json-pretty,xml,xml-pretty,plain,yaml}
      --connect-timeout CLUSTER_TIMEOUT
                            set a timeout for connecting to the cluster
      --block               block until completion (scrub and deep-scrub only)
      --period PERIOD, -p PERIOD
                            polling period, default 1.0 second (for polling
                            commands only)

    Local commands:
    ===============

    ping <mon.id>           Send simple presence/life test to a mon
                            <mon.id> may be 'mon.*' for all mons
    daemon {type.id|path} <cmd>
                            Same as --admin-daemon, but auto-find admin socket
    daemonperf {type.id | path} [stat-pats] [priority] [<interval>] [<count>]
    daemonperf {type.id | path} list|ls [stat-pats] [priority]
                            Get selected perf stats from daemon/admin socket
                            Optional shell-glob comma-delim match string stat-pats
                            Optional selection priority (can abbreviate name):
                             critical, interesting, useful, noninteresting, debug
                            List shows a table of all available stats
                            Run <count> times (default forever),
                             once per <interval> seconds (default 1)

    Monitor commands:
    =================

    alerts send    (re)send alerts immediately
    auth add <entity> [<caps>...]    add auth info for <entity> from input file, or random key if no input is given, and/or any caps specified in the command
    auth caps <entity> <caps>...    update caps for <name> from caps specified in the command
    auth export [<entity>]    write keyring for requested entity, or master keyring if none given
    auth get <entity>    write keyring file with requested key
    auth get-key <entity>    display requested key
    auth get-or-create <entity> [<caps>...]    add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
    auth get-or-create-key <entity> [<caps>...]    get, or add, key for <name> from system/caps pairs specified in the command. If key already exists, any given caps must match the existing caps for that key.
    auth import    auth import: read keyring file from -i <file>
    auth ls    list authentication state
    auth print-key <entity>    display requested key
    auth print_key <entity>    display requested key
    auth rm <entity>    remove all caps for <name>
    balancer dump <plan>    Show an optimization plan
    balancer eval [<option>]    Evaluate data distribution for the current cluster or specific pool or specific plan
    balancer eval-verbose [<option>]    Evaluate data distribution for the current cluster or specific pool or specific plan (verbosely)
    balancer execute <plan>    Execute an optimization plan
    balancer ls    List all plans
    balancer mode none|crush-compat|upmap    Set balancer mode
    balancer off    Disable automatic balancing
    balancer on    Enable automatic balancing
    balancer optimize <plan> [<pools>...]    Run optimizer to create a new plan
    balancer pool add <pools>...    Enable automatic balancing for specific pools
    balancer pool ls    List automatic balancing pools. Note that empty list means all existing pools will be automatic balancing targets, which is the default behaviour of balancer.
    balancer pool rm <pools>...    Disable automatic balancing for specific pools
    balancer reset    Discard all optimization plans
    balancer rm <plan>    Discard an optimization plan
    balancer show <plan>    Show details of an optimization plan
    balancer status    Show balancer status
    config assimilate-conf    Assimilate options from a conf, and return a new, minimal conf file
    config dump    Show all configuration option(s)
    config generate-minimal-conf    Generate a minimal ceph.conf file
    config get <who> [<key>]    Show configuration option(s) for an entity
    config help <key>    Describe a configuration option
    config log [<num:int>]    Show recent history of config changes
    config ls    List available configuration options
    config reset <num:int>    Revert configuration to a historical version specified by <num>
    config rm <who> <name>    Clear a configuration option for one or more entities
    config set <who> <name> <value> [--force]    Set a configuration option for one or more entities
    config show <who> [<key>]    Show running configuration
    config show-with-defaults <who>    Show running configuration (including compiled-in defaults)
    config-key dump [<key>]    dump keys and values (with optional prefix)
    config-key exists <key>    check for <key>'s existence
    config-key get <key>    get <key>
    config-key ls    list keys
    config-key rm <key>    rm <key>
    config-key set <key> [<val>]    set <key> to value <val>
    crash archive <id>    Acknowledge a crash and silence health warning(s)
    crash archive-all    Acknowledge all new crashes and silence health warning(s)
    crash info <id>    show crash dump metadata
    crash json_report <hours>    Crashes in the last <hours> hours
    crash ls    Show new and archived crash dumps
    crash ls-new    Show new crash dumps
    crash post    Add a crash dump (use -i <jsonfile>)
    crash prune <keep>    Remove crashes older than <keep> days
    crash rm <id>    Remove a saved crash <id>
    crash stat    Summarize recorded crashes
    device check-health    Check life expectancy of devices
    device get-health-metrics <devid> [<sample>]    Show stored device metrics for the device
    device info <devid>    Show information about a device
    device light on|off <devid> [ident|fault] [--force]    Enable or disable the device light. Default type is `ident`. Usage: device light (on|off) <devid> [ident|fault] [--force]
    device ls    Show devices
    device ls-by-daemon <who>    Show devices associated with a daemon
    device ls-by-host <host>    Show devices on a host
    device ls-lights    List currently active device indicator lights
    device monitoring off    Disable device health monitoring
    device monitoring on    Enable device health monitoring
    device predict-life-expectancy <devid>    Predict life expectancy with local predictor
    device query-daemon-health-metrics <who>    Get device health metrics for a given daemon
    device rm-life-expectancy <devid>    Clear predicted device life expectancy
    device scrape-daemon-health-metrics <who>    Scrape and store device health metrics for a given daemon
    device scrape-health-metrics [<devid>]    Scrape and store health metrics
    device set-life-expectancy <devid> <from> [<to>]    Set predicted device life expectancy
    df [detail]    show cluster free space stats
    features    report of connected features
    fs add_data_pool <fs_name> <pool>    add data pool <pool>
    fs authorize <filesystem> <entity> <caps>...    add auth for <entity> to access file system <filesystem> based on following directory and permissions pairs
    fs clone cancel <vol_name> <clone_name> [<group_name>]    Cancel an pending or ongoing clone operation.
    fs clone status <vol_name> <clone_name> [<group_name>]    Get status on a cloned subvolume.
    fs dump [<epoch:int>]    dump all CephFS status, optionally from epoch
    fs fail <fs_name>    bring the file system down and all of its ranks
    fs flag set enable_multiple <val> [--yes-i-really-mean-it]    Set a global CephFS flag
    fs get <fs_name>    get info about one filesystem
    fs ls    list filesystems
    fs new <fs_name> <metadata> <data> [--force] [--allow-dangerous-metadata-overlay]    make new filesystem using named pools <metadata> and <data>
    fs reset <fs_name> [--yes-i-really-mean-it]    disaster recovery only: reset to a single-MDS map
    fs rm <fs_name> [--yes-i-really-mean-it]    disable the named filesystem
    fs rm_data_pool <fs_name> <pool>    remove data pool <pool>
    fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client <val> [--yes-i-really-mean-it] [--yes-i-really-really-mean-it]    set fs parameter <var> to <val>
    fs set-default <fs_name>    set the default to the named filesystem
    fs status [<fs>]    Show the status of a CephFS filesystem
    fs subvolume create <vol_name> <sub_name> [<size:int>] [<group_name>] [<pool_layout>] [<uid:int>] [<gid:int>] [<mode>]    Create a CephFS subvolume in a volume, and optionally, with a specific size (in bytes), a specific data pool layout, a specific mode, and in a specific subvolume group
    fs subvolume getpath <vol_name> <sub_name> [<group_name>]    Get the mountpath of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
    fs subvolume info <vol_name> <sub_name> [<group_name>]    Get the metadata of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
    fs subvolume ls <vol_name> [<group_name>]    List subvolumes
    fs subvolume resize <vol_name> <sub_name> <new_size> [<group_name>] [--no-shrink]    Resize a CephFS subvolume
    fs subvolume rm <vol_name> <sub_name> [<group_name>] [--force]    Delete a CephFS subvolume in a volume, and optionally, in a specific subvolume group
    fs subvolume snapshot clone <vol_name> <sub_name> <snap_name> <target_sub_name> [<pool_layout>] [<group_name>] [<target_group_name>]    Clone a snapshot to target subvolume
    fs subvolume snapshot create <vol_name> <sub_name> <snap_name> [<group_name>]    Create a snapshot of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
    fs subvolume snapshot ls <vol_name> <sub_name> [<group_name>]    List subvolume snapshots
    fs subvolume snapshot protect <vol_name> <sub_name> <snap_name> [<group_name>]    Protect snapshot of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
    fs subvolume snapshot rm <vol_name> <sub_name> <snap_name> [<group_name>] [--force]    Delete a snapshot of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
    fs subvolume snapshot unprotect <vol_name> <sub_name> <snap_name> [<group_name>]    Unprotect a snapshot of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
    fs subvolumegroup create <vol_name> <group_name> [<pool_layout>] [<uid:int>] [<gid:int>] [<mode>]    Create a CephFS subvolume group in a volume, and optionally, with a specific data pool layout, and a specific numeric mode
    fs subvolumegroup getpath <vol_name> <group_name>    Get the mountpath of a CephFS subvolume group in a volume
    fs subvolumegroup ls <vol_name>    List subvolumegroups
    fs subvolumegroup rm <vol_name> <group_name> [--force]    Delete a CephFS subvolume group in a volume
    fs subvolumegroup snapshot create <vol_name> <group_name> <snap_name>    Create a snapshot of a CephFS subvolume group in a volume
    fs subvolumegroup snapshot ls <vol_name> <group_name>    List subvolumegroup snapshots
    fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]    Delete a snapshot of a CephFS subvolume group in a volume
    fs volume create <name> [<placement>]    Create a CephFS volume
    fs volume ls    List volumes
    fs volume rm <vol_name> [<yes-i-really-mean-it>]    Delete a FS volume by passing --yes-i-really-mean-it flag
    fsid    show cluster FSID/UUID
    health [detail]    show cluster health
    health mute <code> [<ttl>] [--sticky]    mute health alert
    health unmute [<code>]    unmute existing health alert mute(s)
    influx config-set <key> <value>    Set a configuration value
    influx config-show    Show current configuration
    influx send    Force sending data to Influx
    insights    Retrieve insights report
    insights prune-health <hours>    Remove health history older than <hours> hours
    iostat    Get IO rates
    log <logtext>...    log supplied text to the monitor log
    log last [<num:int>] [debug|info|sec|warn|error] [*|cluster|audit|cephadm]    print last few lines of the cluster log
    mds compat rm_compat <feature:int>    remove compatible feature
    mds compat rm_incompat <feature:int>    remove incompatible feature
    mds compat show    show mds compatibility settings
    mds count-metadata <property>    count MDSs by metadata field property
    mds fail <role_or_gid>    Mark MDS failed: trigger a failover if a standby is available
    mds metadata [<who>]    fetch metadata for mds <role>
    mds ok-to-stop <ids>...    check whether stopping the specified MDS would reduce immediate availability
    mds repaired <role>    mark a damaged MDS rank as no longer damaged
    mds rm <gid:int>    remove nonactive mds
    mds versions    check running versions of MDSs
    mgr count-metadata <property>    count ceph-mgr daemons by metadata field property
    mgr dump [<epoch:int>]    dump the latest MgrMap
    mgr fail [<who>]    treat the named manager daemon as failed
    mgr metadata [<who>]    dump metadata for all daemons or a specific daemon
    mgr module disable <module>    disable mgr module
    mgr module enable <module> [--force]    enable mgr module
    mgr module ls    list active mgr modules
    mgr self-test background start <workload>    Activate a background workload (one of command_spam, throw_exception)
    mgr self-test background stop    Stop background workload if any is running
    mgr self-test cluster-log <channel> <priority> <message>    Create an audit log record.
    mgr self-test config get <key>    Peek at a configuration value
    mgr self-test config get_localized <key>    Peek at a configuration value (localized variant)
    mgr self-test health clear [<checks>...]    Clear health checks by name. If no names provided, clear all.
    mgr self-test health set <checks>    Set a health check from a JSON-formatted description.
    mgr self-test insights_set_now_offset <hours>    Set the now time for the insights module.
    mgr self-test module <module>    Run another module's self_test() method
    mgr self-test remote    Test inter-module calls
    mgr self-test run    Run mgr python interface tests
    mgr services    list service endpoints provided by mgr modules
    mgr versions    check running versions of ceph-mgr daemons
    mon add <name> <addr>    add new monitor named <name> at <addr>
    mon count-metadata <property>    count mons by metadata field property
    mon dump [<epoch:int>]    dump formatted monmap (optionally from epoch)
    mon enable-msgr2    enable the msgr2 protocol on port 3300
    mon feature ls [--with-value]    list available mon map features to be set/unset
    mon feature set <feature_name> [--yes-i-really-mean-it]    set provided feature on mon map
    mon getmap [<epoch:int>]    get monmap
    mon metadata [<id>]    fetch metadata for mon <id>
    mon ok-to-add-offline    check whether adding a mon and not starting it would break quorum
    mon ok-to-rm <id>    check whether removing the specified mon would break quorum
    mon ok-to-stop <ids>...    check whether mon(s) can be safely stopped without reducing immediate availability
    mon rm <name>    remove monitor named <name>
    mon scrub    scrub the monitor stores
    mon set-addrs <name> <addrs>    set the addrs (IPs and ports) a specific monitor binds to
    mon set-rank <name> <rank:int>    set the rank for the specified mon
    mon set-weight <name> <weight:int>    set the weight for the specified mon
    mon stat    summarize monitor status
    mon versions    check running versions of monitors
    node ls [all|osd|mon|mds|mgr]    list all nodes in cluster [type]
    orch apply [mon|mgr|rbd-mirror|crash|alertmanager|grafana|node-exporter|prometheus] [<placement>] [--unmanaged]    Update the size or placement for a service or apply a large yaml spec
    orch apply mds <fs_name> [<placement>] [--unmanaged]    Update the number of MDS instances for the given fs_name
    orch apply nfs <svc_id> <pool> [<namespace>] [<placement>] [--unmanaged]    Scale an NFS service
    orch apply osd [--all-available-devices] [--preview] [<service_name>] [--unmanaged] [plain|json|json-pretty|yaml]    Create OSD daemon(s) using a drive group spec
    orch apply rgw <realm_name> <zone_name> [<subcluster>] [<port:int>] [--ssl] [<placement>] [--unmanaged]    Update the number of RGW instances for the given zone
    orch cancel    cancels ongoing operations
    orch daemon add [mon|mgr|rbd-mirror|crash|alertmanager|grafana|node-exporter|prometheus] [<placement>]    Add daemon(s)
    orch daemon add iscsi <pool> [<fqdn_enabled>] [<trusted_ip_list>] [<placement>]    Start iscsi daemon(s)
    orch daemon add mds <fs_name> [<placement>]    Start MDS daemon(s)
    orch daemon add nfs <svc_arg> <pool> [<namespace>] [<placement>]    Start NFS daemon(s)
    orch daemon add osd [<svc_arg>]    Create an OSD service. Either --svc_arg=host:drives
    orch daemon add rgw [<realm_name>] [<zone_name>] [<placement>]    Start RGW daemon(s)
    orch daemon rm <names>... [--force]    Remove specific daemon(s)
    orch daemon start|stop|restart|redeploy|reconfig <name>    Start, stop, restart, redeploy, or reconfig a specific daemon
    orch device ls [<hostname>...] [plain|json|json-pretty|yaml] [--refresh]    List devices on a host
    orch device zap <hostname> <path> [--force]    Zap (erase!) a device so it can be re-used
    orch host add <hostname> [<addr>] [<labels>...]    Add a host
    orch host label add <hostname> <label>    Add a host label
    orch host label rm <hostname> <label>    Remove a host label
    orch host ls [plain|json|json-pretty|yaml]    List hosts
    orch host rm <hostname>    Remove a host
    orch host set-addr <hostname> <addr>    Update a host address
    orch ls [<service_type>] [<service_name>] [--export] [plain|json|json-pretty|yaml] [--refresh]    List services known to orchestrator
    orch osd rm <svc_id>... [--replace] [--force]    Remove OSD services
    orch osd rm status    status of OSD removal operation
    orch pause    Pause orchestrator background work
    orch ps [<hostname>] [<service_name>] [<daemon_type>] [<daemon_id>] [plain|json|json-pretty|yaml] [--refresh]    List daemons known to orchestrator
    orch resume    Resume orchestrator background work (if paused)
    orch rm <service_name> [--force]    Remove a service
    orch set backend <module_name>    Select orchestrator module backend
    orch start|stop|restart|redeploy|reconfig <service_name>    Start, stop, restart, redeploy, or reconfig an entire service (i.e. all daemons)
    orch status    Report configured backend and its status
    orch upgrade check [<image>] [<ceph_version>]    Check service versions vs available and target containers
    orch upgrade pause    Pause an in-progress upgrade
    orch upgrade resume    Resume paused upgrade
    orch upgrade start [<image>] [<ceph_version>]    Initiate upgrade
    orch upgrade status    Check service versions vs available and target containers
    orch upgrade stop    Stop an in-progress upgrade
    osd blacklist add|rm <addr> [<expire:float>]    add (optionally until <expire> seconds from now) or remove <addr> from blacklist
    osd blacklist clear    clear all blacklisted clients
    osd blacklist ls    show blacklisted clients
    osd blocked-by    print histogram of which OSDs are blocking their peers
    osd count-metadata <property>    count OSDs by metadata field property
    osd crush add <id|osd.id> <weight:float> <args>...    add or update crushmap position and weight for <name> with <weight> and location <args>
    osd crush add-bucket <name> <type> [<args>...]    add no-parent (probably root) crush bucket <name> of type <type> to location <args>
    osd crush class create <class>    create crush device class <class>
    osd crush class ls    list all crush device classes
    osd crush class ls-osd <class>    list all osds belonging to the specific <class>
    osd crush class rename <srcname> <dstname>    rename crush device class <srcname> to <dstname>
    osd crush class rm <class>    remove crush device class <class>
    osd crush create-or-move <id|osd.id> <weight:float> <args>...    create entry or move existing entry for <name> <weight> at/to location <args>
    osd crush dump    dump crush map
    osd crush get-device-class <ids>...    get classes of specified osd(s) <id> [<id>...]
    osd crush get-tunable straw_calc_version    get crush tunable <tunable>
    osd crush link <name> <args>...    link existing entry for <name> under location <args>
    osd crush ls <node>    list items beneath a node in the CRUSH tree
    osd crush move <name> <args>...    move existing entry for <name> to location <args>
    osd crush rename-bucket <srcname> <dstname>    rename bucket <srcname> to <dstname>
    osd crush reweight <name> <weight:float>    change <name>'s weight to <weight> in crush map
    osd crush reweight-all    recalculate the weights for the tree to ensure they sum correctly
    osd crush reweight-subtree <name> <weight:float>    change all leaf items beneath <name> to <weight> in crush map
    osd crush rm <name> [<ancestor>]    remove <name> from crush map (everywhere, or just at <ancestor>)
    osd crush rm-device-class <ids>...    remove class of the osd(s) <id> [<id>...],or use <all|any> to remove all.
    osd crush rule create-erasure <name> [<profile>]    create crush rule <name> for erasure coded pool created with <profile> (default default)
    osd crush rule create-replicated <name> <root> <type> [<class>]    create crush rule <name> for replicated pool to start from <root>, replicate across buckets of type <type>, use devices of type <class> (ssd or hdd)
    osd crush rule create-simple <name> <root> <type> [firstn|indep]    create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools)
    osd crush rule dump [<name>]    dump crush rule <name> (default all)
    osd crush rule ls    list crush rules
    osd crush rule ls-by-class <class>    list all crush rules that reference the same <class>
    osd crush rule rename <srcname> <dstname>    rename crush rule <srcname> to <dstname>
    osd crush rule rm <name>    remove crush rule <name>
    osd crush set <id|osd.id> <weight:float> <args>...    update crushmap position and weight for <name> to <weight> with location <args>
    osd crush set [<prior_version:int>]    set crush map from input file
    osd crush set-all-straw-buckets-to-straw2    convert all CRUSH current straw buckets to use the straw2 algorithm
    osd crush set-device-class <class> <ids>...    set the <class> of the osd(s) <id> [<id>...],or use <all|any> to set all.
    osd crush set-tunable straw_calc_version <value:int>    set crush tunable <tunable> to <value>
    osd crush show-tunables    show current crush tunables
    osd crush swap-bucket <source> <dest> [--yes-i-really-mean-it]    swap existing bucket contents from (orphan) bucket <source> and <target>
    osd crush tree [--show-shadow]    dump crush buckets and items in a tree view
    osd crush tunables legacy|argonaut|bobtail|firefly|hammer|jewel|optimal|default    set crush tunables values to <profile>
    osd crush unlink <name> [<ancestor>]    unlink <name> from crush map (everywhere, or just at <ancestor>)
    osd crush weight-set create <pool> flat|positional    create a weight-set for a given pool
    osd crush weight-set create-compat    create a default backward-compatible weight-set
    osd crush weight-set dump    dump crush weight sets
    osd crush weight-set ls    list crush weight sets
    osd crush weight-set reweight <pool> <item> <weight:float>...    set weight for an item (bucket or osd) in a pool's weight-set
    osd crush weight-set reweight-compat <item> <weight:float>...    set weight for an item (bucket or osd) in the backward-compatible weight-set
    osd crush weight-set rm <pool>    remove the weight-set for a given pool
    osd crush weight-set rm-compat    remove the backward-compatible weight-set
    osd deep-scrub <who>    initiate deep scrub on osd <who>, or use <all|any> to deep scrub all
    osd destroy <id|osd.id> [--force] [--yes-i-really-mean-it]    mark osd as being destroyed. Keeps the ID intact (allowing reuse), but removes cephx keys, config-key data and lockbox keys, rendering data permanently unreadable.
    osd df [plain|tree] [class|name] [<filter>]    show OSD utilization
    osd down <ids>... [--definitely-dead]    set osd(s) <id> [<id>...] down, or use <any|all> to set all osds down
    osd drain <osd_ids:int>...    drain osd ids
    osd drain status    show status
    osd drain stop [<osd_ids:int>...]    show status for osds. Stopping all if osd_ids are omitted
    osd dump [<epoch:int>]    print summary of OSD map
    osd erasure-code-profile get <name>    get erasure code profile <name>
    osd erasure-code-profile ls    list all erasure code profiles
    osd erasure-code-profile rm <name>    remove erasure code profile <name>
    osd erasure-code-profile set <name> [<profile>...] [--force]    create erasure code profile <name> with [<key[=value]> ...] pairs. Add a --force at the end to override an existing profile (VERY DANGEROUS)
    osd find <id|osd.id>    find osd <id> in the CRUSH map and show its location
    osd force-create-pg <pgid> [--yes-i-really-mean-it]    force creation of pg <pgid>
    osd get-require-min-compat-client    get the minimum client version we will maintain compatibility with
    osd getcrushmap [<epoch:int>]    get CRUSH map
    osd getmap [<epoch:int>]    get OSD map
    osd getmaxosd    show largest OSD id
    osd in <ids>...    set osd(s) <id> [<id>...] in, can use <any|all> to automatically set all previously out osds in
    osd info [<id|osd.id>]    print osd's {id} information (instead of all osds from map)
    osd last-stat-seq <id|osd.id>    get the last pg stats sequence number reported for this osd
    osd lost <id|osd.id> [--yes-i-really-mean-it]    mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL
    osd ls [<epoch:int>]    show all OSD ids
    osd ls-tree [<epoch:int>] <name>    show OSD ids under bucket <name> in the CRUSH map
    osd map <pool> <object> [<nspace>]    find pg for <object> in <pool> with [namespace]
    osd metadata [<id|osd.id>]    fetch metadata for osd {id} (default all)
    osd new <uuid> [<id|osd.id>]    Create a new OSD. If supplied, the `id` to be replaced needs to exist and have been previously destroyed. Reads secrets from JSON file via `-i <file>` (see man page).
    osd numa-status    show NUMA status of OSDs
    osd ok-to-stop <ids>...    check whether osd(s) can be safely stopped without reducing immediate data availability
    osd out <ids>...    set osd(s) <id> [<id>...] out, or use <any|all> to set all osds out
    osd pause    pause osd
    osd perf    print dump of OSD perf summary stats
    osd pg-temp <pgid> [<id|osd.id>...]    set pg_temp mapping pgid:[<id> [<id>...]] (developers only)
    osd pg-upmap <pgid> <id|osd.id>...    set pg_upmap mapping <pgid>:[<id> [<id>...]] (developers only)
    osd pg-upmap-items <pgid> <id|osd.id>...    set pg_upmap_items mapping <pgid>:{<id> to <id>, [...]} (developers only)
    osd pool application disable <pool> <app> [--yes-i-really-mean-it]    disables use of an application <app> on pool <poolname>
    osd pool application enable <pool> <app> [--yes-i-really-mean-it]    enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname>
    osd pool application get [<pool>] [<app>] [<key>]    get value of key <key> of application <app> on pool <poolname>
    osd pool application rm <pool> <app> <key>    removes application <app> metadata key <key> on pool <poolname>
    osd pool application set <pool> <app> <key> <value>    sets application <app> metadata key <key> to <value> on pool <poolname>
    osd pool autoscale-status    report on pool pg_num sizing recommendation and intent
    osd pool cancel-force-backfill <who>...    restore normal recovery priority of specified pool <who>
    osd pool cancel-force-recovery <who>...    restore normal recovery priority of specified pool <who>
    osd pool create <pool> [<pg_num:int>] [<pgp_num:int>] [replicated|erasure] [<erasure_code_profile>] [<rule>] [<expected_num_objects:int>] [<size:int>] [<pg_num_min:int>] [on|off|warn] [<target_size_bytes:int>] [<target_size_ratio:float>]    create pool
    osd pool deep-scrub <who>...    initiate deep-scrub on pool <who>
    osd pool force-backfill <who>...    force backfill of specified pool <who> first
    osd pool force-recovery <who>...    force recovery of specified pool <who> first
    osd pool get <pool> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|all|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|target_size_bytes|target_size_ratio    get pool parameter <var>
    osd pool get-quota <pool>    obtain object or byte limits for pool
    osd pool ls [detail]    list pools
    osd pool mksnap <pool> <snap>    make snapshot <snap> in <pool>
    osd pool rename <srcpool> <destpool>    rename <srcpool> to <destpool>
    osd pool repair <who>...    initiate repair on pool <who>
    osd pool rm <pool> [<pool2>] [--yes-i-really-really-mean-it] [--yes-i-really-really-mean-it-not-faking]    remove pool
    osd pool rmsnap <pool> <snap>    remove snapshot <snap> from <pool>
    osd pool scrub <who>...    initiate scrub on pool <who>
    osd pool set <pool> size|min_size|pg_num|pgp_num|pgp_num_actual|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|target_size_bytes|target_size_ratio <val> [--yes-i-really-mean-it]    set pool parameter <var> to <val>
    osd pool set-quota <pool> max_objects|max_bytes <val>    set object or byte limit on pool
    osd pool stats [<pool_name>]    obtain stats from all pools, or from specified pool
    osd primary-affinity <id|osd.id> <weight:float>    adjust osd primary-affinity from 0.0 <= <weight> <= 1.0
    osd primary-temp <pgid> <id|osd.id>    set primary_temp mapping pgid:<id>|-1 (developers only)
    osd purge <id|osd.id> [--force] [--yes-i-really-mean-it]    purge all osd data from the monitors including the OSD id and CRUSH position
    osd purge-new <id|osd.id> [--yes-i-really-mean-it]    purge all traces of an OSD that was partially created but never started
    osd repair <who>    initiate repair on osd <who>, or use <all|any> to repair all
    osd require-osd-release luminous|mimic|nautilus|octopus [--yes-i-really-mean-it]    set the minimum allowed OSD release to participate in the cluster
    osd reweight <id|osd.id> <weight:float>    reweight osd to 0.0 < <weight> < 1.0
    osd reweight-by-pg [<oload:int>] [<max_change:float>] [<max_osds:int>] [<pools>...]    reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
    osd reweight-by-utilization [<oload:int>] [<max_change:float>] [<max_osds:int>] [--no-increasing]    reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
    osd reweightn <weights>    reweight osds with {<id>: <weight>,...})
    osd rm-pg-upmap <pgid>    clear pg_upmap mapping for <pgid> (developers only)
    osd rm-pg-upmap-items <pgid>    clear pg_upmap_items mapping for <pgid> (developers only)
    osd safe-to-destroy <ids>...    check whether osd(s) can be safely destroyed without reducing data durability
    osd scrub <who>    initiate scrub on osd <who>, or use <all|any> to scrub all
    osd set full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|pglog_hardlimit [--yes-i-really-mean-it]    set <key>
    osd set-backfillfull-ratio <ratio:float>    set usage ratio at which OSDs are marked too full to backfill
    osd set-full-ratio <ratio:float>    set usage ratio at which OSDs are marked full
    osd set-group <flags> <who>...    set <flags> for batch osds or crush nodes, <flags> must be a comma-separated subset of {noup,nodown,noin,noout}
    osd set-nearfull-ratio <ratio:float>    set usage ratio at which OSDs are marked near-full
    osd set-require-min-compat-client <version> [--yes-i-really-mean-it]    set the minimum client version we will maintain compatibility with
    osd setcrushmap [<prior_version:int>]    set crush map from input file
    osd setmaxosd <newmax:int>    set new maximum osd value
    osd stat    print summary of OSD map
    osd status [<bucket>]    Show the status of OSDs within a bucket, or all
    osd stop <ids>...    stop the corresponding osd daemons and mark them as down
    osd test-reweight-by-pg [<oload:int>] [<max_change:float>] [<max_osds:int>] [<pools>...]    dry run of reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
    osd test-reweight-by-utilization [<oload:int>] [<max_change:float>] [<max_osds:int>] [--no-increasing]    dry run of reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
    osd tier add <pool> <tierpool> [--force-nonempty]    add the tier <tierpool> (the second one) to base pool <pool> (the first one)
    osd tier add-cache <pool> <tierpool> <size:int>    add a cache <tierpool> (the second one) of size <size> to existing pool <pool> (the first one)
    osd tier cache-mode <pool> none|writeback|forward|readonly|readforward|proxy|readproxy [--yes-i-really-mean-it]    specify the caching mode for cache tier <pool>
    osd tier rm <pool> <tierpool>    remove the tier <tierpool> (the second one) from base pool <pool> (the first one)
    osd tier rm-overlay <pool>    remove the overlay pool for base pool <pool>
    osd tier set-overlay <pool> <overlaypool>    set the overlay pool for base pool <pool> to be <overlaypool>
    osd tree [<epoch:int>] [up|down|in|out|destroyed...]    print OSD tree
    osd tree-from [<epoch:int>] <bucket> [up|down|in|out|destroyed...]    print OSD tree in bucket
    osd unpause    unpause osd
    osd unset full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim    unset <key>
    osd unset-group <flags> <who>...    unset <flags> for batch osds or crush nodes, <flags> must be a comma-separated subset of {noup,nodown,noin,noout}
    osd utilization    get basic pg distribution stats
    osd versions    check running versions of OSDs
    pg cancel-force-backfill <pgid>...    restore normal backfill priority of <pgid>
    pg cancel-force-recovery <pgid>...    restore normal recovery priority of <pgid>
    pg debug unfound_objects_exist|degraded_pgs_exist    show debug info about pgs
    pg deep-scrub <pgid>    start deep-scrub on <pgid>
    pg dump [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]    show human-readable versions of pg map (only 'all' valid with plain)
    pg dump_json [all|summary|sum|pools|osds|pgs...]    show human-readable version of pg map in json only
    pg dump_pools_json    show pg pools info in json only
    pg dump_stuck [inactive|unclean|stale|undersized|degraded...] [<threshold:int>]    show information about stuck pgs
    pg force-backfill <pgid>...    force backfill of <pgid> first
    pg force-recovery <pgid>...    force recovery of <pgid> first
    pg getmap    get binary pg map to -o/stdout
    pg ls [<pool:int>] [<states>...]    list pg with specific pool, osd, state
    pg ls-by-osd <id|osd.id> [<pool:int>] [<states>...]    list pg on osd [osd]
    pg ls-by-pool <poolstr> [<states>...]    list pg with pool = [poolname]
    pg ls-by-primary <id|osd.id> [<pool:int>] [<states>...]    list pg with primary = [osd]
    pg map <pgid>    show mapping of pg to osds
    pg repair <pgid>    start repair on <pgid>
    pg repeer <pgid>    force a PG to repeer
    pg scrub <pgid>    start scrub on <pgid>
    pg stat    show placement group status.
    progress    Show progress of recovery operations
    progress clear    Reset progress tracking
    progress json    Show machine readable progress information
    prometheus file_sd_config    Return file_sd compatible prometheus config for mgr cluster
    quorum_status    report status of monitor quorum
    rbd mirror snapshot schedule add <level_spec> <interval> [<start_time>]    Add rbd mirror snapshot schedule
    rbd mirror snapshot schedule list [<level_spec>]    List rbd mirror snapshot schedule
    rbd mirror snapshot schedule remove <level_spec> [<interval>] [<start_time>]    Remove rbd mirror snapshot schedule
    rbd mirror snapshot schedule status [<level_spec>]    Show rbd mirror snapshot schedule status
    rbd perf image counters [<pool_spec>] [write_ops|write_bytes|write_latency|read_ops|read_bytes|read_latency]    Retrieve current RBD IO performance counters
    rbd perf image stats [<pool_spec>] [write_ops|write_bytes|write_latency|read_ops|read_bytes|read_latency]    Retrieve current RBD IO performance stats
    rbd task add flatten <image_spec>    Flatten a cloned image asynchronously in the background
    rbd task add migration abort <image_spec>    Abort a prepared migration asynchronously in the background
    rbd task add migration commit <image_spec>    Commit an executed migration asynchronously in the background
    rbd task add migration execute <image_spec>    Execute an image migration asynchronously in the background
    rbd task add remove <image_spec>    Remove an image asynchronously in the background
    rbd task add trash remove <image_id_spec>    Remove an image from the trash asynchronously in the background
    rbd task cancel <task_id>    Cancel a pending or running asynchronous task
    rbd task list [<task_id>]    List pending or running asynchronous tasks
    rbd trash purge schedule add <level_spec> <interval> [<start_time>]    Add rbd trash purge schedule
    rbd trash purge schedule list [<level_spec>]    List rbd trash purge schedule
    rbd trash purge schedule remove <level_spec> [<interval>] [<start_time>]    Remove rbd trash purge schedule
    rbd trash purge schedule status [<level_spec>]    Show rbd trash purge schedule status
    report [<tags>...]    report full status of cluster, optional title tag strings
    restful create-key <key_name>    Create an API key with this name
    restful create-self-signed-cert    Create localized self signed certificate
    restful delete-key <key_name>    Delete an API key with this name
    restful list-keys    List all API keys
    restful restart    Restart API server
    service dump    dump service map
    service status    dump service state
    status    show cluster status
    telegraf config-set <key> <value>    Set a configuration value
    telegraf config-show    Show current configuration
    telegraf send    Force sending data to Telegraf
    telemetry off    Disable telemetry reports from this cluster
    telemetry on [<license>]    Enable telemetry reports from this cluster
    telemetry send [ceph|device...] [<license>]    Force sending data to Ceph telemetry
    telemetry show [<channels>...]    Show last report or report to be sent
    telemetry show-device    Show last device report or device report to be sent
    telemetry status    Show current configuration
    tell <type.id> <args>...    send a command to a specific daemon
    test_orchestrator load_data    load dummy data into test orchestrator
    time-sync-status    show time sync status
    versions    check running versions of ceph daemons
    zabbix config-set <key> <value>    Set a configuration value
    zabbix config-show    Show current configuration
    zabbix discovery    Discovering Zabbix data
    zabbix send    Force sending data to Zabbix
    [root@node01 ~]#
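
The listing above is reference output. For orientation, here is a minimal sketch of how the global options and the local commands are typically combined on a node that already runs Ceph daemons. The names mon.node01 and osd.0 are placeholders for this example, not values taken from the output above.

    # Cluster status, as plain text and as pretty-printed JSON (-f/--format).
    ceph -s
    ceph status -f json-pretty

    # Follow live cluster changes; -W selects a log channel ('*' for all).
    ceph -w
    ceph -W cephadm

    # Local commands: ping a monitor and talk to a daemon's admin socket.
    ceph ping mon.node01          # example monitor name
    ceph daemon osd.0 help        # same as --admin-daemon, socket found automatically; run on the host of osd.0
    ceph daemonperf osd.0 list    # list the perf counters the daemon exposes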
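And a hedged example session with a few of the monitor commands from the listing. The pool name testpool and the health code OSD_NEARFULL are illustrative only; adapt them to your cluster, and keep in mind that pool changes and alert mutes have operational consequences.

    # Health and capacity overview.
    ceph health detail
    ceph df detail
    ceph osd df tree

    # Create a replicated pool and tag it for RBD (pool name is a placeholder).
    ceph osd pool create testpool 32 32 replicated
    ceph osd pool application enable testpool rbd
    ceph osd pool get testpool size

    # Turn on the upmap balancer and check what it is doing
    # (upmap mode needs clients >= luminous; see osd set-require-min-compat-client).
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status

    # Review and acknowledge recorded daemon crashes.
    ceph crash ls-new
    ceph crash archive-all

    # Silence a health alert for one hour (codes come from 'ceph health detail').
    ceph health mute OSD_NEARFULL 1h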

    鲲鹏BoostKit分布式存储使能套件 Ceph移植&部署&调优指南 02.pdf  (Kunpeng BoostKit distributed storage enablement suite: Ceph porting, deployment, and tuning guide, in Chinese)