http://www.51niux.com/?id=161
ceph详细中文文档(1).pdf (detailed Ceph documentation in Chinese)
[root@node01 ~]# ceph -h
General usage:
==============
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
            [--setuser SETUSER] [--setgroup SETGROUP] [--id CLIENT_ID]
            [--name CLIENT_NAME] [--cluster CLUSTER]
            [--admin-daemon ADMIN_SOCKET] [-s] [-w] [--watch-debug]
            [--watch-info] [--watch-sec] [--watch-warn] [--watch-error]
            [-W WATCH_CHANNEL] [--version] [--verbose] [--concise]
            [-f {json,json-pretty,xml,xml-pretty,plain,yaml}]
            [--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]

Ceph administration tool

optional arguments:
  -h, --help    request mon help
  -c CEPHCONF, --conf CEPHCONF    ceph configuration file
  -i INPUT_FILE, --in-file INPUT_FILE    input file, or "-" for stdin
  -o OUTPUT_FILE, --out-file OUTPUT_FILE    output file, or "-" for stdout
  --setuser SETUSER    set user file permission
  --setgroup SETGROUP    set group file permission
  --id CLIENT_ID, --user CLIENT_ID    client id for authentication
  --name CLIENT_NAME, -n CLIENT_NAME    client name for authentication
  --cluster CLUSTER    cluster name
  --admin-daemon ADMIN_SOCKET    submit admin-socket commands ("help" for help)
  -s, --status    show cluster status
  -w, --watch    watch live cluster changes
  --watch-debug    watch debug events
  --watch-info    watch info events
  --watch-sec    watch security events
  --watch-warn    watch warn events
  --watch-error    watch error events
  -W WATCH_CHANNEL, --watch-channel WATCH_CHANNEL    watch live cluster changes on a specific channel (e.g., cluster, audit, cephadm, or '*' for all)
  --version, -v    display version
  --verbose    make verbose
  --concise    make less verbose
  -f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format {json,json-pretty,xml,xml-pretty,plain,yaml}
  --connect-timeout CLUSTER_TIMEOUT    set a timeout for connecting to the cluster
  --block    block until completion (scrub and deep-scrub only)
  --period PERIOD, -p PERIOD    polling period, default 1.0 second (for polling commands only)

Local commands:
===============
ping <mon.id>    Send simple presence/life test to a mon
                 <mon.id> may be 'mon.*' for all mons
daemon {type.id|path} <cmd>    Same as --admin-daemon, but auto-find admin socket
daemonperf {type.id | path} [stat-pats] [priority] [<interval>] [<count>]
daemonperf {type.id | path} list|ls [stat-pats] [priority]
    Get selected perf stats from daemon/admin socket
    Optional shell-glob comma-delim match string stat-pats
    Optional selection priority (can abbreviate name): critical, interesting, useful, noninteresting, debug
    List shows a table of all available stats
    Run <count> times (default forever), once per <interval> seconds (default 1)

Monitor commands:
=================
alerts send    (re)send alerts immediately
auth add <entity> [<caps>...]    add auth info for <entity> from input file, or random key if no input is given, and/or any caps specified in the command
auth caps <entity> <caps>...    update caps for <name> from caps specified in the command
auth export [<entity>]    write keyring for requested entity, or master keyring if none given
auth get <entity>    write keyring file with requested key
auth get-key <entity>    display requested key
auth get-or-create <entity> [<caps>...]    add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
auth get-or-create-key <entity> [<caps>...]    get, or add, key for <name> from system/caps pairs specified in the command. If key already exists, any given caps must match the existing caps for that key.
auth import    auth import: read keyring file from -i <file>
auth ls    list authentication state
auth print-key <entity>    display requested key
auth print_key <entity>    display requested key
auth rm <entity>    remove all caps for <name>
balancer dump <plan>    Show an optimization plan
balancer eval [<option>]    Evaluate data distribution for the current cluster or specific pool or specific plan
balancer eval-verbose [<option>]    Evaluate data distribution for the current cluster or specific pool or specific plan (verbosely)
balancer execute <plan>    Execute an optimization plan
balancer ls    List all plans
balancer mode none|crush-compat|upmap    Set balancer mode
balancer off    Disable automatic balancing
balancer on    Enable automatic balancing
balancer optimize <plan> [<pools>...]    Run optimizer to create a new plan
balancer pool add <pools>...    Enable automatic balancing for specific pools
balancer pool ls    List automatic balancing pools. Note that empty list means all existing pools will be automatic balancing targets, which is the default behaviour of balancer.
balancer pool rm <pools>...    Disable automatic balancing for specific pools
balancer reset    Discard all optimization plans
balancer rm <plan>    Discard an optimization plan
balancer show <plan>    Show details of an optimization plan
balancer status    Show balancer status
config assimilate-conf    Assimilate options from a conf, and return a new, minimal conf file
config dump    Show all configuration option(s)
config generate-minimal-conf    Generate a minimal ceph.conf file
config get <who> [<key>]    Show configuration option(s) for an entity
config help <key>    Describe a configuration option
config log [<num:int>]    Show recent history of config changes
config ls    List available configuration options
config reset <num:int>    Revert configuration to a historical version specified by <num>
config rm <who> <name>    Clear a configuration option for one or more entities
config set <who> <name> <value> [--force]    Set a configuration option for one or more entities
config show <who> [<key>]    Show running configuration
config show-with-defaults <who>    Show running configuration (including compiled-in defaults)
config-key dump [<key>]    dump keys and values (with optional prefix)
config-key exists <key>    check for <key>'s existence
config-key get <key>    get <key>
config-key ls    list keys
config-key rm <key>    rm <key>
config-key set <key> [<val>]    set <key> to value <val>
crash archive <id>    Acknowledge a crash and silence health warning(s)
crash archive-all    Acknowledge all new crashes and silence health warning(s)
crash info <id>    show crash dump metadata
crash json_report <hours>    Crashes in the last <hours> hours
crash ls    Show new and archived crash dumps
crash ls-new    Show new crash dumps
crash post    Add a crash dump (use -i <jsonfile>)
crash prune <keep>    Remove crashes older than <keep> days
crash rm <id>    Remove a saved crash <id>
crash stat    Summarize recorded crashes
device check-health    Check life expectancy of devices
device get-health-metrics <devid> [<sample>]    Show stored device metrics for the device
device info <devid>    Show information about a device
device light on|off <devid> [ident|fault] [--force]    Enable or disable the device light. Default type is `ident`. Usage: device light (on|off) <devid> [ident|fault] [--force]
device ls    Show devices
device ls-by-daemon <who>    Show devices associated with a daemon
device ls-by-host <host>    Show devices on a host
device ls-lights    List currently active device indicator lights
device monitoring off    Disable device health monitoring
device monitoring on    Enable device health monitoring
device predict-life-expectancy <devid>    Predict life expectancy with local predictor
device query-daemon-health-metrics <who>    Get device health metrics for a given daemon
device rm-life-expectancy <devid>    Clear predicted device life expectancy
device scrape-daemon-health-metrics <who>    Scrape and store device health metrics for a given daemon
device scrape-health-metrics [<devid>]    Scrape and store health metrics
device set-life-expectancy <devid> <from> [<to>]    Set predicted device life expectancy
df [detail]    show cluster free space stats
features    report of connected features
fs add_data_pool <fs_name> <pool>    add data pool <pool>
fs authorize <filesystem> <entity> <caps>...    add auth for <entity> to access file system <filesystem> based on following directory and permissions pairs
fs clone cancel <vol_name> <clone_name> [<group_name>]    Cancel an pending or ongoing clone operation.
fs clone status <vol_name> <clone_name> [<group_name>]    Get status on a cloned subvolume.
fs dump [<epoch:int>]    dump all CephFS status, optionally from epoch
fs fail <fs_name>    bring the file system down and all of its ranks
fs flag set enable_multiple <val> [--yes-i-really-mean-it]    Set a global CephFS flag
fs get <fs_name>    get info about one filesystem
fs ls    list filesystems
fs new <fs_name> <metadata> <data> [--force] [--allow-dangerous-metadata-overlay]    make new filesystem using named pools <metadata> and <data>
fs reset <fs_name> [--yes-i-really-mean-it]    disaster recovery only: reset to a single-MDS map
fs rm <fs_name> [--yes-i-really-mean-it]    disable the named filesystem
fs rm_data_pool <fs_name> <pool>    remove data pool <pool>
fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client <val> [--yes-i-really-mean-it] [--yes-i-really-really-mean-it]    set fs parameter <var> to <val>
fs set-default <fs_name>    set the default to the named filesystem
fs status [<fs>]    Show the status of a CephFS filesystem
fs subvolume create <vol_name> <sub_name> [<size:int>] [<group_name>] [<pool_layout>] [<uid:int>] [<gid:int>] [<mode>]    Create a CephFS subvolume in a volume, and optionally, with a specific size (in bytes), a specific data pool layout, a specific mode, and in a specific subvolume group
fs subvolume getpath <vol_name> <sub_name> [<group_name>]    Get the mountpath of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
fs subvolume info <vol_name> <sub_name> [<group_name>]    Get the metadata of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
fs subvolume ls <vol_name> [<group_name>]    List subvolumes
fs subvolume resize <vol_name> <sub_name> <new_size> [<group_name>] [--no-shrink]    Resize a CephFS subvolume
fs subvolume rm <vol_name> <sub_name> [<group_name>] [--force]    Delete a CephFS subvolume in a volume, and optionally, in a specific subvolume group
fs subvolume snapshot clone <vol_name> <sub_name> <snap_name> <target_sub_name> [<pool_layout>] [<group_name>] [<target_group_name>]    Clone a snapshot to target subvolume
fs subvolume snapshot create <vol_name> <sub_name> <snap_name> [<group_name>]    Create a snapshot of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
fs subvolume snapshot ls <vol_name> <sub_name> [<group_name>]    List subvolume snapshots
fs subvolume snapshot protect <vol_name> <sub_name> <snap_name> [<group_name>]    Protect snapshot of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
fs subvolume snapshot rm <vol_name> <sub_name> <snap_name> [<group_name>] [--force]    Delete a snapshot of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
fs subvolume snapshot unprotect <vol_name> <sub_name> <snap_name> [<group_name>]    Unprotect a snapshot of a CephFS subvolume in a volume, and optionally, in a specific subvolume group
fs subvolumegroup create <vol_name> <group_name> [<pool_layout>] [<uid:int>] [<gid:int>] [<mode>]    Create a CephFS subvolume group in a volume, and optionally, with a specific data pool layout, and a specific numeric mode
fs subvolumegroup getpath <vol_name> <group_name>    Get the mountpath of a CephFS subvolume group in a volume
fs subvolumegroup ls <vol_name>    List subvolumegroups
fs subvolumegroup rm <vol_name> <group_name> [--force]    Delete a CephFS subvolume group in a volume
fs subvolumegroup snapshot create <vol_name> <group_name> <snap_name>    Create a snapshot of a CephFS subvolume group in a volume
fs subvolumegroup snapshot ls <vol_name> <group_name>    List subvolumegroup snapshots
fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]    Delete a snapshot of a CephFS subvolume group in a volume
fs volume create <name> [<placement>]    Create a CephFS volume
fs volume ls    List volumes
fs volume rm <vol_name> [<yes-i-really-mean-it>]    Delete a FS volume by passing --yes-i-really-mean-it flag
fsid    show cluster FSID/UUID
health [detail]    show cluster health
health mute <code> [<ttl>] [--sticky]    mute health alert
health unmute [<code>]    unmute existing health alert mute(s)
influx config-set <key> <value>    Set a configuration value
influx config-show    Show current configuration
influx send    Force sending data to Influx
insights    Retrieve insights report
insights prune-health <hours>    Remove health history older than <hours> hours
iostat    Get IO rates
log <logtext>...    log supplied text to the monitor log
log last [<num:int>] [debug|info|sec|warn|error] [*|cluster|audit|cephadm]    print last few lines of the cluster log
mds compat rm_compat <feature:int>    remove compatible feature
mds compat rm_incompat <feature:int>    remove incompatible feature
mds compat show    show mds compatibility settings
mds count-metadata <property>    count MDSs by metadata field property
mds fail <role_or_gid>    Mark MDS failed: trigger a failover if a standby is available
mds metadata [<who>]    fetch metadata for mds <role>
mds ok-to-stop <ids>...    check whether stopping the specified MDS would reduce immediate availability
mds repaired <role>    mark a damaged MDS rank as no longer damaged
mds rm <gid:int>    remove nonactive mds
mds versions    check running versions of MDSs
mgr count-metadata <property>    count ceph-mgr daemons by metadata field property
mgr dump [<epoch:int>]    dump the latest MgrMap
mgr fail [<who>]    treat the named manager daemon as failed
mgr metadata [<who>]    dump metadata for all daemons or a specific daemon
mgr module disable <module>    disable mgr module
mgr module enable <module> [--force]    enable mgr module
mgr module ls    list active mgr modules
mgr self-test background start <workload>    Activate a background workload (one of command_spam, throw_exception)
mgr self-test background stop    Stop background workload if any is running
mgr self-test cluster-log <channel> <priority> <message>    Create an audit log record.
mgr self-test config get <key>    Peek at a configuration value
mgr self-test config get_localized <key>    Peek at a configuration value (localized variant)
mgr self-test health clear [<checks>...]    Clear health checks by name. If no names provided, clear all.
mgr self-test health set <checks>    Set a health check from a JSON-formatted description.
mgr self-test insights_set_now_offset <hours>    Set the now time for the insights module.
mgr self-test module <module>    Run another module's self_test() method
mgr self-test remote    Test inter-module calls
mgr self-test run    Run mgr python interface tests
mgr services    list service endpoints provided by mgr modules
mgr versions    check running versions of ceph-mgr daemons
mon add <name> <addr>    add new monitor named <name> at <addr>
mon count-metadata <property>    count mons by metadata field property
mon dump [<epoch:int>]    dump formatted monmap (optionally from epoch)
mon enable-msgr2    enable the msgr2 protocol on port 3300
mon feature ls [--with-value]    list available mon map features to be set/unset
mon feature set <feature_name> [--yes-i-really-mean-it]    set provided feature on mon map
mon getmap [<epoch:int>]    get monmap
mon metadata [<id>]    fetch metadata for mon <id>
mon ok-to-add-offline    check whether adding a mon and not starting it would break quorum
mon ok-to-rm <id>    check whether removing the specified mon would break quorum
mon ok-to-stop <ids>...    check whether mon(s) can be safely stopped without reducing immediate availability
mon rm <name>    remove monitor named <name>
mon scrub    scrub the monitor stores
mon set-addrs <name> <addrs>    set the addrs (IPs and ports) a specific monitor binds to
mon set-rank <name> <rank:int>    set the rank for the specified mon
mon set-weight <name> <weight:int>    set the weight for the specified mon
mon stat    summarize monitor status
mon versions    check running versions of monitors
node ls [all|osd|mon|mds|mgr]    list all nodes in cluster [type]
orch apply [mon|mgr|rbd-mirror|crash|alertmanager|grafana|node-exporter|prometheus] [<placement>] [--unmanaged]    Update the size or placement for a service or apply a large yaml spec
orch apply mds <fs_name> [<placement>] [--unmanaged]    Update the number of MDS instances for the given fs_name
orch apply nfs <svc_id> <pool> [<namespace>] [<placement>] [--unmanaged]    Scale an NFS service
orch apply osd [--all-available-devices] [--preview] [<service_name>] [--unmanaged] [plain|json|json-pretty|yaml]    Create OSD daemon(s) using a drive group spec
orch apply rgw <realm_name> <zone_name> [<subcluster>] [<port:int>] [--ssl] [<placement>] [--unmanaged]    Update the number of RGW instances for the given zone
orch cancel    cancels ongoing operations
orch daemon add [mon|mgr|rbd-mirror|crash|alertmanager|grafana|node-exporter|prometheus] [<placement>]    Add daemon(s)
orch daemon add iscsi <pool> [<fqdn_enabled>] [<trusted_ip_list>] [<placement>]    Start iscsi daemon(s)
orch daemon add mds <fs_name> [<placement>]    Start MDS daemon(s)
orch daemon add nfs <svc_arg> <pool> [<namespace>] [<placement>]    Start NFS daemon(s)
orch daemon add osd [<svc_arg>]    Create an OSD service. Either --svc_arg=host:drives
orch daemon add rgw [<realm_name>] [<zone_name>] [<placement>]    Start RGW daemon(s)
orch daemon rm <names>... [--force]    Remove specific daemon(s)
orch daemon start|stop|restart|redeploy|reconfig <name>    Start, stop, restart, redeploy, or reconfig a specific daemon
orch device ls [<hostname>...] [plain|json|json-pretty|yaml] [--refresh]    List devices on a host
orch device zap <hostname> <path> [--force]    Zap (erase!) a device so it can be re-used
orch host add <hostname> [<addr>] [<labels>...]    Add a host
orch host label add <hostname> <label>    Add a host label
orch host label rm <hostname> <label>    Remove a host label
orch host ls [plain|json|json-pretty|yaml]    List hosts
orch host rm <hostname>    Remove a host
orch host set-addr <hostname> <addr>    Update a host address
orch ls [<service_type>] [<service_name>] [--export] [plain|json|json-pretty|yaml] [--refresh]    List services known to orchestrator
orch osd rm <svc_id>... [--replace] [--force]    Remove OSD services
orch osd rm status    status of OSD removal operation
orch pause    Pause orchestrator background work
orch ps [<hostname>] [<service_name>] [<daemon_type>] [<daemon_id>] [plain|json|json-pretty|yaml] [--refresh]    List daemons known to orchestrator
orch resume    Resume orchestrator background work (if paused)
orch rm <service_name> [--force]    Remove a service
orch set backend <module_name>    Select orchestrator module backend
orch start|stop|restart|redeploy|reconfig <service_name>    Start, stop, restart, redeploy, or reconfig an entire service (i.e. all daemons)
orch status    Report configured backend and its status
orch upgrade check [<image>] [<ceph_version>]    Check service versions vs available and target containers
orch upgrade pause    Pause an in-progress upgrade
orch upgrade resume    Resume paused upgrade
orch upgrade start [<image>] [<ceph_version>]    Initiate upgrade
orch upgrade status    Check service versions vs available and target containers
orch upgrade stop    Stop an in-progress upgrade
osd blacklist add|rm <addr> [<expire:float>]    add (optionally until <expire> seconds from now) or remove <addr> from blacklist
osd blacklist clear    clear all blacklisted clients
osd blacklist ls    show blacklisted clients
osd blocked-by    print histogram of which OSDs are blocking their peers
osd count-metadata <property>    count OSDs by metadata field property
osd crush add <id|osd.id> <weight:float> <args>...    add or update crushmap position and weight for <name> with <weight> and location <args>
osd crush add-bucket <name> <type> [<args>...]    add no-parent (probably root) crush bucket <name> of type <type> to location <args>
osd crush class create <class>    create crush device class <class>
osd crush class ls    list all crush device classes
osd crush class ls-osd <class>    list all osds belonging to the specific <class>
osd crush class rename <srcname> <dstname>    rename crush device class <srcname> to <dstname>
osd crush class rm <class>    remove crush device class <class>
osd crush create-or-move <id|osd.id> <weight:float> <args>...    create entry or move existing entry for <name> <weight> at/to location <args>
osd crush dump    dump crush map
osd crush get-device-class <ids>...    get classes of specified osd(s) <id> [<id>...]
osd crush get-tunable straw_calc_version    get crush tunable <tunable>
osd crush link <name> <args>...    link existing entry for <name> under location <args>
osd crush ls <node>    list items beneath a node in the CRUSH tree
osd crush move <name> <args>...    move existing entry for <name> to location <args>
osd crush rename-bucket <srcname> <dstname>    rename bucket <srcname> to <dstname>
osd crush reweight <name> <weight:float>    change <name>'s weight to <weight> in crush map
osd crush reweight-all    recalculate the weights for the tree to ensure they sum correctly
osd crush reweight-subtree <name> <weight:float>    change all leaf items beneath <name> to <weight> in crush map
osd crush rm <name> [<ancestor>]    remove <name> from crush map (everywhere, or just at <ancestor>)
osd crush rm-device-class <ids>...    remove class of the osd(s) <id> [<id>...], or use <all|any> to remove all.
osd crush rule create-erasure <name> [<profile>]    create crush rule <name> for erasure coded pool created with <profile> (default default)
osd crush rule create-replicated <name> <root> <type> [<class>]    create crush rule <name> for replicated pool to start from <root>, replicate across buckets of type <type>, use devices of type <class> (ssd or hdd)
osd crush rule create-simple <name> <root> <type> [firstn|indep]    create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools)
osd crush rule dump [<name>]    dump crush rule <name> (default all)
osd crush rule ls    list crush rules
osd crush rule ls-by-class <class>    list all crush rules that reference the same <class>
osd crush rule rename <srcname> <dstname>    rename crush rule <srcname> to <dstname>
osd crush rule rm <name>    remove crush rule <name>
osd crush set <id|osd.id> <weight:float> <args>...    update crushmap position and weight for <name> to <weight> with location <args>
osd crush set [<prior_version:int>]    set crush map from input file
osd crush set-all-straw-buckets-to-straw2    convert all CRUSH current straw buckets to use the straw2 algorithm
osd crush set-device-class <class> <ids>...    set the <class> of the osd(s) <id> [<id>...], or use <all|any> to set all.
osd crush set-tunable straw_calc_version <value:int>    set crush tunable <tunable> to <value>
osd crush show-tunables    show current crush tunables
osd crush swap-bucket <source> <dest> [--yes-i-really-mean-it]    swap existing bucket contents from (orphan) bucket <source> and <target>
osd crush tree [--show-shadow]    dump crush buckets and items in a tree view
osd crush tunables legacy|argonaut|bobtail|firefly|hammer|jewel|optimal|default    set crush tunables values to <profile>
osd crush unlink <name> [<ancestor>]    unlink <name> from crush map (everywhere, or just at <ancestor>)
osd crush weight-set create <pool> flat|positional    create a weight-set for a given pool
osd crush weight-set create-compat    create a default backward-compatible weight-set
osd crush weight-set dump    dump crush weight sets
osd crush weight-set ls    list crush weight sets
osd crush weight-set reweight <pool> <item> <weight:float>...    set weight for an item (bucket or osd) in a pool's weight-set
osd crush weight-set reweight-compat <item> <weight:float>...    set weight for an item (bucket or osd) in the backward-compatible weight-set
osd crush weight-set rm <pool>    remove the weight-set for a given pool
osd crush weight-set rm-compat    remove the backward-compatible weight-set
osd deep-scrub <who>    initiate deep scrub on osd <who>, or use <all|any> to deep scrub all
osd destroy <id|osd.id> [--force] [--yes-i-really-mean-it]    mark osd as being destroyed. Keeps the ID intact (allowing reuse), but removes cephx keys, config-key data and lockbox keys, rendering data permanently unreadable.
osd df [plain|tree] [class|name] [<filter>]    show OSD utilization
osd down <ids>... [--definitely-dead]    set osd(s) <id> [<id>...] down, or use <any|all> to set all osds down
osd drain <osd_ids:int>...    drain osd ids
osd drain status    show status
osd drain stop [<osd_ids:int>...]    show status for osds. Stopping all if osd_ids are omitted
osd dump [<epoch:int>]    print summary of OSD map
osd erasure-code-profile get <name>    get erasure code profile <name>
osd erasure-code-profile ls    list all erasure code profiles
osd erasure-code-profile rm <name>    remove erasure code profile <name>
osd erasure-code-profile set <name> [<profile>...] [--force]    create erasure code profile <name> with [<key[=value]> ...] pairs. Add a --force at the end to override an existing profile (VERY DANGEROUS)
osd find <id|osd.id>    find osd <id> in the CRUSH map and show its location
osd force-create-pg <pgid> [--yes-i-really-mean-it]    force creation of pg <pgid>
osd get-require-min-compat-client    get the minimum client version we will maintain compatibility with
osd getcrushmap [<epoch:int>]    get CRUSH map
osd getmap [<epoch:int>]    get OSD map
osd getmaxosd    show largest OSD id
osd in <ids>...    set osd(s) <id> [<id>...] in, can use <any|all> to automatically set all previously out osds in
osd info [<id|osd.id>]    print osd's {id} information (instead of all osds from map)
osd last-stat-seq <id|osd.id>    get the last pg stats sequence number reported for this osd
osd lost <id|osd.id> [--yes-i-really-mean-it]    mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL
osd ls [<epoch:int>]    show all OSD ids
osd ls-tree [<epoch:int>] <name>    show OSD ids under bucket <name> in the CRUSH map
osd map <pool> <object> [<nspace>]    find pg for <object> in <pool> with [namespace]
osd metadata [<id|osd.id>]    fetch metadata for osd {id} (default all)
osd new <uuid> [<id|osd.id>]    Create a new OSD. If supplied, the `id` to be replaced needs to exist and have been previously destroyed. Reads secrets from JSON file via `-i <file>` (see man page).
osd numa-status    show NUMA status of OSDs
osd ok-to-stop <ids>...    check whether osd(s) can be safely stopped without reducing immediate data availability
osd out <ids>...    set osd(s) <id> [<id>...] out, or use <any|all> to set all osds out
osd pause    pause osd
osd perf    print dump of OSD perf summary stats
osd pg-temp <pgid> [<id|osd.id>...]    set pg_temp mapping pgid:[<id> [<id>...]] (developers only)
osd pg-upmap <pgid> <id|osd.id>...    set pg_upmap mapping <pgid>:[<id> [<id>...]] (developers only)
osd pg-upmap-items <pgid> <id|osd.id>...    set pg_upmap_items mapping <pgid>:{<id> to <id>, [...]} (developers only)
osd pool application disable <pool> <app> [--yes-i-really-mean-it]    disables use of an application <app> on pool <poolname>
osd pool application enable <pool> <app> [--yes-i-really-mean-it]    enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname>
osd pool application get [<pool>] [<app>] [<key>]    get value of key <key> of application <app> on pool <poolname>
osd pool application rm <pool> <app> <key>    removes application <app> metadata key <key> on pool <poolname>
osd pool application set <pool> <app> <key> <value>    sets application <app> metadata key <key> to <value> on pool <poolname>
osd pool autoscale-status    report on pool pg_num sizing recommendation and intent
osd pool cancel-force-backfill <who>...    restore normal recovery priority of specified pool <who>
osd pool cancel-force-recovery <who>...    restore normal recovery priority of specified pool <who>
osd pool create <pool> [<pg_num:int>] [<pgp_num:int>] [replicated|erasure] [<erasure_code_profile>] [<rule>] [<expected_num_objects:int>] [<size:int>] [<pg_num_min:int>] [on|off|warn] [<target_size_bytes:int>] [<target_size_ratio:float>]    create pool
osd pool deep-scrub <who>...    initiate deep-scrub on pool <who>
osd pool force-backfill <who>...    force backfill of specified pool <who> first
osd pool force-recovery <who>...    force recovery of specified pool <who> first
osd pool get <pool> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|all|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|target_size_bytes|target_size_ratio    get pool parameter <var>
osd pool get-quota <pool>    obtain object or byte limits for pool
osd pool ls [detail]    list pools
osd pool mksnap <pool> <snap>    make snapshot <snap> in <pool>
osd pool rename <srcpool> <destpool>    rename <srcpool> to <destpool>
osd pool repair <who>...    initiate repair on pool <who>
osd pool rm <pool> [<pool2>] [--yes-i-really-really-mean-it] [--yes-i-really-really-mean-it-not-faking]    remove pool
osd pool rmsnap <pool> <snap>    remove snapshot <snap> from <pool>
osd pool scrub <who>...    initiate scrub on pool <who>
osd pool set <pool> size|min_size|pg_num|pgp_num|pgp_num_actual|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|target_size_bytes|target_size_ratio <val> [--yes-i-really-mean-it]    set pool parameter <var> to <val>
osd pool set-quota <pool> max_objects|max_bytes <val>    set object or byte limit on pool
osd pool stats [<pool_name>]    obtain stats from all pools, or from specified pool
osd primary-affinity <id|osd.id> <weight:float>    adjust osd primary-affinity from 0.0 <= <weight> <= 1.0
osd primary-temp <pgid> <id|osd.id>    set primary_temp mapping pgid:<id>|-1 (developers only)
osd purge <id|osd.id> [--force] [--yes-i-really-mean-it]    purge all osd data from the monitors including the OSD id and CRUSH position
osd purge-new <id|osd.id> [--yes-i-really-mean-it]    purge all traces of an OSD that was partially created but never started
osd repair <who>    initiate repair on osd <who>, or use <all|any> to repair all
osd require-osd-release luminous|mimic|nautilus|octopus [--yes-i-really-mean-it]    set the minimum allowed OSD release to participate in the cluster
osd reweight <id|osd.id> <weight:float>    reweight osd to 0.0 < <weight> < 1.0
osd reweight-by-pg [<oload:int>] [<max_change:float>] [<max_osds:int>] [<pools>...]    reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
osd reweight-by-utilization [<oload:int>] [<max_change:float>] [<max_osds:int>] [--no-increasing]    reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
osd reweightn <weights>    reweight osds with {<id>: <weight>,...})
osd rm-pg-upmap <pgid>    clear pg_upmap mapping for <pgid> (developers only)
osd rm-pg-upmap-items <pgid>    clear pg_upmap_items mapping for <pgid> (developers only)
osd safe-to-destroy <ids>...    check whether osd(s) can be safely destroyed without reducing data durability
osd scrub <who>    initiate scrub on osd <who>, or use <all|any> to scrub all
osd set full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|pglog_hardlimit [--yes-i-really-mean-it]    set <key>
osd set-backfillfull-ratio <ratio:float>    set usage ratio at which OSDs are marked too full to backfill
osd set-full-ratio <ratio:float>    set usage ratio at which OSDs are marked full
osd set-group <flags> <who>...    set <flags> for batch osds or crush nodes, <flags> must be a comma-separated subset of {noup,nodown,noin,noout}
osd set-nearfull-ratio <ratio:float>    set usage ratio at which OSDs are marked near-full
osd set-require-min-compat-client <version> [--yes-i-really-mean-it]    set the minimum client version we will maintain compatibility with
osd setcrushmap [<prior_version:int>]    set crush map from input file
osd setmaxosd <newmax:int>    set new maximum osd value
osd stat    print summary of OSD map
osd status [<bucket>]    Show the status of OSDs within a bucket, or all
osd stop <ids>...    stop the corresponding osd daemons and mark them as down
osd test-reweight-by-pg [<oload:int>] [<max_change:float>] [<max_osds:int>] [<pools>...]    dry run of reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
osd test-reweight-by-utilization [<oload:int>] [<max_change:float>] [<max_osds:int>] [--no-increasing]    dry run of reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
osd tier add <pool> <tierpool> [--force-nonempty]    add the tier <tierpool> (the second one) to base pool <pool> (the first one)
osd tier add-cache <pool> <tierpool> <size:int>    add a cache <tierpool> (the second one) of size <size> to existing pool <pool> (the first one)
osd tier cache-mode <pool> none|writeback|forward|readonly|readforward|proxy|readproxy [--yes-i-really-mean-it]    specify the caching mode for cache tier <pool>
osd tier rm <pool> <tierpool>    remove the tier <tierpool> (the second one) from base pool <pool> (the first one)
osd tier rm-overlay <pool>    remove the overlay pool for base pool <pool>
osd tier set-overlay <pool> <overlaypool>    set the overlay pool for base pool <pool> to be <overlaypool>
osd tree [<epoch:int>] [up|down|in|out|destroyed...]    print OSD tree
osd tree-from [<epoch:int>] <bucket> [up|down|in|out|destroyed...]    print OSD tree in bucket
osd unpause    unpause osd
osd unset full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim    unset <key>
osd unset-group <flags> <who>...    unset <flags> for batch osds or crush nodes, <flags> must be a comma-separated subset of {noup,nodown,noin,noout}
osd utilization    get basic pg distribution stats
osd versions    check running versions of OSDs
pg cancel-force-backfill <pgid>...    restore normal backfill priority of <pgid>
pg cancel-force-recovery <pgid>...    restore normal recovery priority of <pgid>
pg debug unfound_objects_exist|degraded_pgs_exist    show debug info about pgs
pg deep-scrub <pgid>    start deep-scrub on <pgid>
pg dump [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]    show human-readable versions of pg map (only 'all' valid with plain)
pg dump_json [all|summary|sum|pools|osds|pgs...]    show human-readable version of pg map in json only
pg dump_pools_json    show pg pools info in json only
pg dump_stuck [inactive|unclean|stale|undersized|degraded...] [<threshold:int>]    show information about stuck pgs
pg force-backfill <pgid>...    force backfill of <pgid> first
pg force-recovery <pgid>...    force recovery of <pgid> first
pg getmap    get binary pg map to -o/stdout
pg ls [<pool:int>] [<states>...]    list pg with specific pool, osd, state
pg ls-by-osd <id|osd.id> [<pool:int>] [<states>...]    list pg on osd [osd]
pg ls-by-pool <poolstr> [<states>...]    list pg with pool = [poolname]
pg ls-by-primary <id|osd.id> [<pool:int>] [<states>...]    list pg with primary = [osd]
pg map <pgid>    show mapping of pg to osds
pg repair <pgid>    start repair on <pgid>
pg repeer <pgid>    force a PG to repeer
pg scrub <pgid>    start scrub on <pgid>
pg stat    show placement group status.
progress    Show progress of recovery operations
progress clear    Reset progress tracking
progress json    Show machine readable progress information
prometheus file_sd_config    Return file_sd compatible prometheus config for mgr cluster
quorum_status    report status of monitor quorum
rbd mirror snapshot schedule add <level_spec> <interval> [<start_time>]    Add rbd mirror snapshot schedule
rbd mirror snapshot schedule list [<level_spec>]    List rbd mirror snapshot schedule
rbd mirror snapshot schedule remove <level_spec> [<interval>] [<start_time>]    Remove rbd mirror snapshot schedule
rbd mirror snapshot schedule status [<level_spec>]    Show rbd mirror snapshot schedule status
rbd perf image counters [<pool_spec>] [write_ops|write_bytes|write_latency|read_ops|read_bytes|read_latency]    Retrieve current RBD IO performance counters
rbd perf image stats [<pool_spec>] [write_ops|write_bytes|write_latency|read_ops|read_bytes|read_latency]    Retrieve current RBD IO performance stats
rbd task add flatten <image_spec>    Flatten a cloned image asynchronously in the background
rbd task add migration abort <image_spec>    Abort a prepared migration asynchronously in the background
rbd task add migration commit <image_spec>    Commit an executed migration asynchronously in the background
rbd task add migration execute <image_spec>    Execute an image migration asynchronously in the background
rbd task add remove <image_spec>    Remove an image asynchronously in the background
rbd task add trash remove <image_id_spec>    Remove an image from the trash asynchronously in the background
rbd task cancel <task_id>    Cancel a pending or running asynchronous task
rbd task list [<task_id>]    List pending or running asynchronous tasks
rbd trash purge schedule add <level_spec> <interval> [<start_time>]    Add rbd trash purge schedule
rbd trash purge schedule list [<level_spec>]    List rbd trash purge schedule
rbd trash purge schedule remove <level_spec> [<interval>] [<start_time>]    Remove rbd trash purge schedule
rbd trash purge schedule status [<level_spec>]    Show rbd trash purge schedule status
report [<tags>...]    report full status of cluster, optional title tag strings
restful create-key <key_name>    Create an API key with this name
restful create-self-signed-cert    Create localized self signed certificate
restful delete-key <key_name>    Delete an API key with this name
restful list-keys    List all API keys
restful restart    Restart API server
service dump    dump service map
service status    dump service state
status    show cluster status
telegraf config-set <key> <value>    Set a configuration value
telegraf config-show    Show current configuration
telegraf send    Force sending data to Telegraf
telemetry off    Disable telemetry reports from this cluster
telemetry on [<license>]    Enable telemetry reports from this cluster
telemetry send [ceph|device...] [<license>]    Force sending data to Ceph telemetry
telemetry show [<channels>...]    Show last report or report to be sent
telemetry show-device    Show last device report or device report to be sent
telemetry status    Show current configuration
tell <type.id> <args>...    send a command to a specific daemon
test_orchestrator load_data    load dummy data into test orchestrator
time-sync-status    show time sync status
versions    check running versions of ceph daemons
zabbix config-set <key> <value>    Set a configuration value
zabbix config-show    Show current configuration
zabbix discovery    Discovering Zabbix data
zabbix send    Force sending data to Zabbix
[root@node01 ~]#
鲲鹏BoostKit分布式存储使能套件 Ceph移植&部署&调优指南 02.pdf (Kunpeng BoostKit distributed storage enablement kit: Ceph porting, deployment, and tuning guide, part 02)