Command-line operations

  • /opt/flink-1.11.2/bin/flink run -d -ynm test-play -yqu public -m yarn-cluster -yjm 1024 -ytm 1024 /mnt/data7/fanjianglong/test/play-summary-1.0-SNAPSHOT.jar

```bash
Syntax: run [OPTIONS] <jar-file> <arguments>
"run" action options:
  -c,--class <classname>               Class with the program entry point
                                       ("main()" method). Only needed if the
                                       JAR file does not specify the class in
                                       its manifest.
  -C,--classpath <url>                 Adds a URL to each user code
                                       classloader on all nodes in the
                                       cluster. The paths must specify a
                                       protocol (e.g. file://) and be
                                       accessible on all nodes (e.g. by means
                                       of a NFS share). You can use this
                                       option multiple times for specifying
                                       more than one URL. The protocol must
                                       be supported by the
                                       java.net.URLClassLoader.
  -d,--detached                        If present, runs the job in detached
                                       mode
  -n,--allowNonRestoredState           Allow to skip savepoint state that
                                       cannot be restored. You need to allow
                                       this if you removed an operator from
                                       your program that was part of the
                                       program when the savepoint was
                                       triggered.
  -p,--parallelism <parallelism>       The parallelism with which to run the
                                       program. Optional flag to override the
                                       default value specified in the
                                       configuration.
  -py,--python <pythonFile>            Python script with the program entry
                                       point. The dependent resources can be
                                       configured with the --pyFiles option.
  -pyarch,--pyArchives <arg>           Add python archive files for job. The
                                       archive files will be extracted to the
                                       working directory of the python UDF
                                       worker. Currently only zip-format is
                                       supported. For each archive file, a
                                       target directory can be specified. If
                                       the target directory name is
                                       specified, the archive file will be
                                       extracted to a directory with the
                                       specified name. Otherwise, the archive
                                       file will be extracted to a directory
                                       with the same name as the archive
                                       file. The files uploaded via this
                                       option are accessible via relative
                                       path. '#' can be used as the separator
                                       of the archive file path and the
                                       target directory name. Comma (',') can
                                       be used as the separator to specify
                                       multiple archive files. This option
                                       can be used to upload the virtual
                                       environment and the data files used in
                                       Python UDFs (e.g.: --pyArchives
                                       file:///tmp/py37.zip,file:///tmp/data.zip#data
                                       --pyExecutable
                                       py37.zip/py37/bin/python). The data
                                       files can be accessed in Python UDFs,
                                       e.g.: f = open('data/data.txt', 'r').
  -pyexec,--pyExecutable <arg>         Specify the path of the python
                                       interpreter used to execute the python
                                       UDF worker (e.g.: --pyExecutable
                                       /usr/local/bin/python3). The python
                                       UDF worker depends on Python 3.5+,
                                       Apache Beam (version == 2.19.0), Pip
                                       (version >= 7.1.0) and SetupTools
                                       (version >= 37.0.0). Please ensure
                                       that the specified environment meets
                                       the above requirements.
  -pyfs,--pyFiles <pythonFiles>        Attach custom python files for job.
                                       These files will be added to the
                                       PYTHONPATH of both the local client
                                       and the remote python UDF worker. The
                                       standard python resource file suffixes
                                       such as .py/.egg/.zip or directories
                                       are all supported. Comma (',') can be
                                       used as the separator to specify
                                       multiple files (e.g.: --pyFiles
                                       file:///tmp/myresource.zip,hdfs:///$namenode_address/myresource2.zip).
  -pym,--pyModule <pythonModule>       Python module with the program entry
                                       point. This option must be used in
                                       conjunction with --pyFiles.
  -pyreq,--pyRequirements <arg>        Specify a requirements.txt file which
                                       defines the third-party dependencies.
                                       These dependencies will be installed
                                       and added to the PYTHONPATH of the
                                       python UDF worker. A directory which
                                       contains the installation packages of
                                       these dependencies can optionally be
                                       specified. Use '#' as the separator if
                                       the optional parameter exists (e.g.:
                                       --pyRequirements
                                       file:///tmp/requirements.txt#file:///tmp/cached_dir).
  -s,--fromSavepoint <savepointPath>   Path to a savepoint to restore the job
                                       from (for example
                                       hdfs:///flink/savepoint-1537).
  -sae,--shutdownOnAttachedExit        If the job is submitted in attached
                                       mode, perform a best-effort cluster
                                       shutdown when the CLI is terminated
                                       abruptly, e.g., in response to a user
                                       interrupt, such as typing Ctrl + C.
Options for Generic CLI mode:
  -D <property=value>   Generic configuration options for
                        execution/deployment and for the configured
                        executor. The available options can be found at
                        https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
  -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which
                        is also available with the "Application Mode". The
                        name of the executor to be used for executing the
                        given job, which is equivalent to the
                        "execution.target" config option. The currently
                        available executors are: "collection", "remote",
                        "local", "kubernetes-session", "yarn-per-job",
                        "yarn-session".
  -t,--target <arg>     The deployment target for the given application,
                        which is equivalent to the "execution.target" config
                        option. The currently available targets are:
                        "collection", "remote", "local",
                        "kubernetes-session", "yarn-per-job", "yarn-session",
                        "yarn-application" and "kubernetes-application".
Options for yarn-cluster mode:
  -d,--detached                        If present, runs the job in detached
                                       mode
  -m,--jobmanager <arg>                Address of the JobManager to which to
                                       connect. Use this flag to connect to a
                                       different JobManager than the one
                                       specified in the configuration.
  -yat,--yarnapplicationType <arg>     Set a custom application type for the
                                       application on YARN
  -yD <property=value>                 Use value for given property
  -yd,--yarndetached                   If present, runs the job in detached
                                       mode (deprecated; use the non-YARN
                                       specific option instead)
  -yh,--yarnhelp                       Help for the Yarn session CLI.
  -yid,--yarnapplicationId <arg>       Attach to running YARN session
  -yj,--yarnjar <arg>                  Path to Flink jar file
  -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container with
                                       optional unit (default: MB)
  -ynl,--yarnnodeLabel <arg>           Specify YARN node label for the YARN
                                       application
  -ynm,--yarnname <arg>                Set a custom name for the application
                                       on YARN
  -yq,--yarnquery                      Display available YARN resources
                                       (memory, cores)
  -yqu,--yarnqueue <arg>               Specify YARN queue.
  -ys,--yarnslots <arg>                Number of slots per TaskManager
  -yt,--yarnship <arg>                 Ship files in the specified directory
                                       (t for transfer)
  -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container with
                                       optional unit (default: MB)
  -yz,--yarnzookeeperNamespace <arg>   Namespace to create the Zookeeper
                                       sub-paths for high availability mode
  -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                       sub-paths for high availability mode
Options for default mode:
  -m,--jobmanager <arg>           Address of the JobManager to which to
                                  connect. Use this flag to connect to a
                                  different JobManager than the one specified
                                  in the configuration.
  -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths
                                  for high availability mode

Action "info" shows the optimized execution plan of the program (JSON).

Syntax: info [OPTIONS] <jar-file> <arguments>
"info" action options:
  -c,--class <classname>           Class with the program entry point
                                   ("main()" method). Only needed if the JAR
                                   file does not specify the class in its
                                   manifest.
  -p,--parallelism <parallelism>   The parallelism with which to run the
                                   program. Optional flag to override the
                                   default value specified in the
                                   configuration.

Action "list" lists running and scheduled programs.

Syntax: list [OPTIONS]
"list" action options:
  -a,--all         Show all programs and their JobIDs
  -r,--running     Show only running programs and their JobIDs
  -s,--scheduled   Show only scheduled programs and their JobIDs
Options for Generic CLI mode:
  -D <property=value>   Generic configuration options for
                        execution/deployment and for the configured
                        executor. The available options can be found at
                        https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
  -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which
                        is also available with the "Application Mode". The
                        name of the executor to be used for executing the
                        given job, which is equivalent to the
                        "execution.target" config option. The currently
                        available executors are: "collection", "remote",
                        "local", "kubernetes-session", "yarn-per-job",
                        "yarn-session".
  -t,--target <arg>     The deployment target for the given application,
                        which is equivalent to the "execution.target" config
                        option. The currently available targets are:
                        "collection", "remote", "local",
                        "kubernetes-session", "yarn-per-job", "yarn-session",
                        "yarn-application" and "kubernetes-application".
Options for yarn-cluster mode:
  -m,--jobmanager <arg>            Address of the JobManager to which to
                                   connect. Use this flag to connect to a
                                   different JobManager than the one
                                   specified in the configuration.
  -yid,--yarnapplicationId <arg>   Attach to running YARN session
  -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper
                                   sub-paths for high availability mode
Options for default mode:
  -m,--jobmanager <arg>           Address of the JobManager to which to
                                  connect. Use this flag to connect to a
                                  different JobManager than the one
                                  specified in the configuration.
  -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper
                                  sub-paths for high availability mode

Action "stop" stops a running program with a savepoint (streaming jobs only).

Syntax: stop [OPTIONS] <Job ID>
"stop" action options:
  -d,--drain                           Send MAX_WATERMARK before taking the
                                       savepoint and stopping the pipeline.
  -p,--savepointPath <savepointPath>   Path to the savepoint (for example
                                       hdfs:///flink/savepoint-1537). If no
                                       directory is specified, the configured
                                       default will be used
                                       ("state.savepoints.dir").
[The Generic CLI, yarn-cluster and default mode options repeat verbatim here
and are identical to those listed under "list" above.]

Action "cancel" cancels a running program.

Syntax: cancel [OPTIONS] <Job ID>
"cancel" action options:
  -s,--withSavepoint <targetDirectory>   DEPRECATION WARNING: Cancelling a
                                         job with savepoint is deprecated.
                                         Use "stop" instead. Trigger
                                         savepoint and cancel job. The target
                                         directory is optional. If no
                                         directory is specified, the
                                         configured default directory
                                         (state.savepoints.dir) is used.
[The Generic CLI, yarn-cluster and default mode options repeat verbatim here
and are identical to those listed under "list" above.]

Action "savepoint" triggers savepoints for a running job or disposes existing ones.

Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
"savepoint" action options:
  -d,--dispose <savepointPath>   Path of savepoint to dispose.
  -j,--jarfile <jarfile>         Flink program JAR file.
[The Generic CLI, yarn-cluster and default mode options repeat verbatim here
and are identical to those listed under "list" above.]
```

run

Submit a program.

options

  • -c(--class): class with the program entry point (the "main()" method); only needed if the JAR file does not specify a main class in its manifest
  • -C(--classpath): adds a URL to each user-code classloader on every node in the cluster; the path must specify a protocol (e.g. file://) and be accessible on all nodes; the option may be given multiple times for multiple URLs, and the protocol must be supported by **java.net.URLClassLoader**
  • -d(--detached): if present, runs the job in detached mode
  • -n(--allowNonRestoredState): allows skipping savepoint state that cannot be restored; required if an operator that was part of the program when the savepoint was triggered has since been removed
  • -p(--parallelism): the parallelism with which to run the program; overrides the default value in the configuration
  • -py(--python): Python script with the program entry point; dependent resources can be configured with the `--pyFiles` option
  • -pyarch(--pyArchives): adds Python archive files for the job. Archives are extracted into the working directory of the Python UDF worker; currently only zip format is supported. For each archive a target directory may be specified: if a name is given, the archive is extracted into a directory with that name, otherwise into a directory named after the archive file. Files uploaded this way are accessible via relative paths. `#` separates an archive path from its target directory name, and `,` separates multiple archives. The option can be used to upload a virtual environment or data files used in Python UDFs (e.g. --pyArchives file:///tmp/py37.zip,file:///tmp/data.zip#data --pyExecutable py37.zip/py37/bin/python); the data files can then be read inside the UDF, e.g. f = open('data/data.txt', 'r')
  • -pyexec(--pyExecutable): specifies the path of the Python interpreter used to execute the Python UDF worker (e.g. --pyExecutable /usr/local/bin/python3); the worker requires Python 3.5+, Apache Beam == 2.19.0, pip >= 7.1.0 and setuptools >= 37.0.0
  • -pyfs(--pyFiles): attaches custom Python files to the job; these are added to the PYTHONPATH of both the local client and the remote Python UDF worker. The standard Python resource suffixes .py/.egg/.zip, as well as directories, are supported; use `,` to separate multiple files (e.g. --pyFiles file:///tmp/myresource.zip,hdfs:///$namenode_address/myresource2.zip)
  • -pym(--pyModule): Python module with the program entry point; must be used together with --pyFiles
  • -pyreq(--pyRequirements): specifies a requirements.txt file defining third-party dependencies, which are installed and added to the PYTHONPATH of the Python UDF worker; a directory containing the installation packages of these dependencies can optionally be appended after a `#` (e.g. --pyRequirements file:///tmp/requirements.txt#file:///tmp/cached_dir)
  • -s(--fromSavepoint): path of the savepoint to restore the job from (e.g. hdfs:///flink/savepoint-1537)
  • -sae(--shutdownOnAttachedExit): if the job was submitted in attached mode, performs a best-effort cluster shutdown when the CLI is terminated abruptly, e.g. by a user interrupt such as Ctrl+C
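The `,` and `#` separator rules of `--pyArchives` are easy to get wrong, so here is a small illustrative parser. This is a sketch of the rules as described above, not Flink's actual implementation; the function name `parse_py_archives` and the default of naming the target directory after the archive's base file name are assumptions for illustration.

```python
import os

def parse_py_archives(spec):
    """Split a --pyArchives value into (archive, target_dir) pairs.

    Sketch of the documented separator rules (not Flink's parser):
    ',' separates archive entries, '#' separates an archive path from
    its optional target directory, and the target defaults to the
    archive file's own base name (assumption).
    """
    pairs = []
    for entry in spec.split(","):
        if "#" in entry:
            archive, target = entry.split("#", 1)
        else:
            # no explicit target: use the archive file's name
            archive, target = entry, os.path.basename(entry)
        pairs.append((archive, target))
    return pairs

print(parse_py_archives("file:///tmp/py37.zip,file:///tmp/data.zip#data"))
```

For the example value from the option description, this yields `py37.zip` extracted under its own name and `data.zip` extracted under `data`, which is why `open('data/data.txt', 'r')` works inside the UDF.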

    Generic CLI mode

  • -D: generic configuration options for execution/deployment and for the configured executor; the available options are listed at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html

  • -t(--target): replaces the deprecated -e option; the deployment target for the given application, equivalent to the **execution.target** config option. The available targets are:

    • collection
    • remote
    • local
    • kubernetes-session
    • yarn-per-job
    • yarn-session
    • yarn-application
    • kubernetes-application

      yarn-cluster mode

  • -d(--detached): if present, runs the job in detached mode

  • -m(--jobmanager): address of the JobManager to connect to; use this flag to connect to a JobManager different from the one specified in the configuration
  • -yat(--yarnapplicationType): sets a custom application type for the application on YARN
  • -yD: uses the given value for the given property
  • -yd(--yarndetached): deprecated; runs the job in detached mode (use the non-YARN-specific option instead)
  • -yh(--yarnhelp): help for the YARN session CLI
  • -yid(--yarnapplicationId): attaches to a running YARN session
  • -yj(--yarnjar): path to the Flink jar file
  • -yjm(--yarnjobManagerMemory): memory for the JobManager container, with optional unit (default: MB)
  • -ynl(--yarnnodeLabel): specifies a YARN node label for the YARN application
  • -ynm(--yarnname): sets a custom name for the application on YARN
  • -yq(--yarnquery): displays the available YARN resources (memory, cores)
  • -yqu(--yarnqueue): specifies the YARN queue
  • -ys(--yarnslots): number of slots per TaskManager
  • -yt(--yarnship): ships the files in the specified directory (t for transfer)
  • -ytm(--yarntaskManagerMemory): memory per TaskManager container, with optional unit (default: MB)
  • -yz(--yarnzookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode
  • -z(--zookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode
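The memory options -yjm and -ytm accept a number with an optional unit, defaulting to MB. The sketch below shows how such a value might be interpreted; it is an illustration of the documented behaviour (bare number = MB), not Flink's actual MemorySize parser, and the recognised suffixes are an assumption.

```python
def parse_memory_mb(value):
    """Interpret a memory size with optional unit, defaulting to MB.

    Rough sketch of the behaviour described for -yjm/-ytm (not
    Flink's MemorySize parser): a bare number is taken as megabytes,
    and the suffixes g/gb and m/mb are recognised (assumption).
    """
    v = value.strip().lower()
    for suffix, factor in (("gb", 1024), ("g", 1024), ("mb", 1), ("m", 1)):
        if v.endswith(suffix):
            return int(v[: -len(suffix)]) * factor
    return int(v)  # no unit given: default is MB

print(parse_memory_mb("1024"))  # 1024
print(parse_memory_mb("2g"))    # 2048
```

So `-yjm 1024` in the submit command at the top of these notes requests a 1024 MB JobManager container.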

    Default mode

  • -m(--jobmanager): address of the JobManager to connect to; use this flag to connect to a JobManager different from the one specified in the configuration

  • -z(--zookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode
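Putting the yarn-cluster options together, the submit command at the top of these notes can be assembled programmatically. A minimal sketch; the helper name `build_run_command` is made up here, and the jar path and names are just the values from that example:

```python
def build_run_command(jar, flink_bin="/opt/flink-1.11.2/bin/flink"):
    """Assemble the detached yarn-cluster submission from the example
    above: -d (detached), -ynm (app name), -yqu (queue),
    -m yarn-cluster, -yjm/-ytm (JobManager/TaskManager memory in MB)."""
    return [
        flink_bin, "run",
        "-d",                  # detached mode
        "-ynm", "test-play",   # custom YARN application name
        "-yqu", "public",      # YARN queue
        "-m", "yarn-cluster",  # submit to a YARN cluster
        "-yjm", "1024",        # JobManager container memory (MB)
        "-ytm", "1024",        # TaskManager container memory (MB)
        jar,
    ]

cmd = build_run_command("/mnt/data7/fanjianglong/test/play-summary-1.0-SNAPSHOT.jar")
print(" ".join(cmd))
```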

    info

    Shows the optimized execution plan of the program (JSON).

  • -c(--class): class with the program entry point (the "main()" method); only needed if the JAR file does not specify a main class in its manifest

  • -p(--parallelism): the parallelism with which to run the program; overrides the default value in the configuration

    list

    Lists running and scheduled programs.

    options

  • -a(--all): shows all programs and their JobIDs

  • -r(--running): shows only running programs and their JobIDs
  • -s(--scheduled): shows only scheduled programs and their JobIDs

    Generic CLI mode

  • -D: generic configuration options for execution/deployment and for the configured executor; the available options are listed at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html

  • -t(--target): replaces the deprecated -e option; the deployment target for the given application, equivalent to the **execution.target** config option. The available targets are:

    • collection
    • remote
    • local
    • kubernetes-session
    • yarn-per-job
    • yarn-session
    • yarn-application
    • kubernetes-application

      yarn-cluster mode

  • -m(--jobmanager): address of the JobManager to connect to; use this flag to connect to a JobManager different from the one specified in the configuration

  • -yid(--yarnapplicationId): attaches to a running YARN session
  • -z(--zookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode

    Default mode

  • -m(--jobmanager): address of the JobManager to connect to; use this flag to connect to a JobManager different from the one specified in the configuration

  • -z(--zookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode

    stop

    Stops a running program with a savepoint (streaming jobs only).

    options

  • -d(--drain): sends MAX_WATERMARK before taking the savepoint and stopping the pipeline

  • -p(--savepointPath): path for the savepoint (e.g. hdfs:///flink/savepoint-1537); if no directory is specified, the configured default ("state.savepoints.dir") is used
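A stop invocation combines these two flags with a job id. The sketch below assembles such a command line; the helper name, the job id and the savepoint path are placeholders, and the Flink path reuses the install location from the run example.

```python
def build_stop_command(job_id, savepoint_path=None, drain=False,
                       flink_bin="/opt/flink-1.11.2/bin/flink"):
    """Assemble a `flink stop` invocation (sketch; arguments are
    placeholders, not a real running job)."""
    cmd = [flink_bin, "stop"]
    if drain:
        cmd.append("-d")               # send MAX_WATERMARK before stopping
    if savepoint_path:
        cmd += ["-p", savepoint_path]  # else state.savepoints.dir is used
    cmd.append(job_id)
    return cmd

print(build_stop_command("abc123", "hdfs:///flink/savepoints", drain=True))
```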

    Generic CLI mode

  • -D: generic configuration options for execution/deployment and for the configured executor; the available options are listed at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html

  • -t(--target): replaces the deprecated -e option; the deployment target for the given application, equivalent to the **execution.target** config option. The available targets are:

    • collection
    • remote
    • local
    • kubernetes-session
    • yarn-per-job
    • yarn-session
    • yarn-application
    • kubernetes-application

      yarn-cluster mode

  • -m(--jobmanager): address of the JobManager to connect to; use this flag to connect to a JobManager different from the one specified in the configuration

  • -yid(--yarnapplicationId): attaches to a running YARN session
  • -z(--zookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode

    Default mode

  • -m(--jobmanager): address of the JobManager to connect to; use this flag to connect to a JobManager different from the one specified in the configuration

  • -z(--zookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode

    cancel

    Cancels a running program.

    options

  • -s(--withSavepoint): deprecated; use the stop command instead. Triggers a savepoint and cancels the job; the target directory is optional, and if none is specified the configured default (state.savepoints.dir) is used

    Generic CLI mode

  • -D: generic configuration options for execution/deployment and for the configured executor; the available options are listed at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html

  • -t(--target): replaces the deprecated -e option; the deployment target for the given application, equivalent to the **execution.target** config option. The available targets are:

    • collection
    • remote
    • local
    • kubernetes-session
    • yarn-per-job
    • yarn-session
    • yarn-application
    • kubernetes-application

      yarn-cluster mode

  • -m(--jobmanager): address of the JobManager to connect to; use this flag to connect to a JobManager different from the one specified in the configuration

  • -yid(--yarnapplicationId): attaches to a running YARN session
  • -z(--zookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode

    Default mode

  • -m(--jobmanager): address of the JobManager to connect to; use this flag to connect to a JobManager different from the one specified in the configuration

  • -z(--zookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode

    savepoint

    Triggers a savepoint for a running job, or disposes of an existing one.

    options

  • -d(--dispose): path of the savepoint to dispose of

  • -j(--jarfile): the Flink program's JAR file
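The savepoint action is either a trigger (job id, with an optional target directory) or a disposal (-d with a savepoint path). The sketch below builds both forms of the command line; the helper name, job id and paths are placeholders for illustration.

```python
def build_savepoint_command(job_id=None, target_dir=None, dispose_path=None,
                            flink_bin="/opt/flink-1.11.2/bin/flink"):
    """Assemble a `flink savepoint` invocation (sketch; ids and paths
    are hypothetical). Disposal and triggering are mutually exclusive."""
    cmd = [flink_bin, "savepoint"]
    if dispose_path:
        return cmd + ["-d", dispose_path]  # dispose an existing savepoint
    cmd.append(job_id)                     # trigger a savepoint for this job
    if target_dir:
        cmd.append(target_dir)             # else state.savepoints.dir is used
    return cmd

print(build_savepoint_command(job_id="abc123", target_dir="hdfs:///flink/savepoints"))
print(build_savepoint_command(dispose_path="hdfs:///flink/savepoint-1537"))
```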

    Generic CLI mode

  • -D: generic configuration options for execution/deployment and for the configured executor; the available options are listed at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html

  • -t(--target): replaces the deprecated -e option; the deployment target for the given application, equivalent to the **execution.target** config option. The available targets are:

    • collection
    • remote
    • local
    • kubernetes-session
    • yarn-per-job
    • yarn-session
    • yarn-application
    • kubernetes-application

      yarn-cluster mode

  • -m(--jobmanager): address of the JobManager to connect to; use this flag to connect to a JobManager different from the one specified in the configuration

  • -yid(--yarnapplicationId): attaches to a running YARN session
  • -z(--zookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode

    Default mode

  • -m(--jobmanager): address of the JobManager to connect to; use this flag to connect to a JobManager different from the one specified in the configuration

  • -z(--zookeeperNamespace): namespace under which to create the ZooKeeper sub-paths for high-availability mode