Command operations
/opt/flink-1.11.2/bin/flink run -d -ynm test-play -yqu public -m yarn-cluster -yjm 1024 -ytm 1024 /mnt/data7/fanjianglong/test/play-summary-1.0-SNAPSHOT.jar
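The flags in this command are explained by the help output below; for readability, here is the same submission restated with short comments (the binary and jar paths are specific to this environment).

```bash
# Same submission as above, annotated:
#   -d               run the job in detached mode
#   -m yarn-cluster  deploy to YARN as a per-job cluster
#   -ynm             custom application name on YARN
#   -yqu             YARN queue to submit to
#   -yjm / -ytm      JobManager / TaskManager container memory (default unit: MB)
/opt/flink-1.11.2/bin/flink run -d -m yarn-cluster -ynm test-play -yqu public \
    -yjm 1024 -ytm 1024 \
    /mnt/data7/fanjianglong/test/play-summary-1.0-SNAPSHOT.jar
```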
```bash
Syntax: run [OPTIONS]
"run" action options:
  -c,--class                    Class with the program entry point ("main()"
                                method). Only needed if the JAR file does not
                                specify the class in its manifest.
  -C,--classpath                Adds a URL to each user code classloader on
                                all nodes in the cluster. The paths must
                                specify a protocol (e.g. file://) and be
                                accessible on all nodes (e.g. by means of a
                                NFS share). You can use this option multiple
                                times for specifying more than one URL. The
                                protocol must be supported by the
                                {@link java.net.URLClassLoader}.
  -d,--detached                 If present, runs the job in detached mode.
  -n,--allowNonRestoredState    Allow to skip savepoint state that cannot be
                                restored. You need to allow this if you
                                removed an operator from your program that
                                was part of the program when the savepoint
                                was triggered.
  -p,--parallelism              The parallelism with which to run the
                                program. Optional flag to override the
                                default value specified in the configuration.
  -py,--python                  Python script with the program entry point.
                                The dependent resources can be configured
                                with the `--pyFiles` option.
  -pyarch,--pyArchives          Add python archive files for job. The archive
                                files will be extracted to the working
                                directory of the python UDF worker. Currently
                                only zip-format is supported. For each
                                archive file, a target directory can be
                                specified. If the target directory name is
                                specified, the archive file will be extracted
                                to a directory with the specified name.
                                Otherwise, the archive file will be extracted
                                to a directory with the same name as the
                                archive file. The files uploaded via this
                                option are accessible via relative path. '#'
                                could be used as the separator of the archive
                                file path and the target directory name.
                                Comma (',') could be used as the separator to
                                specify multiple archive files. This option
                                can be used to upload the virtual environment
                                and the data files used in Python UDF (e.g.:
                                --pyArchives file:///tmp/py37.zip,file:///tmp/data.zip#data
                                --pyExecutable py37.zip/py37/bin/python).
                                The data files could be accessed in Python
                                UDF, e.g.: f = open('data/data.txt', 'r').
  -pyexec,--pyExecutable        Specify the path of the python interpreter
                                used to execute the python UDF worker (e.g.:
                                --pyExecutable /usr/local/bin/python3). The
                                python UDF worker depends on Python 3.5+,
                                Apache Beam (version == 2.19.0), Pip (version
                                >= 7.1.0) and SetupTools (version >= 37.0.0).
                                Please ensure that the specified environment
                                meets the above requirements.
  -pyfs,--pyFiles               Attach custom python files for job. These
                                files will be added to the PYTHONPATH of both
                                the local client and the remote python UDF
                                worker. The standard python resource file
                                suffixes such as .py/.egg/.zip or directory
                                are all supported. Comma (',') could be used
                                as the separator to specify multiple files
                                (e.g.: --pyFiles file:///tmp/myresource.zip,hdfs:///$namenode_address/myresource2.zip).
  -pym,--pyModule               Python module with the program entry point.
                                This option must be used in conjunction with
                                `--pyFiles`.
  -pyreq,--pyRequirements       Specify a requirements.txt file which defines
                                the third-party dependencies. These
                                dependencies will be installed and added to
                                the PYTHONPATH of the python UDF worker. A
                                directory which contains the installation
                                packages of these dependencies could be
                                specified optionally. Use '#' as the
                                separator if the optional parameter exists
                                (e.g.: --pyRequirements file:///tmp/requirements.txt#file:///tmp/cached_dir).
  -s,--fromSavepoint            Path to a savepoint to restore the job from
                                (for example hdfs:///flink/savepoint-1537).
  -sae,--shutdownOnAttachedExit If the job is submitted in attached mode,
                                perform a best-effort cluster shutdown when
                                the CLI is terminated abruptly, e.g., in
                                response to a user interrupt, such as typing
                                Ctrl + C.
Options for Generic CLI mode:
  -D                  Generic configuration options for execution/deployment
                      and for the configured executor. The available options
                      can be found at
                      https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
  -e,--executor       DEPRECATED: Please use the -t option instead which is
                      also available with the "Application Mode". The name of
                      the executor to be used for executing the given job,
                      which is equivalent to the "execution.target" config
                      option. The currently available executors are:
                      "collection", "remote", "local", "kubernetes-session",
                      "yarn-per-job", "yarn-session".
  -t,--target         The deployment target for the given application, which
                      is equivalent to the "execution.target" config option.
                      The currently available targets are: "collection",
                      "remote", "local", "kubernetes-session", "yarn-per-job",
                      "yarn-session", "yarn-application" and
                      "kubernetes-application".
Options for yarn-cluster mode:
  -d,--detached                  If present, runs the job in detached mode.
  -m,--jobmanager                Address of the JobManager to which to
                                 connect. Use this flag to connect to a
                                 different JobManager than the one specified
                                 in the configuration.
  -yat,--yarnapplicationType     Set a custom application type for the
                                 application on YARN.
  -yD                            Use value for given property.
  -yd,--yarndetached             If present, runs the job in detached mode
                                 (deprecated; use non-YARN specific option
                                 instead).
  -yh,--yarnhelp                 Help for the Yarn session CLI.
  -yid,--yarnapplicationId       Attach to running YARN session.
  -yj,--yarnjar                  Path to Flink jar file.
  -yjm,--yarnjobManagerMemory    Memory for JobManager Container with
                                 optional unit (default: MB).
  -ynl,--yarnnodeLabel           Specify YARN node label for the YARN
                                 application.
  -ynm,--yarnname                Set a custom name for the application on
                                 YARN.
  -yq,--yarnquery                Display available YARN resources (memory,
                                 cores).
  -yqu,--yarnqueue               Specify YARN queue.
  -ys,--yarnslots                Number of slots per TaskManager.
  -yt,--yarnship                 Ship files in the specified directory
                                 (t for transfer).
  -ytm,--yarntaskManagerMemory   Memory per TaskManager Container with
                                 optional unit (default: MB).
  -yz,--yarnzookeeperNamespace   Namespace to create the Zookeeper sub-paths
                                 for high availability mode.
  -z,--zookeeperNamespace        Namespace to create the Zookeeper sub-paths
                                 for high availability mode.

Options for default mode:
  -m,--jobmanager                Address of the JobManager to which to
                                 connect. Use this flag to connect to a
                                 different JobManager than the one specified
                                 in the configuration.
  -z,--zookeeperNamespace        Namespace to create the Zookeeper sub-paths
                                 for high availability mode.
Action "info" shows the optimized execution plan of the program (JSON).
  Syntax: info [OPTIONS]

Action "list" lists running and scheduled programs.
  Syntax: list [OPTIONS]
  "list" action options:
    -a,--all         Show all programs and their JobIDs
    -r,--running     Show only running programs and their JobIDs
    -s,--scheduled   Show only scheduled programs and their JobIDs
  Options for Generic CLI mode:  -D
  Options for yarn-cluster mode: -m,--jobmanager
  Options for default mode:      -m,--jobmanager

Action "stop" stops a running program with a savepoint (streaming jobs only).
  Syntax: stop [OPTIONS]
  Options for yarn-cluster mode: -m,--jobmanager
  Options for default mode:      -m,--jobmanager

Action "cancel" cancels a running program.
  Syntax: cancel [OPTIONS]
  Options for yarn-cluster mode: -m,--jobmanager
  Options for default mode:      -m,--jobmanager

Action "savepoint" triggers savepoints for a running job or disposes existing ones.
  Syntax: savepoint [OPTIONS]
  Options for yarn-cluster mode: -m,--jobmanager
  Options for default mode:      -m,--jobmanager
```
run
options
- -c (--class): Class with the program entry point; only needed if the JAR file does not specify a main class in its manifest.
- -C (--classpath): Adds a URL to the user-code classloader on every node in the cluster. The path must specify a protocol (e.g. file://) and be accessible on all nodes. The option can be given multiple times, but the protocol must be supported by **java.net.URLClassLoader**.
- -d (--detached): If present, the job runs in detached mode.
- -n (--allowNonRestoredState): Allow skipping savepoint state that cannot be restored; needed if an operator that was part of the program when the savepoint was triggered has since been removed.
- -p (--parallelism): Parallelism with which to run the program; overrides the default value from the configuration.
- -py (--python): Python script with the program entry point; dependent resources can be configured with the --pyFiles option.
- -pyarch (--pyArchives): Add Python archive files to the job. The archives are extracted into the working directory of the Python UDF worker; currently only the zip format is supported. For each archive a target directory can be specified: if a name is given, the archive is extracted into a directory with that name, otherwise into a directory named after the archive file. Files uploaded this way are accessible via relative paths. '#' separates the archive path from the target directory name, and ',' separates multiple archives. The option can be used to upload a virtual environment or the data files used in Python UDFs (e.g. --pyArchives file:///tmp/py37.zip,file:///tmp/data.zip#data --pyExecutable py37.zip/py37/bin/python); the data files can then be read inside the UDF, e.g. f = open('data/data.txt', 'r').
- -pyexec (--pyExecutable): Path of the Python interpreter used to run the Python UDF worker (e.g. --pyExecutable /usr/local/bin/python3). The UDF worker requires Python 3.5+, Apache Beam == 2.19.0, Pip >= 7.1.0 and SetupTools >= 37.0.0.
- -pyfs (--pyFiles): Attach custom Python files to the job; they are added to the PYTHONPATH of both the local client and the Python UDF worker. Standard Python resource suffixes such as .py/.egg/.zip as well as directories are supported, and ',' separates multiple files (e.g. --pyFiles file:///tmp/myresource.zip,hdfs:///$namenode_address/myresource2.zip).
- -pym (--pyModule): Python module with the program entry point; must be used together with --pyFiles.
- -pyreq (--pyRequirements): A requirements.txt file defining third-party dependencies, which are installed and added to the PYTHONPATH of the Python UDF worker. A directory containing the installation packages of these dependencies can optionally be appended, separated by '#' (e.g. --pyRequirements file:///tmp/requirements.txt#file:///tmp/cached_dir).
- -s (--fromSavepoint): Path of the savepoint to restore the job from (e.g. hdfs:///flink/savepoint-1537).
- -sae (--shutdownOnAttachedExit): If the job was submitted in attached mode, perform a best-effort cluster shutdown when the CLI is terminated abruptly, e.g. on a user interrupt such as Ctrl + C.
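To sketch how these options combine in practice, the two submissions below use only flags documented above; the entry class, jar path, Python script and file URIs are placeholders, not real artifacts.

```bash
# Java/Scala job: override the entry class and parallelism, restore from a
# savepoint, and run detached (class name and paths are placeholders).
flink run -d \
    -c com.example.PlaySummaryJob \
    -p 4 \
    -s hdfs:///flink/savepoint-1537 \
    /path/to/play-summary.jar

# PyFlink job: -py is the entry script, -pyfs attaches extra Python resources,
# -pyreq installs third-party dependencies on the Python UDF workers.
flink run -d \
    -py /path/to/job.py \
    -pyfs file:///tmp/myresource.zip \
    -pyreq file:///tmp/requirements.txt
```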
Generic CLI mode
- -D: Generic configuration options for execution/deployment and for the configured executor.
- -t (--target): Replaces the deprecated -e option. Deployment target for the given application, equivalent to the **execution.target** config option; available targets are "collection", "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session", "yarn-application" and "kubernetes-application".

yarn-cluster mode
- -d (--detached): If present, the job runs in detached mode.
- -m (--jobmanager): Address of the JobManager to connect to; use this flag to connect to a JobManager other than the one specified in the configuration.
- -yat (--yarnapplicationType): Set a custom application type for the application on YARN.
- -yD: Use the given value for the given property.
- -yd (--yarndetached): Deprecated; runs the job in detached mode (use the non-YARN-specific option instead).
- -yh (--yarnhelp): Help for the YARN session CLI.
- -yid (--yarnapplicationId): Attach to a running YARN session.
- -yj (--yarnjar): Path to the Flink jar file.
- -yjm (--yarnjobManagerMemory): Memory of the JobManager container, with optional unit (default: MB).
- -ynl (--yarnnodeLabel): Specify a YARN node label for the YARN application.
- -ynm (--yarnname): Set a custom name for the application on YARN.
- -yq (--yarnquery): Display the available YARN resources (memory, cores).
- -yqu (--yarnqueue): Specify the YARN queue.
- -ys (--yarnslots): Number of slots per TaskManager.
- -yt (--yarnship): Ship the files in the specified directory (t for transfer).
- -ytm (--yarntaskManagerMemory): Memory of each TaskManager container, with optional unit (default: MB).
- -yz (--yarnzookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.
- -z (--zookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.

default mode
- -m (--jobmanager): Address of the JobManager to connect to; use this flag to connect to a JobManager other than the one specified in the configuration.
- -z (--zookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.
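As a hedged sketch of the generic -t/-D style (rather than the -y* options), the submission below assumes a YARN per-job target; the configuration keys are standard Flink options from the config page linked above, and the jar path is a placeholder.

```bash
# Generic CLI mode: choose the deployment target with -t and pass arbitrary
# Flink configuration options with -D (see ops/config.html for the key list).
flink run -d \
    -t yarn-per-job \
    -Djobmanager.memory.process.size=1024m \
    -Dtaskmanager.memory.process.size=1024m \
    -Dyarn.application.name=test-play \
    /path/to/play-summary.jar
```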
info
Shows the optimized execution plan of the program (JSON).
- -c (--class): Class with the program entry point; only needed if the JAR file does not specify a main class in its manifest.
- -p (--parallelism): Parallelism with which to run the program; overrides the default value from the configuration.
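For example (the class and jar path are placeholders), the plan can be inspected without submitting the job:

```bash
# Print the optimized execution plan (JSON) of a packaged job.
flink info -c com.example.PlaySummaryJob -p 4 /path/to/play-summary.jar
```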
list
options
- -a (--all): Show all programs and their JobIDs.
- -r (--running): Show only running programs and their JobIDs.
- -s (--scheduled): Show only scheduled programs and their JobIDs.
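Typical usage, as a sketch (the YARN application ID is a placeholder):

```bash
# Show only running jobs, or all jobs including scheduled ones.
flink list -r
flink list -a

# List the jobs of a specific YARN session (application ID is a placeholder).
flink list -r -yid application_1600000000000_0001
```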
Generic CLI mode
- -D: Generic configuration options for execution/deployment and for the configured executor.
- -t (--target): Replaces the deprecated -e option; deployment target for the given application, equivalent to the **execution.target** config option.

yarn-cluster mode
- -m (--jobmanager): Address of the JobManager to connect to; use this flag to connect to a JobManager other than the one specified in the configuration.
- -yid (--yarnapplicationId): Attach to a running YARN session.
- -z (--zookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.

default mode
- -m (--jobmanager): Address of the JobManager to connect to; use this flag to connect to a JobManager other than the one specified in the configuration.
- -z (--zookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.
stop
options
- -d (--drain): Emit MAX_WATERMARK before taking the savepoint and stopping the pipeline.
- -p (--savepointPath): Path for the savepoint (e.g. hdfs:///flink/savepoint-1537); if no directory is given, the configured default ("state.savepoints.dir") is used.
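A minimal sketch, assuming the usual `flink stop <jobID>` form; the job ID and savepoint directory are placeholders.

```bash
# Stop a streaming job with a final savepoint, draining the pipeline first.
flink stop -d -p hdfs:///flink/savepoints 00000000000000000000000000000000
```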
Generic CLI mode
- -D: Generic configuration options for execution/deployment and for the configured executor.
- -t (--target): Replaces the deprecated -e option; deployment target for the given application, equivalent to the **execution.target** config option.

yarn-cluster mode
- -m (--jobmanager): Address of the JobManager to connect to; use this flag to connect to a JobManager other than the one specified in the configuration.
- -yid (--yarnapplicationId): Attach to a running YARN session.
- -z (--zookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.

default mode
- -m (--jobmanager): Address of the JobManager to connect to; use this flag to connect to a JobManager other than the one specified in the configuration.
- -z (--zookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.
cancel
options
- -s (--withSavepoint): Deprecated, use the stop action instead. Triggers a savepoint and cancels the job; the target directory is optional, and if not specified the configured default (state.savepoints.dir) is used.
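A minimal sketch, assuming the usual `flink cancel <jobID>` form; the job ID and target directory are placeholders.

```bash
# Plain cancel of a running job.
flink cancel 00000000000000000000000000000000

# Deprecated: cancel with a savepoint; prefer the "stop" action instead.
flink cancel -s hdfs:///flink/savepoints 00000000000000000000000000000000
```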
Generic CLI mode
- -D: Generic configuration options for execution/deployment and for the configured executor.
- -t (--target): Replaces the deprecated -e option; deployment target for the given application, equivalent to the **execution.target** config option.

yarn-cluster mode
- -m (--jobmanager): Address of the JobManager to connect to; use this flag to connect to a JobManager other than the one specified in the configuration.
- -yid (--yarnapplicationId): Attach to a running YARN session.
- -z (--zookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.

default mode
- -m (--jobmanager): Address of the JobManager to connect to; use this flag to connect to a JobManager other than the one specified in the configuration.
- -z (--zookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.
savepoint
Triggers a savepoint for a running job or disposes an existing one.
options
- -d (--dispose): Path of the savepoint to dispose.

Generic CLI mode
- -D: Generic configuration options for execution/deployment and for the configured executor.
- -t (--target): Replaces the deprecated -e option; deployment target for the given application, equivalent to the **execution.target** config option.

yarn-cluster mode
- -m (--jobmanager): Address of the JobManager to connect to; use this flag to connect to a JobManager other than the one specified in the configuration.
- -yid (--yarnapplicationId): Attach to a running YARN session.
- -z (--zookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.

default mode
- -m (--jobmanager): Address of the JobManager to connect to; use this flag to connect to a JobManager other than the one specified in the configuration.
- -z (--zookeeperNamespace): Namespace under which to create the ZooKeeper sub-paths for high-availability mode.
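A minimal sketch, assuming the standard `flink savepoint <jobID> [targetDirectory]` form; the job ID and paths are placeholders.

```bash
# Trigger a savepoint for a running job.
flink savepoint 00000000000000000000000000000000 hdfs:///flink/savepoints

# Dispose of an existing savepoint.
flink savepoint -d hdfs:///flink/savepoints/savepoint-1537
```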