1. Prerequisites
- JDK 8+
- ZooKeeper
- A working Hadoop on YARN cluster
For a Hadoop on YARN setup guide, see https://www.yuque.com/docs/share/d152d0a8-1fef-41f7-ba82-81372f263764?#
2. Modify the YARN configuration
2.1 Stop HDFS and YARN
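Assuming the standard Hadoop sbin scripts are on the PATH (as in the referenced Hadoop setup), stopping the services looks like:

```shell
# Stop YARN first, then HDFS, so NodeManagers do not lose HDFS mid-shutdown
stop-yarn.sh
stop-dfs.sh
```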
2.2 vim yarn-site.xml
<!-- Disable virtual-memory checks so YARN does not kill Flink containers -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- Per-container memory allocation bounds -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>
<!-- Total resources each NodeManager may hand out -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>3</value>
</property>
<!-- Allow the ApplicationMaster to be restarted, needed for HA failover -->
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>4</value>
  <description>
    The maximum number of application master execution attempts.
  </description>
</property>
2.3 vim mapred-site.xml
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<!-- java.opts takes JVM flags, not a bare number; keep the heap below the
     container size (here roughly 80% of 4096 MB) -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3276m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m</value>
</property>
2.4 Restart HDFS and YARN
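After the edited config files have been copied to every node, restart with the standard scripts (again assuming they are on the PATH):

```shell
# Bring HDFS up before YARN, since YARN logs and Flink HA state live on HDFS
start-dfs.sh
start-yarn.sh
```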
3. Download and install Flink
3.1 Download
http://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.7.2/flink-1.7.2-bin-hadoop28-scala_2.11.tgz
3.2 Extract
tar zxvf flink-1.7.2-bin-hadoop28-scala_2.11.tgz -C /opt
3.3 Configure
3.3.1 vim masters
bigdata02:8081
bigdata03:8081
3.3.2 vim slaves
bigdata01
bigdata02
bigdata03
3.3.3 vim flink-conf.yaml
high-availability: zookeeper
high-availability.zookeeper.quorum: bigdata01:2181,bigdata02:2181,bigdata03:2181
high-availability.zookeeper.path.root: /flink
#high-availability.cluster-id: bigdata01
high-availability.storageDir: hdfs:///flink/recovery
yarn.application-attempts: 10
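The configuration must be identical on every node. Assuming this install was configured on bigdata01, one way to distribute it to the other hosts listed in the masters/slaves files is:

```shell
# Copy the configured Flink install to the remaining nodes
scp -r /opt/flink-1.7.2 bigdata02:/opt/
scp -r /opt/flink-1.7.2 bigdata03:/opt/
```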
4. Set environment variables
export FLINK_HOME=/opt/flink-1.7.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$FLINK_HOME/bin:$PATH
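These exports belong in a shell profile so they survive new sessions; which file depends on your shell setup (~/.bashrc is assumed here):

```shell
# Append the exports to the profile and reload it in the current shell
cat >> ~/.bashrc <<'EOF'
export FLINK_HOME=/opt/flink-1.7.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$FLINK_HOME/bin:$PATH
EOF
source ~/.bashrc
```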
5. Usage
Start a YARN session (-n: number of TaskManager containers, -jm: JobManager memory, -tm: TaskManager memory)
yarn-session.sh -n 2 -jm 1024m -tm 4096m
Submit a job (-m yarn-cluster starts a per-job cluster; -yjm/-ytm set JobManager/TaskManager memory, -ys the slots per TaskManager)
flink run -m yarn-cluster -yjm 1024m -ytm 1024m -ys 2 ./examples/batch/WordCount.jar --input hdfs:///data/input/wc.txt --output hdfs:///data/output
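If the job succeeds, the result can be read straight from HDFS. Depending on the job parallelism, /data/output may be a single file or a directory of part files:

```shell
# Inspect the WordCount result written by the job above
hdfs dfs -ls /data/output
hdfs dfs -cat /data/output
```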