Installing a Flink cluster running on YARN:
    https://www.cnblogs.com/hxuhongming/p/12873010.html
    https://www.cnblogs.com/hxuhongming/p/12819916.html
    https://yijiyong.com/dp/flink/02-install.html
    First install the ZooKeeper and Hadoop clusters:

    1. curl -O https://archive.apache.org/dist/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
    2. scp -r zookeeper-3.4.10.tar.gz root@hadoop2:`pwd`
    3. scp -r zookeeper-3.4.10.tar.gz root@hadoop3:`pwd`
    4. tar -zxvf zookeeper-3.4.10.tar.gz
    5. cp conf/zoo_sample.cfg conf/zoo.cfg
    6. vim conf/zoo.cfg
    7. # zoo.cfg settings below; ZooKeeper must be started on all three nodes
    8. dataDir=/data/zookeeper
    9. server.1=0.0.0.0:2888:3888
    10. server.2=hadoop2:2888:3888
    11. server.3=hadoop3:2888:3888
    12. Create a myid file under dataDir on each node and edit it to match that node's server.N id (1, 2, or 3)
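Step 12 can be sketched as follows. `ZK_DATA_DIR` and `MYID` are stand-in parameters for this demo; on the real nodes the directory is the dataDir from zoo.cfg (/data/zookeeper) and the id is 1 on the local node (server.1), 2 on hadoop2, 3 on hadoop3:

```shell
# Sketch of creating the myid file; ZK_DATA_DIR/MYID are demo stand-ins.
# On a real node: ZK_DATA_DIR=/data/zookeeper (dataDir in zoo.cfg),
# MYID matching the server.N line for that host.
ZK_DATA_DIR=${ZK_DATA_DIR:-./zk-demo-data}
MYID=${MYID:-1}
mkdir -p "$ZK_DATA_DIR"
echo "$MYID" > "$ZK_DATA_DIR/myid"
cat "$ZK_DATA_DIR/myid"
```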
    1. cd /export/server/zookeeper-3.4.10/bin
    2. ./zkServer.sh start   # start ZooKeeper
    3. ./zkServer.sh stop    # stop ZooKeeper
    4. ./zkServer.sh status  # check ZooKeeper status

    Flink installation and configuration (edit conf/flink-conf.yaml; the hostnames below use node1/node2/node3 — substitute your own cluster's hostnames):

    1. ################################################################################
    2. jobmanager.rpc.port: 6123
    3. jobmanager.memory.process.size: 1600m
    4. taskmanager.memory.process.size: 1728m
    5. taskmanager.numberOfTaskSlots: 1
    6. parallelism.default: 1
    7. high-availability: zookeeper
    8. high-availability.storageDir: hdfs:///flink/ha/
    9. high-availability.zookeeper.quorum: node1:2181,node2:2181,node3:2181
    10. high-availability.zookeeper.path.root: /flink
    11. # high-availability.zookeeper.client.acl: open
    12. state.backend: filesystem
    13. # Directory for checkpoints filesystem, when using any of the default bundled
    14. state.checkpoints.dir: hdfs://namenode-host:port/flink-checkpoints
    15. state.savepoints.dir: hdfs://namenode-host:port/flink-savepoints
    16. jobmanager.execution.failover-strategy: region
    17. # Override below configuration to provide custom ZK service name if configured
    18. # zookeeper.sasl.service-name: zookeeper
    19. # The configuration below must match one of the values set in "security.kerberos.login.contexts"
    20. # zookeeper.sasl.login-context-name: Client
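Besides flink-conf.yaml, a standalone HA cluster also reads conf/masters (the JobManager candidates) and conf/workers (the TaskManager hosts). A sketch assuming node1 and node2 as JobManagers and all three nodes as TaskManagers (hostnames are illustrative, matching the quorum above):

```
# conf/masters
node1:8081
node2:8081

# conf/workers
node1
node2
node3
```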
    1. vim /etc/profile
    2. export FLINK_HOME=/export/server/flink-1.13.5
    3. export PATH=$PATH:$FLINK_HOME/bin
    4. source /etc/profile
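A quick way to verify the profile took effect after sourcing it (paths are the ones set above; the check itself is just a portable shell idiom):

```shell
# Re-create the environment from /etc/profile (as set above).
export FLINK_HOME=/export/server/flink-1.13.5
export PATH=$PATH:$FLINK_HOME/bin
# Confirm the Flink bin directory is actually on PATH.
case ":$PATH:" in
  *":$FLINK_HOME/bin:"*) echo "FLINK_HOME on PATH" ;;
  *)                     echo "FLINK_HOME missing from PATH" ;;
esac
```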

    Startup order: start ZooKeeper and HDFS first, then Flink.
    cd /export/server/flink-1.13.5/bin
    ./start-cluster.sh
    View the logs:
    cd /export/server/flink-1.13.5/log
    tail -f flink-root-standalonesession-0-node1.log


    1. Start a YARN session
       (1) Request resources:
       cd /export/server/flink-1.13.5/bin
       ./yarn-session.sh -nm wordCount -n 2   # note: -n (TaskManager count) is deprecated in recent Flink versions; containers are allocated on demand
       (2) Check the requested resources:
       yarn application -list
    2. Submit the WordCount job
       ./flink run -yid application_1652507424094_0001 ../examples/batch/WordCount.jar
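The application id passed to `flink run -yid` in step 2 is the first column of the `yarn application -list` output from step (2). A sketch of extracting it with awk, using a fabricated sample row rather than real YARN output:

```shell
# Illustrative sample line in the shape of `yarn application -list` output
# (NOT real output -- fields are abridged for the demo).
sample='application_1652507424094_0001  wordCount  Apache Flink  root  default  RUNNING'
# The application id is the first whitespace-separated field.
app_id=$(printf '%s\n' "$sample" | awk '{print $1}')
echo "$app_id"
```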
