Spark cluster start script (start-spark-cluster)

```shell
cd /usr/local/bin
sudo vi start-spark-cluster
```

The script content is as follows:

```shell
#!/bin/bash
SPARK_HOME=/home/vagrant/modules/spark
echo "start spark-cluster-------------------------------------------------"
# start-all.sh run on the master (bigdata-node1) starts the whole cluster,
# so only the master needs to appear in the loop.
# ${SPARK_HOME} expands locally before ssh; the path is assumed identical
# on every node.
for i in bigdata-node1
do
  ssh $i "source /etc/profile;${SPARK_HOME}/sbin/start-all.sh"
  ssh $i "source /etc/profile;${SPARK_HOME}/sbin/start-history-server.sh"
done
sleep 5s
# jpsx: custom helper that runs jps on every node (defined elsewhere)
jpsx
exit 0
```
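The `jpsx` command above is assumed to be a custom cluster-wide `jps` helper defined elsewhere in this setup. A minimal sketch of such a helper might look like the following; the hostnames match the documentation, and the `echo` prefix makes it a dry run that prints each `ssh` command instead of executing it (drop the prefix on a real cluster):

```shell
#!/bin/bash
# Illustrative sketch of a cluster-wide jps helper (hypothetical; the
# actual jpsx script used above is defined elsewhere).
# Dry run: "echo" prints each ssh command instead of running it.
NODES="bigdata-node1 bigdata-node2 bigdata-node3"
report=""
for node in $NODES; do
  echo "===== $node ====="
  report="$report $(echo ssh "$node" "source /etc/profile; jps")"
done
echo "$report"
```

On a real cluster this would print the Java process list of every node in one shot, which is a quick way to confirm that Master, Worker, and HistoryServer daemons came up.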

Set execute permission on the script:

```shell
sudo chmod +x start-spark-cluster
sudo chown vagrant:vagrant start-spark-cluster
```
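If the permission step is skipped, invoking the script later fails with "Permission denied". The effect of `chmod +x` can be checked with the `-x` file test; this self-contained sketch uses a throwaway temp file rather than the real script path:

```shell
#!/bin/bash
# Sketch: confirm that chmod +x makes a file executable.
# Uses a throwaway temp file; the real scripts live in /usr/local/bin.
tmp=$(mktemp)
chmod +x "$tmp"
if [ -x "$tmp" ]; then
  result=executable
else
  result=not-executable
fi
echo "$result"
rm -f "$tmp"
```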

Spark cluster stop script (stop-spark-cluster)

```shell
cd /usr/local/bin
sudo vi stop-spark-cluster
```

The script content is as follows:

```shell
#!/bin/bash
SPARK_HOME=/home/vagrant/modules/spark
echo "stop spark-cluster"
# Stop the history server before the cluster daemons on each node.
# Note: stop-all.sh run on the master (bigdata-node1) already stops the
# workers, so visiting all three nodes is redundant but harmless.
for i in bigdata-node1 bigdata-node2 bigdata-node3
do
  ssh $i "source /etc/profile;${SPARK_HOME}/sbin/stop-history-server.sh"
  ssh $i "source /etc/profile;${SPARK_HOME}/sbin/stop-all.sh"
done
sleep 5s
# jpsx: custom helper that runs jps on every node (defined elsewhere)
jpsx
exit 0
```

Set execute permission on the script:

```shell
sudo chmod +x stop-spark-cluster
sudo chown vagrant:vagrant stop-spark-cluster
```

Distribute to the other nodes (optional):

```shell
# Distribute the scripts to the other nodes as root
sudo scp -r /usr/local/bin/*-spark-cluster root@bigdata-node2:/usr/local/bin/
sudo scp -r /usr/local/bin/*-spark-cluster root@bigdata-node3:/usr/local/bin/
```
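The two `scp` commands above can equivalently be written as one loop, which scales better if more nodes are added later. This is a dry-run sketch: the `echo` prefix prints each command instead of copying; drop it (and run with `sudo`) on a real cluster:

```shell
#!/bin/bash
# Sketch: distribute the *-spark-cluster scripts to the worker nodes
# in one loop (equivalent to the two scp commands above).
# Dry run: "echo" prints each scp command instead of executing it.
cmds=""
for host in bigdata-node2 bigdata-node3; do
  cmds="$cmds $(echo scp -r /usr/local/bin/*-spark-cluster "root@${host}:/usr/local/bin/")"
done
echo "$cmds"
```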