
Spark Standalone cluster try

Author: 互联网

Spark Standalone cluster

node* (run on every node)
-- stop the firewall so Spark's ports (e.g. 7077 for the master, 8080 for the web UI) are reachable
systemctl stop firewalld
systemctl disable firewalld
-- unpack spark
cd /opt
tar -zxvf spark-2.4.0-bin-hadoop2.7.tgz
cd spark-2.4.0-bin-hadoop2.7
-- copy the application jar and its data onto each node (e.g. via ftp or scp)
ftp spark.test-1.0.jar -> /opt/spark-2.4.0-bin-hadoop2.7
ftp words_count.txt -> /opt/spark-2.4.0-bin-hadoop2.7/data

node1 (master)
cd /opt/spark-2.4.0-bin-hadoop2.7
./sbin/start-master.sh

node2 (worker)
cd /opt/spark-2.4.0-bin-hadoop2.7
./sbin/start-slave.sh spark://node1:7077

node3 (worker)
cd /opt/spark-2.4.0-bin-hadoop2.7
./sbin/start-slave.sh spark://node1:7077

node4 (worker)
cd /opt/spark-2.4.0-bin-hadoop2.7
./sbin/start-slave.sh spark://node1:7077

node? (any node that has the jar, to submit the application)
cd /opt/spark-2.4.0-bin-hadoop2.7
./bin/spark-submit --class xyz.fz.spark.WordsCount --master spark://node1:7077 spark.test-1.0.jar

Spark result: Lines with Basics: 2, lines with Programming: 2
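The output format matches Spark's quick-start example, so xyz.fz.spark.WordsCount presumably filters the lines of data/words_count.txt by keyword and counts the matches. A minimal sketch of that counting logic in plain Python (no Spark; the sample text and the keywords "Basics"/"Programming" are assumptions inferred from the output above):

```python
# Hypothetical sketch of the WordsCount logic, without Spark.
# The real job would load data/words_count.txt into an RDD and run
# the equivalent of lines.filter(_.contains(keyword)).count().

SAMPLE = """Spark Basics one
Programming guide
more Basics here
Programming again
unrelated line"""  # stand-in for words_count.txt (assumed content)

def count_lines_with(text: str, keyword: str) -> int:
    """Count the lines of `text` that contain `keyword`."""
    return sum(1 for line in text.splitlines() if keyword in line)

basics = count_lines_with(SAMPLE, "Basics")
programming = count_lines_with(SAMPLE, "Programming")
print(f"Lines with Basics: {basics}, lines with Programming: {programming}")
```

In the actual Spark job the filter and count run distributed across the three workers; only the two final counts come back to the driver.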

 

Tags: bin, opt, start, try, cluster, Spark, hadoop2.7, spark, 2.4
Source: https://www.cnblogs.com/xiayudashan/p/10497205.html