
|NO.Z.00006|——————————|^^ Configuration ^^|——|Hadoop&Spark.V06|------------------------------------------|Spark




[BigDataHadoop:Hadoop&Spark.V06]                                        [BigDataHadoop.Spark in-memory fast compute engine][|Chapter 1|Hadoop|spark|sparkcore:Spark Standalone cluster mode & standalone configuration & cores & memory|]








I. Cluster Mode -- Standalone Mode
### --- Cluster mode -- Standalone mode

~~~     Reference: http://spark.apache.org/docs/latest/spark-standalone.html
~~~     Only a distributed deployment truly shows the value of distributed computing.
~~~     Unlike local mode, the Spark Master and Worker daemons must be started first; stop the corresponding YARN services.
~~~     There is no need to start the Hadoop services unless HDFS is required.
II. Checking the Cluster Status
### --- Checking with jps shows:

~~~     # Start the services
[root@hadoop01 ~]# start-dfs.sh 
[root@hadoop01 ~]# stop-yarn.sh
[root@hadoop02 ~]# start-all-spark.sh
### --- View in a browser: http://hadoop02:8080/

~~~     # Check the cluster status
[root@hadoop02 ~]# jps

Hadoop01:Worker
Hadoop02:Master、Worker
Hadoop03:Worker
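
~~~     For illustration only (the PIDs below are made up and differ on every run; HDFS daemons such as NameNode and DataNode may also appear since start-dfs.sh was run), the Spark entries on hadoop02 might look like:
[root@hadoop02 ~]# jps
11201 Master
11345 Worker
11589 Jps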


III. Standalone Configuration
### --- Standalone configuration

~~~     sbin/start-master.sh / sbin/stop-master.sh
~~~     sbin/start-slaves.sh / sbin/stop-slaves.sh
~~~     sbin/start-slave.sh / sbin/stop-slave.sh
~~~     sbin/start-all.sh / sbin/stop-all.sh
~~~     Note: ./sbin/start-slave.sh [options] starts the worker process on a single node; it is particularly useful while debugging (see the sketch below).
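~~~     A minimal sketch of that per-node usage, assuming this cluster's master URL (spark://hadoop02:7077); the --cores/--memory values are arbitrary examples:

~~~     # Start one worker on this node against the master, capping its resources
[root@hadoop01 ~]# $SPARK_HOME/sbin/start-slave.sh spark://hadoop02:7077 --cores 2 --memory 4g
~~~     # Stop the worker running on this node
[root@hadoop01 ~]# $SPARK_HOME/sbin/stop-slave.sh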
### --- Standalone configuration: setting the core and memory parameters

~~~     Set the cores and memory of each Spark worker in spark-env.sh
~~~     Official documentation: http://spark.apache.org/docs/latest/spark-standalone.html
~~~     # By default, all available cores and all memory are used

SPARK_WORKER_CORES:Total number of cores to allow Spark applications to use on the machine (default: all available cores).

SPARK_WORKER_MEMORY:Total amount of memory to allow Spark applications to use on the machine, e.g. 1000m, 2g (default: total memory minus 1 GiB); note that each application's individual memory is configured using its spark.executor.memory property.
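
~~~     To see the distinction, a hedged sketch of capping one application's executor memory at submit time; SparkPi is the stock example shipped with Spark, and the jar is matched with a glob because its exact name depends on the Spark/Scala version:

~~~     # Per-application memory: spark.executor.memory, set here via --executor-memory
[root@hadoop02 ~]# spark-submit --master spark://hadoop02:7077 \
  --class org.apache.spark.examples.SparkPi \
  --executor-memory 1g \
  $SPARK_HOME/examples/jars/spark-examples_*.jar 100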
### --- Test: add the parameters to spark-env.sh, distribute the file to the cluster, and restart the services:

[root@hadoop02 ~]# vim $SPARK_HOME/conf/spark-env.sh

export SPARK_WORKER_CORES=10
export SPARK_WORKER_MEMORY=20g
[root@hadoop02 ~]# rsync-script $SPARK_HOME/conf/spark-env.sh
 
[root@hadoop02 ~]# stop-all-spark.sh 
[root@hadoop02 ~]# start-all-spark.sh  
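
~~~     Besides the browser, the master web UI serves the same cluster state as JSON at /json (standard Spark master behavior; host and port here assume this cluster's layout), which gives a quick way to confirm the new limits:

~~~     # Each worker entry should now report 10 cores and 20480 MiB of memory
[root@hadoop02 ~]# curl -s http://hadoop02:8080/json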
### --- Observe the cluster state in the browser; once the test is complete, revert the two parameters to their defaults and restart the services.
~~~     Change the configuration back to the defaults
~~~     Distribute it to the other hosts and restart the services

[root@hadoop02 ~]# vim $SPARK_HOME/conf/spark-env.sh
# export SPARK_WORKER_CORES=1
# export SPARK_WORKER_MEMORY=2g
[root@hadoop02 ~]# rsync-script $SPARK_HOME/conf/spark-env.sh
[root@hadoop02 ~]# stop-all-spark.sh
[root@hadoop02 ~]# start-all-spark.sh


Appendix I: Finalized Configuration File
### --- $SPARK_HOME/conf/spark-env.sh

[root@hadoop02 ~]# vim $SPARK_HOME/conf/spark-env.sh
# JDK and Hadoop installation paths
export JAVA_HOME=/opt/yanqi/servers/jdk1.8.0_231
export HADOOP_HOME=/opt/yanqi/servers/hadoop-2.9.2
export HADOOP_CONF_DIR=/opt/yanqi/servers/hadoop-2.9.2/etc/hadoop
# Put the Hadoop client jars on Spark's classpath
export SPARK_DIST_CLASSPATH=$(/opt/yanqi/servers/hadoop-2.9.2/bin/hadoop classpath)
# Standalone master address
export SPARK_MASTER_HOST=hadoop02
export SPARK_MASTER_PORT=7077
# Resources each worker offers to applications
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1g
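
~~~     As a quick smoke test (a sketch, not part of the original write-up), attach a spark-shell to the standalone master; the session should then appear under Running Applications at http://hadoop02:8080/:

~~~     # Attach an interactive shell to the standalone cluster
[root@hadoop02 ~]# spark-shell --master spark://hadoop02:7077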








===============================END===============================





