|NO.Z.00016|——|Deployment|——|Hadoop & OLAP Database Management System.v16|——|Kylin|
[BigDataHadoop: Hadoop & OLAP Database Management System.V16] [Deployment: OLAP Database Management System] [Kylin: Spark Core high-availability configuration]
一、High-availability configuration: Spark standalone cluster
### --- Modify the spark-env.sh file and distribute it across the cluster
[root@hadoop01 ~]# vim $SPARK_HOME/conf/spark-env.sh
# export SPARK_MASTER_HOST=hadoop01 # comment out these two lines
# export SPARK_MASTER_PORT=7077 # comment out these two lines
~~~ # Append the following line at the end of the file
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop01,hadoop02,hadoop03 -Dspark.deploy.zookeeper.dir=/spark"
~~~ # Distribute it to the other nodes
[root@hadoop01 ~]# rsync-script $SPARK_HOME/conf/spark-env.sh
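Note that rsync-script is a custom helper from earlier in this series, not a standard tool. A minimal sketch of such a distribution script, assuming passwordless SSH and an identical path layout on every node (hostnames here are assumptions):
#!/bin/bash
# rsync-script (sketch): push the given file to the remaining cluster nodes.
for host in hadoop02 hadoop03; do
    rsync -av "$1" "${host}:$1"
done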
二、Start the cluster and verify
### --- Start the Spark cluster on hadoop01
~~~ # Restart the Spark service on the hadoop01 node; the HDFS, YARN, and ZooKeeper services must already be running
[root@hadoop01 ~]# stop-all-spark.sh
[root@hadoop01 ~]# start-all-spark.sh
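start-all-spark.sh and stop-all-spark.sh are likewise custom wrappers. A minimal sketch, assuming they simply delegate to Spark's bundled standalone scripts:
#!/bin/bash
# start-all-spark.sh (sketch): starts a Master on this node plus the
# Workers listed in $SPARK_HOME/conf/slaves.
"$SPARK_HOME"/sbin/start-all.sh
The matching stop-all-spark.sh would call "$SPARK_HOME"/sbin/stop-all.sh instead.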
~~~ # Check the running processes
[root@hadoop01 ~]# jps
Hadoop01 Master Worker # the active Master is on Hadoop01
Hadoop02 Worker
Hadoop03 Worker
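To collect the process list from every node without logging in to each one, a quick loop over SSH works (assuming passwordless SSH and jps on the remote PATH):
for host in hadoop01 hadoop02 hadoop03; do
    echo "== ${host} =="
    ssh "${host}" jps
done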
三、Open http://hadoop01:8080/ in a browser; the Master status shows ALIVE
### --- Start a standby Master service on Hadoop02
[root@hadoop02 ~]# start-master.sh
~~~ # Check the running processes
[root@hadoop01 ~]# jps
Hadoop01 Master Worker # the active Master is still on Hadoop01
Hadoop02 Master Worker # the standby Master now runs on Hadoop02
Hadoop03 Worker
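With both Masters up, their ALIVE/STANDBY state can also be checked from the shell: the standalone Master web UI serves a JSON summary at /json on the same port, including a status field (exact field formatting may vary between Spark versions):
for host in hadoop01 hadoop02; do
    echo -n "${host}: "
    curl -s "http://${host}:8080/json/" | grep -o '"status"[^,}]*'
done
One host should report ALIVE and the other STANDBY.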
四、Open http://hadoop02:8080/ in a browser; this Master's status is STANDBY
五、Kill the Master process on Hadoop01: the Master on Hadoop02 becomes ALIVE, and a Master restarted on Hadoop01 afterwards rejoins as STANDBY (see the sketch below)
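The failover in step 五 can be driven from the shell as well; a sketch (the grep pattern relies on the jps output shown above):
~~~ # Kill the ALIVE Master on hadoop01 and let the standby take over
[root@hadoop01 ~]# kill -9 $(jps | grep -w Master | awk '{print $1}')
~~~ # Once the ZooKeeper session expires (typically a few seconds),
~~~ # http://hadoop02:8080/ reports ALIVE and running applications reattach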
六、Stop the Spark cluster
### --- Shut down the cluster
~~~ # Stop the Spark cluster
[root@hadoop02 ~]# stop-all-spark.sh
~~~ # Stop the ZooKeeper service
[root@hadoop02 ~]# ./zk-all.sh stop
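zk-all.sh is another custom wrapper from this series. A minimal sketch, assuming it fans zkServer.sh out to every ZooKeeper node over passwordless SSH with zkServer.sh on the remote PATH:
#!/bin/bash
# zk-all.sh (sketch): run "zkServer.sh <start|stop|status>" on every ZK node.
for host in hadoop01 hadoop02 hadoop03; do
    ssh "${host}" "zkServer.sh $1"
done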
七、Notes on ZooKeeper
### --- High-availability recovery modes (ZOOKEEPER or local FILESYSTEM; with ZOOKEEPER the cluster state is recorded in ZK)
[root@hadoop02 ~]# zkCli.sh
[zk: localhost:2181(CONNECTED) 1] ls / # where the state is recorded
[zookeeper, spark]
[zk: localhost:2181(CONNECTED) 2] ls /spark # election and recovery information
[leader_election, master_status]
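Listing one level deeper shows the recovery data itself; the znode contents are serialized Java objects, so ls is informative while get output is not human-readable (typically one child per registered worker and application):
[zk: localhost:2181(CONNECTED) 3] ls /spark/master_status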
===============================END===============================