Getting Started with Hadoop 3.x: Installing and Deploying a Fully Distributed Hadoop Cluster
Author: Internet
I. Overview
In the previous article we covered compiling Hadoop 3.1.1 from source. Here we deploy that build as the target cluster for my later remote code debugging work. I will walk through the important deployment steps, which I hope will be useful to newcomers; readers already familiar with Hadoop can skip this.
Cluster nodes:
Node | Hostname | Role |
192.168.0.101 | master.hadoop.ljs | master node |
192.168.0.102 | worker1.hadoop.ljs | worker1 node |
192.168.0.103 | worker2.hadoop.ljs | worker2 node |
Software versions:
Apache Hadoop 3.1.1
JDK 1.8
CentOS 7.2
II. Installation and Deployment
1. For the cluster initialization work — passwordless SSH, disabling the firewall, JDK installation, and so on — see the earlier article "Spark2.x入门:集群(Standalone)安装、配置、启动脚本详解", which covers those steps in detail; they are not repeated here.
2. Edit the configuration files. Configure them on the master node, then copy them directly to the other two worker nodes:
1) Edit hadoop-env.sh and add the following. I installed as root; if you installed as a different user, set these variables to that user instead:
# export JAVA_HOME=
export JAVA_HOME=/opt/jdk1.8.0_112
# Location of Hadoop. By default, Hadoop will attempt to determine
# this location based upon its execution path.
# export HADOOP_HOME=
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
2) Edit hdfs-site.xml as follows:
<configuration>
  <!-- NameNode metadata directory -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/app/dataDir/dfs/name</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
  </property>
  <!-- DataNode data directory, i.e. where your actual data blocks live -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/app/dataDir/dfs/data</value>
    <description>Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
  </property>
  <!-- NameNode web UI; 50070 was the Hadoop 2 default (Hadoop 3 defaults to 9870), set explicitly here -->
  <property>
    <name>dfs.namenode.http-address</name>
    <value>master.hadoop.ljs:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master.hadoop.ljs:50090</value>
  </property>
  <!-- Three replicas -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Disable file permission checking -->
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
    <description>need not permissions</description>
  </property>
</configuration>
3) Edit core-site.xml as follows:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master.hadoop.ljs:8020</value>
  </property>
  <!-- Temporary file path -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/app/dataDir/tmp</value>
  </property>
</configuration>
4) Edit yarn-site.xml. To make logs easier to view this enables log aggregation, and it also sets how much memory each NodeManager may allocate:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master.hadoop.ljs</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>2592000</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://master.hadoop.ljs:19888/jobhistory/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/app/dataDir/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/data/app/dataDir/yarn/log</value>
  </property>
  <property>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>604800</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/app-logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.delete.debug-delay-sec</name>
    <value>600</value>
  </property>
  <property>
    <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
    <value>60000</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>
</configuration>
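As a sanity check on the memory settings in yarn-site.xml: with yarn.nodemanager.resource.memory-mb = 2048 and yarn.scheduler.minimum-allocation-mb = 1024, each NodeManager can host at most two minimum-size containers, and the maximum allocation must not exceed the NodeManager's total. A quick shell sketch with the values hardcoded from the config above:

```shell
# Values taken from the yarn-site.xml above
nm_mem_mb=2048       # yarn.nodemanager.resource.memory-mb
min_alloc_mb=1024    # yarn.scheduler.minimum-allocation-mb
max_alloc_mb=2048    # yarn.scheduler.maximum-allocation-mb

# Maximum number of minimum-size containers one NodeManager can host
echo "containers per NodeManager: $(( nm_mem_mb / min_alloc_mb ))"

# maximum-allocation-mb must fit inside the NodeManager's total memory,
# or such container requests can never be satisfied
[ "$max_alloc_mb" -le "$nm_mem_mb" ] && echo "max allocation fits"
```

The same arithmetic applies to the vcore settings: 2 vcores per NodeManager at a minimum allocation of 1 vcore gives at most 2 concurrent containers per node.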
5) Edit mapred-site.xml. The yarn-site.xml above also references the history server, so the addresses here must match it:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master.hadoop.ljs:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master.hadoop.ljs:19888</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value>
  </property>
</configuration>
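One addition worth knowing about: on some Hadoop 3.x setups, MapReduce jobs fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster" unless the MapReduce classpath is declared explicitly. A hedged fragment to add to mapred-site.xml if you hit that error (the property name is standard; $HADOOP_MAPRED_HOME must resolve to your Hadoop install directory, which whether it does depends on your environment):

```xml
<property>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
```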
6) Edit the workers file. Since replication was set to 3 above, at least three DataNodes must be listed here; if you configured a single replica, one or more DataNodes will do. File content:
[root@master hadoop]# cat workers
master.hadoop.ljs
worker1.hadoop.ljs
worker2.hadoop.ljs
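If you prefer to generate the file rather than edit it by hand, a minimal sketch (hostnames as above; in the real cluster the file lives under /data/app/hadoop-3.1.1/etc/hadoop/, here it is written to the current directory for illustration):

```shell
# Write the three DataNode hostnames, one per line, into the workers file.
cat > workers <<'EOF'
master.hadoop.ljs
worker1.hadoop.ljs
worker2.hadoop.ljs
EOF

# Line count should be at least dfs.replication (3 in this article)
wc -l < workers
```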
3. With the configuration files done, copy the installation to the worker1 and worker2 nodes:
[root@master hadoop]# scp -r /data/app/hadoop-3.1.1 worker1:/data/app/
[root@master hadoop]# scp -r /data/app/hadoop-3.1.1 worker2:/data/app/
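With more workers, the copy is less error-prone as a loop. A sketch, shown as a dry run that only prints the commands (remove the leading echo to actually copy; assumes passwordless SSH to each host, as set up in the initialization step):

```shell
for host in worker1 worker2; do
  # Dry run: prints the scp command for each worker.
  # Drop the `echo` to perform the real copy.
  echo scp -r /data/app/hadoop-3.1.1 "$host":/data/app/
done
```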
4. For convenience, add the following environment variables to /etc/profile:
export HADOOP_HOME=/data/app/hadoop-3.1.1
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
Then run the following for the change to take effect:
source /etc/profile
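A quick way to confirm the variables are in effect after sourcing (the install path is the one used throughout this article):

```shell
# Same exports as in /etc/profile above
export HADOOP_HOME=/data/app/hadoop-3.1.1
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

# Both bin and sbin should now appear on PATH; with a real install,
# `hadoop version` would then resolve without a full path.
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "bin on PATH" ;;
esac
```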
5. Start the cluster. Note that before the very first start, the NameNode must be formatted once on the master node with hdfs namenode -format (running it again later wipes the HDFS metadata). The common commands:
1) Start the whole cluster (run on the NameNode):
/data/app/hadoop-3.1.1/sbin/start-all.sh
2) Stop the whole cluster (run on the NameNode):
/data/app/hadoop-3.1.1/sbin/stop-all.sh
3) Start/stop only the NameNode (run on the NameNode; in Hadoop 3 these per-daemon scripts still work but are deprecated in favor of hdfs --daemon start/stop namenode):
/data/app/hadoop-3.1.1/sbin/hadoop-daemon.sh start namenode
/data/app/hadoop-3.1.1/sbin/hadoop-daemon.sh stop namenode
4) Start/stop a single DataNode (run on each DataNode):
/data/app/hadoop-3.1.1/sbin/hadoop-daemon.sh start datanode
/data/app/hadoop-3.1.1/sbin/hadoop-daemon.sh stop datanode
5) Start/stop all DataNodes (run on the NameNode):
/data/app/hadoop-3.1.1/sbin/hadoop-daemons.sh start datanode
/data/app/hadoop-3.1.1/sbin/hadoop-daemons.sh stop datanode
6) Start/stop the whole YARN service (run on the NameNode):
/data/app/hadoop-3.1.1/sbin/start-yarn.sh
/data/app/hadoop-3.1.1/sbin/stop-yarn.sh
7) Start/stop the YARN ResourceManager (run on the NameNode):
/data/app/hadoop-3.1.1/sbin/yarn-daemon.sh start resourcemanager
/data/app/hadoop-3.1.1/sbin/yarn-daemon.sh stop resourcemanager
8) Start/stop a single YARN NodeManager (run on each NodeManager):
/data/app/hadoop-3.1.1/sbin/yarn-daemon.sh start nodemanager
/data/app/hadoop-3.1.1/sbin/yarn-daemon.sh stop nodemanager
9) Start/stop all YARN NodeManagers (run on the NameNode):
/data/app/hadoop-3.1.1/sbin/yarn-daemons.sh start nodemanager
/data/app/hadoop-3.1.1/sbin/yarn-daemons.sh stop nodemanager
10) Start/stop the history server:
/data/app/hadoop-3.1.1/sbin/mr-jobhistory-daemon.sh start historyserver
/data/app/hadoop-3.1.1/sbin/mr-jobhistory-daemon.sh stop historyserver
6. Once the cluster is up, open http://master.hadoop.ljs:50070 (the NameNode web UI address configured in hdfs-site.xml) to verify it.
Source: https://blog.51cto.com/15080019/2653906