
Hadoop: upgrading from Hadoop 2.7.7 to Hadoop 2.8.5


Server layout

The node roles, as used in the commands below:

192.168.100.101, 102: NameNode
192.168.100.101, 102, 103: JournalNode
192.168.100.102, 103: ResourceManager
192.168.100.104 - 107: DataNode

 

Preparation

Stop the services

stop-yarn.sh
stop-dfs.sh
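
Before taking backups it is worth confirming that the daemons really exited on every node. A quick check with jps (a sketch, not part of the original steps):

# expect no NameNode/DataNode/JournalNode/ResourceManager/NodeManager in the output
for i in 192.168.100.{101..107};do echo "=======$i=======";ssh $i "jps";done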

 

Backup

Back up the NameNode directory

---------- run on 101, 102 -----------------

The dfs.namenode.name.dir setting in hdfs-site.xml:

vi /app/hadoop-2.7.7/etc/hadoop/hdfs-site.xml

<property>
    <name>dfs.namenode.name.dir</name>
    <value>/app/hadoop-2.7.7/tmp/name</value>
</property>

mkdir /app/bak-hadoop-2.7.7
cd /app/bak-hadoop-2.7.7
cp -r /app/hadoop-2.7.7/tmp/name ./

Back up the journal

-- run on 101, 102, 103 --

The dfs.journalnode.edits.dir setting in hdfs-site.xml:

vi /app/hadoop-2.7.7/etc/hadoop/hdfs-site.xml

<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/app/hadoop-2.7.7/tmp/journal</value>
</property>

mkdir -p /app/bak-hadoop-2.7.7
cd /app/bak-hadoop-2.7.7
cp -r /app/hadoop-2.7.7/tmp/journal ./
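
A quick way to double-check both backups is to compare the size of each original with its copy; a sketch, using the paths above:

du -sh /app/hadoop-2.7.7/tmp/name /app/bak-hadoop-2.7.7/name        # on 101, 102
du -sh /app/hadoop-2.7.7/tmp/journal /app/bak-hadoop-2.7.7/journal  # on 101, 102, 103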

Install the new Hadoop version

---------- run on 101 -----------------

Unpack

cd /app
tar -zxvf hadoop-2.8.5.tar.gz

Distribute to the other machines (the hadoop-2.8.5/share/doc folder can be deleted first to speed up the copy)

for i in 192.168.100.{102..107};do echo "=======$i=======";scp -r /app/hadoop-2.8.5 $i:/app/;done
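
A simple sanity check (a sketch) that every node actually received the new release:

# each node should list the new hadoop binary
for i in 192.168.100.{102..107};do echo "=======$i=======";ssh $i "ls /app/hadoop-2.8.5/bin/hadoop";done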

Add the rolling-upgrade setting to hdfs-site.xml

------------- run on 101 --------------------

vi /app/hadoop-2.8.5/etc/hadoop/hdfs-site.xml

<property>
    <name>dfs.namenode.duringRollingUpgrade.enable</name>
    <value>true</value>
</property>

 

Distribute to the other machines

for i in 192.168.100.{102..107};do echo "=======$i=======";scp /app/hadoop-2.8.5/etc/hadoop/hdfs-site.xml $i:/app/hadoop-2.8.5/etc/hadoop/;done

Update the environment variable

Point HADOOP_HOME at the hadoop-2.8.5 installation directory

vi ~/.bash_profile

Change export HADOOP_HOME=/app/hadoop-2.7.7 to export HADOOP_HOME=/app/hadoop-2.8.5

Distribute to the other machines

for i in 192.168.100.{102..107};do echo "=======$i=======";scp ~/.bash_profile $i:~/;ssh $i "source ~/.bash_profile"; done
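
Note that sourcing ~/.bash_profile through a non-interactive ssh command only affects that single session; later login shells pick up the copied file on their own. To confirm each node will resolve the new path on a real login, something like this works (a sketch):

# a login shell (-l) reads ~/.bash_profile, so this reflects what a fresh login will see
for i in 192.168.100.{102..107};do echo "=======$i=======";ssh $i "bash -lc 'which hadoop'";done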

 

Verify that the environment variable took effect

which hadoop

/app/hadoop-2.8.5/bin/hadoop
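
The binary itself can also be asked for its release:

hadoop version   # the first line should read: Hadoop 2.8.5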

Rolling upgrade

Start HDFS on Hadoop 2.7.7

/app/hadoop-2.7.7/sbin/start-dfs.sh

Create an fsimage for rollback

/app/hadoop-2.7.7/bin/hdfs dfsadmin -rollingUpgrade prepare

Check the status of the rollback image

/app/hadoop-2.7.7/bin/hdfs dfsadmin -rollingUpgrade query
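
The prepare step runs in the background, so the query has to be repeated until it reports that the rollback image is ready (the Apache rolling-upgrade docs describe a "Proceed with rolling upgrade" message); only then should the old NameNodes be shut down. A small polling loop (a sketch) could be:

# keep polling until the rollback image is ready
while ! /app/hadoop-2.7.7/bin/hdfs dfsadmin -rollingUpgrade query | grep -q "Proceed with rolling upgrade"; do sleep 10; done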

Stop the Hadoop 2.7.7 NameNodes and DataNodes

---- run on 101, 102 -----

/app/hadoop-2.7.7/sbin/hadoop-daemon.sh stop namenode

 

---- run on 104, 105, 106, 107 -----

/app/hadoop-2.7.7/sbin/hadoop-daemon.sh stop datanode

Start the Hadoop 2.8.5 NameNodes

Start the NameNodes from the new Hadoop version with the "-rollingUpgrade started" option

---- run on 101, 102 -----

/app/hadoop-2.8.5/sbin/hadoop-daemon.sh start namenode -rollingUpgrade started
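
The post does not show restarting the DataNodes, but they also need to come back up from the 2.8.5 install before the cluster is usable again; assuming the same layout as above, that would be:

---- run on 104, 105, 106, 107 -----

/app/hadoop-2.8.5/sbin/hadoop-daemon.sh start datanode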

Start the Hadoop 2.8.5 ResourceManagers

---- run on 102, 103 -----

/app/hadoop-2.8.5/sbin/yarn-daemon.sh start resourcemanager

Starting the ResourceManager reported an error:

java.io.FileNotFoundException: /tmp/ats/entity-file-history/active does not exist

Fix:

Create the directory /tmp/ats/entity-file-history/active on HDFS

hadoop fs -mkdir -p /tmp/ats/entity-file-history/active
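
With the directory in place the ResourceManager starts cleanly. The NodeManagers are not shown in the post either; assuming they run on the DataNode hosts, they would be brought back from the new install the same way:

---- run on 104, 105, 106, 107 -----

/app/hadoop-2.8.5/sbin/yarn-daemon.sh start nodemanager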

Finalize the rolling upgrade

/app/hadoop-2.8.5/bin/hdfs dfsadmin -rollingUpgrade finalize
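
Finalizing cannot be undone, so once it is done a quick health check confirms the cluster is back to normal (a sketch):

hdfs dfsadmin -report                  # all DataNodes should show up as live
hdfs dfsadmin -rollingUpgrade query    # should no longer report an upgrade in progress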

 

Source: https://www.cnblogs.com/simple-li/p/15722320.html