Installing HBase 1.2.0 on CDH 5.8.4 (Hadoop 2.6.0)
1. Install ZooKeeper
1.1 ZooKeeper download address
http://archive.cloudera.com/cdh5/cdh/5/zookeeper-3.4.5-cdh5.8.4.tar.gz
1.2 Install
Extract and rename:
[hadoop@host151 bigdata]$ tar -zxvf zookeeper-3.4.5-cdh5.8.4.tar.gz
[hadoop@host151 bigdata]$ mv zookeeper-3.4.5-cdh5.8.4 zookeeper
Create a data directory and add a myid file inside it holding this node's id. Every node in the ZooKeeper cluster must have a distinct myid value; assign 1, 2, 3, ... in turn and edit the file on each node accordingly.
[hadoop@host151 zookeeper]$ mkdir data
[hadoop@host151 zookeeper]$ cd data
[hadoop@host151 data]$ touch myid
[hadoop@host151 data]$ echo '1' > myid
[hadoop@host151 data]$ cat myid
1
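Since the only per-node difference is the myid value, the hostname-to-id assignment above can be captured in a small helper. This is a hypothetical sketch, not part of the original setup: the function name `myid_for_host` is invented here, and the case arms simply mirror the host151/152/153 → 1/2/3 mapping used in this cluster.

```shell
#!/bin/sh
# Hypothetical helper: map a hostname to its ZooKeeper myid.
# The host151/152/153 -> 1/2/3 assignment follows the cluster above;
# adapt the case arms to your own hosts.
myid_for_host() {
  case "$1" in
    host151) echo 1 ;;
    host152) echo 2 ;;
    host153) echo 3 ;;
    *) echo "unknown host: $1" >&2; return 1 ;;
  esac
}

# Usage on each node (dataDir path as configured below):
# myid_for_host "$(hostname)" > /home/hadoop/bigdata/zookeeper/data/myid
```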
Edit zoo.cfg. The myid file created above must sit in the directory configured as dataDir, otherwise ZooKeeper will fail to start, and the number in each server.N entry must match the myid value on that node.
[hadoop@host151 conf]$ mv zoo_sample.cfg zoo.cfg
[hadoop@host151 conf]$ vim zoo.cfg
dataDir=/home/hadoop/bigdata/zookeeper/data
server.1=192.168.206.151:2888:3888
server.2=192.168.206.152:2888:3888
server.3=192.168.206.153:2888:3888
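For reference, a minimal complete zoo.cfg under these settings might look like the following; tickTime, initLimit, syncLimit, and clientPort are the defaults shipped in zoo_sample.cfg, and only dataDir and the server entries are specific to this cluster:

```
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/hadoop/bigdata/zookeeper/data
server.1=192.168.206.151:2888:3888
server.2=192.168.206.152:2888:3888
server.3=192.168.206.153:2888:3888
```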
Distribute to the other nodes (then change myid to 2 on host152 and 3 on host153, as noted above):
[hadoop@host151 bigdata]$ scp -r zookeeper hadoop@host152:/home/hadoop/bigdata
[hadoop@host151 bigdata]$ scp -r zookeeper hadoop@host153:/home/hadoop/bigdata
Start ZooKeeper on each node and check its state; output like the following indicates success (zkServer.sh lives in ZooKeeper's bin directory):
[hadoop@host151 bin]$ ./zkServer.sh restart
[hadoop@host151 bin]$ ./zkServer.sh status
Mode: leader      # this node is the leader
Mode: follower    # this node is a follower
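When checking several nodes it helps to reduce the status output to just the role. The helper below is an illustrative sketch (the name `zk_role` is invented here); it extracts the "Mode:" line that `zkServer.sh status` prints on success.

```shell
#!/bin/sh
# Hypothetical helper: extract the role from `zkServer.sh status` output.
# Reads the status output on stdin and prints just "leader" or "follower".
zk_role() {
  sed -n 's/^Mode: //p'
}

# Usage (run on each node; path as configured above):
# /home/hadoop/bigdata/zookeeper/bin/zkServer.sh status 2>/dev/null | zk_role
```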
2. Install HBase
2.1 Upload, extract, and rename HBase
[hadoop@host151 bigdata]$ tar -zxf hbase-1.2.0-cdh5.8.4.tar.gz
[hadoop@host151 bigdata]$ mv hbase-1.2.0-cdh5.8.4 hbase
2.2 Configure environment variables and reload the profile
[hadoop@host151 bigdata]$ vim /home/hadoop/.bash_profile
export HBASE_HOME=/home/hadoop/bigdata/hbase
export PATH=$PATH:$HBASE_HOME/bin
[hadoop@host151 hbase]$ source /home/hadoop/.bash_profile
2.3 Add the slave nodes to the regionservers file
[hadoop@host151 bigdata]$ cd hbase/conf
[hadoop@host151 conf]$ vim regionservers
host152
host153
2.4 Edit the hbase-env.sh environment settings
[hadoop@host151 conf]$ vim hbase-env.sh
export JAVA_HOME=/opt/jdk1.8.0_131
export HBASE_CLASSPATH=/home/hadoop/bigdata/hbase/conf
export HBASE_MANAGES_ZK=false   # false: use the external ZooKeeper installed above; true: use HBase's bundled ZooKeeper
2.5 Edit hbase-site.xml and add the following properties (inside the <configuration> element):
<property>
  <name>hbase.master</name>
  <value>host151:60000</value>
</property>
<property>
  <name>hbase.master.maxclockskew</name>
  <value>180000</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>host151,host152,host153</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/hadoop/bigdata/datas/hbase/tmp</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://host151:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.wal.provider</name>
  <value>filesystem</value>
</property>
<property>
  <name>zookeeper.session.timeout</name>
  <value>120000</value>
</property>
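A hand-edited hbase-site.xml is easy to break with an unclosed tag. As a quick sanity check before restarting HBase, the sketch below (a hypothetical helper, not part of the original guide) counts opening and closing property tags in the file and flags a mismatch:

```shell
#!/bin/sh
# Hypothetical sanity check for a hand-edited hbase-site.xml:
# count <property> and </property> tags and flag any imbalance.
check_property_tags() {
  open=$(grep -c '<property>' "$1")
  close=$(grep -c '</property>' "$1")
  if [ "$open" -eq "$close" ]; then
    echo "balanced ($open properties)"
  else
    echo "unbalanced: $open open vs $close close" >&2
    return 1
  fi
}

# Usage:
# check_property_tags /home/hadoop/bigdata/hbase/conf/hbase-site.xml
```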
2.6 Distribute to the other nodes
[hadoop@host151 bigdata]$ scp -r hbase hadoop@host152:/home/hadoop/bigdata
[hadoop@host151 bigdata]$ scp -r hbase hadoop@host153:/home/hadoop/bigdata
2.7 Start HBase
Start the whole cluster (from the hbase/bin directory on the master):
./start-hbase.sh
Start or stop a single daemon; master is the master process (HMaster) and regionserver a slave process (HRegionServer).
./hbase-daemon.sh start master
./hbase-daemon.sh stop master
./hbase-daemon.sh start regionserver
./hbase-daemon.sh stop regionserver
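After startup, `jps` on each node should list HMaster on the master and HRegionServer on the slaves. The filter below is an illustrative sketch (the name `hbase_daemons` is invented here) that pulls those process names out of jps-style output, which is one `pid ClassName` pair per line:

```shell
#!/bin/sh
# Hypothetical check: filter jps-style output (pid + class name per line)
# down to the HBase daemon names we expect to see running.
hbase_daemons() {
  awk '$2 == "HMaster" || $2 == "HRegionServer" { print $2 }'
}

# Usage: jps | hbase_daemons
```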
2.8 HBase web UI
For HBase 1.x the HMaster web UI listens on port 16010 by default, e.g. http://host151:16010 (the pre-1.0 default was 60010).
Source: https://blog.csdn.net/SimpleSimpleSimples/article/details/104153732