ZooKeeper
1. Configure the basic environment
(1) Change the hostnames
Change the hostnames of the 3 nodes to zookeeper1, zookeeper2, and zookeeper3 with the following commands:
zookeeper1 node:
[root@localhost ~]# hostnamectl set-hostname zookeeper1
zookeeper2 node:
[root@localhost ~]# hostnamectl set-hostname zookeeper2
zookeeper3 node:
[root@localhost ~]# hostnamectl set-hostname zookeeper3
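The new hostname takes effect immediately, but the prompt of an already-open shell keeps the old value; starting a new shell (or logging back in) refreshes it, for example on the zookeeper1 node:
[root@localhost ~]# bash
[root@zookeeper1 ~]#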
After the change, verify the hostname on each node with hostnamectl.
zookeeper1 node:
[root@zookeeper1 ~]# hostnamectl
Static hostname: zookeeper1
Icon name: computer-vm
Chassis: vm
Machine ID: dae72fe0cc064eb0b7797f25bfaf69df
Boot ID: c642ea4be7d349d0a929e557f23ce3dc
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-229.el7.x86_64
Architecture: x86_64
zookeeper2 node:
[root@zookeeper2 ~]# hostnamectl
Static hostname: zookeeper2
Icon name: computer-vm
Chassis: vm
Machine ID: dae72fe0cc064eb0b7797f25bfaf69df
Boot ID: cfcaf92af7a44028a098dc4792b441f4
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-229.el7.x86_64
Architecture: x86_64
zookeeper3 node:
[root@zookeeper3 ~]# hostnamectl
Static hostname: zookeeper3
Icon name: computer-vm
Chassis: vm
Machine ID: dae72fe0cc064eb0b7797f25bfaf69df
Boot ID: cff5bbd45243451e88d14e1ec75098c0
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-229.el7.x86_64
Architecture: x86_64
(2) Configure the hosts file
Modify the /etc/hosts file on all 3 nodes so that each of them contains the following entries:
#vi /etc/hosts
192.168.200.135 zookeeper1
192.168.200.136 zookeeper2
192.168.200.137 zookeeper3
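As a quick sanity check, each hostname should now resolve and answer from any node, for example:
#ping -c 3 zookeeper2
#ping -c 3 zookeeper3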
(3) Configure the YUM repository
Upload the provided gpmall-repo directory to /opt on all 3 nodes. First, on each node, move the existing files under /etc/yum.repos.d to the /media directory:
#mv /etc/yum.repos.d/* /media/
On all 3 nodes, create /etc/yum.repos.d/local.repo with the following content:
#cat /etc/yum.repos.d/local.repo
[gpmall]
name=gpmall
baseurl=file:///opt/gpmall-repo
gpgcheck=0
enabled=1
Then rebuild the YUM cache and list the packages available from the local repository:
#yum clean all
#yum list
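If the repository has been picked up, it should also appear in the repo list, for example:
#yum repolist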
2. Build the ZooKeeper cluster
(1) Install the JDK
Install the Java 1.8 environment on all 3 nodes:
#yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
#java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
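ZooKeeper's startup script uses JAVA_HOME when it is set and otherwise falls back to the java found on the PATH, so the yum-installed OpenJDK above is enough as-is. If the install location needs to be confirmed (a quick check; the exact path depends on the package build), it can be resolved with:
#readlink -f $(which java)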
(2) Unpack the ZooKeeper package
Extract the zookeeper-3.4.14.tar.gz package on all 3 nodes:
#tar -zxvf zookeeper-3.4.14.tar.gz
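If the tarball was uploaded to zookeeper1 only, it can be copied to the other two nodes before extracting there as well (a sketch assuming root SSH access between the nodes and /root as the working directory):
[root@zookeeper1 ~]# scp zookeeper-3.4.14.tar.gz zookeeper2:/root/
[root@zookeeper1 ~]# scp zookeeper-3.4.14.tar.gz zookeeper3:/root/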
(3) Modify the configuration file on the 3 nodes
On the zookeeper1 node, enter the zookeeper-3.4.14/conf directory, rename zoo_sample.cfg to zoo.cfg, and edit the file so that it contains the following:
[root@zookeeper1 conf]# vi zoo.cfg
[root@zookeeper1 conf]# grep -n '^[a-zA-Z]' zoo.cfg
2:tickTime=2000
5:initLimit=10
8:syncLimit=5
12:dataDir=/tmp/zookeeper
14:clientPort=2181
29:server.1=192.168.200.135:2888:3888
30:server.2=192.168.200.136:2888:3888
31:server.3=192.168.200.137:2888:3888
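Each server.N line maps a member's ID to its address; port 2888 carries follower-to-leader traffic and port 3888 is used for leader election. The file is identical on all three nodes, so instead of editing it three times it can be copied from zookeeper1 (a sketch assuming root SSH access and the same install path on every node):
[root@zookeeper1 conf]# scp zoo.cfg zookeeper2:/root/zookeeper-3.4.14/conf/
[root@zookeeper1 conf]# scp zoo.cfg zookeeper3:/root/zookeeper-3.4.14/conf/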
(4) Create the myid files
On each node, create the dataDir configured above (/tmp/zookeeper) and write that node's ID into a file named myid.
zookeeper1 node:
[root@zookeeper1 ~]# mkdir /tmp/zookeeper
[root@zookeeper1 ~]# vi /tmp/zookeeper/myid
[root@zookeeper1 ~]# cat /tmp/zookeeper/myid
1
zookeeper2 node:
[root@zookeeper2 ~]# mkdir /tmp/zookeeper
[root@zookeeper2 ~]# vi /tmp/zookeeper/myid
[root@zookeeper2 ~]# cat /tmp/zookeeper/myid
2
zookeeper3 node:
[root@zookeeper3 ~]# mkdir /tmp/zookeeper
[root@zookeeper3 ~]# vi /tmp/zookeeper/myid
[root@zookeeper3 ~]# cat /tmp/zookeeper/myid
3
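The value in myid must be exactly the N of that node's server.N entry in zoo.cfg. A quick cross-check on any node (a sketch assuming the install path used above):
#grep "^server.$(cat /tmp/zookeeper/myid)=" /root/zookeeper-3.4.14/conf/zoo.cfg
Note that /tmp/zookeeper is used here only because it matches the dataDir set in zoo.cfg; since /tmp may be cleared on reboot, a persistent path would be preferable outside a lab environment.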
(5) Start the ZooKeeper service
In the zookeeper-3.4.14/bin directory on each node, start the service and then check its status.
zookeeper1 node:
[root@zookeeper1 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper1 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
zookeeper2 node:
[root@zookeeper2 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... already running as process 10175.
[root@zookeeper2 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader
zookeeper3 node:
[root@zookeeper3 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper3 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
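With one leader and two followers reported, the ensemble is up. As a final check, a client can connect to any member with the zkCli.sh script shipped in the same bin directory and create a test znode (the znode name /test below is arbitrary):
[root@zookeeper1 bin]# ./zkCli.sh -server zookeeper2:2181
[zk: zookeeper2:2181(CONNECTED) 0] create /test hello
[zk: zookeeper2:2181(CONNECTED) 1] get /test
[zk: zookeeper2:2181(CONNECTED) 2] quit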