Version selection
This guide uses Hadoop 2.10.1 and HBase 2.3.5. (Note: some of the command transcripts below were captured on an older setup with hadoop-2.7.5 and hbase-1.2.6; the steps are identical, only the version numbers in the paths differ.)

1. Install ZooKeeper

2. Install Hadoop

3. Download the installation package

  https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/2.3.5/hbase-2.3.5-bin.tar.gz

4. Extract to the target directory

  [hadoop@hadoop1 ~]$ tar -zxvf hbase-1.2.6-bin.tar.gz -C /usr/local/

5. Edit the configuration files

The configuration files are in the conf folder of the unpacked installation.

5.1 Edit hbase-env.sh

  [hadoop@hadoop1 conf]$ vi hbase-env.sh
  export JAVA_HOME=/usr/local/jdk1.8
  export HBASE_MANAGES_ZK=false   # use the external ZooKeeper cluster, not the bundled one

5.2 Edit hbase-site.xml

  [hadoop@hadoop1 conf]$ vi hbase-site.xml
  <configuration>
    <property>
      <!-- Path under which HBase stores its data on HDFS -->
      <name>hbase.rootdir</name>
      <value>hdfs://myha01/hbase126</value>
    </property>
    <property>
      <!-- Run HBase in fully distributed mode -->
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>
    <property>
      <!-- ZooKeeper quorum; separate multiple addresses with commas -->
      <name>hbase.zookeeper.quorum</name>
      <value>hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181</value>
    </property>
  </configuration>
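A malformed hbase-site.xml is a common cause of startup failures, so it can be worth checking that the file is well-formed XML before distributing it. A minimal sketch, assuming python3 is available on the node; the temp file below is a stand-in for the real conf/hbase-site.xml:

```shell
# Write a stand-in config to a temp file; on a real node, point at conf/hbase-site.xml instead.
conf=$(mktemp)
cat > "$conf" <<'XML'
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
XML
# Parse with python3's stdlib XML parser; any syntax error aborts with a traceback.
python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1]); print("well-formed")' "$conf"
```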

5.3 Edit regionservers

  [hadoop@hadoop1 conf]$ vi regionservers
  hadoop1
  hadoop2
  hadoop3
  hadoop4

5.4 Edit backup-masters

  [hadoop@hadoop1 conf]$ vi backup-masters
  hadoop4

5.5 Copy hdfs-site.xml and core-site.xml

This is the most important step: copy Hadoop's hdfs-site.xml and core-site.xml into HBase's conf directory, otherwise HBase cannot resolve the HDFS nameservice (hdfs://myha01) used in hbase.rootdir.

  [hadoop@hadoop1 conf]$ cd ~/apps/hadoop-2.7.5/etc/hadoop/
  [hadoop@hadoop1 hadoop]$ cp core-site.xml hdfs-site.xml ~/apps/hbase-1.2.6/conf/

5.6 Distribute the HBase installation to the other nodes

Before distributing, delete the docs folder under the HBase directory to keep the copy small.

  [hadoop@hadoop1 hbase-1.2.6]$ rm -rf docs/
  [hadoop@hadoop1 apps]$ scp -r hbase-1.2.6/ hadoop2:$PWD
  [hadoop@hadoop1 apps]$ scp -r hbase-1.2.6/ hadoop3:$PWD
  [hadoop@hadoop1 apps]$ scp -r hbase-1.2.6/ hadoop4:$PWD
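The three scp commands above can be collapsed into a loop. Shown here as a dry run (echo prints each command instead of executing it), with the hostnames and directory taken from this guide:

```shell
# Dry run: print the copy command for each target node; drop 'echo' to actually copy.
for host in hadoop2 hadoop3 hadoop4; do
  echo scp -r hbase-1.2.6/ "$host:$PWD"
done
```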

5.7 Configure environment variables

Configure this on every server.

  [hadoop@hadoop1 apps]$ vi /etc/profile
  #HBase
  export HBASE_HOME=/home/hadoop/apps/hbase-1.2.6
  export PATH=$PATH:$HBASE_HOME/bin
  [hadoop@hadoop1 apps]$ source /etc/profile

Starting the HBase cluster

1. Start the ZooKeeper cluster

Run the following command on every ZooKeeper node:

  [hadoop@hadoop1 apps]$ zkServer.sh start
  ZooKeeper JMX enabled by default
  Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
  Starting zookeeper ... STARTED
  [hadoop@hadoop1 apps]$
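Rather than logging into each ZooKeeper node by hand, the start command can be issued over ssh from one machine (assuming passwordless ssh between the nodes, which the scp step above already requires). A dry-run sketch using this guide's hostnames:

```shell
# Dry run: print the remote start command for each node; drop 'echo' to actually run it.
for host in hadoop1 hadoop2 hadoop3 hadoop4; do
  echo ssh "$host" zkServer.sh start
done
# Afterwards, the same loop with 'zkServer.sh status' shows which node is the leader.
```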

2. Start the HDFS cluster (and YARN, if needed)

Start the YARN cluster only if you need to run MapReduce jobs; otherwise it can stay down.

  [hadoop@hadoop1 apps]$ start-dfs.sh
  Starting namenodes on [hadoop1 hadoop2]
  hadoop2: starting namenode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-namenode-hadoop2.out
  hadoop1: starting namenode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-namenode-hadoop1.out
  hadoop3: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop3.out
  hadoop4: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop4.out
  hadoop2: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop2.out
  hadoop1: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop1.out
  Starting journal nodes [hadoop1 hadoop2 hadoop3]
  hadoop3: starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop3.out
  hadoop2: starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop2.out
  hadoop1: starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop1.out
  Starting ZK Failover Controllers on NN hosts [hadoop1 hadoop2]
  hadoop2: starting zkfc, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-zkfc-hadoop2.out
  hadoop1: starting zkfc, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-zkfc-hadoop1.out
  [hadoop@hadoop1 apps]$

Once startup completes, check the state of the NameNodes:

  [hadoop@hadoop1 apps]$ hdfs haadmin -getServiceState nn1
  standby
  [hadoop@hadoop1 apps]$ hdfs haadmin -getServiceState nn2
  active
  [hadoop@hadoop1 apps]$
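The two checks above can be reduced to a small filter that reports which NameNode is active. The `active_nn` helper below is hypothetical; it is fed simulated input here, matching the states in the transcript above, rather than a live cluster:

```shell
# Hypothetical helper: given "name state" lines on stdin, print the name whose state is "active".
active_nn() { awk '$2 == "active" {print $1}'; }

# Simulated input matching the transcript above; real input would come from:
#   for nn in nn1 nn2; do echo "$nn $(hdfs haadmin -getServiceState $nn)"; done
printf 'nn1 standby\nnn2 active\n' | active_nn   # → nn2
```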

3. Start HBase

With the ZooKeeper and HDFS clusters running normally, start the HBase cluster with start-hbase.sh. Whichever node you run this command on becomes the active master.

  [hadoop@hadoop1 conf]$ start-hbase.sh
  starting master, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-master-hadoop1.out
  Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
  Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
  hadoop3: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop3.out
  hadoop4: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop4.out
  hadoop2: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop2.out
  hadoop3: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
  hadoop3: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
  hadoop4: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
  hadoop4: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
  hadoop2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
  hadoop2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
  hadoop1: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop1.out
  hadoop4: starting master, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-master-hadoop4.out
  [hadoop@hadoop1 conf]$

The startup log shows that:
(1) the master starts first, on the node where the command was run;
(2) regionservers then start on the nodes listed in the regionservers file (hadoop1 through hadoop4);
(3) finally, a second master process starts on the backup node configured in backup-masters (hadoop4).

Verifying the startup

1. Check that each process started correctly

The active and backup master nodes should each be running an HMaster process, and every regionserver node an HRegionServer process.
(Figures 1-4: jps output on each node, showing the processes each node should be running given the configuration above.)

2. Check the web UI

hadoop1:
http://hadoop1:16010/master-status
hadoop4:
http://hadoop4:16010/master-status

3. Verify high availability

  [root@hadoop1 ~]# jps
  37266 NameNode
  39444 HRegionServer
  2421 QuorumPeerMain
  2517 JournalNode
  37879 JobHistoryServer
  39739 Jps
  31486 DFSZKFailoverController
  39262 HMaster
  [root@hadoop1 ~]# kill -9 39262
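The kill step can be scripted so the PID does not have to be copied by hand. The `hmaster_pid` helper below is hypothetical; it is exercised here on the jps sample from the transcript above rather than on a live cluster:

```shell
# Hypothetical helper: read jps output on stdin, print the HMaster PID.
hmaster_pid() { awk '$2 == "HMaster" {print $1}'; }

# Sample jps output from the transcript above; on a live master node you would run:
#   jps | hmaster_pid | xargs -r kill -9
printf '37266 NameNode\n39262 HMaster\n39444 HRegionServer\n' | hmaster_pid   # → 39262
```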

The hadoop1 web UI is no longer reachable, and hadoop4 takes over as the active master.

4. If a process did not start on some node, start it manually

Start the HMaster process (a missing HRegionServer can be started the same way with hbase-daemon.sh start regionserver):

  [root@hadoop1 ~]# jps
  37266 NameNode
  39444 HRegionServer
  2421 QuorumPeerMain
  2517 JournalNode
  31486 DFSZKFailoverController
  40382 Jps
  [root@hadoop1 ~]# hbase-daemon.sh start master
  running master, logging to /usr/local/hbase-2.3.5//logs/hbase-root-master-hadoop1.out
  [root@hadoop1 ~]# jps
  37266 NameNode
  39444 HRegionServer
  2421 QuorumPeerMain
  2517 JournalNode
  40474 HMaster
  40603 Jps
  31486 DFSZKFailoverController