Ideally we would install Hive 3+ directly, but some of our clusters run Hive 1.2 and this cluster has to serve as their pre-release environment, so we are stuck with version 1.2.

1. Extract the package

[hadoop@bigdata1 package]# tar -xzvf apache-hive-1.2.2-bin.tar.gz
[hadoop@bigdata1 package]# mv apache-hive-1.2.2-bin ../hive122
[hadoop@bigdata1 package]# cd ../hive122
[root@bigdata1 hive122]# pwd
/home/hadoop/hive122
[root@bigdata1 hive122]# ls -lrt
total 72
-rw-r--r-- 1 hadoop hadoop   397 Mar 31  2017 NOTICE
-rw-r--r-- 1 hadoop hadoop 24754 Mar 31  2017 LICENSE
-rw-r--r-- 1 hadoop hadoop  4255 Apr  1  2017 RELEASE_NOTES.txt
-rw-r--r-- 1 hadoop hadoop  4374 Apr  1  2017 README.txt
drwxr-xr-x 4 hadoop hadoop  4096 Sep  8 11:17 examples
drwxr-xr-x 3 hadoop hadoop  4096 Sep  8 11:17 scripts
drwxr-xr-x 3 hadoop hadoop  4096 Sep  8 11:17 bin
drwxr-xr-x 4 hadoop hadoop  4096 Sep  8 11:17 lib
drwxr-xr-x 7 hadoop hadoop  4096 Sep  8 11:17 hcatalog
drwxr-xr-x 2 hadoop hadoop  4096 Sep  8 11:17 conf
[root@bigdata1 hive122]# cd conf
[root@bigdata1 conf]# pwd
/home/hadoop/hive122/conf
[root@bigdata1 conf]# ls -lrt
total 188
-rw-r--r-- 1 hadoop hadoop   3050 Mar 31  2017 hive-log4j.properties.template
-rw-r--r-- 1 hadoop hadoop   2378 Mar 31  2017 hive-env.sh.template
-rw-r--r-- 1 hadoop hadoop   1139 Mar 31  2017 beeline-log4j.properties.template
-rw-r--r-- 1 hadoop hadoop   2662 Mar 31  2017 hive-exec-log4j.properties.template
-rw-r--r-- 1 hadoop hadoop   1593 Apr  1  2017 ivysettings.xml
-rw-r--r-- 1 hadoop hadoop 168715 Apr  1  2017 hive-default.xml.template
[root@bigdata1 conf]# cp hive-env.sh.template hive-env.sh
[root@bigdata1 conf]# cp hive-default.xml.template hive-site.xml
[root@bigdata1 conf]#
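The hive-env.sh created above is left at its defaults in this walkthrough. If Hive has trouble locating the Hadoop installation or the conf directory, a minimal sketch of what to set in hive-env.sh is shown below; the HADOOP_HOME path is assumed from the hadoop260 directory that appears later in this post.

# hive-env.sh (sketch; adjust the paths to your own layout)
export HADOOP_HOME=/home/hadoop/hadoop260
export HIVE_CONF_DIR=/home/hadoop/hive122/conf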

2. Edit the configuration files

There are many parameters; the main ones to change initially are the following property entries in hive-site.xml:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://bigdata1:3306/hive?useSSL=false</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>Hive</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/home/hadoop/hive122/logs</value>
  <description>Location of Hive run time structured log file</description>
</property>
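The settings above assume a MySQL database named hive and a MySQL user hive already exist on bigdata1, and that the MySQL JDBC driver is on Hive's classpath; neither step is shown in the transcripts here. A rough sketch of that preparation, where the password and the exact connector jar name are assumptions to adjust for your environment:

# create the metastore database and user (run as the MySQL root user)
mysql -u root -p <<'SQL'
CREATE DATABASE hive DEFAULT CHARACTER SET latin1;
CREATE USER 'hive'@'%' IDENTIFIED BY 'Hive';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL

# make the MySQL JDBC driver visible to Hive (jar name is an assumption)
cp mysql-connector-java-5.1.x-bin.jar /home/hadoop/hive122/lib/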

3. Create the required directories

[hadoop@bigdata1 conf]# mkdir /home/hadoop/hive122/logs
[hadoop@bigdata1 conf]#
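hive.metastore.warehouse.dir and hive.exec.scratchdir above point at HDFS paths, so those directories also need to exist and be writable. A sketch, assuming HDFS is already running and using the paths from hive-site.xml:

hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /tmp/hive
hdfs dfs -chmod g+w /user/hive/warehouse
hdfs dfs -chmod 733 /tmp/hive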

4. Set the environment variables (as root)

This only needs to be configured on the master node.

Append the following hive section to /etc/profile:

export HIVE_HOME=/home/hadoop/hive122
export PATH=$HIVE_HOME/bin:$PATH
export HIVE_CONF_DIR=$HIVE_HOME/conf

[root@bigdata1 conf]# source /etc/profile
[root@bigdata1 conf]# su hadoop
[hadoop@bigdata1 conf]$ source /etc/profile
[hadoop@bigdata1 conf]$
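Before moving on, a quick check that the variables took effect for the hadoop user looks something like this:

echo $HIVE_HOME     # expect /home/hadoop/hive122
which hive          # expect /home/hadoop/hive122/bin/hive
hive --version      # expect Hive 1.2.2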

5. Initialize the metastore schema and start the services

[hadoop@bigdata1 lib]$ $HIVE_HOME/bin/schematool -dbType mysql -initSchema
Metastore connection URL:       jdbc:mysql://bigdata1:33006/hive?useSSL=false
Metastore Connection Driver :   com.mysql.jdbc.Driver
Metastore connection User:      hive
Starting metastore schema initialization to 1.2.0
Initialization script hive-schema-1.2.0.mysql.sql
Initialization script completed
schemaTool completed
[hadoop@bigdata1 lib]$
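To double-check that the schema really landed in MySQL, schematool's -info option (or a direct look at the metastore tables) works; a sketch, with the password being whatever you configured:

$HIVE_HOME/bin/schematool -dbType mysql -info
# or inspect the metastore tables directly
mysql -u hive -p -e 'USE hive; SHOW TABLES;'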

[hadoop@bigdata1 lib]$ nohup $HIVE_HOME/bin/hive --service metastore > $HIVE_HOME/logs/metastore.log 2>&1 &
[1] 4602
[hadoop@bigdata1 lib]$
[hadoop@bigdata1 lib]$ nohup $HIVE_HOME/bin/hive --service hiveserver2 > $HIVE_HOME/logs/hiveserver2.log 2>&1 &
[2] 4763
[hadoop@bigdata1 lib]$ jps
22400 HistoryServer
22322 HistoryServer
30850 SparkSubmit
21202 NameNode
28098 NodeManager
23701 JobHistoryServer
21431 DataNode
4872 Jps
21609 SecondaryNameNode
27817 ResourceManager
4602 RunJar
4763 RunJar
[hadoop@bigdata1 lib]$
[hadoop@bigdata1 lib]$ hive

Logging initialized using configuration in jar:file:/home/hadoop/hive122/lib/hive-common-1.2.2.jar!/hive-log4j.properties
hive> quit;
[hadoop@bigdata1 lib]$
[hadoop@bigdata1 lib]$ beeline -nhadoop -ppassword -u jdbc:hive2://bigdata1:10000/default
Connecting to jdbc:hive2://bigdata1:10000/default
Connected to: Apache Hive (version 1.2.2)
Driver: Hive JDBC (version 1.2.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.2 by Apache Hive
0: jdbc:hive2://bigdata1:10000/default> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (0.074 seconds)
0: jdbc:hive2://bigdata1:10000/default> create database ysxs;
No rows affected (0.132 seconds)
0: jdbc:hive2://bigdata1:10000/default> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| ysxs           |
+----------------+--+
2 rows selected (0.011 seconds)
0: jdbc:hive2://bigdata1:10000/default> !q
Closing: 0: jdbc:hive2://bigdata1:10000/default
[hadoop@bigdata1 lib]$
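Beyond the interactive session above, two quick checks are handy: confirming that the metastore (default port 9083) and HiveServer2 (default port 10000) are actually listening, and running a one-off query through beeline's -e option. A sketch:

# are the thrift ports up? (netstat -tlnp works too)
ss -tlnp | grep -E ':9083|:10000'

# non-interactive query over the same connection string as above
beeline -u jdbc:hive2://bigdata1:10000/default -n hadoop -p password -e 'show databases;'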

6. Troubleshooting

Found class jline.Terminal, but interface was expected

This error comes from a version conflict between the jline-2.12.jar shipped with Hive and the older jline-xxx.jar under Hadoop's yarn/lib directory. Copying Hive's jline-2.12.jar into Hadoop's yarn/lib so that it replaces the old jar resolves it; the failed schematool run that produced this error is reproduced at the end of this section for reference.

[hadoop@bigdata1 lib]$ pwd
/home/hadoop/hive122/lib
[hadoop@bigdata1 lib]$ ls -l jline*
-rw-r--r-- 1 hadoop hadoop 213854 Mar 18  2017 jline-2.12.jar
[hadoop@bigdata1 lib]$ ls /home/hadoop/hadoop260/share/hadoop/yarn/lib/jline*
/home/hadoop/hadoop260/share/hadoop/yarn/lib/jline-0.9.94.jar
[hadoop@bigdata1 lib]$ cp ./jline-2.12.jar /home/hadoop/hadoop260/share/hadoop/yarn/lib/
[hadoop@bigdata1 lib]$
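If copying the jar alone does not clear the error because the old jline-0.9.94.jar is still picked up first, two commonly used alternatives for this conflict are sketched below: moving the old jar out of the way, or telling Hadoop to prefer user classpath entries.

# option 1: park the old jline so only jline-2.12.jar is on the classpath
mv /home/hadoop/hadoop260/share/hadoop/yarn/lib/jline-0.9.94.jar \
   /home/hadoop/hadoop260/share/hadoop/yarn/lib/jline-0.9.94.jar.bak

# option 2: let Hive's own jars win on the classpath
export HADOOP_USER_CLASSPATH_FIRST=true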

[hadoop@bigdata1 lib]$ $HIVE_HOME/bin/schematool -dbType mysql -initSchema
Metastore connection URL:       jdbc:mysql://bigdata1:33006/hive?useSSL=false
Metastore Connection Driver :   com.mysql.jdbc.Driver
Metastore connection User:      hive
Starting metastore schema initialization to 1.2.0
Initialization script hive-schema-1.2.0.mysql.sql
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
        at jline.TerminalFactory.create(TerminalFactory.java:101)
        at jline.TerminalFactory.get(TerminalFactory.java:158)
        at org.apache.hive.beeline.BeeLineOpts.(BeeLineOpts.java:74)
        at org.apache.hive.beeline.BeeLine.(BeeLine.java:117)
        at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:346)
        at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:326)
        at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:266)
        at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:243)
        at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:473)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
        at org.apache.hive.beeline.BeeLineOpts.(BeeLineOpts.java:102)
        at org.apache.hive.beeline.BeeLine.(BeeLine.java:117)
        at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:346)
        at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:326)
        at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:266)
        at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:243)
        at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:473)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
[hadoop@bigdata1 lib]$ ^C