
Hadoop: Hive Installation


Installing Hive

1. Download the installation package: apache-hive-3.1.2-bin.tar.gz (this is the file-name format); the resource can be found on CSDN.

Upload it to the /opt/software/ path on the Linux system.
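
One way this upload is often done, assuming SSH access as root and a hypothetical hostname hadoop102 for the target machine (the hostname is not from the original guide):

scp apache-hive-3.1.2-bin.tar.gz root@hadoop102:/opt/software/
# Verify the package arrived (hostname hadoop102 is an assumption)
ssh root@hadoop102 'ls -lh /opt/software/apache-hive-3.1.2-bin.tar.gz'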

2. Extract the archive

cd /opt/software/

tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/
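
The later steps assume Hive lives at /opt/module/hive (see HIVE_HOME in step 4), so if the archive extracts to a versioned directory, renaming it keeps the paths consistent. The extracted directory name below is an assumption based on the package name:

# Rename the extracted directory to the path used throughout this guide
mv /opt/module/apache-hive-3.1.2-bin /opt/module/hive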

3. Update the system environment variables

vim /etc/profile

Add the following:

export HIVE_HOME=/opt/module/hive
export PATH=$PATH:$HIVE_HOME/bin

Reload the environment configuration:

source /etc/profile
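
A quick sanity check that the new variables are visible in the current shell (the expected results assume the paths used above):

echo $HIVE_HOME    # should print /opt/module/hive
which hive         # should resolve to /opt/module/hive/bin/hive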

4. Modify the Hive environment variables

cd /opt/module/hive/bin

Edit the hive-config.sh file:

vi hive-config.sh

Append the following:

export JAVA_HOME=/opt/module/jdk1.8.0_212

export HIVE_HOME=/opt/module/hive

export HADOOP_HOME=/opt/module/hadoop-3.1.3

export HIVE_CONF_DIR=/opt/module/hive/conf
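
To confirm the four exports were actually appended, a simple grep from the same bin directory works (the pattern just matches the variable names above):

grep -E 'JAVA_HOME|HIVE_HOME|HADOOP_HOME|HIVE_CONF_DIR' hive-config.sh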

5. Create the hive-site.xml configuration file in /opt/module/hive/conf/

cd /opt/module/hive/conf/
vi hive-site.xml

6. Edit the Hive configuration file, locating the corresponding entries and modifying them:

 

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.1.102:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in is compatible with one from Hive jars.  Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/opt/module/hive/tmp/${user.name}</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>system:java.io.tmpdir</name>
    <value>/opt/module/hive/iotmp</value>
    <description/>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/opt/module/hive/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/opt/module/hive/tmp/${system:user.name}</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/opt/module/hive/tmp/${system:user.name}/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>
  <property>
    <name>hive.metastore.db.type</name>
    <value>mysql</value>
    <description>
      Expects one of [derby, oracle, mysql, mssql, postgres].
      Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.
    </description>
  </property>
  <property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>true</value>
    <description>Whether to print the names of the columns in query output.</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/opt/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
</configuration>
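
The configuration above references several directories that may not exist yet; creating them in advance avoids permission and startup errors. The local paths come directly from the values above, and the HDFS path mirrors hive.metastore.warehouse.dir (creating it manually, and the g+w permission, are assumptions; the HDFS commands also require Hadoop to already be running, see step 10):

# Local scratch/temp directories referenced in hive-site.xml
mkdir -p /opt/module/hive/tmp /opt/module/hive/iotmp
# Warehouse directory in HDFS
hdfs dfs -mkdir -p /opt/hive/warehouse
hdfs dfs -chmod g+w /opt/hive/warehouse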

7. Upload the MySQL driver package to the /opt/module/hive/lib/ folder

Driver package: mysql-connector-java-8.0.15.zip; unzip it and take the jar file from inside.
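
A sketch of that step, assuming the zip was uploaded to /opt/software/ and that the jar inside is named mysql-connector-java-8.0.15.jar (the zip's internal layout is an assumption):

cd /opt/software/
unzip mysql-connector-java-8.0.15.zip
# Copy the connector jar into Hive's lib directory; adjust the path if the zip unpacks differently
cp mysql-connector-java-8.0.15/mysql-connector-java-8.0.15.jar /opt/module/hive/lib/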

8. Make sure the MySQL server has a database named hive
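
If the database does not exist yet, it can be created with the mysql client, reusing the host and credentials from the JDBC URL in hive-site.xml (this assumes the client is installed on the machine you run it from):

mysql -h 192.168.1.102 -uroot -p -e "CREATE DATABASE IF NOT EXISTS hive;"
# Confirm it exists
mysql -h 192.168.1.102 -uroot -p -e "SHOW DATABASES LIKE 'hive';"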

9. Initialize the metastore database

 schematool -dbType mysql -initSchema
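
If initialization succeeds, schematool can also report the schema it just created; this assumes $HIVE_HOME/bin is on the PATH from step 3:

schematool -dbType mysql -info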

10. Make sure Hadoop is running
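
If the cluster is not running yet, a typical single-node start looks like this (assuming Hadoop's sbin scripts are on the PATH; jps simply lists the Java daemons so you can confirm HDFS and YARN are up):

start-dfs.sh
start-yarn.sh
jps    # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager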

11. Start Hive

hive

12. Check whether the startup succeeded

show databases;
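
The same check can also be run non-interactively from the shell; a freshly initialized metastore should list at least the default database:

hive -e "show databases;"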
