Hive 3.1.2 Installation and Deployment
Overview
A Hadoop 3.2.1 cluster has already been deployed in advance.
Download Hive
The version we download here is apache-hive-3.1.2-bin.tar.gz.
Download the JDBC driver jar
https://dev.mysql.com/downloads/connector/j/
For the operating system, select Platform Independent and download the tar.gz archive.
Download MySQL
Because we will use a MySQL database to store Hive's metadata, MySQL must be installed in advance.
https://dev.mysql.com/downloads/mysql/
The version chosen here is 5.7.29:
mysql-community-common-5.7.29-1.el7.x86_64.rpm
mysql-community-libs-5.7.29-1.el7.x86_64.rpm
mysql-community-client-5.7.29-1.el7.x86_64.rpm
mysql-community-server-5.7.29-1.el7.x86_64.rpm
Install MySQL
Uninstall MariaDB
Because CentOS ships with MariaDB by default, MariaDB must be uninstalled first.
[root@hadoop3 soft]# rpm -qa | grep maria
marisa-0.2.4-4.el7.x86_64
mariadb-libs-5.5.56-2.el7.x86_64
Uninstall it (note that marisa is an unrelated package, so remove only the mariadb packages):
[root@hadoop3 soft]# yum -y remove mariadb*
Install the dependency packages required by the MySQL installation
yum -y install gcc* gcc-c++ ncurses* ncurses-devel* cmake* make* perl* bison* libaio-devel* libgcrypt*
Upload the MySQL installation media to the target directory
[root@hadoop3 ~]# cd /hadoop/soft/
[root@hadoop3 soft]# ls -rtl
total 1465928
-rw-r--r--. 1 root root 278813748 Jul 8 16:01 apache-hive-3.1.2-bin.tar.gz
-rw-r--r--. 1 root root 3930793 Jul 8 16:02 mysql-connector-java-8.0.20.tar.gz
-rw-r--r--. 1 root root 2596180 Apr 15 10:49 mysql-community-libs-5.7.29-1.el7.x86_64.rpm
-rw-r--r--. 1 root root 4085448 Apr 15 10:54 mysql-community-devel-5.7.29-1.el7.x86_64.rpm
-rw-r--r--. 1 root root 27768112 Apr 15 11:11 mysql-community-client-5.7.29-1.el7.x86_64.rpm
-rw-r--r--. 1 root root 183618644 Apr 15 14:47 mysql-community-server-5.7.29-1.el7.x86_64.rpm
-rw-r--r--. 1 root root 318972 Apr 15 14:52 mysql-community-common-5.7.29-1.el7.x86_64.rpm
Install the MySQL rpm packages, in dependency order:
[root@hadoop3 soft]# rpm -ivh mysql-community-common-5.7.29-1.el7.x86_64.rpm
[root@hadoop3 soft]# rpm -ivh mysql-community-libs-5.7.29-1.el7.x86_64.rpm
[root@hadoop3 soft]# rpm -ivh mysql-community-devel-5.7.29-1.el7.x86_64.rpm
[root@hadoop3 soft]# rpm -ivh mysql-community-client-5.7.29-1.el7.x86_64.rpm
[root@hadoop3 soft]# rpm -ivh mysql-community-server-5.7.29-1.el7.x86_64.rpm
Create the required directories and set ownership
[root@hadoop3 hadoop]# mkdir -p /hadoop/mysql/product/{binlog,relaylog,log,undo,redo,data}
[root@hadoop3 hadoop]# chown -R mysql:mysql /hadoop/mysql/product
Modify the initialization options file
[root@hadoop3 hadoop]# vi /etc/my.cnf
[client]
socket = /hadoop/mysql/product/data/mysql.sock

[mysql]
prompt="\\u@\\h:\\r:\\m:\\s[\\d]>"
no-auto-rehash

[mysqld]
#BASE
port = 3306
basedir = /usr
datadir = /hadoop/mysql/product/data
socket = /hadoop/mysql/product/data/mysql.sock
pid-file = /hadoop/mysql/product/data/mysqld.pid
skip-name-resolve
lower_case_table_names=1
symbolic-links=0
default_time_zone = "+8:00"
log_timestamps = SYSTEM
open_files_limit = 65535
#Character
character-set-server = utf8mb4
#Connection
max_connections=1000
max_connect_errors=999999999
interactive_timeout=3600
wait_timeout=3600
#BINLOG
server-id = 1   # must differ between the two databases in a master-slave setup
sync_binlog=1
log-bin = /hadoop/mysql/product/binlog/mysql-bin
max_binlog_size = 1073741824
binlog_cache_size = 1M
binlog_format= ROW
expire_logs_days = 7
relay-log=/hadoop/mysql/product/relaylog/mysql-relay-bin
#LOG
log-error=/hadoop/mysql/product/log/mysqld.err
log_output = FILE
slow_query_log = ON
slow_query_log_file = /hadoop/mysql/product/log/mysql-slow.log
long_query_time = 3
general_log = OFF
general_log_file = /hadoop/mysql/product/log/mysql-general.log
#GTID
gtid_mode=on
enforce_gtid_consistency=on
log-slave-updates=on
#INNODB
innodb_buffer_pool_size = 12884901888
innodb_buffer_pool_instances = 8
innodb_log_group_home_dir = /hadoop/mysql/product/redo
innodb_undo_directory = /hadoop/mysql/product/undo
innodb_undo_tablespaces = 2
innodb_log_file_size= 1073741824
innodb_log_files_in_group=3
innodb_flush_method=O_DIRECT
innodb_flush_log_at_trx_commit=1
innodb_data_file_path = ibdata1:500M:autoextend
innodb_temp_data_file_path = ibtmp1:200M:autoextend
#cache buffer
key_buffer_size = 32M
read_buffer_size = 8M
read_rnd_buffer_size = 4M
bulk_insert_buffer_size = 64M
sort_buffer_size = 4M
join_buffer_size = 4M
max_allowed_packet = 16M
tmp_table_size = 32M
query_cache_size = 0
query_cache_type = off
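The options file above is standard INI-style text, so before starting mysqld it can be worth a quick syntax sanity check. Here is a minimal sketch in Python; the helper name and the trimmed sample are ours for illustration, not part of the deployment:

```python
import configparser

# Trimmed excerpt of the my.cnf above -- not the full file.
SAMPLE = """\
[mysqld]
port = 3306
datadir = /hadoop/mysql/product/data
skip-name-resolve
server-id = 1
innodb_flush_log_at_trx_commit = 1
"""

def load_mysql_options(text):
    """Parse a my.cnf fragment and return the [mysqld] section as a dict.

    allow_no_value=True accepts bare flags like skip-name-resolve;
    strict=False tolerates the duplicate keys my.cnf allows."""
    cp = configparser.ConfigParser(allow_no_value=True, strict=False)
    cp.read_string(text)
    return dict(cp["mysqld"])

opts = load_mysql_options(SAMPLE)
print(opts["port"])                  # 3306
print("skip-name-resolve" in opts)   # True
```

If the file has a typo (a stray section header, an unclosed quote on its own line), `read_string` raises a parsing error immediately, which is cheaper to find here than in mysqld's error log.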
Start the database service
The packages register the mysqld service for auto-start, but MySQL does not start automatically right after installation, so start it manually:
[root@hadoop3 hadoop]# systemctl start mysqld
Change the password
After installation, the root account is protected by a random initial password, which can be found in the error log:
[root@hadoop3 mysql]# grep 'temporary password' /hadoop/mysql/product/log/mysqld.err
2020-07-08T17:37:00.359029+08:00 1 [Note] A temporary password is generated for root@localhost: MyzejBjK-6S< -- initial password
Change it:
[root@hadoop3 mysql]# mysqladmin -uroot -p password 'Hzmc321#'
Enter password: -- enter the initial password here
mysqladmin: [Warning] Using a password on the command line interface can be insecure.
Warning: Since password will be sent to server in plain text, use ssl connection to ensure password safety.
Note:
MySQL 5.7 and later enforce a default password policy: a password must contain digits, lowercase letters, uppercase letters, and special characters, and be at least 8 characters long.
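For illustration, the policy just described can be expressed as a small check. This is only a sketch of the default (MEDIUM) policy; the server's actual rules are governed by its validate_password_* variables:

```python
import re

def satisfies_mysql57_policy(pw):
    """Sketch of MySQL 5.7's default (MEDIUM) password policy: at least
    8 characters, containing a digit, a lowercase letter, an uppercase
    letter, and a special character."""
    return (len(pw) >= 8
            and re.search(r"\d", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[^0-9A-Za-z]", pw) is not None)

print(satisfies_mysql57_policy("Hzmc321#"))  # True  -- the password used above
print(satisfies_mysql57_policy("password"))  # False -- no digit/upper/special
```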
Test the connection
[root@hadoop3 mysql]# mysql -uroot -pHzmc321#
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.29-log MySQL Community Server (GPL)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
root@localhost:05:39:58[(none)]>
Check whether the service starts at boot
[root@hadoop3 bin]# systemctl is-enabled mysqld
enabled
As shown, auto-start at boot is already configured.
If it is not, enable it with systemctl enable mysqld.
At this point, the MySQL installation is complete.
Install Hive
Hive only needs to be deployed on one node. In our three-node Hadoop cluster, hadoop3 has the most spare capacity, so we plan to install Hive on the third node.
Upload the media package to the target directory and extract it
[root@hadoop3 soft]# ls -rtl
total 1679204
-rw-r--r--. 1 root root 278813748 Jul 8 16:01 apache-hive-3.1.2-bin.tar.gz
-rw-r--r--. 1 root root 3930793 Jul 8 16:02 mysql-connector-java-8.0.20.tar.gz
Extract Hive into the target directory:
[root@hadoop3 soft]# tar -xvf apache-hive-3.1.2-bin.tar.gz -C /hadoop/hive/
Configure environment variables
vi /etc/profile
Add:
#set hive environment
export HIVE_HOME=/hadoop/hive/apache-hive-3.1.2-bin
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$PATH:$HIVE_HOME/bin
Run source /etc/profile to make the variables take effect.
Run hive --version to check the version number.
The log4j-slf4j jar conflicts with the one shipped with Hadoop. Since we use Hadoop's copy, rename the jar under Hive out of the way:
[root@hadoop3 lib]# cd /hadoop/hive/apache-hive-3.1.2-bin/lib/
[root@hadoop3 lib]# mv log4j-slf4j-impl-2.10.0.jar log4j-slf4j-impl-2.10.0.jar_bak
Checking the version again now fails with:
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
This is caused by mismatched versions of guava.jar between Hadoop and Hive: Hive 3.1.2 ships guava-19.0.jar, while Hadoop 3.2.1 ships guava-27.0-jre.jar.
We replace Hive's copy with the newer version from Hadoop.
Move Hive's copy out of the way:
[root@hadoop3 lib]# mv guava-19.0.jar guava-19.0.jar_bak
Copy Hadoop's jar into Hive's lib directory (in Hadoop 3, guava lives under share/hadoop/common/lib):
[root@hadoop3 lib]# cp /hadoop/hadoop-3.2.1/share/hadoop/common/lib/guava-27.0-jre.jar /hadoop/hive/apache-hive-3.1.2-bin/lib/
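As a side note, mismatches like this one can be spotted by comparing the version embedded in jar filenames across the two lib directories. A small illustrative helper (the function name and regex are ours):

```python
import re

def jar_version(jar_name):
    """Extract the version from a jar filename, e.g. 'guava-19.0.jar' -> '19.0'.

    Returns None when no version component is found. Illustrative only --
    a quick way to compare jars under $HIVE_HOME/lib and Hadoop's lib."""
    m = re.match(r"[A-Za-z][\w.-]*?-(\d[\d.]*(?:-[A-Za-z]+)?)\.jar$", jar_name)
    return m.group(1) if m else None

print(jar_version("guava-19.0.jar"))      # 19.0     (shipped with Hive 3.1.2)
print(jar_version("guava-27.0-jre.jar"))  # 27.0-jre (shipped with Hadoop 3.2.1)
```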
Check the version again; hive --version now succeeds.
Modify the configuration files
Before editing the configuration files, create the HDFS directories that will be needed later.
Use the following commands to create the directories Hive needs in HDFS (including the one referenced by hive.metastore.warehouse.dir) and set their permissions:
[root@hadoop3 ~]# hadoop fs -mkdir -p /hadoop/hive/apache-hive-3.1.2-bin/tmp
[root@hadoop3 ~]# hadoop fs -mkdir -p /hadoop/hive/apache-hive-3.1.2-bin/warehouse
[root@hadoop3 ~]# hadoop fs -chmod -R 777 /hadoop/hive/apache-hive-3.1.2-bin/tmp
[root@hadoop3 ~]# hadoop fs -chmod -R 777 /hadoop/hive/apache-hive-3.1.2-bin/warehouse
[root@hadoop3 ~]# hadoop fs -ls /
Found 3 items
drwxr-xr-x - root supergroup 0 2020-07-09 11:20 /hadoop
drwxr-xr-x - root supergroup 0 2020-07-03 15:45 /hbase
drwxrwx--- - root supergroup 0 2020-07-09 10:44 /tmp
Configure hive-site.xml
This file mainly configures the MySQL connection; MySQL, which must already be installed, stores the metastore tables.
javax.jdo.option.ConnectionUserName -- database user name
javax.jdo.option.ConnectionPassword -- database password
javax.jdo.option.ConnectionURL -- database URL
javax.jdo.option.ConnectionDriverName -- JDBC driver class
Replace every ${system:java.io.tmpdir} in hive-site.xml with a local temporary directory for Hive; here we use /hadoop/hive/apache-hive-3.1.2-bin/temp. If the directory does not exist, create it first and grant read/write permissions.
[root@hadoop3 ~]# mkdir /hadoop/hive/apache-hive-3.1.2-bin/temp
[root@hadoop3 ~]# chmod -R 777 /hadoop/hive/apache-hive-3.1.2-bin/temp
cp hive-default.xml.template hive-site.xml
The template file is very large; instead, we can simply create an empty file and edit it directly:
[root@hadoop3 conf]# touch hive-site.xml
[root@hadoop3 conf]# vi hive-site.xml
Add the following content:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hadoop/hive/apache-hive-3.1.2-bin/warehouse</value>
  </property>
  <!-- JDBC URL; note that & must be escaped as &amp; inside XML -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop3:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
  </property>
  <!-- MySQL user name -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <!-- password of the MySQL root user -->
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>Hzmc321#</value>
  </property>
  <!-- JDBC driver class name:
       com.mysql.cj.jdbc.Driver for the 8.0 driver,
       com.mysql.jdbc.Driver for the older 5.x driver -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/hadoop/hive/apache-hive-3.1.2-bin/temp/root</value>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/hadoop/hive/apache-hive-3.1.2-bin/temp/${hive.session.id}_resources</value>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/hadoop/hive/apache-hive-3.1.2-bin/temp/root/operation_logs</value>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/hadoop/hive/apache-hive-3.1.2-bin/temp/root</value>
  </property>
</configuration>
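Because a bare & in the ConnectionURL makes the file invalid XML (it must be written as &amp;amp;), it is worth verifying that the finished hive-site.xml actually parses. Below is a minimal sketch using Python's standard library; the helper name is ours, and the fragment is a trimmed mirror of the config above:

```python
import xml.etree.ElementTree as ET

# Trimmed fragment mirroring the hive-site.xml above. Note the &amp; escape:
# a raw '&' in the JDBC URL would make the file unparseable.
HIVE_SITE = """<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop3:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
  </property>
</configuration>
"""

def get_property(xml_text, key):
    """Return the <value> for a given <name> in Hadoop-style configuration XML."""
    root = ET.fromstring(xml_text)  # raises ParseError on malformed XML
    for prop in root.findall("property"):
        if prop.findtext("name") == key:
            return prop.findtext("value")
    return None

# The parser un-escapes &amp; back to a plain '&' in the returned value.
print(get_property(HIVE_SITE, "javax.jdo.option.ConnectionURL"))
```

Running `get_property` against the real file (read with `open(...).read()`) would raise a ParseError immediately if the `&` was left unescaped.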
Copy the MySQL 8 driver jar into Hive's lib directory
[root@hadoop3 soft]# tar -xvf mysql-connector-java-8.0.20.tar.gz
[root@hadoop3 soft]# cd mysql-connector-java-8.0.20
[root@hadoop3 mysql-connector-java-8.0.20]# cp mysql-connector-java-8.0.20.jar /hadoop/hive/apache-hive-3.1.2-bin/lib/
Modify hive-env.sh
[root@hadoop3 conf]# cp hive-env.sh.template hive-env.sh
[root@hadoop3 conf]# vi hive-env.sh
Add the following content:
#Hadoop installation directory
export HADOOP_HOME=/hadoop/hadoop-3.2.1
#Hive configuration directory
export HIVE_CONF_DIR=/hadoop/hive/apache-hive-3.1.2-bin/conf
#Hive auxiliary jar directory
export HIVE_AUX_JARS_PATH=/hadoop/hive/apache-hive-3.1.2-bin/lib
Configure hive-log4j2.properties
[root@hadoop3 conf]# cp hive-log4j2.properties.template hive-log4j2.properties
[root@hadoop3 conf]# vi hive-log4j2.properties
Change the following setting:
property.hive.log.dir = /hadoop/hive/apache-hive-3.1.2-bin/temp/root
Initialize the metastore database, start, and test
[root@hadoop3 conf]# schematool -dbType mysql -initSchema
This fails because root does not have remote-access privileges on the MySQL database. Grant them:
[root@hadoop3 ~]# mysql -uroot -p
Enter password:
root@localhost:12:30:00[(none)]>show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.19 sec)
root@localhost:12:30:08[(none)]>use mysql
Database changed
root@localhost:12:30:13[mysql]>select user, authentication_string, host from user;
+---------------+-------------------------------------------+-----------+
| user          | authentication_string                     | host      |
+---------------+-------------------------------------------+-----------+
| root          | *1373DBE8BAF1B4AAFF2A6EE94BF22769E7477566 | localhost |
| mysql.session | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | localhost |
| mysql.sys     | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | localhost |
+---------------+-------------------------------------------+-----------+
3 rows in set (0.00 sec)
root@localhost:12:30:15[mysql]>update user set host = '%' where user = 'root';
Query OK, 1 row affected (0.05 sec)
Rows matched: 1  Changed: 1  Warnings: 0
root@localhost:12:30:50[mysql]>FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.02 sec)
root@localhost:12:30:59[mysql]>select user, authentication_string, host from user;
+---------------+-------------------------------------------+-----------+
| user          | authentication_string                     | host      |
+---------------+-------------------------------------------+-----------+
| root          | *1373DBE8BAF1B4AAFF2A6EE94BF22769E7477566 | %         |
| mysql.session | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | localhost |
| mysql.sys     | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | localhost |
+---------------+-------------------------------------------+-----------+
3 rows in set (0.00 sec)
After fixing the privileges, re-run the initialization:
[root@hadoop3 conf]# schematool -dbType mysql -initSchema
Metastore connection URL: jdbc:mysql://hadoop3:3306/hive?createDatabaseIfNotExist=true&useSSL=false
Metastore Connection Driver : com.mysql.cj.jdbc.Driver
Metastore connection User: root
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.mysql.sql
Initialization script completed
schemaTool completed
After initialization completes, the metadata tables used to store Hive's metadata are created in the hive database in MySQL 8.
Start Hive
[root@hadoop3 lib]# hive
2020-07-09 12:44:16,884 INFO [main] conf.HiveConf: Found configuration file file:/hadoop/hive/apache-hive-3.1.2-bin/conf/hive-site.xml
Hive Session ID = d1f51bdc-ec09-43dd-b6fe-2055e565d5bc
2020-07-09 12:44:21,105 INFO [main] SessionState: Hive Session ID = d1f51bdc-ec09-43dd-b6fe-2055e565d5bc
Logging initialized using configuration in file:/hadoop/hive/apache-hive-3.1.2-bin/conf/hive-log4j2.properties Async: true
2020-07-09 12:44:21,168 INFO [main] SessionState: Logging initialized using configuration in file:/hadoop/hive/apache-hive-3.1.2-bin/conf/hive-log4j2.properties Async: true
2020-07-09 12:44:22,684 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/root/d1f51bdc-ec09-43dd-b6fe-2055e565d5bc
2020-07-09 12:44:22,720 INFO [main] session.SessionState: Created local directory: /hadoop/hive/apache-hive-3.1.2-bin/temp/root/d1f51bdc-ec09-43dd-b6fe-2055e565d5bc
2020-07-09 12:44:22,730 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/root/d1f51bdc-ec09-43dd-b6fe-2055e565d5bc/_tmp_space.db
……
2020-07-09 12:44:35,646 INFO [pool-10-thread-1] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_tables_by_type: db=@hive#default pat=.*,type=MATERIALIZED_VIEW
2020-07-09 12:44:35,688 INFO [pool-10-thread-1] metastore.HiveMetaStore: 1: get_multi_table : db=default tbls=
2020-07-09 12:44:35,688 INFO [pool-10-thread-1] HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_multi_table : db=default tbls=
2020-07-09 12:44:35,704 INFO [pool-10-thread-1] metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
hive>
Source: ITPUB blog, http://blog.itpub.net/29956245/viewspace-2933095/. Please credit the source when reposting.