Hadoop 2.3, HBase 0.98, Hive 0.13: Hive Installation, Deployment, Configuration, and Data Testing
Introduction:
Hive is a data warehouse tool built on top of Hadoop. It maps structured data files onto database tables and provides a simple SQL-style query capability, translating SQL statements into MapReduce jobs for execution. Its advantages are a low learning curve and the ability to express simple MapReduce-style aggregations quickly in SQL-like statements, without writing dedicated MapReduce applications, which makes it well suited to statistical analysis in a data warehouse.
1. Use cases
Hive is built on Hadoop's static batch processing. Hadoop usually has high latency and incurs considerable overhead when submitting and scheduling jobs, so Hive cannot deliver low-latency, fast queries over large data sets; even on a data set of a few hundred MB, a query typically has latency on the order of minutes. Hive is therefore not suited to applications that need low latency, such as online transaction processing (OLTP). Hive queries strictly follow the Hadoop MapReduce job execution model: Hive translates the user's HiveQL statements into MapReduce jobs through its interpreter and submits them to the Hadoop cluster; Hadoop monitors the job run and returns the results to the user. Hive was not designed for online transaction processing and provides neither real-time queries nor row-level updates. Its best fit is batch processing over large data sets, such as web log analysis.
2. Download and install
For the prerequisite Hadoop installation, see: http://blog.itpub.net/26230597/viewspace-1257609/
Download address
wget
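One commonly used mirror for the 0.13.1 binary tarball is the Apache archive (a hedged pointer only; verify the path before relying on it):
wget http://archive.apache.org/dist/hive/hive-0.13.1/apache-hive-0.13.1-bin.tar.gz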
Unpack and install
tar zxvf apache-hive-0.13.1-bin.tar.gz -C /home/hadoop/src/
PS: Hive only needs to be installed on a single node. In this example it is installed on the virtual machine of the name node, i.e. it shares one VM with the Hadoop name node.
3. Configure the Hive environment variables
Add the following exports to the shell profile so that the hive command is on the PATH:
vim ~/.bashrc
export HIVE_HOME=/home/hadoop/src/hive-0.13.1
export PATH=$PATH:$HIVE_HOME/bin
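A quick sketch of applying and checking the variables (assuming the exports above were added to ~/.bashrc):
[hadoop@name01 ~]$ source ~/.bashrc
[hadoop@name01 ~]$ echo $HIVE_HOME
/home/hadoop/src/hive-0.13.1
[hadoop@name01 ~]$ which hive
/home/hadoop/src/hive-0.13.1/bin/hive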
4. Configure the Hadoop and HBase parameters
vim hive-env.sh
# Set HADOOP_HOME to point to a specific hadoop install directory
HADOOP_HOME=/home/hadoop/src/hadoop-2.3.0/
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/home/hadoop/src/hive-0.13.1/conf
# Folder containing extra ibraries required for hive compilation/execution can be controlled by:
export HIVE_AUX_JARS_PATH=/home/hadoop/src/hive-0.13.1/lib
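The rest of this post uses a remote MySQL metastore (the JDBC URL jdbc:mysql://192.168.52.130:3306/hive_remote appears in the error messages of section 7). A minimal hive-site.xml sketch for that setup, reconstructed from those messages with the password left as a placeholder, looks roughly like this:
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.52.130:3306/hive_remote?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>your_password</value>
  </property>
</configuration>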
5. Verify the installation:
Start the Hive command-line mode; if the hive prompt appears, the installation succeeded.
[hadoop@name01 lib]$ hive --service cli
15/01/09 00:20:32 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
Logging initialized using configuration in jar:file:/home/hadoop/src/hive-0.13.1/lib/hive-common-0.13.1.jar!/hive-log4j.properties
Create a table by running a create statement; an OK response means the command executed successfully and confirms that Hive is installed correctly.
hive> create table test(key string);
OK
Time taken: 8.749 seconds
hive>
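As a further sanity check (a small sketch), listing the tables should now include the one just created:
hive> show tables;
OK
test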
6. Verify usability
Start the Hive metastore service:
[hadoop@name01 root]$ hive --service metastore &
Check the Hive processes running in the background:
[hadoop@name01 root]$ ps -eaf|grep hive
hadoop 4025 2460 1 22:52 pts/0 00:00:19 /usr/lib/jvm/jdk1.7.0_60/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/home/hadoop/src/hadoop-2.3.0/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/home/hadoop/src/hadoop-2.3.0 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/home/hadoop/src/hadoop-2.3.0/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /home/hadoop/src/hive-0.13.1/lib/hive-service-0.13.1.jar org.apache.hadoop.hive.metastore.HiveMetaStore
hadoop 4575 4547 0 23:14 pts/1 00:00:00 grep hive
[hadoop@name01 root]$
6.1 In Hive, run a command to create a two-column table, with the fields delimited by ',':
hive> create table tim_test(id int,name string) row format delimited fields terminated by ',';
OK
Time taken: 0.145 seconds
hive>
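To double-check the layout (a quick sketch), describe the table; the output should list the two columns, id (int) and name (string):
hive> describe tim_test;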
6.2 Prepare the txt file to be loaded into the database and fill it with values:
[hadoop@name01 hive-0.13.1]$ more tim_hive_test.txt
123,xinhua
456,dingxilu
789,fanyulu
903,fahuazhengroad
[hadoop@name01 hive-0.13.1]$
6.4 Open another Xshell session, log in to the server, and start the Hive metastore:
[hadoop@name01 root]$ hive --service metastore
Starting Hive Metastore Server
6.5 Open another Xshell session and load the data from the Hive client:
[hadoop@name01 hive-0.13.1]$ hive
Logging initialized using configuration in jar:file:/home/hadoop/src/hive-0.13.1/lib/hive-common-0.13.1.jar!/hive-log4j.properties
hive> load data local inpath '/home/hadoop/src/hive-0.13.1/tim_hive_test.txt' into table tim_test;
Copying data from file:/home/hadoop/src/hive-0.13.1/tim_hive_test.txt
Copying file: file:/home/hadoop/src/hive-0.13.1/tim_hive_test.txt
Loading data to table default.tim_test
[Warning] could not update stats.
OK
Time taken: 7.208 seconds
hive>
6.6 Verify that the data load succeeded; the dfs listing should now contain tim_test:
hive> dfs -ls /home/hadoop/hive/warehouse;
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2015-01-12 01:47 /home/hadoop/hive/warehouse/hive_hbase_mapping_table_1
drwxr-xr-x - hadoop supergroup 0 2015-01-12 02:11 /home/hadoop/hive/warehouse/tim_test
hive>
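Querying the table directly also works as a check (sketch); it should return the four rows loaded from tim_hive_test.txt above:
hive> select * from tim_test;
OK
123    xinhua
456    dingxilu
789    fanyulu
903    fahuazhengroad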
7. Errors encountered during installation and deployment:
Error 1:
[hadoop@name01 conf]$ hive --service metastore
Starting Hive Metastore Server
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
The MySQL JDBC driver jar is missing; copy it into Hive's lib directory and the error goes away.
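A sketch of that copy step (the exact connector file name depends on which mysql-connector-java version you downloaded):
[hadoop@name01 ~]$ cp mysql-connector-java-5.1.34-bin.jar /home/hadoop/src/hive-0.13.1/lib/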
Error 2:
[hadoop@name01 conf]$ hive --service metastore
Starting Hive Metastore Server
javax.jdo.JDOFatalDataStoreException: Unable to open a test connection to the given database. JDBC url = jdbc:mysql://192.168.52.130:3306/hive_remote?createDatabaseIfNotExist=true, username = root. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLException: null, message from server: "Host '192.168.52.128' is not allowed to connect to this MySQL server"
Add the hadoop user to the mysql OS group (this alone did not resolve it, as the telnet test below shows):
[root@data02 mysql]# gpasswd -a hadoop mysql
Adding user hadoop to group mysql
[root@data02 mysql]#
^C[hadoop@name01 conf]$ telnet 192.168.52.130 3306
Trying 192.168.52.130...
Connected to 192.168.52.130.
Escape character is '^]'.
G
Host '192.168.52.128' is not allowed to connect to this MySQL serverConnection closed by foreign host.
[hadoop@name01 conf]$
Solution: modify the MySQL account (the update below runs against the mysql.user table):
mysql> update user set user = 'hadoop' where user = 'root' and host='%';
Query OK, 1 row affected (0.04 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> flush privileges;
Query OK, 0 rows affected (0.09 sec)
mysql>
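A more conventional fix for "Host ... is not allowed to connect" (a hedged sketch, not what was run above; the password is a placeholder) is to grant the metastore user access from the connecting host and then flush privileges:
mysql> GRANT ALL PRIVILEGES ON hive_remote.* TO 'hadoop'@'192.168.52.128' IDENTIFIED BY 'your_password';
mysql> FLUSH PRIVILEGES;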
Error 3:
[hadoop@name01 conf]$ hive --service metastore
Starting Hive Metastore Server
javax.jdo.JDOException: Exception thrown calling table.exists() for hive_remote.`SEQUENCE_TABLE`
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
……
NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
Solution: on the remote MySQL instance, change the character set of the metastore database from utf8mb4 to utf8:
mysql> alter database hive_remote /*!40100 DEFAULT CHARACTER SET utf8 */;
Query OK, 1 row affected (0.03 sec)
mysql>
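To confirm the new default character set took effect (sketch), re-check the database definition; it should now report DEFAULT CHARACTER SET utf8:
mysql> show create database hive_remote;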
Then configure the Hive client on data01:
scp -r hive-0.13.1/ data01:/home/hadoop/src/
Error 4:
Continue starting the metastore and check the log:
[hadoop@name01 conf]$ hive --service metastore
Starting Hive Metastore Server
It seems to hang here, so check the log:
[hadoop@name01 hadoop]$ tail -f hive.log
2015-01-09 03:46:27,692 INFO [main]: metastore.ObjectStore (ObjectStore.java:setConf(229)) - Initialized ObjectStore
2015-01-09 03:46:27,892 WARN [main]: metastore.ObjectStore (ObjectStore.java:checkSchema(6295)) - Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.0
2015-01-09 03:46:30,574 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(551)) - Added admin role in metastore
2015-01-09 03:46:30,582 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(560)) - Added public role in metastore
2015-01-09 03:46:31,168 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers(588)) - No user is added in admin role, since config is empty
2015-01-09 03:46:31,473 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5178)) - Starting DB backed MetaStore Server
2015-01-09 03:46:31,481 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5190)) - Started the new metaserver on port [9083]...
2015-01-09 03:46:31,481 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5192)) - Options.minWorkerThreads = 200
2015-01-09 03:46:31,482 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5194)) - Options.maxWorkerThreads = 100000
2015-01-09 03:46:31,482 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5196)) - TCP keepalive = true
Add the following to hive-site.xml:
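The snippet itself is not shown; a plausible guess (an assumption, not confirmed by the post), given that the log above shows the metastore listening on port 9083 and that a Hive client is later set up on data01, is the remote metastore URI:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://name01:9083</value>
</property>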
Error 5:
2015-01-09 04:01:43,053 INFO [main]: metastore.ObjectStore (ObjectStore.java:setConf(229)) - Initialized ObjectStore
2015-01-09 04:01:43,540 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(551)) - Added admin role in metastore
2015-01-09 04:01:43,546 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(560)) - Added public role in metastore
2015-01-09 04:01:43,684 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers(588)) - No user is added in admin role, since config is empty
2015-01-09 04:01:44,041 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5178)) - Starting DB backed MetaStore Server
2015-01-09 04:01:44,054 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5190)) - Started the new metaserver on port [9083]...
2015-01-09 04:01:44,054 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5192)) - Options.minWorkerThreads = 200
2015-01-09 04:01:44,054 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5194)) - Options.maxWorkerThreads = 100000
2015-01-09 04:01:44,054 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(5196)) - TCP keepalive = true
2015-01-09 04:24:13,917 INFO [Thread-3]: metastore.HiveMetaStore (HiveMetaStore.java:run(5073)) - Shutting down hive metastore.
Resolution:
I searched for a long time but could not track down the cause behind "No user is added in admin role, since config is empty". If you have run into the same situation, please leave a comment so we can compare notes.
----------------------------------------------------------------------------------------------------------------
All rights reserved. This article may be reposted, but the source address must be credited with a link; otherwise legal action will be taken.
Original blog address: http://blog.itpub.net/26230597/viewspace-1400379/
Original author: 黃杉 (mchdba)
----------------------------------------------------------------------------------------------------------------