Hadoop 2.7 in Action v1.0: Hive 2.0.0 HiveServer2 Service and Remote Access with Beeline
0. Environment
Hadoop 2.7 in Action v1.0: Hive 2.0.0 + MySQL remote-mode installation: http://blog.itpub.net/30089851/viewspace-2082805/
Machine: hadoop-01 (192.168.33.01)
1. Start the metastore and hiveserver2 services
[root@hadoop-01 bin]# hive --service metastore &
[1] 31092
[root@hadoop-01 bin]# hive --service hiveserver2 &
[root@hadoop-01 bin]# ps -ef|grep hive
root 31092 21892 11 21:57 pts/0 00:00:15 /usr/java/jdk1.7.0_67-cloudera/bin/java -Xmx256m -Djava.library.path=/hadoop/hadoop-2.7.2/lib -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.7.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.7.2 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dlog4j.configurationFile=hive-log4j2.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /hadoop/hive-remote-server/lib/hive-service-2.0.0.jar org.apache.hadoop.hive.metastore.HiveMetaStore
root 31206 21892 15 21:57 pts/0 00:00:21 /usr/java/jdk1.7.0_67-cloudera/bin/java -Xmx256m -Djava.library.path=/hadoop/hadoop-2.7.2/lib -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.7.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.7.2 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dlog4j.configurationFile=hive-log4j2.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /hadoop/hive-remote-server/lib/hive-service-2.0.0.jar org.apache.hive.service.server.HiveServer2
[root@hadoop-01 bin]# netstat -nlp |grep 31206
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 31206/java
tcp 0 0 0.0.0.0:10002 0.0.0.0:* LISTEN 31206/java
#### Open the web UI at http://192.168.33.01:10002/hiveserver2.jsp
Port 10000 (hive.server2.thrift.port) is the Thrift port that JDBC/Beeline clients connect to; port 10002 is the HiveServer2 web UI port.
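If these defaults need to be changed, they can be overridden in hive-site.xml. A minimal sketch, assuming the standard HiveServer2 property names (adjust the values to your environment):
<!-- override the HiveServer2 Thrift and web UI ports -->
<property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
</property>
<property>
    <name>hive.server2.webui.port</name>
    <value>10002</value>
</property>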
2. Beeline test: connect remotely to HiveServer2 (reference: http://blog.csdn.net/huanggang028/article/details/44591663)
Beeline has two working modes: local embedded mode and remote mode. In embedded mode it starts an embedded Hive instance (similar to the Hive CLI); in remote mode it communicates with a separate HiveServer2 process over the Thrift protocol.
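For comparison, a minimal sketch of the two connection forms (the embedded URL carries no host or port; the actual session below uses the remote form):
### embedded mode: Beeline starts an in-process Hive, similar to the Hive CLI
beeline> !connect jdbc:hive2://
### remote mode: Beeline talks to a HiveServer2 instance over Thrift
beeline> !connect jdbc:hive2://192.168.33.01:10000 root root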
[root@hadoop-01 bin]# ./beeline
Beeline version 2.0.0 by Apache Hive
beeline> !connect jdbc:hive2://192.168.33.01:10000 root root
Connecting to jdbc:hive2://192.168.33.01:10000
Error: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate root (state=,code=0)
### Error: User: root is not allowed to impersonate root. HiveServer2 runs as root and tries to impersonate the connecting user, and Hadoop rejects the proxy request because core-site.xml does not yet allow root to act as a proxy user.
3. Stop the cluster and Hive, modify the configuration, sync it to all nodes, then restart the cluster and Hive
[root@hadoop-01 sbin]# ./stop-all.sh
[root@hadoop-01 sbin]# ps -ef|grep hive
root 31092 21892 11 21:57 pts/0 00:00:15 /usr/java/jdk1.7.0_67-cloudera/bin/java -Xmx256m -Djava.library.path=/hadoop/hadoop-2.7.2/lib -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.7.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.7.2 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dlog4j.configurationFile=hive-log4j2.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /hadoop/hive-remote-server/lib/hive-service-2.0.0.jar org.apache.hadoop.hive.metastore.HiveMetaStore
root 31206 21892 15 21:57 pts/0 00:00:21 /usr/java/jdk1.7.0_67-cloudera/bin/java -Xmx256m -Djava.library.path=/hadoop/hadoop-2.7.2/lib -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.7.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.7.2 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dlog4j.configurationFile=hive-log4j2.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /hadoop/hive-remote-server/lib/hive-service-2.0.0.jar org.apache.hive.service.server.HiveServer2
[root@hadoop-01 sbin]# kill -9 31092
[root@hadoop-01 sbin]# kill -9 31206
[root@hadoop-01 sbin]# cd ../etc/hadoop/
[root@hadoop-01 hadoop]# vi core-site.xml
### Add the proxy-user settings below: user "root" may impersonate any user from any host
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
### Be sure to sync the configuration to every node
[root@hadoop-01 hadoop]# scp core-site.xml root@hadoop-02:/hadoop/hadoop-2.7.2/etc/hadoop/
core-site.xml 100% 1779 1.7KB/s 00:00
[root@hadoop-01 hadoop]# scp core-site.xml root@hadoop-03:/hadoop/hadoop-2.7.2/etc/hadoop/
core-site.xml 100% 1779 1.7KB/s 00:00
[root@hadoop-01 hadoop]# scp core-site.xml root@hadoop-04:/hadoop/hadoop-2.7.2/etc/hadoop/
core-site.xml 100% 1779 1.7KB/s 00:00
[root@hadoop-01 hadoop]# scp core-site.xml root@hadoop-05:/hadoop/hadoop-2.7.2/etc/hadoop/
core-site.xml 100% 1779 1.7KB/s 00:00
[root@hadoop-01 hadoop]# cd ../../sbin
[root@hadoop-01 sbin]# ./start-all.sh
[root@hadoop-01 sbin]# cd /hadoop/hive-remote-server/bin
[root@hadoop-01 bin]# hive --service metastore &
[root@hadoop-01 bin]# hive --service hiveserver2 &
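Starting the two services with a bare & ties them to the current shell. A common alternative (not part of the original steps; the log file names here are just examples) is to start them with nohup so they keep running after logout:
[root@hadoop-01 bin]# nohup hive --service metastore > metastore.log 2>&1 &
[root@hadoop-01 bin]# nohup hive --service hiveserver2 > hiveserver2.log 2>&1 &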
4. Test again with Beeline
[root@hadoop-01 bin]# ./beeline
Beeline version 2.0.0 by Apache Hive
beeline> !connect jdbc:hive2://192.168.33.01:10000 root root
Connecting to jdbc:hive2://192.168.33.01:10000
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/hive-remote-server/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connected to: Apache Hive (version 2.0.0)
Driver: Hive JDBC (version 2.0.0)
16/06/08 22:32:56 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://192.168.33.01:10000> show tables;
INFO : Compiling command(queryId=root_20160608223313_3c8f0f43-c860-4127-8962-109e751ea306): show tables
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
INFO : Completed compiling command(queryId=root_20160608223313_3c8f0f43-c860-4127-8962-109e751ea306); Time taken: 1.579 seconds
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Executing command(queryId=root_20160608223313_3c8f0f43-c860-4127-8962-109e751ea306): show tables
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=root_20160608223313_3c8f0f43-c860-4127-8962-109e751ea306); Time taken: 0.075 seconds
INFO : OK
+--------------+--+
| tab_name |
+--------------+--+
| studentinfo |
+--------------+--+
1 row selected (2.06 seconds)
0: jdbc:hive2://192.168.33.01:10000> select * from studentinfo;
INFO : Compiling command(queryId=root_20160608223330_c11ea86f-4c91-49bc-924e-ce6f70c0884e): select * from studentinfo
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:studentinfo.id, type:int, comment:null), FieldSchema(name:studentinfo.name, type:string, comment:null), FieldSchema(name:studentinfo.age, type:int, comment:null), FieldSchema(name:studentinfo.tel, type:string, comment:null)], properties:null)
INFO : Completed compiling command(queryId=root_20160608223330_c11ea86f-4c91-49bc-924e-ce6f70c0884e); Time taken: 2.276 seconds
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Executing command(queryId=root_20160608223330_c11ea86f-4c91-49bc-924e-ce6f70c0884e): select * from studentinfo
INFO : Completed executing command(queryId=root_20160608223330_c11ea86f-4c91-49bc-924e-ce6f70c0884e); Time taken: 0.001 seconds
INFO : OK
+-----------------+-------------------+------------------+------------------+--+
| studentinfo.id | studentinfo.name | studentinfo.age | studentinfo.tel |
+-----------------+-------------------+------------------+------------------+--+
| 1 | a | 26 | 113 |
| 2 | b | 11 | 222 |
+-----------------+-------------------+------------------+------------------+--+
2 rows selected (2.741 seconds)
0: jdbc:hive2://192.168.33.01:10000>
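The same check can also be scripted without the interactive prompt. A minimal sketch using Beeline's standard options (-u for the JDBC URL, -n and -p for user and password, -e for the statement to run):
[root@hadoop-01 bin]# ./beeline -u jdbc:hive2://192.168.33.01:10000 -n root -p root -e "show tables;"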