【Hadoop】Connecting to Hive from Python
Using the PyHive module

Install the required packages:

pip install sasl
pip install thrift
pip install thrift-sasl
pip install PyHive
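Before running the examples below, it can help to confirm that the whole client stack is actually importable, since the sasl package in particular often fails to build without system SASL headers. A minimal sketch (the import names `sasl`, `thrift`, `thrift_sasl`, and `pyhive` correspond to the pip packages above):

```python
# Check which pieces of the Hive client stack are importable,
# without actually importing them (avoids side effects on failure).
import importlib.util

required = ["sasl", "thrift", "thrift_sasl", "pyhive"]
missing = [m for m in required if importlib.util.find_spec(m) is None]
print("missing:", missing)  # empty list means all four installs succeeded
```

If `missing` lists `sasl`, the usual cause is absent SASL development headers on the host rather than a pip problem.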
-
[root@ip-172-31-40-242 ~]# more testpyhive.py

from pyhive import hive

conn = hive.Connection(host='xxxxxxx', port=10000, database='collection', username='')
cursor = conn.cursor()
cursor.execute('select * from tb_partition limit 10')
for result in cursor.fetchall():
    print(result)
[root@ip-172-31-40-242 ~]# python testpyhive.py
(u'1', u'2', u'201707')
(u'1', u'2', u'201707')
(u'123', None, u'201709')
(u'123', u'456', u'201709')
(u'45678', u'456', u'201709')
(u'123', None, u'201709')
(u'123', u'456', u'201709')
(u'45678', u'456', u'201709')
(u'123', None, u'201709')
(u'123', u'456', u'201709')
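The tuples above are positional. `cursor.description`, the standard DB-API 2.0 attribute that PyHive also populates, carries the column names, so a small helper (hypothetical, not part of PyHive) can turn each row into a dict. The column names below are made up for illustration:

```python
def rows_to_dicts(description, rows):
    """Map DB-API result tuples to dicts keyed by column name.

    `description` is the standard DB-API sequence of 7-item tuples;
    the first item of each tuple is the column name.
    """
    names = [col[0] for col in description]
    return [dict(zip(names, row)) for row in rows]

# Example shaped like the output above (column names are assumptions):
desc = [("uid", None, None, None, None, None, None),
        ("pid", None, None, None, None, None, None),
        ("month", None, None, None, None, None, None)]
rows = [("1", "2", "201707"), ("123", None, "201709")]
print(rows_to_dicts(desc, rows))
# → [{'uid': '1', 'pid': '2', 'month': '201707'},
#    {'uid': '123', 'pid': None, 'month': '201709'}]
```

With a live connection you would pass `cursor.description` and `cursor.fetchall()` directly.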
Official API:
How to connect to Hive and Impala with the Python Impyla client
# -*- coding:utf-8 -*-
from impala.dbapi import connect

conn = connect(host='172.31.46.109', port=10000, database='collection', auth_mechanism='PLAIN')
print(conn)
cursor = conn.cursor()
# param = '''SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
# SET hive.support.concurrency=true;'''
# cursor.execute(param)
cursor.execute('SELECT uid FROM redefine_collection where uid=4028 limit 10')
print(cursor.description)  # prints the result set's schema
results = cursor.fetchall()
print(results)

# Connecting to Impala from Python (ImpalaTest.py)
# from impala.dbapi import connect
# conn = connect(host='ip-172-31-26-80.ap-southeast-1.compute.internal', port=21050)
# print(conn)
# cursor = conn.cursor()
# cursor.execute('show databases')
# print(cursor.description)  # prints the result set's schema
# results = cursor.fetchall()
# print(results)
# cursor.execute('SELECT * FROM test limit 10')
# print(cursor.description)  # prints the result set's schema
# results = cursor.fetchall()
# print(results)

Reference: https://cloud.tencent.com/developer/article/1078029
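Both PyHive and Impyla expose the same DB-API 2.0 surface (connect, then cursor, execute, fetchall), so query code can be written once against that interface and reused with either driver. A hedged sketch; the fake connection below is only a stand-in so the helper can be exercised without a live HiveServer2 or Impala daemon:

```python
def run_query(conn, sql):
    """Execute `sql` on any DB-API 2.0 connection and return all rows,
    closing the cursor afterwards."""
    cursor = conn.cursor()
    try:
        cursor.execute(sql)
        return cursor.fetchall()
    finally:
        cursor.close()

# Stand-in objects mimicking the DB-API shape for demonstration only.
class FakeCursor:
    def execute(self, sql):
        self.sql = sql
    def fetchall(self):
        return [("1", "2", "201707")]
    def close(self):
        pass

class FakeConnection:
    def cursor(self):
        return FakeCursor()

print(run_query(FakeConnection(), "select * from tb_partition limit 10"))
# → [('1', '2', '201707')]
```

In real use, `conn` would be the `hive.Connection(...)` or impyla `connect(...)` object from the examples above.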
From "ITPUB Blog", link: http://blog.itpub.net/29096438/viewspace-2156928/. Please credit the source when reposting; otherwise legal liability may be pursued.