【Hadoop】Connecting to Hive from Python
Using the PyHive module
```shell
pip install sasl
pip install thrift
pip install thrift-sasl
pip install PyHive
```
```python
[root@ip-172-31-40-242 ~]# more testpyhive.py
from pyhive import hive

conn = hive.Connection(host='xxxxxxx', port=10000, database='collection', username='')
cursor = conn.cursor()
cursor.execute('select * from tb_partition limit 10')
for result in cursor.fetchall():
    print result  # Python 2 syntax; use print(result) on Python 3
```
```shell
[root@ip-172-31-40-242 ~]# python testpyhive.py
(u'1', u'2', u'201707')
(u'1', u'2', u'201707')
(u'123', None, u'201709')
(u'123', u'456', u'201709')
(u'45678', u'456', u'201709')
(u'123', None, u'201709')
(u'123', u'456', u'201709')
(u'45678', u'456', u'201709')
(u'123', None, u'201709')
(u'123', u'456', u'201709')
```
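PyHive follows the standard Python DB-API 2.0 interface, so the connect/cursor/execute/fetchall flow above is the same pattern used by other DB-API drivers. As a minimal, locally runnable sketch of that pattern (using the stdlib `sqlite3` module as a stand-in for a live Hive server; the table schema here is hypothetical):

```python
import sqlite3

# sqlite3 stands in for hive.Connection here; the DB-API flow is identical.
conn = sqlite3.connect(':memory:')
cursor = conn.cursor()

# Create and populate a small table resembling tb_partition (hypothetical schema).
cursor.execute('CREATE TABLE tb_partition (uid TEXT, val TEXT, month TEXT)')
cursor.executemany('INSERT INTO tb_partition VALUES (?, ?, ?)',
                   [('1', '2', '201707'), ('123', None, '201709')])

cursor.execute('SELECT * FROM tb_partition LIMIT 10')
rows = cursor.fetchall()
for result in rows:
    print(result)

conn.close()
```

With PyHive, only the `connect` call would differ; the cursor usage is unchanged.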
Official API:
How to connect to Hive and Impala with the Python Impyla client
```python
# -*- coding:utf-8 -*-
from impala.dbapi import connect

conn = connect(host='172.31.46.109', port=10000, database='collection', auth_mechanism='PLAIN')
print(conn)
cursor = conn.cursor()

# Enable Hive transaction support before querying
param = '''SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.support.concurrency=true;
'''
cursor.execute(param)

cursor.execute('SELECT uid FROM redefine_collection where uid=4028 limit 10')
print cursor.description  # prints the result set's schema
results = cursor.fetchall()
print results

# Python connects to Impala (ImpalaTest.py)
#
# from impala.dbapi import connect
# conn = connect(host='ip-172-31-26-80.ap-southeast-1.compute.internal', port=21050)
# print(conn)
# cursor = conn.cursor()
# cursor.execute('show databases')
# print cursor.description  # prints the result set's schema
# results = cursor.fetchall()
# print(results)
# cursor.execute('SELECT * FROM test limit 10')
# print cursor.description  # prints the result set's schema
# results = cursor.fetchall()
# print(results)
```

Reference: https://cloud.tencent.com/developer/article/1078029
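One caveat with the Impyla example: a DB-API `execute()` call generally accepts a single statement, so passing both `SET` commands in one string may fail depending on the driver. A simple workaround is to split the script and execute the statements one at a time; here is a sketch of that idea (again with `sqlite3` standing in for the Hive connection, so it runs locally):

```python
import sqlite3

def execute_statements(cursor, script):
    """Split a ';'-separated SQL script and run each statement separately.

    Note: splitting on ';' is naive and would break on semicolons
    inside string literals; it is enough for simple SET/DDL scripts.
    """
    for stmt in script.split(';'):
        stmt = stmt.strip()
        if stmt:  # skip empty fragments left by trailing semicolons
            cursor.execute(stmt)

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
execute_statements(cursor, '''
    CREATE TABLE t (x INTEGER);
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2);
''')
cursor.execute('SELECT SUM(x) FROM t')
total = cursor.fetchone()[0]
```

The same helper would work unchanged against an Impyla or PyHive cursor, since both expose the DB-API `execute` method.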
From "ITPUB Blog", link: http://blog.itpub.net/29096438/viewspace-2156928/. Please credit the source when reposting; otherwise legal responsibility may be pursued.