Converting between pyspark.sql.DataFrame and pandas.DataFrame

Published by birdlove1987 on 2017-06-10

The code is below; each step is explained in the comments:

# -*- coding: utf-8 -*-
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
from pyspark import SparkContext

# Initialize the data

# Initialize a pandas DataFrame
df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], index=['row1', 'row2'], columns=['c1', 'c2', 'c3'])

# Print the data
print(df)

# Initialize the SparkContext and SparkSession
sc = SparkContext()
spark = SparkSession \
    .builder \
    .appName("testDataFrame") \
    .getOrCreate()

sentenceData = spark.createDataFrame([
    (0.0, "I like Spark"),
    (1.0, "Pandas is useful"),
    (2.0, "They are coded by Python ")
], ["label", "sentence"])

# Show the data
sentenceData.select("label").show()

# Convert pandas.DataFrame to spark.DataFrame
sql_context = SQLContext(sc)
spark_df = sql_context.createDataFrame(df)

# Show the data
spark_df.select("c1").show()


# Convert spark.DataFrame to pandas.DataFrame
pandas_df = sentenceData.toPandas()

# Print the data
print(pandas_df)

Program output


