Scala - DataFrame
Basic Concepts
What's a DataFrame
A DataFrame is equivalent to a relational table in Spark SQL [1].
DataFrame's predecessor was SchemaRDD; as of Spark 1.3.0, SchemaRDD was renamed to DataFrame [2]. In practice, the main difference from an RDD is that a DataFrame carries a schema, so values can be looked up by row and column.
Why DataFrame: Motivation
Compared with an RDD, a DataFrame supports more operations, and its execution plans receive more optimization. This makes it convenient for processing large-scale structured data.
How to use a DataFrame
Creating a DataFrame
Create an empty DataFrame
Here, schema is a StructType:
sqlContext.createDataFrame(sc.emptyRDD[Row], schema)
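A minimal sketch of building such a schema (the name/age columns here are hypothetical):

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}

// Hypothetical two-column schema
val schema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age", IntegerType, nullable = true)
))

// Empty DataFrame with that schema
val emptyDF = sqlContext.createDataFrame(sc.emptyRDD[Row], schema)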
Create from a List
import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.types.StructType

def listToDataFrame(list: ListBuffer[List[Any]], schema: StructType): DataFrame = {
  val rows = list.map { x => Row(x: _*) }
  val rdd = sqlContext.sparkContext.parallelize(rows)
  sqlContext.createDataFrame(rdd, schema)
}
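A usage sketch, assuming a hypothetical two-column (name, age) schema:

import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}

// Hypothetical schema and rows, for illustration only
val personSchema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age", IntegerType, nullable = true)
))
val data = ListBuffer[List[Any]](List("Michael", 29), List("Andy", 30))
listToDataFrame(data, personSchema).show()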
Generate directly from an RDD
import sqlContext.implicits._

val departments = sc.parallelize(Array(
  (31, "Sales"),
  (33, "Engineering"),
  (34, "Clerical"),
  (35, "Marketing")
)).toDF("DepartmentID", "DepartmentName")

val employees = sc.parallelize(Array[(String, Option[Int])](
  ("Rafferty", Some(31)),
  ("Jones", Some(33)),
  ("Heisenberg", Some(33)),
  ("Robinson", Some(34)),
  ("Smith", Some(34)),
  ("Williams", None)  // no department
)).toDF("LastName", "DepartmentID")
Create by reading a JSON file [5]
The JSON file:
{"name":"Michael"} {"name":"Andy", "age":30} {"name":"Justin", "age":19}
Create the DataFrame:
val df = sqlContext.jsonFile("/path/to/your/jsonfile")
df: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
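jsonFile was deprecated in Spark 1.4 in favor of the DataFrameReader API; the equivalent call is:

val df = sqlContext.read.json("/path/to/your/jsonfile")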
Create by reading a Parquet file
val df: DataFrame = sqlContext.read.parquet("/Users/robin/workspace/cooked_data/bt")
Create by reading a MySQL table [5]
val jdbcDF = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:mysql://localhost:3306/db?user=aaa&password=111",
  "dbtable" -> "your_table"
))
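sqlContext.load is likewise deprecated since Spark 1.4; a sketch of the same read through the DataFrameReader API (the URL and table name are the placeholders from above):

val jdbcDF = sqlContext.read.format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/db?user=aaa&password=111")
  .option("dbtable", "your_table")
  .load()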
Create from Hive [5]
Spark provides a HiveContext, which is in fact a subclass of SQLContext; in practice, sqlContext also supports Hive as a data source. As long as Spark is deployed with the Hive option and the existing hive-site.xml is moved under $SPARK_HOME/conf, Spark can directly query Hive tables with their existing metadata:
sqlContext.sql("select count(*) from hive_people")
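A minimal sketch of creating and using the HiveContext explicitly, assuming Spark was built with Hive support:

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
hiveContext.sql("select count(*) from hive_people").show()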
Create from a CSV file
There is a spark-csv library. It can be pulled in from Maven, or loaded directly in spark-shell:
$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.11:1.5.0
val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/Users/username/tmp/person.csv")
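Since Spark 2.0, CSV support is built in, so the external package is no longer needed. An equivalent read, assuming a Spark 2.x SparkSession named spark:

val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/Users/username/tmp/person.csv")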
Basic DataFrame operations
Official example
// To create DataFrame using SQLContext
val people = sqlContext.read.parquet("...")
val department = sqlContext.read.parquet("...")

people.filter("age > 30")
  .join(department, people("deptId") === department("id"))
  .groupBy(department("name"), "gender")
  .agg(avg(people("salary")), max(people("age")))
Filter
Handle rows whose id is null. The example below does not drop them; it replaces a null id with 0 and any other id with 1:
import org.apache.spark.sql.functions.{when, expr}

df.withColumn("id", when(expr("id is null"), 0).otherwise(1)).show
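To actually filter out the null-id rows, a minimal sketch against the same df:

import org.apache.spark.sql.functions.col

// Keep only rows whose id is not null
df.filter(col("id").isNotNull).show

// Equivalent: drop rows with a null in the id column
df.na.drop(Seq("id")).show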
Join
inner join [4]
val employees = sc.parallelize(Array[(String, Option[Int])](
  ("Rafferty", Some(31)),
  ("Jones", Some(33)),
  ("Heisenberg", Some(33)),
  ("Robinson", Some(34)),
  ("Smith", Some(34)),
  ("Williams", None)
)).toDF("LastName", "DepartmentID")

val departments = sc.parallelize(Array(
  (31, "Sales"),
  (33, "Engineering"),
  (34, "Clerical"),
  (35, "Marketing")
)).toDF("DepartmentID", "DepartmentName")

departments.show()
+------------+--------------+
|DepartmentID|DepartmentName|
+------------+--------------+
|          31|         Sales|
|          33|   Engineering|
|          34|      Clerical|
|          35|     Marketing|
+------------+--------------+

employees.join(departments, "DepartmentID").show()
+------------+----------+--------------+
|DepartmentID|  LastName|DepartmentName|
+------------+----------+--------------+
|          31|  Rafferty|         Sales|
|          33|     Jones|   Engineering|
|          33|Heisenberg|   Engineering|
|          34|  Robinson|      Clerical|
|          34|     Smith|      Clerical|
+------------+----------+--------------+

Note that Williams, whose DepartmentID is null, is dropped by the inner join, since null matches nothing.
left outer join [4]
employees.join(departments, Seq("DepartmentID"), "left_outer").show()
+------------+----------+--------------+
|DepartmentID|  LastName|DepartmentName|
+------------+----------+--------------+
|          31|  Rafferty|         Sales|
|          33|     Jones|   Engineering|
|          33|Heisenberg|   Engineering|
|          34|  Robinson|      Clerical|
|          34|     Smith|      Clerical|
|        null|  Williams|          null|
+------------+----------+--------------+
An aggregation aside: take the max price per (startDate, endDate) pair (max comes from org.apache.spark.sql.functions):
val d1 = df.groupBy("startDate", "endDate").agg(max("price") as "price")
d1.show
Join with an expression [3]
val products = sc.parallelize(Array(
  ("steak", "1990-01-01", "2000-01-01", 150),
  ("steak", "2000-01-02", "2020-01-01", 180),
  ("fish", "1990-01-01", "2020-01-01", 100)
)).toDF("name", "startDate", "endDate", "price")

products.show()
+-----+----------+----------+-----+
| name| startDate|   endDate|price|
+-----+----------+----------+-----+
|steak|1990-01-01|2000-01-01|  150|
|steak|2000-01-02|2020-01-01|  180|
| fish|1990-01-01|2020-01-01|  100|
+-----+----------+----------+-----+

val orders = sc.parallelize(Array(
  ("1995-01-01", "steak"),
  ("2000-01-01", "fish"),
  ("2005-01-01", "steak")
)).toDF("date", "product")

orders.show()
+----------+-------+
|      date|product|
+----------+-------+
|1995-01-01|  steak|
|2000-01-01|   fish|
|2005-01-01|  steak|
+----------+-------+

orders.join(products, $"product" === $"name" && $"date" >= $"startDate" && $"date" <= $"endDate")
  .show()
+----------+-------+-----+----------+----------+-----+
|      date|product| name| startDate|   endDate|price|
+----------+-------+-----+----------+----------+-----+
|2000-01-01|   fish| fish|1990-01-01|2020-01-01|  100|
|1995-01-01|  steak|steak|1990-01-01|2000-01-01|  150|
|2005-01-01|  steak|steak|2000-01-02|2020-01-01|  180|
+----------+-------+-----+----------+----------+-----+
Join types:
inner, outer, left_outer, right_outer, leftsemi
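The type string is passed as the third argument of join. For example, a leftsemi join keeps only the employees rows whose DepartmentID has a match in departments, returning only the left side's columns (a sketch using the DataFrames defined above):

employees.join(departments, Seq("DepartmentID"), "leftsemi").show()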
Join with DataFrame aliases
val joinedDF = testDF.as('a).join(genmodDF.as('b), $"a.PassengerId" === $"b.PassengerId")
joinedDF.select($"a.PassengerId", $"b.PassengerId").take(10)

// The same join without aliases, with the join type spelled out
val joinedDF2 = testDF.join(genmodDF, testDF("PassengerId") === genmodDF("PassengerId"), "inner")
Author: 虎耳
Source: ITPUB blog, http://blog.itpub.net/2334/viewspace-2818881/ (please credit the source when republishing).