Spark from Zero to Development (5): A First Look at Spark SQL
Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided by Spark SQL give Spark more information about the structure of the data and of the computation being performed. Internally, Spark SQL uses this extra information to perform additional optimizations. There are several ways to interact with Spark SQL, including SQL and the Dataset API. When computing a result, the same execution engine is used, regardless of which API or language you use to express the computation. This unification means that developers can easily switch back and forth between the different APIs, choosing whichever provides the most natural way to express a given transformation.
One use of Spark SQL is to execute SQL queries. Spark SQL can also be used to read data from an existing Hive installation. When SQL is run from within another programming language, the results are returned as a Dataset/DataFrame. You can also interact with the SQL interface using the command line or over JDBC/ODBC.
Datasets and DataFrames
A Dataset is a distributed collection of data. Dataset is a new interface added in Spark 1.6 that provides the benefits of RDDs (strong typing, the ability to use powerful lambda functions) together with the benefits of Spark SQL's optimized execution engine. A Dataset can be constructed from JVM objects and then manipulated using functional transformations (map, flatMap, filter, and so on).
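As a minimal sketch of that last point (it assumes an existing SparkSession named spark, created as shown in section 1 below; the identifiers are illustrative only):

import java.util.Arrays;
import org.apache.spark.api.java.function.FilterFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;

// Build a typed Dataset from in-memory JVM objects
Dataset<String> words = spark.createDataset(
  Arrays.asList("spark", "sql", "dataset"), Encoders.STRING());

// Functional transformations such as filter operate on the typed objects directly
Dataset<String> longWords = words.filter((FilterFunction<String>) w -> w.length() > 3);
longWords.show();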
1. Getting Started
The entry point into all functionality in Spark is the SparkSession class. To create a basic SparkSession, just use SparkSession.builder():
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession
  .builder()
  .appName("Java Spark SQL basic example")
  .config("spark.some.config.option", "some-value")
  .getOrCreate();
1.1 Creating DataFrames
With a SparkSession, applications can create DataFrames from an existing RDD, from a Hive table, or from Spark data sources.
An example of creating a DataFrame based on the content of a JSON file:
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

Dataset<Row> df = spark.read().json("examples/src/main/resources/people.json");

// Displays the content of the DataFrame to stdout
df.show();
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+
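JSON is only one of the built-in data sources. As a minimal, hedged aside (users.parquet ships with the Spark examples; the CSV path is a hypothetical placeholder), the same DataFrameReader can load other formats:

// Parquet is Spark's default data source format
Dataset<Row> parquetDF = spark.read().parquet("examples/src/main/resources/users.parquet");

// CSV with a header row, letting Spark infer column types
Dataset<Row> csvDF = spark.read()
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("path/to/people.csv");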
1.2 Dataset Operations
A basic example of structured data processing with Datasets:
// col("...") is preferable to df.col("...")import static org.apache.spark.sql.functions.col;// Print the schema in a tree format(列印後設資料)df.printSchema();// root// |-- age: long (nullable = true)// |-- name: string (nullable = true)// Select only the "name" column(查詢name 這列)df.select("name").show();// +-------+// | name|// +-------+// |Michael|// | Andy|// | Justin|// +-------+// Select everybody, but increment the age by 1 (查詢name age列,age列加一)df.select(col("name"), col("age").plus(1)).show();// +-------+---------+// | name|(age + 1)|// +-------+---------+// |Michael| null|// | Andy| 31|// | Justin| 20|// +-------+---------+// Select people older than 21 (查詢age大於21的資料)df.filter(col("age").gt(21)).show();// +---+----+// |age|name|// +---+----+// | 30|Andy|// +---+----+// Count people by agedf.groupBy("age").count().show(); (分組查詢:列名age數量統計)// +----+-----+// | age|count|// +----+-----+// | 19| 1|// |null| 1|// | 30| 1|// +----+-----+
1.3 Running SQL Queries Programmatically
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Register the DataFrame as a SQL temporary view
df.createOrReplaceTempView("people");

Dataset<Row> sqlDF = spark.sql("SELECT * FROM people");
sqlDF.show();
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+
1.4 Global Temporary Views
Temporary views in Spark SQL are session-scoped and will disappear if the session that created them terminates. If you want a temporary view that is shared across all sessions and stays alive until the Spark application terminates, you can create a global temporary view. A global temporary view is tied to the system-preserved database global_temp, and you must use the qualified name to refer to it, e.g., SELECT * FROM global_temp.view1.
// Register the DataFrame as a global temporary view
df.createGlobalTempView("people");

// Global temporary view is tied to a system preserved database `global_temp`
spark.sql("SELECT * FROM global_temp.people").show();
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+

// Global temporary view is cross-session
spark.newSession().sql("SELECT * FROM global_temp.people").show();
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+
1.5 Creating Datasets
Datasets are similar to RDDs; however, instead of using Java serialization or Kryo, they use a specialized Encoder to serialize the objects for processing or for transmission over the network. While both encoders and standard serialization are responsible for turning an object into bytes, encoders are code generated dynamically and use a format that allows Spark to perform many operations such as filtering, sorting, and hashing without deserializing the bytes back into an object.
import java.util.Arrays;
import java.util.Collections;
import java.io.Serializable;

import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.Encoder;
import org.apache.spark.sql.Encoders;

public static class Person implements Serializable {
  private String name;
  private int age;

  public String getName() { return name; }
  public void setName(String name) { this.name = name; }
  public int getAge() { return age; }
  public void setAge(int age) { this.age = age; }
}

// Create an instance of a Bean class
Person person = new Person();
person.setName("Andy");
person.setAge(32);

// Encoders are created for Java beans
Encoder<Person> personEncoder = Encoders.bean(Person.class);
Dataset<Person> javaBeanDS = spark.createDataset(
  Collections.singletonList(person),
  personEncoder
);
javaBeanDS.show();
// +---+----+
// |age|name|
// +---+----+
// | 32|Andy|
// +---+----+

// Encoders for most common types are provided in class Encoders
Encoder<Integer> integerEncoder = Encoders.INT();
Dataset<Integer> primitiveDS = spark.createDataset(Arrays.asList(1, 2, 3), integerEncoder);
Dataset<Integer> transformedDS = primitiveDS.map(
  (MapFunction<Integer, Integer>) value -> value + 1,
  integerEncoder);
transformedDS.collect(); // Returns [2, 3, 4]

// DataFrames can be converted to a Dataset by providing a class. Mapping is based on field name
String path = "examples/src/main/resources/people.json";
Dataset<Person> peopleDS = spark.read().json(path).as(personEncoder);
peopleDS.show();
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+
1.6 Interoperating with RDDs
Spark SQL supports two different methods for converting existing RDDs into Datasets.
The first method uses reflection to infer the schema of an RDD that contains objects of a specific type. This reflection-based approach leads to more concise code.
The second method is a programmatic interface that allows you to construct a schema and then apply it to an existing RDD.
1.6.1 Inferring the Schema Using Reflection
Spark SQL supports automatically converting an RDD of JavaBeans into a DataFrame. The BeanInfo, obtained using reflection, defines the schema of the table. Currently, Spark SQL does not support JavaBeans that contain Map fields; nested JavaBeans and List or Array fields are supported, however. You can create a JavaBean by writing a class that implements Serializable and has getters and setters for all of its fields.
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.Encoder;
import org.apache.spark.sql.Encoders;

// Create an RDD of Person objects from a text file
JavaRDD<Person> peopleRDD = spark.read()
  .textFile("examples/src/main/resources/people.txt")
  .javaRDD()
  .map(line -> {
    String[] parts = line.split(",");
    Person person = new Person();
    person.setName(parts[0]);
    person.setAge(Integer.parseInt(parts[1].trim()));
    return person;
  });

// Apply a schema to an RDD of JavaBeans to get a DataFrame
Dataset<Row> peopleDF = spark.createDataFrame(peopleRDD, Person.class);
// Register the DataFrame as a temporary view
peopleDF.createOrReplaceTempView("people");

// SQL statements can be run by using the sql methods provided by spark
Dataset<Row> teenagersDF = spark.sql("SELECT name FROM people WHERE age BETWEEN 13 AND 19");

// The columns of a row in the result can be accessed by field index
Encoder<String> stringEncoder = Encoders.STRING();
Dataset<String> teenagerNamesByIndexDF = teenagersDF.map(
  (MapFunction<Row, String>) row -> "Name: " + row.getString(0),
  stringEncoder);
teenagerNamesByIndexDF.show();
// +------------+
// |       value|
// +------------+
// |Name: Justin|
// +------------+

// or by field name
Dataset<String> teenagerNamesByFieldDF = teenagersDF.map(
  (MapFunction<Row, String>) row -> "Name: " + row.<String>getAs("name"),
  stringEncoder);
teenagerNamesByFieldDF.show();
// +------------+
// |       value|
// +------------+
// |Name: Justin|
// +------------+
1.6.2 Programmatically Specifying the Schema
If JavaBean classes cannot be defined ahead of time (for example, the structure of records is encoded in a string, or a text dataset will be parsed and fields will be projected differently for different users), a Dataset<Row> can be created programmatically in three steps:
1. Create an RDD of Rows from the original RDD;
2. Create the schema, represented by a StructType, matching the structure of the Rows in the RDD created in step 1;
3. Apply the schema to the RDD of Rows via the createDataFrame method provided by SparkSession.
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

// Create an RDD
JavaRDD<String> peopleRDD = spark.sparkContext()
  .textFile("examples/src/main/resources/people.txt", 1)
  .toJavaRDD();

// The schema is encoded in a string
String schemaString = "name age";

// Generate the schema based on the string of schema
List<StructField> fields = new ArrayList<>();
for (String fieldName : schemaString.split(" ")) {
  StructField field = DataTypes.createStructField(fieldName, DataTypes.StringType, true);
  fields.add(field);
}
StructType schema = DataTypes.createStructType(fields);

// Convert records of the RDD (people) to Rows
JavaRDD<Row> rowRDD = peopleRDD.map((Function<String, Row>) record -> {
  String[] attributes = record.split(",");
  return RowFactory.create(attributes[0], attributes[1].trim());
});

// Apply the schema to the RDD
Dataset<Row> peopleDataFrame = spark.createDataFrame(rowRDD, schema);

// Creates a temporary view using the DataFrame
peopleDataFrame.createOrReplaceTempView("people");

// SQL can be run over a temporary view created using DataFrames
Dataset<Row> results = spark.sql("SELECT name FROM people");

// The results of SQL queries are DataFrames and support all the normal RDD operations
// The columns of a row in the result can be accessed by field index or by field name
Dataset<String> namesDS = results.map(
  (MapFunction<Row, String>) row -> "Name: " + row.getString(0),
  Encoders.STRING());
namesDS.show();
// +-------------+
// |        value|
// +-------------+
// |Name: Michael|
// |   Name: Andy|
// | Name: Justin|
// +-------------+
1.7 Aggregations
The built-in DataFrame functions provide common aggregations such as count(), countDistinct(), avg(), max(), and min(). While these functions are designed for DataFrames, Spark SQL also has type-safe versions of some of them for use with strongly typed Datasets in Scala and Java. Moreover, users are not limited to the predefined aggregate functions and can create their own.
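Before turning to user-defined aggregations, here is a minimal sketch of the built-in functions, reusing the employees.json file that also appears in the examples below (the static imports come from org.apache.spark.sql.functions):

import static org.apache.spark.sql.functions.avg;
import static org.apache.spark.sql.functions.countDistinct;
import static org.apache.spark.sql.functions.max;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

Dataset<Row> empDF = spark.read().json("examples/src/main/resources/employees.json");

// Apply built-in aggregate functions over the whole DataFrame;
// with the employee data shown below, avg(salary) is 3750.0
empDF.agg(avg("salary"), max("salary"), countDistinct("name")).show();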
1.7.1 Untyped User-Defined Aggregate Functions
Users must extend the UserDefinedAggregateFunction abstract class to implement a custom untyped aggregate function.
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.expressions.MutableAggregationBuffer;
import org.apache.spark.sql.expressions.UserDefinedAggregateFunction;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public static class MyAverage extends UserDefinedAggregateFunction {

  private StructType inputSchema;
  private StructType bufferSchema;

  public MyAverage() {
    List<StructField> inputFields = new ArrayList<>();
    inputFields.add(DataTypes.createStructField("inputColumn", DataTypes.LongType, true));
    inputSchema = DataTypes.createStructType(inputFields);

    List<StructField> bufferFields = new ArrayList<>();
    bufferFields.add(DataTypes.createStructField("sum", DataTypes.LongType, true));
    bufferFields.add(DataTypes.createStructField("count", DataTypes.LongType, true));
    bufferSchema = DataTypes.createStructType(bufferFields);
  }
  // Data types of input arguments of this aggregate function
  public StructType inputSchema() {
    return inputSchema;
  }
  // Data types of values in the aggregation buffer
  public StructType bufferSchema() {
    return bufferSchema;
  }
  // The data type of the returned value
  public DataType dataType() {
    return DataTypes.DoubleType;
  }
  // Whether this function always returns the same output on the identical input
  public boolean deterministic() {
    return true;
  }
  // Initializes the given aggregation buffer. The buffer itself is a `Row` that in addition to
  // standard methods like retrieving a value at an index (e.g., get(), getBoolean()), provides
  // the opportunity to update its values. Note that arrays and maps inside the buffer are still
  // immutable.
  public void initialize(MutableAggregationBuffer buffer) {
    buffer.update(0, 0L);
    buffer.update(1, 0L);
  }
  // Updates the given aggregation buffer `buffer` with new input data from `input`
  public void update(MutableAggregationBuffer buffer, Row input) {
    if (!input.isNullAt(0)) {
      long updatedSum = buffer.getLong(0) + input.getLong(0);
      long updatedCount = buffer.getLong(1) + 1;
      buffer.update(0, updatedSum);
      buffer.update(1, updatedCount);
    }
  }
  // Merges two aggregation buffers and stores the updated buffer values back to `buffer1`
  public void merge(MutableAggregationBuffer buffer1, Row buffer2) {
    long mergedSum = buffer1.getLong(0) + buffer2.getLong(0);
    long mergedCount = buffer1.getLong(1) + buffer2.getLong(1);
    buffer1.update(0, mergedSum);
    buffer1.update(1, mergedCount);
  }
  // Calculates the final result
  public Double evaluate(Row buffer) {
    return ((double) buffer.getLong(0)) / buffer.getLong(1);
  }
}

// Register the function to access it
spark.udf().register("myAverage", new MyAverage());

Dataset<Row> df = spark.read().json("examples/src/main/resources/employees.json");
df.createOrReplaceTempView("employees");
df.show();
// +-------+------+
// |   name|salary|
// +-------+------+
// |Michael|  3000|
// |   Andy|  4500|
// | Justin|  3500|
// |  Berta|  4000|
// +-------+------+

Dataset<Row> result = spark.sql("SELECT myAverage(salary) as average_salary FROM employees");
result.show();
// +--------------+
// |average_salary|
// +--------------+
// |        3750.0|
// +--------------+
1.7.2 Type-Safe User-Defined Aggregate Functions
User-defined aggregations for strongly typed Datasets revolve around the Aggregator abstract class.
import java.io.Serializable;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoder;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.TypedColumn;
import org.apache.spark.sql.expressions.Aggregator;

public static class Employee implements Serializable {
  private String name;
  private long salary;

  // Constructors, getters, setters...
}

public static class Average implements Serializable {
  private long sum;
  private long count;

  // Constructors, getters, setters...
}

public static class MyAverage extends Aggregator<Employee, Average, Double> {
  // A zero value for this aggregation. Should satisfy the property that any b + zero = b
  public Average zero() {
    return new Average(0L, 0L);
  }
  // Combine two values to produce a new value. For performance, the function may modify `buffer`
  // and return it instead of constructing a new object
  public Average reduce(Average buffer, Employee employee) {
    long newSum = buffer.getSum() + employee.getSalary();
    long newCount = buffer.getCount() + 1;
    buffer.setSum(newSum);
    buffer.setCount(newCount);
    return buffer;
  }
  // Merge two intermediate values
  public Average merge(Average b1, Average b2) {
    long mergedSum = b1.getSum() + b2.getSum();
    long mergedCount = b1.getCount() + b2.getCount();
    b1.setSum(mergedSum);
    b1.setCount(mergedCount);
    return b1;
  }
  // Transform the output of the reduction
  public Double finish(Average reduction) {
    return ((double) reduction.getSum()) / reduction.getCount();
  }
  // Specifies the Encoder for the intermediate value type
  public Encoder<Average> bufferEncoder() {
    return Encoders.bean(Average.class);
  }
  // Specifies the Encoder for the final output value type
  public Encoder<Double> outputEncoder() {
    return Encoders.DOUBLE();
  }
}

Encoder<Employee> employeeEncoder = Encoders.bean(Employee.class);
String path = "examples/src/main/resources/employees.json";
Dataset<Employee> ds = spark.read().json(path).as(employeeEncoder);
ds.show();
// +-------+------+
// |   name|salary|
// +-------+------+
// |Michael|  3000|
// |   Andy|  4500|
// | Justin|  3500|
// |  Berta|  4000|
// +-------+------+

MyAverage myAverage = new MyAverage();
// Convert the function to a `TypedColumn` and give it a name
TypedColumn<Employee, Double> averageSalary = myAverage.toColumn().name("average_salary");
Dataset<Double> result = ds.select(averageSalary);
result.show();
// +--------------+
// |average_salary|
// +--------------+
// |        3750.0|
// +--------------+
Author: PlayInJava
Source: ITPUB Blog, http://blog.itpub.net/1806/viewspace-2819668/. Please credit the source when reposting; otherwise legal liability may be pursued.