Spark2 Dataset: Deduplication, Difference, and Intersection

Published by 智慧先行者 on 2016-11-25
import org.apache.spark.sql.functions._
      
// Deduplicate the entire DataFrame (with no arguments, the two calls are equivalent)
data.distinct()
data.dropDuplicates()
      
// Deduplicate on the specified columns only
val colArray = Array("affairs", "gender")
data.dropDuplicates(colArray)
// data.dropDuplicates("affairs", "gender")
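To see the difference between the two calls, here is a minimal sketch (the three-column toy DataFrame, the `local[*]` SparkSession, and the counts are illustrative assumptions, not from the original post):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("dedup-sketch").getOrCreate()
import spark.implicits._

// Toy data: rows 1 and 2 are exact duplicates; row 3 matches them only on (affairs, gender)
val data = Seq(
  (0.0, "male", 37.0),
  (0.0, "male", 37.0),
  (0.0, "male", 42.0)
).toDF("affairs", "gender", "age")

data.distinct().count()                                  // 2: only the exact duplicate is dropped
data.dropDuplicates(Array("affairs", "gender")).count()  // 1: deduped on the two key columns
```

Note that `dropDuplicates` keeps an arbitrary row from each key group, so when the non-key columns differ the surviving row can vary between runs.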
      
  
val df = data.filter("gender == 'male'")
// Difference between data and df (rows in data that are not in df)
data.except(df).show
+-------+------+----+------------+--------+-------------+---------+----------+------+ 
|affairs|gender| age|yearsmarried|children|religiousness|education|occupation|rating| 
+-------+------+----+------------+--------+-------------+---------+----------+------+ 
|    0.0|female|32.0|        15.0|     yes|          1.0|     12.0|       1.0|   4.0| 
|    0.0|female|32.0|         1.5|      no|          2.0|     17.0|       5.0|   5.0| 
|    0.0|female|32.0|        15.0|     yes|          4.0|     16.0|       1.0|   2.0| 
|    0.0|female|22.0|        0.75|      no|          2.0|     12.0|       1.0|   3.0| 
|    0.0|female|27.0|         4.0|      no|          4.0|     14.0|       6.0|   4.0| 
+-------+------+----+------------+--------+-------------+---------+----------+------+ 
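`except` follows SQL EXCEPT DISTINCT semantics: it returns the rows of `data` that do not appear in `df`, with duplicates removed. A self-contained sketch (the toy column names and values are assumptions):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val data = Seq(("a", 1), ("a", 1), ("b", 2)).toDF("k", "v")
val df   = data.filter("k == 'b'")

// Rows in data but not in df; the duplicate ("a", 1) appears only once in the result
data.except(df).show()
```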


// Intersection of data and df
data.intersect(df)
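Like `except`, `intersect` deduplicates its result (SQL INTERSECT DISTINCT semantics): it returns each row that occurs in both DataFrames exactly once. A small sketch under the same toy-data assumption:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val data = Seq(("a", 1), ("b", 2), ("b", 2)).toDF("k", "v")
val df   = data.filter("k == 'b'")

// Rows present in both DataFrames, deduplicated: a single ("b", 2) row
data.intersect(df).show()
```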

 
