A First Look at Spark Streaming

Posted by 白喬 on 2015-03-09

Spark ships with a NetworkWordCount example program that counts the words received over a TCP connection:

/usr/local/spark/bin/run-example streaming.NetworkWordCount localhost 9999

Then start netcat in another terminal (ideally the listener should already be up before the Spark example tries to connect):

nc -lk 9999

Type a few words into the netcat session:

hello world
damn it

NetworkWordCount then produces output like the following:

-------------------------------------------
Time: 1425866862000 ms
-------------------------------------------
(world,1)
(hello,1)
-------------------------------------------
Time: 1425866877000 ms
-------------------------------------------
(damn,1)
(it,1)

You can also type the NetworkWordCount code into spark-shell by hand:

scala> :paste
// Entering paste mode (ctrl-D to finish)

import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

// spark-shell already creates a SparkContext as sc, so reuse it here instead of
// building a new SparkConf (creating a second SparkContext in the shell fails).
// Launch the shell with at least two local threads, e.g. spark-shell --master "local[2]":
// one thread runs the socket receiver, the other processes the batches.

val ssc = new StreamingContext(sc, Seconds(1))
// Create a DStream that will connect to hostname:port, like localhost:9999
val lines = ssc.socketTextStream("localhost", 9999)
// Split each line into words
val words = lines.flatMap(_.split(" "))
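// Count each word within each 1-second batch: map to (word, 1) pairs, then sum per key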
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)

// Print the first ten elements of each RDD generated in this DStream to the console
wordCounts.print()
ssc.start()             // Start the computation
ssc.awaitTermination()  // Wait for the computation to terminate

After running this, you will see similar output in the console.
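The shell version above reuses sc, so for reference, here is a minimal sketch of the same pipeline as a standalone application (it mirrors the bundled NetworkWordCount example invoked at the top; the object name and layout are illustrative). This is where the SparkConf with setMaster("local[2]") from the official example belongs:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object NetworkWordCount {
  def main(args: Array[String]): Unit = {
    // A standalone app builds its own SparkConf; two local threads are needed,
    // one for the socket receiver and one for processing the batches.
    val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(1))

    // Same word-count pipeline as in the shell session above
    val lines = ssc.socketTextStream("localhost", 9999)
    val wordCounts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)

    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

Packaged and launched with spark-submit, it behaves the same as the run-example command shown at the beginning.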