12 Spark Streaming Source Code Walkthrough: Executor Fault-Tolerance Safety

Posted by weixin_34006468 on 2016-05-22

This article covers Executor fault tolerance. The available mechanisms are the write-ahead log (WAL), message replay, and others.

  1. First, the WAL approach: before storing the data, the data is first written to a log. Start from the pushAndReportBlock method of ReceiverSupervisorImpl; the code is as follows:
def pushAndReportBlock(
      receivedBlock: ReceivedBlock,
      metadataOption: Option[Any],
      blockIdOption: Option[StreamBlockId]
    ) {
    val blockId = blockIdOption.getOrElse(nextBlockId)
    val time = System.currentTimeMillis
    val blockStoreResult = receivedBlockHandler.storeBlock(blockId, receivedBlock)
    logDebug(s"Pushed block $blockId in ${(System.currentTimeMillis - time)} ms")
    val numRecords = blockStoreResult.numRecords
    val blockInfo = ReceivedBlockInfo(streamId, numRecords, metadataOption, blockStoreResult)
    trackerEndpoint.askWithRetry[Boolean](AddBlock(blockInfo))
    logDebug(s"Reported block $blockId")
}
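The core idea of a WAL can be sketched independently of Spark: append each record durably to a log before acknowledging the store, so that a crash can be recovered by replaying the log. Below is a minimal sketch in plain Scala; SimpleWAL and all its names are hypothetical illustrations, not Spark's WriteAheadLog API.

```scala
import java.io.{File, FileWriter}
import scala.io.Source

// Hypothetical minimal write-ahead log: append each record to disk
// before it counts as stored, so a crash can be recovered by replay.
class SimpleWAL(path: String) {
  def append(record: String): Unit = {
    val w = new FileWriter(path, true) // open in append mode
    try { w.write(record + "\n") } finally { w.close() }
  }
  // Replay the log after a restart to rebuild in-memory state.
  def replay(): Seq[String] =
    if (new File(path).exists) Source.fromFile(path).getLines().toList
    else Seq.empty
}

object WalDemo {
  def main(args: Array[String]): Unit = {
    val logPath = File.createTempFile("wal", ".log").getPath
    val wal = new SimpleWAL(logPath)
    var memoryStore = Seq.empty[String] // volatile block store
    Seq("block-1", "block-2").foreach { b =>
      wal.append(b)     // 1. durable log write first
      memoryStore :+= b // 2. in-memory store second
    }
    // Simulated crash: memory is lost, but the log survives.
    memoryStore = Seq.empty
    val recovered = wal.replay()
    println(recovered.mkString(",")) // block-1,block-2
  }
}
```

This mirrors the ordering enforced above: storeBlock completes (and, with WAL on, the log write with it) before the block is reported to the tracker.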

This calls the storeBlock method of receivedBlockHandler. receivedBlockHandler determines which strategy is used to store the data; the code is as follows:

private val receivedBlockHandler: ReceivedBlockHandler = {
    if (WriteAheadLogUtils.enableReceiverLog(env.conf)) {
      if (checkpointDirOption.isEmpty) {
        throw new SparkException(
          "Cannot enable receiver write-ahead log without checkpoint directory set. " +
            "Please use streamingContext.checkpoint() to set the checkpoint directory. " +
            "See documentation for more details.")
      }
      new WriteAheadLogBasedBlockHandler(env.blockManager, receiver.streamId,
        receiver.storageLevel, env.conf, hadoopConf, checkpointDirOption.get)
    } else {
      new BlockManagerBasedBlockHandler(env.blockManager, receiver.storageLevel)
    }
}

If WAL is enabled, the data is persisted under the checkpoint directory; if no checkpoint directory has been configured, an exception is thrown.
First look at WriteAheadLogBasedBlockHandler: once WAL is enabled, the BlockManager no longer needs to replicate the data, because keeping replicas and writing a WAL at the same time would be redundant fault tolerance that only degrades performance.
Now look at BlockManagerBasedBlockHandler: it simply hands the data to the BlockManager, which stores it according to the user-defined storage level. The default storage level for receivers is MEMORY_AND_DISK_SER_2; if the data's safety requirements are not high, the replica can be dropped.
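To take the WriteAheadLogBasedBlockHandler branch above, the receiver WAL flag must be set and a checkpoint directory configured. A configuration sketch (the application name and checkpoint path are placeholders):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("wal-demo")
  // Routes receiver data through WriteAheadLogBasedBlockHandler
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")

val ssc = new StreamingContext(conf, Seconds(10))
// Required when WAL is on; otherwise the SparkException above is thrown.
ssc.checkpoint("hdfs://...") // placeholder checkpoint directory

// With WAL enabled a single-replica level is sufficient, e.g. pass
// StorageLevel.MEMORY_AND_DISK_SER instead of MEMORY_AND_DISK_SER_2
// when creating the input stream.
```

The design choice here is the trade-off discussed above: durability comes from either replication or the WAL, and paying for both buys little extra safety.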

  2. Message replay is a very efficient approach. When reading data through Kafka's Direct API, the offsets are computed first; if a job fails, the Kafka offsets can be reset to the last consumed position, and reading resumes from the point of failure. Kafka effectively acts as the file storage system, much like HDFS. How to use it in detail will be covered in a later chapter.
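The replay mechanism can be illustrated without Kafka: the source retains records addressed by offset, the consumer tracks the last successfully committed offset, and after a failure it simply re-reads from that offset. A self-contained sketch; OffsetLog and all names are hypothetical stand-ins, not the Kafka Direct API.

```scala
// Hypothetical offset-addressed log, standing in for a Kafka partition:
// records survive consumer failures, so any offset can be re-read.
class OffsetLog(records: IndexedSeq[String]) {
  def read(from: Int, count: Int): Seq[String] =
    records.slice(from, from + count)
  def size: Int = records.length
}

object ReplayDemo {
  def main(args: Array[String]): Unit = {
    val log = new OffsetLog(Vector("a", "b", "c", "d", "e"))
    var committed = 0 // last safely processed offset
    var output = Seq.empty[String]

    // First attempt: read a batch of 3, but the job fails before
    // committing, so the partial work is discarded.
    val failedBatch = log.read(committed, 3)

    // Recovery: restart from the committed offset; the same records
    // are still in the log and are simply read again.
    val replayed = log.read(committed, 3)
    output ++= replayed
    committed += replayed.size // commit only after success

    // Continue with the remainder of the log.
    val rest = log.read(committed, log.size - committed)
    output ++= rest
    committed += rest.size

    println(output.mkString(",")) // a,b,c,d,e
  }
}
```

Because the log is the durable store, no data needs to be copied anywhere before processing; recovery is just re-reading from an offset, which is why this approach is cheap compared with replication or a WAL.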
