Moving Computation to Data
A MapReduce job does not perform any actual computation on the client side. Instead, the client prepares the work, for example by computing the input split information, which directly determines the parallelism of the Map tasks.
When submitting the job in the Driver, we write a statement like this:
boolean result = job.waitForCompletion(true);
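For context, this statement usually sits at the end of a Driver class similar to the sketch below; MyDriver, MyMapper, MyReducer and the paths are placeholders for illustration, not code from the source being analyzed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical Driver sketch; MyMapper/MyReducer and the paths are placeholders.
public class MyDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(MyDriver.class);
    job.setMapperClass(MyMapper.class);      // placeholder Mapper
    job.setReducerClass(MyReducer.class);    // placeholder Reducer
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path("/input"));     // illustrative path
    FileOutputFormat.setOutputPath(job, new Path("/output"));  // illustrative path
    // this call triggers all of the client-side preparation, including split computation
    boolean result = job.waitForCompletion(true);
    System.exit(result ? 0 : 1);
  }
}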
Step into waitForCompletion:
public boolean waitForCompletion(boolean verbose) throws IOException, InterruptedException,
                                                         ClassNotFoundException {
  if (state == JobState.DEFINE) {
    // submit the job
    submit();
  }
  ..............
Let's follow submit():
public void submit() throws IOException, InterruptedException, ClassNotFoundException {
  ensureState(JobState.DEFINE);
  setUseNewAPI();
  connect();
  final JobSubmitter submitter =
      getJobSubmitter(cluster.getFileSystem(), cluster.getClient());
  status = ugi.doAs(new PrivilegedExceptionAction<JobStatus>() {
    public JobStatus run() throws IOException, InterruptedException,
                                  ClassNotFoundException {
      // perform the actual job submission
      return submitter.submitJobInternal(Job.this, cluster);
    }
  });
  ..............
}
From the code above we can see that the client connects to the cluster, obtains the JobSubmitter instance submitter, and then calls submitJobInternal(Job.this, cluster). Let's step into it (honestly, I only want to look at the split method):
/**
 * Internal method for submitting jobs to the system.
 * The job submission process involves:
 * 1. Checking the input and output specifications of the job.
 * 2. Computing the InputSplits for the job.
 * 3. Setting up the requisite accounting information for the
 *    DistributedCache of the job, if necessary.
 * 4. Copying the job's jar and configuration to the map-reduce system
 *    directory on the distributed file-system.
 * 5. Submitting the job to the JobTracker and optionally
 *    monitoring its status.
 */
..............
// Create the splits for the job
LOG.debug("Creating splits at " + jtFs.makeQualified(submitJobDir));
int maps = writeSplits(job, submitJobDir);
conf.setInt(MRJobConfig.NUM_MAPS, maps);
LOG.info("number of splits:" + maps);
..............
The Javadoc at the top of this method shows that, before the job actually runs, the client does five things; restated briefly:
- check the input and output specifications of the job;
- compute the InputSplits for the job;
- set up the requisite accounting information for the job's DistributedCache, if necessary;
- copy the job's jar and configuration to the map-reduce system directory on the distributed file system;
- submit the job to the JobTracker and optionally monitor its status.
We can see that the method that actually computes the splits is writeSplits(job, submitJobDir):
private int writeSplits(org.apache.hadoop.mapreduce.JobContext job,
    Path jobSubmitDir) throws IOException, InterruptedException, ClassNotFoundException {
  JobConf jConf = (JobConf) job.getConfiguration();
  int maps;
  if (jConf.getUseNewMapper()) {
    maps = writeNewSplits(job, jobSubmitDir);
  } else {
    maps = writeOldSplits(jConf, jobSubmitDir);
  }
  return maps;
}
Here too there is a distinction between the old and new APIs; let's follow the new one, writeNewSplits(job, jobSubmitDir):
private <T extends InputSplit>
int writeNewSplits(JobContext job, Path jobSubmitDir) throws IOException,
    InterruptedException, ClassNotFoundException {
  ..................
  // only looking at the split computation
  List<InputSplit> splits = input.getSplits(job);
  T[] array = (T[]) splits.toArray(new InputSplit[splits.size()]);
  ..............
  // the return value is the length of the array, i.e. the number of splits,
  // i.e. the MapTask parallelism
  return array.length;
}
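Since the value returned here is the MapTask parallelism, it can be tuned from the Driver by changing the split-size limits that getSplits() (shown next) reads. A minimal sketch, assuming the helpers on org.apache.hadoop.mapreduce.lib.input.FileInputFormat; the 256 MB / 64 MB values are purely illustrative, and you would normally pick only one of the two directions:

// placed in the Driver before job.waitForCompletion(true)
// raise the minimum split size so one split spans several blocks (fewer MapTasks);
// this sets mapreduce.input.fileinputformat.split.minsize
FileInputFormat.setMinInputSplitSize(job, 256L * 1024 * 1024);
// or lower the maximum split size so one block yields several splits (more MapTasks);
// this sets mapreduce.input.fileinputformat.split.maxsize
// FileInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);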
Step into the split method itself. It is quite long, so parts are trimmed and only the core logic is kept. This one deserves a careful walkthrough.
public List<InputSplit> getSplits(JobContext job) throws IOException {
  // minSize defaults to 1 if nothing is configured
  long minSize = Math.max(getFormatMinSplitSize(), getMinSplitSize(job));
  // maxSize defaults to Long.MAX_VALUE if nothing is configured
  long maxSize = getMaxSplitSize(job);
  // generate splits
  List<InputSplit> splits = new ArrayList<InputSplit>();
  // FileStatus comes from HDFS; it holds the metadata of the files the client submitted
  List<FileStatus> files = listStatus(job);
  for (FileStatus file: files) {
    // path of the file
    Path path = file.getPath();
    // length of the file
    long length = file.getLen();
    if (length != 0) {
      // block location array, holding the locations of this file's blocks
      BlockLocation[] blkLocations;
      if (file instanceof LocatedFileStatus) {
        blkLocations = ((LocatedFileStatus) file).getBlockLocations();
      } else {
        FileSystem fs = path.getFileSystem(job.getConfiguration());
        blkLocations = fs.getFileBlockLocations(file, 0, length);
      }
      if (isSplitable(job, path)) { // splittable by default unless the InputFormat overrides this
        long blockSize = file.getBlockSize();
        // with the defaults this returns: split size = block size
        long splitSize = computeSplitSize(blockSize, minSize, maxSize);
        // start from the full file length; used to compute each split's offset
        long bytesRemaining = length;
        // SPLIT_SLOP is 1.1
        // the loop condition means: keep cutting splits as long as the remaining bytes
        // are more than 1.1 times the split size
        while (((double) bytesRemaining)/splitSize > SPLIT_SLOP) {
          // compute the block index for the current offset
          int blkIndex = getBlockIndex(blkLocations, length-bytesRemaining);
          //-----------getBlockIndex() begin--------------------------------------------
          protected int getBlockIndex(BlockLocation[] blkLocations, long offset) {
            for (int i = 0 ; i < blkLocations.length; i++) {
              // is the offset inside this block?
              if ((blkLocations[i].getOffset() <= offset) &&
                  (offset < blkLocations[i].getOffset() + blkLocations[i].getLength())){
                // the logic is simple: return the index of the block that contains the offset
                return i;
              }
            }
            ....................
          //-----------getBlockIndex() end----------------------------------------------
          // once the block index is known, add the split to the list
          // a split records: path, offset, split size, and host nodes
          // [the hosts are what make "moving computation to data" possible]
          splits.add(makeSplit(path, length-bytesRemaining, splitSize,
                     blkLocations[blkIndex].getHosts(),
                     blkLocations[blkIndex].getCachedHosts()));
          bytesRemaining -= splitSize;
        }
        // build the split for the remaining tail of the file
        if (bytesRemaining != 0) {
          int blkIndex = getBlockIndex(blkLocations, length-bytesRemaining);
          splits.add(makeSplit(path, length-bytesRemaining, bytesRemaining,
                     blkLocations[blkIndex].getHosts(),
                     blkLocations[blkIndex].getCachedHosts()));
        }
      } else { // not splittable (e.g. a gzip-compressed file): the whole file becomes a single split
        splits.add(makeSplit(path, 0, length, blkLocations[0].getHosts(),
                   blkLocations[0].getCachedHosts()));
      }
    }
  ......
  // return the list of splits; the number of entries determines how many MapTasks there will be
  return splits;
}
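To make the SPLIT_SLOP arithmetic concrete, here is a small standalone sketch (plain Java, not Hadoop code) that reproduces computeSplitSize() and the 1.1-slop loop; the file and block sizes are made up for illustration.

public class SplitMath {
  static final double SPLIT_SLOP = 1.1;

  // same formula as FileInputFormat.computeSplitSize()
  static long computeSplitSize(long blockSize, long minSize, long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
  }

  // same loop shape as getSplits(): cut full splits while the remainder is
  // more than 1.1x the split size, then emit one tail split
  static int countSplits(long length, long blockSize, long minSize, long maxSize) {
    long splitSize = computeSplitSize(blockSize, minSize, maxSize);
    int splits = 0;
    long bytesRemaining = length;
    while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
      splits++;
      bytesRemaining -= splitSize;
    }
    if (bytesRemaining != 0) {
      splits++;
    }
    return splits;
  }

  public static void main(String[] args) {
    long MB = 1024L * 1024;
    // 300 MB file with 128 MB blocks -> splits of 128 MB, 128 MB and 44 MB => prints 3
    System.out.println(countSplits(300 * MB, 128 * MB, 1, Long.MAX_VALUE));
    // 129 MB file -> 129/128 <= 1.1, so it stays a single 129 MB split => prints 1
    System.out.println(countSplits(129 * MB, 128 * MB, 1, Long.MAX_VALUE));
  }
}

The second case shows what the 1.1 slop buys: instead of producing a full 128 MB split plus a tiny 1 MB split, the loop keeps the file as one slightly over-sized split, avoiding a MapTask that would process almost no data.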