Review
- HDFS reads and writes
- how the secondary namenode works
- a shell script that periodically collects data into HDFS
MapReduce
- a programming framework for distributed processing of large datasets
- a job is split into two phases:
    - the map phase: concurrent task instances each process their own slice of the input, independently of one another
    - the reduce phase: concurrent task instances again work independently, but depend on the output of the map-phase tasks
- a single MapReduce job has exactly one map phase and one reduce phase
- large, complex problems are solved by chaining several MapReduce jobs together (see the sketch after this list)
- the role of the MR ApplicationMaster
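As a rough illustration of that chaining, a minimal sketch follows. The class name ChainedJobs, the stage1/stage2 Job objects, and the path parameters are all hypothetical; both jobs are assumed to be fully configured (mapper, reducer, output types) elsewhere. The key point is that the second job reads the first job's output directory, so it is only submitted after the first job has succeeded.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainedJobs {
    // Runs two fully configured jobs back to back; stage2 reads stage1's output.
    public static int runChain(Job stage1, Job stage2,
                               Path input, Path intermediate, Path output) throws Exception {
        FileInputFormat.setInputPaths(stage1, input);
        FileOutputFormat.setOutputPath(stage1, intermediate);
        // stage2's input files only exist after stage1 commits, so wait for it.
        if (!stage1.waitForCompletion(true)) {
            return 1; // abort the chain if the first stage fails
        }
        FileInputFormat.setInputPaths(stage2, intermediate);
        FileOutputFormat.setOutputPath(stage2, output);
        return stage2.waitForCompletion(true) ? 0 : 1;
    }
}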
Practice
1. The Mapper class
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    // Called once per input line; the key is the line's byte offset in the file.
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String[] words = line.split(" ");
        // Emit (word, 1) for every word; the framework groups these by word.
        for (String word : words) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}
2. The Reducer class. For every distinct key, the framework gathers all the values the mappers emitted for it and calls reduce() once with that group; e.g. for the input "hello world hello", the reducer receives ("hello", [1, 1]) and writes ("hello", 2).

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    // Called once per distinct word, with all of the 1s emitted for it.
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        context.write(key, new IntWritable(count));
    }
}
3. Define a driver class that describes the job and submits it.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountRunner {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job wcjob = Job.getInstance(conf);
        // Tells the framework which jar to ship to the cluster.
        wcjob.setJarByClass(WordCountRunner.class);
        wcjob.setMapperClass(WordCountMapper.class);
        wcjob.setReducerClass(WordCountReducer.class);
        // Types of the mapper's intermediate output ...
        wcjob.setMapOutputKeyClass(Text.class);
        wcjob.setMapOutputValueClass(IntWritable.class);
        // ... and of the job's final (reducer) output.
        wcjob.setOutputKeyClass(Text.class);
        wcjob.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(wcjob, "hdfs://hdp-server01:9000/wordcount/data/big.txt");
        // The output directory must not already exist, or the job fails fast.
        FileOutputFormat.setOutputPath(wcjob, new Path("hdfs://hdp-server01:9000/wordcount/output/"));
        // Submit and block until the job finishes; true prints progress.
        boolean res = wcjob.waitForCompletion(true);
        System.exit(res ? 0 : 1);
    }
}
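The input and output paths above are hardcoded. A common variant, sketched below under the hypothetical name WordCountTool, reads them from the command line via Hadoop's Tool/ToolRunner, which also lets the submitter pass standard -D configuration options; the packaged jar is then launched with the hadoop jar command.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountTool extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // args[0] = input path, args[1] = output path
        Job job = Job.getInstance(getConf());
        job.setJarByClass(WordCountTool.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        // Map output types match the final output types, so one pair suffices.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips generic options (-D, -files, ...) before calling run().
        System.exit(ToolRunner.run(new Configuration(), new WordCountTool(), args));
    }
}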
- Step through FileInputFormat with the debugger to trace how it plans the input splits.
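When doing so, the interesting spot is getSplits(), which plans one map task per input split. The split sizing boils down to the clamp below (a simplified restatement for orientation, not the library source): with the default minimum (1) and maximum (Long.MAX_VALUE) split sizes, each split equals one HDFS block.

// Simplified restatement of the split sizing used by FileInputFormat:
// clamp the HDFS block size between the configured min and max split sizes.
static long computeSplitSize(long blockSize, long minSize, long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
}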