Hadoop Column - Getting Started with MapReduce

Posted by 小馬哥 on 2021-01-21

The Basic Idea of MapReduce

Let's start with a simple example. Suppose three people are playing Dou Dizhu and want to check whether the deck is complete. The simplest approach is to have one person count all the cards and check whether there are 54 (traditional single-machine computation). Alternatively, the deck can be split into three piles, each player counts their own pile (the Map phase), and the three partial counts are added up to get the total (the Reduce phase).

So the core idea of MapReduce is "divide and conquer" plus "aggregation": when the data volume is too large for one machine to handle, many machines process it together in the form of a distributed cluster.

Many articles translate Map and Reduce literally as "mapping" and "reduction". In reality, the idea of Map is simply that each worker takes responsibility for its own slice of the data (divide and conquer), and the idea of Reduce is to aggregate those partial results. There is no need for such stiff terminology (is it meant to make big data sound mysterious?).
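To make the analogy concrete, here is a tiny plain-Java sketch (not Hadoop code; the class and variable names are made up for illustration) of the same split-count-sum idea:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Card-counting analogy: split the deck into three piles, count each pile
// on its own ("map"), then add the partial counts together ("reduce").
public class DeckCountSketch {
    public static void main(String[] args) {
        // A hypothetical deck of 54 labelled cards.
        List<String> deck = new ArrayList<>();
        for (int i = 1; i <= 54; i++) deck.add("card-" + i);
        Collections.shuffle(deck);

        // Split the deck into three piles, one per player.
        int pileSize = (deck.size() + 2) / 3;
        List<List<String>> piles = new ArrayList<>();
        for (int i = 0; i < deck.size(); i += pileSize) {
            piles.add(deck.subList(i, Math.min(i + pileSize, deck.size())));
        }

        // "Map": each pile is counted independently; "Reduce": sum the partial counts.
        int total = piles.stream().mapToInt(List::size).sum();
        System.out.println("Total cards counted: " + total); // prints 54 if the deck is complete
    }
}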

Getting Started with MapReduce Programming

The programming model is quite simple and can be broken down into 8 steps: (1) read and split the input (InputFormat), (2) Map, (3) partition, (4) sort, (5) combine, (6) group (steps 3-6 make up the shuffle phase), (7) Reduce, and (8) write the output (OutputFormat).

We will use the simplest example, word count (often called the "Hello World" of big data). Let's get it running and look at the result before going into the theory.

Following the MapReduce model, there are two main pieces for us to write, the Mapper and the Reducer; Hadoop implements everything else for us. We will practice first and study the underlying principles later.

MapReduce jobs have two run modes: 1. cluster mode (production); 2. local mode (experimentation and learning).
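Which mode a job uses is controlled by the mapreduce.framework.name property. As a rough sketch, on a cluster this is typically set to yarn in mapred-site.xml, while the driver code later in this article switches to local mode by setting it to local programmatically:

<!-- mapred-site.xml on a cluster (sketch): submit jobs to YARN -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>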

Prerequisites:

1. Download a Hadoop distribution, extract it locally, and add it to your environment variables;

2. Download hadoop.dll and place it in Hadoop's bin directory (required when running on Windows).
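For example, assuming the distribution was extracted to D:\hadoop-2.10.1 (a hypothetical path), the Windows environment variables would look roughly like this:

HADOOP_HOME=D:\hadoop-2.10.1
PATH=%PATH%;%HADOOP_HOME%\bin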

 

Create a Maven project and add the dependencies:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.10.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.10.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.10.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-core</artifactId>
      <version>2.10.1</version>
    </dependency>

Sample data file D:\Source\data\data_in\xx.txt (matching the input path used by the driver below):

hello,world,hadoop
hive,sqoop,flume,hello
kitty,tom,jerry,world
hadoop

 

Start writing the code

Step 1: Create the Mapper class

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;

// Input key/value: the byte offset of the line and the line text.
// Output key/value: a word and the count 1.
public class BaseMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Each input line is a comma-separated list of words.
        String[] words = value.toString().split(",");
        Text keyout = new Text();
        LongWritable valueout = new LongWritable(1);
        for (String word : words) {
            // Emit (word, 1) for every word on the line.
            keyout.set(word);
            context.write(keyout, valueout);
        }
    }
}
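For the first line of the sample file, hello,world,hadoop, this map() method splits on commas and emits three key/value pairs:

(hello, 1)
(world, 1)
(hadoop, 1)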

 

Step 2: Create the Reducer class

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;

// Input: a word and all the 1s emitted for it by the mappers.
// Output: the word and its total count.
public class BaseReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable value : values) {
            sum += value.get();
        }
        context.write(key, new LongWritable(sum));
    }
}
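Between the map and reduce phases, the shuffle groups all emitted values by key, so reduce() receives each word together with every 1 produced for it. For the sample data, for example:

(hello, [1, 1])  ->  (hello, 2)
(world, [1, 1])  ->  (world, 2)
(hive,  [1])     ->  (hive, 1)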

 

Step 3: Create the Job driver class

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class MainJob extends Configured implements Tool {
    @Override
    public int run(String[] strings) throws Exception {
        Job job = Job.getInstance(super.getConf(), MainJob.class.getName());
        // When running on a cluster, the job must be packaged into a jar
        job.setJarByClass(MainJob.class);
        // 1. Set the class that reads and parses the input files, and the input path
        job.setInputFormatClass(TextInputFormat.class);
        TextInputFormat.setInputPaths(job,new Path("D:\\Source\\data\\data_in"));
        // 2. Set the Mapper class and its output key/value types
        job.setMapperClass(BaseMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        // 3-6. Shuffle phase: partitioning, sorting, combining, grouping (defaults are used here)
        // 7. Set the Reducer class and its output key/value types
        job.setReducerClass(BaseReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        // 8. Set the output format class and the output path
        job.setOutputFormatClass(TextOutputFormat.class);
        TextOutputFormat.setOutputPath(job,new Path("D:\\Source\\data\\demo_result1"));
        // Launch the MapReduce job and wait for it to finish
        boolean completion = job.waitForCompletion(true);
        return completion?0:1;
    }
    public static void main(String[] args) {
        MainJob mainJob = new MainJob();
        try {
            Configuration configuration = new Configuration();
            // Local mode: the job runs in-process; on a real cluster this would be
            // "yarn" together with the actual ResourceManager hostname.
            configuration.set("mapreduce.framework.name","local");
            configuration.set("yarn.resourcemanager.hostname","local");
            int run = ToolRunner.run(configuration, mainJob, args);
            System.exit(run);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
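If the job completes successfully against the sample file above, the output directory (which must not already exist, otherwise the job will refuse to start) contains a _SUCCESS marker and a part-r-00000 file with tab-separated, key-sorted results:

flume	1
hadoop	2
hello	2
hive	1
jerry	1
kitty	1
sqoop	1
tom	1
world	2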
