This article describes how to set up a MapReduce programming environment in IntelliJ IDEA by creating a Maven project.
1. Software environment
The software versions I used are as follows:
- IntelliJ IDEA 2017.1
- Maven 3.3.9
- Hadoop in pseudo-distributed mode (for an installation tutorial, see here)
2. Creating the Maven project
Open IDEA and go to File -> New -> Project, then select Maven in the left panel. (If you only want to run MapReduce, a plain Java project is enough and there is no need to check "Create from archetype"; check it if you want to create a web project or build from an archetype.)
Set the GroupId and ArtifactId, then click Next.
Set the project location, then click Next.
After clicking Finish, the empty project structure is shown in the figure below.
The complete project structure looks like this:
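The project follows the standard Maven layout. As a sketch (assuming the package com.mrtest.hadoop used later in this tutorial, and showing where the files from the following steps will live):

pom.xml
src/
├── main/
│   ├── java/
│   │   └── com/mrtest/hadoop/    <- WordCount.java and FileUtil.java
│   └── resources/                <- log4j.properties
└── test/
    └── java/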
3. Adding Maven dependencies
Add the dependencies to pom.xml. For Hadoop 2.7.3, the following artifacts are needed:
- hadoop-common
- hadoop-hdfs
- hadoop-mapreduce-client-core
- hadoop-mapreduce-client-jobclient
- log4j (for logging)
The dependencies in pom.xml are as follows:
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
</dependencies>
4. Configuring log4j
Create a log4j configuration file named log4j.properties under src/main/resources with the following content:

log4j.rootLogger = debug,stdout

### Output messages to the console ###
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = [%-5p] %d{yyyy-MM-dd HH:mm:ss,SSS} method:%l%n%m%n
5. Starting Hadoop
Start Hadoop with:

cd hadoop-2.7.3/
./sbin/start-all.sh

Then visit http://localhost:50070/ to check whether Hadoop started correctly.
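You can also check from the command line with jps, which lists the running JVM processes. In a working pseudo-distributed setup the output should contain roughly the following daemons (process IDs omitted here):

jps
NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager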
6. Running WordCount (reading a local file)
Create an input folder in the project root, add a file dream.txt to it, and write in a few words:

I have a dream
a dream
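For example, from the project root this can be done on the command line (paths and contents as above):

mkdir input
printf 'I have a dream\na dream\n' > input/dream.txt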
Create a new package under src/main/java and add FileUtil.java, containing a function that deletes the output folder so that it does not have to be removed by hand before every run. Its content is as follows:
package com.mrtest.hadoop;

import java.io.File;

/**
 * Created by bee on 3/25/17.
 */
public class FileUtil {

    /**
     * Recursively delete the directory at the given path.
     * Returns true on success, false if the path does not exist.
     */
    public static boolean deleteDir(String path) {
        File dir = new File(path);
        if (dir.exists()) {
            for (File f : dir.listFiles()) {
                if (f.isDirectory()) {
                    // Recurse with the full path (not just the name),
                    // so nested directories resolve correctly.
                    deleteDir(f.getPath());
                } else {
                    f.delete();
                }
            }
            dir.delete();
            return true;
        } else {
            System.out.println("File (or directory) does not exist!");
            return false;
        }
    }
}
Next, write the WordCount MapReduce program, WordCount.java:
package com.mrtest.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.StringTokenizer;

/**
 * Created by bee on 3/25/17.
 */
public class WordCount {

    public static class TokenizerMapper extends
            Mapper<Object, Text, Text, IntWritable> {

        public static final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // Split each input line into tokens and emit (word, 1) per token.
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                this.word.set(itr.nextToken());
                context.write(this.word, one);
            }
        }
    }

    public static class IntSumReduce extends
            Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        // Sum the counts for each word and emit (word, total).
        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            this.result.set(sum);
            context.write(key, this.result);
        }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {

        // Delete the previous output directory so the job can run again.
        FileUtil.deleteDir("output");

        Configuration conf = new Configuration();
        String[] otherArgs = new String[]{"input/dream.txt", "output"};
        if (otherArgs.length != 2) {
            System.err.println("Usage: WordCount <in> <out>");
            System.exit(2);
        }

        Job job = Job.getInstance(conf, "WordCount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setReducerClass(WordCount.IntSumReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
After the run finishes, an output folder appears in the project root. Open output/part-r-00000 and you will see:

I 1
a 2
dream 2
have 1
Here the input and output paths are hard-coded in a String array inside main. If you would rather receive them through main's args array and specify the paths at run time, that works too: before running WordCount, edit the Run Configuration and set the input and output paths under Program arguments.
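As a minimal sketch of that variant, Hadoop's GenericOptionsParser can split generic Hadoop options from the application arguments; replace the hard-coded array in main with:

// add to the imports: import org.apache.hadoop.util.GenericOptionsParser;

// Read <in> and <out> from the Program arguments instead of hard-coding them.
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
if (otherArgs.length != 2) {
    System.err.println("Usage: WordCount <in> <out>");
    System.exit(2);
}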
7. Running WordCount (reading from HDFS)
Create a directory on HDFS:

hadoop fs -mkdir /worddir

If the NameNode is in safe mode, directory creation fails with:

mkdir: Cannot create directory /worddir. Name node is in safe mode.

In that case, run the following command to leave safe mode:

hadoop dfsadmin -safemode leave

Upload the local file:

hadoop fs -put dream.txt /worddir
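To confirm the upload, list the directory; dream.txt should appear in the listing:

hadoop fs -ls /worddir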
Then change the otherArgs parameter so that the input points to the file's location on HDFS:

String[] otherArgs = new String[]{"hdfs://localhost:9000/worddir/dream.txt", "output"};
To verify the result:
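Run WordCount again and check the output as in the previous section. Note that because the output path "output" carries no hdfs:// scheme and the Configuration here uses the defaults (no core-site.xml on the classpath), it still resolves to the local file system, so the counts should again land in output/part-r-00000 under the project root:

cat output/part-r-00000
I 1
a 2
dream 2
have 1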