Hadoop: setting up a Hadoop development environment with Eclipse on Windows 7
Most guides online install Eclipse on Linux for Hadoop application development, but many Java programmers are not that familiar with Linux and would rather develop Hadoop programs on Windows. After some experimentation, here is a summary of how to develop Hadoop code on Windows with Eclipse.
1. Download the dedicated Hadoop plugin jar for Eclipse
The Hadoop version is 2.3.0 and the cluster runs on CentOS 6.x. The plugin package is named hadoop-eclipse-plugin-2.3.0 and works with the Hadoop 2.x series.
2. Put the plugin jar into the eclipse/plugins directory
For convenience later on, I put as many jars in there as possible, as shown below:
3. Restart Eclipse and configure the Hadoop installation directory
If the plugin installed successfully, opening Windows → Preferences shows a Hadoop Map/Reduce entry in the left pane. Click it and set the Hadoop installation path in the right pane.
4. Configure Map/Reduce Locations
Open Windows → Open Perspective → Other.
Select Map/Reduce and click OK; a Map/Reduce Locations tab appears at the bottom right, as shown below:
Click the Map/Reduce Locations tab, then the small elephant icon on the right to open the Hadoop Location configuration window.
Enter any Location Name you like, then configure the Map/Reduce Master and DFS Master; Host and Port must match the settings in core-site.xml.
The relevant setting in core-site.xml:
<property>
    <name>fs.default.name</name>
    <value>hdfs://name01:9000</value>
</property>
Configure the dialog accordingly:
Click the Finish button to close the window. Click DFSLocations → myhadoop (the location name configured in the previous step) on the left. If you can see user, the installation succeeded, but drilling in shows an error: Error: Permission denied: user=root, access=READ_EXECUTE, inode="/tmp"; hadoop:supergroup:drwx------, as shown below:
This is a permissions problem: make the hadoop user the owner of all Hadoop-related directories under /tmp/ and grant 777 permissions. (If the denied inode lives in HDFS rather than on the local disk, the equivalent HDFS-side commands are hadoop fs -chmod -R 777 /tmp and hadoop fs -chown -R hadoop /tmp.)
cd /tmp/
chmod 777 /tmp/
chown -R hadoop.hadoop /tmp/hsperfdata_root
After reconnecting, DFS Locations displays normally.
Map/Reduce Master (the Map/Reduce address of the Hadoop cluster; it should match the mapred.job.tracker setting in mapred-site.xml)
(1) If clicking it raises the error:
An internal error occurred during: "Connecting to DFS hadoopname01".
java.net.UnknownHostException: name01
simply enter the IP address 192.168.52.128 in the hostname field instead, and the location opens normally, as shown below:
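Alternatively, instead of replacing the hostname with a raw IP in the Location dialog, the name can be made resolvable on the Windows side. This is just a sketch using the hostname/IP pair from this article (name01 / 192.168.52.128); add one line to C:\Windows\System32\drivers\etc\hosts:

```
192.168.52.128  name01
```

With that mapping in place, the Location can keep the hostname name01 and still resolve it.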
5. Create the WordCount project
File → New → Project, choose Map/Reduce Project, and enter the project name WordCount.
Create a class named WordCount inside the WordCount project. If you get the error Invalid Hadoop Runtime specified; please click 'Configure Hadoop install directory' or fill in library location input field, the chosen directory is wrong: it cannot be the root directory E:\hadoop; switching to E:\u\hadoop\ fixes it, as shown below:
Click Next through the wizard and then Finish to create the project. The Eclipse console prints the following:
2014-12-9 16:03:10: Eclipse is running in a JRE, but a JDK is required
Some Maven plugins may not work when importing projects or updating source folders.
2014-12-9 16:03:13: Refreshing [/WordCount/pom.xml]
2014-12-9 16:03:14: Refreshing [/WordCount/pom.xml]
2014-12-9 16:03:14: Refreshing [/WordCount/pom.xml]
2014-12-9 16:03:14: Updating index central|
2014-12-9 16:04:10: Updated index for central|
6. Import the lib jars
The Hadoop jars to add are:
all jars under /hadoop-2.3.0/share/hadoop/common, plus all jars in its lib subdirectory;
all jars under /hadoop-2.3.0/share/hadoop/hdfs, excluding those in its lib subdirectory;
all jars under /hadoop-2.3.0/share/hadoop/mapreduce, excluding those in its lib subdirectory;
all jars under /hadoop-2.3.0/share/hadoop/yarn, excluding those in its lib subdirectory;
about 18 jars in total.
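As a sanity check, the jar sets above can be gathered into a single classpath string with a small script. This is only a sketch: it builds a throwaway demo tree under mktemp so it can run anywhere; point HADOOP_HOME at your real hadoop-2.3.0 directory instead.

```shell
# Demo tree standing in for a real hadoop-2.3.0 unpack directory
# (hypothetical layout, created only for illustration).
HADOOP_HOME=$(mktemp -d)
for d in common common/lib hdfs mapreduce yarn; do
    mkdir -p "$HADOOP_HOME/share/hadoop/$d"
done
touch "$HADOOP_HOME/share/hadoop/common/hadoop-common-2.3.0.jar" \
      "$HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-2.3.0.jar"

# common plus common/lib, then hdfs/mapreduce/yarn without their lib
# subdirectories, matching the list above.
CP=""
for d in common common/lib hdfs mapreduce yarn; do
    for j in "$HADOOP_HOME/share/hadoop/$d"/*.jar; do
        [ -e "$j" ] && CP="$CP:$j"
    done
done
CP=${CP#:}
echo "$CP"
```

Running it against the real tree prints the colon-separated classpath you would hand to javac or an IDE build path.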
7. The code needed to submit a MapReduce job directly from Eclipse is shown below:
package wc;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class W2 {

    // Mapper: split each input line into tokens and emit (word, 1) per token.
    public static class TokenizerMapper extends
            Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sum the counts for each word.
    public static class IntSumReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The original listing is truncated at this point (it went on to
        // call System.setProperty); the job setup below follows the stock
        // Hadoop WordCount example.
        String[] otherArgs = new GenericOptionsParser(conf, args)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(W2.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
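Outside of Hadoop, the map → shuffle → reduce flow of this class can be mimicked with a plain shell pipeline, which is a handy way to sanity-check expected counts before running the job: tr plays the tokenizing mapper, sort is the shuffle grouping equal keys, and uniq -c is the summing reducer.

```shell
# map: one token per line; shuffle: sort groups equal words;
# reduce: uniq -c sums each group; awk reorders to "word count".
out=$(printf 'hello hadoop hello eclipse\n' \
    | tr -s ' ' '\n' \
    | sort \
    | uniq -c \
    | awk '{print $2" "$1}')
echo "$out"
# prints:
#   eclipse 1
#   hadoop 1
#   hello 2
```

The sample sentence here is made up; feed README.txt through the same pipeline to predict what the job's part-r-00000 output should contain.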
8. Running
8.1 Create the input directory on HDFS
[hadoop@name01 hadoop-2.3.0]$ hadoop fs -ls /
[hadoop@name01 hadoop-2.3.0]$ hadoop fs -mkdir input
mkdir: `input': No such file or directory
[hadoop@name01 hadoop-2.3.0]$
Note: hadoop fs needs the full (absolute) path when creating a directory.
If the Apache Hadoop version is 0.x or 1.x:
bin/hadoop fs -mkdir /in
bin/hadoop fs -put /home/du/input /in
If the Apache Hadoop version is 2.x:
bin/hdfs dfs -mkdir -p /in
bin/hdfs dfs -put /home/du/input /in
For distribution packages such as Cloudera CDH, IBM BI, or Hortonworks HDP, the first form works. Mind the full path when creating directories; the HDFS root directory is /.
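The -p flag matters for the same reason it does on a local filesystem: without it, creating a path whose parent does not yet exist fails. A local (non-HDFS) sketch of that behavior:

```shell
# Plain mkdir fails when the parent directory is missing; -p creates it.
tmp=$(mktemp -d)
if mkdir "$tmp/a/b" 2>/dev/null; then r1=ok; else r1=fail; fi
mkdir -p "$tmp/a/b" && r2=ok
echo "$r1 $r2"   # prints: fail ok
```

hdfs dfs -mkdir -p behaves the same way against the HDFS namespace.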
8.2 Copy the local README.txt into the HDFS input directory
[hadoop@name01 hadoop-2.3.0]$ find . -name README.txt
./share/doc/hadoop/common/README.txt
[hadoop@name01 ~]$ hadoop fs -copyFromLocal ./src/hadoop-2.3.0/share/doc/hadoop/common/README.txt /data/input
[hadoop@name01 ~]$
[hadoop@name01 ~]$ hadoop fs -ls /
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2014-12-15 23:34 /data
-rw-r--r-- 3 hadoop supergroup 88 2014-08-26 02:21 /input
[hadoop@name01 ~]$
8.3 After the Hadoop job finishes, inspect the output
(1) Directly on the Hadoop server:
[hadoop@name01 ~]$ hadoop fs -ls /data/
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2014-12-15 23:29 /data/input
drwxr-xr-x - hadoop supergroup 0 2014-12-15 23:34 /data/output
[hadoop@name01 ~]$
(2) In Eclipse.
(3) In the console output:
2014-12-16 15:34:01,303 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2014-12-16 15:34:01,309 INFO [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2014-12-16 15:34:02,047 INFO [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) - Total input paths to process : 1
2014-12-16 15:34:02,120 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1
2014-12-16 15:34:02,323 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_local1764589720_0001
2014-12-16 15:34:02,367 WARN [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/staging/hadoop1764589720/.staging/job_local1764589720_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-12-16 15:34:02,368 WARN [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/staging/hadoop1764589720/.staging/job_local1764589720_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-12-16 15:34:02,682 WARN [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local1764589720_0001/job_local1764589720_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-12-16 15:34:02,682 WARN [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local1764589720_0001/job_local1764589720_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-12-16 15:34:02,703 INFO [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://localhost:8080/
2014-12-16 15:34:02,704 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_local1764589720_0001
2014-12-16 15:34:02,707 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
2014-12-16 15:34:02,719 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2014-12-16 15:34:02,853 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
2014-12-16 15:34:02,857 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local1764589720_0001_m_000000_0
2014-12-16 15:34:02,919 INFO [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129)) - ProcfsBasedProcessTree currently is supported only on Linux.
2014-12-16 15:34:03,281 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(581)) - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@2e1022ec
2014-12-16 15:34:03,287 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(733)) - Processing split: hdfs://192.168.52.128:9000/data/input/README.txt:0+1366
2014-12-16 15:34:03,304 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(388)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2014-12-16 15:34:03,340 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1181)) - (EQUATOR) 0 kvi 26214396(104857584)
2014-12-16 15:34:03,341 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(975)) - mapreduce.task.io.sort.mb: 100
2014-12-16 15:34:03,341 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(976)) - soft limit at 83886080
2014-12-16 15:34:03,341 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(977)) - bufstart = 0; bufvoid = 104857600
2014-12-16 15:34:03,341 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(978)) - kvstart = 26214396; length = 6553600
2014-12-16 15:34:03,708 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_local1764589720_0001 running in uber mode : false
2014-12-16 15:34:03,710 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 0% reduce 0%
2014-12-16 15:34:04,121 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) -
2014-12-16 15:34:04,128 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1435)) - Starting flush of map output
2014-12-16 15:34:04,128 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1453)) - Spilling map output
2014-12-16 15:34:04,128 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1454)) - bufstart = 0; bufend = 2055; bufvoid = 104857600
2014-12-16 15:34:04,128 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1456)) - kvstart = 26214396(104857584); kvend = 26213684(104854736); length = 713/6553600
2014-12-16 15:34:04,179 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1639)) - Finished spill 0
2014-12-16 15:34:04,194 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(995)) - Task:attempt_local1764589720_0001_m_000000_0 is done. And is in the process of committing
2014-12-16 15:34:04,207 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
2014-12-16 15:34:04,208 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1115)) - Task 'attempt_local1764589720_0001_m_000000_0' done.
2014-12-16 15:34:04,208 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local1764589720_0001_m_000000_0
2014-12-16 15:34:04,208 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
2014-12-16 15:34:04,211 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for reduce tasks
2014-12-16 15:34:04,211 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) - Starting task: attempt_local1764589720_0001_r_000000_0
2014-12-16 15:34:04,221 INFO [pool-6-thread-1] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129)) - ProcfsBasedProcessTree currently is supported only on Linux.
2014-12-16 15:34:04,478 INFO [pool-6-thread-1] mapred.Task (Task.java:initialize(581)) - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@36154615
2014-12-16 15:34:04,483 INFO [pool-6-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@e2b02a3
2014-12-16 15:34:04,500 INFO [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(193)) - MergerManager: memoryLimit=949983616, maxSingleShuffleLimit=237495904, mergeThreshold=626989184, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2014-12-16 15:34:04,503 INFO [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local1764589720_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2014-12-16 15:34:04,543 INFO [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(140)) - localfetcher#1 about to shuffle output of map attempt_local1764589720_0001_m_000000_0 decomp: 1832 len: 1836 to MEMORY
2014-12-16 15:34:04,548 INFO [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 1832 bytes from map-output for attempt_local1764589720_0001_m_000000_0
2014-12-16 15:34:04,553 INFO [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(307)) - closeInMemoryFile -> map-output of size: 1832, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->1832
2014-12-16 15:34:04,564 INFO [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
2014-12-16 15:34:04,566 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2014-12-16 15:34:04,566 INFO [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(667)) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2014-12-16 15:34:04,585 INFO [pool-6-thread-1] mapred.Merger (Merger.java:merge(589)) - Merging 1 sorted segments
2014-12-16 15:34:04,585 INFO [pool-6-thread-1] mapred.Merger (Merger.java:merge(688)) - Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
2014-12-16 15:34:04,605 INFO [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(742)) - Merged 1 segments, 1832 bytes to disk to satisfy reduce memory limit
2014-12-16 15:34:04,605 INFO [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(772)) - Merging 1 files, 1836 bytes from disk
2014-12-16 15:34:04,606 INFO [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(787)) - Merging 0 segments, 0 bytes from memory into reduce
2014-12-16 15:34:04,607 INFO [pool-6-thread-1] mapred.Merger (Merger.java:merge(589)) - Merging 1 sorted segments
2014-12-16 15:34:04,608 INFO [pool-6-thread-1] mapred.Merger (Merger.java:merge(688)) - Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
2014-12-16 15:34:04,608 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2014-12-16 15:34:04,643 INFO [pool-6-thread-1] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2014-12-16 15:34:04,714 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 100% reduce 0%
2014-12-16 15:34:04,842 INFO [pool-6-thread-1] mapred.Task (Task.java:done(995)) - Task:attempt_local1764589720_0001_r_000000_0 is done. And is in the process of committing
2014-12-16 15:34:04,850 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2014-12-16 15:34:04,850 INFO [pool-6-thread-1] mapred.Task (Task.java:commit(1156)) - Task attempt_local1764589720_0001_r_000000_0 is allowed to commit now
2014-12-16 15:34:04,881 INFO [pool-6-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439)) - Saved output of task 'attempt_local1764589720_0001_r_000000_0' to hdfs://192.168.52.128:9000/data/output/_temporary/0/task_local1764589720_0001_r_000000
2014-12-16 15:34:04,884 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - reduce > reduce
2014-12-16 15:34:04,884 INFO [pool-6-thread-1] mapred.Task (Task.java:sendDone(1115)) - Task 'attempt_local1764589720_0001_r_000000_0' done.
2014-12-16 15:34:04,885 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(325)) - Finishing task: attempt_local1764589720_0001_r_000000_0
2014-12-16 15:34:04,885 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - reduce task executor complete.
2014-12-16 15:34:05,714 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 100% reduce 100%
2014-12-16 15:34:05,714 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1373)) - Job job_local1764589720_0001 completed successfully
2014-12-16 15:34:05,733 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 38
File System Counters
    FILE: Number of bytes read=34542
    FILE: Number of bytes written=470650
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=2732
    HDFS: Number of bytes written=1306
    HDFS: Number of read operations=15
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=4
Map-Reduce Framework
    Map input records=31
    Map output records=179
    Map output bytes=2055
    Map output materialized bytes=1836
    Input split bytes=113
    Combine input records=179
    Combine output records=131
    Reduce input groups=131
    Reduce shuffle bytes=1836
    Reduce input records=131
    Reduce output records=131
    Spilled Records=262
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=13
    CPU time spent (ms)=0
    Physical memory (bytes) snapshot=0
    Virtual memory (bytes) snapshot=0
    Total committed heap usage (bytes)=440664064
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters
    Bytes Read=1366
File Output Format Counters
    Bytes Written=1306
----------------------------------------------------------------------------------------------------------------
<Copyright reserved. Reposting is permitted, but the source address must be credited with a link; otherwise legal liability will be pursued!>
Original blog: http://blog.itpub.net/26230597/viewspace-1370205/
Original author: 黃杉 (mchdba)
----------------------------------------------------------------------------------------------------------------
From the ITPUB blog, link: http://blog.itpub.net/30089851/viewspace-1987855/. Please credit the source when reposting; otherwise legal liability will be pursued.