Hadoop MapReduce: wordcount (word frequency counting)

Posted by hackeruncle on 2016-02-28
1. Create test.log

  [root@sht-sgmhadoopnn-01 mapreduce]# more /tmp/test.log
  1
  2
  3
  a
  b
  a
  v
  a a a
  abc
  我是誰
  %……
  %
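
The listing above only shows the finished file. If you want to reproduce it, one option is a single printf (a minimal sketch, assuming a bash shell; the original file may of course have been written with any editor):

  # Recreate the sample input; each argument becomes one line of the listing above
  printf '%s\n' 1 2 3 a b a v 'a a a' abc 我是誰 %…… % > /tmp/test.log
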
2. Create an HDFS directory and upload the file

  [root@sht-sgmhadoopnn-01 ~]# hadoop fs -mkdir /testdir
  16/02/28 19:40:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  [root@sht-sgmhadoopnn-01 ~]# hadoop fs -put /tmp/test.log /testdir/
  16/02/28 19:40:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
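
Before running the job it is worth confirming that the file actually landed in HDFS. A quick optional check, using standard FsShell commands:

  # List the target directory and spot-check the uploaded contents
  hadoop fs -ls /testdir
  hadoop fs -cat /testdir/test.log
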
3. Browse the bundled example programs; we pick wordcount

  [root@sht-sgmhadoopnn-01 ~]# cd /hadoop/hadoop-2.7.2/share/hadoop/mapreduce
  [root@sht-sgmhadoopnn-01 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.2.jar
  An example program must be given as the first argument.
  Valid program names are:
    aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
    aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
    bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
    dbcount: An example job that count the pageview counts from a database.
    distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
    grep: A map/reduce program that counts the matches of a regex in the input.
    join: A job that effects a join over sorted, equally partitioned datasets
    multifilewc: A job that counts words from several files.
    pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
    pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
    randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
    randomwriter: A map/reduce program that writes 10GB of random data per node.
    secondarysort: An example defining a secondary sort to the reduce.
    sort: A map/reduce program that sorts the data written by the random writer.
    sudoku: A sudoku solver.
    teragen: Generate data for the terasort
    terasort: Run the terasort
    teravalidate: Checking results of terasort
    wordcount: A map/reduce program that counts the words in the input files.
    wordmean: A map/reduce program that counts the average length of the words in the input files.
    wordmedian: A map/reduce program that counts the median length of the words in the input files.
    wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
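
Invoking wordcount without paths makes the driver print its expected arguments, which is a quick way to confirm the calling convention before the real run (a small sketch; the usage string below is reproduced from the 2.7.x examples and may differ slightly in your build):

  # Running the program with no arguments prints a usage hint
  hadoop jar hadoop-mapreduce-examples-2.7.2.jar wordcount
  # Expected output, approximately: Usage: wordcount <in> [<in>...] <out>
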
4. Run wordcount
# hadoop jar hadoop-mapreduce-examples-2.7.2.jar wordcount /testdir /out1
#                       official example jar               program   input dir  output dir (must not exist yet)

  [root@sht-sgmhadoopnn-01 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.2.jar wordcount /testdir /out1
  16/02/28 19:40:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  16/02/28 19:40:53 INFO input.FileInputFormat: Total input paths to process : 1
  16/02/28 19:40:53 INFO mapreduce.JobSubmitter: number of splits:1
  16/02/28 19:40:53 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456590271264_0002
  16/02/28 19:40:54 INFO impl.YarnClientImpl: Submitted application application_1456590271264_0002
  16/02/28 19:40:54 INFO mapreduce.Job: The url to track the job: http://sht-sgmhadoopnn-01:8088/proxy/application_1456590271264_0002/
  16/02/28 19:40:54 INFO mapreduce.Job: Running job: job_1456590271264_0002
  16/02/28 19:41:04 INFO mapreduce.Job: Job job_1456590271264_0002 running in uber mode : false
  16/02/28 19:41:04 INFO mapreduce.Job: map 0% reduce 0%
  16/02/28 19:41:12 INFO mapreduce.Job: map 100% reduce 0%
  16/02/28 19:41:21 INFO mapreduce.Job: map 100% reduce 100%
  16/02/28 19:41:22 INFO mapreduce.Job: Job job_1456590271264_0002 completed successfully
  16/02/28 19:41:22 INFO mapreduce.Job: Counters: 49
          File System Counters
                  FILE: Number of bytes read=102
                  FILE: Number of bytes written=244621
                  FILE: Number of read operations=0
                  FILE: Number of large read operations=0
                  FILE: Number of write operations=0
                  HDFS: Number of bytes read=142
                  HDFS: Number of bytes written=56
                  HDFS: Number of read operations=6
                  HDFS: Number of large read operations=0
                  HDFS: Number of write operations=2
          Job Counters
                  Launched map tasks=1
                  Launched reduce tasks=1
                  Data-local map tasks=1
                  Total time spent by all maps in occupied slots (ms)=5537
                  Total time spent by all reduces in occupied slots (ms)=6555
                  Total time spent by all map tasks (ms)=5537
                  Total time spent by all reduce tasks (ms)=6555
                  Total vcore-milliseconds taken by all map tasks=5537
                  Total vcore-milliseconds taken by all reduce tasks=6555
                  Total megabyte-milliseconds taken by all map tasks=5669888
                  Total megabyte-milliseconds taken by all reduce tasks=6712320
          Map-Reduce Framework
                  Map input records=12
                  Map output records=14
                  Map output bytes=100
                  Map output materialized bytes=102
                  Input split bytes=98
                  Combine input records=14
                  Combine output records=10
                  Reduce input groups=10
                  Reduce shuffle bytes=102
                  Reduce input records=10
                  Reduce output records=10
                  Spilled Records=20
                  Shuffled Maps =1
                  Failed Shuffles=0
                  Merged Map outputs=1
                  GC time elapsed (ms)=79
                  CPU time spent (ms)=2560
                  Physical memory (bytes) snapshot=445992960
                  Virtual memory (bytes) snapshot=1775263744
                  Total committed heap usage (bytes)=306184192
          Shuffle Errors
                  BAD_ID=0
                  CONNECTION=0
                  IO_ERROR=0
                  WRONG_LENGTH=0
                  WRONG_MAP=0
                  WRONG_REDUCE=0
          File Input Format Counters
                  Bytes Read=44
          File Output Format Counters
                  Bytes Written=56
  You have mail in /var/spool/mail/root
  [root@sht-sgmhadoopnn-01 mapreduce]#
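
One operational note: the output directory must not exist beforehand, because FileOutputFormat refuses to overwrite and the job fails with a FileAlreadyExistsException otherwise. To re-run the same job, remove the old output first (a minimal sketch):

  # Delete the previous output directory, then resubmit the job
  hadoop fs -rm -r /out1
  hadoop jar hadoop-mapreduce-examples-2.7.2.jar wordcount /testdir /out1
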
5. Verify the wordcount result (word frequencies)

  [root@sht-sgmhadoopnn-01 mapreduce]# hadoop fs -ls /out1
  16/02/28 19:43:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  Found 2 items
  -rw-r--r-- 3 root supergroup 0 2016-02-28 19:41 /out1/_SUCCESS
  -rw-r--r-- 3 root supergroup 56 2016-02-28 19:41 /out1/part-r-00000
  [root@sht-sgmhadoopnn-01 mapreduce]# hadoop fs -text /out1/part-r-00000
  16/02/28 19:43:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  % 1
  %…… 1
  1 1
  2 1
  3 1
  a 5
  abc 1
  b 1
  v 1
  我是誰 1
  You have mail in /var/spool/mail/root
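
The counts are easy to cross-check against the original file. wordcount tokenizes on whitespace, which the following coreutils one-liner approximates (a sanity-check sketch run on the local copy, not part of the Hadoop job):

  # Split on whitespace, drop empty tokens, count each distinct token
  tr -s '[:space:]' '\n' < /tmp/test.log | grep -v '^$' | sort | uniq -c

As in the part-r-00000 output above, a appears 5 times and every other token once.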

Source: ITPUB Blog, http://blog.itpub.net/30089851/viewspace-2015610/. Please credit the source when reposting.
