Hadoop 2.6.0 benchmark tests
1. View the test program's help information
[hadoop@tong1 hadoop-2.6.0]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar
An example program must be given as the first argument.
Valid program names are:
DFSCIOTest: Distributed i/o benchmark of libhdfs.
DistributedFSCheck: Distributed checkup of the file system consistency.
JHLogAnalyzer: Job History Log analyzer.
MRReliabilityTest: A program that tests the reliability of the MR framework by injecting faults/failures
SliveTest: HDFS Stress Test and Live Data Verification.
TestDFSIO: Distributed i/o benchmark.
fail: a job that always fails
filebench: Benchmark SequenceFile(Input|Output)Format (block,record compressed and uncompressed), Text(Input|Output)Format (compressed and uncompressed)
largesorter: Large-Sort tester
loadgen: Generic map/reduce load generator
mapredtest: A map/reduce test check.
minicluster: Single process HDFS and MR cluster.
mrbench: A map/reduce benchmark that can create many small jobs
nnbench: A benchmark that stresses the namenode.
sleep: A job that sleeps at each map and reduce task.
testbigmapoutput: A map/reduce program that works on a very big non-splittable file and does identity map/reduce
testfilesystem: A test for FileSystem read/write.
testmapredsort: A map/reduce program that validates the map-reduce framework's sort.
testsequencefile: A test for flat files of binary key value pairs.
testsequencefileinputformat: A test for sequence file input format.
testtextinputformat: A test for text input format.
threadedmapbench: A map/reduce benchmark that compares the performance of maps with multiple spills over maps with 1 spill
[hadoop@tong1 hadoop-2.6.0]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO
15/02/03 15:28:32 INFO fs.TestDFSIO: TestDFSIO.1.7
Missing arguments.
Usage: TestDFSIO [genericOptions] -read [-random | -backward | -skip [-skipSize Size]] | -write | -append | -clean [-compression codecClassName] [-nrFiles N] [-size Size[B|KB|MB|GB|TB]] [-resFile resultFileName] [-bufferSize Bytes] [-rootDir]
[hadoop@tong1 hadoop-2.6.0]$
2. Test Hadoop write speed
Write data into the HDFS file system: 10 files of 10 MB each, stored under /benchmarks/TestDFSIO/io_data.
[hadoop@tong1 hadoop-2.6.0]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 10MB
15/02/03 15:38:21 INFO fs.TestDFSIO: TestDFSIO.1.7
15/02/03 15:38:21 INFO fs.TestDFSIO: nrFiles = 10
15/02/03 15:38:21 INFO fs.TestDFSIO: nrBytes (MB) = 10.0
15/02/03 15:38:21 INFO fs.TestDFSIO: bufferSize = 1000000
15/02/03 15:38:21 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
15/02/03 15:38:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/03 15:38:22 INFO fs.TestDFSIO: creating control file: 10485760 bytes, 10 files
15/02/03 15:38:30 INFO fs.TestDFSIO: created control files for: 10 files
15/02/03 15:38:30 INFO client.RMProxy: Connecting to ResourceManager at tong1/192.168.1.247:8032
15/02/03 15:38:30 INFO client.RMProxy: Connecting to ResourceManager at tong1/192.168.1.247:8032
15/02/03 15:38:31 INFO mapred.FileInputFormat: Total input paths to process : 10
15/02/03 15:38:31 INFO mapreduce.JobSubmitter: number of splits:10
15/02/03 15:38:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1422946799080_0004
15/02/03 15:38:32 INFO impl.YarnClientImpl: Submitted application application_1422946799080_0004
15/02/03 15:38:32 INFO mapreduce.Job: The url to track the job:
15/02/03 15:38:32 INFO mapreduce.Job: Running job: job_1422946799080_0004
15/02/03 15:38:39 INFO mapreduce.Job: Job job_1422946799080_0004 running in uber mode : false
15/02/03 15:38:39 INFO mapreduce.Job: map 0% reduce 0%
15/02/03 15:38:50 INFO mapreduce.Job: map 10% reduce 0%
15/02/03 15:38:52 INFO mapreduce.Job: map 20% reduce 0%
15/02/03 15:39:02 INFO mapreduce.Job: map 20% reduce 7%
15/02/03 15:39:08 INFO mapreduce.Job: map 30% reduce 10%
15/02/03 15:39:55 INFO mapreduce.Job: map 37% reduce 10%
15/02/03 15:39:59 INFO mapreduce.Job: map 43% reduce 10%
15/02/03 15:40:03 INFO mapreduce.Job: map 50% reduce 10%
15/02/03 15:40:13 INFO mapreduce.Job: map 57% reduce 10%
15/02/03 15:40:33 INFO mapreduce.Job: map 63% reduce 10%
15/02/03 15:40:43 INFO mapreduce.Job: map 70% reduce 10%
15/02/03 15:40:50 INFO mapreduce.Job: map 73% reduce 10%
15/02/03 15:40:51 INFO mapreduce.Job: map 73% reduce 13%
15/02/03 15:40:52 INFO mapreduce.Job: map 77% reduce 13%
15/02/03 15:40:54 INFO mapreduce.Job: map 80% reduce 17%
15/02/03 15:40:57 INFO mapreduce.Job: map 80% reduce 20%
15/02/03 15:41:00 INFO mapreduce.Job: map 83% reduce 20%
15/02/03 15:41:01 INFO mapreduce.Job: map 90% reduce 20%
15/02/03 15:41:03 INFO mapreduce.Job: map 90% reduce 23%
15/02/03 15:41:05 INFO mapreduce.Job: map 93% reduce 23%
15/02/03 15:41:09 INFO mapreduce.Job: map 93% reduce 27%
15/02/03 15:41:15 INFO mapreduce.Job: map 97% reduce 27%
15/02/03 15:41:18 INFO mapreduce.Job: map 97% reduce 30%
15/02/03 15:41:34 INFO mapreduce.Job: map 100% reduce 30%
15/02/03 15:41:45 INFO mapreduce.Job: map 100% reduce 100%
15/02/03 15:41:47 INFO mapreduce.Job: Job job_1422946799080_0004 completed successfully
15/02/03 15:41:47 INFO mapreduce.Job: Counters: 53
File System Counters
FILE: Number of bytes read=842
FILE: Number of bytes written=1168035
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2310
HDFS: Number of bytes written=104857677
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=12
Job Counters
Failed map tasks=1
Killed map tasks=8
Launched map tasks=19
Launched reduce tasks=1
Other local map tasks=1
Data-local map tasks=9
Rack-local map tasks=9
Total time spent by all maps in occupied slots (ms)=1501978
Total time spent by all reduces in occupied slots (ms)=172661
Total time spent by all map tasks (ms)=1501978
Total time spent by all reduce tasks (ms)=172661
Total vcore-seconds taken by all map tasks=1501978
Total vcore-seconds taken by all reduce tasks=172661
Total megabyte-seconds taken by all map tasks=1538025472
Total megabyte-seconds taken by all reduce tasks=176804864
Map-Reduce Framework
Map input records=10
Map output records=50
Map output bytes=736
Map output materialized bytes=896
Input split bytes=1190
Combine input records=0
Combine output records=0
Reduce input groups=5
Reduce shuffle bytes=896
Reduce input records=50
Reduce output records=5
Spilled Records=100
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=8949
CPU time spent (ms)=24290
Physical memory (bytes) snapshot=2860949504
Virtual memory (bytes) snapshot=22915002368
Total committed heap usage (bytes)=2033188864
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1120
File Output Format Counters
Bytes Written=77
15/02/03 15:41:47 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
15/02/03 15:41:47 INFO fs.TestDFSIO: Date & time: Tue Feb 03 15:41:47 CST 2015
15/02/03 15:41:47 INFO fs.TestDFSIO: Number of files: 10
15/02/03 15:41:47 INFO fs.TestDFSIO: Total MBytes processed: 100.0
15/02/03 15:41:47 INFO fs.TestDFSIO: Throughput mb/sec: 0.4277928455924503
15/02/03 15:41:47 INFO fs.TestDFSIO: Average IO rate mb/sec: 2.8419787883758545
15/02/03 15:41:47 INFO fs.TestDFSIO: IO rate std deviation: 3.5636629341100754
15/02/03 15:41:47 INFO fs.TestDFSIO: Test exec time sec: 197.297
15/02/03 15:41:47 INFO fs.TestDFSIO:
[hadoop@tong1 hadoop-2.6.0]$ cat TestDFSIO_results.log    --view the write results
----- TestDFSIO ----- : write
Date & time: Tue Feb 03 15:41:47 CST 2015
Number of files: 10
Total MBytes processed: 100.0
Throughput mb/sec: 0.4277928455924503
Average IO rate mb/sec: 2.8419787883758545
IO rate std deviation: 3.5636629341100754
Test exec time sec: 197.297
[hadoop@tong1 hadoop-2.6.0]$
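Note that the report above gives two different speed figures: "Throughput mb/sec" and "Average IO rate mb/sec". As I understand TestDFSIO 1.7's aggregation, throughput is total MB divided by the sum of all per-file task times, while the average IO rate is the arithmetic mean of each file's own MB/s, so the two diverge when task times are skewed (as here, where one slow task drags total time out). The sketch below illustrates this with made-up per-file numbers (size in MB, seconds per file); it is not output from the run above:

```shell
# Hypothetical per-file results: three 10 MB files, one of them very slow.
cat > /tmp/perfile.txt <<'EOF'
10 5
10 5
10 50
EOF
# Assumed aggregation (matching TestDFSIO 1.7 as I read it):
#   throughput = total MB / total seconds
#   avg_rate   = mean of each file's individual MB/s
awk '{ mb+=$1; sec+=$2; rate+=$1/$2; n++ }
     END { printf "throughput=%.3f avg_rate=%.3f\n", mb/sec, rate/n }' /tmp/perfile.txt
# prints: throughput=0.500 avg_rate=1.400
```

The slow file pulls the aggregate throughput down to 0.5 MB/s even though the mean per-file rate is 1.4 MB/s, mirroring the gap between 0.43 and 2.84 in the real write run.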
3. Test Hadoop read speed
Read the 10 files of 10 MB each back from the HDFS file system.
[hadoop@tong1 hadoop-2.6.0]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 10MB
15/02/03 15:51:53 INFO fs.TestDFSIO: TestDFSIO.1.7
15/02/03 15:51:53 INFO fs.TestDFSIO: nrFiles = 10
15/02/03 15:51:53 INFO fs.TestDFSIO: nrBytes (MB) = 10.0
15/02/03 15:51:53 INFO fs.TestDFSIO: bufferSize = 1000000
15/02/03 15:51:53 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
15/02/03 15:51:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/03 15:51:54 INFO fs.TestDFSIO: creating control file: 10485760 bytes, 10 files
15/02/03 15:51:56 INFO fs.TestDFSIO: created control files for: 10 files
15/02/03 15:51:56 INFO client.RMProxy: Connecting to ResourceManager at tong1/192.168.1.247:8032
15/02/03 15:51:56 INFO client.RMProxy: Connecting to ResourceManager at tong1/192.168.1.247:8032
15/02/03 15:51:57 INFO mapred.FileInputFormat: Total input paths to process : 10
15/02/03 15:51:57 INFO mapreduce.JobSubmitter: number of splits:10
15/02/03 15:51:57 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1422946799080_0005
15/02/03 15:51:57 INFO impl.YarnClientImpl: Submitted application application_1422946799080_0005
15/02/03 15:51:58 INFO mapreduce.Job: The url to track the job:
15/02/03 15:51:58 INFO mapreduce.Job: Running job: job_1422946799080_0005
15/02/03 15:52:10 INFO mapreduce.Job: Job job_1422946799080_0005 running in uber mode : false
15/02/03 15:52:10 INFO mapreduce.Job: map 0% reduce 0%
15/02/03 15:52:21 INFO mapreduce.Job: map 10% reduce 0%
15/02/03 15:52:27 INFO mapreduce.Job: map 20% reduce 0%
15/02/03 15:52:29 INFO mapreduce.Job: map 30% reduce 0%
15/02/03 15:52:30 INFO mapreduce.Job: map 40% reduce 0%
15/02/03 15:52:32 INFO mapreduce.Job: map 50% reduce 0%
15/02/03 15:52:38 INFO mapreduce.Job: map 50% reduce 17%
15/02/03 15:52:39 INFO mapreduce.Job: map 57% reduce 17%
15/02/03 15:52:40 INFO mapreduce.Job: map 63% reduce 17%
15/02/03 15:52:41 INFO mapreduce.Job: map 70% reduce 17%
15/02/03 15:52:42 INFO mapreduce.Job: map 73% reduce 17%
15/02/03 15:52:43 INFO mapreduce.Job: map 87% reduce 17%
15/02/03 15:52:44 INFO mapreduce.Job: map 93% reduce 17%
15/02/03 15:52:45 INFO mapreduce.Job: map 100% reduce 17%
15/02/03 15:52:51 INFO mapreduce.Job: map 100% reduce 100%
15/02/03 15:52:57 INFO mapreduce.Job: Job job_1422946799080_0005 completed successfully
15/02/03 15:52:57 INFO mapreduce.Job: Counters: 50
File System Counters
FILE: Number of bytes read=816
FILE: Number of bytes written=1167961
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=94374150
HDFS: Number of bytes written=64
HDFS: Number of read operations=53
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=7
Rack-local map tasks=4
Total time spent by all maps in occupied slots (ms)=236272
Total time spent by all reduces in occupied slots (ms)=27137
Total time spent by all map tasks (ms)=236272
Total time spent by all reduce tasks (ms)=27137
Total vcore-seconds taken by all map tasks=236272
Total vcore-seconds taken by all reduce tasks=27137
Total megabyte-seconds taken by all map tasks=241942528
Total megabyte-seconds taken by all reduce tasks=27788288
Map-Reduce Framework
Map input records=10
Map output records=50
Map output bytes=710
Map output materialized bytes=870
Input split bytes=1190
Combine input records=0
Combine output records=0
Reduce input groups=5
Reduce shuffle bytes=870
Reduce input records=50
Reduce output records=5
Spilled Records=100
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=2853
CPU time spent (ms)=11890
Physical memory (bytes) snapshot=2568495104
Virtual memory (bytes) snapshot=22888431616
Total committed heap usage (bytes)=2021654528
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1120
File Output Format Counters
Bytes Written=64
15/02/03 15:52:57 INFO fs.TestDFSIO: ----- TestDFSIO ----- : read
15/02/03 15:52:57 INFO fs.TestDFSIO: Date & time: Tue Feb 03 15:52:57 CST 2015
15/02/03 15:52:57 INFO fs.TestDFSIO: Number of files: 10
15/02/03 15:52:57 INFO fs.TestDFSIO: Total MBytes processed: 90.0
15/02/03 15:52:57 INFO fs.TestDFSIO: Throughput mb/sec: 2.663509914175792
15/02/03 15:52:57 INFO fs.TestDFSIO: Average IO rate mb/sec: NaN
15/02/03 15:52:57 INFO fs.TestDFSIO: IO rate std deviation: NaN
15/02/03 15:52:57 INFO fs.TestDFSIO: Test exec time sec: 61.378
15/02/03 15:52:57 INFO fs.TestDFSIO:
[hadoop@tong1 hadoop-2.6.0]$ cat TestDFSIO_results.log    --view the results
----- TestDFSIO ----- : write
Date & time: Tue Feb 03 15:41:47 CST 2015
Number of files: 10
Total MBytes processed: 100.0
Throughput mb/sec: 0.4277928455924503
Average IO rate mb/sec: 2.8419787883758545
IO rate std deviation: 3.5636629341100754
Test exec time sec: 197.297
----- TestDFSIO ----- : read
Date & time: Tue Feb 03 15:52:57 CST 2015
Number of files: 10
Total MBytes processed: 90.0
Throughput mb/sec: 2.663509914175792
Average IO rate mb/sec: NaN
IO rate std deviation: NaN
Test exec time sec: 61.378
[hadoop@tong1 hadoop-2.6.0]$
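Since TestDFSIO_results.log accumulates one section per run, a quick way to compare runs is to pull out just the mode and throughput of each section. The awk one-liner below is my own convenience pattern, not part of the benchmark; the sample file reproduces the relevant lines of the log shown above:

```shell
# Sample in the same format as TestDFSIO_results.log (values from the runs above).
cat > /tmp/TestDFSIO_results.log <<'EOF'
----- TestDFSIO ----- : write
           Throughput mb/sec: 0.4277928455924503
----- TestDFSIO ----- : read
           Throughput mb/sec: 2.663509914175792
EOF
# Remember the current section's mode, then print it with each throughput value.
awk -F': ' '/----- TestDFSIO/ { mode=$2 } /Throughput/ { print mode, $2 }' /tmp/TestDFSIO_results.log
# prints:
#   write 0.4277928455924503
#   read 2.663509914175792
```

Run against the real log, this works the same way for every write/read/append section appended over time. The NaN values in the read run's "Average IO rate" suggest the per-file rate statistics could not be aggregated cleanly (note the read run also processed only 90 of the 100 MB), so the throughput line is the more trustworthy figure there.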
4. Delete the temporary test files
[hadoop@tong1 hadoop-2.6.0]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -clean
15/02/03 15:55:19 INFO fs.TestDFSIO: TestDFSIO.1.7
15/02/03 15:55:19 INFO fs.TestDFSIO: nrFiles = 1
15/02/03 15:55:19 INFO fs.TestDFSIO: nrBytes (MB) = 1.0
15/02/03 15:55:19 INFO fs.TestDFSIO: bufferSize = 1000000
15/02/03 15:55:19 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
15/02/03 15:55:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/03 15:55:20 INFO fs.TestDFSIO: Cleaning up test files
[hadoop@tong1 hadoop-2.6.0]$
From the "ITPUB blog". Link: http://blog.itpub.net/25854343/viewspace-1425183/. If reposting, please credit the source; otherwise legal liability may be pursued.