Disk performance testing tool: flexible I/O tester (fio)
fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user. The typical use of fio is to write a job file matching the I/O load one wants to simulate.
By comparison, dd only measures a disk's raw physical performance, and only the bandwidth of sequential reads and writes. In practice, however, the disk bottleneck is often IOPS, which dd cannot measure.
fio can spawn multiple threads as the user requires, simulating many different I/O patterns and measuring the actual load a disk carrying a file system can sustain under a realistic random read/write workload.
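As a concrete illustration, a random-read job file similar to the test run further below might look like the following. This is a hypothetical sketch written by hand, not one of the files shipped with fio, though all of the options used (rw, bs, ioengine, iodepth, size, direct) are standard fio parameters:

```ini
; random-read.fio -- hypothetical minimal job file
[global]
rw=randread        ; random reads
bs=128k            ; 128 KB block size
ioengine=libaio    ; Linux native asynchronous I/O
direct=1           ; bypass the page cache
size=512m          ; each job reads a 512 MB file

[job1]
iodepth=4          ; keep up to 4 I/Os in flight
```

Running "fio random-read.fio" would then execute the job; adding more [jobN] sections with different iodepth values reproduces the multi-job setup shown in the test output below.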
Downloads
fio download:
[http://freshmeat.net/projects/fio/]
fio man page:
[http://linux.die.net/man/1/fio]
fio article:
[http://www.linux.com/archive/feature/131063]
Building is simply a matter of running make && make install.
If you see the following error:
CC gettime.o
In file included from fio.h:23,
                 from gettime.c:8:
os/os.h:15:20: error: libaio.h: No such file or directory
In file included from gettime.c:8:
fio.h:119: error: field 'iocb' has incomplete type
make: *** [gettime.o] Error 1
install these two packages:
sudo apt-get install libaio1 libaio-dev
Testing
The source tree ships with example fio job scripts; running "fio <jobfile>" starts a test. The results below are from a random-read test of a NILFS file system inside VMware 7.0:
rex@rexvm:~/下載/fio-1.36/examples$ fio aio-read
file1: (g=0): rw=randread, bs=128K-128K/128K-128K, ioengine=libaio, iodepth=4
file2: (g=0): rw=randread, bs=128K-128K/128K-128K, ioengine=libaio, iodepth=32
file3: (g=0): rw=randread, bs=128K-128K/128K-128K, ioengine=libaio, iodepth=8
file4: (g=0): rw=randread, bs=128K-128K/128K-128K, ioengine=libaio, iodepth=16
Starting 4 processes
file1: Laying out IO file(s) (1 file(s) / 512MB)
file2: Laying out IO file(s) (1 file(s) / 512MB)
file3: Laying out IO file(s) (1 file(s) / 512MB)
file4: Laying out IO file(s) (1 file(s) / 512MB)
Jobs: 1 (f=1): [r___] [100.0% done] [86426K/0K /s] [659/0 iops] [eta 00m:00s]
file1: (groupid=0, jobs=1): err= 0: pid=8637
  read : io=524288KB, bw=24611KB/s, iops=192, runt= 21303msec
    slat (usec): min=123, max=57781, avg=3508.71, stdev=8292.79
    clat (usec): min=47, max=388155, avg=17137.44, stdev=43573.89
    bw (KB/s) : min=10069, max=75520, per=23.88%, avg=23504.13, stdev=12992.39
  cpu : usr=0.15%, sys=20.28%, ctx=672, majf=0, minf=156
  IO depths : 1=0.1%, 2=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=4096/0, short=0/0
     lat (usec): 50=2.88%, 100=24.56%, 250=1.22%, 500=0.46%, 750=4.10%
     lat (usec): 1000=9.25%
     lat (msec): 2=14.23%, 4=15.43%, 10=1.05%, 20=2.95%, 50=16.33%
     lat (msec): 100=2.95%, 250=3.61%, 500=0.95%
file2: (groupid=0, jobs=1): err= 0: pid=8638
  read : io=524288KB, bw=26222KB/s, iops=204, runt= 19994msec
    slat (usec): min=123, max=81954, avg=4452.41, stdev=9360.67
    clat (msec): min=7, max=710, avg=148.20, stdev=64.75
    bw (KB/s) : min= 168, max=34048, per=27.30%, avg=26875.55, stdev=5675.16
  cpu : usr=0.10%, sys=24.71%, ctx=678, majf=0, minf=1052
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=99.2%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued r/w: total=4096/0, short=0/0
     lat (msec): 10=0.10%, 20=0.12%, 50=0.71%, 100=9.42%, 250=83.89%
     lat (msec): 500=5.44%, 750=0.32%
file3: (groupid=0, jobs=1): err= 0: pid=8639
  read : io=524288KB, bw=26203KB/s, iops=204, runt= 20009msec
    slat (usec): min=124, max=158833, avg=4322.76, stdev=9381.58
    clat (usec): min=47, max=457767, avg=34494.35, stdev=47974.57
    bw (KB/s) : min=13810, max=41306, per=27.00%, avg=26578.39, stdev=5382.38
  cpu : usr=0.20%, sys=23.35%, ctx=654, majf=0, minf=284
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.8%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=4096/0, short=0/0
     lat (usec): 50=0.32%, 100=2.37%, 250=0.20%, 500=0.07%, 750=1.03%
     lat (usec): 1000=2.42%
     lat (msec): 2=4.91%, 4=9.64%, 10=9.94%, 20=5.91%, 50=49.68%
     lat (msec): 100=7.32%, 250=5.08%, 500=1.12%
file4: (groupid=0, jobs=1): err= 0: pid=8640
  read : io=524288KB, bw=26301KB/s, iops=205, runt= 19934msec
    slat (usec): min=124, max=65601, avg=4483.65, stdev=9360.13
    clat (msec): min=1, max=459, avg=72.36, stdev=51.90
    bw (KB/s) : min= 1237, max=39936, per=27.08%, avg=26655.42, stdev=5750.57
  cpu : usr=0.14%, sys=24.12%, ctx=668, majf=0, minf=540
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=99.6%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=4096/0, short=0/0
     lat (msec): 2=0.02%, 4=0.68%, 10=0.78%, 20=1.17%, 50=27.51%
     lat (msec): 100=56.42%, 250=11.06%, 500=2.34%
Run status group 0 (all jobs):
   READ: io=2048MB, aggrb=98443KB/s, minb=25201KB/s, maxb=26932KB/s, mint=19934msec, maxt=21303msec
Disk stats (read/write):
  sdb: ios=16543/3236, merge=0/0, ticks=234540/23416, in_queue=257840, util=37.67%
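Two of the reported figures can be sanity-checked by hand: iops is roughly bandwidth divided by block size, and the group's aggrb is the total data read divided by the longest job runtime (maxt). A quick Python check, using values copied from the output above:

```python
# Sanity-check two figures from the fio output above.

# file1 ran at bw=24611KB/s with 128 KB blocks, so IOPS ~ bandwidth / block size.
print(int(24611 / 128))           # -> 192, matching file1's iops=192

# aggrb is total bytes read over the longest runtime (maxt=21303msec).
total_io_kb = 4 * 512 * 1024      # four jobs x 512 MB each (io=2048MB), in KB
aggrb_kb_s = total_io_kb / 21.303
print(round(aggrb_kb_s))          # ~98444, within rounding of aggrb=98443KB/s
```

The tiny discrepancy in the aggregate figure comes from fio working in milliseconds internally; the arithmetic above is only meant to show where the numbers come from.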
From the "ITPUB blog". Link: http://blog.itpub.net/25618347/viewspace-713869/. Please credit the source when reposting; unauthorized reproduction will be pursued under the law.