Preface: I had previously benchmarked server disk IOPS with sysbench, but without specifying a block size, so I recently turned to the fio tool to test disk IOPS properly.

1. Installing fio

Download the latest fio from the official site, then build and install from source.
Extract:

  1. tar -zxvf fio-2.1.10.tar.gz

Build and install:

  1. make
  2. make install

2. Testing

Key fio parameters:

  1. filename=/dev/emcpowerb — the target can be a file on a filesystem or a raw device, e.g. -filename=/dev/sda2 or -filename=/dev/sdb
  2. direct=1 — bypass the machine's own buffering (page cache) during the test, so the results are closer to the real device
  3. rw=randread — random read I/O test
  4. rw=randwrite — random write I/O test
  5. rw=randrw — mixed random read/write I/O test
  6. rw=read — sequential read I/O test
  7. rw=write — sequential write I/O test
  8. rw=rw — mixed sequential read/write I/O test
  9. bs=4k — each I/O uses a 4 KB block
  10. bsrange=512-2048 — like bs, but specifies a range of block sizes
  11. size=5g — the test file for this run is 5 GB, exercised in 4 KB I/Os
  12. numjobs=30 — run 30 test threads
  13. runtime=1000 — run for 1000 seconds; if omitted, fio keeps going until the whole 5 GB file has been written in 4 KB I/Os
  14. ioengine=psync — use the psync I/O engine; to use the libaio engine, yum install libaio-devel first
  15. rwmixwrite=30 — in mixed read/write mode, writes account for 30%
  16. group_reporting — aggregate each thread's statistics into a single report
  17. In addition:
  18. lockmem=1g — use only 1 GB of memory for the test
  19. zero_buffers — zero-initialize the I/O buffers
  20. nrfiles=8 — number of files generated per job
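Instead of passing everything as CLI flags, the options above can also be collected into a fio job file. The sketch below assumes a hypothetical target path (/tmp/fio-testfile) and job-file name (mytest.fio); the parameter values are the ones described in the list above.

```shell
# Write a hypothetical job file gathering the parameters explained above.
# The path /tmp/fio-testfile and the name mytest.fio are assumptions.
cat > mytest.fio <<'EOF'
[mytest]
filename=/tmp/fio-testfile
direct=1
rw=randrw
rwmixwrite=30
bs=4k
size=5g
numjobs=30
runtime=1000
ioengine=psync
group_reporting
EOF
echo "job file written:"
cat mytest.fio
```

The job would then be started with `fio mytest.fio`, which is equivalent to supplying the same options on the command line.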

An actual run:

  1. [root@Mariadb-04 fio-2.1.10]# /usr/local/bin/fio -filename=/storage/test_randread -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=2G -numjobs=30 -runtime=120 -group_reporting -name=mytest

3. Interpreting the results

  1. ...
  2. fio-2.1.10
  3. Starting 30 threads
  4. mytest: Laying out IO file(s) (1 file(s) / 2048MB)
  5. Jobs: 30 (f=30): [mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm] [100.0% done] [96239KB/42997KB/0KB /s] [6014/2687/0 iops] [eta 00m:00s]
  6. mytest: (groupid=0, jobs=30): err= 0: pid=32902: Wed Apr 25 11:01:28 2018
  7. read : io=3498.2MB, bw=29841KB/s, iops=1865, runt=120039msec
  8. clat (usec): min=95, max=7180.4K, avg=13934.48, stdev=106872.83
  9. lat (usec): min=95, max=7180.4K, avg=13934.66, stdev=106872.84
  10. clat percentiles (usec):
  11. | 1.00th=[ 115], 5.00th=[ 133], 10.00th=[ 161], 20.00th=[ 235],
  12. | 30.00th=[ 318], 40.00th=[ 652], 50.00th=[ 5024], 60.00th=[ 8640],
  13. | 70.00th=[12864], 80.00th=[19840], 90.00th=[32384], 95.00th=[46336],
  14. | 99.00th=[84480], 99.50th=[107008], 99.90th=[209920], 99.95th=[1253376],
  15. | 99.99th=[5210112]
  16. bw (KB /s): min= 2, max= 5447, per=4.09%, avg=1221.31, stdev=688.71
  17. write: io=1513.9MB, bw=12914KB/s, iops=807, runt=120039msec
  18. clat (usec): min=179, max=7160.4K, avg=4952.37, stdev=109858.64
  19. lat (usec): min=180, max=7160.4K, avg=4954.63, stdev=109858.67
  20. clat percentiles (usec):
  21. | 1.00th=[ 286], 5.00th=[ 326], 10.00th=[ 358], 20.00th=[ 406],
  22. | 30.00th=[ 446], 40.00th=[ 494], 50.00th=[ 564], 60.00th=[ 700],
  23. | 70.00th=[ 1192], 80.00th=[ 4896], 90.00th=[ 8512], 95.00th=[10048],
  24. | 99.00th=[16064], 99.50th=[18560], 99.90th=[44288], 99.95th=[1253376],
  25. | 99.99th=[7176192]
  26. bw (KB /s): min= 2, max= 2821, per=4.14%, avg=534.19, stdev=334.76
  27. lat (usec) : 100=0.01%, 250=16.05%, 500=21.96%, 750=9.53%, 1000=2.97%
  28. lat (msec) : 2=3.48%, 4=3.07%, 10=16.50%, 20=12.44%, 50=11.02%
  29. lat (msec) : 100=2.51%, 250=0.37%, 500=0.02%, 1000=0.01%, 2000=0.01%
  30. lat (msec) : >=2000=0.05%
  31. cpu : usr=0.04%, sys=0.25%, ctx=325441, majf=0, minf=6
  32. IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  33. submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  34. complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  35. issued : total=r=223880/w=96886/d=0, short=r=0/w=0/d=0
  36. latency : target=0, window=0, percentile=100.00%, depth=1
  37. Run status group 0 (all jobs):
  38. READ: io=3498.2MB, aggrb=29840KB/s, minb=29840KB/s, maxb=29840KB/s, mint=120039msec, maxt=120039msec
  39. WRITE: io=1513.9MB, aggrb=12913KB/s, minb=12913KB/s, maxb=12913KB/s, mint=120039msec, maxt=120039msec
  40. Disk stats (read/write):
  41. dm-1: ios=231005/101135, merge=0/0, ticks=3413245/925615, in_queue=4340282, util=100.00%, aggrios=231734/101432, aggrmerge=32/11, aggrticks=3416960/876818, aggrin_queue=4293134, aggrutil=100.00%
  42. dm-0: ios=231734/101432, merge=32/11, ticks=3416960/876818, in_queue=4293134, util=100.00%, aggrios=231734/101432, aggrmerge=0/0, aggrticks=2391410/76306, aggrin_queue=2467258, aggrutil=100.00%
  43. sdb: ios=231734/101432, merge=0/0, ticks=2391410/76306, in_queue=2467258, util=100.00%

Here we only need to pay attention to read iops=1865 and write iops=807.
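As a quick sanity check (not part of the original run), IOPS multiplied by the block size should roughly match the bandwidth fio reports: with bs=16k, 1865 read IOPS works out to about 29840 KB/s, in line with the reported bw=29841KB/s, and likewise for the write side.

```shell
# Cross-check the fio report: IOPS * block size ≈ bandwidth.
bs_kb=16          # block size used in the run above (-bs=16k)
read_iops=1865    # from "read : ... iops=1865"
write_iops=807    # from "write: ... iops=807"
echo "read  ~ $((read_iops * bs_kb)) KB/s (fio reported 29841 KB/s)"
echo "write ~ $((write_iops * bs_kb)) KB/s (fio reported 12914 KB/s)"
```

The small gaps (29840 vs 29841, 12912 vs 12914) come from fio rounding the IOPS figure; the agreement confirms the two numbers are consistent.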