LINUX.ORG.RU

posixaio vs libaio performance


Good day.

I need to compare I/O performance on servers running AIX and Linux.

Out of habit I went with fio. There is no libaio on AIX, so on both servers I ran fio with ioengine=posixaio, and got completely indecent numbers, around 200-300 IOPS, which is simply laughable for the array in question.

On Linux I then reran fio with ioengine=libaio and got about 1000 IOPS, which is closer to what the hardware can actually do.

Question: where does this difference come from? Is the problem in posixaio itself, in fio's inability to use it properly, or in something important that I overlooked?

AIX

# cat read.ini
[readtest]
blocksize=4k
filename=/dev/fslv01
rw=randread
direct=1
buffered=0
ioengine=posixaio
iodepth=8
invalidate=0

# fio --readonly read.ini
readtest: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=8
fio-2.0.15
Starting 1 process
^Cbs: 1 (f=1): [r] [11.1% done] [1630K/0K/0K /s] [407 /0 /0  iops] [eta 47m:12s]
fio: terminating on signal 2

readtest: (groupid=0, jobs=1): err= 0: pid=20840660: Tue Aug  5 15:11:19 2014
  read : io=464360KB, bw=1320.8KB/s, iops=330 , runt=351601msec
    slat (usec): min=5 , max=43955 , avg=55.12, stdev=144.51
    clat (usec): min=12 , max=293859 , avg=24152.43, stdev=9755.24
     lat (usec): min=739 , max=293881 , avg=24207.55, stdev=9756.42
    clat percentiles (msec):
     |  1.00th=[    7],  5.00th=[   11], 10.00th=[   14], 20.00th=[   17],
     | 30.00th=[   19], 40.00th=[   21], 50.00th=[   24], 60.00th=[   26],
     | 70.00th=[   29], 80.00th=[   32], 90.00th=[   37], 95.00th=[   42],
     | 99.00th=[   52], 99.50th=[   57], 99.90th=[   70], 99.95th=[   82],
     | 99.99th=[  111]
    bw (KB/s)  : min=  670, max= 1784, per=100.00%, avg=1321.51, stdev=160.82
    lat (usec) : 20=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.04%, 4=0.21%, 10=3.91%, 20=31.78%, 50=62.78%
    lat (msec) : 100=1.26%, 250=0.01%, 500=0.01%
  cpu          : usr=0.55%, sys=0.66%, ctx=0, majf=214256, minf=0
  IO depths    : 1=0.1%, 2=0.1%, 4=6.9%, 8=93.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=116090/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=464360KB, aggrb=1320KB/s, minb=1320KB/s, maxb=1320KB/s, mint=351601msec, maxt=351601msec

Linux, ioengine=posixaio

# cat read.ini
[readtest]
blocksize=4k
filename=/dev/sddlmbk1
rw=randread
direct=1
buffered=0
ioengine=posixaio  
iodepth=8

# fio --readonly read.ini
readtest: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=8
fio-2.0.15
Starting 1 process
Jobs: 1 (f=1): [r] [0.9% done] [460K/0K/0K /s] [115 /0 /0  iops] [eta 10h:48m:37s]
fio: terminating on signal 2
Jobs: 1 (f=1): [r] [0.9% done] [459K/0K/0K /s] [114 /0 /0  iops] [eta 10h:48m:51s]
readtest: (groupid=0, jobs=1): err= 0: pid=29405: Tue Aug  5 15:11:20 2014
  read : io=187944KB, bw=547204 B/s, iops=133 , runt=351705msec
    slat (usec): min=0 , max=109 , avg= 1.32, stdev= 0.98
    clat (msec): min=6 , max=321 , avg=59.88, stdev=10.42
     lat (msec): min=6 , max=321 , avg=59.88, stdev=10.42
    clat percentiles (msec):
     |  1.00th=[   42],  5.00th=[   46], 10.00th=[   49], 20.00th=[   52],
     | 30.00th=[   55], 40.00th=[   58], 50.00th=[   60], 60.00th=[   62],
     | 70.00th=[   64], 80.00th=[   68], 90.00th=[   72], 95.00th=[   77],
     | 99.00th=[   92], 99.50th=[   99], 99.90th=[  117], 99.95th=[  125],
     | 99.99th=[  318]
    bw (KB/s)  : min=  356, max=  623, per=100.00%, avg=534.44, stdev=33.12
    lat (msec) : 10=0.01%, 20=0.01%, 50=13.35%, 100=86.20%, 250=0.43%
    lat (msec) : 500=0.02%
  cpu          : usr=0.10%, sys=0.42%, ctx=155507, majf=0, minf=48
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=46986/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=187944KB, aggrb=534KB/s, minb=534KB/s, maxb=534KB/s, mint=351705msec, maxt=351705msec

Disk stats (read/write):
  sddlmbb1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%

Linux, ioengine=libaio

# cat read.ini
[readtest]
blocksize=4k
filename=/dev/sddlmbb1
rw=randread
direct=1
buffered=0
ioengine=libaio
iodepth=8

# fio --readonly read.ini
readtest: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8
fio-2.0.15
Starting 1 process  
Jobs: 1 (f=1): [r] [1.5% done] [4578K/0K/0K /s] [1144 /0 /0  iops] [eta 01h:22m:22s]
fio: terminating on signal 2

readtest: (groupid=0, jobs=1): err= 0: pid=32737: Tue Aug  5 15:14:08 2014
  read : io=335592KB, bw=4414.4KB/s, iops=1103 , runt= 76023msec
    slat (usec): min=8 , max=305 , avg=17.22, stdev= 4.91
    clat (usec): min=17 , max=90397 , avg=7226.39, stdev=3555.25
     lat (usec): min=96 , max=90412 , avg=7244.49, stdev=3555.26
    clat percentiles (usec):
     |  1.00th=[  104],  5.00th=[ 1832], 10.00th=[ 3408], 20.00th=[ 4768],
     | 30.00th=[ 5664], 40.00th=[ 6368], 50.00th=[ 7072], 60.00th=[ 7776],
     | 70.00th=[ 8640], 80.00th=[ 9536], 90.00th=[10816], 95.00th=[12352],
     | 99.00th=[17792], 99.50th=[20608], 99.90th=[36608], 99.95th=[42752],
     | 99.99th=[52480]
    bw (KB/s)  : min= 4023, max= 4768, per=100.00%, avg=4418.39, stdev=129.60
    lat (usec) : 20=0.01%, 100=0.23%, 250=2.03%, 500=1.57%, 750=0.10%
    lat (usec) : 1000=0.10%
    lat (msec) : 2=1.31%, 4=7.98%, 10=71.68%, 20=14.43%, 50=0.55%
    lat (msec) : 100=0.02%
  cpu          : usr=0.59%, sys=2.42%, ctx=81962, majf=0, minf=26
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=83898/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=335592KB, aggrb=4414KB/s, minb=4414KB/s, maxb=4414KB/s, mint=76023msec, maxt=76023msec

Disk stats (read/write):
  sddlmbk1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%


Interested too, subscribing; I'm about to try to hammer a new array myself soon. I had the same kind of trouble with fio: I could only squeeze out 7k IOPS, while syncvg -P 32 drove the array up to 130k, heavily loading the second array in the process.

user_undefined ()
You cannot add comments to this topic. The topic has been moved to the archive.