
SSD benchmark (compare results)


Applications have started to lag, and I suspect SSD problems. I want to figure out whether the SSD really is the problem.

Can you share benchmark results for your SSDs (plus the commands you used to run those benchmarks)?

Drive: SanDisk SD8SN8U-256G-1006.

Harliff ★★★★★
()

You can basically tell without any comparison. Post the numbers and it will be clear. Also, check the drive's S.M.A.R.T. data (via GNOME Disks, or smartctl -a /dev/sd*). It also wouldn't hurt to watch the iotop readings under load.
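
For example, a quick first look might be (a minimal sketch; /dev/sda is a placeholder for the actual device, and SMART attribute names vary by vendor):

$ sudo smartctl -a /dev/sda | grep -Ei 'realloc|wear|media|error|used'
$ sudo iotop -o     # show only processes that are actually doing I/O

Run the slow workload in another terminal while iotop is open to see what is generating the load.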

Jefail ★★★★
()

For simplicity, I looked at the dstat output while running dd if=/dev/zero of=/mnt/test/test.img oflag=direct bs=1M (the file lands on XFS; see the bounded variant sketched after the output):

# dstat -c -d -D sda --io
----total-cpu-usage---- --dsk/sda-- ---io/sda--
usr sys idl wai hiq siq| read  writ| read  writ

  7  11  49  33   0   1|4096B  197M|1.00   395 
  8   6  46  40   0   0|4096B   99M|1.00   205 
  7   8  39  46   0   0|4096B  131M|1.00   273 
  6   8  46  40   0   0|4096B  104M|1.00   209 
  6   8  50  35   0   1|4096B  149M|1.00   320 
 10   6  41  43   0   1|4096B  112M|1.00   224 
  9   7  44  40   0   0|4096B  113M|1.00   225 
 12   6  40  41   0   1|4096B   85M|1.00   172 
  5   9  45  40   0   1|4096B  174M|1.00   358 
  6  11  35  48   0   0|4096B  191M|1.00   415 
  5  11  38  46   0   1|4096B  198M|1.00   396 
  7  10  49  34   0   1|4096B  198M|1.00   396 
  7  10  50  33   0   0|4096B  195M|1.00   396 
  8  11  46  34   0   1|4096B  196M|1.00   394
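
(Side note: with no count= argument, the dd above keeps writing until it is interrupted or the filesystem fills up. A bounded variant makes the runs repeatable; the 4 GiB size here is an arbitrary example:)

$ dd if=/dev/zero of=/mnt/test/test.img oflag=direct bs=1M count=4096 status=progress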

... and the same while running iozone -l 8 -i 0 -i 1 -i 2 -e -+n -r 4K -s 1G -O (multiple threads):

# dstat -c -d -D sda --io
----total-cpu-usage---- --dsk/sda-- ---io/sda--
usr sys idl wai hiq siq| read  writ| read  writ
 15   4  75   6   0   0|  26k  573k|1.33  13.6 
  7   1   0  92   0   0|4096B 4048k|1.00   987 
  7   4   1  89   0   0|4096B 4060k|1.00   975 
  7   4   1  88   0   0|4096B 3916k|1.00   979 
  7   2   1  90   0   0|4096B 3940k|1.00   989 
  7   3   2  89   0   0|4096B 3904k|1.00   976 
  7   3   1  89   0   0|4096B 3992k|1.00  1006 
  6   3   0  91   0   0|4096B 4052k|1.00   972 
  8   3   1  89   0   0|4096B 4172k|1.00   906 
  7   3   1  90   0   0|4096B 4164k|1.00   954 
  5   2   2  92   0   0|4096B 6932k|1.00   845 
  6   2   0  92   0   1|4096B 4068k|1.00  1026 
  6   3   0  91   0   0|4096B 3868k|1.00   967 
  6   2   0  92   0   0|4096B 4740k|1.00   947 
  6   2   1  90   0   0|4096B 4348k|1.00   943 
  5   3   2  90   0   0|4096B 3960k|1.00   954 
 15   2   1  81   0   0|4096B 4236k|1.00   990 
  6   2   1  91   0   0|4096B 3952k|1.00   952 
  7   3   1  89   0   0|4096B 4156k|1.00   962 
  6   3   1  91   0   0|4096B 4128k|1.00   965 
 10   4   3  83   0   0|4096B 3860k|1.00   965 
 10   3   1  87   0   0|4096B 4416k|1.00   998 
  7   1   2  90   0   0|4096B 5168k|1.00   954 
  8   3   2  87   0   0|4096B 3924k|1.00   981 

It seems to me that 5 MB/s of writes is too low for an SSD, even at 1000 IOPS. (The two figures are at least mutually consistent: 1000 IOPS × 4 KiB ≈ 4 MB/s, so the real question is why the drive sustains only about 1000 random 4K write IOPS.)
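
A more targeted check would be a pure 4K random-write run with fio (a minimal sketch, assuming libaio, QD32, direct I/O, and that /mnt/test sits on the SSD in question):

$ fio --name=randwrite-test --directory=/mnt/test --rw=randwrite --bs=4k --size=1G --ioengine=libaio --iodepth=32 --direct=1 --numjobs=1 --group_reporting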

Harliff ★★★★★
() topic author
Last edited: Harliff (total edits: 1)

There used to be serverbear, with a pile of unixbench results from all sorts of VPS and dedicated servers.

It has since shut down, but https://serverscope.io/ has appeared, though there isn't much data there yet.

vren
()
In reply to: comment by anonymous

A cheapskate looking for excuses not to show off his SmartBuy.

anonymous
()

I took the fio settings from this thread - the test pattern some naive guy on Habr came up with, but that's still better than sitting around inventing tests of my own.
Both drives produce roughly the same result: 200 MB/s and 50k IOPS.

~$ fio --directory=/media/user/bff53b1c-ed21-4c97-abc7-14c3a9b46267/ --name=test --rw=write --bs=4k --size=10G --numjobs=1 --group_reporting --ioengine libaio --iodepth=32 --buffered=0 --direct=1
test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.16
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/199.5MB/0KB /s] [0/51.7K/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=4693: Fri Dec 15 20:43:36 2017
  write: io=10240MB, bw=199881KB/s, iops=49970, runt= 52460msec
    slat (usec): min=4, max=885, avg=10.34, stdev= 3.90
    clat (usec): min=71, max=74297, avg=629.03, stdev=624.31
     lat (usec): min=79, max=74303, avg=639.37, stdev=624.24
    clat percentiles (usec):
     |  1.00th=[  330],  5.00th=[  418], 10.00th=[  458], 20.00th=[  524],
     | 30.00th=[  564], 40.00th=[  580], 50.00th=[  604], 60.00th=[  620],
     | 70.00th=[  652], 80.00th=[  700], 90.00th=[  772], 95.00th=[  836],
     | 99.00th=[ 1368], 99.50th=[ 2024], 99.90th=[ 2736], 99.95th=[ 3088],
     | 99.99th=[ 8384]
    lat (usec) : 100=0.01%, 250=0.15%, 500=15.85%, 750=71.93%, 1000=10.13%
    lat (msec) : 2=1.41%, 4=0.53%, 10=0.01%, 20=0.01%, 50=0.01%
    lat (msec) : 100=0.01%
  cpu          : usr=10.45%, sys=53.58%, ctx=352575, majf=0, minf=12
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=2621440/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=199881KB/s, minb=199881KB/s, maxb=199881KB/s, mint=52460msec, maxt=52460msec

Disk stats (read/write):
  sda: ios=0/2616963, merge=0/833, ticks=0/1447128, in_queue=1446912, util=99.78%
~$ fio --directory=/home/user/ --name=test --rw=write --bs=4k --size=10G --numjobs=1 --group_reporting --ioengine libaio --iodepth=32 --buffered=0 --direct=1
test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.16
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/198.2MB/0KB /s] [0/50.8K/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=5002: Fri Dec 15 20:51:32 2017
  write: io=10240MB, bw=201878KB/s, iops=50469, runt= 51941msec
    slat (usec): min=4, max=3998, avg= 9.95, stdev= 4.48
    clat (usec): min=95, max=37965, avg=623.17, stdev=560.39
     lat (usec): min=104, max=37974, avg=633.12, stdev=560.45
    clat percentiles (usec):
     |  1.00th=[  390],  5.00th=[  540], 10.00th=[  564], 20.00th=[  572],
     | 30.00th=[  580], 40.00th=[  588], 50.00th=[  588], 60.00th=[  596],
     | 70.00th=[  604], 80.00th=[  612], 90.00th=[  628], 95.00th=[  652],
     | 99.00th=[ 1272], 99.50th=[ 1928], 99.90th=[ 9536], 99.95th=[17536],
     | 99.99th=[22656]
    lat (usec) : 100=0.01%, 250=0.01%, 500=3.33%, 750=94.01%, 1000=1.08%
    lat (msec) : 2=1.09%, 4=0.33%, 10=0.06%, 20=0.06%, 50=0.03%
  cpu          : usr=9.83%, sys=52.37%, ctx=847409, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=2621440/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=201878KB/s, minb=201878KB/s, maxb=201878KB/s, mint=51941msec, maxt=51941msec

Disk stats (read/write):
    dm-0: ios=0/2619022, merge=0/0, ticks=0/1536412, in_queue=1537800, util=99.54%, aggrios=0/2621202, aggrmerge=0/584, aggrticks=0/1536284, aggrin_queue=1535672, aggrutil=99.43%
  sdb: ios=0/2621202, merge=0/584, ticks=0/1536284, in_queue=1535672, util=99.43%

system-root ★★★★★
()

According to this review: http://www.storagereview.com/sandisk_x400_ssd_review

At 4K random read with 16-64 threads, the 1 TB model delivers over 80k IOPS. Even if everything is four times worse on the 256 GB model, it should still do on the order of 20k IOPS of 4K QD32 random read, not the 1k IOPS you are getting. Either your unit is garbage, or iozone is garbage.
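
To verify that figure directly, a 4K QD32 random-read run along these lines should do (a sketch only; the directory is a placeholder, and numjobs=16 matches the low end of the review's thread range):

$ fio --name=randread-test --directory=/mnt/test --rw=randread --bs=4k --size=1G --ioengine=libaio --iodepth=32 --direct=1 --numjobs=16 --group_reporting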

iliyap ★★★★★
()
$ fio --directory=/home/test/ --name=test --rw=write --bs=4k --size=10G --numjobs=1 --group_reporting --ioengine libaio --iodepth=32 --buffered=0 --direct=1
test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.2
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=194MiB/s][r=0,w=49.6k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=4576: Sat Dec 16 22:24:33 2017
  write: IOPS=63.1k, BW=247MiB/s (259MB/s)(10.0GiB/41514msec)
    slat (usec): min=3, max=1512, avg= 9.70, stdev= 3.54
    clat (usec): min=41, max=374022, avg=496.11, stdev=1125.04
     lat (usec): min=51, max=374030, avg=505.99, stdev=1125.11
    clat percentiles (usec):
     |  1.00th=[  326],  5.00th=[  465], 10.00th=[  469], 20.00th=[  478],
     | 30.00th=[  482], 40.00th=[  482], 50.00th=[  486], 60.00th=[  490],
     | 70.00th=[  494], 80.00th=[  498], 90.00th=[  506], 95.00th=[  523],
     | 99.00th=[  553], 99.50th=[  570], 99.90th=[  979], 99.95th=[ 1860],
     | 99.99th=[29492]
   bw (  KiB/s): min=197712, max=263160, per=100.00%, avg=252592.76, stdev=11997.10, samples=83
   iops        : min=49428, max=65790, avg=63148.18, stdev=2999.27, samples=83
  lat (usec)   : 50=0.01%, 100=0.77%, 250=0.09%, 500=84.55%, 750=14.45%
  lat (usec)   : 1000=0.05%
  lat (msec)   : 2=0.05%, 4=0.01%, 10=0.02%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%, 250=0.01%, 500=0.01%
  cpu          : usr=10.66%, sys=67.89%, ctx=883534, majf=0, minf=13
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,2621440,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=247MiB/s (259MB/s), 247MiB/s-247MiB/s (259MB/s-259MB/s), io=10.0GiB (10.7GB), run=41514-41514msec

Disk stats (read/write):
  sda: ios=0/2613215, merge=0/304, ticks=0/986533, in_queue=922490, util=98.60%
greenman ★★★★★
()
drsm@L7480:~$ fio --directory=/home/drsm/tmp/ --name=test --rw=write --bs=4k --size=10G --numjobs=1 --group_reporting --ioengine libaio --iodepth=32 --buffered=0 --direct=1
test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.2.10
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/195.6MB/0KB /s] [0/50.7K/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=9593: Sat Dec 16 20:38:10 2017
  write: io=10240MB, bw=196860KB/s, iops=49215, runt= 53265msec
    slat (usec): min=1, max=12520, avg=13.09, stdev=15.60
    clat (usec): min=107, max=121564, avg=636.26, stdev=976.60
     lat (usec): min=109, max=121567, avg=649.46, stdev=977.22
    clat percentiles (usec):
     |  1.00th=[  342],  5.00th=[  378], 10.00th=[  398], 20.00th=[  430],
     | 30.00th=[  466], 40.00th=[  498], 50.00th=[  540], 60.00th=[  652],
     | 70.00th=[  692], 80.00th=[  716], 90.00th=[  804], 95.00th=[ 1096],
     | 99.00th=[ 2064], 99.50th=[ 2544], 99.90th=[ 4016], 99.95th=[ 4896],
     | 99.99th=[43776]
    bw (KB  /s): min=140944, max=228064, per=100.00%, avg=196885.85, stdev=13581.52
    lat (usec) : 250=0.03%, 500=40.74%, 750=45.09%, 1000=7.94%
    lat (msec) : 2=5.08%, 4=1.02%, 10=0.08%, 20=0.01%, 50=0.01%
    lat (msec) : 100=0.01%, 250=0.01%
  cpu          : usr=7.54%, sys=67.34%, ctx=442465, majf=0, minf=12
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=2621440/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=196860KB/s, minb=196860KB/s, maxb=196860KB/s, mint=53265msec, maxt=53265msec

Disk stats (read/write):
  sda: ios=26/2615537, merge=0/2242, ticks=48/966916, in_queue=965912, util=99.31%

drsm@L7480:~$ sudo hdparm -i /dev/sda | grep Model
 Model=SanDisk X400 M.2 2280 256GB, FwRev=X4152012, SerialNo=xxx

drsm ★★
()