Why am I seeing high read iops?



#1 treesloth (Member, 1 post)

Posted 06 December 2012 - 01:36 PM

Greetings to all. Please pardon me if this has been answered somewhere... I wasn't able to come up with a forum search that would turn it up.

I'm testing a storage system under various configurations and am a little surprised by the results. We're using multiple VMs, each running fio against an EMC storage array. One of the many tests is a repetitive, comparatively small write test: a quick little script calls the same fio command a number of times in succession. Here's an example:

for num in $(seq 20)   # run the same fio job 20 times back to back
do
        fio --name=randtest --ioengine=libaio --iodepth=4 --rw=randwrite --bs=8k --direct=0 --size=128m --numjobs=2 --output=/test/fio_output
done

It's that simple. Now, the strange thing is that, according to Unisphere, reads are exceeding writes! On other, non-looping tests, writes were very high and reads very low, just as expected. Where is the high read level coming from?

#2 Stoyan Varlyakov (Member, 48 posts)

Posted 14 December 2012 - 05:46 AM



In a word: caching.

Such a test writes the data first, and that same data then gets read back, so you are running 100% in cached mode, which is why it is blazing fast.

For accurate testing, it is commonly accepted to use a data set roughly twice the size of all the caches combined.

Say you have a client/server setup: the client has 1 GB of RAM, the server has 2 GB of RAM, the controller has 1 GB of cache, and there are 12 drives with 64 MB of cache each... do the math. It's a lot.
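As a sketch of that math (the sizes are the hypothetical ones from the example above, expressed in MB):

```shell
# Hypothetical cache inventory from the example above, all sizes in MB
client_ram=1024            # client RAM
server_ram=2048            # server RAM
controller_cache=1024      # controller cache
drive_cache=$((12 * 64))   # 12 drives with 64 MB cache each = 768 MB

total=$((client_ram + server_ram + controller_cache + drive_cache))
echo "Combined caches:     ${total} MB"        # 4864 MB
echo "Suggested test size: $((total * 2)) MB"  # 9728 MB, i.e. roughly --size=10g
```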

If you have SSDs, use a multiplier of 3, or else the SSD will flush a lot of the operations to flash and you will be running cached again...
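Putting that together, one way to take the guest page cache out of the picture is direct I/O. A sketch of the original loop with two flags changed (`--direct=1` to open the file with O_DIRECT, and a larger, assumed `--size` chosen to exceed the combined caches; every other flag is as in the original command):

```shell
# Same loop as the original post, but with buffered I/O disabled so the
# guest page cache cannot absorb the writes. The size is an assumption:
#   --direct=1   bypass the page cache via O_DIRECT
#   --size=10g   assumed; large enough to exceed the combined caches
for num in $(seq 20)
do
    fio --name=randtest --ioengine=libaio --iodepth=4 --rw=randwrite \
        --bs=8k --direct=1 --size=10g --numjobs=2 --output=/test/fio_output
done
```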
What you see above is my personal opinion. Don't take it as the holy bible and the one-and-only truth :)


