treesloth

Why am I seeing high read iops?

2 posts in this topic

Greetings to all. Please pardon me if this has been answered somewhere... I wasn't able to come up with a forum search that would reveal it.

I'm testing a storage system under various configurations and am a little surprised by the results. We're using multiple VMs, each running fio against an EMC storage system. One of many tests is a repetitive, comparatively small write test. Basically, a quick little script calls the same fio command a number of times in succession. Here's an example:

# Runs the same buffered (--direct=0) 8k random-write job 20 times.
for num in `seq 20`
do
    fio --name=randtest --ioengine=libaio --iodepth=4 --rw=randwrite --bs=8k --direct=0 --size=128m --numjobs=2 --output=/test/fio_output
done

It's that simple. Now, the strange thing is that, according to Unisphere, read is exceeding write! On other, non-looping tests, the write was very high, read was very low, just as expected. Where is the high read level coming from?



In a word: caching.

Such a test writes data first, and that same data then gets read back, so you are running 100% in cached mode, and that is why it is blazing fast.

For accurate testing, it is commonly accepted to use a data set in the area of (sum of all caches) * 2.

If I have a client/server model where the client has 1 GB RAM, the server has 2 GB RAM, plus a 1 GB controller cache and 12 drives with 64 MB of cache each... do the math. It's a lot.
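Doing that math for the hypothetical setup above (these numbers are just the example figures from this post, not any particular array):

```shell
# All values in MiB. Client RAM + server RAM + controller cache + 12 x 64 MiB drive caches.
client=1024
server=2048
controller=1024
drives=$((12 * 64))
total=$((client + server + controller + drives))
echo "total cache:         ${total} MiB"        # 4864 MiB, ~4.75 GiB
echo "suggested test size: $((total * 2)) MiB"  # 9728 MiB, ~9.5 GiB
```

So the 128m data set in the original loop is tiny next to the roughly 9.5 GiB you would want before the caches stop hiding the real device behavior.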

If you have SSDs, use a multiplier of 3, or else the SSD will flush a lot of the operations to flash and you will be running cached again...
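Putting that together, a version of the original test with the caching largely taken out of play might look like this. This is a sketch, not a tested command for this array: --direct=1 bypasses the page cache in the VM, and the 10g size is only an illustrative value of roughly twice the total cache computed above.

```shell
# Same random-write job, but with O_DIRECT and a working set sized past the caches.
fio --name=randtest --ioengine=libaio --iodepth=4 --rw=randwrite \
    --bs=8k --direct=1 --size=10g --numjobs=2 --output=/test/fio_output
```

With direct I/O and a working set this size, the read/write mix Unisphere reports should line up much more closely with what fio is actually issuing.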

