lecaf

RAID 5: Writes faster than Reads ?


Hi

I have 3 × 3TB WD Red drives on an LSI 9260CV in RAID 5.

Read = always read ahead

IO Policy = Cached IO

Write = Always write back

OS = Windows Server 2012 R2 (fully patched), firmware and drivers latest.

I did some benchmarks and I can only explain the sequential results. The rest I don't get: RAID 5 is supposed to have slower writes due to the parity calculation.

[benchmark screenshot]
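For reference, by parity overhead I mean the classic RAID-5 read-modify-write cycle. Here is a tiny Python sketch of the idea (purely illustrative, with made-up block sizes, not how the controller actually implements it):

```python
# Illustrative RAID-5 small-write penalty: updating one data block means
# reading the old data and old parity, XORing, then writing the new data
# and new parity -- roughly four disk I/Os for one logical write.

def update_block(stripe, index, new_data):
    """Read-modify-write of one data block in a single RAID-5 stripe.
    stripe is a list of data blocks with the parity block last."""
    old_data = stripe[index]           # I/O 1: read old data
    old_parity = stripe[-1]            # I/O 2: read old parity
    # new parity = old parity XOR old data XOR new data
    new_parity = bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))
    stripe[index] = new_data           # I/O 3: write new data
    stripe[-1] = new_parity            # I/O 4: write new parity

# 3-drive example: two data blocks plus one parity block (4-byte blocks for brevity).
d0, d1 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"
stripe = [d0, d1, bytes(a ^ b for a, b in zip(d0, d1))]
update_block(stripe, 0, b"\xAA\xBB\xCC\xDD")
```

That is why, without any caching, random writes on RAID 5 should come out slower than reads, not faster.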

The LSI card has 512MB of cache and for sure it influences the results: the numbers get smaller as the cache-to-file-size ratio shrinks. While this is normal, there is always about 50% more throughput for random writes than reads, and this is consistent whatever the file size. I would expect that advantage to drop as well as the test file grows bigger (if the cache were the reason for this strange performance).
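To put my reasoning in numbers, here is a back-of-the-envelope model of the trend I expected if the cache alone explained it (every figure below is an invented placeholder, not one of my measurements):

```python
# Back-of-the-envelope model: the slice of the test file that fits in the
# controller cache goes at "bus speed", the rest at raw array speed, so the
# apparent throughput should fall toward the raw number as the file grows.

CACHE_MB   = 512      # controller cache size (assumed 512 MB on the 9260CV)
CACHE_MBPS = 1500.0   # throughput while writes just land in cache (placeholder)
DISK_MBPS  = 100.0    # raw random-write throughput of the array (placeholder)

def expected_mbps(file_mb):
    cached = min(CACHE_MB, file_mb)
    uncached = file_mb - cached
    seconds = cached / CACHE_MBPS + uncached / DISK_MBPS
    return file_mb / seconds

for size in (256, 1024, 4096, 16384):
    print(f"{size:>6} MB test file -> ~{expected_mbps(size):.0f} MB/s apparent")
```

So if the cache were the whole story, the write advantage should fade as the file grows; in my results it stays at roughly +50% regardless.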

Here are results for a tiny file that fits entirely into the board's cache, so the numbers reflect PCIe transfer speed, not disk performance.

[benchmark screenshot: small test file]

What did I miss?

m a r c

Edited by lecaf


Given the small test file size, you are most likely just seeing the effect of the write cache on that RAID card.


I have 3 × 3TB WD Red drives on an LSI 9260CV in RAID 5.

Read = always read ahead

IO Policy = Cached IO

Write = Always write back

IO Policy on the controller needs to be set to "Direct". This will stop the controller from using its RAM to cache/buffer I/O.

You also have Write-Back enabled, which boosts performance as well and is the setting more commonly left on. If you change it to Write-Through, you see the true drive performance on all writes (unless you also disable the small onboard cache on the drives).
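If you prefer the command line over MegaRAID Storage Manager, something along these lines should do it with LSI's MegaCLI; I am going from memory on the exact property names and the install path, so treat it as a sketch and check MegaCli's own help output first:

```python
# Sketch: switch the logical drive to Direct I/O and Write-Through via MegaCLI.
# The property spellings ("-Direct", "WT") and the MegaCli path below are
# assumed from memory -- verify them with MegaCli's -LDSetProp help first.
import subprocess

MEGACLI = r"C:\MegaCli\MegaCli64.exe"   # assumed location of the MegaCLI binary

def set_ld_property(prop, ld="-L0", adapter="-a0"):
    """Apply one logical-drive property to logical drive 0 on adapter 0."""
    subprocess.run([MEGACLI, "-LDSetProp", prop, ld, adapter], check=True)

set_ld_property("-Direct")   # IO Policy: Direct (no buffering of I/O in controller RAM)
set_ld_property("WT")        # Write policy: Write-Through (ack only after the disks have the data)
```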


Given the small test file size, you are most likely just seeing the effect of the write cache on that RAID card.

Yep, I disabled the write cache and performance clearly drops... worse than a standalone drive.

[benchmark screenshot: write cache disabled]

(These numbers scare me. Are they normal?)

IO Policy on the controller needs to be set to "Direct". This will stop the controller from using its RAM to cache/buffer I/O.

Can't see much of a difference, but it's true that "Direct" is LSI's recommendation.

[benchmark screenshot: Direct IO policy]

m a r c

Edited by lecaf


UPDATE:

The high write numbers you get come from having Write-Back enabled on the array, while you expected much lower figures. Caching not only accelerates writes but also hides extra write costs (such as parity) from applications: the I/O sits in system RAM / controller RAM / drive RAM and is written out later, instead of each I/O truly being committed to the drive(s) immediately (Write-Through). The cache is still used for queuing with Write-Through, but the controller will not report an I/O as complete until it has actually been written to disk. If a power loss happens with Write-Through, data stays consistent from the application's point of view, because whatever was only buffered was never reported as written.
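To make the difference concrete, here is a toy Python model of the two acknowledgment policies; the latencies are invented placeholders, but it shows why a benchmark sees "fast" writes under Write-Back:

```python
# Toy model of the two write policies. Write-Back acknowledges an I/O as soon
# as it lands in controller cache and flushes to disk later; Write-Through
# acknowledges only after the data (and parity) actually reach the disks.

CACHE_LATENCY_MS = 0.05   # time to land one block in controller RAM (placeholder)
DISK_LATENCY_MS  = 8.0    # time for the array to complete the write + parity (placeholder)

def write_back_ms(blocks):
    """Time visible to the application: cache only; flushing happens in the background."""
    return blocks * CACHE_LATENCY_MS

def write_through_ms(blocks):
    """Time visible to the application: every block waits for the real disk write."""
    return blocks * (CACHE_LATENCY_MS + DISK_LATENCY_MS)

n = 1000
print(f"Write-Back   : {write_back_ms(n):8.1f} ms seen by the benchmark")
print(f"Write-Through: {write_through_ms(n):8.1f} ms seen by the benchmark")
```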

It is okay to use write caching as long as a battery is present to keep unflushed data alive: in the case of a RAID controller, a BBU; in the case of a system with a standard controller, a UPS on the system.

I too have seen writes being faster than reads on some block sizes. It could be the controller being optimized in such a way, but I am not completely sure.

Just a minor note on your single-drive testing: you most likely had the write cache enabled (same as Write-Back) on the single drive while you had Write-Through (same as write cache disabled) on the RAID-5 array, so the comparison isn't apples-to-apples in that case. You can check this yourself: see whether the array is set to Write-Back or Write-Through, and check whether the Windows setting (Device Manager > Disk Drives > Properties > Policies) is being overridden by the controller. If the array is on Write-Through, you should disable the write cache on the single drive you test against.

NOTES:

I only have two WD Red 3TB drives and can't test RAID-5 on them.

For my main tests, I used latest generation 1TB Raptors.

System configuration:

1x Intel Xeon E5-2620 v2 (Ivy Bridge)

4x 16GB ECC Reg LP 1,333MHz DDR3 @ 9/9/9/24 (Quad Channel)

LSI MegaRAID 9260-8i (4 of 8 ports connected to the SAS expander)

1x WD Red 3TB

[benchmark screenshots]

1x WD Raptor 1TB

[benchmark screenshots]

3x WD Raptor 1TB (RAID-5)

[benchmark screenshots]

4x WD Raptor 1TB (RAID-5)

[benchmark screenshots]

5x WD Raptor 1TB (RAID-5)

[benchmark screenshots]

6x WD Raptor 1TB (RAID-5)

[benchmark screenshots]

P.S.: I might also post benchmarks from Solaris ZFS RAID-Z (the RAID-5 equivalent) to compare against traditional parity RAID.

Edited by Maxtor storage
