ewart

RAID 10 Performance Slower than single disk on ICH9R


I'm experiencing slow RAID 10 performance with 4 x 750 GB Seagate drives on an ASUS P5E motherboard. All equipment is brand new.

Average speed is 80 MB/s, with some spikes up to 90 MB/s and drops down to 20 MB/s. At this performance level I'd be better off removing the drives from the RAID, because the RAID is slowing them down. Outside the RAID, a single disk gets me up to 105 MB/s.
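As a rough sanity check, here is a minimal Python sketch of what a healthy 4-drive RAID 10 ought to deliver on sequential reads (assumptions, not measurements: reads are striped across the two mirror pairs, and each pair contributes roughly the 105 MB/s single-disk figure above):

# Rough estimate of expected RAID 10 sequential read speed.
# Assumptions (not measurements): reads are striped across the two mirror
# pairs, and each pair contributes about one drive's sequential throughput.
single_disk_mb_s = 105        # single drive measured outside the RAID
drives = 4
mirror_pairs = drives // 2    # RAID 10 = RAID 0 stripe over RAID 1 mirrors

expected_mb_s = mirror_pairs * single_disk_mb_s
observed_mb_s = 80

print(f"expected ~{expected_mb_s} MB/s, observed ~{observed_mb_s} MB/s")
print(f"observed is about {observed_mb_s / expected_mb_s:.0%} of that estimate")

Even ignoring any read balancing within a mirror pair, an 80 MB/s average is well under what two striped spindles should manage.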

The Intel Matrix Storage (ISM) console reports the drives running in 'Generation 1' transfer mode, with NCQ and write cache enabled; stripe size is 64 KB. The drives are plugged into ports 1 through 4.

I'm using the latest motherboard BIOS and ISM 7.8.1013.

My previous motherboard with an ICH7R and older-generation 320 GB Seagate drives was much faster, so there is clearly something very wrong here. I'm running Vista x64 with a Q9450 CPU @ 3.2 GHz.

Any suggestions appreciated; compile times are a bitch!

regards

ewart


For an immediate resolution, get a lower-end hardware RAID controller from a major manufacturer that supports RAID 1+0. <_<

At a guess, this is probably an implementation problem in the driver (in which case, it may be fixed whenever Intel releases a driver update), but it might be a hardware issue or limitation.


My guess is there are other people running RAID 10 on this chipset without this problem; if so, it would be nice to hear about their experience.

cheers


I have a somewhat similar setup:

ASUS P5E-VM, RAID 10 on connectors 1-4

4 x Seagate 500 GB Barracuda ES.2 enterprise SATA 2 drives (ST3500320NS)

Windows Server 2008 Enterprise Edition x64

Core 2 Duo at 2.33 GHz, 8 GB RAM

Intel Matrix Storage 7.8.1013

If you have any specific tests that you would like me to run, just let me know.

PS: my disks say Gen 2... did you remember to remove the tiny jumper on each of your Seagate disks to enable SATA 2 mode?

PPS: did you enable the volume write-back cache? If so, turn it off; it was apparently designed for RAID 5 implementations and really seems to screw with other RAID levels. It seems to have both positive and negative effects on read performance... at least in benchmarks.

I almost hate to ask, but how's performance in XP? Are there beta versions for Intel Storage Manager?

No beta versions as far as I know; and sorry, I don't have XP installed.


Seems the shop left those little jumpers on. I pulled them off and my hard drives caught fire! Literally! No, really: as it was an electrical fire, I immediately turned off all the power at the mains and then attempted to blow out the flames before the smoke got too bad. I succeeded... it was interesting trying to pull the melted SATA power cable out of the drive. Very unimpressed. Anyway, having spent the entire day rebuilding the array (yay for RAID 10, although the rebuild process took about 6 hours), and after the purchase of two new 750 GB drives, I can confirm that HD Tach 3.0.1 reports:

Average read speed with volume write cache disabled: 86.4 MB/s -- pretty much flat across the entire disk span

Average read speed with volume write cache enabled: 127.8 MB/s -- starts at 160 MB/s and drops to 80 MB/s at the end of the disk

Not sure why enabling the cache improves the read test, but there you go. I'll leave the cache enabled and see how this performance translates to real-world apps in due course.

I'd still be interested in your HD Tach results.

cheers

ewart.


I downloaded HD Tach 3.04 from http://www.simplisoftware.com/Public/index...?request=HdTach. It complained that it required XP or W2K, so I enabled XP SP2 compatibility and ran it as administrator. I ran the long benchmark and this is what I got:

Average read speed with volume write cache disabled: 152 MB/s -- starts at 220 MB/s and drops to 120 MB/s at the end of the disk, with a few drops to 100 MB/s.

As this is a server, I cannot give this app exclusive access to the HDDs; I suspect OS access is causing the drops to 100, as they are random across reruns.

Average read speed with volume write cache enabled: 126.8 MB/s -- varied wildly from 100 to 200 across the whole range and did not follow the expected declining average STR pattern.

PS: I use a 64 KB stripe size and have the hard drive write cache enabled at all times in the Intel tool (see the sketch at the end of this post for how that stripe size maps onto the disks).

Also, can you retest? My results seem to be the opposite of yours when comparing the enabled vs. disabled volume write cache setting.
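For anyone wondering how the 64 KB stripe lays data out across the four disks, here is a minimal Python sketch of the generic stripe-over-mirrors mapping (an illustration only; the exact on-disk layout the Intel firmware uses is not documented in this thread):

# Generic RAID 10 (stripe over mirrors) address mapping with a 64 KB stripe.
# Illustrative only; not necessarily the exact layout the ICH9R firmware uses.
STRIPE_SIZE = 64 * 1024   # bytes
MIRROR_PAIRS = 2          # 4 drives = 2 mirrored pairs striped together

def raid10_location(logical_offset):
    """Map a logical byte offset to (mirror pair, byte offset on each member)."""
    stripe_index = logical_offset // STRIPE_SIZE
    pair = stripe_index % MIRROR_PAIRS               # stripes alternate between pairs
    stripe_on_member = stripe_index // MIRROR_PAIRS
    return pair, stripe_on_member * STRIPE_SIZE + logical_offset % STRIPE_SIZE

# A long sequential read alternates between the two pairs, which is why it
# should run at roughly twice a single drive's sequential speed.
for offset in range(0, 4 * STRIPE_SIZE, STRIPE_SIZE):
    pair, member_offset = raid10_location(offset)
    print(f"logical {offset // 1024:4d} KB -> pair {pair}, member offset {member_offset // 1024} KB")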

Not sure why enabling the [write] cache improves the read test, but there you go.

On the ICH9R (and probably the ICH8R, ICH7R, etc.), enabling the write cache also enables read-ahead caching in the driver and/or controller.
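To make that concrete, here is a toy Python model of why read-ahead helps a sequential benchmark (purely illustrative; the real driver/controller logic is not public, and the costs and depth below are made-up numbers): once sequential access is assumed, the next few stripes are prefetched, so most reads skip the request-setup cost.

# Toy model: effect of read-ahead on a long sequential read.
# Purely illustrative; not Intel's actual algorithm, and the costs are made up.
REQUEST_COST = 1.0      # arbitrary time units to issue a request to the disk
TRANSFER_COST = 0.5     # time to transfer one stripe once it is in flight
READ_AHEAD_DEPTH = 4    # stripes prefetched when sequential access is assumed

def total_time(stripes, read_ahead):
    time, prefetched = 0.0, 0
    for _ in range(stripes):
        if read_ahead and prefetched > 0:
            prefetched -= 1                     # already cached: no request cost
            time += TRANSFER_COST
        else:
            time += REQUEST_COST + TRANSFER_COST
            if read_ahead:
                prefetched = READ_AHEAD_DEPTH   # queue up the next few stripes
    return time

print("without read-ahead:", total_time(1000, read_ahead=False))
print("with read-ahead:   ", total_time(1000, read_ahead=True))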

Average read speed with volume write cache disabled: 86.4 MB/s -- pretty much flat across the entire disk span

Average read speed with volume write cache enabled: 127.8 MB/s -- starts at 160 MB/s and drops to 80 MB/s at the end of the disk

Hello, I'm experiencing the same issue.

Did you find out anything?

I've already changed everything: cables, HDDs, drivers, even the damn motherboard.

My setup is the following: ASUS P5K Premium, XP 32-bit, 4 x Samsung HD501LJ, E8200 @ 3.5 GHz.

If I enable the write-back cache, performance rises to approximately RAID 0 level, but CPU usage becomes a lot higher too. And WBC without a UPS is gambling, so that's not my way.

Some chaotic test results with 4 drives in RAID 10:

http://web.interware.hu/seven/server/raid10/hdtach.png

http://web.interware.hu/seven/server/raid10/hdtune.png

The strangest thing is that if I remove any of the HDDs on the fly, it speeds up to RAID 0 levels.

What did you partition your HDDs with?


Looking at your graphs, it appears that you might have the appropriate shape, as STR trends down over the capacity of the disk, but there are many drops due to other apps accessing the system. Are you positive that you do not have anything else on this volume? Is it maybe a system partition?

Did you align your array's partitions when you made them? Apparently Vista and Windows Server 2008 do not require it, but older OSes apparently benefit from it.
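For reference, 'aligning' here just means making sure each partition starts on a stripe-size boundary. A minimal Python sketch of the check (the 63-sector start is the classic XP-era default, used here only as the common misaligned case; on Windows the actual offset can be read with "wmic partition get Name, StartingOffset"):

# Check whether a partition's starting offset lands on a stripe boundary.
# The stripe size matches the 64 KB value discussed earlier in the thread.
STRIPE_SIZE = 64 * 1024   # bytes
SECTOR = 512              # bytes per sector on these drives

def is_aligned(starting_offset_bytes, boundary=STRIPE_SIZE):
    return starting_offset_bytes % boundary == 0

# Classic XP default: partitions start at sector 63 (31.5 KB), which straddles
# the stripe boundary, so stripe-sized I/O regularly touches two stripes.
for start_sector in (63, 2048):   # XP default vs. a 1 MiB-aligned start
    offset = start_sector * SECTOR
    print(f"start sector {start_sector:5d} ({offset} bytes) -> aligned: {is_aligned(offset)}")

Vista and Windows Server 2008 start new partitions at 1 MiB by default, which is why they generally do not need manual alignment.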


Hello mervincm, thanks for your prompt reply.

It is the system volume. No other applications are running. The swap file is switched off; virtual memory is in RAM. So nothing else is accessing the disks except the test application.

If I remove any of the disks (so the RAID 10 is running on 3 disks), the drops disappear completely.

After such a test, it takes approx 2 hours to rebuild the removed disk.

What does aligning an array mean? Sorry, I don't get what you mean.

May I ask your RAID Option ROM version?


Hi mervincm, yes I can confirm my results are the right way around, i.e. it is faster with the cache enabled. Are you sure your results are the right way around? Note that the console message will say 'disable volume cache...' when the cache is enabled.

Hi Trinary, looks like the same thing alright. You could try upgrading your ROM, but it does look like other apps are interfering with your results, though. Try Sysinternals Process Monitor (a free Microsoft tool) to see what else is hitting your disk.

cheers

ewart


This is what I get with the volume cache disabled.

Note that this is a Windows 2008 server running a bunch of stuff, including IIS facing the web, so a "clean run" is not possible until I can figure out how to do this from a USB or CD boot. There are many drops, which I suspect are other threads accessing the disk during the test.

Here are my settings in the Intel Matrix Storage Console:

imatrix_1.jpg

imatrix_2.jpg

imatrix_3.jpg

imatrix_4.jpg

Here are a couple of samples of the HD Tach 3.04 long benchmark:

HDTach-1.jpg

HDTach-2.jpg



Thanks for your posts, everyone.

As far as I can see, ICH9R RAID 10 is a piece of sh*t.

I haven't seen anyone with proper results.

This feature simply shouldn't be advertised.

I'm tired of messing with it. I've already spent a week looking for a solution.

I'll use 2 x RAID 1, and I'll buy a proper hardware PCIe RAID controller card when I can.

Best Regards


ICH9R RAID 10 worked fine for me:

hdtachqi4.jpg


Thanks for your test results.

This measurement was done with write-back cache enabled.

So you have a high risk of data loss and three times as much CPU load as without it.

I haven't seen anyone with proper results.

This feature simply shouldn't be advertised.

What's wrong with my results? They seem right where they should be, from what I can tell.

Things get weird when I turn on the volume write cache, but according to Intel that is designed for RAID 5, so I do not use it for my RAID 10.

And as far as your concern about the risk goes, buy a UPS; they are CHEAP. That's what I did :) Small price to pay for the extra protection and peace of mind.

As far as extra CPU load goes, I don't care about that at all. CPU power is amazingly cheap, and I almost always have spare CPU capacity just wasting away. If small amounts of extra CPU utilization really impact your results, then I agree that you would be best served by a PCIe RAID card with an integrated processor. Just realize that it is going to cost MANY times what you paid for your ICH9R.

Best of luck!

PS: your point about the performance increase when you pull a drive is very interesting, and I would love to understand that one!


PS: your point about the performance increase when you pull a drive is very interesting, and I would love to understand that one!

Well, me too :)

Btw, a UPS won't protect you against a BSOD.

Looking forward to your results with WBC disabled.

Best Regards


I have the volume write cache disabled. Are you suggesting that you would like to see results with the individual hard disk write caches disabled as well, through the Intel tool? Other than specific cases like a database, where you want to avoid race conditions etc., why would you do that? I just can't think of any workload that would be so sensitive to failed writes that I would consider putting it on anything but enterprise-grade hardware.

I have the volume write cache disabled. Are you suggesting that you would like to see results with the individual hard disk write caches disabled as well, through the Intel tool?

Oh, sorry, no. I was thinking about noegrut's setup. It really wasn't obvious.


OK, here is a quick one after I disabled the hard disk write-back cache (in addition to the previously disabled volume write-back cache).

Imatrix_5.jpg

This is the result; nothing changed (as expected):

HDTach-3.jpg


ICH9R RAID 10 worked fine for me:

hdtachqi4.jpg

Thanks for your test results.

This measurement was done with write-back cache enabled.

So you have a high risk of data loss and three times as much CPU load as without it.

Ignore the CPU load in that benchmark; it was taken on my test server, which was running a Celeron 420.

I don't run RAID 10 on that machine; this was just a benchmark I took while I was experimenting with the hardware.

Note that enabling write-back cache on the ICH9R also enables read-ahead caching in the driver/controller.

I've been using WB caching for about 9 months on my main system without any problems. I have a UPS and I don't remember the last time I had a BSOD.

