nForce4 RAID5 - why does it suck?

chrispitude


I've currently got a rock-solid 3ware 7450 RAID5 array using four Hitachi 7K250 160GB PATA drives. The box is a linux server running an underclocked, undervolted Athlon XP Mobile. The combination of linux and 3ware has been an absolute breeze to maintain. The array has seen a few different types of drives over the years (75GXP, WD1200JB, and now 7K250) and the 3ware has saved my bacon through a 75GXP failure and a WD1200JB failure.

I have been contemplating moving to an Athlon 64 system using some sort of CPU-based RAID5 approach, most likely linux software RAID5. From what I understand, it is not possible to boot the system from a RAID5 partition because of the limitations of linux software RAID. Then I read that some of the latest nForce4 chipsets have native RAID5 support at the chipset level, and I found the following informative article:

Chipset Serial ATA and RAID performance compared: Whose arrays are faster? (techreport.com)

The article compares various RAID levels on the nForce4 and Intel ICH7R chipsets. To sum up the article's RAID5 findings, chipset-level RAID5 isn't very impressive. I've seen references to linux software RAID5 write speeds of ~50MB/s, while the article only manages a maximum sustained chipset RAID5 write speed of about 30MB/s.

I'm assuming that with chipset RAID5, I'd just build an array in the BIOS across the four drives, and install linux on one big SCSI device. That definitely has some appeal. Does anyone have any experience with chipset-level RAID5, or with software-level RAID5 read/write performance?

The 3ware isn't fast, but it sure makes things easy. :)

- Chris

CPU usage?

Well, since I'm not quite sure what you're asking here...

The review I linked indicates that CPU usage for both the Intel and NVIDIA chipset-level RAID5 arrays was low - less than 1%. See this page. The CPU does not appear to be the bottleneck.

- Chris

From what I understand, it is not possible to boot the system from a RAID5 partition because of the limitations of linux software RAID.

But you can apply RAID 5 to partitions instead of entire disks.

So create a 250MB boot partition ahead of the data partition on each drive, and you don't need extra drives to boot from.

You can even put the boot partitions in RAID 1.


Hi SCSA,

The primary reason is to move from PATA to SATA. Also, there are two bottlenecks with my (old) 3ware's RAID5 performance:

  • The write performance isn't great. If memory serves, it's around 20MB/s. Supposedly the performance of the newer 3ware cards is better.
  • Sustained reads from the 3ware array are quite fast - fast enough to easily saturate the bus at around 120MB/s - so the bus, not the card, caps read throughput.

I am looking for an upgrade path which will address both of the bottlenecks above.

Hi Olaf,

You described my fallback plan. :) If nothing else, I will set up a small boot partition in RAID1 mirrored across all four drives, then create all the remaining partitions in RAID5. While this is doable, it does waste a tiny bit of space on the RAID1 array. It seems much more elegant to me to simply collapse all the drives into a RAID5 array below the OS (i.e., with native BIOS support or an adapter card), so that the entire system sits on a single large RAID5 device.
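Just to show how small "tiny" is, here's a rough back-of-the-envelope sketch using the four 160GB drives I have now and Olaf's suggested 250MB boot partition (approximate numbers for illustration, not a promise of usable capacity):

```python
# Rough capacity math for the fallback layout: a small RAID1 /boot mirrored
# across all four drives, with the remainder of each drive in RAID5.
# Drive and partition sizes are taken from this thread.

DRIVES = 4
DRIVE_GB = 160          # Hitachi 7K250 160GB drives
BOOT_MB = 250           # per-drive boot partition, as suggested above

boot_usable_mb = BOOT_MB                      # 4-way RAID1: capacity of one copy
boot_overhead_mb = BOOT_MB * (DRIVES - 1)     # the three redundant copies

data_per_drive_gb = DRIVE_GB - BOOT_MB / 1024.0
raid5_usable_gb = data_per_drive_gb * (DRIVES - 1)   # RAID5 gives up one drive to parity

print(f"/boot (RAID1): {boot_usable_mb} MB usable, {boot_overhead_mb} MB spent on extra mirror copies")
print(f"data  (RAID5): {raid5_usable_gb:.1f} GB usable out of {DRIVES * DRIVE_GB} GB raw")
```

Losing three extra copies of a 250MB /boot is noise next to the full drive's worth of capacity that RAID5 parity already costs, so my objection is really aesthetic rather than practical.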

The reason I am beginning to prefer a host-based solution over a hardware solution is the following:

  • The machine is currently just a fileserver. It pretty much idles 24/7. The CPU really has nothing better to do than perform RAID duties.
  • Modern CPUs are so fast that, for a dedicated fileserver box, the performance could exceed that of a hardware solution.
  • By using a host-based RAID solution, the CPU becomes the intelligence of the RAID controller, essentially resulting in a controller with "Cool'n'Quiet" technology that consumes less power when it is idle.
  • In the future, if I do ever begin doing more processing on the box, upgrading to a dual-core Athlon 64 will provide more than enough parallel CPU capacity to perform host-based RAID processing in parallel with other processing tasks.

Of course, I keep reading the glowing reviews of these Areca cards, and the geek in me just wants to bite the bullet and get an Areca 1210 just to have it. :) I've also been looking at the RAIDCore cards, which seem quite intriguing. They appear to be a pretty wrapper around what is essentially another host-based solution, but I really can't argue with the solidity and completeness of the implementation, or with the impressive performance it delivers. Plus, since the RAIDCore is host-based, its performance should scale with the CPU capabilities of the machine.

Back to the original topic... I was hoping that the new nForce4 RAID5 capabilities would be comparable to the Raidcore solution, but this does not seem to be the case from the benchmarks I am reading.

- Chris


Another option is to get a 256MB IDE flash module ($47), stick it in the IDE port, and boot from that. Then RAID 5 your hard disks all you want!

Just make sure you partition your disks, create the RAID, and format in advance - Fedora's installer uses horrible ext3 defaults: it'll reserve 5% of your big RAID 5 volume for the root user, and it won't format with a stride matched to the array.
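For reference, here's a rough sketch of the numbers I mean, assuming a linux software RAID device at /dev/md0, a 64KB chunk size, and four drives (all example values - substitute whatever your array actually uses):

```python
# Quick calculator for the ext3 format options Fedora's installer skips.
# Chunk size, block size, and drive count are example assumptions.

chunk_kb = 64        # RAID chunk (stripe unit) size in KB - example value
block_kb = 4         # ext3 block size in KB (4096-byte blocks)
drives = 4           # disks in the RAID5 array
reserved_pct = 1     # instead of the default 5% reserved for the root user

stride = chunk_kb // block_kb            # filesystem blocks per RAID chunk
print(f"mke2fs -j -m {reserved_pct} -E stride={stride} /dev/md0")

# On a big RAID5 data volume, dropping the reserved blocks from 5% to 1% frees
# tens of gigabytes, and the stride hint lets mke2fs spread its block and inode
# bitmaps across the members instead of piling them onto a single disk.
```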


The reason RAID 5 performance isn't great (any RAID 5, whether hardware or software) is quite simple: unless you are overwriting an entire stripe (and even then, only if the chipset or driver optimizes for this specific case), any write will cause several reads to occur before the write can be fulfilled.
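To make the bookkeeping concrete, here's a toy sketch of the read-modify-write path for a small write (pure illustration - real controllers operate on whole chunks and cache aggressively, but the XOR arithmetic is the same):

```python
# Toy illustration of the RAID5 read-modify-write penalty for a small write.
# Blocks are just byte strings here; a real array works on full chunks.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Existing contents of one stripe (3 data disks + 1 parity disk).
old_data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(xor_blocks(old_data[0], old_data[1]), old_data[2])

# Overwrite just the block on disk 1 with new data.
new_block = b"ZZZZ"

old_block = old_data[1]          # READ 1: old data block
old_parity = parity              # READ 2: old parity block
new_parity = xor_blocks(xor_blocks(old_parity, old_block), new_block)

old_data[1] = new_block          # WRITE 1: new data block
parity = new_parity              # WRITE 2: new parity block

# Four I/Os to service one small write - versus a single write on a plain disk.
assert parity == xor_blocks(xor_blocks(old_data[0], old_data[1]), old_data[2])
```

The full-stripe case avoids the two reads because the new parity can be computed entirely from the data being written, which is why well-behaved sequential writes suffer far less than small random ones.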

Now, with that being said, if you're really looking for performance, I'm surprised you haven't considered selling your 3ware card and using the proceeds to help subsidize a RAID 1+0 array. If you look back at the graphs in TechReport's article, RAID 1+0 is clearly superior in reliability and delivers high performance in both reads and writes. The downsides are, obviously, either the cost of extra drives or the capacity lost by reusing your existing drives, plus either a new motherboard or an add-in card - but compromises have to be made somewhere.

Personally, I'd rather outlay a little more cash and have my system perform better all the time.

Remember, the big performance bottleneck of RAID 5 is caused by the extra reads that every write causes. You can't really get around it, regardless of what type of RAID 5 you build.

Remember, the big performance bottleneck of RAID 5 is caused by the extra reads that every write causes. You can't really get around it, regardless of what type of RAID 5 you build.

It's not just extra reads but also an extra write.


Remember, the big performance bottleneck of RAID 5 is caused by the extra reads that every write causes. You can't really get around it, regardless of what type of RAID 5 you build.

It's not just extra reads but also an extra write.

True enough, since every write that isn't overwriting a complete stripe will also require a "parity write" to occur.

Is it any wonder that RAID 5 performance suffers, with all this "extra" disk activity going on? Not to mention that performance during a rebuild for RAID 5 is even worse than normal.

RAID 1 and 1+0 don't suffer much of a performance penalty during a rebuild.
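Rebuilds illustrate the gap nicely. As a rough sketch, using the four 160GB drives mentioned earlier in the thread as example numbers:

```python
# Rough count of the data that must be read to rebuild one failed drive.
# Drive sizes are the four 160GB disks from this thread; illustration only.

DRIVES = 4
DRIVE_GB = 160

# RAID5: every block of every surviving member is read and XORed together
# to regenerate the lost drive, while normal I/O competes with all of it.
raid5_rebuild_read_gb = (DRIVES - 1) * DRIVE_GB

# RAID1 / RAID1+0: the replacement is simply copied from its one mirror
# partner; the other drives in the array aren't involved at all.
raid10_rebuild_read_gb = DRIVE_GB

print(f"RAID5   rebuild reads ~{raid5_rebuild_read_gb} GB across {DRIVES - 1} surviving drives")
print(f"RAID1+0 rebuild reads ~{raid10_rebuild_read_gb} GB from a single mirror partner")
```

Every byte of that RAID5 rebuild traffic competes with your normal workload until the array is healthy again, whereas a mirror rebuild only touches the failed drive's partner.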
