The Belgain

Nested Linux software RAID arrays

Hi there,

I've currently got a 4-drive software RAID5 array on Linux. The drives are 160GB each, and I want to add capacity to the array. Rather than getting several extra 160GB drives, I'd like to get a couple of 320GB drives and have a "4-drive" RAID5 array with the following setup:

Drive1: new 320GB drive

Drive2: new 320GB drive

Drive3: 2*160GB drives in RAID0

Drive4: 2*160GB drives in RAID0

Is there any reason why this would be a bad idea? The array is used only for storing audio/video and is on a dedicated networked PC. The OS is on a separate drive. I'm not really bothered about performance, within reason - only sequential read/write transfer rate really matters. The PC is a 333MHz P2.

Anything in particular I should know about setting it up? I was wondering whether it would be a good idea to pick the stripe size for the RAID0 arrays to be half of the one for the RAID5 array, or would that not help?
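To make the question concrete, I imagine the build would look something like this (device names are just made up for illustration - sda/sdb being the new 320GB drives and sdc-sdf the old 160GB ones, with the chunk sizes only there to show the half/full relationship I mean):

    # Two RAID0 pairs out of the old 160GB drives, 32KB chunks
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=32 /dev/sdc1 /dev/sdd1
    mdadm --create /dev/md1 --level=0 --raid-devices=2 --chunk=32 /dev/sde1 /dev/sdf1

    # RAID5 across the two new drives and the two RAID0 sets,
    # with a 64KB chunk (i.e. twice the RAID0 chunk, as per my question above)
    mdadm --create /dev/md2 --level=5 --raid-devices=4 --chunk=64 \
          /dev/sda1 /dev/sdb1 /dev/md0 /dev/md1

Does that look sane, or have I got the chunk-size relationship backwards?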

Cheers...

No, this should IMHO not be a problem. I would create one Linux RAID autodetect partition on each drive and make it about 2GB smaller than the drive itself, so that you have no trouble if a replacement drive turns out to be slightly smaller. The latest kernel, 2.6.13, contains some important fixes for the md system, so use it! You should also set the timer tick rate down to 100Hz (250Hz is the default now) and disable all preemption features, to keep this old system from taking too many interrupts.
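For the partitioning, something along these lines (sfdisk input with sizes in MB; /dev/sda is only an example device, and double-check the numbers against your own drives):

    # One "Linux raid autodetect" (type fd) partition per drive, sized a
    # couple of GB smaller than the disk so a slightly smaller replacement
    # still fits later. 303000 MB is roughly the 320GB drive minus ~2GB.
    echo ",303000,fd" | sfdisk -uM /dev/sda
    fdisk -l /dev/sda    # check the partition type and size afterwards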

The favourite FS for this type of usage (large files, not too much overhead, maybe quotas) is definitely XFS.
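When you create the filesystem you can also tell XFS about the RAID5 geometry, e.g. (assuming a 64KB chunk and a 4-drive RAID5, so 3 data disks; /dev/md2 is just a placeholder for your RAID5 device):

    # su = RAID chunk size, sw = number of data disks (4-drive RAID5 -> 3)
    mkfs.xfs -d su=64k,sw=3 /dev/md2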

Your CPU should be fast enough, but the probably very slow PCI bus of this system will be a problem.

Since it looks like you have to back up and restore the data anyway, it might be better to just set up two RAID5 arrays by splitting the 320GB drives into two partitions, like so:

Array 1: 2* 160GB drives + 1st half of 320GB drives

Array 2: 2* 160GB drives + 2nd half of 320GB drives

And if you want the 2 RAID arrays to appear as one volume, you can use LVM to do it.
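Roughly like this, with made-up device names (sda/sdb being the 320GB drives split into two partitions each, sdc-sdf the 160GB drives; sizes are illustrative):

    # Array 1: two 160GB drives + the first half of each 320GB drive
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sda1 /dev/sdb1

    # Array 2: the other two 160GB drives + the second half of each 320GB drive
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sde1 /dev/sdf1 /dev/sda2 /dev/sdb2

    # Glue them into one volume with LVM (adjust the size to what
    # vgdisplay reports as free)
    pvcreate /dev/md0 /dev/md1
    vgcreate vg_media /dev/md0 /dev/md1
    lvcreate -L 900G -n media vg_media
    mkfs.xfs /dev/vg_media/media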

But even if you decide to go the nested RAID route, you'll still need to find a place to keep your stuff while you do the conversion.

I was going to set up a smaller RAID5 array first (i.e. with fewer drives), then use EVMS to expand it to a 4-drive array:

1. Create a degraded 3-drive RAID5 array using the two new 320GB drives.

2. Copy all the data (480GB of it) to the new array (which has a capacity of 640GB).

3. Split up the old RAID5 array into two 320GB RAID0 stripes.

4. Add one of these 320GB stripes to the degraded array and let it resync (making it a healthy 640GB array).

5. Take the array offline, and do an expansion using EVMS (turning it into a 960GB array).

Is there any reason this wouldn't work? I'm kind of relying on it....
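Just so it's clear what I mean, the mdadm side of steps 1, 3 and 4 would be something like this (device names invented again):

    # Step 1: degraded 3-drive RAID5 on the two new 320GB drives;
    # "missing" stands in for the absent third member
    mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 missing

    # Step 3: one of the old 160GB pairs re-striped as a 320GB RAID0 set
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1

    # Step 4: add the RAID0 set to the degraded array and let it resync
    mdadm --add /dev/md2 /dev/md0
    cat /proc/mdstat    # watch the rebuild progress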

What would be the advantage of using the setup you've suggested? If I used LVM to make it appear as a single partition, wouldn't I get very slow performance when reading from and writing to both arrays simultaneously (which would presumably happen sometimes, given the way LVM splits things up under the covers)? Presumably, though, with your setup I wouldn't have to back up the data - I could just remove one 160GB partition at a time from the existing array, replacing it with a 160GB partition from one of the 320GB drives?

Also, redundancy is slightly improved if I go for the nested arrays (I can get away with 2 of the 160GB drives failing if they're both in the same RAID0 stripe...).

I suppose that method would work to preserve the data, but if it were me, I'd put a copy of everything somewhere else Just In Case™. And I guess the idea of nested RAID arrays makes me kind of... nervous ;).

I guess I'm less nervous when the nesting is being done by a hardware RAID controller. As far as the guide goes, I have no experience with Ubuntu and only minimal experience with Debian, though the guide doesn't look like it'll have you doing anything reckless. Just don't forget to compile in the RAID support ;).
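By "compile in" I mean checking that the md driver and the RAID levels you need are built into the kernel rather than left out; in a 2.6.x config the relevant options look roughly like this:

    # Check the kernel config for software RAID support
    grep -E 'CONFIG_BLK_DEV_MD|CONFIG_MD_RAID0|CONFIG_MD_RAID5' /usr/src/linux/.config
    # You want to see something like:
    #   CONFIG_BLK_DEV_MD=y
    #   CONFIG_MD_RAID0=y
    #   CONFIG_MD_RAID5=y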

Also, redundancy is slightly improved if I go for the nested arrays (I can get away with 2 of the 160GB drives failing if they're both in the same RAID0 stripe...).

:blink: If only ONE drive fails you have a degraded array. In other words: you simply have MORE drives in the array, so the possibility of a double failure is much higher!

And Linux software RAID expansion? There is one tool that seems to work, but it is very, very experimental and time-consuming. You simply need a full backup, I think.

Uhm... and EVMS? Doesn't that monster need a bunch of nasty kernel patches? You don't even need that GUI (or is it much more than a GUI?); mdadm can do the job too.

Compiling a custom kernel is no big deal; on Debian especially it is quite easy:

Install "krenel-package" and "build-essential", unpack the sources, configure it and type "make-kpkg kernel_image" and you get fine deb-Packets ;-)

First, take a look here:

http://news.gmane.org/gmane.linux.raid

If only ONE drive fails you have a degraded array. In other words: you simply have MORE drives in the array, so the possibility of a double failure is much higher!

That's true, but it's also the case with the other setup you suggested...

And Linux software RAID expansion? There is one tool that seems to work, but it is very, very experimental and time-consuming. You simply need a full backup, I think.

EVMS lets you do this, and it's supposed to be pretty stable, I gather... doing a full backup of 500GB just isn't really feasible, unfortunately... I'd say about 70% of it is backed up to DVD-Rs though...

Uhm... and EVMS? Doesn't that monster need a bunch of nasty kernel patches? You don't even need that GUI (or is it much more than a GUI?); mdadm can do the job too.

Yeah, that's what I'm most worried about... Ubuntu comes with EVMS installed and configured by default (even though it isn't completely up to date in terms of revisions), so I may just give that a go...

WTF? Ubuntu ships with EVMS? That's surprising for a distribution aimed at desktop usage. But this could be the way forward for you. The other tool that is able to resize RAID5 is called "raidreconf", which should work about as reliably as EVMS, but without any modification of the kernel, so you can easily use a self-built 2.6.13 kernel.
