noir

Linux or Win2003 software RAID5?


There weren't really any settings. I created the array from 3x160GB IDE drives (Hitachi 7K250) and formatted it with a 32K cluster size. It takes about 3 hours to sync the whole thing; it reads at about 70MB/sec and writes at about 60.


I did mine with the default cluster size... so I'd better try re-RAIDing it with a bigger cluster size ^^

32K... I'll try that on Sunday evening (leaving for the weekend soon).

I'll post the results here when the RAIDing is done ^^


WHOA, exceptional results!

The array hasn't even finished building, and I just tried to copy over from another HD to this array... 1100MB in 25 seconds.

That's around 44MB/sec; I doubt my source HD can go any faster!

Finally, my RAID is working like I want it to!

Thanks so much, qasdfdsaq! :lol:

(One other tiny question: as this software RAID5 is based on dynamic disks, is it possible to migrate some of the HDs to another controller? Will Windows recognize them?)


You should be able to ... just import the disks in Disk Management when you attach them.


That sounds pretty slow...

I haven't tried Windows software RAID, but my fileserver runs Linux software RAID5 (4x160GB SATA Maxtors), and I get about 45MB/sec read and 25MB/sec write on a 333MHz P2.

It happily maxes out the 100Mbit LAN, even when accessing multiple files (downloads to the drive + video encoding from/to it + streaming video, for example). Overall, I'm very happy with it.

The only issue is that I now want to add more drives to the array, and I don't think I can do that without building a new array from scratch (correct me if I'm wrong).


So large cluster sizes are the way to go? I had a small cluster size on my previous array. What is your write speed now, noir?


I haven't benchmarked it yet.

I just know that a 1GB file transferred from a normal HD to the RAID5 array in 25 seconds... that's roughly 40MB/sec.

But I seriously think the source HD was the bottleneck; the RAID array didn't "sound" that active. Could anyone recommend a benchmark test for arrays? (HD Tach didn't seem to recognize my array.)


Could you copy the file the other way, to test the array's write speed? I can accept slow 30MB/s reads, but 5MB/s writes were terrible when the array was used as backup storage.
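If hd_speed doesn't recognize the array either, even a crude timer gives a ballpark figure. A minimal Python sketch (the drive letter, file name and size are placeholders, and the OS write cache will still flatter the result a little):

    import os
    import time

    TEST_FILE = r"E:\bench.tmp"   # hypothetical path on the RAID volume
    CHUNK_SIZE = 1024 * 1024      # write in 1MB chunks
    TOTAL_MB = 1024               # size of the test file

    buf = os.urandom(CHUNK_SIZE)  # one reusable chunk of random data

    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())      # push dirty pages to the disks before stopping the clock
    elapsed = time.time() - start

    print("wrote %d MB in %.1f s -> %.1f MB/s" % (TOTAL_MB, elapsed, TOTAL_MB / elapsed))
    os.remove(TEST_FILE)

Reading the file back the same way gives the read side, though you'd want a reboot (or a file much larger than RAM) in between so the data isn't served from cache.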


Perhaps you guys will have some insight into my Windows software RAID problem.

For some reason there is a pretty big performance hit with RAID-5 vs RAID-0. On my P4 3GHz HT (i875P chipset) I have three 120GB WD drives, one on IDE and two on SATA. Using RAID-0 I get about 140MB/s reads, but with RAID-5 I get only 45MB/s. My fastest SATA drive alone does 60MB/s.

CPU utilization during these read tests barely registers. I've tried playing around with cluster sizes with no change in perceived performance. I even went as far as moving the drives around on the IDE channels.

I find it odd that you guys here are happy about getting 44MB/s; I guess that depends on the machine it's running on, right? I've been using hd_speed as a benchmarking tool.

If anyone has any ideas about my situation, I would appreciate the feedback.

Thanks.


My current theory is that Windows software RAID is sensitive to cluster size. A small cluster size gives very poor performance (I got reads around 45MB/s and writes around 5MB/s). I'm hoping noir will get back soon with the performance figures from his new array; he used small clusters before with poor results and has changed to large clusters now.

Currently all my drives are tied up in RAID 0+1 arrays, but in a month or two I'll have the chance to check out Windows software RAID 5 again, although I would like some indication of whether it is worth the work or not.

In my tests, cluster size seemed to have little if any effect. As I mentioned, there is a huge difference between the performance of software RAID 0 and RAID 5: 140MB/s vs 45MB/s for reads. Since the parity is only calculated during writes, the parity overhead should not come into play; even if it did, a 3GHz P4 should be able to handle it.

It seems like there is some other issue here.

As for whether Windows software RAID 5 is worth the work: RAID 5 is supposed to be a compromise between redundancy and performance. In my particular application, that much compromise for redundancy is not worth it; a window of data loss between NAS backups is a risk worth taking. I am curious whether there is much of a difference with software RAID 5 on Linux.


Parity comes into play on reads too when the array is degraded, but the main bottleneck is writes: the existing data and parity first have to be read, updated, and then written back. There is no such thing as a plain "write" to a RAID 5 array.
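To make that read-modify-write cycle concrete, here's a toy sketch of the XOR bookkeeping in Python (purely illustrative; it's not how the Windows driver is actually implemented):

    def xor_block(a, b):
        # byte-wise XOR of two equal-length blocks
        return bytes(x ^ y for x, y in zip(a, b))

    # one stripe of a 3-disk RAID 5: two data blocks plus their parity
    data0 = b"\x11" * 4
    data1 = b"\x22" * 4
    parity = xor_block(data0, data1)

    # a small write that updates data block 0 only:
    new_data0 = b"\x55" * 4

    # read-modify-write: read the old data and old parity (2 disk reads),
    # fold the change into the parity, write both back (2 disk writes)
    parity = xor_block(xor_block(parity, data0), new_data0)
    data0 = new_data0

    assert parity == xor_block(data0, data1)  # parity is still consistent

So every small logical write costs four disk I/Os no matter how wide the array is; only full-stripe writes avoid the extra reads.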

If cluster size is not the issue, then what is? What cluster sizes did you try? Nobody seems able to give a good reason why some systems are PCI-bus limited while others are extremely slow. There have been some suggestions that the SATA interface is the problem, but I find that quite unlikely; that would be some kind of bug.

Where is noir!? You hinted that the new array was faster; please give us your results!

Edited by Adde


One thing seems to be common to all these systems: CPU load is low. Even on low-end fileservers it should not be an issue. The highest load I saw when using RAID 5 was 30% on an 800MHz Celeron, so there seems to be plenty of CPU horsepower to allow at least double (if not triple) the performance.


I tried the hd_speed.exe program on my RAID 0 array, and it gave some quick answers. Testing with a small block size (4KB) gave really poor performance (20MB/s); larger block sizes managed up to 80MB/s, although the result graph was quite unstable.

I haven't lost hope that software RAID 5 with a larger cluster size can give good performance.


RAID 0 and 5 cannot be compared on equal terms; it is not fair. RAID 0 has no parity calculations to do like RAID 5 does. You either want the speed or the fault tolerance; you can't have both.

I am currently getting 600+MB/sec on my RAID 0 setup and 400+MB/sec on RAID 5.


Oh, and by the way: there is nothing better than a hardware RAID controller for RAID 5 and 6 arrays. Parity is calculated much faster, especially when the card has a lot of cache memory. Plus, in the long run, software RAID performance decreases while hardware RAID performance stays the same.


Sorry I didn't post for so long, but I was busy testing the arrays.

3x200GB software-RAID5'd with a 32K cluster size gives me about 25MB/sec writes when copying from my boot HD to the array.

The same 3x200GB RAID5'd with a 64K cluster size does around 35MB/sec.

Now for the main problem: as soon as I RAID5 4x200GB, with a cluster size of either 32K or 64K, write speeds drop to around 12MB/sec.

I'm starting to feel helpless.

What cluster size are you soft-RAID5'ers using?

Edited by noir


So one step forward, and one back. How about read speeds?


I think it is somewhat expected that writes get slower when you add drives, since to update part of a stripe the array first has to read data back to recompute the parity, although the drop from 35MB/s to 12MB/s seems too large for that alone.
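For what it's worth, a back-of-envelope ceiling calculation (assuming roughly 55MB/s per 7200.7 drive, a guess rather than a measured figure) suggests the fourth drive should actually help sequential writes:

    DISK_MBPS = 55.0  # assumed sequential speed of one 7200.7; adjust to taste

    def ideal_raid5_write(n_disks, disk_mbps=DISK_MBPS):
        # best case (full-stripe) sequential writes: each stripe across
        # n disks carries n-1 data blocks plus one parity block, so the
        # ceiling is (n-1) x one drive's speed
        return (n_disks - 1) * disk_mbps

    for n in (3, 4):
        print("%d drives: ~%.0f MB/s ideal sequential writes" % (n, ideal_raid5_write(n)))

By that yardstick four drives should beat three, so 12MB/s smells like something else entirely; a shared cable or channel, or a resync still running in the background, perhaps.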

Are all drives in the same performance league?


No idea, I haven't tested read speeds yet... my guess is that reads are faster than writes anyway; for me, anything around 40MB/sec would be sufficient.

But I can't explain the performance discrepancy between the 3-HD and the 4-HD array.

Actually, with the 4-HD array, transfers start out all right for the first few hundred megabytes, then suddenly slow down terribly, whereas the 3-HD array keeps its transfer speed constant.


Could you try the hd_speed.exe mentioned above? It needs no installation; just run it and push Start. Test the current array, whichever configuration that is.

Edited by Adde


Yes, all drives are in the same performance league: they're 4x Seagate 7200.7.

Two are IDE, and two are SATA.

I did test the array's read speed; it was around 60MB/second with the 3-HD array.


If the four-drive array's initial performance is OK and it only slows down after writing a couple hundred megabytes, that seems really strange. I don't have a clue what might cause it.


Reads around 60MB/s are what I would call OK, at least. I'll try to get my hands on a drive so I can do my own testing this weekend, and I'll get back with my results.

