
Need opinions on choice of RAID


To preface, I'm aware of the benefits of both RAID choices.

RAID 6 allows additional fault tolerance and data consistency from the second parity drive.

RAID 5 allows for faster writes.

In this specific implementation, splitting into two RAID 5 arrays would allow for easy upgrades in the future.

One array could be removed, new drives inserted, and data could be mirrored onto a new array when required.

The procedure could then be repeated on the second array.

If RAID 6 were implemented, mirroring the data would require a second machine and RAID controller.

My questions concern practical experience with rebuild times and data corruption (RAID 5).

How often have people experienced corruption issues in RAID 5 volumes where the parity drive hosed all the data?

How long are typical RAID 5 rebuilds with arrays between 3-5TB? All feedback would be appreciated.

RAID 5 -- (2 arrays)

------------------------

5x 1000GB = 3724GB

5x 1000GB = 3724GB

------------------------

Total........... 7448GB

or

RAID 6 -- (1 array)

------------------------

10x 1000GB = 7448GB
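For reference, the usable capacities above can be reproduced with a quick back-of-envelope sketch (my own, assuming 1000 GB drives reported as 10^12 bytes; controllers reserve a little metadata, which is why the quoted figures come out slightly lower):

```python
# Usable capacity for the two layouts, assuming "1TB" drives of 10^12
# bytes each, expressed in binary GiB (what most RAID BIOSes display).
DRIVE_BYTES = 1000 * 10**9  # decimal "marketing" gigabytes

def usable_gib(drives: int, parity: int) -> float:
    """Capacity left after parity, in GiB."""
    return (drives - parity) * DRIVE_BYTES / 2**30

raid5_two_arrays = 2 * usable_gib(5, parity=1)   # two 5-drive RAID 5s
raid6_one_array  = usable_gib(10, parity=2)      # one 10-drive RAID 6

print(f"2x RAID 5: {raid5_two_arrays:.0f} GiB")
print(f"1x RAID 6: {raid6_one_array:.0f} GiB")
```

Both layouts give up two drives' worth of capacity in total, so the usable space is identical either way.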

Edited by parity

RAID 5 -- (2 arrays of 5x 1TB) or RAID 6 -- (1 array of 10x 1TB)

It looks the same, but I would choose the RAID 5 option, because putting that many large drives (1TB!) in a single array sends your failure probability through the roof.

I hope you are not going to use those arrays for anything other than large file copies from a few clients.

Edited by HachavBanav

because putting that many large drives (1TB!) in a single array sends your failure probability through the roof

Why? Do big drives fail more than small ones?

There are 2 "parity drives" in each case.


No, just having more drives rather than fewer raises the likelihood of failure, period.

Plus, rebuild times with a larger array are of course going to be worse than with a smaller array, so it is probably to your advantage to run two smaller arrays.
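To put rough numbers on that, here is a back-of-envelope sketch (my own, not from any vendor): treat drive failures within some window as independent with the same per-drive probability p, and compare one 10-drive RAID 6 (survives 2 failures) against two 5-drive RAID 5s (each survives 1). It ignores rebuild stress and unrecoverable read errors, so take it as illustrative only:

```python
# Binomial sketch of array-loss probability, assuming independent
# drive failures with identical per-drive probability p in the window.
from math import comb

def p_array_loss(n: int, tolerated: int, p: float) -> float:
    """P(more than `tolerated` of n drives fail), X ~ Binomial(n, p)."""
    survive = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                  for k in range(tolerated + 1))
    return 1 - survive

p = 0.01  # hypothetical 1% per-drive failure probability in the window
raid6 = p_array_loss(10, tolerated=2, p=p)           # one 10x RAID 6
raid5_pair = 1 - (1 - p_array_loss(5, 1, p))**2      # either 5x RAID 5

print(f"10x RAID 6 loss:   {raid6:.2e}")
print(f"2x 5-drive RAID 5: {raid5_pair:.2e}")
```

With p = 1%, the single RAID 6 comes out roughly an order of magnitude less likely to lose data than the pair of RAID 5s, even though it has more drives.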

How often have people experienced corruption issues in RAID 5 volumes where the parity drive hosed all the data?
A few times. Mostly in smaller RAID5s though (by today's standards at least, under 3TB).
How long are typical RAID 5 rebuilds with arrays between 3-5TB? All feedback would be appreciated.
The one time I had this happen it took something like 50 hours on an Areca ARC-1680. That's 50 hours of vulnerability to a 2nd disk failure... not something you want in a mission-critical situation.

No, just having more drives rather than fewer raises the likelihood of failure, period.

Plus, rebuild times with a larger array are of course going to be worse than with a smaller array, so it is probably to your advantage to run two smaller arrays.

I'd prefer to run smaller arrays, but I'm worried about problems with RAID 5.

How often have people experienced corruption issues in RAID 5 volumes where the parity drive hosed all the data?
A few times. Mostly in smaller RAID5s though (by today's standards at least, under 3TB).

This is what I'm afraid of, and why I'm considering RAID 6 with 10x 1TB. I don't want to lose the data. I don't have the ability to do frequent backups, so I need to choose the most cost-effective implementation to ensure the data is kept intact and not lost.

How long are typical RAID 5 rebuilds with arrays between 3-5TB? All feedback would be appreciated.
The one time I had this happen it took something like 50 hours on an Areca ARC-1680. That's 50 hours of vulnerability to a 2nd disk failure... not something you want in a mission-critical situation.

Unfortunately, I purchased an Areca ARC-1231 prior to the release of the ARC-1680, and I've read that rebuild times are faster with the newer Intel IOP. If I create the larger 10x 1TB RAID 6 array, I increase rebuild times and the chance of a failure during them, but I also gain an additional parity drive to help ensure data integrity and guard against losing the array mid-rebuild. I have no idea how long a rebuild of a 7+TB RAID 6 array would take on the Intel IOP341.
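For a rough feel of the numbers, rebuild time scales with the capacity of the replacement drive divided by the sustained rebuild rate (larger arrays tend to rebuild slower per MB, since more member drives must be read). The rates below are made-up round numbers for illustration, not Areca benchmarks:

```python
# Illustrative rebuild-time estimate: the controller must regenerate and
# write the full replacement drive, so time ~ capacity / rebuild rate.
def rebuild_hours(drive_bytes: int, rate_mb_s: float) -> float:
    """Hours to rewrite one drive at a sustained rate in MB/s."""
    return drive_bytes / (rate_mb_s * 10**6) / 3600

for rate in (10, 30, 60):  # MB/s, hypothetical sustained rebuild rates
    print(f"1 TB at {rate:>2} MB/s: {rebuild_hours(10**12, rate):.1f} h")
```

Even at a healthy 30 MB/s the window of vulnerability is the better part of a day per drive, which is worth weighing against the second parity drive.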

Edited by parity


Without thinking too much about it, it seems to me that a 10-drive RAID 6 array would have better expected reliability than two 5-drive RAID 5s. As for expansion, you could always perform an online capacity expansion, swapping out one drive at a time.

I'm not sure how the Areca works, but I assume that if you swapped out, say, 5 drives for larger ones, you could put another array on the newly available space, making upgrades easy.


Well, your ARC-1231 also has a dual-core IOP on it, so you should still be in the same ballpark as far as rebuild times go. The issue is that the window of vulnerability for such a large array is naturally much longer (to rebuild) than for a smaller array, and if you unknowingly have more drives on the way out than you have parity drives, you are very exposed to failure.

Make the decision easy: two 6-drive RAID 6 arrays.

Not cost effective to lose four drives out of twelve, which would be the case with two RAID 6 arrays. =/

You said you absolutely don't want to lose your data and that you can't even do a real backup. If your data is that critical, this is suicide! Even 2 RAID 6 arrays will be. Imagine a lightning strike, a broken water pipe, a thief, etc. RAID is only meant to keep your data available (or increase performance); it's no replacement for a backup. There are a lot of people who had to learn that the hard way, but you still have a choice! ;)

Also, a 1TB drive is only about 100 EUR. Not really much compared to the controller and the other 10 drives. And think about the time you've already spent on this.

because putting that many large drives (1TB!) in a single array sends your failure probability through the roof
Why? Do big drives fail more than small ones?

Big drives fail more than small ones because the spec for "non-recoverable read errors per bits read" (e.g. on the WD RE3) is always quoted as "< 1 in 10^15".

So the chance of a non-recoverable read error depends on how many bits are read, and a full read of a 1TB drive covers more bits than a full read of a 300GB one.
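That spec can be turned into a concrete number. Assuming UREs are independent events occurring at exactly 1 per 10^15 bits (the quoted figure is an upper bound, so real drives should do better), the chance of hitting at least one while reading a whole drive end to end is:

```python
# Probability of at least one unrecoverable read error (URE) when
# reading a whole drive, at the spec'd rate of 1 error per 10^15 bits.
import math

URE_PER_BIT = 1e-15  # WD RE3-class spec, treated as an exact rate here

def p_any_ure(bytes_read: int) -> float:
    bits = bytes_read * 8
    # 1 - (1 - p)^bits, computed stably for a tiny per-bit probability
    return -math.expm1(bits * math.log1p(-URE_PER_BIT))

print(f"300 GB drive: {p_any_ure(300 * 10**9):.3%}")
print(f"  1 TB drive: {p_any_ure(1000 * 10**9):.3%}")
```

A rebuild has to read every surviving member drive in full, so the exposure multiplies with the number of drives in the array.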

Not cost effective to lose four drives out of twelve, which would be the case with two RAID 6 arrays. =/
Beats losing all your data, doesn't it?

And better than a 50% hit from RAID10. ;)

because putting that many large drives (1TB!) in a single array sends your failure probability through the roof

Why? Do big drives fail more than small ones?

There are 2 "parity drives" in each case.

Yeah, I forget the details, but I read a blog about this. Basically yes, using bigger drives increases the chance of problems in a parity RAID array, because you are still at risk of failure while rebuilding the array after a drive dies. Assume you can only tolerate one more disk failure during the rebuild: if just one sector on one surviving drive is unreadable, then that part of the rebuild fails. And larger drives have more unreadable sectors than smaller ones, since a certain fraction of sectors always goes bad. Something like that anyway. boiiiing!

