Maximum number of drives in RAID 5



#1 kosov22

Member · 1 post

Posted 01 March 2002 - 07:18 PM

I have a question about RAID 5. Is there a limit to how many drives you can have in one RAID 5 array? For example, the Escalade 7850 IDE RAID card can support 8 drives; is it possible to use them all in one array and get the space of 7 drives, with one drive's worth of capacity going to the parity information? Also, if it is possible, are there any drawbacks to having that many drives in one RAID 5 array?

#2 jehh

Posted 01 March 2002 - 07:28 PM

No, RAID 5 itself imposes no limit.

In very high-end servers, there are RAID 5 arrays with hundreds of hard drives in them.

As for the 3ware card, yes, you can have them all in one array and get the space of 7 drives. That is largely the point of that card. :D

Jason

#3 jchung

Posted 01 March 2002 - 07:41 PM

I have a question about RAID 5. Is there a limit to how many drives you can have in one RAID 5 array? For example, the Escalade 7850 IDE RAID card can support 8 drives; is it possible to use them all in one array and get the space of 7 drives, with one drive's worth of capacity going to the parity information? Also, if it is possible, are there any drawbacks to having that many drives in one RAID 5 array?


I'm not certain about the Escalade. Technically it depends on the controller. Practically, you don't want to use too many drives per RAID 5 array. A RAID 5 array can only tolerate the failure of 1 drive; if 2 drives fail, you are toast. The more drives you have in a RAID 5 array, the greater your likelihood of 2 failed drives. If you want to use RAID 5, I suggest you use 7 drives for the RAID 5 and leave 1 drive as a hot spare.

In large installations, instead of using one incredibly large RAID 5 array, they would use multiple smaller RAID 5 arrays, and then maybe RAID 0 across the RAID 5 arrays.
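
To put rough numbers on that two-failure risk, here is a quick back-of-the-envelope sketch in Python (the per-drive failure probability is made up purely for illustration):

    # Chance that a RAID 5 set loses data, i.e. that 2 or more of its drives
    # fail within the same window, assuming independent failures.
    def raid5_loss_probability(drives, p_drive):
        p_none = (1 - p_drive) ** drives
        p_one = drives * p_drive * (1 - p_drive) ** (drives - 1)
        return 1 - p_none - p_one

    p = 0.03  # hypothetical chance that a single drive fails during the window
    for n in (4, 8, 16, 32):
        print(f"{n:2d}-drive RAID 5: {raid5_loss_probability(n, p):.2%} chance of 2+ failures")

The wider the set, the faster that number climbs, which is one reason large installations split drives into several smaller RAID 5 sets and keep hot spares around.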

Joo

#4 jehh

Posted 01 March 2002 - 07:45 PM

Most very large arrays provide for hot spares...

If you have a RAID 5 array with 100 drives in it, you might have 5 or 10 hot spare drives as well.

Jason

#5 Trinary

Member · 1,115 posts

Posted 04 March 2002 - 12:14 PM

There are also some unusual proprietary RAID variants offered by individual companies.

Compaq, for example, offers what is called "Data Guarding", which protects an array against up to two simultaneous drive failures, at the cost of using an extra drive's worth of space.
Trinary

#6 russofris

Member · 2,120 posts

Posted 04 March 2002 - 03:15 PM

I have a question about RAID 5. Is there a limit to how many drives you can have in one RAID 5 array?


Depends on the controller. I remember a 72-physical-disk limit on the Compaq ProLiant 4500, 5000, and 5500. On the IBM PC325, the limit was 96 physical disks.

If you are going above 64 disks, you should be using Fibre Channel (FC) or external controllers (a SAN).

Hope this helps,
Frank Russo

#7 cypherpunks

Member · 48 posts

Posted 16 August 2007 - 09:46 AM

I have a question about RAID 5. Is there a limit to how many drives you can have in one RAID 5 array? Also, if it is possible, are there any drawbacks to having that many drives in one RAID 5 array?


Great question; I'm looking for the answer as well. I don't think the other posters in this thread understood the question. Basically, when using the redundancy algorithm (ECC, error-correcting code) in RAID-5, what is the maximum amount of data loss possible while still being able to reconstruct the original data in full? 25% (4-drive array)? 20% (5-drive array)? 1% (100-drive array)?

And I think I've finally figured it out: http://www.commodore...raid5/raid5.htm (take a look at the 2nd and 3rd illustrations).

the "parity" information really is PARITY (revisit parity if you forgot what it is). it doesn't matter how many bits/drives there are, as long as only ONE fails, the data can be recovered. you can have an INFINITE number of drives and maintain full data recovery, as long as only one fails and the rest stay up during your rebuild process. that's awesome!

That said, there's a risk that the time it takes to rebuild an enormous array is long enough for another drive to fail. That's where RAID-6 comes in: it uses two drives' worth of parity. Here's an article about it: http://www.serverwat...10823_3508871_1 It quotes Richard Scruggs, HP's product manager for server storage, as saying: "RAID 5 had a limit of somewhere between 10 and 14 drives in one system". By "limit", he was referring to the risk of another drive failing during the rebuild operation. For home users, that risk may outweigh the price of another drive being "wasted" on parity.

I'm running a 6x400GB RAID-5 through Windows XP's software RAID right now (it had to be hacked to enable support in the workstation OS), and it's working great. I had to cross my fingers and hope that 6 drives would work. I'm really glad to know what the limitations are now!

#8 DigitalFreak

Member · 134 posts

Posted 17 August 2007 - 01:24 PM

Wow! I think that might qualify for an award for the resurrection of the longest dead thread in history! :lol:

#9 Trinary

Member · 1,115 posts

Posted 17 August 2007 - 04:14 PM

I have a question about RAID 5. Is there a limit to how many drives you can have in one RAID 5 array? Also, if it is possible, are there any drawbacks to having that many drives in one RAID 5 array?


Great question; I'm looking for the answer as well. I don't think the other posters in this thread understood the question. Basically, when using the redundancy algorithm (ECC, error-correcting code) in RAID-5, what is the maximum amount of data loss possible while still being able to reconstruct the original data in full? 25% (4-drive array)? 20% (5-drive array)? 1% (100-drive array)?

And I think I've finally figured it out: http://www.commodore...raid5/raid5.htm (take a look at the 2nd and 3rd illustrations).

The "parity" information really is parity (revisit parity if you forgot what it is). It doesn't matter how many bits/drives there are; as long as only ONE fails, the data can be recovered. You can have an arbitrarily large number of drives and maintain full data recovery, as long as only one fails and the rest stay up during your rebuild process. That's awesome!

That said, there's a risk that the time it takes to rebuild an enormous array is long enough for another drive to fail. That's where RAID-6 comes in: it uses two drives' worth of parity. Here's an article about it: http://www.serverwat...10823_3508871_1 It quotes Richard Scruggs, HP's product manager for server storage, as saying: "RAID 5 had a limit of somewhere between 10 and 14 drives in one system". By "limit", he was referring to the risk of another drive failing during the rebuild operation. For home users, that risk may outweigh the price of another drive being "wasted" on parity.

I'm running a 6x400GB RAID-5 through Windows XP's software RAID right now (it had to be hacked to enable support in the workstation OS), and it's working great. I had to cross my fingers and hope that 6 drives would work. I'm really glad to know what the limitations are now!


RAID 1+0 offers better performance (especially during rebuilds) than RAID 5, and also offers much higher reliability. With RAID 1+0, the only failure you have to worry about taking down the array is both drives in the same mirror pair failing.

Of course, you will use twice as many drives, but given the cost of a quality RAID controller with RAID 5/6 support, onboard cache, and an onboard battery, RAID 1+0 may actually cost less overall than implementing a quality RAID 5 or RAID 6 solution.
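
A quick sketch of that trade-off in Python (the per-drive failure probability is made up; the point is the shape of the comparison):

    # Chance of losing the whole array, assuming independent drive failures.
    def raid5_loss(n, p):
        # RAID 5 dies once ANY 2 of its n drives are gone.
        return 1 - (1 - p) ** n - n * p * (1 - p) ** (n - 1)

    def raid10_loss(n, p):
        # RAID 1+0 dies only if BOTH drives of some mirror pair are gone.
        return 1 - (1 - p * p) ** (n // 2)

    n, p = 8, 0.03  # hypothetical: 8 drives, 3% per-drive failure chance in the window
    print(f"RAID 5   ({n - 1} drives of usable space): {raid5_loss(n, p):.2%}")
    print(f"RAID 1+0 ({n // 2} drives of usable space): {raid10_loss(n, p):.2%}")

Same drive count, a much lower chance of losing the array, at the cost of half the usable capacity.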
Trinary

#10 Ralf

Member · 147 posts

Posted 20 August 2007 - 06:40 AM

Write performance typically takes a huge hit in RAID 5 with many drives (if you write random sectors, the controller has to read back old data and parity, or the rest of the stripe, to calculate the new parity).
Large high-performance RAID 5 setups are thus often actually RAID 5+0. With Win XP Pro you can just use its built-in software striping to bundle a bunch of hardware RAID 5 arrays / HBAs.
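
To spell out that parity update (a quick sketch in Python; the block contents are just made up):

    # A small RAID 5 write is a read-modify-write: read the old data block and
    # the old parity, XOR the old data out and the new data in, write both back.
    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    old_data   = b"\x11" * 4   # hypothetical current contents of the block
    old_parity = b"\x77" * 4   # hypothetical current parity for the stripe
    new_data   = b"\x2a" * 4   # what the host wants to write

    new_parity = xor_bytes(xor_bytes(old_parity, old_data), new_data)

So one small logical write costs two reads and two writes on the array no matter how wide the stripe is, and the alternative is reading the rest of the stripe to recompute parity from scratch.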

#11 todd bailey

Member · 1 post

Posted 06 April 2012 - 12:41 PM

Write performance typically takes a huge hit in RAID 5 with many drives (if you write random sectors, the controller has to read back old data and parity, or the rest of the stripe, to calculate the new parity).
Large high-performance RAID 5 setups are thus often actually RAID 5+0. With Win XP Pro you can just use its built-in software striping to bundle a bunch of hardware RAID 5 arrays / HBAs.


Software RAID 5 or 6 is usually a bad idea, primarily due to the load placed on the host computer.
A hardware-based solution is the preferred configuration, but it (usually) comes at a high cost, and even then the controller is usually the limiting factor. SCSI RAID controllers typically limit the number of drives to 15 per channel; SATA and SAS controllers have other limits. A lot depends on the desired I/O performance: a 4-drive RAID 5 array has lower performance than an 8-drive array, which in turn has lower performance than a 16-drive array. But at some point the controller becomes the bottleneck, and adding more drives doesn't improve performance. It's best to design the system first: determine the use case and bandwidth needs before going out and buying hardware.
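
A back-of-the-envelope sketch of that bottleneck in Python (all the MB/s figures are hypothetical):

    # Aggregate drive throughput grows with drive count, but it is capped by
    # whatever the controller / bus can actually move.
    def effective_throughput(drives, per_drive_mb_s, controller_limit_mb_s):
        return min(drives * per_drive_mb_s, controller_limit_mb_s)

    per_drive = 60    # MB/s per drive, hypothetical
    controller = 600  # MB/s controller/bus ceiling, hypothetical

    for n in (4, 8, 16):
        print(f"{n:2d} drives: ~{effective_throughput(n, per_drive, controller)} MB/s")

Past the point where the drives saturate the controller, extra spindles add capacity and rebuild overhead but no throughput.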

#12 continuum

Mod · 3,518 posts

Posted 09 April 2012 - 01:00 PM

Holy old threads, Batman!


