tetra-pro

Recommended SSDs for RAID 5?

I'm putting together a new workstation to be used for software development, using Visual Studio 2010 and Intel C++ on 64-bit Windows 7, and VMware Workstation with various Linux virtual machines.

The system configuration will look something like this:

Asus P6X58E-WS motherboard

W3670 Xeon processor

24 GB unbuffered ECC RAM

3GB EVGA GTX 580

LSI 9265-8i RAID controller with FastPath key and 1 GB on-board cache

2 x 1 TB WD Caviar Black in RAID 1

4 or 5 x 120 GB SSDs in RAID 5

As far as the spinners go, they will be used primarily to hold backup images of the boot and data disks (discussed below), and to hold the Linux virtual machines (which are on the order of 40 GB or so).

My thinking with the SSDs is to put 4 or 5 120 GB SSDs into a RAID 5 array. I estimate this will give me about 300-330 GB (for 4 SSDs) or 400-440 GB (for 5 SSDs) of formatted capacity. The array would then be partitioned into two volumes.
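
For what it's worth, here is a rough sketch of where those numbers come from (plain arithmetic, nothing vendor-specific); most of the gap between advertised and "formatted" capacity is just the decimal-GB-to-binary-GiB conversion:

```python
# Rough RAID 5 capacity estimate: (n - 1) drives' worth of space,
# converted from decimal GB (as marketed) to binary GiB (as Windows reports it).
GB = 1000**3
GiB = 1024**3

def raid5_usable_gib(drive_count, drive_size_gb=120):
    raw = (drive_count - 1) * drive_size_gb * GB  # one drive's worth goes to parity
    return raw / GiB

for n in (4, 5):
    print(f"{n} x 120 GB in RAID 5 -> about {raid5_usable_gib(n):.0f} GiB usable")
# 4 drives -> ~335 GiB, 5 drives -> ~447 GiB before filesystem overhead,
# which is in line with the 300-330 / 400-440 GB estimates above.
```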

The C: drive would be about 150 GB, be bootable, and contain the OS and all the software development tools. Its contents would be relatively static, changing only when there are OS or tool updates. I anticipate the total storage requirement for all this stuff to be about 80 GB.

The D: drive would be a scratch area, containing development sandboxes, temporary directories, build directories, and the Linux virtual machines I use most often. I expect the total storage requirement to be about 80-100 GB or so on this disk.

All told, my total storage needs will be about 50-60% of the capacity I will be purchasing. I'm hoping this will help with reliability, and that there should be plenty of available space on the drives for them to perform their self-maintenance.

I'm looking specifically at RAID 5 because I think it will give me a good balance of performance and reliability. According to The SSD Review and this site, I should get pretty good performance in RAID 5. At the same time, I should have limited downtime in case of an SSD failure: I can quickly swap in a spare SSD and rebuild the array in hours, instead of days of downtime waiting on a replacement.
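
To put a rough, back-of-the-envelope number on that rebuild claim (the sustained rebuild rate here is purely an assumption on my part, not an LSI spec):

```python
# Back-of-the-envelope rebuild time for one failed 120 GB member.
# The 100 MB/s sustained rate is a guess; the real figure depends on the
# controller, its rebuild-priority setting, and background I/O load.
drive_size_gb = 120
assumed_rebuild_mb_per_s = 100

rebuild_minutes = drive_size_gb * 1000 / assumed_rebuild_mb_per_s / 60
print(f"~{rebuild_minutes:.0f} minutes to rebuild one member")  # ~20 minutes at 100 MB/s
# Even at a quarter of that rate it's about an hour -- nowhere near the
# days it could take to wait on a warranty replacement.
```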

I'll be making my living with this machine, so uptime is important. As a C++ developer, compile and link speed is also important.

So, I guess my questions are:

0. Are there any major flaws in my plan? Alternatively, is there a better way to do the same thing?

1. Does anyone have experience using SSDs in RAID 5?

2. Since TRIM cannot be passed through to RAID 5 arrays, how important is garbage collection?

3. What SSDs are best suited to this scenario?

Thanks in advance for any insight.

--Bob


Buy whatever's qualified by LSI in this case. You don't want drives randomly dropping out of your array.

I haven't seen LSI's hardware compatibility list but that would at least be a good starting point. Garbage collection is definitely something I would consider very important...

0. Alternatives

For "real" scratch drives (temp/swap), I would go RAID 0 or use some of your 24GB memory in a ramdrive

For boot drive, I would use a small 100MB partition on your HDD

For OS/Apps drive that the boot links to, I will create a SSD partition on a RAID 5 array taking care of having the RAID 5 stripe size to equals the OS "page" size (Windows cluster size can be up to 64KB)

For scratch Data drives, I will use the OS/Apps partition above

1. No known problems, as long as you know what aligning a partition means and you disable useless background processes (defrag, indexing, etc.).
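
In case it helps, here is a minimal sketch of what "aligned" means in practice: the partition's starting offset should be an exact multiple of the array's stripe size, and ideally the NTFS cluster size matches the stripe size so a cluster never straddles two stripes. The offsets below are just the well-known Windows defaults, used as examples:

```python
# A partition is aligned when its starting offset is an exact multiple of
# the RAID stripe size (and the filesystem cluster size divides the stripe).
def is_aligned(offset_bytes, stripe_bytes):
    return offset_bytes % stripe_bytes == 0

stripe = 64 * 1024               # example 64 KB stripe size
vista_win7_offset = 1024 * 1024  # Vista/Win7 start partitions at 1 MiB by default
xp_legacy_offset = 63 * 512      # old 63-sector (32,256-byte) start

print(is_aligned(vista_win7_offset, stripe))  # True  -> aligned
print(is_aligned(xp_legacy_offset, stripe))   # False -> misaligned
```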

2. The TRIM command cannot be passed through in RAID... and in most cases you don't care at all, because TRIM only tells the garbage collector (GC) at deletion time which sectors can be reset later. In many situations you only update sectors, which means the SSD writes to fresh sectors and marks the old ones to be reset anyway. As long as you leave enough free space for the GC to work with (every SSD already has roughly 10% of invisible "spare area"), you're fine, and that is your case.
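
As a rough illustration of why the free space matters: the percentages below are illustrative, not from any datasheet, and only space that has never been written counts, since without TRIM the controller can't know about deleted files.

```python
# Rough estimate of how much NAND the controller can treat as scratch space
# for garbage collection: the hidden factory spare area plus never-written
# user space. Percentages are illustrative, not from any datasheet.
factory_spare = 0.07   # ~7%: e.g. 128 GiB of NAND exposed as a ~120 GB drive
never_written = 0.45   # ~45% of the volume left untouched, per the plan above

effective_spare = factory_spare + (1 - factory_spare) * never_written
print(f"~{effective_spare:.0%} of the NAND is free for garbage collection")
# ~49%, versus only ~7% on a completely full drive with no TRIM.
```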

3. SandForce SF-2000-series based SSDs are the best as of today; Intel's are almost equivalent but, in my opinion, not worth the price gap. Most SSD manufacturers provide TRIM alternatives (Intel's SSD Toolbox, for example) that "refresh" their SSDs.


A. Do you really need RAID for this? What are the chances of a drive failing? Do you do frequent backups? Would a single SSD meet your performance needs? How about an SSD per drive letter? You did say uptime is important, so why not pay a little more for high-capacity SSDs and reduce the complexity of the system by avoiding RAID entirely? Yes, single drives are a single point of failure, but unless you have spare RAID controllers lying around, your RAID controller/motherboard is a single point of failure with hardware RAID or BIOS-level "fake" RAID. That gets into spare CPUs, RAM, and so on, which as single points of failure could keep you from accessing the data if you're not using software RAID or plain disks.

You could go as far as putting all the data on a single drive like the Crucial M4/C400 512 GB or Intel 320 Series 600 GB. Sure, it isn't RAID, but RAID isn't the only way to ensure uptime. You still have to do backups, so even a solution as simple as buying an entire spare PC would let you restore data to another machine and keep going.

B. You have two volumes (drive letters, partitions, pick a term), so why not just do two RAID 1 arrays? One answer might be that your planned C: will hold more data than your planned D:. Another take is that it might be cheaper to buy lower-capacity SSDs and RAID them, but you did say uptime is important, so why not pay a little more for high-capacity SSDs and reduce the complexity of the array? Again, the question of hardware vs. "fake" vs. OS-level RAID determines which components are still a single point of failure.

C. My god, it's 2011; why are you looking at RAID 5? See BAARF (google it if you aren't familiar) and realize that RAID 5 has been looked down on by many in the storage industry for close to a decade. If single drives or RAID 1 sets don't fit your needs, how about RAID 10? See http://www.silentpcreview.com/forums/viewtopic.php?p=388987 for my thoughts from 2008 on the matter of RAID levels.
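
For comparison, here is the usable-capacity versus fault-tolerance trade-off with the same 4 x 120 GB drives (simple arithmetic, ignoring formatting overhead):

```python
# Usable capacity and worst-case failure tolerance for 4 x 120 GB SSDs.
drives, size_gb = 4, 120

layouts = {
    "RAID 0":    (drives * size_gb,       0),  # stripe: all capacity, no redundancy
    "RAID 5":    ((drives - 1) * size_gb, 1),  # one drive's worth of parity
    "RAID 10":   (drives // 2 * size_gb,  1),  # mirrored pairs; survives 2 if they hit different pairs
    "2x RAID 1": (drives // 2 * size_gb,  1),  # two independent mirrors, 1 failure per mirror
}

for name, (usable_gb, survives) in layouts.items():
    print(f"{name:<10} {usable_gb:>4} GB usable, survives at least {survives} failure(s)")
```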

The more you try to minimize downtime, the more you'll minimize your bank account:

If you really need uptime, you might consider picking 2 or 3 SSD controller families (Intel, Marvell, SandForce) and using drives from different families in your RAID 1 legs, or keeping enough drives from another family in stock to replace all the drives in the array with a different type if needed. Heck, you could even buy some PCIe drive controllers so you have spare controllers on hand to swap into a system with a different chipset. I'm sure we could all suggest other ways to spend money to reduce downtime.

Good luck on whatever you decide to do.

Edited by dhanson865
