48 Drives iSCSI SAN ...

We are starting the project of building a 48-drive iSCSI SAN. It will be the backing storage for 8 Xen servers running 300+ VM instances, so high performance is really important, as you may guess :)

After our first research we are considering the following:


- SuperMicro 417E16-R1400UB or 417E26-R1400UB (http://www.supermicro.com/products/chassis/4U/417/SC417E26-R1400U.cfm)

The basic difference is the SAS expanders: the E16 has a single expander chip and the E26 a dual expander chip (both SAS2 6Gb/s).


Not decided yet, but it will be SuperMicro (they are our main hardware provider), probably with the following specs:

Dual Xeon 5600 Series

48 GB RAM (to use as cache)

Network Cards:

Not decided yet; we are mainly looking at Intel cards that support DCA to reduce network interrupt load, and/or cards with iSCSI offload.

RAID Controller:

Here is the big issue: the only RAID controller we've found that supports a 48-drive RAID 10 is the Adaptec 5 Series, but we are unsure how this card would perform in this setup. As you can see in the link below, it supports 32+ drives in a single RAID 10 array.


Hard Drives:

We are thinking about using WD VelociRaptor 600GB drives, as we have had really good experience with them in the past, but we are open to other options. The only requirements are at least 10,000 RPM and a 2.5" form factor (drives with less than 600GB per drive are also fine).
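For a rough sense of what such an array would deliver, here is a back-of-envelope sketch. The per-drive IOPS figure is an assumption (a typical ballpark for a 10K RPM 2.5" drive, not a number from this thread):

```python
# Rough capacity and random-IOPS estimate for a 48-drive RAID 10 array.
# Assumed: ~140 random IOPS per 10K RPM drive; RAID 10 mirroring halves
# usable capacity while reads can be served from every spindle.

DRIVES = 48
DRIVE_GB = 600
IOPS_PER_DRIVE = 140  # assumed ballpark for a 10K RPM 2.5" drive

usable_tb = DRIVES * DRIVE_GB / 2 / 1000   # mirrors halve raw capacity
read_iops = DRIVES * IOPS_PER_DRIVE        # reads can hit every spindle
write_iops = DRIVES * IOPS_PER_DRIVE // 2  # each write lands on two drives

print(f"usable capacity: {usable_tb:.1f} TB")  # 14.4 TB
print(f"random read IOPS ~ {read_iops}")       # 6720
print(f"random write IOPS ~ {write_iops}")     # 3360
```

Roughly 14 TB usable and a few thousand random IOPS spread across 300+ VMs, under those assumptions.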

iSCSI target Software:

We are considering two solutions, StarWind and Open-E. We actually prefer StarWind because it can use a large amount of RAM as write-back cache (we plan to dedicate the 48 GB of RAM to that), and since we plan an Active/Active HA setup, even if one of the SANs goes offline there would be no data loss, as the other SAN is replicated synchronously.

The good point of Open-E is that it would allow us to build 2 x 24-drive RAID 10 arrays and stripe them into a single RAID 0, but StarWind's cache management seems like a really big benefit.

So, if anybody could provide advice on the following questions, we would really appreciate it :) :

1.- Do you know of or recommend any controller for a 48-drive RAID 10 array?

2.- Any experiences with Open-E or StarWind software? Any recommendations?

3.- Obviously, any other suggestion or tip to improve our setup would be highly appreciated :) .

Thanks a lot for your help! (And sorry about my English, I'm not a native English speaker :P )


StarWind also has de-duplication (even in their free editions), which is great for virtualization projects since the disk savings are HUGE. And with de-duplication and compression, the RAM cache is also de-duplicated and compressed, so it's MUCH more effective than on systems without dedupe. FYI.


Hi Kooler,

Nice to know. The only problem is that StarWind doesn't currently support deduplication in its HA editions (their website says "coming soon"), but what you say makes perfect sense :)

Also, I've contacted Areca, and they told me they are going to implement RAID 100 on their controllers, with up to 128 drives per array, so we are considering filling the complete chassis with 72 drives in RAID 10...

Do you guys think that makes sense, or is it too much? (We are planning to export the array as an iSCSI target over 8 x 1Gbps Ethernet links.)


I was in a similar spot to where you are now. After doing tons of research and configs, I gave up on the home-built model: I was not confident in the uptime of a home-built product. Instead I went with HP P2000s (DAS, SAS) and Seagate drives (10K.5s and 15K.3s, all 2.5"; whatever you do, don't buy HP drives... crazy expensive). Out of the box they only support 4 HA servers, but that can be fixed by adding HA LSI SAS switches (~$2,200 each: http://www.cdw.com/shop/products/LSI-SAS6160-switch-16-ports/2266936.aspx?enkwrd=ALLPROD%3a|LSI%2520SAS6160%2520Switch|All%20Product%20Catalog).

You haven't touched on what these 300 VMs will be doing. You said performance is very important, but what kind of performance are you looking for? Sequential, random, etc.? I have found that smaller RAID 5 sets of 8 drives work best. While R10/100 would be good, you are going to lose half the space right off the bat. Plus, I think you are setting yourself up for failure: with one big array and 300 VMs all trying to hit it at once, you are going to have a queue longer than the Mississippi River. Again, this is just my opinion, take it or leave it, just don't bitch at it please. I would suggest creating 6 or 9 sets of 8 drives in RAID 5 and spreading the VMs between them.

Also, I would suggest using SAS drives instead of SATA so you can take advantage of dual ports in case one of many things dies: controller, cable, connector, backplane, etc. With that said, HA controllers, I assume, would be very important to you; if not, disregard. I was unable to find a good dual-controller home-built setup. Lastly, I would HIGHLY recommend 10Gb Ethernet. The Dell 10Gb Ethernet solutions are rather good for the price (http://www.dell.com/us/business/p/managed-10gigabit-ethernet-switches).
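To make the RAID 5 vs. RAID 10 trade-off concrete, here is a rough comparison of the suggested 6 x (8-drive RAID 5) layout against one 48-drive RAID 10. The drive size matches the thread; the per-drive IOPS and the standard write penalties (4 I/Os per write for RAID 5, 2 for RAID 10) are assumptions:

```python
# 6 x (8-drive RAID 5) vs one 48-drive RAID 10, with assumed figures:
# 600 GB drives, ~140 random IOPS each, write penalty 4 (R5) / 2 (R10).

DRIVE_GB, IOPS = 600, 140

r5_usable_tb = 6 * (8 - 1) * DRIVE_GB / 1000   # one parity drive per 8-set
r10_usable_tb = 48 * DRIVE_GB / 2 / 1000       # mirrors halve capacity

r5_write_iops = 48 * IOPS // 4    # RAID 5 read-modify-write costs 4 I/Os
r10_write_iops = 48 * IOPS // 2   # RAID 10 costs 2 I/Os per write

print(r5_usable_tb, r10_usable_tb)    # 25.2 vs 14.4 TB usable
print(r5_write_iops, r10_write_iops)  # 1680 vs 3360 random write IOPS
```

Under those assumptions, RAID 5 buys significantly more usable space at the cost of roughly half the random write throughput, which is why the workload question (sequential vs. random) matters so much here.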

I know some of the stuff I recommended may be a little pricey, but if you want to run 300 VMs you need a quality setup or reliability will suffer. Also, keep in mind that all of this can be had for much cheaper than the advertised price. I have an awesome rep at CDW who has been a tremendous help with my projects; let me know if you would like his info. He is a no-BS sales guy. I hope this was helpful. If not, pose some more questions and we'll see what we can do to help.

