jpreed00

DIY SAN Questions


I don't mean to hijack the thread, but it seems like this would be an appropriate place to ask my question. I'm trying to build something along the lines of what the original poster is, but for a business environment. I don't have the budget to purchase a brand-name SAN, but it appears that it is completely possible to build one.

The purpose would be to export an iSCSI connection (using OpenFiler) and use it as shared storage for virtualization. Is there a thread somewhere on the forums that outlines the type of hardware necessary for this type of application?

I apologize if this is improperly posted. I noticed that this particular forum doesn't have a Storage By Function -> Small Business Use section, so I'm kinda at a loss about where to post. :)

Thanks guys (and/or gals)!

Edit: This was the thread where my post originally appeared before it was moved. Check out the hardware description:

Edited by jpreed00

Migrated from this thread -

And due to several requests, there is now an SMB/Enterprise storage forum.

Back to the question at hand... what's your budget, and how much activity are you expecting on the server?

Well, the server will act as a SAN for a small- to medium-sized virtualization project. Maybe 15-30 VMs max? I don't really have a hard number for the budget; I'm still trying to get a feel for the various options and requirements for such a server. Obviously, the cheaper the better, but before putting together a proposal I'd like to understand the pros and cons of the various hardware choices.

Oh, and thanks for starting the new section!

The major problem with giving advice, especially about things like I/O (which is what you need to know here), is that 15-30 VMs could mean anything from 30 servers doing absolutely nothing to 15 large enterprise SQL Server or Exchange machines running thousands of IO/s.

Basically, size, spindle count, etc. all need to be weighed against the performance requirements you have. Once you have both pieces of information, you pretty much know what you need and can build a system from that knowledge.
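To make that concrete, here is a rough back-of-the-envelope sizing sketch. The per-disk IOPS figures, the 75/25 read/write mix, and the RAID 10 write penalty of 2 are illustrative assumptions, not vendor numbers:

```python
# Rough spindle-count estimate from a frontend IOPS target.
# Writes cost extra backend I/O: RAID 10 does 2 backend writes per
# frontend write, RAID 5 does 4. Per-disk IOPS are ballpark figures.

import math

PER_DISK_IOPS = {"7.2k SATA": 80, "10k SAS": 130, "15k SAS": 180}

def spindles_needed(frontend_iops, read_pct=0.75, write_penalty=2,
                    disk="10k SAS"):
    reads = frontend_iops * read_pct
    writes = frontend_iops * (1 - read_pct)
    backend_iops = reads + writes * write_penalty
    return math.ceil(backend_iops / PER_DISK_IOPS[disk])

# e.g. 30 light VMs at ~50 IOPS each, RAID 10 on 10k SAS:
print(spindles_needed(30 * 50))
```

The point of the exercise is exactly what the post above says: without a measured IOPS number and read/write mix, the spindle count can swing by an order of magnitude.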

I understand and totally agree with your point. However, since I can't give you the details you need for a definitive answer, how about providing a range of options? For example, such-and-such a combination of hardware would support 15-20 very high-bandwidth VMs, whereas such-and-such el-cheapo components would probably be fine if you're only expecting low-traffic workloads.

Is providing a range of options possible?

Edited by jpreed00

I would probably just buy a server like a Dell R610, fit it with two PERC H800s, and buy the needed number of MD1200 chassis, then try to get the disks somewhere cheap like Newegg. The main problem will be getting the drive caddies when you don't buy the disks from Dell.

The major problem with cheap solutions is reliability. If you don't have every component redundant, that's where the system fails (and that's typically why enterprise storage costs a premium). In enterprise SANs every component should be redundant, from storage controllers to chassis interfaces to the switches and HBAs/NICs.

I'm sure there are cheaper routes through something like Supermicro, but don't expect to build 2x12 disks for anything under $10k.

How would you rate this one? Assuming quality enterprise-level drives?

I would go for a server PSU instead of a high-end gaming PSU; gaming units aren't rated for 24/7 operation, so I wouldn't trust them. I would look into a good redundant server PSU.

As you said, the drives need to be rated for 24/7 as well, and I would look into how to protect the data from something disastrous happening, like data corruption.

Also, I would figure out how it's supposed to be connected to the network. Are you planning on running multipathed 1 Gb NICs with 4-8 ports, or are you planning on running 10GbE?

10GbE can be very good, but you also need to be aware of I/O control, so a single VM host doesn't drag the entire SAN down with it. If you run VMware Enterprise Plus, for example, enable Storage I/O Control.
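For a rough feel of the two networking options, here is a back-of-the-envelope throughput comparison; the ~90% efficiency figure for protocol overhead is an assumption, not a measurement:

```python
# Compare usable iSCSI bandwidth: 4x multipathed 1GbE vs one 10GbE link.
# Assumes roughly 90% of line rate survives TCP/iSCSI overhead.

GBIT_BYTES = 1e9 / 8  # bytes per second in one Gbit/s

def usable_mb_s(links, gbit_per_link, efficiency=0.9):
    return links * gbit_per_link * GBIT_BYTES * efficiency / 1e6

print(usable_mb_s(4, 1))    # four 1GbE paths with round-robin multipath
print(usable_mb_s(1, 10))   # a single 10GbE link
```

Note that multipathed 1GbE only reaches its aggregate figure when I/O is actually spread across the paths; a single stream is still capped at one link's speed, which is part of 10GbE's appeal.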

If you are running VMs I would not go with SATA; you should look into SAS instead. VMs use a lot of IOPS, and SATA will be very painful. Also, you need to make sure the VMs' partitions are aligned.

Sorry, I have to object a bit here.

I'm running a VMware environment with around 50 VMs in it, and I'm not seeing more than 500 IOPS, even with SQL and Exchange servers running, so I won't agree with the claim that VMs are inherently big I/O hogs. What you primarily need is good latency and redundancy; raw IOPS and bandwidth matter less.

If you're running VDI we're talking about something else entirely, and exactly the same goes for multiuser environments like TS/Citrix.

I agree on alignment, though; that is always important. Both VMware and Hyper-V run on aligned filesystems, so it's mainly a RAID issue.
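To illustrate the alignment point: a partition is aligned when its starting byte offset is a multiple of the RAID stripe size. A minimal sketch, assuming 512-byte sectors and a 64 KiB stripe:

```python
# Check whether a partition's start lines up with the RAID stripe.
# A misaligned start makes guest I/Os straddle two stripes, so each
# frontend operation costs extra backend I/O on every spindle involved.

SECTOR_BYTES = 512  # classic sector size; 4K-native drives change the math

def is_aligned(start_sector, stripe_kib=64):
    offset_bytes = start_sector * SECTOR_BYTES
    return offset_bytes % (stripe_kib * 1024) == 0

print(is_aligned(63))    # legacy MS-DOS partition start: misaligned
print(is_aligned(2048))  # modern 1 MiB partition start: aligned
```

Older Windows guests defaulted to starting the first partition at sector 63, which is why alignment came up so often with virtualized legacy OSes.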
