clicker666

Storage for small-medium business


This is a long one. I've been running this system for a few years and the recent growth of data is becoming worrisome.

Supermicro SuperChassis 745TQ-R800B with an X7DWN+ mainboard, two quad-core Xeon E5410s at 2.33 GHz, and 16 GB of RAM. The RAID card is a bit of a dog: an HP Smart Array E200 SAS controller with 128 MB of cache and a battery backup. My initial plan was to have two arrays and volumes, one holding the host (a Win 2003 x64 server) and my SBS 2003 server VM, the other holding a few smaller VMs (accounting and remote desktop). I had two 3-disk RAID 5 arrays, each with a hot spare, for a total of 8 drives. The array cache is enabled with a 50:50 accelerator ratio, and the physical drive write caches are on (the box is on a UPS and the controller has its battery). The drives are all WDC WD5000ABYS-0s running at SATA 1 speed (1.5 Gbps).
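For the curious, here's the quick arithmetic on what that 8-drive layout actually yields (a back-of-the-envelope sketch in Python, using the drive counts and sizes above):

# Usable space of the layout above: two 3-disk RAID 5 arrays plus a hot spare each.
DRIVE_GB = 500                 # WD5000ABYS drives
DISKS_PER_ARRAY = 3            # data-bearing disks per array (hot spares excluded)
usable_per_array = (DISKS_PER_ARRAY - 1) * DRIVE_GB   # RAID 5 keeps n-1 disks of data
total_usable = 2 * usable_per_array
print(usable_per_array, "GB per array")                  # 1000 GB
print(total_usable, "GB usable from 8 purchased drives")  # 2000 GB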

The initial plan had to be scrapped as drive thrashing was killing my system. The performance of the card is horrible. I intend to replace it, but my SBS VM is HUGE, and the time involved in moving it would be problematic. In addition, the backup for that VM is hit and miss since it's so large; I've had to become selective about which files get backed up, which is also problematic. The final problem with this huge VM is that it takes a long time to shut down. The power in my facility is atrocious, and everything has its own line conditioner and UPS. It still takes a good 5 minutes to shut down, and a good 15 to come back up. WAYYYY too long.

My current thoughts are that the server is still pretty fast; it just needs a better RAID controller. I also need to lighten up the SBS VM. I was thinking of moving the shared company folders and user shared directories to a NAS of some sort. That would reduce the size of the SBS VM from 254 GB to 104 GB (150 GB is in user/shared dirs). The drop would probably be a bit more than that, since some of the remainder is client apps and the like.

So, any thoughts?

1. I need a new SAS controller. I picked up some of those Datoptic SPM393 driverless 1-to-5 RAID controllers, and they smoke the HP. The only real downside to this type of controller is the lack of battery backup.

2. I need to figure out a RAID scheme for the new controller that is best suited for running VMs, in particular an SBS VM. (Exchange stores)

3. I need a stinking fast box, running anything, that can hold all those 150+ GB of shared/user files. It needs hot-swap redundancy for disk failures. Linux / FreeNAS is not out of the question; I've used both. The ability to put this box on a UPS and have it shut down safely is paramount. The Norco DS-24ER looks interesting, redundant power supplies are nice, and if I use 2.5" drives cooling should be pretty easy.

4. What do you guys use for offsite backups nowadays? I've outgrown my tapes and I'm using Blu-ray discs now, but this is time consuming. I need to be able to pull files from a few years back when asked and produce a user's directory for whatever date is requested. Keeping a tape drive on hand for each type of media is pointless, as I can't even hook up half of them anymore.


1. If you have the 1.8 budget, get an LSI Nytro - it will solve your performance issues on the fly.

Do not forget the BBU! Without a BBU, write caching is NOT enabled on most controllers, and you will cry that you spent money for nothing. If you have redundant power supplies you could force the write cache on anyway, but since you have an Exchange store that is a pretty bad idea: every power cycle could mean data store corruption beyond repair.

2. RAID 5 has a write penalty of 4 (each small write costs two reads plus two writes), so a 3-disk array gives you roughly the write performance of a single drive, or a bit less. If you need more IOPS, either use SSDs - they are pretty cheap nowadays - or double the number of drives in the chassis; see the sketch after this list. Solving performance issues is always a pain when it comes to budgeting.

3. Depending on your user count and client OSes, you are either limited to Windows Server (true SMB 2.0) or a custom Linux build. I have not seen a single el-cheapo NAS that can actually use the capacity a Gigabit link provides.

Now, 150 GB is nothing - just put two Intel 330 SSDs in RAID 1 (they have NO onboard cache, so a power cut can't wipe in-flight data); they currently go for $180 apiece and you are all set. Whether the clients will actually be able to see that performance is a different question.

4. I am still using tapes. You might consider a 3-tier backup. I am personally backing up data in the 3+ TB range, so Blu-ray and even HDDs are not an option.
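To put some rough numbers behind point 2, here is a back-of-the-envelope sketch of the write-penalty arithmetic (Python; the per-drive IOPS figure is a generic 7200 RPM SATA estimate, not a measurement of your WD drives):

# Effective random-write IOPS once the RAID write penalty is paid (rough estimates only).
def effective_write_iops(drives, drive_iops, write_penalty):
    return drives * drive_iops / write_penalty
DRIVE_IOPS = 80                    # assumed figure for a 7200 RPM SATA disk
print("3-disk RAID 5 :", effective_write_iops(3, DRIVE_IOPS, 4))   # ~60, below one bare drive
print("6-disk RAID 10:", effective_write_iops(6, DRIVE_IOPS, 2))   # ~240
print("8-disk RAID 10:", effective_write_iops(8, DRIVE_IOPS, 2))   # ~320

Same shelf, several times the random-write headroom, before you spend anything on SSDs.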

Cheers,

SV


1. The sticker shock on that unit might be too much for management, especially since I would have to keep a spare on hand.

2. I'm planning on flashing the BIOS on the existing RAID card to a new version that's supposed to eliminate the missing-array-on-boot issue. It looks as if people have had better performance with the E200i by dropping RAID 5 and going to RAID 10 with a single volume spanning 8 drives. I've also been thinking about changing the hard drives in that server to SSDs (as you've indicated). There's currently almost 500 GB on that server, but I'll be dropping a bunch of that once I move the shares to the iSCSI box (below), bringing it down to 350 GB. The only problem I see with switching to SSDs is that the Intel 330s you mention are SATA3 and my system is SAS with SATA2.

3. I've been using OpenFiler and have been experimenting with iSCSI - good read/write speeds off of my test bed. I'm planning on putting the company shared files and user shared folders on this device, and storing backups on it as well. I also plan on backing the OpenFiler box up to external drives.

4. What would you consider a 3-tier backup? My only real issue with tapes (the cost of a drive and media can be easier for management to digest, for some reason) is that you have to maintain a way to read them down the road. In our case, 7 years of retention is required, and I can't actually produce that now because our tape drives from 7 years ago won't even install on a modern computer.
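To make the retention side of point 4 concrete, here is the sort of grandfather-father-son rotation I have in mind (purely illustrative Python - the tiers and intervals are placeholders, not what we run today):

# Illustrative grandfather-father-son check: is a backup from a given day still retained?
from datetime import date
def is_retained(backup_day, today):
    age_days = (today - backup_day).days
    if age_days <= 31:
        return True                                             # daily tier, kept on disk
    if backup_day.day == 1 and age_days <= 366:
        return True                                             # monthly tier
    if backup_day.month == 1 and backup_day.day == 1 and age_days <= 7 * 366:
        return True                                             # yearly tier, 7-year requirement
    return False
print(is_retained(date(2011, 1, 1), date(2012, 9, 1)))    # True  - yearly set still on hand
print(is_retained(date(2012, 8, 15), date(2012, 9, 1)))   # True  - recent daily
print(is_retained(date(2010, 6, 17), date(2012, 9, 1)))   # False - only the nearest monthly/yearly survives

An arbitrary old date would be answered from the nearest retained monthly or yearly set, which is about as good as tape rotation ever got us anyway.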


1. Agree on that - it's pricey, but nice as well :)

2. RAID 10 places almost no demands on the controller cache, so if you can use it instead, that will be a great improvement. Keep in mind that RAID 1 and RAID 10 have a write penalty of 2, whereas RAID 5's is 4 - and splitting your eight spindles into two separate RAID 5 arrays only makes the effective write performance worse...

Even if you run RAID 5 with three 240 GB SSDs you will have 480 GB of space, but I would never put more than 400 GB on that array because of write amplification. So if your requirement is 500 GB, consider a RAID 5 array of 4+ SSDs, which is still doable (see the capacity sketch after this list). The Intel 330, and the 520 for that matter, are backwards compatible with SATA2. To push the maximum sequential throughput out of the drives you would need SATA3, but for random IOPS you are not limited by the interface, so you are just fine.

3. OpenFiler and Nexenta show very good iSCSI read/write speeds because their iSCSI targets are cached; the Microsoft target, on the other hand, runs uncached, so it's not recommended. But for NAS-style file serving, nothing matches native Windows performance if your clients are desktop Windows OSes (Vista and upwards). For XP and Linux/NFS clients you can live just fine with either OpenFiler or Nexenta - even Synology and QNAP are a solution for those scenarios...

4. Backup to disk (online copies), then to disk (incrementals kept for a month or a few months), then to tape (longer retention).

If you get a tape drive that connects over SAS via an HBA, there is a good chance you will still be in business a decade from now. But it all comes down to the amount of data you have to store; with my volumes, I am unfortunately limited.
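And here is the capacity sketch I mentioned under point 2 (the fill ceiling is my rule of thumb for consumer SSDs, not a spec):

# Usable space of an all-SSD RAID 5, leaving headroom against write amplification.
SSD_GB = 240
FILL_CEILING = 0.85            # rule of thumb, not a spec
for ssd_count in (3, 4, 5):
    usable = (ssd_count - 1) * SSD_GB
    safe = int(usable * FILL_CEILING)
    print(f"{ssd_count} x {SSD_GB} GB RAID 5: {usable} GB usable, ~{safe} GB if kept under 85% full")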


I've decided to keep using my E200i card but switch the array to RAID 10 (6 disks) with 2 hot spares (8 drives total). I'm going to flatten the entire array and create it in the new layout rather than waste hours migrating. I then plan to install ESXi 5 and migrate the VMDK machines over to it. As far as OpenFiler iSCSI goes, the performance (on my Windows 8 machine) is head and shoulders above the OS's straight copy functionality.

One little iSCSI question I just need confirmation on, because I'm pretty sure I know the answer. I have two dual-port GbE cards (4 ports total). Can I pop one into the ESXi box and the other into the OpenFiler NAS, then have the Windows server inside ESXi see the iSCSI target like an attached drive - and therefore be able to create a Windows server-based share on this iSCSI target? The result I'm looking for is a very fast share that can retain the old share name.

For example: \\SERVER\COMPANY on the old VM's D: drive moves to \\SERVER\COMPANY on the VM's iSCSI-backed volume?
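In other words, once the iSCSI LUN shows up inside the Windows VM as an ordinary disk, I'm assuming re-pointing the share is just a matter of re-creating it with the same name on the new volume - something like this rough sketch (the path and share name come from my example; the script itself is hypothetical):

# Hypothetical sketch: recreate the COMPANY share on the iSCSI-backed volume
# inside the Windows VM. Assumes the LUN is already connected, formatted and
# mounted, and that D:\Company now holds the migrated data.
import os, subprocess
SHARE_NAME = "COMPANY"
SHARE_PATH = r"D:\Company"
if not os.path.isdir(SHARE_PATH):
    raise SystemExit(SHARE_PATH + " is not mounted - check the iSCSI session first")
# "net share NAME=PATH" publishes the folder; clients keep using \\SERVER\COMPANY.
subprocess.run(["net", "share", SHARE_NAME + "=" + SHARE_PATH], check=True)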


Hi,

What you are looking for is achievable, yes.

Just make sure you have a dedicated storage IP network and bump the jumbo frames up to the magic 9000 MTU - that makes the improvement even better.
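A quick way to confirm the big frames actually survive the whole path is to ping with a payload that cannot fit in a standard 1500-byte frame while forbidding fragmentation - a rough sketch (the target address is a placeholder for the OpenFiler's storage NIC):

# Jumbo frame check: 8972 bytes of ICMP payload + 28 bytes of headers = one 9000-byte frame.
# If any hop on the storage path is still at MTU 1500, the ping fails instead of fragmenting.
import platform, subprocess
TARGET = "192.168.100.10"          # placeholder storage-network address
PAYLOAD = 9000 - 28                # subtract IP (20) + ICMP (8) header overhead
if platform.system() == "Windows":
    cmd = ["ping", "-f", "-l", str(PAYLOAD), TARGET]                   # -f: don't fragment, -l: size
else:
    cmd = ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "4", TARGET]  # Linux equivalents
ok = subprocess.run(cmd).returncode == 0
print("Jumbo frames OK end to end" if ok else "A hop on the path is not passing 9000-byte frames")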

For best performance, I would configure passthrough for the NIC to the VM and initiate the iSCSI connection to the OpenFiler from inside the guest. This gives "real life-like" performance: no virtualization overhead on the network layer, and TCP offloading brings a considerable performance increase.

A very good read regarding Intel NICs and their driver:

http://www.intel.com/support/network/sb/cs-025829.htm

On the VMware side, in case you cannot do passthrough, please make sure you are using VMXNET3 adapters and not the Flexible or E1000 ones. This gives about a 10% increase in network performance.

P.S. I just saw the part about ESXi and dual CPUs - keep in mind that free ESXi will not work with more than one CPU, if I remember correctly.


I just did some reading on the free ESXi 5 and the only limits are 32 GB of RAM, management of one host at a time, and no command line. You can run as many CPUs and cores as your heart desires.


You are absolutely right.

I stand corrected - they changed the limits in free ESXi from 8 GB per VM to 32 GB per host, and the initial CPU limitation is now gone.

Maybe I confused it with something else, I don't know.

I am absolutely positive there is SSH access you can use to log in to ESXi, because I use it to perform backups. When you enable it you get a big fat "SSH is activated" warning on the home page of the vSphere client.

I have not tested RemoteCLI, so maybe that is the "disabled feature".

HTH,

SV


OK, so here is where the confusion came from.

Once installed, select the host, then go to the Configuration tab and look at the ESX Server License Type.

A freshly generated ESXi Free key shows up as this:

Product: VMware vSphere 5 Hypervisor Licensed for 1 physical CPUs (unlimited cores per CPU)


I'm running the 60-day eval so I can take full advantage of all the features for a bit and get everything the way I like, which is why my config page doesn't show the same. Looking at http://www.vmware.com/products/datacenter-virtualization/vsphere-hypervisor/requirements.html, it shows that the free version supports multiple physical CPUs. It always did, so I can't see why they'd change that. Guess I'll find out at day 60 when I have to panic and buy it, lol.

