RAID card restrictions and replacement suggestions

Hi all,

I work for a small (but quickly growing) school district, and we have some aging hardware that is in need of some love.

From a processing and network viewpoint these servers should still have plenty of life left in them (for us at least). Rather than buying all-new servers, the thought was to put in some SSDs and RAM to breathe some new life (and performance!) into them.

However, after doing some research, it looks like the RAID controllers currently installed only support SSDs up to ~12GB (~36GB after a firmware update), which brings a few questions to mind:

1) Is this size limitation only if using SSDs as cache drives? Would they recognize modern SSDs as 'normal' hard drives with no such size restriction?

2) Is TRIM still an issue with SSDs? Or do modern controllers pass TRIM commands through? Or does it depend on the specific hardware in use?

3) If I need to replace the controller, what would be a good make/model? I have only ever used onboard Intel RAID, or whatever RAID card comes with a server... I am a little green in this area. They would need to control up to 8 physical drives (2 arrays) each, and I believe the current controller is in a PCIe 2.0 x8 slot on the motherboard (will verify tomorrow).

4) Would we be able to get away with using relatively cheap consumer-grade drives (like the Samsung 850 Pro) instead of straight-up SAS HDDs or SSDs? Our write load is not very high; it's mostly read operations on databases and running several lightly used VMs on each box.

5) I have typically used RAID6 (or the equivalent RAIDz2 or RAID5+1) in servers up to this point so we can tolerate up to 2 drive failures. However, in doing a little research, everyone seems to think that RAID5 is perfectly acceptable when using SSDs (Intel's website specifically suggests NOT using SSDs in RAID6 and to use RAID1 or 5 instead). Is this generally true? Or should I still be looking at a RAID6 setup for redundancy?

6) My first thought is to make the system drive on each box a RAID1 with 2 SSDs for performance and redundancy... but while that makes sense on a desktop computer, would that affect anything other than the boot time on a server? These are all on battery backups, so they don't shut down often, and boot time is really not a priority. Should we save the money and buy HDDs for the boot drives?
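Since a couple of these questions hinge on the usable-space vs. redundancy trade-off between RAID1/5/6, here is a quick Python sketch of the arithmetic (my own simplification — it ignores hot spares, rebuild behavior, and formatting overhead):

```python
def raid_summary(n_drives, drive_tb, level):
    """Usable capacity (TB) and drive failures tolerated for common RAID levels.

    Deliberately simplified: ignores hot spares, controller overhead,
    and filesystem formatting loss.
    """
    if level == "raid1":
        # Mirrors: half the raw capacity; a mirrored pair survives one failure.
        return (n_drives * drive_tb / 2, 1)
    if level == "raid5":
        # Single parity: one drive's worth of capacity is lost.
        return ((n_drives - 1) * drive_tb, 1)
    if level == "raid6":
        # Double parity: two drives' worth of capacity is lost.
        return ((n_drives - 2) * drive_tb, 2)
    raise ValueError(f"unsupported level: {level!r}")

# Example: 8 x 2 TB drives
print(raid_summary(8, 2, "raid5"))  # (14, 1)
print(raid_summary(8, 2, "raid6"))  # (12, 2)
```

So with 8 x 2 TB drives, RAID6 costs 2 TB of usable space relative to RAID5 but survives any two simultaneous failures instead of one.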

Other potentially important info:

There are basically 3 servers I am looking to upgrade.

Server 1 is for file shares and will just have a bunch of ~1.5-2TB HDDs (the server takes 2.5" drives) for the data array. Performance is not a huge issue here; the big concerns are bulk storage and redundancy. SSDs here would only be for the OS drives (RAID1), and only if that would offer any real-world benefit.

Server 2 is going to be a Hyper-V box (nothing against VMware... we just have more experience using Hyper-V and are less likely to break it lol). This will hold the VMs with the databases on them, and I would like to put in all SSDs. If we can use high-end consumer SSDs, then I would like to put in 4-6 drives in a RAID5 or 6. If we have to use SAS drives, then I might just buy 2 larger (512GB) ones and put them in RAID1.
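For what it's worth, the back-of-the-envelope math on those two Server 2 layouts (assuming 512 GB drives and the counts mentioned above; ignores formatting overhead):

```python
# Usable space for the two candidate Server 2 layouts, in GB.
consumer_raid5 = (6 - 1) * 512   # six consumer SSDs in RAID5: one drive lost to parity
sas_raid1 = (2 * 512) // 2       # two SAS drives mirrored: half the raw capacity
print(consumer_raid5, sas_raid1)  # 2560 512
```

So the consumer-SSD RAID5 option nets roughly five times the usable space of the mirrored SAS pair, at the cost of consumer (rather than enterprise) endurance ratings.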

Server 3 is going to be another Hyper-V box for our more pedestrian VMs (print servers, DCs, application servers, controllers, etc.). My first thought is to just buy new HDDs and be done with it... but if we can use something like the 850 Pro SSDs, then I would like to make Servers 2 & 3 identical.

Depending on when this project is complete, these servers will be running either Server 2012 R2 or 2016.

If you need more specifics (make, model, etc.) I can look that up when I am in the district tomorrow.

These IBM servers all take smaller 2.5" drives instead of standard 3.5" drives.

I don't have a specific budget yet, but we are probably looking at $5K or less (preferably much less if I want the district to agree to it lol) in total upgrades to these boxes. That includes drives, controllers, ~100GB of RAM, etc.

When I am done I am hoping to consolidate 14 physical servers strewn about the district into 5-6 boxes total. Should be a fun project :D

Thanks for your time everybody!


> I believe the current controller is in a PCIe2 8x slot on the motherboard (will verify tomorrow).

Please give us a few links to specs for your motherboards and that "current controller".

We've settled on the Samsung and SanDisk SSDs with a 10-year factory warranty.

I would point you to the Highpoint RocketRAID 2720SGL, but I would NOT be comfortable with that recommendation without knowing more about your server motherboards. It's been around a while, it's very inexpensive given the performance we've achieved in RAID-0 mode (for speed), and it should work in a PCIe 2.0 x8 slot.

The problems new users have with the 2720SGL are these:

(a) INT13 is ENABLED by default, and this can interfere with existing chipset settings;

(b) if you do NOT intend to boot from this controller, there is a sequence to DISABLE INT13 and only use that controller for data partitions; method: install withOUT any drives connected, and when flashing the BIOS, DISABLE INT13 using a Windows program you download from Highpoint's website;

(c) to guarantee that each port is running at 6 Gb/s, one must download the latest BIOS from the vendor's website and flash the 2720SGL with that latest BIOS, ideally withOUT any drives connected;

(d) not all SFF-8087 "fan-out" cables are compatible; we found that the model from StarTech works great, and that model is only 1/2 meter vs. Highpoint's 1.0 meter in length; the shorter cables help a LOT with cable management and hence help to improve air flow.

In answer to one of your other questions, TRIM is generally NOT supported by third-party RAID controllers. Check with each vendor before "leaping ahead".

Many of the most modern SSDs have pretty good internal garbage collection, so the absence of TRIM may not be a big issue for you.
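One thing you can check on the Windows side is whether the OS is issuing TRIM at all: `fsutil behavior query DisableDeleteNotify` reports it (0 = TRIM enabled). Here is a small Python sketch that wraps and parses that query (the helper names are my own):

```python
import subprocess

def trim_enabled(output: str) -> bool:
    """Parse `fsutil behavior query DisableDeleteNotify` output.

    DisableDeleteNotify = 0 means the OS issues TRIM.
    """
    for line in output.splitlines():
        if "DisableDeleteNotify" in line:
            # The flag is the last token after the '='.
            return line.split("=")[-1].strip().startswith("0")
    raise ValueError("DisableDeleteNotify not found in output")

def query_trim() -> bool:
    # Windows-only: run the fsutil query and parse the result.
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True,
    ).stdout
    return trim_enabled(out)
```

Keep in mind this only tells you whether Windows sends TRIM; it cannot tell you whether a hardware RAID controller passes TRIM through to the drives — that remains the per-vendor question.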

Edited by MRFS


> hoping to consolidate 14 physical servers strewn about the district into 5-6 boxes total.

Can you off-line one of those 14 servers and use it for testing purposes?

Newegg will allow you to return a shrink-wrapped 2720SGL if it doesn't work for you. They re-sell those as "Open Box".

