I work for a small (but quickly growing) school district, and we have some aging hardware that's in need of some love.
From a processing and network viewpoint these servers should still have plenty of life left in them (for us at least). Rather than buying all-new servers, the thought was to put in some SSDs and RAM to breathe some new life (and performance!) into them.
However, after doing some research it looks like the RAID controllers currently installed only take up to ~12GB SSDs (~36GB after a firmware update), so that brings a few questions to mind:
1) Is this size limitation only when using SSDs as cache drives? Would the controllers recognize modern SSDs as 'normal' hard drives with no such size restriction?
2) Is TRIM still an issue with SSDs behind a RAID controller, or do modern controllers pass TRIM commands through? Or does it depend on the specific hardware in use?
3) If I need to replace the controllers, what would be a good make/model? I have only ever used onboard Intel RAID or whatever RAID card comes with a server, so I am a little green in this area. Each controller would need to handle up to 8 physical drives (2 arrays), and I believe the current controller sits in a PCIe 2.0 x8 slot on the motherboard (I will verify tomorrow).
4) Would we be able to get away with relatively cheap consumer-grade drives (like the Samsung 850 Pro) instead of straight-up SAS HDDs or SSDs? Our write load is not very high; it's mostly read operations on databases, plus several lightly used VMs on each box.
5) I have typically used RAID6 (or the equivalent RAIDZ2 or RAID5+1) in servers up to this point so we can tolerate up to 2 drive failures. However, in doing a little research, everyone seems to think RAID5 is perfectly acceptable when using SSDs (Intel's website specifically suggests NOT using SSDs in RAID6, recommending RAID1 or RAID5 instead). Is this generally true? Or should I still be looking at a RAID6 setup for redundancy?
6) My first thought is to make the system drive on each box a RAID1 of 2 SSDs for performance and redundancy... but while that makes sense on a desktop, would it affect anything other than boot time on a server? These are all on battery backups, so they don't shut down often, and boot time is really not a priority. Should we save the money and buy HDDs for the boot drives?
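For the RAID5-vs-RAID6 question, here's the back-of-envelope math I've seen behind the "RAID5 is fine on SSDs" argument: the risk during a rebuild is hitting an unrecoverable read error (URE) on a surviving drive, and SSDs are both smaller here and spec'd at much lower error rates than big HDDs. This is just a rough independence model with placeholder UBER figures (check the actual datasheets), not a warranty:

```python
import math

def p_rebuild_ure(n_drives, drive_bytes, uber):
    """Rough probability of hitting at least one unrecoverable read
    error (URE) while reading every surviving drive during a RAID5
    rebuild, modeling bit errors as independent at rate `uber`."""
    bits_read = (n_drives - 1) * drive_bytes * 8
    return 1 - math.exp(-uber * bits_read)

TB = 10**12
# Placeholder UBER figures -- check the actual drive datasheets.
hdd_risk = p_rebuild_ure(6, 2 * TB, 1e-14)  # 6 x 2TB HDDs, 1 error per 1e14 bits
ssd_risk = p_rebuild_ure(6, 1 * TB, 1e-15)  # 6 x 1TB SSDs, 1 error per 1e15 bits
print(f"HDD RAID5 rebuild URE risk: {hdd_risk:.0%}")  # roughly 55%
print(f"SSD RAID5 rebuild URE risk: {ssd_risk:.0%}")  # roughly 4%
```

Under those assumed figures the SSD rebuild looks an order of magnitude safer, which would explain the advice, but I'd still love to hear whether people actually run RAID5 on SSDs in production.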
Other potentially important info:
There are basically 3 servers I am looking to upgrade.
Server 1 is for file shares and will just have a bunch of ~1.5-2TB HDDs (the server takes 2.5" drives) for the data array. Performance is not a huge issue here; the big concerns are bulk storage and redundancy. SSDs here would only be for the OS drives (RAID1), and only if that would offer any real-world benefit.
Server 2 is going to be a Hyper-V box (nothing against VMware... we just have more experience using Hyper-V and are less likely to break it lol). This will hold the VMs with the databases on it, and I would like to go all-SSD. If we can use high-end consumer SSDs, then I would like to put in 4-6 drives in RAID5 or 6. If we have to use SAS drives, then I might just buy 2 larger (512GB) ones and put them in RAID1.
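For anyone weighing in on that database box, here's the usable-capacity trade-off I'm looking at between those two options (the drive sizes are my own placeholder assumptions, not quotes):

```python
def raid_usable_gb(n_drives, drive_gb, level):
    """Usable capacity for a few common RAID levels (equal-size drives)."""
    if level == "raid1":
        return drive_gb                   # mirror: one drive's capacity
    if level == "raid5":
        return (n_drives - 1) * drive_gb  # one drive's worth of parity
    if level == "raid6":
        return (n_drives - 2) * drive_gb  # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

# Hypothetical layouts (drive sizes are examples only):
print(raid_usable_gb(6, 1000, "raid5"))  # 6 x 1TB consumer SSDs -> 5000
print(raid_usable_gb(6, 1000, "raid6"))  # same drives in RAID6  -> 4000
print(raid_usable_gb(2, 512, "raid1"))   # 2 x 512GB SAS mirror  -> 512
```

So the consumer-SSD route buys a lot more usable space per dollar, if the drives hold up behind a hardware controller.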
Server 3 is going to be another Hyper-V box for our more pedestrian VMs (print servers, DCs, application servers, controllers, etc.). My first thought is to just buy new HDDs and be done with it... but if we can use something like the 850 Pro SSDs, then I would like to make Servers 2 & 3 identical.
Depending on when this project is complete, these servers will be running either Server 2012 R2 or 2016.
If you need more specifics (make, model, etc.) I can look that up when I am in the district tomorrow.
These IBM servers all take smaller 2.5" drives instead of full-size 3.5" HDDs.
I don't have a specific budget yet, but we are probably looking at $5K or less (preferably much less if I want the district to agree to it lol) in total upgrades to these boxes. That includes drives, controllers, ~100GB of RAM, etc.
When I am done, I am hoping to consolidate 14 physical servers strewn about the district into 5-6 boxes total. Should be a fun project!
Thanks for your time everybody!