
Building storage with supermicro 846TQ


Hi,

First, I want to thank anyone who at least tries to help. We need to build good, affordable storage for a file server (lots of downloads).

Right now we have 2x SC846TQ chassis, each filled with 23x 1TB HDDs plus a 250GB system drive, attached to an AMCC 3ware 9650SE 24-port RAID adapter, with 8GB of DDR2 ECC FB-DIMM memory, an X7DWE motherboard, and a Xeon E5420.

The HDDs are presented as individual drives, so no RAID at the moment.

What is the best software solution with this hardware? Traffic per server currently peaks at about 700 Mbit/s in prime time and averages around 500 Mbit/s.

I mean, which RAID level should we use: RAID 10? RAID 60?

Would more RAM affect the transfer rate?

The goal is to get the most capacity with the best performance.

Second, we got an offer for the next servers:

1x Supermicro® CSE-847E16-R1400LPB 4U chassis

1x LSI SAS 9260 RAID controller, PCI-E, 6Gb/s SATA/SAS, 512MB cache, RAID 0/1/10/5, 8 channels, bulk

1x LSI Battery Backup Unit (for 8888EM2 / 9260)

8x SFF-8087 (miniSAS) to SFF-8087 (miniSAS) cable, 600mm

4x Kingston 4GB DDR3-1333 ECC DIMM, CL9, dual rank

1x Quad-Core Intel® Xeon™ E5507, 2.26GHz / 4.8 GT/s / 4MB, without cooler

1x Intel® Nehalem cooler, rated up to 130W

1x Supermicro X8DTE-F motherboard, bulk

36x Seagate Constellation ES 2TB, 7200RPM, 64MB, 3Gb/s SATA (we would like to swap these for desktop drives, as they are almost half the price)

Is this good? What would you change, what RAID level would you use, etc.?

Thanks !

Nobody has any advice? :(

If space is an issue, I would probably go for RAID 50, with 1-2 hot spares per chassis.

The write penalty on RAID 6(+0) can be fairly large, depending on the controller or hardware doing it.
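
Just to put a rough number on that penalty (my own back-of-the-envelope figures, not measured on your hardware): assuming ~90 random IOPS per 7200RPM SATA drive and the usual write-penalty rules of thumb (RAID 10 = 2 back-end I/Os per write, RAID 5/50 = 4, RAID 6/60 = 6), you can estimate what each layout leaves you for random writes:

```python
# Back-of-the-envelope random-write estimate per RAID level.
# Assumptions: ~90 random IOPS per 7200RPM SATA drive and the usual
# write-penalty factors (RAID 10 = 2, RAID 5/50 = 4, RAID 6/60 = 6).

DISK_IOPS = 90
WRITE_PENALTY = {"raid10": 2, "raid50": 4, "raid60": 6}

def write_iops(n_disks: int, level: str) -> float:
    """Theoretical aggregate random-write IOPS for n_disks at the given RAID level."""
    return n_disks * DISK_IOPS / WRITE_PENALTY[level]

if __name__ == "__main__":
    for level in WRITE_PENALTY:
        print(f"{level}: ~{write_iops(23, level):.0f} random-write IOPS from 23 drives")
```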

Another route would be to look into ZFS; I've heard great things about it, but I have no clue how it works.

I would never advise you to buy standard desktop drives. They aren't rated for 24/7 operation, and I would almost bet money that you will see a higher failure rate on them than on enterprise-rated drives.

More RAM won't do a thing unless you can use it as cache for the controllers, and I highly doubt you can do that.

Do you run anything that actually needs 700 MB/s of reads or writes? If it's simple file storage, I highly doubt you'll ever use more than 200 MB/s in any likely scenario.

Well, to satisfy your goal, your best bet is RAID 5 or 6. You need to be careful which hard drives you buy, as some SATA drives have weak URE (unrecoverable read error) ratings that can prevent a large array from ever rebuilding. It looks like you wouldn't run into that issue with the Seagate Constellation drives, though.
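
To put a rough number on that rebuild risk, here is a minimal sketch of the standard URE math, assuming spec-sheet error rates of 1 in 10^14 bits for desktop SATA versus 1 in 10^15 bits for Constellation-class drives:

```python
# Probability of hitting at least one unrecoverable read error (URE) while
# reading the surviving drives during a RAID 5 rebuild.
# Assumed URE rates: 1 in 1e14 bits (desktop SATA) vs 1 in 1e15 bits (enterprise).
import math

def rebuild_ure_risk(data_read_tb: float, ure_rate_bits: float) -> float:
    """Chance of at least one URE while reading data_read_tb terabytes."""
    bits_read = data_read_tb * 1e12 * 8
    return 1 - math.exp(-bits_read / ure_rate_bits)

if __name__ == "__main__":
    # Rebuilding a 12x 2TB RAID 5 means reading the 11 surviving drives (~22 TB).
    surviving_tb = 11 * 2
    for label, rate in [("desktop (1e14)", 1e14), ("enterprise (1e15)", 1e15)]:
        print(f"{label}: ~{rebuild_ure_risk(surviving_tb, rate):.0%} chance of a URE during rebuild")
```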

More RAM will help, depending on your OS. Linux uses spare RAM as a file system cache. I believe Windows Server can do this as well if you enable an option.
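
If you want to see how much of the RAM Linux is actually using as cache, a quick sketch (Linux only, reading the standard /proc/meminfo fields):

```python
# Show how much RAM the Linux kernel is currently using as page cache/buffers.

def meminfo_kb() -> dict:
    """Parse /proc/meminfo into a {field: kB} dict."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are reported in kB
    return info

if __name__ == "__main__":
    m = meminfo_kb()
    cached_gb = (m["Cached"] + m["Buffers"]) / (1024 * 1024)
    total_gb = m["MemTotal"] / (1024 * 1024)
    print(f"{cached_gb:.1f} GB of {total_gb:.1f} GB RAM is being used as cache/buffers")
```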

If you ask me for software recommendations, I'll always suggest Linux. You should check out ext4 or XFS. Enabling write-back caching on the controller (with the battery backup unit) will give you high write speeds.

What kind of applications are you running on this? This setup is good for high capacity and good transfer speeds, but not for IOPS.

Thanks for the answers. No, 700 MB/s won't be needed for one server or a pair of servers in RAID 60/50; at most 2 Gbit/s per server, or 4 Gbit/s per pair.

It is used for file hosting, so IOPS is IMHO what we need most. We tried RAID 5 and 6 on one server with 22TB; it went down after a few minutes of operation.

Are all the files accessed frequently or would it be possible for you to tier your storage?

You could probably build some sort of system that keeps the most frequently accessed files on SAS and has a routine that moves less-used files to a slower media type like SATA.
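
Something like this would be the simplest form of that routine (a sketch only; the /mnt/fast and /mnt/slow paths are made up, and it relies on access times being recorded, i.e. the fast tier not being mounted with noatime):

```python
# Demote files that have not been accessed for N days from the fast (SAS) tier
# to the slow (SATA) tier. Paths and threshold are illustrative assumptions.
import os
import shutil
import time

FAST_TIER = "/mnt/fast"   # hypothetical SAS-backed mount
SLOW_TIER = "/mnt/slow"   # hypothetical SATA-backed mount
MAX_IDLE_DAYS = 30

def demote_cold_files() -> None:
    cutoff = time.time() - MAX_IDLE_DAYS * 86400
    for dirpath, _dirnames, filenames in os.walk(FAST_TIER):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.stat(src).st_atime < cutoff:           # not read recently
                rel = os.path.relpath(src, FAST_TIER)
                dst = os.path.join(SLOW_TIER, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)                    # move to the slow tier

if __name__ == "__main__":
    demote_cold_files()
```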

RAID 50/60 is fine if the I/O load is mostly reads, but if you're very write-bound I would go for RAID 10, though that of course leaves you with even less usable storage per disk.

More professional arrays allow several RAID levels across the same disks, making it possible to utilize the I/O of a disk better, but I'm not sure any homebrew system allows that in software.

Honestly, I doubt you're a candidate for SSDs in any form. Yes, you could probably get some extra IOPS out of them, but I have a feeling that if the written data changes a lot, the performance degradation over time would be horrific with non-EFD drives.

Yes, there are heavily used files (uploaded recently) and old files that are only accessed occasionally, so tiered storage could be one option.

Would SAS give us the performance we need? Is the difference from SATA drives really that big?

There is one more affordable option: lots of small RAID 5 sets, e.g. 6x RAID 5 arrays of 4 drives each (3 + 1 spare). There is some storage loss, but less than with RAID 10. What do you think about this idea?

With SATA drives, a failure would also be less painful if just one small array drops out.

Could you perhaps explain what the problem is with your current setup at the moment?

You say you're not running RAID or anything on it; how is the data managed?

SAS is about twice as fast as SATA for random I/O: a decent SATA disk does around 90 IOPS, while a SAS disk can do around 180 IOPS. In raw MB/s they probably transfer about the same, but seek latency is a lot lower on SAS.
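
Those figures come roughly from seek time plus rotational latency; here is the back-of-the-envelope version (the seek times are typical catalogue values I'm assuming, not measured):

```python
# Estimate random IOPS from average seek time plus rotational latency.
# Assumed seek times: ~8.5ms for 7200RPM SATA, ~3.5ms for 15K RPM SAS.

def random_iops(rpm: int, avg_seek_ms: float) -> float:
    rotational_latency_ms = 60_000 / rpm / 2   # half a revolution on average
    return 1000 / (avg_seek_ms + rotational_latency_ms)

if __name__ == "__main__":
    print(f"7200RPM SATA (~8.5ms seek): ~{random_iops(7200, 8.5):.0f} IOPS")
    print(f"15K RPM SAS  (~3.5ms seek): ~{random_iops(15000, 3.5):.0f} IOPS")
```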

I agree that it might make sense to build smaller RAID 5/6 sets, mainly because a 12- or 20-disk RAID 5 takes far too long to rebuild. You're probably better off making either four 6-disk RAID 5 sets without a hot spare (but then I would advise keeping spare disks on premises in case of failure), or three 6-disk sets plus one 5-disk RAID 5 with one hot spare available.
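
For comparison, here is what those layouts work out to in usable capacity (a quick sketch assuming 24 bays of 2TB drives; scale it for whatever drive size you end up with):

```python
# Usable-capacity comparison of the layouts discussed, assuming 24x 2TB drives.

DRIVE_TB = 2

def raid5_usable(disks_per_set: int, sets: int) -> int:
    """RAID 5 loses one drive of capacity per set."""
    return sets * (disks_per_set - 1) * DRIVE_TB

def raid60_usable(disks_per_set: int, sets: int) -> int:
    """RAID 60 loses two drives of capacity per RAID 6 leg."""
    return sets * (disks_per_set - 2) * DRIVE_TB

def raid10_usable(disks: int) -> int:
    """RAID 10 mirrors everything, so half the raw capacity."""
    return disks // 2 * DRIVE_TB

if __name__ == "__main__":
    print(f"4x 6-disk RAID 5, no hot spare        : {raid5_usable(6, 4)} TB usable")
    print(f"3x 6-disk + 1x 5-disk RAID 5, 1 spare : {raid5_usable(6, 3) + raid5_usable(5, 1)} TB usable")
    print(f"RAID 60 as 2x 12-disk RAID 6 legs     : {raid60_usable(12, 2)} TB usable")
    print(f"RAID 10 across 24 disks               : {raid10_usable(24)} TB usable")
```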

Or just build a RAID 60 with your controller, if the write penalty isn't too much.

It's a bit hard to give sound advice when we know so little about your environment and what you really need.
