Fiorana

Building a Hot-Swappable 24-Bay SATA Storage Server


Hello,

Foreword.

I stumbled upon this brilliant forum just a couple of days ago while researching storage components. I must say I’m glad I did; reading this forum is a joy and only confirms how little I know in comparison to some of you regarding these matters.

Introduction.

I’d like to start off by mentioning that I’m from Belgium, and therefore English is not my mother tongue. I’ll do my best to explain myself in English and to get my ideas and questions across as well as I possibly can.

I’ve been building ordinary desktop computers since I was very little. I learned it from my dad, who’s in the IT business; you could say I literally grew up with it. I’ve built hundreds of PCs over the years now, but for the project I’m about to embark upon, I lack confidence and a bit of knowledge in certain areas. I am fairly sure, though, that some of you might be able to point me in the right direction and send me on my merry way.

I am somewhat of a movie buff and currently have around 500 Blu-rays in my possession. This number will continue to grow at a steady rate.

Because 1) I lack the physical space to exhibit the Blu-ray boxes properly; 2) multiple people in our house also want access to those movies; 3) the hassle of storing them in alphabetical order, etc. …

For all the above reasons and many more, with which I will not bore you now, we purchased an 8-bay QNAP TS-809 Pro Turbo NAS two years ago. We stuffed the QNAP with 8x 2TB Western Digital Caviar Black drives and configured them in a RAID-5 array, which gives us a total capacity of around 13 terabytes, give or take. We have never encountered any issues whatsoever and are generally very pleased and satisfied with the result and operation of the QNAP. Performance-wise it also excelled at its duty, with read and write speeds up to the practical limit of a Gigabit network (approx. 100 MB/s).
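Those figures check out with some quick arithmetic (the per-drive values are the assumed nominal ones, not measurements): RAID-5 spends one drive’s worth of capacity on parity, and drive makers count in decimal terabytes while operating systems report binary tebibytes, which is where the “13 TB, give or take” comes from.

```python
# Back-of-the-envelope check on the QNAP figures above.

def raid5_usable_tb(drives, drive_tb):
    """Usable capacity of a RAID-5 array in decimal TB (one drive lost to parity)."""
    return (drives - 1) * drive_tb

def tb_to_tib(tb):
    """Convert decimal terabytes to binary tebibytes (what the OS reports)."""
    return tb * 1e12 / 2**40

usable = raid5_usable_tb(8, 2.0)
print(usable, round(tb_to_tib(usable), 1))  # 14.0 TB raw, ~12.7 TiB reported

# Gigabit Ethernet: 1 Gb/s / 8 bits per byte = 125 MB/s on the wire,
# so ~100 MB/s of real payload after protocol overhead is plausible.
print(1e9 / 8 / 1e6)  # 125.0 MB/s theoretical ceiling
```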

Our Qnap is FULL! What now? Buy a new one? hmmm..

Although we are very satisfied with the QNAP device, as previously mentioned, we’re not planning on buying a second one to expand our storage capacity, for the following reasons: 1) the initial purchase cost of the device without HDDs is quite steep (€2,000) for the 19-inch rack-unit variant (we have to go rack from now on); 2) QNAP NAS devices are not stackable, so we would be stuck with two different share names and locations, and our data would become decentralized. If you wanted to access anything, you’d have to know which NAS it is stored on, and so forth...

The Project.

I would like to custom build my own Rack Server Storage solution.

I’ve read many posts on this forum by people who are either in the process of building their own storage server or have successfully completed one. I’d like to congratulate these wonderful individuals on the perseverance needed to take the initial idea through to tangible completion.

Before we start discussing hardware or software requirements, I’d like to define the project parameters and purpose first. As I stated before, the server will serve as a storage solution, mainly to house Blu-ray images, for use by up to a maximum of 3 users simultaneously. Obviously data integrity and security are of great importance to me, but not to an extraordinary degree and/or cost. Performance is secondary to the latter. The aim is to build a system that will initially contain 24 hard drives and is open to later expansion when needed; let’s say future-proof, within reason of course (:D, take 15 years).

In the section below I’ll post some hardware that many of you have used in your systems and which I am considering for my own setup. I welcome any input and feedback you’d be willing to share, or suggestions for alternative hardware I have overlooked. Please do...

Hardware.

Case: ‘NORCO RPC-4224 4U Server Case w/ 24 Hot-Swappable SATA/SAS Drive Bays’ or ‘CHENBRO RM51424 5U 24-Bay Storage Center Server Chassis’. I would also consider separating the server chassis from the 24-bay hot-swappable hard disk chassis, because I could lack a bit of space in depth. But I haven’t had any success yet in finding a 24-bay chassis exclusively for housing the HDDs. Locating a separate server chassis, on the other hand, is child’s play.

PSU: In this area I’d like to receive a bit of input. There are so many power supplies available these days that I feel a bit lost in the woods. I know most widespread manufacturers, like Antec, Corsair, Zalman and many others, sell PSUs with power outputs up to 1300 watts. Knowing my intended setup, in what direction would you suggest I look? I’m not using power-hungry GPUs, but a lot of HDDs; at least 26, I’d say (24 for the intended RAID configuration and 2 system drives). As I’ve come to understand it, HDDs draw on both the 5V and 12V rails of the PSU, with the spindle motor on the 12V side. I also read on this forum that the critical point is firing the whole system up at once; apparently platter-based hard disk drives draw the most current while spinning up. Once again with the stability of the system in mind, I’d like to look for a PSU that delivers power as cleanly as possible, without fluctuations, and with great efficiency. Also, if you were in my position, would you go for 1 heavy PSU for the whole system, or 2 medium ones to split up the load?
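To put rough numbers on the spin-up question: here is a worst-case estimate where all 26 drives start simultaneously. Every figure is an assumption (typical 3.5" datasheet values; check your actual drives): roughly 2.0 A on 12 V per drive at spin-up, 0.7 A on 5 V for the electronics, plus an allowance for the board, CPUs and fans at power-on.

```python
# Worst-case spin-up power estimate (all drives starting at once).
# All per-unit figures below are assumptions, not datasheet facts
# for any specific model.

DRIVES = 26            # 24 array drives + 2 system drives
SPINUP_12V_A = 2.0     # assumed peak 12 V current per drive at spin-up
RUN_5V_A = 0.7         # assumed 5 V current per drive (logic/electronics)
BASE_SYSTEM_W = 250    # assumed board + dual Xeon + fans at power-on

drive_peak_w = DRIVES * (SPINUP_12V_A * 12.0 + RUN_5V_A * 5.0)
total_peak_w = drive_peak_w + BASE_SYSTEM_W
print(round(drive_peak_w), round(total_peak_w))  # ~715 W drives, ~965 W total
```

Note that this worst case rarely applies in practice: most hardware RAID controllers and SAS backplanes support staggered spin-up, starting the drives in small groups, which cuts the peak dramatically and is why a single quality 650-850 W unit is often considered enough for builds like this.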

Motherboard: In this department I have a bit of a favorite, I must admit. I’ve been using Asus motherboards for the last 12 to 15 years and I’ve always found them to be very well built and very reliable. I also hear good things about Intel server boards, but personally I have no prior experience working with them. I’m looking at the following server boards: ‘Asus Z8NA-D6’ or ‘Asus Z8PE-D12X’, or a similar Intel server motherboard. I’m mentioning server boards again for reasons of stability: they support dual Xeon LGA1366 CPU setups, ECC and registered memory, an onboard graphics chip, and higher-quality components. Any ideas?

Processor: I was thinking about going for either a dual-processor setup using 2x ‘Intel® Xeon® Processor E5620’, or a single-CPU setup involving only one of the aforementioned processors or a lesser ‘Intel® Xeon® Processor W3550’. The first configuration is obviously a lot more powerful than the latter, but I will elaborate on this matter further down.

Memory: I plan on acquiring a 12 GB DDR3 kit (3x 4 GB) by Kingston or Corsair to fulfill my memory requirements and additionally gain the triple-channel benefits. Given that this server is intended to be ultra-stable and reliable, I can’t go wrong by choosing ECC registered DRAM modules, which I presume is the usual convention when aiming for the above. Unless some of you feel differently about this?

RAID Controller: ‘Intel® RAID Controller RS2SG244’ or ‘Areca ARC-1280ML’ or ‘3ware 9650SE-24M8’ or ‘Adaptec RAID 52445’. As far as I know, after having researched these different manufacturers, the Intel and the Areca would definitely be the best choices. Having read that Areca uses Intel chips on their RAID controller boards, am I wrong in assuming that the Intel might then be the one to go for?

Graphics Card: Because the purpose of this server is to provide a network-attached storage solution which, once set up, should run 24/7 independently without any external interference, I’d like to refrain from installing a graphics card. Graphics cards have notable downsides here: they use up a lot of energy and produce more unwanted heat. The system can, if and when desired, be monitored remotely over the network from any available terminal, and for the initial installation the onboard ‘Aspeed AST2050 8MB’ graphics processor can provide the necessary video output. The graphics processing power of such an onboard integrated chip will unsurprisingly not break any records, but it will do just fine for this particular job.

Network Adapter: Similar to my reluctance to install a separate graphics card, I don’t see the need to upgrade to a dedicated Ethernet adapter just yet. You should know that my network infrastructure is currently limited to 1 Gbps by the hardware involved: the main router, switches and cabling I currently own are rated at and limited to 1 Gbps. I know that 10 Gbps network cards are already available at premium prices, but in my specific case that would just be overkill for now, given my current network infrastructure. This does, of course, leave me a possible future upgrade.
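A quick sanity check on whether gigabit is enough for the stated 3 simultaneous users: a Blu-ray stream tops out at roughly 54 Mb/s (the format’s maximum total A/V bitrate; typical discs average far less), and a gigabit link carries on the order of 940 Mb/s of payload after Ethernet/TCP overhead. Both figures here are assumptions used for a worst-case estimate.

```python
# Worst-case bandwidth demand for 3 simultaneous Blu-ray streams
# versus a single gigabit link. Figures are assumptions (see above).

BLURAY_PEAK_MBPS = 54        # assumed max total Blu-ray bitrate, worst case
USERS = 3
GIGABIT_PAYLOAD_MBPS = 940   # assumed usable payload on 1 Gb/s Ethernet

demand = USERS * BLURAY_PEAK_MBPS
print(demand, demand < GIGABIT_PAYLOAD_MBPS)  # 162 True
```

So even with all three users playing worst-case discs at once, the demand is well under a fifth of the link, which supports the "10 GbE would be overkill" conclusion.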

HDD: The general goal is to install 24 hard disk drives of at least 2 TB apiece (3 TB is an option). Over the past few years I’ve worked a great deal with the Western Digital Caviar HDDs. I’ve extensively used both the Green and the Black series (2 TB) without any serious issues. I know that the WD Caviar Green series is regarded by many as unfit for RAID duty, but on a highly personal note, I’ve never experienced anything to support that widely held view. I might perhaps have been a tad lucky and as a result be entirely wrong here. On the other hand, I did have less than successful results with the 1.5 TB Seagate Barracuda HDDs 3 years ago. In contrast, I’ve heard and read many good things about the newer EcoGreen Samsung Spinpoint F4 HDDs. As I lack familiarity with Hitachi’s hard disks, I’m not in a position to judge their capabilities… anyone? That roughly concludes the part about the 24 SATA drives for the RAID array. I was planning on using 2x SAS enterprise HDDs as system drives for the operating system. I might also contemplate using SSDs instead. Going for ultimate reliability in this department!

RAID Configuration: After having studied this particular topic in depth, I have decided to opt for a RAID-6 configuration to protect the 24 SATA drives on the storage side of the server. I will go with a mirroring RAID-1 setup for my operating system drives.
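For reference, the usable capacity that RAID-6 plan works out to: RAID-6 spends two drives’ worth of capacity on parity regardless of array size, so with 24 drives the overhead is modest.

```python
# Usable capacity of the planned RAID-6 array (two drives lost to parity),
# for the 2 TB and 3 TB options mentioned above.

def raid6_usable_tb(drives, drive_tb):
    """Usable capacity of a RAID-6 array in decimal TB (two parity drives)."""
    return (drives - 2) * drive_tb

print(raid6_usable_tb(24, 2.0))  # 44.0 TB with 2 TB drives
print(raid6_usable_tb(24, 3.0))  # 66.0 TB with 3 TB drives
print(round(raid6_usable_tb(24, 2.0) * 1e12 / 2**40, 1))  # ~40.0 TiB as the OS sees it
```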

Software:

As I’m not familiar with Solaris, Unix or Linux operating systems, I’m unavoidably bound to go with Microsoft’s Windows Server 2008 R2. I know that putting Windows and reliability in the same sentence can be perceived as blasphemy, but don’t shoot me just yet. In my humble opinion Windows has come a long way. Evidently it’s not comparable to the likes of Unix or Linux-based operating systems, but I believe that for my specific purpose Windows can provide the necessary stability. Would you agree? Or am I completely off the mark here?

Questions:

I’ve seen that many people are building these types of systems to expand their storage capacity, and after having read some of those posts, I’m left with a couple of key inquiries.

1) Why do some of you opt for a dedicated Ethernet adapter, when there’s a similar one installed onboard?

2) Why do some of you go for extreme processing power when using a hardware RAID controller card? As far as I know, RAID controllers have all the necessary processing power on their own circuit boards, and there should not be any overhead on the main server CPU. At least that’s what they claim.

3) This brings me to my next question. Is it wise to go with a hardware RAID controller when spanning an array over 24 physical SATA drives? Or is someone shouting software RAID? I presume in this case hardware > software? More secure and reliable, I would assume? Also faster?

4) On most hardware RAID controllers of this caliber you can expand the cache memory from 512 MB to, in most cases, 2 GB. Does this provide any real-world noticeable improvement and, if so, in what regard?

I’m sure many more questions are destined to arise in the near future and when they do I’ll be happy to include them in this topic.

As this is an ongoing project of mine, currently in the research and development stage, it should come as no surprise that I welcome input on anything I’ve written down here. In fact, I am looking forward to being educated.

Jimi



Welcome!

It sounds like this server is for personal use, and hence does not need to be mission critical?

If so, that means you can use a single consumer-grade PSU like a Seasonic X-650 or X-750 and be fine. I highly doubt you will need more power than that-- see the hard disk reviews right here on this site to see how much power modern disks draw on startup-- it's really not that bad.

Motherboards, we normally run Supermicro boards and ECC memory if supported by the chipset/processor, even in budget/low-cost builds. Supermicro C2SEA's or whatnot, I forget what board we're currently spec'ing right now for engineering samples in this type of server...

HDD: The general goal is to install 24 Hard Disk Drives of at least 2TB a piece (3TB is an option)
2TB you have lots of choices, especially if you are willing to risk consumer-grade disks. 3TB right now you are stuck with consumer-grade disks.

Do I recommend consumer-grade disks? No... but that's your call to make.

1)Why do some of you opt for a dedicated Ethernet adapter, when there’s a similar one installed onboard?
Some of us need multiple Ethernet adapters, or the performance from the onboard is not adequate, or sometimes we're doing extensive revision control of our hardware and it's easier to spec a separate Ethernet adapter. 99.99% of the time the onboard is fine.
2)Why do some of you go for extreme processing power when using a hardware RAID controller card? As far as I know, the raid controllers have all the necessary processing power on their circuit boards, and there should not be any overhead to the main server CPU. At least that’s what they claim.
Some guys here do much more than just storage on their servers, they also do extensive processing.
3)This brings to my next question. Is it wise to go with a Hardware Raid Controller, when spanning an array over 24 physical SATA drives? Or is someone shouting Software Raid? I presume in this case Hardware > Software? Better security and reliable I would assume? Also faster?
Linux software RAID, at least whatever versions they use in FreeNAS and Openfiler, scales quite well. I have no problems with it on budget builds. That said all of my highest-performance customers demand hardware RAID here, normally with high-end 3ware, Areca, or Adaptec controllers.

And if you are going to do Windows rather than Linux, well, then naturally you will not have Linux software RAID as an option. :) :-p

4)On Most Hardware Raid Controllers of this caliber you can expand the cache memory from 512 Mb to mostly 2 GB. Does this provide any real-world noticeable improvement and if so, in what regard?
Mostly for small file transfers, extensive small file chunks and random writes n' stuff. If you're just storing tons of movies, I wouldn't bother with the cache upgrade. (although if you are spending several thousand dollars on a server, another $90 for more memory for the RAID controller is such a small additional cost you might as well do it).


I also want to build something similar to the OP, except for the server-grade equipment such as the motherboard and CPU.

Do I recommend consumer-grade disks? No... but that's your call to make.

Just asking: why don't you recommend consumer-grade disks for a personal home server? They are cheaper to replace, and the RAID (whether a HW or SW solution) protects against drive failure.


I don't recommend consumer-grade disks in general for RAID use. Build enough RAID arrays on enough different hardware combinations and you run into odd problems-- some fatal for production reasons, some not, some just annoying, some that crop up only weeks or months later, etc.-- and let's just say I prefer to stick with tested, approved hardware configurations, 98% of which involve nearline SATA disks rather than consumer SATA.

(Adaptec may be the one exception-- I see the Barracuda LP 2TB on their compatibility list along with a few others, but I don't have a whole lot of experience with the 5805Z and the Barracuda LP 2TB or anything.)

Like I said, that's your call to make. :)

