alphakry

My 60TB Build Log


I figured I'd start a brief build log so others can either learn from my experiences, give some always-welcome advice... or just drool.

1.0: HARDWARE:

  • CASE: Lian Li PC-343B Modular Cube Case
  • MOBO: Asus P6T7 Supercomputer
  • CPU: Intel Xeon W3570 (LGA 1366)
  • RAM: 12GB Corsair Dominator TR3X6G1866C9DF (6x2GB DDR3 1866 (PC3 15000) 1.65V)
  • RAID: (1) Areca ARC-1280ML 256MB 24-Port PCI-E x8 Raid Card
  • HDDs: (30) SAMSUNG Spinpoint F4 HD204UI 2TB 5400RPM 32MB Cache SATA 3.0Gb/s
  • RAID BACKPLANES: (6) iStarUSA BP-35-BLACK 3x5.25" to 5x3.5" SATA2.0 Hot-Swap Backplane Raid Cage
  • GPU: NVIDIA GTS 240 1GB PCI-E
  • OS: Windows Server 2008 R2
  • POWER: Corsair 1000W HX1000
  • NIC: Intel E1G42EF Dual-Port 1GE PCI-E Server Adapter

2.0: RAID SETUP:

Array 1: 2 x 450GB 15K SAS in RAID 0 for the O/S.

The OS is not considered mission critical. Its purpose is to run the services that interact with the data, i.e. FTP, network sharing protocols, and media transcoding.

RAID 0 was selected because performance is the only thing that matters for this array; data redundancy is of no concern.

Array 2: 15 x 2TB SATA in RAID 6 on Areca card # 1

This array will be the live, primary array. The current storage requirement is approximately 11TB, and this array will provide room to grow to about 16.5TB. Once that capacity is reached, the two 15-drive arrays (Arrays 2 and 3) will be combined into a single RAID 6 array.

Array 3: 15 x 2TB SATA in RAID 6 on Areca card # 2

This array will serve as an extra redundancy precaution: it will mirror Array 2 to provide an additional copy of the data. I have not determined the best way to achieve this, but I assume a software solution such as ViceVersa could streamline the job. Open to suggestions on this one.
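
Since the box will be running Windows Server 2008 R2, one simple way to keep Array 3 mirroring Array 2 would be a scheduled robocopy job rather than a dedicated sync tool. A rough sketch, where D:\Data and E:\Data are just placeholder paths for Array 2 and Array 3:

```python
import subprocess

# Placeholder paths: D:\Data sits on Array 2 (primary), E:\Data on Array 3 (mirror).
SOURCE = "D:\\Data"
DEST = "E:\\Data"

# /MIR mirrors the whole tree (including deletions), /R and /W keep retry stalls short,
# and /LOG writes a report to review after each scheduled run.
result = subprocess.run(
    ["robocopy", SOURCE, DEST, "/MIR", "/R:2", "/W:5", "/LOG:C:\\mirror-array2.log"],
    check=False,  # robocopy exit codes 0-7 all indicate some form of success
)
print("robocopy exit code:", result.returncode)
```

Run nightly from Task Scheduler this keeps a one-way copy; it is not a substitute for real backups, since deletions on Array 2 propagate to Array 3 on the next pass.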


Updates:

Oct-21-2010

1. I am currently running the Areca 1280ML, with the original plan of 24 x 2TB drives on it and another 6 x 2TB via the onboard SATA ports.

I have now changed the plan to run 2 x Areca 16-port cards. This provides full coverage of the 30 hot-swap bays as well as some additional redundancy, as explained in the RAID Setup section.

I have decided on the Areca ARC-1880ix-16 x8 SAS/SATA 6.0Gb/s card, as this is their latest and greatest and will give me the opportunity to upgrade to faster drives if needed.

2. I am considering replacing the iStarUSA BP-35-BLACK cages with the Norco SS-500 cages. These came recommended by a certain storage guru here, and I may be willing to give them a try for comparison. I originally tried the StarTech SBAY5BK cages, which were the newest 3x5 drive cages when I purchased them, but I haven't had 100% luck with them.

3. With the help of a friend who's well versed in Linux, I am starting to research the idea of replacing Windows Server 2008 R2 with a Linux distro. It just feels like the right thing to do for such a large storage box.


Results:

Oct-21-2010

So there is a lot of talk about the new Samsung HD204UI drives and how they'll perform for the RAID community. At a promotional cost of $95 per drive, I couldn't resist throwing myself into the guinea pig pool. So I will post my results thus far using these drives.

1. I have now set up my Areca 1280ML card to create a RAID 6 array from 5 of these drives. It took a whopping 53 hours to initialize this array. That seems like quite a long time. Anyone have comments?

I would like to run some performance tests on this array - to show how the Areca 1280ML plays with these drives - which have yet to be officially qualified by Areca.

If anyone can recommend some good HDD testing software that is used as a standard, I'll happily run tests and post the results for everyone.
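
In the meantime, here is the rough sequential-throughput check I can script myself. It is not a standardized benchmark, the path and sizes are placeholders, and the OS cache will flatter the read number unless the test file is much larger than RAM:

```python
import os
import time

TEST_FILE = "E:\\bench.tmp"   # placeholder path on the RAID 6 volume
CHUNK = 8 * 1024 * 1024       # 8 MiB per write
TOTAL = 8 * 1024 ** 3         # 8 GiB test file

buf = os.urandom(CHUNK)

# Sequential write pass.
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())      # make sure the data actually hits the array
write_mb_s = TOTAL / (time.time() - start) / 1024 ** 2

# Sequential read pass.
start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(CHUNK):
        pass
read_mb_s = TOTAL / (time.time() - start) / 1024 ** 2

os.remove(TEST_FILE)
print(f"sequential write ~{write_mb_s:.0f} MB/s, read ~{read_mb_s:.0f} MB/s")
```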


I wish I had something constructive to add to your thread. However, I don't. Hell, I've only got 4TB of storage for my "personal" server. But with that said, I'd like to wish you luck. I'll definitely be checking on this thread, from time to time.

I guess you can add me to your "drool" pile. :)


Sounds great - tell us more about the server and what you're hoping to accomplish with it please!

Also, what case(s) are you using?



  • The case I'll be using is the Lian Li PC-343B Cube Case. I have updated post #1 to include more hardware, all of which is already in my possession.
    That is not to say I wouldn't sell and replace anything if there were a better choice.
  • This server will be a stand-alone box, so one of those 9U Chenbro units, while tempting, is not quite what I'm aiming for.
  • The purpose of the server is to securely host 30-32 2TB drives. All that matters to me is data integrity and security; performance is merely a nicety (which I'm confident will be more than met, considering I prefer only higher-end parts).
  • As mentioned above, the current data space requirement is around 11TB but will grow over time.


So I take it you'll be using staggered spin-up for all those drives? ;)

Surprised you are only going with 2 x 1Gb LAN... you could be connecting 30 slow USB thumb drives and still probably be able to saturate 10Gb.
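
To put rough numbers on it (the ~100 MB/s per drive is just an assumed figure for a 5400 rpm 2TB disk):

```python
# Back-of-envelope: aggregate disk throughput vs. network link capacity.
drives = 30
per_drive_mb_s = 100                      # assumed sequential rate per 5400 rpm drive
array_mb_s = drives * per_drive_mb_s      # ~3000 MB/s of raw sequential potential

for name, gbit in [("2 x 1GbE", 2), ("10GbE", 10)]:
    link_mb_s = gbit * 1000 / 8           # line rate, ignoring protocol overhead
    print(f"{name}: {link_mb_s:.0f} MB/s of link vs {array_mb_s} MB/s from the disks")
```

Even dual gigabit (~250 MB/s) is an order of magnitude below what 30 spindles can stream.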


I have 30 disks in my storage server and my requirement for it is to be easily expandable. Therefore, I have a totally different setup.

OS: OpenSolaris b134.

Head Unit:

  • Case: Lian Li A70B (10 internal 3.5" disk bays)
  • Mobo: Supermicro X8ST3-F (6 SATA and 8 SAS ports onboard)
  • CPU: Xeon W3520
  • RAM: 12GB Kingston ECC DDR3 1333
  • OS: 2 x 73GB 15K RPM SAS in a mirror
  • Data: 6 x 2TB Hitachi SATA on the remaining 6 SAS ports, RAID-50
  • Cache: 6 SSDs on the onboard SATA ports (2 Intel X25-E for write cache, 4 Corsair Nova 64GB for read cache)
  • Additional controller: LSI 3801E external SAS HBA
  • FC: QLogic 2462 4Gb dual-port FC HBA
  • NIC: 2 onboard Intel 82574, 2 dual-port Intel 82571

Disk shelf:

  • Supermicro 936E1 (16 SAS/SATA bays through an LSI X28 SAS expander)
  • 8 x Seagate 15K.7 300GB SAS in RAID-10
  • 8 x Seagate Constellation 2TB SAS in RAID-50

Tier 1:

Seagate 15K.7 in RAID-10, Intel X25-E for write cache, 2 Corsair Nova 64GB for read cache; for VMware datastore and database storage through FC.

Tier 2:

Constellation 2TB in RAID-50, Intel X25-E for write cache, 2 Corsair Nova 64GB for read cache; for application data.

Tier 3:

Hitachi 2TB SATA in RAID-50, no read or write cache; for user files, movies, downloads, etc.

With this setup, I can add 2 more disk shelves and daisy-chain them with the first disk shelf, or I can connect them to the 4 SAS ports on the LSI 3801E. I may also replace the SAS HBA and disk shelf with new 6Gb ones, as all my SAS disks are 6Gb.
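
For anyone wondering what the "write cache" and "read cache" SSDs above actually are: in ZFS terms they are a log vdev (SLOG) and cache vdevs (L2ARC). A rough sketch of how devices like that get attached to a pool; the pool name and device IDs below are made up, not my real configuration:

```python
import subprocess

POOL = "tank"  # made-up pool name

# Two Intel X25-E SSDs as a mirrored ZFS intent log (SLOG) -- the "write cache".
subprocess.run(["zpool", "add", POOL, "log", "mirror", "c8t2d0", "c8t3d0"], check=True)

# Four Corsair Nova SSDs as L2ARC cache devices -- the "read cache".
subprocess.run(["zpool", "add", POOL, "cache",
                "c8t4d0", "c8t5d0", "c8t6d0", "c8t7d0"], check=True)
```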


I also have a small file server running Linux with software RAID and LVM2 (more on this later).

I did look at OpenSolaris, but the problem I saw was that it isn't possible to expand a RAID-Z vdev (Z1 ≈ RAID 5, Z2 ≈ RAID 6, Z3 with triple parity). So the only way to expand your pool is to create a whole new RAID array and add it to the pool (and then push some other stuff out of the pool). I am in love with ZFS ;) but as my setup is rather small I can't use that expansion strategy (which big setups could).

So I went for an easy (and cheap) setup:

OS: Ubuntu (any Linux distro would probably do)

Onboard: 7 x SATA II

RAID card: Intel RAID Controller SASUC8I (adding 8 x SATA II ports, no SAS expanders)

Disks: 2 x 1TB WD Caviar GreenPower (OS, RAID 1) and 5 x 2TB WD20EADS (RAID 6)

All my RAID is Linux software RAID. After a few corrupted controllers from Adaptec, 3ware and Promise, I decided I needed to be “hardware independent”, so the only reason I use the RAID card is to get all those nice cheap SATA ports.

Storage: From the ground up, I create a single partition on each 2TB drive spanning the whole disk. I then build my RAID 6 on these partitions (I run Linux software RAID on partitions rather than raw disks; it has its ups and downs). The resulting md device becomes a physical volume (PV) in LVM, and in LVM I create a logical volume, which I've formatted with ext4 and mounted.
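
To make that concrete, the steps look roughly like this; the device names, volume group and sizes are made up, not my exact commands:

```python
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Made-up member partitions: one whole-disk partition per 2TB drive.
members = [f"/dev/sd{x}1" for x in "bcdef"]

# 1. Build the 5-disk software RAID 6 (md device) from the partitions.
run(["mdadm", "--create", "/dev/md0", "--level=6",
     f"--raid-devices={len(members)}", *members])

# 2. Turn the md device into an LVM physical volume inside a volume group.
run(["pvcreate", "/dev/md0"])
run(["vgcreate", "vg_storage", "/dev/md0"])

# 3. Carve out a logical volume and put ext4 on it.
run(["lvcreate", "-L", "5T", "-n", "lv_data", "vg_storage"])
run(["mkfs.ext4", "/dev/vg_storage/lv_data"])
```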

I can now expand my already-created RAID volumes, and I can add new RAID volumes to LVM and remove obsolete RAID sets (2TB is not much in 3 years ;))

With a 5-disk RAID 6 using WD20EADS drives I'm getting ~160MB/s write speed, and higher read. So a 1Gbit network is not a problem for me.

One thing with LVM that might interest you: let's say you build your 2 x 15-disk RAID 6 sets in Linux software RAID. You add both sets to LVM as physical volumes, then create your logical volume and tell LVM to place the data on the two different PVs (RAID 1 in LVM). Now you have your failover. I don't know, however, whether you can convert from a RAID 1 LV to a non-RAID LV later (for when you need more than a 15-disk RAID 6 can give you), but some Googling might help you there.
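
A rough sketch of that mirrored-LV idea (again with made-up names, assuming the two md RAID 6 devices are already PVs in one volume group):

```python
import subprocess

# "-m 1" asks LVM to keep one extra copy, so every write lands on both underlying PVs;
# "--mirrorlog core" keeps the mirror log in memory so no third device is needed for it
# (the trade-off is a full resync after a reboot).
subprocess.run(
    ["lvcreate", "-m", "1", "--mirrorlog", "core", "-L", "20T", "-n", "lv_mirrored",
     "vg_big", "/dev/md0", "/dev/md1"],
    check=True,
)
```

As for converting back later, lvconvert's -m 0 option is the usual way to drop a mirror leg from an LV, which would seem to cover that case, but check the LVM docs before relying on it.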

//Jan Chu


Hi, I'm very interested in your project as I plan to build the same type of setup.

I don't know what type of card to buy (1280ML, 1880ix?).

I'm thinking of a single 24 x 3TB RAID 6 with 6 x Lian Li EX-36B cages, but I'm a little afraid of the limited room between the card and the HDD racks. Here are some pics found on the web:

[Photos found on the web showing how little room is left between the RAID card and the drive racks; the last one shows an ARC-1170.]

Perhaps I should use different HDD backplanes, but I don't want them to be noisy (too many of them use 40/80mm fans, while the Lian Li racks come with 120mm fans). Any advice?

