jpiszcz

Lian Li PC-A77: 12TB Built out right

Hi all,

I am the poster of the VelociRaptor issues thread; in any event, I got sick of all the errors I was having with them. I moved to 1TB RE3s and all the problems have gone away. I also run daily short SMART tests, weekly long SMART tests, and weekly RAID verifies: not a single problem yet. In addition, of all the disks I ordered (not from Newegg), not a single one was DOA (at least of those I tested; there is 1 cold spare as well).
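The maintenance schedule described above (daily short SMART tests, weekly long tests, weekly RAID verifies) can be driven from cron. A minimal sketch, assuming smartmontools and 3ware's tw_cli are installed; the controller device (/dev/twa0), port number, and times are illustrative, and a real setup would repeat the smartctl lines for ports 0-15:

```shell
# /etc/cron.d/array-health -- illustrative; device, ports, and times are examples.
# Daily short SMART self-test on a drive behind the 3ware card (port 0 shown;
# a real setup would repeat this line, or loop, for ports 0-15).
0 3 * * *   root  /usr/sbin/smartctl -d 3ware,0 -t short /dev/twa0
# Weekly long SMART self-test (Saturdays)
0 4 * * 6   root  /usr/sbin/smartctl -d 3ware,0 -t long /dev/twa0
# Weekly verify of RAID unit u0 (Sundays)
0 2 * * 0   root  /usr/sbin/tw_cli /c0/u0 start verify
```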

Unit  UnitType  Status		 %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0	RAID-6	OK			 -	   -	   64K	 12107.1   ON	 ON	 
u1	SPARE	 OK			 -	   -	   -	   931.505   -	  ON	 

Port   Status		   Unit   Size		Blocks		Serial
---------------------------------------------------------------
p0	 OK			   u0	 931.51 GB   1953525168	SNIP
p1	 OK			   u0	 931.51 GB   1953525168	SNIP
p2	 OK			   u0	 931.51 GB   1953525168	SNIP
p3	 OK			   u0	 931.51 GB   1953525168	SNIP
p4	 OK			   u0	 931.51 GB   1953525168	SNIP
p5	 OK			   u0	 931.51 GB   1953525168	SNIP
p6	 OK			   u0	 931.51 GB   1953525168	SNIP
p7	 OK			   u0	 931.51 GB   1953525168	SNIP
p8	 OK			   u0	 931.51 GB   1953525168	SNIP
p9	 OK			   u0	 931.51 GB   1953525168	SNIP
p10	OK			   u0	 931.51 GB   1953525168	SNIP
p11	OK			   u0	 931.51 GB   1953525168	SNIP
p12	OK			   u0	 931.51 GB   1953525168	SNIP
p13	OK			   u0	 931.51 GB   1953525168	SNIP
p14	OK			   u0	 931.51 GB   1953525168	SNIP
p15	OK			   u1	 931.51 GB   1953525168	SNIP

Name  OnlineState  BBUReady  Status	Volt	 Temp	 Hours  LastCapTest
---------------------------------------------------------------------------
bbu   On		   Yes	   OK		OK	   OK	   255	28-Dec-2008

# df -h
/dev/sda1			  12T  2.6T  9.3T  22% /r1

Setup:

System: Raptor150 x 3 (at the top of the case); 2 = raid1; 1 = hotspare

Storage: RE3 x 16 (in the front of the case); 15 = RAID6; 1 = hotspare
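As a sanity check on the controller's 12107.1 GB figure, RAID-6 usable space is (N - 2) disks' worth; a quick back-of-the-envelope using the block count from the tw_cli port listing (attributing the small shortfall vs. the computed value to controller metadata is an assumption on my part):

```python
# RAID-6 usable capacity: (N - 2) data disks' worth of space.
SECTOR_BYTES = 512
BLOCKS = 1953525168   # per-disk block count from the tw_cli port listing
DISKS = 15            # drives in unit u0
PARITY = 2            # RAID-6 spends two disks' worth on parity

per_disk_gib = BLOCKS * SECTOR_BYTES / 2**30   # tw_cli's "GB" are binary
usable_gib = (DISKS - PARITY) * per_disk_gib

print(f"per disk: {per_disk_gib:.2f} GiB")   # ~931.51, matching the listing
print(f"usable:   {usable_gib:.1f} GiB")     # ~12109.7 vs. 12107.1 reported
```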

Before attaching the SATA cables from the RAID card:

ll_01.jpg

The system disks (3 raptor 150s):

ll_02.jpg

Finished system:

ll_03.jpg


Very, very nice!

I am looking to purchase this same case. However, I need a favour from you, if possible.

Could you measure the width of the case for me at the bottom? Ideally the width *inside* the edges of the bottom panel. I need to know what the maximum width is between the left and right sides.

I have read the specs but I need the real-life measurements. I have never trusted specs on dimensions - someone always goofs up the mm<->inch conversions.

Thanks!

> Could you measure the width of the case for me at the bottom? Ideally the width *inside* the edges of the bottom panel.

I cannot take the case apart right now. The aluminum sides are VERY thin, and the width at the bottom is the same as at the top.

Roughly 8.5"; the exact side-to-side measurement is 8.6875 inches (8 11/16).

What are you trying to build?
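On the mm<->inch conversion worry raised above, the arithmetic is easy to sanity-check (the 25.4 mm/inch factor is exact; the idea that the spec was derived from a rounded metric figure is speculation):

```python
MM_PER_INCH = 25.4            # exact by definition

measured_in = 8.6875          # 8 11/16", measured side to side
spec_in = 8.7                 # width from the published specs

print(f"measured: {measured_in * MM_PER_INCH:.2f} mm")  # 220.66 mm
print(f"spec:     {spec_in * MM_PER_INCH:.2f} mm")      # 220.98 mm
# A 220 mm interior would convert to ~8.66", so the 8.7" spec may simply be
# a rounded mm->inch conversion rather than a measurement (speculation).
```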


A few questions, if you don't mind

1. What's the space used for?

2. What controller?

3. Any fans in the front of that case for those drives? Talk about a big old stack of HEAT.

> 1. What's the space used for?
>
> 2. What controller?
>
> 3. Any fans in the front of that case for those drives?

1. For whatever it needs to be used for: too many servers with 1TiB here or there; best to have all the space in one box, less mess.

2. 3ware 9650SE-16ML

3. Yes, a 120mm fan in front of each set of 4 drives.

> Roughly 8.5" or exact from side to side is 8.6875 (8 11/16) inches.
>
> What are you trying to build?

Thanks very much for doing this. It's appreciated. :) The specs say 8.7" wide, so they weren't quite right.

I am trying to fit a Mini ITX motherboard in the case horizontally (not mounted to the motherboard tray).

Based on your help I have just ordered one of these cases.

Have a great day!

> 2. 3ware 9650SE-16ML
>
> 3. Yes, a 120mm fan in front of each 4 drives.

Any indications of the read/write performance (both sequential and random I/O: 4K, 1MB and 4MB block reads/writes)? Also, what OS? (Assuming Linux-based from the output.)

What LAN adapter(s) are you using, and are they teamed for higher throughput?

Also what's the rebuild time on replacement of a failed HDD?

On the case, any reason not to go with a Supermicro 4U chassis with hotswap bays or similar setup from another manufacturer?


> Any indications of the read/write performance (both sequential and random I/O - 4K, 1MB and 4MB block reads/writes). Also what OS? (Assuming Linux based by the output).

I will have to re-run some benchmarks; it turned out the initial build of the RAID-6 took a lot of time, and afterward I just wanted to use it :P Running Debian Linux with the latest 2.6.x kernel.

> Also what's the rebuild time on replacement of a failed HDD?

Will let you know when one dies; so far the RE3s have been very stable.

> What LAN adapter(s) are you using, and are they teamed for higher throughput?

Using the motherboard's built-in gigabit NIC (on PCI Express).

> On the case, any reason not to go with a Supermicro 4U chassis with hotswap bays or similar setup from another manufacturer?

Been there, done that. I had 1 port on my hotswap backplane arrive bad; in addition, when I was transferring from all the drives using Enlight hotswap trays, the whole thing beeped as if it did not have enough power. Hotswap trays and backplanes go bad, and I wanted to minimize the number of things that could go wrong. I suppose if you order replacement backplanes for a rackmount chassis you will be fine, but include that in the cost of your purchase; don't wait until one fails to buy one. Also, the Lian Li case does wonders for anti-vibration vs. a traditional HDD cage. Cheaper rackmount servers seem to vibrate quite a bit compared to HP's ProLiant DLxxx servers, which seem to be built "better."

> Any indications of the read/write performance (both sequential and random I/O)?

Benchmarks:

http://home.comcast.net/~jpiszcz/20090113/...650se-ml16.html


I'm not sure how the "random seeks" were measured, but 15 RE3 disks will do much better than that. You should be able to coax > 2,000 read IOPS from that array, with 32-deep queues on all the disks (a total queue depth of 480 if you're using IOMeter). Even without pushing the queues, I think you can get 1,000 read IOPS.

I've got 12 1TB RE3s hooked up to a 9550SXU; it gets pretty close to 2,000 IOPS.
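The IOPS claims in this exchange are consistent with simple seek arithmetic. A rough model; the 12.6 ms average seek is an assumed figure for a 7200 RPM enterprise drive, not a verified RE3 datasheet number:

```python
# Rough random-read IOPS model: one I/O costs ~(avg seek + avg rotational wait).
AVG_SEEK_MS = 12.6                    # assumed for a 7200 RPM enterprise drive
ROT_LATENCY_MS = 60_000 / 7200 / 2    # half a revolution at 7200 RPM (~4.17 ms)
DISKS = 15

per_disk_iops = 1000 / (AVG_SEEK_MS + ROT_LATENCY_MS)
array_iops = DISKS * per_disk_iops

print(f"per disk: {per_disk_iops:.0f} IOPS, array: {array_iops:.0f} IOPS")
# ~60 per disk, ~890-900 for the array at queue depth 1; NCQ with deep queues
# can roughly double effective IOPS, which lands near the 2,000 figure above.
```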

> You should be able to coax > 2,000 read IOPS from that array... I've got 12 1TB RE3s hooked up to a 9550SXU; it gets pretty close to 2,000 IOPS.

In a RAID-6?


For reads, yeah. I'm testing a RAID 0 at the moment, but it shouldn't matter much: it's not doing any checksum verification on the reads, it's just picking random blocks off the disks. I've tested RAID 5s and 50s and seen more or less exactly the same random read IOPS.

As a few people requested, here are a couple of pictures of the front of the case. Note: I took the thermometer unit out of the top; otherwise all of the wires would hit the top HDD module. The USB ports, power button, etc. are all on the top of the case; they do not interfere with the 4 HDD modules.

front1.jpg

front2.jpg


Thanks for the pictures!

I had PM'd him for some info; thanks for the advice. When I'm done with my build I'll post a report and pictures too.

This is the first time I've seen the PC-A77 case as a fully loaded server. Nice to see how it worked out.

The funny thing is I was looking for info on the 9650SE-16ML and ran across this post. My system uses the same RAID card, case, and power supply! Wish I had the 1TB hard drives, but I stuck with 750GB hard drives. I need to expand my server, but not that much!

And thanks for the quick reply to my PM!


Awesome!

What's the CPU/mobo/system?

What's the:

- noise? (except the raptors)

- heat?

- power consumption?

> Storage: RE3 x 16 (in the front of the case); 15 = RAID6; 1 = hotspare

Is the hotspare really needed with RAID 6?

About the 3*raptors, is it a home-made rack or a rack for the Lian Li PC343?

Isn't it too dangerous having such a big array? I heard stuff about data corruption on very large arrays.


> What's the CPU/mobo/system?

1. CPU: Q6600 (65nm, bought when it first came out)

2. Mobo: Intel DG965WH (built-in video); an old mobo, but 100% compatible per 3ware's documentation

3. System: 8GiB of RAM

> What's the noise? (except the raptors)

1. Noise is about the same as the Caviar Black, I believe; check www.silentpcreview.com for their latest Caviar Black 1TB review. They sound like typical 7200RPM disks. Personally, over the fan noise, I cannot hear the disks themselves except when seeking.

> What's the heat?

2. The 16 disks in the front run between 31-34°C. The raptors were a bit hotter, as I mentioned before, so I placed two 80mm fans on each side of the case to knock them down 8-12°C.

> What's the power consumption?

At startup it's around 540-570 watts; after everything settles down, it idles around 200 watts. Of course, the benefit here is that it's one system and not several, each with a half TiB here and there.

> Is the hotspare really needed with RAID 6?

This is the practice Data Domain and other top enterprise vendors use in their $100k+ disk systems. I want maximum protection and a lot of space, so there is only one option: RAID 6. A hot spare also helps with time-to-rebuild.

> About the 3*raptors, is it a home-made rack or a rack for the Lian Li PC343?

The case comes with 3-in-3 modules. I could put a four-drive module back there if it fits (I have not tried), but it was not needed for my goal: 16 disks plus a RAID 1, with hot spares for both. The module can go in the top or the bottom of the case.

> Isn't it too dangerous having such a big array? I heard stuff about data corruption on very large arrays.

It's possible; however, I have worked with thousands of 3ware cards and I think they are pretty reliable. In addition, I run a RAID verify once a week, plus daily short and weekly long SMART tests on all of the drives; so far, no issues to report. It could happen; remember, RAID is no excuse for backups :) I have all the important data mirrored elsewhere over the LAN daily in case the array does fail. That was also part of my decision: I did not want to use the RAID-6 for the system/OS, because if the card died you'd be totally screwed. So I have the OS on software-RAID 1 raptors, and I can keep running if the card fails.


Thanks!

I'm planning to build a small 24/7 home server like that, but with 8-12 x 1TB 5400RPM drives & an E4400 + the same mobo. I was looking at the Stacker, but this A77 seems to be the perfect case.

> however, I have worked with thousands of 3ware cards and I think they are pretty reliable.

Not for the card/HDDs; more the total capacity of the array vs. non-recoverable errors, like the comments on this 19TB rig.
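The worry about very large arrays is usually unrecoverable read errors (UREs) during a rebuild. A rough expected-error calculation; the 1-in-10^15 rate is the class of spec typically published for enterprise drives like the RE3, and both inputs should be treated as order-of-magnitude assumptions:

```python
URE_PER_BIT = 1e-15       # assumed enterprise-class spec (consumer drives: ~1e-14)
DRIVE_BYTES = 1e12        # ~1 TB per drive
SURVIVORS = 14            # drives read in full to rebuild one failure of the 15

expected_ures = SURVIVORS * DRIVE_BYTES * 8 * URE_PER_BIT
print(f"expected UREs per rebuild: {expected_ures:.3f}")   # ~0.112
# Roughly a 1-in-9 chance of hitting a URE somewhere during a rebuild; with
# RAID-6 the second parity can still recover it, which (plus backups) is the
# point of the dual-parity setup described in this thread.
```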

> I was looking at the Stacker, but this A77 seems to be the perfect case.

I like that case too, but I don't like the temps of the drives that don't get enough airflow; that is why I went with the A77.


Yeah, that was my problem with this type of case: plenty of HDD racks available but awful cooling...

I saw the newer Vxxxx revision has more 120mm fans: one at the second HDD row (the hotspot) & another in the top corner (another hotspot, where all the heat from the mobo/CPUs accumulates).

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05666.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05668.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05669.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05670.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05671.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05674.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05676.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05677.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05679.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05680.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05681.JPG

http://perso.wanadoo.fr/mondite.mastaba/bi...ge/DSC05682.JPG

Some drives are near 60°C (at the first hotspot) even with the 120x38mm Panaflo (and I removed the front BlackIce); now I leave the case open all the time with a big desk fan for HDD cooling.

I'm thinking of replacing my V2100 with this wonderful A77 (and/or replacing the 7200.8s with cooler WD GPs).

