vorpel

Help needed - trying to format 19TB in Windows Server 2008


Help! I am trying to get my new raid array up and running.

Goal: have the 16 1.5TB drives set up in a RAID 6 array to provide the maximum amount of RAID 6-protected disk space.

Setup: Areca ARC-1160 16-port SATA RAID controller, 16 x 1.5TB Seagate hard drives, Windows Server 2008 (32-bit), on an Asus P5N-EM HDMI mobo. I used the Areca beta Windows 2008 driver - 6.20.00.15_80129. I created the RAID array in the Areca BIOS using all 16 drives in RAID 6. In Windows Server 2008 I created a "New Simple Volume" and tried to format it NTFS with a 32K allocation unit size.

Problem: I tried to format the 19.1TB volume twice - the system locked up each time. Once it was a blue screen (I didn't get the full message, but it was an IRQL_NOT_LESS_OR_EQUAL), and the other time just a blank screen with a mouse cursor (the format takes 96 hours).

Question: Can I do a quick format of the partition to get it online, and then use a tool (what tool I don't know) to do what a non-quick format would do?

Any help or pointers would be greatly appreciated.

-DC


The 16 drives don't bother me too much in and of themselves, but Seagate's crummy BER sure does. With a BER of 1 in 1x10^14 bits on Seagate's Barracuda drives (7200.9, .10, and .11; I didn't look back further), you'll statistically have one read error for every 12.5TB read. You can't even read the entire 19TB array without statistically encountering an error. My understanding of this spec is that it isn't something your parity can correct--though it could be detected during a consistency check. During normal read operations, the parity isn't computed unless a drive signals it's unable to read; I believe in this case the drive just returns invalid data. Others might be able to offer a more definitive take on this.
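For reference, here's the arithmetic behind that 12.5TB figure as a quick sketch (the 1-in-10^14 spec is the number quoted above):

```python
# Where the "one read error per 12.5TB" figure comes from: a BER spec of
# one unrecoverable error per 1e14 bits read.
bits_per_error = 1e14
tb_per_error = bits_per_error / 8 / 1e12   # bits -> bytes -> vendor TB
print(f"~1 error per {tb_per_error:.1f} TB read")                          # 12.5

# Expected errors from reading the whole 19.1TB array end to end once:
print(f"Expected errors per full array read: {19.1 / tb_per_error:.2f}")   # ~1.53
```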

But anyway, back to the issue at hand...does it work when you create a smaller array of say...2TB? 6TB? You are using GPT, correct?
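(GPT matters at this size because MBR partition tables store sector counts in 32-bit fields; a quick sketch of the limit, assuming the usual 512-byte sectors:)

```python
# Why a 19TB volume requires GPT: MBR uses 32-bit sector counts, so with
# 512-byte sectors the largest partition it can describe is:
max_mbr_bytes = 2**32 * 512
print(f"MBR partition limit: {max_mbr_bytes / 2**40:.0f} TiB")  # 2 TiB, far below 19TB
```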


The Areca BIOS did recognize the 1.5TB drives, and was able to create the single 16-disk RAID 6 array (19.1TB). I have not checked the BIOS version level on the Areca ARC-1160 card - I'll do that later tonight.

Thanks!

http://www.microsoft.com/whdc/device/storage/GPT_FAQ.mspx

Hmm, according to that you should be OK, although I'd be curious whether switching to x64 would solve your problem.

Does the Areca handle volumes that big? I think it does, but it's a good thing to check, especially since the 1.5TB Seagates aren't yet validated...

And for starters, you shouldn't have 16 drives in one RAID 6 array.....

Is this a personal preference, or something that I should really look at not doing?

I wanted the 16-drive array to minimize the overhead the array would need, and went with RAID 6 so that if I do lose a drive, I can still be protected while its replacement is rebuilt.

Thanks!

-DC


Windows Server 2008 did not ask me if I wanted to use GPT - it just assumed it based on the size. I mention this because I originally tried this array with Windows Vista SP1 (32-bit), and it did ask if I wanted GPT, which I tried. The failure to format the drive under Vista was my fault, as I didn't specify the allocation unit size (the NTFS default of 4K clusters tops out at a 16TB volume, so a 19TB volume needs at least 32K, IIRC).
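For reference, the allocation-unit arithmetic behind that: NTFS addresses clusters with 32-bit counts, so the maximum volume size scales with the cluster size chosen at format time (a sketch):

```python
# NTFS max volume size = 2^32 clusters x cluster size.
for cluster_kib in (4, 32, 64):
    max_tib = 2**32 * cluster_kib * 1024 / 2**40
    print(f"{cluster_kib:>2}K clusters -> {max_tib:>5.0f} TiB max volume")
# 4K -> 16 TiB (too small for this array); 32K -> 128 TiB; 64K -> 256 TiB
```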

I can try creating a smaller volume and formatting it, but my main question - besides looking for advice on what I am trying to do - is: can I do a quick format and then test the volume with a utility that basically does what a normal full format would do? (IIRC the full format validates the entire disk, rather than just writing the file system structures.)

Thanks everyone for replying so far - I am very grateful!

-DC

- can I do a quick format and then test the volume with a utility
In practice, if your OS or drivers don't like a volume that big, it will crash regardless - it doesn't matter whether you quick format or not.

...blue screen (IRQ Not Less or Equal)

Usually, those blue screens come from bad drivers...

It reminds me of how, to use the NVIDIA RAID driver on the boot array under Win2k3, you HAD to create a customized installation CD (with nLite).


Update:

The quick format does work, and I am presented with a 19TB formatted volume. I copied 10GB of video files to the new drive and they played just fine, both locally and across the network. I am currently running the error-checking tool built into Windows Server 2008, but I don't think that will be as thorough as I would like.

Any suggestions on tools to check the volume would be greatly appreciated.

-DC


I typically use Bart's Stuff Test to validate drives...it does a sequential write, seq. read with compare, random write, random read with compare, etc. I let it run for 24 hours or 1 full pass, whichever occurs later.
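If you'd rather roll your own quick pass, here's a minimal sketch of the same write-then-verify idea; the drive letter, file name, and sizes are assumptions for illustration, and it only exercises as much of the volume as you let it write:

```python
# Minimal sequential write + read-back-compare pass, the same idea the
# test tool above uses. Writes a deterministic pattern to a scratch file
# on the volume under test, then reads it back and compares.
import os

SCRATCH = r"E:\burnin.dat"       # assumed drive letter of the new volume
CHUNK = 4 * 1024 * 1024          # 4 MiB per I/O
CHUNKS = 2560                    # ~10 GiB total; raise to cover more of the volume

def pattern(i: int) -> bytes:
    # Deterministic per-chunk pattern so the verify pass can regenerate it.
    return i.to_bytes(8, "little") * (CHUNK // 8)

with open(SCRATCH, "wb") as f:
    for i in range(CHUNKS):
        f.write(pattern(i))

with open(SCRATCH, "rb") as f:
    for i in range(CHUNKS):
        if f.read(CHUNK) != pattern(i):
            print(f"MISMATCH in chunk {i}")
            break
    else:
        print("All chunks verified OK")

os.remove(SCRATCH)
```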


I will check that out - thank you very much!

-DC

Is this a personal preference, or something that I should really look at not doing?

If this data matters to you, you might be playing with fire and not realize it :)

If you want the most foolproof big array with great performance, you want RAID 10. Any other RAID type can be surprisingly unprotective, especially with a large array, especially with disks all of the same type, especially with an old array..... and especially if you aren't an expert and don't have an IT team backing you up or data backups :)

Basically, unless you know exactly what you're doing, I'm telling you that the chances of that 16-drive RAID 6 array completely breaking are higher than you think :) But if this is for crap data then it doesn't matter.
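To put a rough number on that risk, here's a sketch using the 1-per-12.5TB BER figure from earlier in the thread; it's a crude Poisson model that ignores error clustering, and note that RAID 6's second parity can absorb a URE during a single-drive rebuild, which is exactly where RAID 5 falls over:

```python
import math

# Rough odds of hitting at least one unrecoverable read error (URE) while
# reading a given amount of data, modeled as a Poisson process with the
# 1-error-per-12.5TB rate quoted earlier in the thread.
def p_at_least_one_ure(tb_read: float, tb_per_ure: float = 12.5) -> float:
    return 1 - math.exp(-tb_read / tb_per_ure)

# A single-drive rebuild must read the 15 surviving 1.5TB drives end to end:
print(f"P(>=1 URE during rebuild): {p_at_least_one_ure(15 * 1.5):.0%}")  # ~83%
```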


Finally got this RAID array up and working...! It turns out the firmware issue on the Seagate 1.5TB drives was the cause. I got all 16 drives updated to the SD1A firmware, and the RAID array created just fine and has been running without any issues for 7 days.

Thanks to everyone who helped out - I sure learned a lot doing this!

-DC


What kind of case/tower are you using?


I am using a LIAN LI PC-V2000B PLUS II. I got the first one from Newegg.com and the second from Mwave. It looks like Newegg has these back in stock without PSU for $199. I've been building systems for 14 years and this is by far my favorite case.

Here's a picture:

[photo of the case]

I'm using an Areca ARC-1160 RAID 6 card.

Thanks!

I want to know if it really took 96 hours to format.

I ended up using drive 16 as a hot spare, so the timing was a little off. It took right at 8 hours to create the RAID 6 array in the Areca ARC-1160 card's BIOS/setup, and then the format in Windows Server 2008 took 91-92 hours to complete. Total formatted capacity: 17.7TB.

Thanks!
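Those numbers hang together, for what it's worth; a quick sketch (the drive counts and times are from this post, and Vista/Server 2008's full format zero-fills the whole volume, which is why it takes days):

```python
# Sanity-check the capacity and format-time figures in this post.
# RAID 6 keeps two drives' worth of parity; one more drive is the hot spare.
drives_total, hot_spares, parity = 16, 1, 2
drive_tb = 1.5                                    # vendor TB = 10^12 bytes

usable_bytes = (drives_total - hot_spares - parity) * drive_tb * 1e12
print(f"Usable: {usable_bytes / 2**40:.1f} TiB")  # ~17.7, as reported

# The full format zero-fills the volume, so 91-92 hours implies roughly:
hours = 91.5
print(f"Implied write rate: {usable_bytes / (hours * 3600) / 1e6:.0f} MB/s")  # ~59
```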

Very impressive. How are the temps on the drives in the middle? Also, what is your opinion of the new Lian Li case?

http://www.newegg.com/Product/Product.aspx...N82E16811112175

One could fit 16 (+2 in the back?) drives in that one as well. I like the case you mentioned, but don't the temps on the drives in the back get a little warm?

I need to trace the connections to determine which drive is connected to which port, but here are the temps:

HDD_01 - 36°C
HDD_02 - 47°C
HDD_03 - 35°C
HDD_04 - 47°C
HDD_05 - 34°C
HDD_06 - 43°C
HDD_07 - 35°C
HDD_08 - 47°C
HDD_09 - 37°C
HDD_10 - 34°C
HDD_11 - 42°C
HDD_12 - 41°C
HDD_13 - 42°C
HDD_14 - 41°C
HDD_15 - 38°C

I don't know how this compares with others, but I don't think it's too bad considering I'm using the stock fans plus two additional 8cm fans in the back of the case, and my environment (a mechanical room) doesn't have its own cooling, so it's the same temperature as the rest of the house.

As for the other Lian Li case: it does look nice, but from what I see you will only get 15 bays, and I'm not sure about the back top part of the case - can it hold additional drives? As for build quality, Lian Li is excellent!

Have a good one,

DC

