vorpel

Member
  • Content Count: 25
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About vorpel
  • Rank: Member
  1. Contacted Areca support here in the US and Benjamin was a great help. After updating the firmware on my ARC-1260, I was able to see the number of drives in each RaidSet. From there I deleted the bad RaidSet with only the new drive, marked the new drive as a hot spare, and the array started rebuilding.
  2. Hello. I have an Areca ARC-1260 16-port controller with 15 (currently) Seagate Barracuda 1.5TB hard drives (all drives are model "ST31500341AS", and this array has been running for 3 years) in a single RAID-6 raid set. This morning one of my drives dropped from the raid set. I removed the drive and validated that it was bad with SeaTools. I have an extra Seagate 1.5TB drive (same model number) that I put into the system and connected to channel 5 (where the drive failed this morning). Instead of the drive showing up as free (so I could then mark it as a hot spare and the array would start rebuilding), the drive shows up as 1 drive of 15 in a second raid set that has the exact same name as the real raid set - "Raid Set # 00". From what I can tell, I will need to delete the 1-drive raid set; however, when I go into "Raid Set Functions", "Delete Raid Set", all I see is both raid sets, and they have the same name... Is there any way to fix this? I could guess which of the two identically named raid sets is the one with just 1 drive and delete it, but if I guess wrong I will lose 17TB of data... Under the "Information", "RaidSet Hierarchy" option, I see the correct array with 14 drives (degraded) as the first entry, and the failed raid set with the new drive I just put in as the second in the list. Any help would be greatly appreciated. If I did delete the wrong raid set, is there any way to recover it without losing data? Thank you so much for reading and for any help you can provide...!
  3. vorpel

    TLER / CCTL

    Here are some new drives to add to the list:

    Seagate  model=ST33000651AS  Size=3.0TB  RPM=7200  Revision=?  Firmware=CC45  Available=YES  Default=Disabled  Reboot=Stay  Powercycle=Lost
    Seagate  model=ST3000DM001   Size=3.0TB  RPM=7200  Revision=?  Firmware=CC96  Available=NO

    I'm not happy with these new Seagate 3TB drives (model ST3000DM001), as smartctl -l scterc *drive* gives "Warning: device does not support SCT Error Recovery Control command". I hope this helps.
  4. Update: I just ran smartctl against the Seagate ST3000DM001 drives and they report: "Warning: device does not support SCT Error Recovery Control command". The Seagate Barracuda XT drives do support ERC, but it is disabled by default. I'm not sure if my Areca ARC-1120 card needs to have ERC set or not. Either way I guess I won't be using the new ST3000DM001 drives in anything other than Raid 0 or Raid 1. Thanks.
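The ERC check described above can be scripted rather than eyeballed. A minimal sketch in Python, keying off the warning string quoted in the posts; the sample outputs below are illustrative stand-ins, not captured from real hardware:

```python
# Sketch: decide whether a drive supports SCT Error Recovery Control (ERC)
# by inspecting the text that `smartctl -l scterc /dev/sdX` printed.
# The sample outputs below are assumptions for illustration only.

def supports_scterc(smartctl_output: str) -> bool:
    """Return False if smartctl reported the ERC command as unsupported."""
    return "does not support SCT Error Recovery Control" not in smartctl_output

# Illustrative outputs for the two drive families discussed in the thread:
barracuda_xt = "SCT Error Recovery Control:\n  Read: Disabled\n  Write: Disabled"
st3000dm001 = "Warning: device does not support SCT Error Recovery Control command"

print(supports_scterc(barracuda_xt))   # True: ERC present (though disabled by default)
print(supports_scterc(st3000dm001))    # False: no ERC at all
```

This matches the poster's observation: the Barracuda XT reports ERC state (disabled), while the ST3000DM001 rejects the command outright.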
  5. I am in the process of setting up a new RAID-6 array using an Areca ARC-1120 raid card and 8 Seagate 3TB hard drives. Currently I have 2x 16-drive Seagate Barracuda XT 1.5TB arrays (both with Areca ARC-1260 adapters) that have been functioning fine for going on 2 years. Based on this, and on how hard drive prices have skyrocketed, I purchased 8 of the Seagate GoFlex 3TB external hard drives during the BF sales. Upon opening 6 of the cases (2 won't be delivered until later today) I found 4 Barracuda XT 3TB drives and 2 of the new Barracuda ST3000DM001 3TB drives.

     Having had success before with the Barracuda XT drives in RAID 6, I didn't think it would be a problem setting up the new array, but the "new" 3TB drive has given me pause, and I need to understand whether there are any firmware differences that would make using them in a RAID 5/6/10 array a problem. Does anyone know of any issues with this "new" Seagate 3TB drive running in RAID 5/6/10? I do understand that both Areca and Seagate don't "support" RAID 5/6/10 on these drives and recommend the enterprise drives, but with hard drive prices so high right now these drives are all I can afford for now. Since this isn't a "green" drive, my hope is that there isn't anything like WD's TLER situation in these new Seagate drives that will make RAID 5/6/10 arrays crap out like the WD Green drives do. Any help would be greatly appreciated. Thanks! -DC
  6. I need to trace the connections to determine which drive is connected to which port, but here are the temps:

     HDD_01 - 36C   HDD_02 - 47C   HDD_03 - 35C   HDD_04 - 47C   HDD_05 - 34C
     HDD_06 - 43C   HDD_07 - 35C   HDD_08 - 47C   HDD_09 - 37C   HDD_10 - 34C
     HDD_11 - 42C   HDD_12 - 41C   HDD_13 - 42C   HDD_14 - 41C   HDD_15 - 38C

     I don't know how this compares with others, but I don't think it is too bad, considering I'm using the stock fans plus 2 additional 8cm fans in the back of the case, and my environment (a mechanical room) doesn't have its own cooling, so it is the same temp as the house. As for the other Lian Li case, it does look nice, but from what I saw you will only get 15 bays, and I'm not sure about the back top part of the case - can it hold additional drives? As for build quality, Lian Li is excellent! Have a good one, DC
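A quick way to sanity-check a temperature dump like the one above is to summarize it. A small Python sketch using the fifteen readings from this post:

```python
# Summarize the 15 drive temperatures (degrees C) reported in the post,
# in controller port order HDD_01 through HDD_15.
temps = [36, 47, 35, 47, 34, 43, 35, 47, 37, 34, 42, 41, 42, 41, 38]

coolest = min(temps)
hottest = max(temps)
average = sum(temps) / len(temps)
spread = hottest - coolest

print(f"min={coolest}C max={hottest}C avg={average:.1f}C spread={spread}C")
# min=34C max=47C avg=39.9C spread=13C
```

The 13C spread between the coolest and hottest bays is the kind of detail worth checking once the port-to-bay mapping is traced, since the three 47C drives may share one poorly ventilated cage.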
  7. I am using a Lian Li PC-V2000B PLUS II. I got the first one from Newegg.com and the second from Mwave. It looks like Newegg has these back in stock without a PSU for $199. I've been building systems for 14 years and this is by far my favorite case. Here is a picture: I'm using an Areca ARC-1160 RAID-6 card. Thanks! What kind of case/tower are you using? I ended up using drive 16 as a hot spare, so the timing was a little off. It took right at 8 hours to create the RAID-6 array in the Areca ARC-1160 card BIOS/setup, and then the format in Windows Server 2008 took 91-92 hours to complete. Total formatted capacity: 17.7TB. Thanks!
  8. Finally got this raid array up and working...! Turns out that the firmware issue on the Seagate 1.5TB drives was causing the problem. Got all 16 drives updated to the SD1A firmware, and the raid array was created just fine and has been running without any issues for 7 days. Thanks to everyone who helped out - I sure learned a lot doing this! -DC
  9. I will check that out - thank you very much! -DC
  10. Update: The quick format does work and I am presented with a 19TB formatted volume. I copied 10G of video files to the new drive and they played just fine locally and across the network. I am currently running the Error-checking tool built into Windows 2008 Server, but I don't think that will be as thorough as I would like to see. Any suggestions on tools to check the volume would be greatly appreciated. -DC
  11. Windows 2008 Server did not ask me if I wanted to use GPT - it just assumed it based on the size. I say this because I originally tried this array with Windows Vista SP1 (32-bit), and it did ask if I wanted GPT, which I tried. The failure to format the drive under Vista was my fault, as I didn't specify the allocation unit size (the default for NTFS was too small, IIRC). I can try to create a smaller volume and format it, but my main question, besides looking for advice on what I am trying to do, is this: can I do a quick format and then test the volume with a utility that basically does what the normal full format command does (IIRC it validates all of the disk - it doesn't just create the FAT table, or whatever it is now with GPT)? Thanks everyone for replying so far - I am very grateful! -DC
  12. The Areca BIOS did recognize the 1.5TB drives and was able to create the single 16-disk RAID 6 array (19.1TB). I have not checked the BIOS version level for the Areca 1160 card - I'll do that later tonight. Thanks! Is this a personal preference, or something that I should really look at not doing? I wanted the 16-drive array to minimize the overhead that the array would need, and went with RAID 6 so that if I do lose a drive, I can still be protected while its replacement is rebuilt. Thanks! -DC
  13. Help! I am trying to get my new raid array up and running.

      Goal: have the 16 1.5TB drives set up in a RAID-6 array to provide the maximum amount of RAID-6-protected disk.

      Setup: Areca 1160 16-port SATA raid controller, 16 x 1.5TB Seagate hard drives, Windows 2008 Server (32-bit), on an Asus P5N-EM HDMI mobo. I used the Areca beta Windows 2008 driver - 6.20.00.15_80129. I created the raid array in the Areca BIOS to use all 16 drives and RAID 6. In Windows 2008 I created a "New Simple Volume" and tried to format it NTFS with a 32K allocation unit size.

      Problem: I tried to format the 19.1TB volume twice, and the system locked up each time. Once it was a blue screen (I didn't get the full message, but it was an IRQL_NOT_LESS_OR_EQUAL), and the other time it was just a blank screen with a mouse cursor (the format takes 96 hours).

      Question: Can I do a quick format of the partition to get it online, and then use a tool (what tool I don't know) to do what a non-quick format would do? Any help or pointers would be greatly appreciated. -DC
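The capacity figures quoted in these posts line up with RAID-6 arithmetic once the decimal-vs-binary terabyte difference is accounted for. A minimal sketch, assuming drives are sold in decimal terabytes (1 TB = 10^12 bytes) while Windows reports binary terabytes (2^40 bytes):

```python
# Sketch: usable capacity of a RAID-6 array. RAID 6 spends two drives'
# worth of space on parity, so n drives yield (n - 2) drives of data.
# Assumption: a "1.5TB" drive holds 1.5 * 10**12 bytes (decimal TB),
# while the OS reports capacity in binary units (1 TiB = 2**40 bytes).

def raid6_usable_tib(num_drives: int, drive_tb: float) -> float:
    data_drives = num_drives - 2               # two drives go to parity
    usable_bytes = data_drives * drive_tb * 10**12
    return usable_bytes / 2**40                # as Windows would report it

print(round(raid6_usable_tib(16, 1.5), 1))  # all 16 drives in the array: 19.1
print(round(raid6_usable_tib(15, 1.5), 1))  # drive 16 kept as hot spare: 17.7
```

These reproduce both numbers from the thread: the 19.1TB array created here, and the 17.7TB formatted capacity reported after one drive was reassigned as a hot spare.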
  14. Woohoo!!! The chkdsk worked like a charm. All the data is there (1.2TB). Thanks for the suggestions and all the help! -DC
  15. Thank you so much for the information. When I did the rebuild there were no extra options (the nVidia raid software is pretty primitive). I will definitely reboot and run the chkdsk. I'll post the results when it is done. Thanks again! -DC