About BradC

  1. Any hints as to its capacity? 2.25TB, 2.5TB or 3TB?
  2. Nah, it doesn't have any bad sectors at all. Thanks for the suggestion though.
  3. BradC

    Does this motherboard exist?

    Remember, x4 PCIe on that Intel board is still 1GB/s, which is damned fast! IWill boards are great and I would recommend them to anyone. Another option is Asus or Gigabyte; they both make some nice server boards.
  4. BradC

    Hitachi E7K500 vs T7K500

    Given your comments about the Kurofune drives, I would go with one of them, or some WD5000AAKS drives. I've got six of them, and I've also got 6x WD3200KS. While I can't compare them to Hitachi 500GB drives of any description, I can say they are very cool and quiet compared to Seagate 7200.9s or .10s, and they are damned fast too!
  5. I've got 6x WD5000AAKS and they are around 85MB/s max and 40MB/s minimum. HD Tune reports a 13.2ms access time. Definitely a 3x 167GB platter drive. They also don't have the old-style power connector, which pissed me off!
  6. One of my 250GB HDDs is coming up with SMART errors in HD Tune, specifically scoring 73 for 'seek error rate' when the typical value is 30. Is this a sign there is something wrong with the drive, or is it a false alarm?
  7. It doesn't really matter much which OS; the key is getting the RAID card to do it properly. I spoke to HP support today and they have been very helpful; they are trying to contact Adaptec for me to see if there is a fix for it.
  8. XP definitely supports it; I've created a dynamic volume of 2.25TB with these drives. The problem is that the card, and the software for the card, doesn't like anything over 2TB.
  9. I know it is quite a large problem for quite a few manufacturers, but Areca have solved it. It's just a matter of waiting for Adaptec to figure it out.
  10. I've found out that it is impossible to create a 2.25TB RAID 5 array on an Adaptec 2610SA RAID card. These cards were commonly installed in Dell and HP server systems and are 6-port PCI-X SATA RAID cards. I'm running 6x WD5000AAKS, which, by the way, each record 80MB/s at the start and decay down to 40MB/s in HD Tune.

    Adaptec's official answer lies in this PDF, where it basically says you need to create one array that is 2TB in size, then create another array from the leftover space on each HDD, giving (in my case) a 2TB array plus a 250GB array. Basically the same sort of thing that Intel Matrix RAID does. Once you've done that, go into Disk Management, convert the new drives to dynamic, then create a spanned volume across them so you get a single large drive on one drive letter.

    IMHO this is not an ideal solution at all, and I do want to find a better one; I would like to be able to create a single 2.25TB array without any of this dynamic disk rubbish. Does anyone out there have a solution? BTW this is the card I have
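The workaround's numbers check out if you do the arithmetic in binary units, which is roughly what the controller and Disk Management report. A quick Python sketch (the constants and helper name are my own illustration, not Adaptec's procedure):

```python
# Capacity math behind the 2TB-per-array workaround, in binary GiB.
# Constants and names are for illustration only.

DRIVES = 6
DRIVE_GIB = 465          # a "500GB" drive is roughly 465GiB
ARRAY_LIMIT_GIB = 2048   # the controller's 2TB (2048GiB) per-array ceiling

def raid5_usable(n_drives, per_drive_gib):
    # RAID 5 loses one drive's worth of space to parity.
    return (n_drives - 1) * per_drive_gib

# First array: carve just enough of each drive that RAID 5 usable space hits 2TiB.
slice_gib = ARRAY_LIMIT_GIB / (DRIVES - 1)    # ~410GiB taken from each drive
leftover_gib = DRIVE_GIB - slice_gib          # ~55GiB left on each drive

# Second array from the leftovers, then span the two in Disk Management.
second_gib = raid5_usable(DRIVES, leftover_gib)   # ~277GiB, i.e. the "250GB" array
total_gib = ARRAY_LIMIT_GIB + second_gib          # ~2325GiB, about 2.27TiB
print(round(second_gib), round(total_gib))
```

The ~2.27TiB total matches the 2.25TB dynamic volume mentioned above once rounding is accounted for.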
  11. BradC

    Question about RAID 5

    I would suggest a pair of 80GB drives in RAID 1 for boot, with 4x 500GB in RAID 10 for your data. The only way to do RAID 5 and have it actually work properly is with a PCIe or PCI-X RAID card, which it sounds like you don't want to spend the money on. Ideally I would suggest you buy an 8-port PCIe RAID card (it will work in the second PCIe slot on the P5B Deluxe motherboard) and 4x 500GB drives to start off with, configured in RAID 5; in the future you can add drives to it as you need. Areca, Promise, LSI, even Intel make good 8-port PCIe RAID 5 cards.
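For reference, the usable space of the suggested layout works out as follows (a Python sketch; the helper names are mine, and "usable" here ignores filesystem overhead):

```python
# Rough capacity sketch for the suggested RAID 1 + RAID 10 layout (decimal GB).
# Helper names are my own, not from any RAID tool.

def raid1_usable(drive_gb):
    # A mirror pair stores a full copy on each drive: usable = one drive.
    return drive_gb

def raid10_usable(n_drives, drive_gb):
    # RAID 10 mirrors pairs and stripes across them: half the raw space is usable.
    return (n_drives // 2) * drive_gb

boot_gb = raid1_usable(80)        # 2x 80GB RAID 1 boot pair -> 80GB
data_gb = raid10_usable(4, 500)   # 4x 500GB RAID 10 data set -> 1000GB
print(boot_gb, data_gb)
```

RAID 10 costs half the raw capacity, but it gets you redundancy without needing a parity-capable hardware card.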
  12. BradC

    Where, oh where, are the Terabyte drives

    I guess the eventual size depends on the platter size. Hitachi seem to be comfortable with 5 platters, WD and Seagate with only 4. Seagate are currently at 188GB per platter, so it wouldn't be much of a stretch to imagine Hitachi using 5x 200GB platters, but it would take more development time for a brand to come out with 4x 250GB platters. I doubt we will see 5x 250GB platter drives anytime soon...
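The platter arithmetic above is simple enough to check with a throwaway Python sketch (the per-platter figures are the ones speculated about in the post, not announced products):

```python
# Drive capacity is just platter count times per-platter capacity (GB).
def capacity_gb(platters, per_platter_gb):
    return platters * per_platter_gb

print(capacity_gb(5, 200))  # Hitachi-style route: 5 platters x 200GB = 1000GB
print(capacity_gb(4, 250))  # denser route: 4 platters x 250GB = 1000GB
```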
  13. You need to factor in drive price versus the total price of the system. As an example: 16x 400GB drives give you 6.4TB of space (assuming RAID 6 you get 5.6TB), while 12x 500GB drives give you 6.0TB, or 5.5TB in RAID 5.

    You can fit 12 drives into quite a few cases; a TJ-07 will do it comfortably with a few 2x 5.25" to 3x 3.5" converters, and you still get 3 spare 5.25" bays. In fact that is the setup I have: 2x Adaptec 6-channel SATA PCI-X cards from HP servers. One has 6x 250GB drives in the bottom six HDD bays of my TJ-07; the other has 6x 320GB drives in the CD bays using Lian Li EX23 drive bay converters. That still leaves three CD bays free, in which I have a DVD-RW drive, a removable SATA rack, and another 2x 5.25" to 3x 3.5" rack right up top, using the top CD bay and the empty space at the top of the case (kind of like having 8x 5.25" bays).

    I would suggest a setup like this with a single Areca 12-channel PCIe RAID card and any enthusiast-level motherboard with 2x PCIe slots, like an Asus P5W DH, P5N32-E, A8N-SLI, whatever. 12x 500GB drives in a setup like this will cost you a lot less than 16x 400GB drives, because with 16 drives you will need to go up to the next level of case size and controller card, and you will have even more case and cooling problems. I use an AcBel 550W PSU to power the motherboard, CPU (P4 3.46GHz EE) and four of the drives at the bottom of the case, and an Enermax 701AX to power the remaining drives.
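The capacity comparison in that post works out as follows (a quick Python sketch; figures are decimal TB and the function name is mine):

```python
# Usable space in a parity RAID set: raw capacity minus the parity drives' worth.
def usable_tb(n_drives, drive_tb, parity_drives):
    return (n_drives - parity_drives) * drive_tb

raid6_16x400 = usable_tb(16, 0.4, 2)  # RAID 6 loses two drives to parity
raid5_12x500 = usable_tb(12, 0.5, 1)  # RAID 5 loses one drive to parity
print(round(raid6_16x400, 1), round(raid5_12x500, 1))  # 5.6 5.5
```

So the 12-drive build gives up only about 0.1TB of usable space versus the 16-drive build, while needing fewer ports, a smaller case, and less cooling.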