About PrimeMover
  1. I suspect that the issue here lies with the Adaptec controller, not the SSD. The Intel SSD controller does have a DRAM IC (16MB on G1, 32MB on G2), but it is NOT used for user data. Intel uses it for internal operations, including their wear leveling algorithm. Given that, data loss from removing power from the drive itself shouldn't be possible.
  2. PrimeMover

    WD4000YR vs WD5002ABYS

    Yes, how dare WD replace your ancient 400GB RE2 drive with a newer, faster, bigger 500GB RE3. You don't have to rearrange anything. You can simply leave the extra 100GB of capacity unused. Your decision to rearrange is pure choice. I also recently RMA'd a 4000YR and got a 5002ABYS in return. I was thrilled that WD opted to send me a bigger and faster drive (of the same RAID Edition lineage) instead of making me wait who knows how long for a repaired lesser 4000YR to become available. Incredible.
  3. PrimeMover

    Looking for alternative DNS servers

    Unless I'm mistaken, you can opt out of the redirection services at OpenDNS. Otherwise, you can use the long-public GTEI servers.
  4. PrimeMover

    Drive cage with no connections?

    I nearly picked up one of these for a FreeNAS/OpenFiler project I'm working on, reusing 8x WD 250GB PATA drives. Ended up just re-using an internal HD rack from an old case.
  5. I know it isn't as big of a deal as it once was, but I'd be fairly worried about odd compatibility issues with the mixed drives. They certainly aren't qualified together by any adapter manufacturer. I couldn't help but suspect that issue first if there was any odd behavior.
  6. I typically use Bart's Stuff Test to validate drives: it does a sequential write, a sequential read with compare, a random write, a random read with compare, etc. I let it run for 24 hours or one full pass, whichever finishes later.
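The basic idea of such a burn-in pass can be sketched roughly like this. This is a toy stand-in for what a tool like Bart's Stuff Test does (real tools operate on the raw device and add random-access phases; here it's just an ordinary file, and `verify_pass` is a hypothetical name):

```python
import os

def verify_pass(path, blocks=64, block_size=64 * 1024):
    """One sequential write pass followed by a sequential read-back compare."""
    data = [os.urandom(block_size) for _ in range(blocks)]
    with open(path, "wb") as f:                  # sequential write
        for chunk in data:
            f.write(chunk)
    with open(path, "rb") as f:                  # sequential read with compare
        for i, expected in enumerate(data):
            if f.read(block_size) != expected:
                return False, i                  # mismatch at block i
    return True, blocks

# ok, n = verify_pass("burnin.tmp")   # repeat until 24h or one full pass
```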
  7. The 16 drives don't bother me too much in and of themselves, but Seagate's crummy BER sure does. With an unrecoverable read error rate of 1 in 10^14 bits on Seagate's Barracuda drives (the .9, .10, and .11; I didn't look back any older), you'll statistically hit one read error for every 12.5TB read. You can't even read the entire 19TB array without statistically encountering an error. My understanding of this spec is that it isn't something your parity can correct--though it could be detected during a consistency check. During normal read operations, the parity isn't computed unless a drive signals it's unable to read; I believe in that case the drive just returns invalid data. Others might be able to offer a more definitive take on this. But anyway, back to the issue at hand...does it work when you create a smaller array of, say, 2TB? 6TB? You are using GPT, correct?
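The 12.5TB figure follows directly from the spec, and the 19TB array size from the post gives the expected error count per full-array read (a quick back-of-the-envelope check):

```python
# Unrecoverable read error rate: 1 error per 10^14 bits read.
bits_per_error = 10**14
tb_per_error = bits_per_error / 8 / 10**12    # bits -> bytes -> terabytes
print(tb_per_error)                           # 12.5 TB between errors

array_tb = 19
expected_errors = array_tb / tb_per_error     # expected errors per full read
print(round(expected_errors, 2))              # 1.52
```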
  8. PrimeMover

    Mounting ISO's?

    MagicDisc is free, very lightweight, and does not require a reboot to install and use. Last I tried, it didn't have a signed driver for Vista x64, but there have been several releases since then...might have been resolved.
  9. My guess is that you're creating an MBR partition table when you need a GPT for disks > 2TB. While I have very little experience with Linux, a quick Google suggests that you need to use parted, as cfdisk can't work with GPT disks.
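The 2TB ceiling comes straight from the MBR format: partition start and length are stored as 32-bit sector counts, so with classic 512-byte sectors the maximum addressable size works out to 2 TiB:

```python
sector_size = 512            # bytes, the classic sector size
max_sectors = 2**32          # MBR stores sector counts in 32-bit fields
max_bytes = sector_size * max_sectors
print(max_bytes / 2**40)     # 2.0 TiB -- anything larger needs GPT
```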
  10. Sorry, I copied the links from my original post in another forum and didn't notice that the link was munged. And since SR has a lame post-editing horizon, here are the correct links:
    1. Purchased an external 9-bay 5.25" enclosure
    2. Purchased a couple of 3-in-2 hot-swap bays
    3. Purchased a kit that included two 4x cables (1m) and an 8-internal-SATA-to-2-external-4x-multilane L-bracket
    4. Purchased two 4x-multilane-to-4-SATA Centronics-mount brackets
  11. When I expanded my Areca array from 6 drives to 12, I decided to do the expansion externally. I considered doing what you are--just using discrete cables for each drive--but I really don't like the idea of having that many fragile connections and cables. I opted for a 4x multilane setup, and I did find an 8-lane (2x 4x ML) single-slot L-bracket. Here's the entire list of what I bought:
    1. Purchased an external 9-bay 5.25" enclosure
    2. Purchased a couple of 3-in-2 hot-swap bays
    3. Purchased a kit that included two 4x cables (1m) and an 8-internal-SATA-to-2-external-4x-multilane L-bracket
    4. Purchased two 4x-multilane-to-4-SATA Centronics-mount brackets
    Probably more expensive than your solution, but it hasn't given me any trouble--aside from needing to reduce the signaling speed to 1.5Gb/s...though I later found that to be the hot-swap enclosures themselves, not necessarily the cable length. Not like it matters anyway.
  12. It did? The Abit IP35 Pro, the Gigabyte GA-X38-DQ6, and more than a few others are reported to work there. I personally know at least one of them works, as I have a GA-X38-DQ6 and an ARC-1680; an ARC-1230 is also working on my GA-P35-DS3P. But the issue with RAID card support on some motherboard models with Intel chipsets appears to have been mostly--if not completely--solved. There's a thread on 2cpu that details covering some pins on the PCIe connector that relate to the SMBus, and it's full of replies reporting success...
  13. PrimeMover

    Areca 1220 Resize Array Question

    Yes, the Areca supports OCE (Online Capacity Expansion). You would have two options: either make a new volume set in the free space after your last drive finishes rebuilding, or expand the existing volume set.

    Keep in mind that you need to use LBA64 (I don't recall if this is set at the RAID set or volume set level) if you exceed 2.0TB with a single volume set; I think the Areca may set this automatically if you extend past 2.0TB. You'll also need a GPT (not MBR) disk partition format--which must be chosen while the disk is unpartitioned. Finally, you can't be using 32-bit XP--it doesn't support LBA64 or GPT disks. Windows Server 2003 SP1+, XP x64, and Windows Vista (on the Microsoft side) do support both, though.

    If you do choose to extend the volume set, you'd also need a mechanism to expand the filesystem once the physical expansion has completed. Windows Vista / Server 2008 can do this natively; prior to that, you'd need some sort of disk utility. Other operating systems may have other mechanisms available as well. Of course, you could always simply make another partition...but if you're going to all the trouble of expanding the Areca volume set, that seems somewhat counterproductive.
  14. Well, according to Kingston, the 6-rank limitation only applies at DDR400. It's certainly possible that forcing DDR333 isn't the same as having actual DDR333 sticks in the eyes of the memory controller, but I really had expected that to work. I'm assuming you've run standard memory diagnostics (Memtest86+) on each pair individually to validate them? Interestingly, Kingston lists the S2895 as capable of a maximum of 12GB at DDR400. There are numerous postings around the net discussing the Rev E and later Opterons' limitation of 6 ranks at DDR400, so it looks like it won't matter what board you use. However, the "real" limit will vary based on the exact individual board, RAM, and CPU. Some setups may be able to handle more loading than others, which may be why you can run one CPU at 8 ranks but not both at the same time.
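As a sanity check on the rank math (assuming double-rank, i.e. double-sided, DIMMs, which is typical for large registered modules; `ranks_ok` is just an illustrative helper):

```python
def ranks_ok(dimms, ranks_per_dimm=2, limit=6):
    """True if the total rank count per memory controller stays within the limit."""
    return dimms * ranks_per_dimm <= limit

# Three double-rank DIMMs per CPU (6 ranks) fit the DDR400 limit;
# four (8 ranks) exceed it, which matches the symptoms described.
print(ranks_ok(3))   # True
print(ranks_ok(4))   # False
```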
  15. The S2895 isn't going to do any better either. The limit looks to be due to the memory controller on the Opterons themselves.