Steve Snyder

Member
  • Content Count

    128
  1. This is a real disappointment. I've found both RE2 and RE4 drives to be very reliable in a RAID10 configuration, and was looking forward to a performance improvement. It's been 3.5 years since the 2TB RE4 was released, yet the only performance gain to be seen with the new 4TB RE4, at least according to SR's testing, is in 128K writes (queue depth == 16). I'm not asking for solid-state performance in spinning media, but *some* interest in improving performance in bulk storage would be nice. Sigh.
  2. I've had 4 WDC RE4 disks in a RAID10 configuration since shortly after they came out. I've been satisfied with them, and with the RE2 before them, so I went to WD's website to find out what they've produced in the intervening years. Answer: apparently nothing. I'm looking for better performance (STR and/or IOPS) than I can get with the RE4. The replacements, if any, will also be used in RAID10. Any recommendations? Thanks.
  3. Steve Snyder

    Same model with differing selftest times?

    I should have reported the context in my original post. The times for the extended self-test are what each drive reports (via the Linux smartctl utility) prior to the actual test, so the estimates shouldn't be affected by concurrent access (I think). Also, the disks are being queried and tested offline. That is, none of the disks has a mounted filesystem on any partition, so the only access to each disk should be smartctl's queries against the physical drive itself. I'm not terribly worried about the disparity of self-test times, but I am a little concerned. These 4 disks are intended for use in a RAID10 configuration and I'd prefer they be as alike as possible. (Testing is being done as JBOD prior to assembling the array.) Thanks for the responses.
  4. I've got 4 1.5TB WD RE4 drives (model WD1503FYYS-02W0B0), all with the same firmware revision (01.01D01). So why do the self-test times differ between the drives? For the "Extended self-test" duration these times are reported:

     Disk #1: 240 minutes
     Disk #2: 222 minutes
     Disk #3: 218 minutes
     Disk #4: 223 minutes

     All disks are of equal age, with equal power-on times. What variable would make these "identical" drives require different durations for their self-tests? Thanks.
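     For what it's worth, the disparity above is fairly small in relative terms; a quick check of the numbers from the post (the disk labels are just placeholders, not actual device names):

     ```python
     # Extended self-test durations (minutes) as reported via smartctl.
     times = {"disk1": 240, "disk2": 222, "disk3": 218, "disk4": 223}

     mean = sum(times.values()) / len(times)          # average duration
     spread = (max(times.values()) - min(times.values())) / mean

     print(f"mean = {mean:.1f} min, spread = {spread:.1%}")
     ```

     The worst-case difference works out to roughly 10% of the mean, i.e. within the kind of variation grown-defect lists and per-drive calibration could plausibly cause.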
  5. Steve Snyder

    Effect of platter count

    The RE4 (non-GP) was announced over a year ago with great fanfare, yet availability of the drives is still spotty at best. I've been wondering if WD is having problems manufacturing them.
  6. I'm in need of a new high-quality DVD burner. Simple, right? Not really, because I need it to be an IDE drive with a 4-pin Molex power connector. Nearly all contemporary burners are SATA, which I can't use. I'm guessing I could find adapters to use a SATA drive on an IDE interface, but I'd rather avoid having to buy and use them. So... what drives are considered good these days with IDE/Molex connectors? I don't care about Blu-Ray, I just want quality burning of DVDs and CDs. Thanks.
  7. It's now been nearly 3 months since the WD RE4 (not RE4-GP) was announced. So when will this mythical beast (model: WD2003FYYS) actually be available for purchase?
  8. what is "TR"? http://techreport.com/
  9. Hello. I've been very happy with the stability of my 4xRE2 disks in a (Linux software) RAID10 configuration. The performance of the array, though, is not so great. Or at least it has come to seem that way after 20 months of use. I'm wondering what kind of performance improvements I would see by replacing the RE2 disks with the newer RE4-GP. (I don't really care about the energy efficiency, but there seems to be no non-GP drive of this generation.) Can anyone point to a comparison of the RE2 vs the RE4-GP? All I can find is RE3-GP vs. RE4-GP. Thanks.
  10. Steve Snyder

    Hardware RAID VS Software RAID

    I've been very happy for the last 18 months with 4xRE2 in a software (Linux) RAID 10 array. Not the fastest storage on Earth, I'll grant you, but utterly stable and reliable. Just another data point.
  11. Steve Snyder

    Is enabling AHCI worth it?

    It is worth it only if you want/need the features that AHCI provides:

    1. More than 4 SATA hard disks
    2. Native Command Queuing (NCQ)
    3. (possibly) Fake RAID

    The general consensus seems to be that NCQ is generally a winner for servers and a loser for desktops. FRAID is regarded as just evil. As far as I know, the only downside to having AHCI is if you've previously installed Windows in non-AHCI (my BIOS calls this Compatibility) mode; Windows doesn't take kindly to the switch. I generally run in AHCI mode, but have seen no problems (other than 2 of my hard disks not being seen) in Compatibility mode.
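    On the NCQ point, one quick way to see whether it is actually in effect on a Linux box is to look at the device's reported queue depth. A minimal sketch, assuming the usual sysfs layout (`/sys/block/<disk>/device/queue_depth`; the path and the depth-of-1 rule of thumb are assumptions, so check your kernel's docs):

    ```python
    from pathlib import Path

    def ncq_active(depth: int) -> bool:
        """AHCI/NCQ allows up to 32 outstanding commands (the depth is
        typically reported as 31); legacy/Compatibility mode reports 1."""
        return depth > 1

    def disk_queue_depth(disk: str) -> int:
        # e.g. disk = "sda"; sysfs path assumed, not verified on all kernels
        return int(Path(f"/sys/block/{disk}/device/queue_depth").read_text())

    print(ncq_active(31))  # AHCI with NCQ
    print(ncq_active(1))   # Compatibility/IDE mode
    ```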
  12. Steve Snyder

    WD 500GB RE2 drives

    Just another data point: I've had 8 of these drives running 24/7 for about 13 months with no problems at all.

    4 x WD3201ABYS-01B9A0 in Linux software RAID10
    4 x WD5001ABYS-01YNA0 attached to an Areca ARC-1120 (RAID10)

    Bad power supply? Drives running too hot for too long? (The disks in my Linux RAID are running at 33C right now.) Maybe too many start/stop cycles?
  13. Can anyone recommend an adapter that will allow 2.5" SATA/300 drives to work in a 3.5" SATA/300 backplane? My intent is to use it with Intel's 2.5" SSD drives. Thanks.
  14. Steve Snyder

    Lots of corrected ECC errors

    No, I haven't tried that yet. It seems strange to me that with all those read errors, there are zero write errors reported.
  15. I've got a Cheetah 15K.5 drive that, according to SMART, has a lot of self-corrected errors. See below. The 15K.5 is the only drive on one channel of a dual-channel Adaptec AIC-7899P (U160) SCSI controller. The other channel is occupied by an Atlas 15K II. The Atlas shows 308467 corrected ECC read errors in 49,492,787 megabytes read (E/MB = 0.006232564732179803363). As shown below, the Cheetah has 323324590 corrected ECC read errors in 1,601,630,634 megabytes read (E/MB = 0.2018721315261942936). That's quite a discrepancy in error rates. Should I be concerned about the rate of correctable ECC errors on the Cheetah drive? I've been using this same SCSI controller for years and have no reason to think that there are problems with either channel. The Cheetah is only about 9 months old. A related question: what condition is measured by "Non-medium error count"? Interface errors? Firmware errors? Thanks.

     # smartctl -a /dev/sdb
     smartctl version 5.36 [i686-redhat-linux-gnu] Copyright (C) 2002-6 Bruce Allen
     Home page is http://smartmontools.sourceforge.net/

     Device: SEAGATE ST3300655LW  Version: 0001
     Serial number: 3LM083Z30000871309XR
     Device type: disk
     Transport protocol: Parallel SCSI (SPI-4)
     Local Time is: Sun Dec 14 10:38:27 2008 EST
     Device supports SMART and is Enabled
     Temperature Warning Enabled
     SMART Health Status: OK

     Current Drive Temperature:     33 C
     Drive Trip Temperature:        68 C
     Elements in grown defect list: 0

     Vendor (Seagate) cache information
       Blocks sent to initiator = 2983269371
       Blocks received from initiator = 3126389831
       Blocks read from cache and sent to initiator = 92659842
       Number of read and write commands whose size <= segment size = 6579334
       Number of read and write commands whose size > segment size = 1

     Vendor (Seagate/Hitachi) factory information
       number of hours powered up = 4885.50
       number of minutes until next internal SMART test = 14

     Error counter log:
                Errors Corrected by           Total   Correction     Gigabytes    Total
                    ECC          rereads/     errors  algorithm      processed    uncorrected
                fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
     read:   323324590       16         0  323324606    323324606      1527.434        0
     write:          0        0         0          0            0     76612.315        0

     Non-medium error count: 88

     [GLTSD (Global Logging Target Save Disable) set. Enable Save with '-S on']

     SMART Self-test log
     Num  Test              Status        segment  LifeTime  LBA_first_err [SK ASC ASQ]
          Description                     number   (hours)
     # 1  Background long   Completed           -         1              - [-   -    -]

     Long (extended) Self Test duration: 3852 seconds [64.2 minutes]
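     To put the discrepancy in concrete terms, a quick back-of-the-envelope comparison of the two drives' corrected-ECC rates, using the figures from the post above:

     ```python
     # Corrected ECC read errors and megabytes read, as reported by smartctl.
     atlas_errors, atlas_mb = 308_467, 49_492_787              # Atlas 15K II
     cheetah_errors, cheetah_mb = 323_324_590, 1_601_630_634   # Cheetah 15K.5

     atlas_rate = atlas_errors / atlas_mb      # errors per MB read
     cheetah_rate = cheetah_errors / cheetah_mb

     print(f"Atlas:   {atlas_rate:.6f} errors/MB")
     print(f"Cheetah: {cheetah_rate:.6f} errors/MB")
     print(f"Cheetah rate is {cheetah_rate / atlas_rate:.0f}x higher")  # ~32x
     ```

     A roughly 32x gap between two healthy drives is striking, though vendors count "corrected" events differently, so cross-vendor comparisons of this counter should be taken with a grain of salt.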