Steve Snyder

Everything posted by Steve Snyder

  1. This is a real disappointment. I've found both RE2 and RE4 drives to be very reliable in a RAID10 configuration, and was looking forward to a performance improvement. It's been 3.5 years since the 2TB RE4 was released, yet the only performance gain to be seen with the new RE4 4TB, at least according to SR's testing, is in 128K writes (queue depth == 16). I'm not asking for solid-state performance in spinning media, but *some* interest in improving performance in bulk storage would be nice. Sigh.
  2. I've had 4 WDC RE4 disks in a RAID10 configuration since shortly after they came out. I've been satisfied with them, and the RE2 before them, so I went to WD's website to find out what they've produced in the intervening years. Answer: apparently nothing. I'm looking for better performance (STR and/or IOPS) than I can get with RE4. The replacements, if any, will also be used as RAID10. Any recommendations? Thanks.
  3. I've got 4 1.5TB WD RE4 drives (model WD1503FYYS-02W0B0), all with the same firmware revision (01.01D01). So why then do the self-test times differ between the drives? For the "Extended self-test" duration these times are reported:

    Disk #1: 240 minutes
    Disk #2: 222 minutes
    Disk #3: 218 minutes
    Disk #4: 223 minutes

    All disks are of equal age, with equal power-on times. What variable would make these "identical" drives require different durations for their self-tests? Thanks.
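    For anyone wanting to reproduce the comparison, something along these lines prints the advertised duration for each disk (a sketch; the /dev/sd* names are placeholders):

      # Show the "Extended self-test routine recommended polling time"
      # from each drive's SMART capabilities page.
      for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
          echo "== $d =="
          smartctl -c "$d" | grep -A1 "Extended self-test"
      done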
  4. Steve Snyder

    Same model with differing selftest times?

    I should have reported the context in my original post. The times for the extended self-test are what each drive reports (via the Linux smartctl utility) prior to the actual test. So that takes concurrent access off the table (I think). Also, the disks are being queried and tested offline. That is, none of the disks have a mounted filesystem on any partition, so the only access of each disk should be to the overall physical drive via smartctl. I'm not terribly worried about the disparity of self-test times, but I am a little concerned. These 4 disks are intended for use in a RAID10 configuration and I'd prefer they be as alike as possible. (Testing is being done as JBOD prior to assembling the array.) Thanks for the responses.
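    For context, the offline testing amounts to roughly this kind of thing (a sketch; the /dev/sd* names are placeholders):

      # Start the long (extended) offline self-test on each bare drive.
      for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
          smartctl -t long "$d"
      done

      # Hours later, check each drive's self-test log for the result.
      smartctl -l selftest /dev/sda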
  5. Steve Snyder

    Effect of platter count

    The RE4 (non-GP) was announced over a year ago with great fanfare, yet availability of the drives is still spotty at best. I've been wondering if WD is having problems manufacturing them.
  6. I'm in need of a new high-quality DVD burner. Simple, right? Not really, because I need it to be an IDE drive with a 4-pin Molex power connector. Nearly all contemporary burners are SATA, which I can't use. I'm guessing I could find adapters to use a SATA drive on an IDE interface, but I'd rather avoid having to buy and use them. So... what drives are considered good these days with IDE/Molex connectors? I don't care about Blu-Ray, I just want quality burning of DVDs and CDs. Thanks.
  7. So now it's been nearly 3 months since the WD RE4 (not RE4-GP) was announced. So when will this mythical beast (model: WD2003FYYS) actually be available for purchase?
  8. what is "TR"? http://techreport.com/
  9. Hello. I've been very happy with the stability of my 4xRE2 disks in a (Linux software) RAID10 configuration. The performance of the array, though, is not so great. Or at least it has come to seem that way after 20 months of use. I'm wondering what kind of performance improvements I would see by replacing the RE2 disks with the newer RE4-GP. (I don't really care about the energy efficiency, but there seems to be no non-GP drive of this generation.) Can anyone point to a comparison of the RE2 vs the RE4-GP? All I can find is RE3-GP vs. RE4-GP. Thanks.
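    Failing a published comparison, I suppose I could just measure the RE2 array myself and compare against whatever RE4-GP numbers turn up in reviews. A rough sequential-throughput check would be something like this (a sketch; /dev/md0 and the mount point are placeholders for my setup):

      # Timed sequential reads from the array device, bypassing the filesystem.
      hdparm -t /dev/md0

      # Sequential write through the filesystem, bypassing the page cache.
      dd if=/dev/zero of=/mnt/array/str-test bs=1M count=4096 oflag=direct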
  10. Steve Snyder

    Hardware RAID VS Software RAID

    I've been very happy for the last 18 months with 4xRE2 in a software (Linux) RAID 10 array. Not the fastest storage on Earth, I'll grant you, but utterly stable and reliable. Just another data point.
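    For anyone curious what that setup looks like, creating the array is basically a one-liner with mdadm (a sketch; the device names and the default "near" layout shown here are just examples):

      # Build a 4-disk Linux software RAID 10 array from whole drives.
      mdadm --create /dev/md0 --level=10 --raid-devices=4 --layout=n2 /dev/sd[abcd]

      # Watch the initial resync progress.
      cat /proc/mdstat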
  11. Steve Snyder

    Is enabling AHCI worth it?

    It is worth it only if you want/need the features that AHCI provides:

    1. More than 4 SATA hard disks
    2. Native Command Queuing (NCQ)
    3. (possibly) Fake RAID

    The general consensus seems to be that NCQ is a winner for servers and a loser for desktops. FRAID is regarded as just evil. As far as I know, the only downside to having AHCI is if you've previously installed Windows in non-AHCI (my BIOS calls this Compatibility) mode. Windows doesn't take kindly to the switch. I generally run in AHCI mode, but have seen no problems (other than 2 of my hard disks not being seen) in Compatibility mode.
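    On a Linux box you can at least confirm whether AHCI/NCQ is actually in effect (a sketch; sda is a placeholder device):

      # Did the kernel bind the SATA controller with the AHCI driver?
      dmesg | grep -i ahci

      # A queue depth greater than 1 generally means NCQ is active for the disk.
      cat /sys/block/sda/device/queue_depth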
  12. Steve Snyder

    WD 500GB RE2 drives

    Just another data point: I've had 8 of these drives running 24/7 for about 13 months with no problems at all.

    4 x WD3201ABYS-01B9A0 in Linux software RAID10
    4 x WD5001ABYS-01YNA0 attached to an Areca ARC-1120 (RAID10)

    Bad power supply? Drives running too hot for too long? (The disks in my Linux RAID are running at 33C right now.) Maybe too many start/stop cycles?
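    (The temperature figure comes from SMART; checking it across drives is something like this, with placeholder device names:)

      # Report the current temperature attribute for each member disk.
      for d in /dev/sd[abcd]; do
          echo -n "$d: "
          smartctl -A "$d" | grep -i temperature
      done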
  13. Can anyone recommend an adapter that will allow 2.5" SATA/300 drives to work in a 3.5" SATA/300 backplane? My intent is to use it with Intel's 2.5" SSD drives. Thanks.
  14. I've got a Cheetah 15K.5 drive that, according to SMART, has a lot of self-corrected errors. See below. The 15K.5 is the only drive on one channel of a dual-channel Adaptec AIC-7899P (U160) SCSI controller. The other channel is occupied by an Atlas 15K II. The Atlas shows 308467 corrected ECC read errors in 49,492,787 megabytes read (E/MB = 0.006232564732179803363). As shown below, the Cheetah has 323324590 corrected ECC read errors in 1,601,630,634 megabytes read (E/MB = 0.2018721315261942936). That's quite a discrepancy in error rates. Should I be concerned about the rate of correctable ECC errors on the Cheetah drive? I've been using this same SCSI controller for years and have no reason to think that there are problems with either channel. The Cheetah is only about 9 months old. A related question: what condition is measured by "Non-medium error count"? Interface errors? Firmware errors? Thanks.

    # smartctl -a /dev/sdb
    smartctl version 5.36 [i686-redhat-linux-gnu] Copyright (C) 2002-6 Bruce Allen
    Home page is http://smartmontools.sourceforge.net/

    Device: SEAGATE ST3300655LW Version: 0001
    Serial number: 3LM083Z30000871309XR
    Device type: disk
    Transport protocol: Parallel SCSI (SPI-4)
    Local Time is: Sun Dec 14 10:38:27 2008 EST
    Device supports SMART and is Enabled
    Temperature Warning Enabled
    SMART Health Status: OK

    Current Drive Temperature:     33 C
    Drive Trip Temperature:        68 C
    Elements in grown defect list: 0

    Vendor (Seagate) cache information
      Blocks sent to initiator = 2983269371
      Blocks received from initiator = 3126389831
      Blocks read from cache and sent to initiator = 92659842
      Number of read and write commands whose size <= segment size = 6579334
      Number of read and write commands whose size > segment size = 1

    Vendor (Seagate/Hitachi) factory information
      number of hours powered up = 4885.50
      number of minutes until next internal SMART test = 14

    Error counter log:
               Errors Corrected by           Total   Correction     Gigabytes    Total
                   ECC          rereads/     errors  algorithm      processed    uncorrected
               fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
    read:   323324590       16         0  323324606   323324606       1527.434           0
    write:          0        0         0          0           0      76612.315           0

    Non-medium error count: 88

    [GLTSD (Global Logging Target Save Disable) set. Enable Save with '-S on']

    SMART Self-test log
    Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
         Description                              number   (hours)
    # 1  Background long   Completed                   -          1             - [-   -    -]

    Long (extended) Self Test duration: 3852 seconds [64.2 minutes]
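    (The per-megabyte rates above were worked out by hand; an errors-per-gigabyte figure can be pulled straight from the error counter log with something like this, assuming the column layout shown above:)

      # Column 5 is "Total errors corrected", column 7 is "Gigabytes processed".
      smartctl -l error /dev/sdb | awk '/^read:/ {printf "%.1f corrected errors per GB read\n", $5/$7}'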
  15. Steve Snyder

    Lots of corrected ECC errors

    No, I haven't tried that yet. It seems strange to me that with all those read errors, there are zero write errors reported.
  16. Steve Snyder

    WD Velociraptor - BLFS - GLF - GLFS

    The HLFS is the VR in a backplane-compatible form factor. Here's your list: http://www.wdc.com/en/products/Products.asp?DriveID=494
  17. Steve Snyder

    Another Fusion-IO review

    http://www.tweaktown.com/reviews/1683/1/ex...tate/index.html
  18. Steve Snyder

    X25-E on Linux running XFS

    Does the converter position the drive such that it can be mounted in a 3.5" backplane (data and power connectors in correct position)?
  19. Steve Snyder

    X25-E on Linux running XFS

    What was your rationale for picking the XFS file system for use on the SSD? I mean, as opposed to any other Linux file systems? Thanks.
  20. I'm not familiar with Vista, but if this were WinXP, I would advise your friend to:

    1. Turn off "System Restore" (this will also delete previous restore points).
    2. Uninstall the backups for MS hotfixes and service packs ( http://www.dougknox.com/xp/utils/xp_remove_hotfix_backup.zip ).
    3. Review the default size of the pagefile.
  21. Steve Snyder

    Should I worry about this SMART status?

    Well, the same cable + terminator was used previously on a similar controller (previous: Adaptec U160 add-in card; current: Adaptec U160 intrinsic to motherboard). Only a single drive on the cable.
  22. Note the number of errors corrected by ECC. This number has been slowly growing in the month that I've had this Cheetah 15K.5 drive. Should I be worried by this number of self-corrected errors? Thanks.

    Device: SEAGATE ST3300655LW Version: 0001
    Serial number: 3LM083Z30000871309XR
    Device type: disk
    Transport protocol: Parallel SCSI (SPI-4)
    Local Time is: Tue Jun 10 09:54:56 2008 EDT
    Device supports SMART and is Enabled
    Temperature Warning Enabled
    SMART Health Status: OK

    Current Drive Temperature:     37 C
    Drive Trip Temperature:        68 C
    Elements in grown defect list: 0

    Vendor (Seagate) cache information
      Blocks sent to initiator = 2457775660
      Blocks received from initiator = 2980091105
      Blocks read from cache and sent to initiator = 24253550
      Number of read and write commands whose size <= segment size = 3235131
      Number of read and write commands whose size > segment size = 1

    Vendor (Seagate/Hitachi) factory information
      number of hours powered up = 394.85
      number of minutes until next internal SMART test = 23

    Error counter log:
               Errors Corrected by           Total   Correction     Gigabytes    Total
                   ECC          rereads/     errors  algorithm      processed    uncorrected
               fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
    read:    27649261        0         0   27649261    27649261       1258.381           0
    write:          0        0         0          0           0       4038.363           0

    Non-medium error count: 9

    [GLTSD (Global Logging Target Save Disable) set. Enable Save with '-S on']
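    (To see whether the counter keeps climbing, an occasional timestamped snapshot of the read line can be appended to a log, e.g. along these lines with a placeholder device name and log path:)

      # Record the corrected-ECC read counters with a timestamp for later comparison.
      (date; smartctl -l error /dev/sdb | grep '^read:') >> /var/log/cheetah-ecc.log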
  23. I'm putting a system together from old components lying around. My motherboard has dual Ultra80 SCSI controllers and I've got 2 Maxtor Atlas I disks. Is it possible to configure these 2 disks, each on its own controller, such that I have a single RAID0 bootable volume in WinXP/SP3? Actually, I don't really care about a bootable RAID volume. What I really want is that the OS's files and applications are on the RAID volume. If that means carving out a small partition at the start of both disks just to hold the boot files, I'm fine with that. I may be a Windows S/W RAID newbie, but I'm well aware that RAID0 isn't safe. This is mostly an educational experience and a chance to use some unused hardware. Is the above scheme doable? How does one get the system files on the RAID'd volume? Install the OS onto a single disk, then merge that disk and the 2nd empty one? Can you tell WinXP at installation to create the RAID volume on which to install its files? Any pointers or tales of relevant experience would be appreciated. Thanks.
  24. Relative to what? That is, which hard disk did you have prior to the VelociRaptor?
  25. I'm setting up a RAID 10 config on a Linux (CentOS 5.1) box, a machine that does not have a SATA controller on the motherboard. I bought an Areca ARC-1120 with the intention of just using it in JBOD mode, as a plain SATA II controller with the actual RAID management done by Linux. Now that it's time to do the actual set-up, I'm conflicted. The RAID config will be composed of 4 SATA II disks, but I bought an 8-disk controller to keep my options open. This machine is on a UPS, so I'm not concerned about disk corruption due to the ARC-1120 buffer being lost.

    I've heard repeatedly that the Linux software RAID is rock-solid. I've set up a software RAID before and it was a straightforward process. (I do not yet have experience recovering from a disk failure.) The downside is increased CPU utilization and general system complexity. The upside is independence from any particular RAID controller.

    Praise also seems to be plentiful for the Areca RAID controllers. I wouldn't be using one of their main selling points, the XOR engine, in a RAID 10 config, but the disks would be unified at the hardware level. The downside is dependence on Areca for firmware quality and diagnostic software, and on an unknown actual disk layout. The upside is that the on-board 256MB cache and synchronized NCQ should make for more efficient I/O. (That's my fact-free assumption, anyway.) Plus, I like the simplicity of presenting a single disk to the operating system.

    Regarding performance/optimization: the ARC-1120 has a more intimate knowledge of the underlying hardware; the Linux kernel's software-based RAID has a more intimate knowledge of the workload being handled. I have no idea which type of RAID management would provide the better net performance. My workload is mostly STR-dependent (Samba file server on a Gigabit LAN) I/O, for whatever that information is worth.

    So, what to do? Hardware or software? Are there gotchas and/or benefits that I'm not taking into consideration? Has anyone reading this gone both routes and has a basis for comparison? Any guidance toward one option or the other would be appreciated. Thanks.
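    For what it's worth, the software-RAID route would look roughly like this under CentOS (a sketch; the sd* names stand in for whatever the ARC-1120 exposes in JBOD mode):

      # Build the md RAID 10 array from the four JBOD-exposed disks.
      mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]

      # Check sync progress and overall health.
      cat /proc/mdstat
      mdadm --detail /dev/md0

      # Record the array so it assembles automatically at boot.
      mdadm --detail --scan >> /etc/mdadm.conf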