Everything posted by cbrworm

  1. I have a failing ST4000NM0043 SAS drive. I started receiving warnings about it a week or so ago, so I removed it from service. Out of curiosity, I went through the trouble of making the drive visible to Seagate SeaTools. The drive failed the long drive fitness test, as I expected it to. SeaTools recommended running a "fix all" command on it, which I did, and now the drive passes the long drive fitness test. If I use smartctl, it still reports: SMART Health Status: DATA CHANNEL IMPENDING FAILURE GENERAL HARD DRIVE FAILURE [asc=5d, ascq=30]. Obviously, I'm not going to use the drive, but I am curious: does passing the DFT suggest that the drive would be suitable for use? Second question: the drive has fairly low hours on it, but a lot of start/stop cycles due to the nature of this particular array's usage profile. It is only accessed a few times a day, but power is applied 24/7. It has 25K hours and shows "Accumulated start/stop cycles: 6633". The drive (and every drive in this array) shows a specified cycle count over the device lifetime of 10,000. I didn't realize the start/stop lifetime rating was so low - I had assumed the load/unload rating was what mattered. Are these drives more likely to fail before 5 years (my projected service life) because I spin them up and down about 6 times a day, as opposed to keeping them spinning from 6:00pm until 8:00am, when they are in use?
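The cycle-count question above can be roughed out with simple arithmetic (a back-of-the-envelope sketch, not a failure prediction; the 6,633 cycles, 25K hours, 10,000-cycle rating, and ~6 spin-ups per day all come from the post):

```python
# Rough projection of when a drive rated for 10,000 start/stop cycles
# reaches its rating, using the figures quoted in the post.

RATED_CYCLES = 10_000          # specified cycle count over device lifetime
observed_cycles = 6_633        # "Accumulated start/stop cycles" from SMART
power_on_hours = 25_000        # ~25K power-on hours so far
cycles_per_day = 6             # array spins up/down ~6 times a day

years_elapsed = power_on_hours / (24 * 365)
remaining_cycles = RATED_CYCLES - observed_cycles
years_to_rating = remaining_cycles / (cycles_per_day * 365)

print(f"~{years_elapsed:.1f} years of power-on time so far")
print(f"{remaining_cycles} cycles left -> ~{years_to_rating:.1f} more years to reach the rating")
```

Under those assumptions the drive hits its rated cycle count after roughly 4.5 years of total service, which does line up with the concern about not making it to 5 years - though a rating is a design threshold, not a guaranteed failure point.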
  2. I have a bunch of old IDE drives that still work. I came back to this thread because I just happened to look at a machine I have running in the garage. It has a WD Blue 640GB drive; I'm guessing it has been in service since '08. It currently has 54,000 power-on hours and 170 power cycles - HD Tune tells me that is 2,222 days of spin time. It has zero errors of any sort. Head flying hours is 21,000, so the heads are flying roughly 40% of the time. I have no plans to retire it any time soon.
  3. My impression: the hybrid drive identifies your most frequently accessed data and keeps it in NAND for quick retrieval; it remains there through power cycles and system resets. The cache on regular drives is smaller and is used for minimal write caching and read-ahead caching, essentially enabling smoother data flow to and from the drive, particularly during higher queue-depth reads and writes, and allowing NCQ to work. That cache is not persistent and is flushed frequently.
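The distinction can be illustrated with a toy model (purely illustrative - real SSHD firmware is proprietary, and the class, slot count, and promotion policy here are my own invention): the NAND cache pins blocks by access frequency and survives a power cycle, where a DRAM buffer would be emptied.

```python
from collections import Counter

class ToySSHDCache:
    """Toy frequency-based NAND cache: hot blocks get pinned and persist."""
    def __init__(self, nand_slots: int):
        self.nand_slots = nand_slots
        self.hits = Counter()      # access counts per block
        self.nand = set()          # blocks currently pinned in NAND

    def access(self, block: int) -> str:
        self.hits[block] += 1
        served = "NAND" if block in self.nand else "platter"
        # Re-pin the most frequently accessed blocks after each access.
        self.nand = {b for b, _ in self.hits.most_common(self.nand_slots)}
        return served

    def power_cycle(self):
        # NAND contents persist; a volatile DRAM buffer would be cleared here.
        pass

cache = ToySSHDCache(nand_slots=2)
for b in [1, 1, 1, 2, 2, 3]:
    cache.access(b)
cache.power_cycle()
print(cache.access(1))  # block 1 is hot, so it is served from NAND
```

The point of the sketch is only the asymmetry: the frequency table and pinned set survive `power_cycle()`, which is what makes an SSHD's cache useful across reboots in a way a drive's DRAM buffer is not.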
  4. Thanks for your concern. The machine is backed up multiple times a day locally and nightly to a remote location; downtime for repair is the killer. Ironically, they have a SAN in the same space with 24 1TB Hitachi Ultrastars that has been running 24/7 for at least 8 years. I have been hounding them to upgrade since it went out of warranty a number of years ago, but it has never had a drive failure. I ordered an HGST 3TB drive yesterday, along with a Samsung 850 Pro for the OS. This particular customer is difficult to begin with, and repeat hardware failures do not help. I guess if this next set of drives follows the same pattern of basically doubling the previous set's lifespan, they should be good for almost 4 years.
  5. Hi, I have a customer with an SFF Dell Optiplex 9020 that runs 24/7 in an air-conditioned closet. It has been in service now for about 3.5 years. It initially had a Seagate ST3000DM001, which failed within the first few months of service. I replaced it with a new ST3000DM001 (this was before the high failure rate for those drives was known). It ran non-stop for almost a year before failing. The customer has sensitive data and doesn't want the drives sent back for warranty replacement - they pay for new drives each time. By the time the second Seagate 3TB drive failed, I had sworn off Seagate due to an incredible number of those drives failing at my customer sites. So I bought a WD WD3003FZEX, thinking that the Black drive should be reliable. 604 days later, it is now failing. The usage pattern here is that the drives spin 24/7, but the actual workload is very low. It is a headless machine that only has remote users, so there is no one to hear the drives making noise, etc. Luckily, the first drive and this drive failed progressively, with SMART errors first; this one currently has high reallocated and pending sector counts. Being a small machine, airflow is not ideal, but the drives never exceed 52°C - higher than I would like, but not high enough that I would expect failure. What do I get next? HGST? I have had great luck with HGST at other locations, but I honestly have very good luck with WD Black drives as well. I have had fairly horrible luck with Seagate SSHD drives, which could otherwise be an ideal solution. Is the WD 4TB SSHD better? Is it suitable for 24/7 use? I don't want to move to enterprise drives, since desktop-style error recovery is a desired trait here. Due to the database work they are doing, when the system is being utilized I need max IOPS, so I am hesitant to use a 5,400 RPM drive. The machine is not exposed to any vibration and it is in a very tightly controlled, dry environment, though unfortunately it is held at about 80F. Thanks!
  6. Yes, Hitachi drives have made that idle noise for many generations. I had an array with 10 of the 2TB Ultrastars, and when idle the whole array would sound like a cricket farm. I have not noticed it with the 6TB drives, though.
  7. cbrworm

    WD Black 6TB HDD Review Discussion

    I would be interested to see how the HGST 6TB NAS drive compares to this. I guess it would be about the same as how it compares to the WD Red Pro 6TB? I wish someone would make a high performance SSHD or significantly up the cache size as I am always looking for max performance rotating drives.
  8. If speed is a concern and assuming you have active cooling, have you considered the HGST Desktop NAS 7,200 RPM drives? I have a few of the 6TB drives that have been working 24/7 since they were released. I have found them to outperform everything else I have tested/used in the 4-6TB NAS and nearline enterprise class.
  9. For my own backups, and for many customers who don't have access to tape, we use enterprise-grade 7,200 RPM Hitachi SATA drives without enclosures. We have hot-swap bays that they can be put into to run a backup, and then they go into a cave. I have a customer who is looking for a self-maintainable quarterly backup solution that he can use to back up his backup machine (which contains images of all the workstations) and put in a safety deposit box. This would most likely be 5 drives, each of which would be used one time per year, with one spare. I was leaning towards a 2.5-inch bus-powered solution; they need less than a terabyte, but looking towards the future, 2TB might be better. I am concerned about the longevity of 2TB 2.5-inch drives, but would like to have a small solution that requires no power supply. If this is a bad idea, I can point them in the direction of a 2TB 3.5" external drive - in either case, are there any that have proven reliable or unreliable? I have a number of the Seagate 2TB GoFlex drives from a few years ago which have been working fine, as well as some of the older 1 and 2TB WD MyBook drives, which also still work - but they don't sit in cold storage for 11.5 months a year.
  10. I ended up getting the WD drive due to no real reason other than it ended up being slightly less expensive than the Samsung.
  11. Has anyone seen these bare drives in the wild? Is it mechanically similar to the Toshiba enterprise 5TB drive?
  12. cbrworm

    Which hard drive for 24/7 use?

    Make sure you update your firmware on the router - do a search for Asus vulnerability, it is specifically related to USB Drives connected to the router. I believe the fix for that router will not auto-download from within the web-interface (it will say it is up to date), you need to go to Asus' website and download it. As for the hard drive - if it is going to be in an external enclosure without a fan, one of the 5,400 or 5,900 RPM drives would run cooler. In either case, one that is backed up regularly would be good. I would almost be inclined to buy a pre-assembled drive and enclosure - they typically have power management features enabled that work without being prompted by a host PC.
  13. The only WD drives I am interested in have black or yellow labels, but the color scheme does make it easy for an average person to pick the 'right' drive. I didn't mind when it was black, blue or green and occasionally yellow. Now they are getting silly.
  14. cbrworm

    "Thin" 3.5 internal HDD

    What about the Seagate ST1000DM003? Good performance - less than 1"
  15. cbrworm

    Adaptec 3805 and new disks

    Which Seagates? Their NAS drive? BTW, SuicideGybe - nice numbers. Do you have your card in a PCIe 3.0 slot? It seems that I maxed out at 874/730 as well with 6 (7,200 RPM) drives; in a PCIe 2.0 slot I maxed out below 600MB/s. I'll see if I can find a picture.
  16. cbrworm

    WD Green 3TB Observations

    I had similar luck with the original WD15EARS drives that I put out in the field - I only used a few, but they all came back within a couple of years. They didn't corrupt data, they just died. I have been seeing higher failure rates with mainstream drives from all manufacturers over the last few years than I have seen since the mid-to-late '90s. In fact, I still have old 5-platter 250GB Hitachis all over the place that just keep going (and meowing). Many of my customers have older, pre-FDB IDE drives that just run 24/7 and never fail. I build them a new machine with good cooling, and a year later the new drive craps out. BTW, I have a WD15EARS that works perfectly and only has about 40 hours of use on it that I want to sell, if anyone is interested...
  17. cbrworm

    WD new Black line

    I am also encouraged to see these new Black drives. I have a problem (literally): I need large drives that are as fast as possible. With rotating drives, throughput is decent on a number of new models, but access times have been getting worse and in some cases inconsistent. I realize there is no comparison to solid state drives, but I still want my large drives to have <=12 ms access times. To me, with my workload, access time is more important than throughput - assuming the throughput difference is something like 160MB/sec vs 200MB/sec. And I have to have many terabytes. My only solution up to now has been to use enterprise-class drives, with a second choice being the Hitachi drives. I can't afford SSDs big enough to meet my needs. I have terabytes of virtual machines spread across multiple drives, and the difference between 12 ms and 15 ms is astounding! Especially when some of the new drives (Seagate) seem to get hung up every once in a while under heavy use - I suspect due to the write caching algorithm - I have seen it on all the newest Seagate drives with the exception of the 5-platter Constellations. I see this as an encouraging sign; even though the last line of Black drives seemed overpriced for what you got, they cost less than the non-green REs.
  18. cbrworm

    Hard Disk bit error rate

    My best advice is to keep archives. Whether it be weekly, monthly, yearly - whatever. Keep old copies that you can refer to. Ideally store a copy at a different location.
  19. I just bought the last five 4TB 7,200 RPM Seagate external drives at Costco to pull out the drives. All the ones they have left now are the slower 5,900 RPM drives. I see the ST4000DX000 drives on Amazon and eBay, but everyone I ask says either they don't know where they are from or that they were removed from enclosures. Seagate at one point had specs posted for the 4TB Barracuda XT - 7,200 RPM, 8-9 ms access times. The ones from the enclosures have slower (18 ms) access times, but still better performance than the new 'desktop' drive. Was the internal drive ever released?
  20. I am doing an experiment: I am going to use mismatched but modern drives in a RAID 6 array on either an Adaptec 5805 or 7805 controller - eight 2TB or 3TB drives, all of the same capacity. Four drives will be new-generation high-throughput drives with slow access times (~15 ms including latency); four drives are older-generation drives with 12 ms access times but 25% lower throughput. Will everything average out, or will all the drives be limited (particularly on access time) to the speed of the slowest drive? In many ways the drives' performance is comparable, each with strengths and weaknesses - I am wondering if the array performance will end up being pretty good, or really bad. Thoughts?
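One way to reason about the question before running the experiment (a toy model, not a benchmark; the 12 ms and 15 ms figures come from the post, and the workload split is my own simplification): small random reads land on a single member, so their latency blends across the mix, while full-stripe reads/writes and rebuild-style operations must wait for every member and are therefore gated by the slowest drive.

```python
import random

# Toy model: 8-drive RAID 6 with four ~12 ms drives and four ~15 ms drives.
latencies_ms = [12.0] * 4 + [15.0] * 4

# Small random read: hits one member at random -> latencies blend.
random.seed(1)
single_reads = [random.choice(latencies_ms) for _ in range(100_000)]
avg_single = sum(single_reads) / len(single_reads)

# Full-stripe operation: must wait for every member -> slowest drive gates it.
stripe_latency = max(latencies_ms)

print(f"avg single-block read latency: ~{avg_single:.1f} ms")
print(f"full-stripe latency: {stripe_latency:.1f} ms")
```

So under this model the answer is "both": random small-block work averages out to roughly 13.5 ms, but anything that touches the whole stripe runs at the 15 ms drives' pace.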
  21. So, in the past I have bought the Seagate GoFlex external USB 3.0 drives and pulled out the 4TB 7,200 RPM ST4000DX000 drives. At the time they were available for around $200 (must have been pre-flood). Today I was in Costco and saw they had an identical-looking Seagate external USB 3.0 4TB drive for $159, so I grabbed a couple. It turns out these now have 5,900 RPM drives in them - I thought Seagate was done with 'green' drives. I then noticed that these are labeled as Backup Plus instead of GoFlex. The drive through the USB 3.0 interface is none too swift; it actually looks like it is being limited by the base, but since I plan to return them I am not going to open them up. This drive is listed as a ST4000DM000, so I went to the Seagate page to look it up: link. They don't even list spindle speeds anymore?!?! But clearly this 4TB drive pulls less power and has lower throughput than the 3TB drive. They show <8.5 ms access (seek) times, but don't list latency; measured, it works out to be 17.x ms access time. It is all a bit misleading if you ask me... Needless to say, they are going back. For the time being I will continue to stick with the Hitachi 7K4000 series for 4TB drives.
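The spec-sheet arithmetic behind that complaint is simple to sketch (assumptions: the <8.5 ms figure is a seek time, the drive spins at 5,900 RPM, and average rotational latency is half a revolution):

```python
# Average access time = average seek time + average rotational latency,
# where rotational latency averages out to half a revolution.

def avg_rotational_latency_ms(rpm: float) -> float:
    ms_per_rev = 60_000 / rpm        # one full revolution in milliseconds
    return ms_per_rev / 2            # the head waits half a rev on average

seek_ms = 8.5                        # Seagate's published "<8.5 ms" figure
latency_5900 = avg_rotational_latency_ms(5_900)
latency_7200 = avg_rotational_latency_ms(7_200)

print(f"5,900 RPM: {seek_ms} + {latency_5900:.1f} = {seek_ms + latency_5900:.1f} ms")
print(f"7,200 RPM: {seek_ms} + {latency_7200:.1f} = {seek_ms + latency_7200:.1f} ms")
```

Even taking the spec at face value, 8.5 ms seek plus ~5.1 ms of rotational latency at 5,900 RPM only comes to about 13.6 ms, so a measured 17.x ms suggests the real-world seeks are slower than the published number - which is the "misleading" part.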
  22. I have read the reports as well. I would say they are not dead yet. I have been migrating people to RAID 6 where in the past I would have used RAID 5. I may just be lucky, but I have not had a second drive fail during a rebuild - much less a third. Most of the arrays I have deployed are between 6 and 8 drives - maybe that is why my luck has been good. I have always stuck with Seagate or Hitachi enterprise-class drives (at least since SAS has been around) and Adaptec controllers. I am curious about these new lower-cost, lower-MTBF drives - those may put us in that situation. The real key is backups. I am getting ready to do a test on an in-house server that I should not admit to: I am going to use mismatched good-quality SATA drives capable of time-limited error recovery (but not enterprise-class drives) and see what happens over time. It may be a very short experiment.
  23. It would be interesting to know where this fits between a black and an RE. I thought the RE was a black drive with tweaked firmware. Sounds interesting.
  24. I have had great success with the Samsung 830 series and the old Intel G2 and 330 series drives. I have also had good luck (so far) with the SanDisk Extreme drives. I have had mixed results with PQI and Crucial/Micron C200 and C300 drives, among others. We have thousands of the Intel and Micron/Crucial drives deployed and hundreds of the 830, PQI, and Apacer drives. We have never had an Intel or Samsung drive fail. We have only deployed a handful of the SanDisk drives so far. We have not deployed any of the Samsung 840 series drives yet - currently we have the 830 and Intel certified and will stick with them until they are no longer available.
  25. It will be fine, your battery run time might be a little lower with a 7,200 rpm drive, but performance should be better. It is common in some cases to use a shim for 7mm drives, but they are not required in all cases depending on how the drive is mounted.