gojirasan

why so many large drive failures


I was thrilled when the first 1TB drives came out. I bought 4 Seagate 7200.11 and 1 Hitachi 7K1000 terabyte drives as soon as I had the money. I had been a loyal Seagate fan for many years. I had never had one fail on me and my very first drive was a Seagate, a 750 MB drive IIRC. In contrast I had had a number of Western Digital and Samsung drives fail on me over the years. For a while everything seemed fine with my new terabyte drives. So I bought 4 more 1.5TB Seagate 7200.11 when they came out. In less than 6 months all of the Seagate drives proved to be unreliable. I would get "temporary failures" where a drive drops out, but returns for a while after rebooting. I think only about half of them have failed completely and permanently. I lost a lot of data.

Having read the Newegg reviews for these drives, I was amazed at the high failure rates, approaching 80% in some cases. I have lost all faith in Seagate. It will be many years before I ever buy one of their drives again, if ever. But what alternatives are there? Checking the reviews of current 2TB drives, Western Digital and Hitachi both seem to have failure rates somewhere around 30% or so. If you buy 3 drives, you can reliably expect at least 1 of them to either be DOA or fail within the first 30 days. Samsung seems to manage a failure rate in the 15-20% range, making theirs the most reliable 2TB drives available, but that still means 1 in 5 or 1 in 6 drives can be expected to fail in a short time. Clearly something is very wrong with the technology, but no one is even talking about it and the manufacturers are certainly not admitting to anything. I used to consider hard drives a very reliable way to store data, and, at least for me, they were. So what has happened? Why the change? My 80 gig 7200.7 is still chugging away as a Windows drive without even a hint of failure after all these years, but none of my terabyte drives (with the exception of the 7K1000) has managed to last more than 6 months without problems.
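To put rough numbers on that "buy 3, expect at least 1 failure" expectation, here is a back-of-the-envelope sketch in Python. It assumes each drive fails independently at the rates I'm reading out of the reviews, which are just my impression of those reviews, not verified field data:

```python
# Back-of-the-envelope check of the "buy 3 drives, expect at least 1 failure"
# expectation.  The per-drive rates are my reading of the review percentages,
# not verified field failure rates.

def p_at_least_one_failure(per_drive_rate: float, n_drives: int) -> float:
    """Probability that at least one of n independent drives fails."""
    return 1.0 - (1.0 - per_drive_rate) ** n_drives

for rate in (0.30, 0.15):      # ~30% (WD/Hitachi per reviews), ~15% (Samsung per reviews)
    for n in (1, 3, 5):
        print(f"rate={rate:.0%}, drives={n}: "
              f"P(at least 1 failure) = {p_at_least_one_failure(rate, n):.1%}")
# At a 30% per-drive rate, buying 3 drives gives roughly a 66% chance that at
# least one of them fails.
```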

My theory is that there is something wrong with the basic tech. Either the higher areal density is causing problems - placing so much data onto a magnetic platter may be inherently unreliable - or something is fundamentally wrong with the much-touted perpendicular recording technology. And now both Seagate and Western Digital are talking about 3TB drives before the end of the year. If the decrease in reliability is caused by the increase in platter density, I can only imagine what the failure rates are going to be on those drives. As to PMR being the cause, look at the Wikipedia entry for PMR:

Perpendicular recording can deliver more than three times the storage density of traditional longitudinal recording[1]. There was some interest in using the system in floppy disks in the 1980s, but the technology was never reliable. Since about 2005 the technology has come into use for hard disk drives[1]. Hard disk technology with longitudinal recording has an estimated limit of 100 to 200 gigabits per square inch due to the superparamagnetic effect, though this estimate is constantly changing. Perpendicular recording is predicted to allow information densities of up to around 1 Tbit/sq. inch (1000 Gbit/sq. inch).[2] As of March 2009, drives with densities of 300-400 Gbit/sq. inch were available commercially, and there have been perpendicular recording demonstrations of 600-800 Gbit/sq. inch.
(emphasis mine)

So PMR is not new tech. It has been around since the 80s. It is just that no one ever figured out a way to overcome the problems before. I guess the question is whether they really have.



Nice first post - welcome to the forums.

I think you'll get a lot of takes from the pros who handle a lot of drives, but generally, the drives seem to fail about the same, even though everyone has their favorites. I think one of the most important variables, as we've discussed before, is shipping policy. Properly shipped drives have much higher success rates than those that are not. It's hard to quantify the impact shipping has, but at this point, even 1TB drives are so cheap that I think a lot of retailers don't want to spend the cash to properly package them.


There is no good manufacturer or bad manufacturer. All of them have had their bad batches in the past, and all of them are roughly equally reliable.

Drives fail all the time. Yes, I would agree it happens a whole lot more with higher-density drives, but that is probably due to the sheer volume of these drives being shipped. Almost everyone, even folks to whom computers are totally Greek, has a few drives lying around as external storage or something right now. This is not how it was before; hard drives used to be the toys of computer enthusiasts.

So why leave the fate of your data to the manufacturer in the first place?

I handled and still handle many drives. Almost all lemons fail at the start, under thorough stress testing; a drive rarely fails once it passes initial stress testing. I used and still use drives from faulty batches from Hitachi, Seagate, WD, etc. Heck, the drive in my system is supposedly from a death-drive batch of Seagates with faulty firmware. It survived a day of burn-in three years ago, and it is still working fine.
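For what it's worth, a burn-in along these lines is one possible recipe (just a sketch for Linux, assuming smartmontools and badblocks are installed; the badblocks -w pass is destructive, so only run it on a brand-new, empty drive):

```python
# One possible burn-in recipe for a brand-new drive on Linux (illustrative sketch).
# Assumes smartmontools and e2fsprogs (badblocks) are installed.
# WARNING: badblocks -w destroys all data on the target device.
import subprocess

DEVICE = "/dev/sdX"  # placeholder device name; replace with the real one

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Record the initial SMART attributes so you can compare them afterwards.
run(["smartctl", "-a", DEVICE])

# 2. Destructive write-and-verify pass over every sector (this is the part
#    that tends to flush out the lemons).
run(["badblocks", "-wsv", DEVICE])

# 3. Kick off the drive's own extended self-test.  It runs in the background;
#    check the result later with `smartctl -l selftest` and make sure the
#    reallocated/pending sector counts in `smartctl -a` are still zero.
run(["smartctl", "-t", "long", DEVICE])
```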


I have to agree with 666. I have a 1TB WD Black edition from over a year ago and have never had an issue with it, and I have two 2TB WD Greens now that I have not had an issue with either, other than them being a wee bit slow on directory scans, but hey, they're Green drives. That being said, I think the bulk of the failures come down to how the drives were handled; Newegg has had issues with drive handling, not to mention the mail delivery person. I have found that the best way to prevent the majority of the drive failures I see is to go into the power management settings on the system and disable the option to shut down the drive after inactivity. The start-up/shutdown cycle kills more drives than anything else I have seen, and annoyingly this is the default.
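For anyone doing the same outside the Windows power-management dialog, a rough Linux equivalent might look like this (just a sketch; the device names are placeholders, and hdparm's -S 0 is the knob that disables the inactivity spin-down timer):

```python
# Rough Linux equivalent of disabling "turn off hard disk after X minutes":
# hdparm -S 0 clears the drive's standby (spin-down) timer.
# The device list below is a placeholder -- adjust for your own system.
import subprocess

DISKS = ["/dev/sda", "/dev/sdb"]  # hypothetical devices

for disk in DISKS:
    # -S 0 = never spin down due to inactivity.  The setting does not survive
    # a power cycle, so it normally goes in a boot script or /etc/hdparm.conf.
    subprocess.run(["hdparm", "-S", "0", disk], check=True)
```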


You know something funny: as much as I dislike one brand of hard drive, I keep buying it over and over. I don't know; they have not failed me so far. I think it is sometimes a fluke that a brand new drive fails for no apparent reason. There's got to be a bad batch with bad firmware or weak read/write heads for some people to get bad drives while others do not experience the issue.

Some say it right: the density of these drives, packing ever more bytes onto a single platter, makes them more susceptible to failure.


As far as the brand issues go I don't deny that every brand seems to have had "bad batches", but the only thing I can think of that compares to the Seagate 7200.11 fiasco would be the IBM 75GXP Deathstar. Admittedly a lot of the failures were supposedly due to firmware issues as opposed to mechanical defects, but still. A bricked drive is a bricked drive. I think Seagate is having an especially rough time right now in terms of defect rates and this is not just based on the newegg reviews. It is true that all of my Seagate drives that failed (and all of them failed) were bought from newegg as bare drives and were shipped with United Parcel Smashers, but my one Hitachi drive was also a bare drive shipped with UPS and it is fine.

I would be willing to accept the argument that the newegg reviews show much higher defect rates due to the fact that they ship with UPS and sell mostly bare drives and aren't known for packing very well. But if the poor packing is the main issue then why do the few boxed retail drives that they sell have more or less the same rate of negative reviews as the OEM bare drives?

It is very difficult to read reviews for the 7200.11 or the Seagate LP and not see at least some kind of statistical significance, with the 7200.11 reviews especially; there are over 2000 reviews there. Then look at the 2TB Samsung (to show a difference in manufacturer) or the 1TB Samsung (to show density trends) or a Seagate 7200.8 250GB (density, LMR, and pre-Maxtor Seagate). Also compare the reviews for the 640 GB WD Caviar Black (density). Only 12% of its ratings were not either excellent or good. Does that imply that Western Digital finds it easier to make a smaller (320x2) drive than a larger (500x4) drive, or that Western Digital is more than capable of producing very reliable drives when they are getting a premium price for them? As impressive as that is, look at the reviews for my trusty 80 GB 7200.7: only 7% of the reviews are not either excellent or good. Even though it's not completely consistent, I do think I see some trends in all this. The fact that the 640GB Caviar Black (320x2) and 1TB Samsung (500x2) are so reliable seems to cast doubt on the theory that the change from LMR to PMR is at fault for the apparent drop in reliability, despite the fact that PMR is relatively new tech. So maybe it is just the increase in areal density that is at fault. Will we see even higher failure rates when 640 gig platters arrive? Maybe we are approaching the actual limits of areal density for 3.5" hard drives regardless of what PMR theory has to say about it (1 Tbit per square inch).

The theories that most defects can be picked up initially by stress testing (I admit this does seem to be true of the more recent non-Seagate failures, which are mostly infant mortality), or that they can often be avoided by not allowing the drive to spin down, are interesting. I used to allow my drives to spin down after as little as 10 minutes, but I have always done that (10-30 min.) and never had a problem with older drives. And my Hitachi 7K1000 didn't seem to mind; only my 1TB+ Seagates did. A friend mentioned this idea to me, and although I'm not sure that it has any real basis, I have been following the advice and not allowing any of my drives to spin down. As for the stress testing, I must admit that I did not do that on my failed Seagate drives, but it only took them 3 or 4 months to all start failing intermittently. OTOH, the 2 Samsung 160GB drives that failed on me both took over a year to do so, and my 2 WD1200 Western Digital 120 gig drives took more than 2 years to fail. This has been my first experience where a drive has failed so quickly.

but the only thing I can think of that compares to the Seagate 7200.11 fiasco would be the IBM 75GXP Deathstar.
I don't know the actual numbers, but the 7200.11's started dying much sooner. The 75GXP's and, to a lesser extent, the 60GXP's were especially nasty because they tended to live a lot longer, and hence people ended up with more data on them before they'd die. The 7200.11's and ES.2's were much less catastrophic for the most part.
...they ship with UPS and sell mostly bare drives and aren't known for packing very well. But if the poor packing is the main issue then why do the few boxed retail drives that they sell have more or less the same rate of negative reviews as the OEM bare drives?
End-user reporting bias, that's almost certainly why.

That, and even if you retail-box a drive but it ends up tossed 6 feet vertically... yeah, that's probably still not going to be good for it. 3 or 4 feet is probably well within packing specs, but 6 or 8 feet? Not likely.

Our failure rates here are actually pretty good for 1TB and 1.5TB disks; unfortunately we do not yet have enough 2TB disks in nearline use for me to make such a judgment (and most of our desktop production is with smaller disks)...

I trust Newegg reviews about as far as I can throw them-- that is, not very far at all.

Will we see even higher failure rates when 640 gig platters arrive?
I suggest you read some of the other threads here; they come up every time we jump a size generation in areal density, or even a half generation. In more recent memory that includes 200GB/platter and 250GB/platter for 1TB disks (Hitachi's the only one who did a 5-platter 1TB), the shift to 1.5TB to a lesser extent (since Seagate was the only one with a 7200rpm 1.5TB disk for a very, very long time), and now 2TB at 400GB/platter and 500GB/platter (again Hitachi's the only one with a 7200rpm 5-platter, while Seagate and WD did a 7200rpm 4-platter, and Samsung's and Hitachi's 4-platter 7200rpm entries, plus EVERYONE's 7200rpm 4-platter nearline entries, are all curiously very late to hit volume...).
I used to allow my drives to spin down in as little as 10 minutes,
For drives that are contact start-stop, yes, they have a shorter lifetime than rampload drives when frequently powered down.
This has been my first experience where a drive has failed so quickly.
I hate to say this, but the plural of anecdote is not data. When you run through, say, 10 drives bought at random times from different batches, all handled properly, well, then you might be on to something. :)
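To illustrate how little a handful of drives can tell you, here's a rough sketch of the math (a Wilson 95% interval in plain Python; the failure counts are made-up examples, not anyone's real numbers):

```python
# How much does a small sample of drives say about the true failure rate?
# Wilson 95% score interval, plain Python.  The sample counts below are
# made-up examples for illustration only.
import math

def wilson_interval(failures: int, n: int, z: float = 1.96):
    """Approximate 95% confidence interval for a binomial proportion."""
    p = failures / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

for failures, n in [(8, 8), (3, 7), (2, 100)]:
    lo, hi = wilson_interval(failures, n)
    print(f"{failures}/{n} failed: plausible true rate roughly {lo:.0%} to {hi:.0%}")
# A 3-out-of-7 result is consistent with anything from ~16% to ~75%,
# which is why a few drives from one batch prove very little.
```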


I trust Newegg reviews about as far as I can throw them-- that is, not very far at all.

Unfortunately they are all we've got when it comes to statistical information. The SR reliability database lacks sample size, and statistically it is not necessarily any better anyway. I don't see why the Newegg reviews should be skewed in terms of higher failure rates (aside from shipping/packing issues), or in terms of one brand getting more positive reviews than another. It might be said that people are more likely to post a negative review than a positive one, but even if that is true (which is by no means proven), that could be accounted for.

I suggest you read some of the other threads here; they come up every time we jump a size generation in areal density, or even a half generation. In more recent memory that includes 200GB/platter and 250GB/platter for 1TB disks (Hitachi's the only one who did a 5-platter 1TB), the shift to 1.5TB to a lesser extent (since Seagate was the only one with a 7200rpm 1.5TB disk for a very, very long time), and now 2TB at 400GB/platter and 500GB/platter (again Hitachi's the only one with a 7200rpm 5-platter, while Seagate and WD did a 7200rpm 4-platter, and Samsung's and Hitachi's 4-platter 7200rpm entries, plus EVERYONE's 7200rpm 4-platter nearline entries, are all curiously very late to hit volume...).

If I can find those threads I certainly will read them. I just found the following in AnandTech's article on the WD 4K-sector issue.

The crux of the problem is that there are 3 factors that are in constant need of balancing when it comes to hard drive design: areal density, the signal-to-noise ratio (SNR) in reading from drive platters, and the use of Error Correcting Code (ECC) to find and correct any errors that occur. As areal density increases, sectors become smaller and their SNR decreases. To compensate for that, improvements are made to ECC (usually through the use of more bits) in order to maintain reliability. So for a drive maker to add more space, they ultimately need to improve their error-correction capabilities, which means the necessary ECC data requires more space. Rinse, wash, repeat.

At some point during this process drive manufacturers stop gaining any usable space - that is, they have to add as much ECC data as they get out of the increased areal density in the first place - which limits their ability to develop larger drives. Drive manufacturers dislike this both because it hinders their ability to develop new drives, and because it means their overall format efficiency (the amount of space on a platter actually used to store user data) drops. Drive manufacturers want to build bigger drives, and they want to spend as little space on overhead as possible.

There is also a nice graph showing that relationship in the article. So apparently there is an inverse relationship between SNR and areal density, which inevitably introduces more errors, which in turn need a more robust form of error correction to fix. That looks like a smoking gun to me. I wonder how many more I would find if I knew more about the technology. So apparently the move to PMR was not the total panacea that it was made out to be in terms of automagically getting to 1 Tbit/sq.in platter densities. There are other factors involved. All that is needed for reliability to decrease in proportion to an increase in areal density is for the manufacturers to underestimate how much additional ECC is required to make up for the larger number of errors.
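Just to make that tradeoff concrete, here is a toy model in Python. Every number in it is made up purely for illustration; the only point is the shape of the curve: as raw density climbs, a growing share of each sector goes to ECC, so the usable gain per generation shrinks.

```python
# Toy model of the areal-density vs. ECC tradeoff described in the quote above.
# Every number here is made up purely for illustration.

def usable_density(raw_density: float, ecc_fraction: float) -> float:
    """User-visible density after spending a fraction of each sector on ECC."""
    return raw_density * (1.0 - ecc_fraction)

# Hypothetical generations: raw density doubles each step, but (by assumption)
# the falling SNR demands a growing share of each sector for ECC.
steps = [
    (100, 0.05),   # (raw Gbit/sq.in, fraction of the sector spent on ECC)
    (200, 0.08),
    (400, 0.13),
    (800, 0.22),
]

prev = None
for raw, ecc in steps:
    usable = usable_density(raw, ecc)
    gain = "" if prev is None else f"  (+{usable / prev - 1:.0%} usable vs. previous step)"
    print(f"raw {raw:3d} Gbit/sq.in, ECC {ecc:.0%} -> usable {usable:5.1f} Gbit/sq.in{gain}")
    prev = usable
# Raw density doubles every generation, yet the usable gain shrinks each time --
# the "rinse, wash, repeat" squeeze the article describes.
```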

I found the following post in a hardforum thread on the Hitachi 7K2000:

Originally Posted by novadude:

The odd thing is that at the same time they introduced the 5 platter 2TB drives they introduced 2-platter 7200rpm 1TB drives, so they have the capability to make 4 platter 2TB drives.

This is incorrect: if they could've put out a 4-platter 7200RPM 2TB instead of a 5-platter, they would have. There was an engineering issue specific to four platters @ 500GB per platter when spun higher than a certain RPM. At 7200RPM the poor SNR supposedly produced error rates that were out of control. In the meantime read/write head tech may have advanced and overcome that, but I don't think it was a marketing conspiracy, if that's the concern.

Is this stuff common knowledge? Does anyone have links to more info about this? The idea of declining SNR being responsible for the increase in negative Newegg reviews for higher capacity drives is becoming more and more compelling to me.


It might be said that people are more likely to post a negative review than a positive one, but even if that is true (which is by no means proven) that could be accounted for.
That is absolutely proven. Go take a look at some statistical studies...
So apparently the move to PMR was not the total panacea that it was made out to be
It was never a panacea. Everyone in the industry knew it would be more difficult.

If you haven't noticed, during each capacity jump (which I alluded to above) some makers lag... sometimes entire types of drives lag. If anything lags, whether one component or the manufacturer of a component, things take longer. Samsung, for example, had a devil of a time getting 333GB/platter 7200rpm disks working, as their head supplier/head design did not turn out as expected when they shifted from sampling to full production.

Is this stuff common knowledge?
In the industry, it is.

Head positioning and drive design get more complex as platter count increases. Demands on the drive design and components also grow as areal density goes up.

The idea of declining SNR being responsible for the increase in negative Newegg reviews for higher capacity drives is becoming more and more compelling to me.
Not at all. Newegg reviews-- indeed any consumer survey like that-- tend to be filled out by people who have had failed drives and want to bitch and moan about it, while those who are happy or satisfied with their drives mostly don't say anything. Add in that Newegg, Mwave, ZZF, and most other vendors still can't consistently package a harddisk, both within their internal warehouse and externally to their customers, to save their lives, and you see a much higher failure rate due to poor handling than you would otherwise. And by "other vendors" I mean just about everyone. Even Other World Computing (OWC) has screwed up plenty of times, ATACOM has had some bloody catastrophes, it's a problem just about everywhere...

Add in that people are now packing larger drives with more data, and hence shuffling more data around, and failures are going to go up as the BER (bit error rate) remains constant and the drive handling procedures and the drives themselves aren't getting any more immune to shock and physical damage (at least not by much!).
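To put a rough number on that: consumer drives are typically specced at an unrecoverable read error on the order of 1 per 10^14 bits read. Treating that spec as a constant (it's a ballpark spec-sheet figure, not a measured rate), simply moving more data means more expected error events:

```python
# Expected unrecoverable read errors vs. amount of data moved, assuming the
# commonly quoted consumer-class spec of ~1 error per 1e14 bits read.
# The spec value is a ballpark assumption, not a measured rate.

UBER = 1e-14  # unrecoverable bit errors per bit read (assumed spec value)

def expected_errors(terabytes_read: float, uber: float = UBER) -> float:
    bits_read = terabytes_read * 1e12 * 8   # decimal TB -> bits
    return bits_read * uber

for tb in (0.08, 1, 12, 100):  # one 80GB fill, one 1TB fill, a 12TB array fill, heavy reuse
    print(f"{tb:7.2f} TB read -> {expected_errors(tb):.3f} expected unrecoverable errors")
# Filling an 80GB drive once barely registers; shuffling ~100TB over a drive's
# life gets you to several expected error events even though the per-bit
# error rate never changed.
```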


Just to add to this discussion a bit. 3 of the 7 Samsung EcoGreen F3EG 2TB disks have developed bad sectors this week and caused me a large amount of data loss. :(

Quite a bad score if you ask me. :(


That is absolutely proven. Go take a look at some statistical studies...

Fair enough. I'll take your word for it. I guess I am the exception: I have posted positive reviews on Newegg, but I haven't posted any reviews about my Seagate 7200.11 experiences. If you are so confident that negative experiences are being over-represented, then it is simple enough to subtract a negative bias factor, and you are again left with some very useful statistics, both on trends (whether drive manufacturers really are overcoming the greater difficulties inherent to higher areal densities without a drop in reliability) and on differences between manufacturers. The only question is what the fudge factor should be. It seems reasonable to assume that no drive has a 0% (or less, haha) defect rate. So if you take what seems like a very reliable drive (based solely on said reviews) like the 7200.7, with its measly 7% negative reviews, you end up with some guidelines. I think any defect rate less than 2% is pretty implausible. So it might seem reasonable to just subtract 5% from the percentage of drive failures indicated in the Newegg reviews. Since Newegg may still have been shipping via FedEx Saver in the days of the 7200.7, it probably makes sense to also subtract a United Package Smasher factor from the newer drives.

The Samsung 1TB F3 is well within the Newegg UPS era. In fact it is a 500 gig per platter drive. That drive has 12% negative reviews. The worst-case scenario is that all of the difference in Newegg review reliability between the 2 drives is due to the difference between UPS ground and FedEx air (although Saver is sometimes ground), or maybe a different method of packaging (although I don't recall any significant changes there). In that case we would add another 5% to the fudge factor for a total of 10%. Since I don't think FedEx was ever perfect, that may be an overestimate. You still end up with a large difference between drives and drive manufacturers. But admittedly you do end up (if the assumptions are correct) showing that it is quite possible to make a modern PMR 500 gig per platter drive with about the same reliability as a much older LMR 80 gig per platter drive. At least if you happen to be Samsung. Subtracting 5% for negative bias seems perfectly reasonable to me, but subtracting another 5% just for the (alleged) differences between UPS and FedEx is, I think, a bit of a stretch. I would guess it may be more like half of that. FedEx shipping is not that much less violent than UPS shipping; they both use similar systems for package handling. That would still imply only a slight (best case) decrease in reliability of 2 to 3% over an order of magnitude increase in areal density, which I don't exactly consider a major disaster. So maybe the sky is not in fact falling.

It may just seem that way because Seagate, Hitachi, and Western Digital (with the possible exception of their premium drives) are nowhere near a 12% negative review rate in their 1TB+ drives. They usually start at more like 33% and go up, way up, from there. Even if you subtract another 5 to 10% from that for negative bias and shipping issues you are left with true failure rates starting at somewhere between 23 and 28% and getting up to 50% or higher on some Seagate drives. I thought I found a Seagate drive with a 70% newegg review failure rate, but I can't find it now. Even if you subtract the full 10% fudge factor, these failure rates are unacceptably high (both for the manufacturer and the customer). However the main point of this thread was to discuss an apparent change in reliability in the modern high capacity drives as compared to older drives like my Seagate 7200.8 400GB or 7200.7. Subtracting 3-5% for the switch to UPS does make that more difficult. And not all older drives were as reliable as the Seagate 7200.7. It seems fair to compare apples to apples (and lemons to lemons). The 5% negative bias wouldn't have any effect on the difference from older drives, but the change from Fedex to UPS probably would. It would be nice if I knew exactly when that happened so that I could find a drive released just after the change to compare to modern drives. That would avoid the entire shipping issue and maybe also the issue of any change in packing methods if the less reliable packing also changed at the same time as or earlier than the change in shipper.
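To make the fudge-factor arithmetic above explicit, here it is as a tiny Python sketch. Every number in it is my own guess from review percentages, nothing more:

```python
# The fudge-factor arithmetic from the preceding paragraphs, spelled out.
# All inputs are my own guesses from Newegg review percentages -- these are
# not measured failure rates.

def guessed_failure_rate(negative_review_pct: float,
                         negativity_bias_pct: float = 5.0,
                         shipping_pct: float = 0.0) -> float:
    """Crudely back a 'true' failure rate out of a negative-review percentage."""
    return max(0.0, negative_review_pct - negativity_bias_pct - shipping_pct)

# (drive, % negative reviews, shipping fudge: 0 for the FedEx era, ~2-3 for UPS-era bare drives)
samples = [
    ("Seagate 7200.7 80GB (FedEx era)",      7.0, 0.0),
    ("Samsung F3 1TB (UPS-era bare drive)", 12.0, 2.5),
    ("typical 1TB+ Seagate/WD/Hitachi",     33.0, 2.5),
]
for name, neg_pct, ship in samples:
    est = guessed_failure_rate(neg_pct, shipping_pct=ship)
    print(f"{name}: {neg_pct:.0f}% negative reviews -> ~{est:.1f}% guessed failure rate")
```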

It may just be a kind of freakish coincidence that all of the non-Korean manufacturers seem to be having such a high percentage of drives returned to them. The equivalent of a "bad batch" of drives from 3 manufacturers for a year or two, which could happen anytime. I mean, what if Seagate and Western Digital had also been having similar problems to IBM during the 75GXP fiasco, but at maybe half the defect rate? Maybe that is sort of what is happening now, but in a less dramatic or obvious fashion, so that the problems can be more easily denied. Maybe it actually has nothing to do with the increase in areal density. It seems pretty plausible to me that the fact that higher capacity drives are harder to make reliably may have something to do with a string of "bad luck" on the part of Seagate, Hitachi, and Western Digital. OTOH, maybe not. I can imagine someone whose 10 IBM Deathstars had just failed speculating in the same way that the high failure rate had something to do with the improvements in areal density at the time. It does always seem like we are balancing precariously on the cutting edge. This is the first time that I have felt that the manufacturers may be moving too fast.

Could the fact that Samsung tends to lag so far behind the other manufacturers in terms of capacity jumps have something to do with their drives seeming to be more reliable now? Maybe they are more careful or conservative in terms of handling higher areal densities. Maybe what other companies regard as a safe level of SNR or error rates gives them pause. And maybe Western Digital applies the same level of care or conservatism to their premium drives. Maybe Seagate has analyzed the factors involved and decided that simply denying the problems, even in their premium drives, is more economical.

Hehe. Sorry for that dig, but I just had thousands of dollars worth of Seagate drives fail on me in the past year. I lost terabytes worth of data. Maybe I still hold a bit of a grudge :). I just bought a 2TB Caviar Black (my first Western Digital since my WD1200 drives). There aren't many Newegg reviews of the drive probably due to Western Digital's insane pricing. I am hoping for the same 12% - 7% = 5% defect rate that the 1TB version manages.

Just to add to this discussion a bit. 3 of the 7 Samsung EcoGreen F3EG 2TB disks have developed bad sectors this week and caused me a large amount of data loss. :(

Quite a bad score if you ask me. :(

It is indeed. I guess Samsung is not a safe haven either. Although it still doesn't compare to my 9 out of 9 Seagate failures. Did you buy them from Newegg? Have you ever had that high a percentage of drives fail on you before? That's about the same percentage as the Hitachi and Western Digital drives. So at least you haven't done any worse by going with Samsung.


Ah, well, I have had dozens and dozens of disks go through my PCs/servers over the years. I have been running a storage server for years and thus upgrade once in a while, trying to double the capacity when possible. This started with 20GB disks and has now moved to 2TB disks. So I have had 20GB, 60GB, 120GB, 250GB, 500GB, 1TB, 1.5TB and now 2TB disks in my server (the server itself has changed hardware too, but in the most recent years I have been using Adaptec controllers, so you could call that a constant. ;) ).

I've run everything from 4-disk RAID 5s to 8-disk RAID 5s, and have had some failed disks over the years... treating them quite badly, actually.

I run the server 24 hours a day for weeks on end, then shut it down, put it in the car, drive it to a LAN party, turn it on again for 48 hours while the disks get thrashed, then turn it off again and put it back in its spot where the disks spin again for a few weeks. Well, I can tell you, desktop disks (or any?) are not designed for that kind of usage. ;) Oh, and my cars mostly have subwoofers in the back too, which is usually where the server would stand. Never had any effect on it that I could tell.

But, since I mostly had 3 years of warranty, I got by pretty well. Maxtor especially was my friend with their advance replacement program! :D :D Most disks would take 1 to 1.5 or 2 years of this kind of punishment, which in all honesty is about the average lifetime you would expect from the disks in power-on hours and reads, just compressed from the 5 to 10 years it would normally be spread over.

Anyway, in all my time with all my types of disks and arrays and everything, I have never(!!!) lost as much data as I did this time. I have never had so many disks just fail for no apparent reason. And nowadays my disks sit in nice holders in Chieftec hot-swap bays, etc., etc.

So no, I am used to having a much better experience, just like you are, from what I gather from your story. I am sadly disappointed, and I am hoping the replacement disks won't fail as horribly!

If you are so confident that negative experiences are being over-represented then it is simple enough to subtract a negative bias factor
Yes, but what is the factor? That's much more difficult to say.

And as the new sticky at the top shows about testing drives for failure, how many drives are being properly diagnosed as failed by the end-user? It's definitely not 100%...

Statistical reporting bias is something well studied; unfortunately even I am not far enough along in stats to dig up the best of the studies for you. You don't have to take my word for it: head over to your local community college and talk to even a newly-minted stats grad there and they can find the info...

I think any defect rate less than 2% is pretty implausible.
Good lord... if drives had an initial 2% defect rate the manufacturer would be out of business. I have actual numbers from our production, and I can tell you that if any of our initial failure rates were 2% we'd send the entire batch back to the manufacturer for replacement. (I am under NDA, so you'll have to make do here. Sorry :( ).

Hell a 5% overall defect rate would be bloody ridiculous. That would cause systems we warranty for 3 years to have at least a 5% warranty claim rate on harddisk defects alone, and we're nowhere near that kind of number for those failures. :)

It probably makes sense to also subtract a United Package Smasher factor.
I don't think so. Newegg and almost all vendors are known for less than proper handling of drives within their own warehouses, within their own packing process, improper packing, and possible shipping abuse. Of those four, I am the least worried about improper shipping as once a drive is properly packed, it should survive most shipping just fine.
Could the fact that Samsung tends to lag so far behind the other manufacturers in terms of capacity jumps have something to do with their drives seeming to be more reliable now?
Not at all. They were actually among the most aggressive in ramping to 333GB/platter disks for 1TB capacity. Look at actual 1TB 333GB/platter disk ship dates, and Samsung's F1 was the first. They got bitten hard as their head design/supplier could not keep up, and suffered from high failure rates and very low production (due to insufficient head supplies) as a result.
but I just had thousands of dollars worth of Seagate drives fail on me in the past year.
Again, the plural of anecdote is not data. ;)
Just to add to this discussion a bit. 3 of the 7 Samsung EcoGreen F3EG 2TB disks have developed bad sectors this week and caused me a large amount of data loss.
Were they all from the same batch? Purchased in the same shipment? Shipped in the same box? Purchased from the same vendor?


Actually, regarding the Samsung F1 1TB drives, I've seen about 2 fail, and I know of at least 100 in production, both in servers and home systems... Not a bad track record, imho.

Samsung might have jumped on the 333GB/platter bandwagon quickly, but they were far from the first to get to the 1TB point.

The biggest problem with hard drives and perceived vs. actual reliability is that most end users have too few drives for the data to be meaningful, and those with large enough datasets are reluctant to talk about it, for obvious reasons...

and I know of at least 100 in production, both in servers and home systems... Not a bad track record, imho.
I'm talking specifically about early in the production cycle... the first 6 months or even the first year of availability for the F1 1TB's was extremely rough. It's what, 3 years later now? They definitely have most of the bugs ironed out by now. :)

Hell, that's another source of bias in the data (if not carefully analyzed): it may not reflect continuous improvements...

