no driver

New HDD burn-in routines?


If you do a proper burn-in and torture test on a drive for a day and it passes before you commit any data to it, it will not die on you after several months. If it does, it will be something random, bad luck, not a repeating pattern.

I am fairly certain I just lost my hard drive in my HTPC and am preparing to buy a replacement drive, but I saw the comment above in another thread.

What should I do to burn-in and/or test the new drive before I try to reinstall my system and start trusting it with my data again?

Thanks in advance :)


Write some random stuff to it, stuff that you have checksums of. Fill the drive. Check the SMART values for anomalies. Check the files against the old checksums for silent data corruption. Check the SMART values again for any changes.
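A minimal sketch of that fill-and-verify idea in Python, assuming the new drive is mounted empty at a placeholder path (the mount point, file size and file naming are my own illustration, not anything prescribed above):

```python
# Fill the drive with random files, remember their SHA-256 checksums, then
# read everything back and compare. Paths and sizes are placeholders.
import hashlib, os

MOUNT = "/mnt/newdrive"          # assumed mount point of the empty drive under test
FILE_SIZE = 256 * 1024 * 1024    # 256 MiB per file
CHUNK = 1024 * 1024

def write_random_file(path):
    h = hashlib.sha256()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // CHUNK):
            block = os.urandom(CHUNK)
            h.update(block)
            f.write(block)
    return h.hexdigest()

def checksum(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(CHUNK):
            h.update(block)
    return h.hexdigest()

expected = {}
i = 0
while True:                      # write until the filesystem is full
    path = os.path.join(MOUNT, f"burnin_{i:05d}.bin")
    try:
        expected[path] = write_random_file(path)
    except OSError:              # disk full: stop writing
        break
    i += 1

for path, digest in expected.items():   # silent-corruption check
    if checksum(path) != digest:
        print("MISMATCH:", path)
```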

Zero fill the drive. Check SMART. Run the SMART offline tests: the extended self-test (in offline mode) and the offline data collection routine check all media, while the short self-test runs the same mechanical tests as the extended one but checks only a small portion of the magnetic media (including the beginning and end of the LBA space, since a lot of important data ends up there: boot sector, filesystem metadata).
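On Linux those tests can be started from a script with smartctl (smartmontools); a hedged sketch follows, with the device path as a placeholder and a manual wait between tests, because starting a new self-test aborts the one already in progress:

```python
# Start the SMART self-tests mentioned above and read back the results.
# /dev/sdX is a placeholder; each test must finish before the next begins.
import subprocess

DEV = "/dev/sdX"

for test in ("short", "long", "offline"):   # short/extended self-test, offline data collection
    subprocess.run(["smartctl", "-t", test, DEV], check=True)
    input(f"Wait for the '{test}' test to finish (check 'smartctl -a'), then press Enter...")

# Dump the attribute table and self-test log to look for anomalies.
print(subprocess.run(["smartctl", "-a", DEV], capture_output=True, text=True).stdout)
```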

You cannot make the drive perfectly reliable with any burn-in, and even a few days of constant torture is still not enough. So keep extra backups of even half-trivial files during the first few weeks of use, and keep the HDD in frequent use during that period. After that you can go back to backing up only the stuff that's actually important. A good way to keep the HDD load high without slowing down the system is to run the offline data collection routine as often as possible. It takes a few hours when the HDD is idle and is handled autonomously by the drive; the routine pauses whenever disk I/O is needed, so it doesn't slow you down. You can start it with HDDScan, so you don't need to exit Windows and can keep working while the routine runs in the background.

Heavier methods would include a boot CD with some HDD eraser/torture utility that writes random bytes in an endless loop, but for that you need a computer sitting out of regular service. If you intend to do the burn-in in the HTPC itself, this latter method is the one to prefer. Just run a write-verify loop overnight or for a few days, then install the OS normally but continue to monitor the drive. It may still fail, but that burn-in should give the coup de grace to the "almost DOAs".
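If you'd rather script such a write-verify loop yourself than use a boot CD, a very rough (and destructive) sketch might look like this; the device path and region size are placeholders, and note that without O_DIRECT the read-back may be served partly from the OS page cache, so a real torture utility would be more careful than this:

```python
# DESTRUCTIVE: repeated write/verify passes over a raw block device.
# Needs root; /dev/sdX and REGION are placeholders. Real tools cover the
# whole drive and bypass the page cache; this simplified loop does neither.
import os

DEV = "/dev/sdX"              # bare drive with no data on it
REGION = 64 * 1024 * 1024     # exercise only the first 64 MiB, for brevity
CHUNK = 1024 * 1024

while True:                   # let it run overnight, stop with Ctrl+C
    pattern = os.urandom(CHUNK)
    with open(DEV, "r+b", buffering=0) as dev:
        for off in range(0, REGION, CHUNK):      # write pass
            dev.seek(off)
            dev.write(pattern)
        os.fsync(dev.fileno())
        for off in range(0, REGION, CHUNK):      # verify pass
            dev.seek(off)
            if dev.read(CHUNK) != pattern:
                raise SystemExit(f"Verify mismatch at offset {off}")
```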


You shouldn't. Hard drive failures do show a bathtub curve pattern, but the initial failures don't tail off until after several months. Burning your drive in for 3 months just makes it more likely to fail in month 4, and burning it in for 6 months still doesn't guarantee it won't be an "early failure" drive. Burning in for one day guarantees nothing and introduces unnecessary wear and tear that degrades your long-term prospects anyway. Frankly, you should never "trust" one drive; always keep backups.

[Edit] This was written before whiic's more comprehensive post, so it probably seems out of place.

Edited by qasdfdsaq


I disagree with qasdfdsaq on some points and agree on others.

Bathtub curve: yes, drive failures more or less follow such a curve, though the initial failure rate is much higher than the old-age rate because of the complexity of the device. Burning in for a period X takes X*A off the high-failure-rate span (where A is some unknown multiplier for how much more stressful the burn-in is than normal use), but it also takes the same X*A off the drive's expected lifetime. You are saying that burn-in doesn't reduce early deaths after the burn-in, that it not only causes deaths during the burn-in but also increases deaths in the near future after it, and that it shortens lifetime by bringing the old-age wall closer; in other words, that burn-in is a lose-lose-lose scenario. I think that if you burned in for 3 months (a much longer period than I would ever use, because that's a lot of wasted electricity), the 4th-month failure rate (with the highly stressful workload then reduced to normal) would be lower, at the cost of several months of the drive's expected service life.
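To make that trade-off concrete, here is a toy model of my own (the parameters are made up, not taken from any drive data): a Weibull hazard with shape < 1 gives the decreasing "infant mortality" wall of the bathtub curve, and a burn-in simply starts the service life at a later effective age.

```python
# Illustrative only: decreasing-hazard Weibull survival model. SHAPE and
# SCALE_DAYS are invented numbers, not measured drive statistics.
import math

SHAPE, SCALE_DAYS = 0.5, 3650.0

def survival(t_days):
    """P(drive still alive at age t_days)."""
    return math.exp(-((t_days / SCALE_DAYS) ** SHAPE))

def p_fail_month4(effective_age_days):
    """P(failure during service days 90-120 | alive at day 90), when the drive
    enters service already aged by effective_age_days of burn-in wear (X*A)."""
    start, end = 90 + effective_age_days, 120 + effective_age_days
    return 1 - survival(end) / survival(start)

print(p_fail_month4(0))    # no burn-in
print(p_fail_month4(10))   # burn-in worth ~10 days of normal wear
```

Under such a decreasing-hazard model the month-4 risk does drop (slightly) with burn-in, while those same X*A days are subtracted from the far end of the drive's life, which is the trade-off described above.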

At the very least, I would burn in for a few hours; basically, I would at least do a complete surface scan. While early deaths can be spread over a period of roughly 3 months, the first month is more failure-prone than the next two; within that month the first week is more failure-prone than the following three, and so on, down to the first hour or even the first few minutes. Most DOAs die during OS installation, but it would be rather annoying to have the OS half-installed and then notice instabilities, at which point you have to go back to diagnostic utilities to verify it's the drive and not RAM, other hardware, or even software. If the first thing you do is run a stress test, you may spare yourself a lot of unnecessary work.

An overnight burn-in will not make the HDD reliable, but it will make it less horribly unreliable. The drive will still be more prone to failure for a few months even if burned in, and that duration is only shortened by a few days. What matters is that what gets cut off is the extremely high-failure-rate stretch: the first few days of that ~3-month period.


I have been using computers for close to 20 years. I have not lost a single bit of data (proper backup procedures) and I have never had a running drive fail on me after one entire day of burn-in. I have handled over 100 drives... and at least 30 of them ran continuously for a year or more before being taken offline. If a drive failed, it failed while doing the stress test -- never after.

I take drives offline after a maximum of 2 years of use. All my drives run 24/7.

I always use quality components... especially quality PSUs.

I have had offline drives fail after a few years of disuse (thermal decay of the magnetisation, I believe). That is a whole different issue, and not likely to affect an average user running their OS on a drive.

If you do a proper burn-in, you will not face problems. I am not considering random out-of-luck failures, lightning strikes, etc. Whatever the choice is, proper backup procedures must be followed. You don't want to be part of Murphy's statistics.

Here is my procedure:

1. Connect the drive to a running system. Read SMART values.

2. Do a SMART short self test. Do a SMART long self-test.

3. Zero fill / Wipe the drive with the manufacturer's utility. Entire drive.

4. Run an HD Tach full read/write. Everest, Sandra, etc. all have stress tests; run the hard drive part continuously for hours.

5. Run the Victoria for Windows read/write test and make sure there are no slow sectors.

6. Drop to DOS. Run MHDD, run an LBA test and check for slow sectors. Run the Read/Write/Verify test. Run the drive's internal ATA Secure Erase command.

7. Do a full format.

8. Compare SMART values against the baseline from step 1. If there are no anomalies, all good to go: install your OS and continue. (A scripted sketch of this before/after comparison follows below.)
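As a rough illustration of what that before/after comparison could look like when scripted (Linux, smartctl from smartmontools; the device path and the very naive parsing are my placeholders, not part of the procedure above):

```python
# Snapshot the SMART attribute raw values before the burn-in (step 1) and
# diff them afterwards (step 8). Relies on the usual "smartctl -A" table
# layout where attribute rows start with the numeric attribute ID.
import subprocess

DEV = "/dev/sdX"   # placeholder for the drive under test

def smart_raw_values(dev):
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    values = {}
    for line in out.splitlines():
        parts = line.split()
        if parts and parts[0].isdigit():      # e.g. "  5 Reallocated_Sector_Ct ... 0"
            values[parts[1]] = parts[-1]      # attribute name -> raw value
    return values

before = smart_raw_values(DEV)
input("Run steps 2-7 (wipe, stress tests, surface scans), then press Enter...")
after = smart_raw_values(DEV)

for name in sorted(set(before) | set(after)):
    if before.get(name) != after.get(name):
        print(f"{name}: {before.get(name)} -> {after.get(name)}")
```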

6_6_6: "Here is my procedure: [...]"

All this is pointless and useless. If you've had 100 drives not fail on you, it's still just anecdotal evidence and hence meaningless. Even if you had 1000 drives not fail on you it would still merely be anecdotal evidence, and would be no guarantee that your 1001st drive won't fail, or the original poster's drive for that matter.

"Burn-in" of a harddrive is pointless and does nothing to improve its reliability. You may possibly detect any existing flaw(s) that was already present when the unit was delivered, but that's it. Regardless of what you do as a user you cannot improve reliability of a HDD, it will either fail early or it won't, running HDTach and all the smart tests in the world will not change this.

You may as well wave a dead frog over the drive whilst muttering incantations from various L. Ron Hubbard scriptures, it'll do you as much good. :P


I disagree with FaaR and agree with 6_6_6: it's still not a bad idea to stress the disk before you use it, rather than placing your data on a potentially unstable drive.

Justin.


I partially disagree with both 6_6_6 and FaaR.

I do not think a 1-day stress test will weed out anything except the weakest of the weak (the ones which would have failed in the first few days anyway). But I still don't consider burn-in / testing of a new drive useless. You spare yourself the trouble of installing the OS, drivers and utilities and making all your settings, only to lose it within the first few days because you didn't even bother to surface-scan the drive you just got. Other than managing to kill a drive that would have died during the first few days anyway, burning in a HDD will not yield any additional benefit... nor does it need to, since catching the DOAs alone is a good enough reason to test a drive prior to attempting OS installation.

The drive will still remain quite failure-prone after an overnight burn-in. 6_6_6's experience is only anecdotal evidence and thus doesn't actually prove anything. HDDs have low failure rates once the DOAs are removed from the population, so he might simply not have noticed an elevated failure rate at ages of 2-90 days (only seeing drive deaths during the first day). 6_6_6 has just been lucky.


I think that's quite a good way to put it. Burning in a drive does not make it any more reliable (in my opinion, it makes it less reliable), BUT it does weed out some early-failure drives. The longer you burn it in, the more you weed out, but you soon reach the realm of diminishing returns and wasted time. I'd say anything over one night/day is in the realm of wasted time, as that's how long it takes to get a new drive out to replace one that failed. Since everything's backed up and/or RAIDed, the loss of one drive means nothing more to me than that, and burning in for any longer is just a waste of time.

I would put the amount of time you want to burn in a drive as equal to the amount of time it would cost you should that drive fail. For example, if you have no backups and no RAID, and it would take a month to replace any work you had on the drive, then burning in for a few weeks could be justified. In my case, it's not: drive fails? Hot spare. Hot spare fails? Overnight a replacement.

I'd say waving a dead frog over it, however, probably doesn't weed out the early failures.

Have to say, though, that having used computers for 15 years, I've lost many gigabytes of data to every cause from user error to faulty drivers, the Via KT133/686B bug, getting too close to speakers in a club, faulty backups, and freak acts of God, yet next to nothing from actual hard drive failures. Looking at the reliability survey on this site, where I keep a list of all my drives (it helps keep track of my own reliability record), I can see most of my failures were between 2-3 months and 1-4 years. Burning in drives hasn't helped one bit; the only DOAs I've had never even POSTed the first time.

Edited by qasdfdsaq

"Burn-in" of a harddrive is pointless and does nothing to improve its reliability.

I am not trying to improve a hard drive's reliability; I am not running a disk drive manufacturing plant. My purpose is to eliminate the bad apples before they rot, and to make sure I get the apples the farmer intended me to receive.

I can say the methodology does a pretty good job of ensuring that I receive what the manufacturer intended to deliver. My record speaks for itself.

Regardless of what you do as a user, you cannot improve the reliability of a HDD; it will either fail early or it won't, and running HD Tach and all the SMART tests in the world will not change this.

Yes it will. I guess you have no idea what running 6 hours of Prime95 is. If your CPU passed under those extreme conditions, it is not going to fail under light usage. If it does, it will be part of Murphy's stats; it is not going to be a repeating pattern.


I have no idea why people are talking about the time it takes to stress test a drive... these are automated procedures. It is not as if you are taking a shovel and laying bricks on a construction site. The computer does it; you don't spend time on it. If you do, it is no more than the time spent coming here and posting about "the time it takes to burn in a drive".

Let's take a look at this... Gone?

[attached image f1.jpg: HD Tach read benchmark of the suspect drive]


Well, a drop near the end alone would have indicated a head failure. No need to waste time on head failures; time to return it.

but...

Two sudden drops at other parts of the drive might indicate a combination of head failure and media failure... or just a media failure.

Check SMART values... all good.

MHDD surface scan:

Blocks <   3ms = 990247
Blocks <  10ms = 393356
Blocks <  50ms = 13217
Blocks < 150ms = 214
Blocks < 500ms = 61
Blocks > 500ms = 13

Does not look good... but does not look like a head failure either.

Let's see if the drive can heal itself... WD Zero Fill....

Check SMART... 72 reallocated sectors... nothing pending.

Do HD Tach and compare to before (Red failing, Blue new test)...

[attached image f2.jpg: HD Tach comparison, red = failing run, blue = new test after the zero fill]

All looks good. Reread MHDD values:

Blocks <   3ms = 987704
Blocks <  10ms = 529558
Blocks <  50ms = 14589
Blocks < 150ms = 4
Blocks < 500ms = 0
Blocks > 500ms = 0

Let's see how the troubling area looks now:

[attached image f3.jpg: the troubling area after the zero fill]

And how it was before:

[attached image f4.jpg: the same area before the zero fill]


This drive is running 24/7... 5 months now.

As you can see, I even kept drives that were in not-so-good health.

If they passed 1 entire day of stress testing, they are not going to fail on me. I have too many drives proving that: 30+ of my own, to be exact... and over 100 counting other people's.

Now I would like to see someone stress-test 5 drives for a day and have 2 of them fail within 3 months. Anyone?


Now let's examine hard disk failures:

15.5 % Head-Disk Interference

15.0 % No problem found

14.5 % Recording heads

10.1 % Drive handling damage (non-op)

8.5 % PCB

7.7 % Head or disk corrosion (non-op)

6.8 % Wires, Preamp

3.9 % Head Disk Assembly

2.6 % Disk Defects

1.9 % Drive Firmware

1.3 % Head-Disk Stiction (non-op)

1.1 % Spindle Bearing

0.7 % Foreign Gases or chemicals

By doing a stress test, we immediately eliminate about 48 of the roughly 90 percentage points of drive failures categorized above:

15.0 % No probs found

10.1 % Drive handling damage

8.5 % PCB

7.7 % Head or disk corrosion

1.3 % Head-Disk Stiction

1.9 % Drive Firmware

2.6 % Disk Defects

0.7 % Foreign Gases

Remaining:

15.5 % Head-Disk Interference (Head Crash)...

14.5 % Heads

3.9 % Head Disk Assembly

Main reasons for a head crash:

a. Faulty electronics, which would show themselves under stress testing

b. Wear and tear (happens right away or after prolonged periods of use)

c. Mechanical/electrical shock (do not play ball with your drive and do not throw lightning at it! Nothing we can do to prevent this except a good UPS and PSU)

d. Dust and contaminants (happens after prolonged periods of use)

By stress testing, we greatly decrease the probability of a head-crash...

1.1 % Spindle Bearing (excessive shock, wear and tear) is such a minuscule percentage that our stress testing wear and tear has no effect on ratios.

So we have roughly 80% of the ~90% of categorized drive failures eliminated with a good run of stress testing (the sums are tallied in the sketch below).
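For what it's worth, the arithmetic behind those two figures can be checked straight from the percentages quoted above; the grouping into "eliminated" versus "head-related" follows the argument in this post, and the totals are just sums:

```python
# Tally of the failure-category percentages quoted in this thread.
eliminated = {            # categories argued to be caught by a day of stress testing
    "No problem found": 15.0, "Drive handling damage": 10.1, "PCB": 8.5,
    "Head or disk corrosion": 7.7, "Head-disk stiction": 1.3,
    "Drive firmware": 1.9, "Disk defects": 2.6, "Foreign gases or chemicals": 0.7,
}
head_related = {          # remaining categories, argued to be greatly reduced
    "Head-disk interference": 15.5, "Recording heads": 14.5, "Head disk assembly": 3.9,
}

print(sum(eliminated.values()))                               # ~47.8 -> the "48%" figure
print(sum(eliminated.values()) + sum(head_related.values()))  # ~81.7 -> roughly the "80%"
```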

Yes.... lightning can hit, world might explode, aliens might feed on magnetic data...

HDDs have low failure rates after DOAs are removed from population

Contrary to popular belief, DOAs account for only 10% of drive failures.


Wow, seven posts of useless drivel that is not only utter rubbish but of no relevance at all. Well done - glad you had nothing better to do for four hours. The lack of maturity you demonstrate here and in the other thread where you've resorted to personal attacks shows that nothing you say can be taken seriously. Before I leave you to your childish wailing, I'll make a few points to clear up some misinformation for the benefit of others:

1. Stress testing will not improve the reliability of anything. Not a hard drive, not a CPU, not a car engine, not a hot air balloon. All it will do is introduce wear and tear and damage the drive. This is exactly the theory behind 6_6_6's idea: damage the drive as much and as quickly as you can to make it break earlier.

2. Just because "the computer does it" does not mean stress testing takes no time. When you are stress testing a drive, you cannot use it, and if you cannot use something because you are waiting for it to finish something else, that is wasted time. Some of us have jobs/lives/other things to do besides ranting on forums for >4 hours while waiting for our computers to become usable again.

3. The steps 6_6_6 lists take an hour or more each, depending on the size of the drive, and each requires you to return to the computer to start the next one.

4. Doing a stress test does not mean we are "eliminating 48% of 90% drive failures categorized". Apart from the innate nonsensical contradiction of the statement itself, it also contradicts what 6_6_6 later states in that "DOAs account for only 10% of drive failures". If we eliminate 48% of all failures by testing for DOA how come DOAs only account for 10%? The whole argument is flawed. Ignoring that, most failures from the reasons listed do not occur within the first day of a drive's life, nor are they triggered by a "stress test". The whole point of a stress test is to trigger failure, and if "1.1 % Spindle Bearing (excessive shock, wear and tear) is such a minuscule percentage that our stress testing wear and tear has no effect on ratios" is true, then a stress test is pointless by the same logic.

5. 6_6_6 repeatedly argues that he/she is deliberately ignoring "random" and "by chance" failures. He/she also argues they've never had a drive fail after doing a stress test (except for failures that happen by chance and failures that happen after some arbitrary amount of time). Duh. "Hmm, this failure doesn't fit my perfect track record, it must be down to chance so I shall ignore it". Again, a completely flawed argument by its very nature.

Edited by qasdfdsaq

4. Doing a stress test does not mean we are "eliminating 48% of 90% drive failures categorized". Apart from the innate nonsensical contradiction of the statement itself, it also contradicts what 6_6_6 later states in that "DOAs account for only 10% of drive failures".

Do you even know what a DOA is? Do you even understand what "non-op" means in those statistics?

If we eliminate 48% of all failures by testing for DOA how come DOAs only account for 10%? The whole argument is flawed.

DOA is anything you received in a non-operational state. It implies handling damage. You don't test for DOA. DOA is DEAD ON ARRIVAL: the item is not operational. You can't make a drive DOA by stress testing. If you are able to stress test a drive, it means the drive was not DOA in the first place. Hence the DOA probability is eliminated.

Nothing but your knowledge, or lack of it for that matter, is flawed.


I am not going to dignify the rest of your present (or probably future) nonsense with answers or rebuttals. I am done with you and your utter lack of knowledge.


Uhh... I had some nonsensical sentence structures in my last post. They might make it appear I hold exactly the opposite opinion to the one intended.

"I do not think 1 day stress test will weed out except the weakest of the weak" => "...weed out anything except..." ...and I could have put it in simpler form anyway.

I don't think there's reason to nitpick over when a DOA becomes non-DOA. 0 seconds? 1 second? 1 minute? 1 hour? During the first day? I seriously don't think the majority of handling damage would cause a drive to not spin up at all. And if it spins up, the time of death is impossible to pin down: the first click, odd noise, or first bad sector? The first non-passing SMART value, or the first time it disappears from the system? There is no way to define DOA in a way everyone accepts, so there's no reason to be anal about it.

The majority of handling damage doesn't result in a DOA. Around half of static electric shocks don't cause immediate death. And so on.

I don't think there's reason to nitpick over when a DOA becomes non-DOA. 0 seconds? 1 second? 1 minute? 1 hour? During the first day? [...]

That is how DOA is determined by manufacturers and that is what it is. If it works, it is not DOA. The rest are failed drives categorized as something else (head crash, spindle, etc.), not DOA. It does not matter how much time it takes a person to break it -- it is no longer DOA. That is how the term is used in the context of the drive failure statistics by MIT that I quoted above, which were taken from a drive manufacturer.

As I said, I would like to see someone who can confirm 2 drive failures within 3-6 months out of 5 drives burnt in for at least some period of time. I requested this; nobody confirmed it. The majority of people commenting probably haven't owned more than a few drives in their entire lifetimes, let alone burned some in and verified the results properly. I have. I am done here.


6_6_6, I would be interested to know how many of the 100 drives passed steps 1 and 2, but failed steps 3-8 before handling any real data. (When I put a disk into service, I only do steps 1 and 2.)

It has been widely reported that the failure rate of HDDs follows a "bath tub" curve. It is not fully known how much of the "near wall" of the bath tub curve (i.e., the relatively high failure rate in disk drives early on) is caused by shipping/handling damage.

That said, manufacturers/customers who stress-test their enterprise drives (and 6_6_6, who may stress-test desktop drives) can pass (and effectively bypass) the near wall of the curve during the stress test, so that the drive operates its entire useful life handling real data at the bottom of the bathtub curve (e.g., with a relatively low failure rate during its warranty period). And since 6_6_6 takes drives out of operation after 2 years, it is of no consequence that the "far wall" of the bathtub curve may be reached sooner after performing a stress test.


6_6_6: "That is how DOA is determined by manufacturers and that is what it is. It works, it is not DOA."

So, if it's powered on and 1 second later it starts grinding, clicking or spins down by itself, it's no longer DOA since it worked for 1 second?

6_6_6: "Rest is failed drive and categorized as something else (Head Crash, Spindle, etc), not DOA. It does not matter how much time it takes a person to break it -- it is no longer DOA."

So, DOA is defined by time of death, not by cause of death? And a DOA is never due to, for example, a damaged PCB, because DOA is a separate cause? Come on. DOAs can be categorized just like non-DOAs. Sure, there are some sorts of damage that can only occur during transit and installation, but non-DOA drives can also have been severely mistreated during transit, only to fail later (non-DOA). So why should DOAs not be analyzed for the cause of non-operation?

6_6_6: "As I said, I like to see someone who can confirm in 3-6 months 2 drive failures out of 5 drives burnt-in at least for some period of time."

It's difficult to find one, I admit that. But... it's equally difficult to find someone who can confirm 2 drive failures within 3-6 months out of 5 drives NOT burnt in for at least some period of time! It's equally difficult because a short burn-in has an extremely small effect on reliability between the 3rd and 6th month of operation. Burn-in accelerates wear, and since you only burn in for a few hours or maybe overnight, it corresponds to no more than perhaps a few days of regular duty: killing, in a matter of hours, drives that would otherwise have died during the first days. It takes a day or two off the drive's lifespan regardless of how much life it has left: it will kill a drive that would have died within a day in a matter of minutes, and it will kill a drive that would have lived 1000 days in 999 days. That's all burn-in is capable of, and that is enough. Burn-in is a procedure to verify that the drive is OK at the time of testing and to concentrate wear into the highly failure-prone time (just after installing it in a system), to avoid unnecessary user interaction troubleshooting an unstable system a day later.

Between 3 and 6 months we are at the bottom of the bathtub curve, where the likelihood of HDD death is at its lowest and the curve is flat: taking one day off (or adding one day to) the drive's life will neither increase nor reduce the likelihood of failure during this period. Burn-in only affects the early and late death likelihoods, and for late death it only moves things one day earlier, so it's not a big deal. Most drives would live 5+ years if they get past the early failure period, and by that time they'd be retired anyway. And yeah, I know: sometimes drives fail at, for example, the 2-year mark. Would it have made you happier if it had lasted one more day? (Notice that if there was a SMART warning, the warning would also have been delayed by one day, giving you exactly the same time to react!)

datestandi: "That said, manufacturers/customers who stress-test their enterprise drives (and 6_6_6 who may stress-test desktop drives) can pass (and effectively bypass) the near wall of the curve during the stress test, so that the drive operates its entire useful life handling real data at the bottom of the bath tub curve (e.g., with a relatively low failure rate during its warranty period. And since 6_6_6 takes drives out of operation after 2 years, it is of no consequence that the "far wall" of the bath tub curve may be reached sooner after performing a stress-test.)"

Well, that's the idea of burning in... even though 6_6_6 claims that running steps 3-8 pretty much guarantees an extremely low failure rate (the bottom of the bathtub curve), which I consider to be just sheer luck rather than truly testing hard enough to reach the flat bottom part of the curve. I don't think that surface scanning for a few days has a wear acceleration factor anywhere near high enough to amount to 1 to 3 months of wear. 1 week at most... the rest has been pure luck.

At most 1 week of the drive's lifespan doesn't matter much in its late life; it's obsolete by then. And if it dies in middle age, then it would have died at middle age plus 1 week anyway, giving the user just as much (or as little) time to react before it dies. Even though that means more than dying at 10 years vs. 10 years plus a week, 2 years vs. 2 years plus a week is still pretty much insignificant compared to dying on day 1 vs. day 7 after installing the OS and applications. That's a fair amount of hassle, as some people don't back up their OSes, only their data. (I make OS images to speed up "reinstallation" in case of hardware damage or, more commonly, software trouble, user error, too much bloat accumulated in the months after installation, etc.)

I also burn in desktop drives; I don't see anything ridiculous in that. I don't use elevated temperatures or anything like that: I just run a verify pass, the SMART self-diagnostic passes and the SMART offline data collection routine, transfer lots of data, and verify checksums. And when I'm done, I keep even the less worthy material backed up for a month or two, or at a minimum a few weeks. I do a verify scan or SMART scan every few days during the first weeks, and after that once a month, just to keep myself aware of any degradation, checking SMART values before and after each scan. Being extra careful with the data placed on it, I don't need to perform as harsh a burn-in, but I don't reduce the early death period that much either... well, that's what I do with data drives. If I burn in a drive that is to become an OS drive, I may have to do the burn-in before starting to use it, as catching a failing drive after OS installation doesn't help much with all the trouble of installing it already done. In that case I usually boot to another HDD and run write and verify passes on the new one for a few days while using my system normally from the old HDD. It doesn't require much of my intervention to run the tests in the background, nor much of the computer's resources.

I dunno. Maybe it sounds like too much trouble to some, but I actually think I'm minimizing my own need to hassle around with a potentially defunct HDD. Sure, early HDD deaths are a minority compared to working HDDs, but then not backing up is also easier than having a backup plan and acting on it. The potential gains of burn-in may be smaller than the gains of backup (just some saved time, nothing more), but the effort of making working backups is also greater. And anyway... I spend much more time writing replies on forums than performing burn-ins or backups, so I guess both are a really insignificant loss of time for me. And this thread really doesn't help me much, except that it makes me think about whether my plans for data-loss prevention are adequate or not.


With hard drives being as reliable as they are, why bother with something as useless as stress testing? It's like redlining a brand new car, running the risk of ruining the engine (prematurely or immediately) for no real gain.


Comparisons to car engines should be avoided, as car engines need breaking in for the piston rings to seal the cylinders properly. When the car is new, they leak: lack of power, overheating, loss of oil, etc. If the car is properly broken in, it starts operating properly once the piston rings have worn to match the cylinders. Further wear practically stops when this has happened.

If, on the other hand, a new engine is redlined, it may overheat and the piston rings may weld themselves to the cylinder walls, causing permanent damage. This can occur especially under high loads.

If a car engine is broken in too lightly, especially at low revs and never getting close to the redline, the piston rings don't seat and wear in properly; using only a narrow rpm range gives bad results. If a new engine is taken to high revs, it's better to do it under light load, mixing low and high revs and only for short periods at a time, to avoid overheating. A cold engine shouldn't be revved either, and neither should one that is running too hot (due to the increased friction during the break-in period). An engine that is driven too lightly during break-in, with excessive idling, will burn oil; the burnt oil sticks to the cylinder walls and smooths away their roughness, preventing the piston rings from wearing in. The piston rings will NEVER seal properly and the car will remain high in oil consumption until the engine is overhauled!

Proper break-in isn't as critical nowadays: piston ring materials are designed to wear in more quickly, and engines are most likely broken in at the factory. After all, oil consumption during break-in could destroy the catalytic converter.

____

Don't compare HDDs to car engines; they are way too different. HDDs never benefit from wear and tear, while car engines do. HDDs are burnt in to kill them (and to save the data before it's written to a HDD that's about to die), whereas cars are broken in to actually save them and improve their characteristics permanently.


Gee... I just wanted to say that burning in hard disk drives is stupidly useless. If your data is important, you've got RAID arrays with online hot spare drives and preferably at least 2 backup systems. If it's not about the data and you just want to minimize downtime, make images when needed and restore them when your disk fails. That would actually be useful.

