dafs

Member

  • Content Count: 83

Community Reputation

  • 0 Neutral

About dafs

  • Rank: Member
  1. "Thanks for the info. I didn't think they were running all that hot. Maybe 110F or so for the first 3 crashes. The 4th crash happened with a temp at least 15 degrees lower (there were fewer drives, 2 instead of 4, in the 4-bay area of the case, and airflow had been increased), so I didn't consider heat. Maybe the 4th was a fluke compared to the first three? I guess my idea of an outrageous temp may be skewed since I worked in a lab for a few years where we had a furnace regularly running at 2400C. It was very toasty, and very scary."

    110F is only 43 degrees C. That's not hot enough to cause 4 drive failures in such a short time. Is that the drive case temperature or the ambient air temperature? If it's the air temperature, the drives are likely 5C to 10C hotter internally. Any drive sandwiched between other drives, or between the case and another drive, could be quite a bit hotter still. You want to make sure the drive case temperature never goes above 50C, preferably cooler. Anything above 60C case temperature is near-certain death over a relatively short time period, especially if the high temperature is coupled with altitude, as in Denver or Mexico City. DAFS
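    For reference on the conversion above, a trivial Python check; the 50C and 60C thresholds are the rule-of-thumb figures from this post, not a spec:

        def f_to_c(f):
            # Fahrenheit to Celsius
            return (f - 32) * 5.0 / 9.0

        print(f_to_c(110))  # ~43.3 C: warm, but under the 50 C case-temp ceiling
        print(f_to_c(140))  # 60.0 C: the near-certain-death territory noted above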
  2. qubit, congratulations! You have implemented blind error correction. Some vendors implement this; most don't, because of the very long time the drive takes to correct a sector using this approach. dafs
  3. Truly fascinating. For the life of me I cannot figure out what is so important in any of the files you are trying to recover that you would want to spend even five minutes recovering the data. It must be the purely intellectual challenge of understanding how the drive operates.

    The specifics of IBM's ECC I don't know, but there is sometimes the possibility of using what is called blind ECC correction. Some hdd manufacturers use this, some don't. WD used it in a recent drive I examined. Maxtor has not been using it. I don't know if Seagate or IBM have used it.

    In blind ECC correction, the normal ECC correction has failed because more bits were damaged than the ECC was designed to correct. However, if the damaged bits are assumed to be contiguous, the drive can use that information to nearly double the number of bits that can be corrected. The drive assumes it knows which bits are in error, then solves the ECC equations to recover the data. It reserves a few of the ECC bits to check whether the solution is consistent with those remaining bits. The assumed-bad positions are moved through the data until a consistent solution is found. The WD drive I played with would take nearly 30 seconds to recover a sector purposely damaged (using write long) beyond its normal ECC recovery range. Most drives don't use blind ECC because it takes too long. www.eccpage.com
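    To make the idea concrete, here is a minimal sketch of blind correction using Reed-Solomon erasures. It assumes the third-party Python package reedsolo (pip install reedsolo); the window-sliding search is my illustration of the technique, not any vendor's firmware, and the code sizes are invented:

        from reedsolo import RSCodec, ReedSolomonError

        NSYM = 8        # 8 parity bytes: corrects 4 unknown errors or 8 erasures
        rs = RSCodec(NSYM)

        def blind_correct(codeword, burst_len):
            # Try every position for a contiguous run of assumed-bad bytes.
            # Telling the decoder WHERE the damage is (erasures) nearly
            # doubles its reach versus unknown-position errors.
            for start in range(len(codeword) - burst_len + 1):
                erasures = list(range(start, start + burst_len))
                try:
                    # recent reedsolo versions return (message, codeword, errata positions)
                    msg, _, _ = rs.decode(bytearray(codeword), erase_pos=erasures)
                    return bytes(msg)        # consistent solution found
                except ReedSolomonError:
                    continue                 # inconsistent; slide the window
            return None                      # unrecoverable

        original = b"sector payload from the drive"
        encoded = bytearray(rs.encode(original))
        for i in range(5, 11):               # damage a 6-byte burst: beyond the
            encoded[i] ^= 0xFF               # normal 4-error limit, within 8 erasures
        assert blind_correct(encoded, 6) == original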
  4. Not in today's drives. Tracks are so narrow that track switch is faster than head switch, a consequence of the misalignment between heads in the stack being larger than the track spacing. It is common today to use a serpentine layout for the LBAs (see the sketch below). You're correct on this: track density and linear bit density are both increasing.
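    A toy model of serpentine mapping (all geometry constants are invented for illustration): consecutive LBAs fill a band of tracks on one surface before a head switch, so sequential transfers see many cheap track switches and only rare head switches.

        SECTORS_PER_TRACK = 1000
        TRACKS_PER_BAND = 100      # tracks filled on one head before switching
        HEADS = 4

        def lba_to_geometry(lba):
            track, sector = divmod(lba, SECTORS_PER_TRACK)
            band, track_in_band = divmod(track, TRACKS_PER_BAND)
            if band % 2 == 1:      # sweep back on alternate bands (the "serpentine")
                track_in_band = TRACKS_PER_BAND - 1 - track_in_band
            head = band % HEADS
            cylinder = (band // HEADS) * TRACKS_PER_BAND + track_in_band
            return cylinder, head, sector

        # Sequential LBAs switch tracks every 1,000 sectors but switch heads
        # only every 100,000 sectors, so track-switch time dominates.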
  5. dafs

    Apple mini, what HDD inside?

    A report on MacCentral, based on discussions with Apple engineers, states that the drive is a 2.5", 4200 rpm drive. I read someplace that Seagate has stated they are supplying drives for both the Mac mini and the iPod mini; I have not been able to find where I read that, however. If Seagate is supplying the drives, they would be the following models: for the $499 Mac mini, a Seagate Momentus 42, Model ST4019A; for the $599 Mac mini, a Seagate Momentus 4200.2 80 GB, Model ST9808210A.
  6. dafs

    Maxtor DiamondMax 10

    100 GB/disk, 3 disks. dafs
  7. dafs

    Changed 7K250 for DiamondMax 9

    Disk drives have not needed thermal recalibration for many, many years. All drives for at least the past 7 years, probably longer, have used embedded servo. That is, each track on each disk surface is written with dedicated servo data used by the head to position itself. The servo data is approximately a 10% overhead on top of the drive capacity, meaning an 80GB drive has nearly 8GB of additional servo data written on the disks. A 200 GB drive has nearly 20GB of servo data, and the future 1TB drives will have an overhead of 100GB of servo information.

    A modern drive will "recal" only when servo tracking is completely lost. The drive then goes into a procedure to reacquire servo. Trying to read data from a severely damaged region of a disk will often cause this "servo recal" procedure to happen. It is often heard as a clink or clunk as the head is brought to the extreme OD or ID and hits a crash stop. This will repeat if the same bad region of the disk is attempted again. What Hitachi/IBM is doing every 15 min is an open question, but it is probably related to self-diagnostic code, and the drive is doing something Hitachi believes will improve the long-term reliability of the stored data. DAFS
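    The 10% figure works out as below (straightforward arithmetic; 10% is this post's approximation, and real overheads vary by product):

        SERVO_FRACTION = 0.10    # approximate servo overhead from the post
        for user_gb in (80, 200, 1000):
            print(f"{user_gb} GB of user data carries ~{user_gb * SERVO_FRACTION:.0f} GB of servo data")
        # 80 -> ~8 GB, 200 -> ~20 GB, 1000 (1 TB) -> ~100 GB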
  8. Three comments: 1. Go for it! You'll learn a lot. 2. It is going to cost a lot more than $150 for the custom machining. CNC mill programming and time is not cheap. One-offs are always expensive. 3. The c-boc case designs look neat. Too bad they are not actually for sale, but are just some designer's wet dream. dafs
  9. dafs

    SR's D2OL Metacomputing Team

    Mars, I wonder what the determining factor is. My 2.4 GHz is a new machine at work; I am not familiar yet with the specifics of the system (motherboard, chipset, etc.). I know the system uses an older 20 GB/platter HD, while on my Athlon system at home I use a DiamondMax Plus 9. That should not make any difference, though. Another difference is that the P4 system is using Sun's Java while the Athlon is using Win2K Pro with Microsoft's Java. The Java is only interface code and, again, should not be affecting things. Both systems have 512MB memory. Clocker, your 2 nodes seem very fast; could you post the system details? DAFS
  10. dafs

    SR's D2OL Metacomputing Team

    Well, we are currently at 104 in the team standings with only five days of computing. I wish they had a means to spec the hardware for each node; speed comparisons would then be meaningful. I have four machines running: a 300 MHz dual Xeon Dell box, a 600 MHz P3 white box, a 2.4 GHz P4 white box, and my home machine, a 1.5 GHz Athlon XP. I am impressed with the Athlon, which is besting the significantly faster-clocked P4. Makes me want to go out and build an AMD 64-bit cruncher and add it to the mix. DAFS
  11. dafs

    40GB HDD is having a reading failure

    There was another thread that talked about dyidatarecovery.com. They have a program to fix partition tables and such. The cost is only $40. They have lots of people singing the praises of their program. DAFS
  12. Interesting stuff. This was talked about on Slashdot earlier today. 64 MB is the minimum file allocation "chunk", and most files use many chunks. It makes you wonder what Google is doing, as this seems an odd organization for the type of data one imagines they process. The paper says that most of the applications accessing this data store serially process most or all of a file, and so the file system is designed to optimize that form of access at the expense of the much less common random accesses to partial files. If you read the paper, you get the sense that Google's needs for this file system are very different from those of your typical, and even atypical, user, even for a large corporation. It is also apparent that this is not the data store behind the web site that all of us use each and every day, but is designed for some other task (indexing?) going on behind the curtain. A peek at the wizard is interesting nonetheless. DAFS
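    A rough sketch of the chunk addressing described in the paper (my simplification, not Google's code): a file is a sequence of 64 MB chunks, and a client turns a byte offset into a (chunk index, offset-within-chunk) pair before asking the master where that chunk lives.

        CHUNK_SIZE = 64 * 1024 * 1024    # the 64 MB allocation unit from the paper

        def locate(offset):
            # Map a file byte offset to (chunk_index, offset_within_chunk)
            return divmod(offset, CHUNK_SIZE)

        print(locate(0))             # (0, 0): first chunk
        print(locate(200_000_000))   # (2, 65782272): third chunk
        # A sequential reader walks chunks in order, which is the access
        # pattern the file system is designed around.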
  13. dafs

    PowerPoint

    Yeah, this is kind of basic PowerPoint. View Menu: Master: Slide Master will display the current master slide; edit this to get the look you want. Also see Format Menu: Apply Design Template, Format Menu: Color Scheme, and Format Menu: Background. These are all important commands for doing what you want. DAFS
  14. You're going to swap the head stack from the other drive into the dead drive? I'll say a prayer for you. DAFS
  15. dafs

    This is interesting....

    From the website: Frequently Asked Questions, MaxBoost Software Beta Test

    How does MaxBoost work? MaxBoost is a performance filter driver for Windows specifically designed to work on Maxtor drives. MaxBoost intelligently monitors and caches I/O requests to the drive in order to improve performance.

    Doesn't Windows have its own caching scheme built in? Yes, Windows has its own caching scheme residing at the file system level; however, MaxBoost provides performance improvements above and beyond the Windows caching scheme.

    How is this different from the cache built into the drive? MaxBoost provides a larger cache and an advanced caching algorithm that complements the cache built into the drive.

    What kind of performance increase can I expect? MaxBoost provides performance gains ranging from 2-50%. MaxBoost can benefit performance in a wide range of applications, from file copying to business applications to multimedia playback. One way to measure the performance improvement provided by MaxBoost is to run the Business and High-End Disk benchmarks in the VeriTest WinBench 99 test suite. WinBench utilizes the access patterns of applications such as Microsoft Office, Adobe Photoshop, and Adobe Premiere to provide an accurate, real-world measure of disk performance. WinBench 99 is available for download from www.etestinglabs.com. Individual system results will vary depending upon system configuration, processor speed, available memory, drive speed, applications, and benchmarks.

    MaxBoost doesn't work on my computer even though it has the required system configuration. Who should I contact for help? Because MaxBoost is currently available as an unsupported beta, Maxtor technical support representatives, the Maxtor.com website, and the Maxtor Knowledge Base will be unable to assist you with this product. If you are experiencing any issues with MaxBoost, please send a detailed description of the issue to maxboost@maxboostbeta.com. We will use your feedback to improve future versions of MaxBoost.

    Why won't MaxBoost work on my older computer with less RAM and a slower processor? MaxBoost requires a minimum set of memory and processor resources to effectively boost disk performance. The minimum specifications necessary to ensure a consistent performance boost are a 700 MHz processor and 256MB of RAM.

    Will this work on my Seagate/WD/IBM/Hitachi/Fujitsu/Samsung hard drive? No, MaxBoost will only allow you to enable its caching scheme on Maxtor and Quantum branded hard drives; however, it can coexist peacefully with other drives. MaxBoost has not been tested with competitors' drives.

    Is MaxBoost WHQL certified? No, filter drivers are not required to be WHQL certified by Microsoft at this time.

    Can MaxBoost be used with the Intel Application Accelerator or VIA Hyperion drivers? Yes, but you should install MaxBoost after you install either of these drivers.

    Is MaxBoost compatible with Hyper-Threading on Intel processors? MaxBoost is not currently compatible with the Hyper-Threading functionality of select Intel processors. To use MaxBoost on a system with Hyper-Threading, you must disable Hyper-Threading in the system BIOS. See your system documentation for details.
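    For a sense of what a "performance filter driver" cache does, here is a minimal user-space sketch of an LRU block cache in Python. The real MaxBoost is a Windows kernel-mode filter driver and its algorithm is not public; everything below is an illustrative toy under those assumptions:

        from collections import OrderedDict

        class BlockCache:
            # Toy LRU read cache between an application and a block device.
            def __init__(self, backing, capacity_blocks):
                self.backing = backing          # any object with read_block(n) -> bytes
                self.capacity = capacity_blocks
                self.cache = OrderedDict()      # block number -> data, in LRU order

            def read_block(self, n):
                if n in self.cache:
                    self.cache.move_to_end(n)   # hit: serve from RAM, no disk access
                    return self.cache[n]
                data = self.backing.read_block(n)   # miss: go to the drive
                self.cache[n] = data
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)  # evict least recently used block
                return data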