bfg9000

Member
  • Content Count: 1299
  • Community Reputation: 0 Neutral

About bfg9000
  • Rank: Member
  1. bfg9000

    Is there a new Raptor coming soon?

    Hi everyone, haven't been here in a long time! First reviews are here:
    http://www.storagereview.com/WD3000BLFS.sr
    http://www.tomshardware.com/reviews/hard-disk-sata,1914.html
    10K RPM, 300 GB, 2.5" in a 3.5" sled, arrives in May.
  2. Absolutely, the zones are mapped to physical locations, since the firmware does not use the imaginary parameters. Why should it, when it knows where everything physically is? The imaginary parameters are only the fiction fed to the OS as representing the physical arrangement of the drive. That is, the presented drive arrangement may be emulated in software, whether by drive overlay or firmware, and each sawtooth represents a change in physical location (a different zone) that does not necessarily correspond to changes on the logical side, where everything appears orderly and sequential. I am not saying that all drives do this, only that it is perfectly possible to program one to do so, since neither the partition manager nor the user can actually know what is going on at the hardware level.

     Seems to me that only someone with special equipment at a data recovery company could know where things are stored physically, so I'm unsure how you two have "seen" that partitions are physically contiguous at all. The burden of proof is not on me for pointing out that a partition may be moved logically but not physically (only remapped on the fly, which is a "logical" explanation), but on Olaf for declaring that partitions must always be physically contiguous as a matter of faith.

     The firmware handles bits and need not know about partitions any more than a chess program needs to know the name of the game it plays in order to play it. It will do whatever it is programmed to do. The question is: do you trust the human programmers and their motives? Maybe I'm cynical, but I've had to RMA a number of drives recently when performance dropped to below 5 MB/s from all the bad sectors, yet the company's drive utility and SMART reported drive health as A-OK. Clearly the motive was to reduce RMAs from drives that are still "working." Nor is it in a third-party repartitioning software company's own interest to tell you that use of their product may reduce performance.
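
     A toy sketch of that idea (purely illustrative; the ToyDrive class and spare-area numbers are made up, not any vendor's firmware): the host only ever deals in logical block addresses, while a private translation table decides where each block physically lands, including relocations the OS never hears about.

```python
# Toy model of firmware-side LBA remapping (illustrative only; real firmware
# uses zone tables and defect lists, not a Python dict).

SPARE_AREA_START = 1_000_000          # hypothetical pool of spare sectors

class ToyDrive:
    def __init__(self):
        self.remap = {}               # logical block -> relocated physical block
        self.next_spare = SPARE_AREA_START

    def mark_bad(self, lba):
        """Grown defect: silently relocate this logical block to a spare."""
        self.remap[lba] = self.next_spare
        self.next_spare += 1

    def physical(self, lba):
        """What the host asks for vs. where the head actually goes."""
        return self.remap.get(lba, lba)

d = ToyDrive()
d.mark_bad(12345)
print(d.physical(12344))   # 12344  -- still in place
print(d.physical(12345))   # 1000000 -- relocated; the OS never sees this
```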
  3. Adaptive zoning, adaptive zone layout, and variable zone layout all refer to variations on a theme, and serve to illustrate how the firmware was placed in charge of physical locality even before we went to imaginary drive parameters. This is from 1994, when embedded servo drives first came out:

     Back in the age of dinosaurs, when logical sectors mapped directly to physical sectors, LLF was under user control and we could even DEBUG the BIOS of the disk controller card to select an interleave ratio, then FDISK it up to the whopping 32MB maximum partition size before high-level format. Picking an interleave too high resulted in less performance than was possible, while going too low caused slipped revolutions that would really kill performance on a 3600 rpm drive. Fortunately Spinrite allowed testing various interleave ratios quickly, which was good because dropping the interleave ratio could provide an enormous speed boost (rough numbers in the sketch below).

     Of course by about 1990 all the best drives had 1:1 interleave, and afterwards the introduction of the integrated drive controller eliminated the ability for user LLF or adjustment of skew or other such low-level functions entirely, hiding them away behind a firmware layer (unless the drive manufacturer specifically provides some utility to adjust e.g. AAM or Server vs. Desktop modes). This effectively hides from the OS any information about physical location, so there is basically no way for the OS to know where things physically go. Nor should it care, because the firmware allows the drive to function as if the logical arrangement were the actual one, so the imaginary drive parameters provided by the firmware may simply be taken at face value. The orderly geometry information seen by partition managers is merely an illusion.
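
     Rough numbers for why interleave mattered so much, assuming the typical MFM-era figures of 17 sectors per track at 3600 rpm (ballpark arithmetic only):

```python
# Rough arithmetic for interleave on an MFM-era drive: 17 sectors/track, 3600 rpm.
# With interleave 1:N, consecutive logical sectors sit N physical sectors apart,
# so reading a full track takes roughly N revolutions (a slipped rev is worse).

sectors_per_track = 17
bytes_per_sector = 512
rpm = 3600
rev_time = 60.0 / rpm                     # seconds per revolution

for interleave in (1, 3, 6):
    track_time = interleave * rev_time    # approx. time to read one full track
    rate = sectors_per_track * bytes_per_sector / track_time
    print(f"1:{interleave} interleave ~ {rate / 1024:.0f} KB/s")
# 1:1 ~ 510 KB/s, 1:3 ~ 170 KB/s, 1:6 ~ 85 KB/s (ballpark figures)
```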
  4. Hmm, all I was trying to point out was that the sawtoothed pattern can only be explained by a clearly non-sequential physical arrangement; I said nothing at all about this arrangement being older or newer. Interesting that you could infer that somehow? The title is "Is a Partition a Physically Contiguous Space?" My definition of physical here is some place you can point to on the surface (because the OP was concerned about fragmentation causing a performance hit from the head physically seeking, which is why I've been talking about platters). I suspect this may be the hangup, because even if a partition cannot be logically fragmented by definition, I've given examples of how it may physically reside in different places (from remapping, etc., that no partition manager can even be aware exists). The partition manager works at a higher software layer. Perhaps "physical" may then also refer to the physics of all those electrons carrying out software commands?

     As you said, "there's a difference between physical and logical," but logical has not mapped to physical since "cylinders" became arbitrary numbers that no longer correlate with actual drive geometry (unlike the old days when you could really low-level format a drive). By that viewpoint, partitions should not even physically exist at all anymore, because they are now based on imaginary parameters. It's not that the firmware is involved with creating the partitions, but that the firmware is what lies about (misrepresents) this geometry information to the partition manager. The partition manager cannot know where the parts of the partition physically are, because the firmware handles all this behind the curtain, and it all depends on how it's programmed. You could partition and defragment a flash drive too, but that sure won't guarantee data occupies sequential memory cells, and the partition manager will be none the wiser.

     I stand by my recommendation not to resize or tamper with partitions that contain data, because a partition that takes hours to defragment may be "moved" and reordered in seconds. This alone strongly suggests to me that no physical moving is going on, only virtual (hey, maybe that's what "another very logical explanation" means).
  5. Mind if I ask who said it was the other way around? My point was only that a partition may be physically scattered across different platters and even drives, but the OS still sees it as a "virtually" contiguous logical partition (there's that word again). It's the drive firmware that determines where things actually go when creating a partition, and it does not even report that information to the OS (the C, H, and S values reported to the kernel to calculate geometry are themselves convenient artificial fictions on large disks). And that would be true no matter which location-assigning scheme came first.

     Some may consider firmware to be part of the hardware, but it's really the lowest software layer, even if much of it is bypassed after the OS loads. It is the job of a software layer to hide the true nature of the hardware level from the user and other software levels (including partition managers), which allows e.g. hardware changes without breaking OS compatibility. That's why BIOS overlays work: they are supposed to lie to both lower and higher software levels...
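
     For what it's worth, the geometry handed back to the kernel is just arithmetic over made-up numbers; a quick sketch of the classic translation, assuming the usual 255-head, 63-sector fiction:

```python
# Classic CHS -> LBA arithmetic using the usual fake geometry (255 heads,
# 63 sectors per track). None of these numbers correspond to real platters or
# heads on a modern drive; they exist only so old software has something to chew on.

HEADS = 255
SECTORS = 63   # sectors are 1-based in CHS

def chs_to_lba(c, h, s):
    return (c * HEADS + h) * SECTORS + (s - 1)

print(chs_to_lba(0, 0, 1))        # 0        -- first sector of the disk
print(chs_to_lba(0, 1, 1))        # 63       -- "next head" is pure bookkeeping
print(chs_to_lba(1023, 254, 63))  # 16450559 -- the old ~8 GB CHS ceiling
```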
  6. Yes, it does. And there's a difference between physical and logical space. The HDD maps logical to physical, and although it's kinda continuous, it's not exactly how Spod explained it. The question at hand (and the thread title) is "Is a Partition a Physically Contiguous Space?" and the answer is simply no, it doesn't have to be.

     Sure, it can be considered logically "contiguous" by definition (and seen that way by the OS, just like two obviously separate drives in RAID 0 may be seen as one contiguous partition), but it does not physically need to be. How else can you explain a new partition made from two noncontiguous ones that were deleted? Or the sawtoothed STR pattern that results from multi-platter drives that don't use up one platter before going to the next? How about the remapping that goes on to hide bad sectors (unlike the wear-leveling algorithm on a flash drive, it's so much slower that you can see a big dip in STR when it seeks over to the spare sectors), which relocates blocks from the same partition all over the place? Seems to me the firmware of a drive is what determines where things physically go and keeps track of where everything is. If there is "probably another very logical explanation for that," I'd sure like to hear it.
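
     A back-of-the-envelope illustration of where that sawtooth comes from (the zone table here is invented, but the shape is the point): sequential transfer rate is just sectors per track times rotation rate, and sectors per track drops zone by zone toward the inner diameter.

```python
# Why zoned recording produces a stepped/sawtoothed STR curve (illustrative zone
# table, not any real drive's): STR = sectors_per_track * bytes_per_sector * rps.

rpm = 7200
rps = rpm / 60.0
bytes_per_sector = 512

# (zone name, sectors per track) -- outer zones pack more sectors than inner ones
zones = [("outer", 1400), ("middle", 1100), ("inner", 800)]

for name, spt in zones:
    mb_s = spt * bytes_per_sector * rps / 1e6
    print(f"{name:6s} zone: ~{mb_s:.0f} MB/s")
# Sequential reads step down from ~86 to ~49 MB/s as the heads move inward; on a
# multi-platter drive the ordering of surfaces is what makes the curve sawtoothed.
```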
  7. bfg9000

    IDE "hotpluging"

    SATA drives are hotpluggable because the connectors connect the grounds first and disconnect them last. Cutting either +5 V or +12 V will usually shut down a drive enough that it won't spin up, isn't seen by the BIOS, and won't hang things at the BIOS screen. There shouldn't be any issues if coldplugging. In fact I used to do this all the time with the "turbo" button on old cases, by interrupting either the yellow or red wire from the drive's Molex connector with the switch and booting to different OSes manually without using some dumb bootloader. Haven't tried this on a modern drive yet, however...
  8. It doesn't have to be a contiguous space. Imagine you have 4 partitions, then use Disk Mangler to blow away 2 and 4 and create a new single partition from all the free space. Those third-party partition programs also seem to work far too quickly to actually be moving anything around, even for super-fragmented partitions. I'd suggest creating partitions once and then not messing with them again by resizing or deleting; only formatting or ghosting.
  9. I pointed out way back here that you only need some files from XP Embedded to stop this silly Windows behavior once and for all. BartPE for USB flash devices works similarly, but of course the Preinstallation Environment OS times out. Embedded doesn't.
  10. bfg9000

    Concerns about drive geometry

    Controller manufacturers do occasionally change geometry translation, usually to add significant new features like LBA48, etc. The recommendation is always to create new partitions and reformat when such changes occur. If it was an official newer BIOS then you could probably expect that all newer versions of the card/board will use the newer translation. This can be important if e.g. your current controller dies and you want to be able to simply migrate the drives to a replacement controller. But since it was never officially released (yet) for your onboard controller, any replacement motherboard you procure would come with the old BIOS on it anyway. I am of the opinion that if it's not broke, don't fix it. Let others work out the bugs first as early adopters, unless of course you actually need the specific new functionality offered by the newer BIOS.
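
     For a sense of scale of why a translation change like LBA48 matters, simple arithmetic with 512-byte sectors assumed:

```python
# Addressable capacity before and after LBA48, assuming 512-byte sectors.
bytes_per_sector = 512

lba28 = 2**28 * bytes_per_sector   # 28-bit LBA: the old ~137 GB barrier
lba48 = 2**48 * bytes_per_sector   # 48-bit LBA

print(f"LBA28 limit: {lba28 / 1e9:.1f} GB ({lba28 / 2**30:.0f} GiB)")
print(f"LBA48 limit: ~{lba48 / 1e15:.0f} PB")
# LBA28 limit: 137.4 GB (128 GiB); LBA48 limit: ~144 PB
```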
  11. bfg9000

    To wipe old tapes

    I'd suggest a 110 V cassette tape head degaussing wand. They were quite popular back before CDs, so you should be able to pick up a used one on eBay or somewhere for cheap. While you're looking, they also sold data/VHS/cassette bulk tape erasers (they look like a plastic box with a handle and an AC cord), which are more convenient if you've got a lot of tapes.
  12. bfg9000

    Question about 'wear and tear'

    As I mentioned, start/stop cycles (the number of normal spindle start/stop cycles, usually about 50,000) is a completely different rating from power-off retract count (the number of emergency retract cycles). The problem is that most drive manufacturers do not actually specify what that rating is, or, for that matter, the design number of power-on head parks while the disk continues to spin (which may be in the millions). This makes it impossible to know how bad it really is, or to make any rational decision about whether to use power management for best reliability.

    Hitachi rates some of their drives at 20,000 power-off retracts, and has this to say about it: Sounds like they are using the spindle motor as a generator and shorting the current across the voice coil to force a quick park. With most external enclosures, every power-off is this way.

    I suggest not worrying about wear and tear, but just treating these devices as consumables and expecting they may fail at any time. Especially because the OP seems to be wondering how much these things may be trusted, and my answer is: not at all, whether internal or external.
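
    If you want to see how your own drive is tallying these, smartmontools will show the raw counters; a rough sketch, assuming a Linux box with smartctl installed and /dev/sda as an example device (attribute naming varies a bit by vendor):

```python
# Quick look at the relevant SMART counters (attributes 4, 192 and 193 are the
# usual suspects). Needs smartmontools and typically root; device node is an example.
import subprocess

out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    if any(name in line for name in
           ("Start_Stop_Count", "Power-Off_Retract_Count", "Load_Cycle_Count")):
        print(line)
```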
  13. bfg9000

    Question about 'wear and tear'

    I think that's kind of like assuming the differential in your car never wears because it has a design life of over a million miles (by which time something serious in the rest of the car would have failed anyway). The wear may indeed be virtually negligible, but usually the failure of a related component like a seal causes catastrophic failure long before the design life is reached. That's why there is a market for used differentials in junkyards, though of course FDB spindles are non-serviceable (and any wobble may have caused irreparable damage to the platters by then anyway). Certainly we've seen autopsy pictures of seized FDBs here before, but it's probably pretty rare. Another question is whether heat-cycling the drive causes more "wear" to solid-state components (as in cold solder joints) than continuous operation wears seals or outgasses lubricants in the spindle and voice-coil motors or on the platters themselves (moving parts fail, wear-leveling algorithms do exactly that). While any modern drive has a good chance of becoming obsolete before dying, it is foolish to think of any drive, internal or external, as reliable enough to need no backup of "GBs of data."
  14. bfg9000

    Question about 'wear and tear'

    Probably the answer is "it depends" on the drive and controller. Most USB enclosures never spin down their drives, so the drives may wear their bearings faster, and spinning all the time potentially increases exposure to damaging impacts. Most USB enclosures are also unable to send a shutdown command to the drive. Some drives are rated for a certain number of stop/start cycles if sent proper commands to spin down or shut off, and a completely different number of stop cycles if the power is just yanked. If this second number is considerably less for your drive, then repeatedly shutting it off via the emergency shutdown mechanism may wear the disk out faster. Of course if you have disabled power management for the internal disks and never turn the computer off, then there should be no difference.
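
    A rough sketch of what "proper commands" look like on a Linux box with hdparm, assuming the enclosure's USB bridge even passes them through (the device node is just an example, and these need root):

```python
# Hedged sketch: asking a drive to spin down cleanly rather than having the
# power yanked. -S sets the drive's own idle spindown timeout (units of 5 s),
# -y issues an immediate standby (heads parked, spindle stopped).
import subprocess

dev = "/dev/sdb"                                # example device node
subprocess.run(["hdparm", "-S", "120", dev])    # spin down after 10 min idle
subprocess.run(["hdparm", "-y", dev])           # park and spin down right now
```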
  15. bfg9000

    The capacitor

    True enough for a flux capacitor, at least. "Changes with time" is correct. The textbook reactance Xc = 1/(2πfC) does fall as frequency rises, but within a single AC cycle the instantaneous behavior follows i = C·dv/dt: the capacitor passes the most current where the voltage is changing quickly between the peaks (near the zero crossings), and essentially none right at the peaks and troughs, where dv/dt is zero.
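
    A quick numeric check of that picture, assuming an arbitrary 10 µF capacitor on 60 Hz mains:

```python
# i = C * dv/dt for a 60 Hz sine: current through the capacitor is largest at
# the zero crossings (voltage changing fastest) and zero at the peaks/troughs,
# while the classical reactance Xc = 1/(2*pi*f*C) still falls with frequency.
import math

C = 10e-6            # 10 uF, example value
f = 60.0             # Hz
V = 170.0            # peak volts (~120 V RMS)
w = 2 * math.pi * f

print(f"Xc at 60 Hz:  {1 / (w * C):.0f} ohms")
print(f"Xc at 120 Hz: {1 / (2 * w * C):.0f} ohms")

for phase_deg in (0, 45, 90):                   # 0 = zero crossing, 90 = peak
    t = math.radians(phase_deg) / w
    i = C * V * w * math.cos(w * t)             # derivative of V*sin(wt)
    print(f"at {phase_deg:2d} deg: i = {i * 1000:6.1f} mA")
```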