Brian Tao

  1. I tried both sets (orange vs purple sockets on the motherboard). No difference. Thanks, I'll have a look at that when I can risk some downtime in case it borks my system. I did try the F6 update a long time ago, but then my eSATA adapter was no longer recognized in the BIOS, and something went wrong rolling back to F5. Luckily, the "Dual BIOS" feature of this motherboard actually works as advertised, and I was able to get the thing up and running again. Now here's the strange part: after a day of successful usage in the laptop, I decided to plug the SSD back into the Thermaltake dock... and now it appears to work fine! I've processed a couple thousand images from two weddings on it today (I'm a photographer), no hiccups. I'll have to try switching the SSD back to one of the on-board SATA ports to see if the problem comes back. This is a bit disconcerting, knowing that the drive might just stop working at any time...
  2. [I posted this to Intel's online community forum too, but I figured I'd try the real experts here as well.] I just purchased an 80GB X25-M (1st generation, SSDSA2MH080G1GC) and would like to use it in my desktop. Here is some relevant information:
    Motherboard: Gigabyte P35-DQ6, BIOS version F5
    Processor: Intel Q6600 @ 2.4 GHz
    OS: Vista SP1 Business Edition, 64-bit
    On-board SATA settings: AHCI mode, legacy IDE mode
    On-board SATA BIOS: Serial ATA AHCI BIOS, version iSrc 1.07 08042006, © 2003-2006 Intel Corporation
    Slot-based SATA: Silicon Image SiI-3124, BIOS version 6.4.09
    SSD firmware: 8820
    I am connecting it to my desktop both via one of the on-board SATA ports (previously occupied by one of my WD 640GB Caviar drives) and via my Thermaltake BlacX eSATA dock (which is plugged into the SiI-3124 4-port eSATA adapter). At first, I used the eSATA dock and everything appeared to work reasonably well. The SSD immediately showed up in the Disk Management snap-in, and I was able to initialize and partition it. I ran HD Tune Pro 3.50 to see what numbers I could get. I was only seeing about 110 MB/s sequential reads, but I blame that on the old 32-bit, 33-MHz PCI eSATA adapter.
    I then decided to move the SSD over to one of the internal SATA ports, figuring I could get better speeds if I bypassed the legacy PCI bottleneck. That's when the problems began. At first, the drive showed up just fine after a reboot, and I started to re-run the tests. However, part way through a benchmark, HD Tune would freeze. The other running apps seemed to be fine. Eventually, HD Tune would unfreeze and report that the benchmark could not be completed. I could see in Vista's Event Viewer that the SSD had disappeared from the system, and Windows Explorer confirmed this. I disconnected the SATA cable from the drive (leaving it powered on) and reconnected it. Still no drive. I disconnected and reconnected both the power and data cables, and the drive re-appeared.
    I tried the benchmark again, and again HD Tune froze part way through. I disconnected the drive from the internal SATA port and placed it back in the eSATA dock, where it had worked previously. This time, the volume never appeared. When I tried to bring up Disk Management, it froze trying to connect to the Virtual Disk Service. Explorer also hung when attempting to bring up a list of mounted volumes, as did diskpart and mountvol. When I powered down the dock and disconnected the drive, everything woke up and continued as normal. Event Viewer shows multiple warnings and errors: "The device, \Device\Harddisk8\DR12, is not ready for access yet", "The driver detected a controller error on \Device\Ide\IdePort6", "An error was detected on device \Device\Harddisk8\DR12 during a paging operation", and finally "The device 'INTEL SSDSA2MH080G1GC ATA Device' (IDE\DiskINTEL_SSDSA2MH080G1GC___________________045C8820\5&2d74fded&0&4.0.0) disappeared from the system without first being prepared for removal."
    So... what would cause the drive to function fine at first, and then cause these problems later? I was certain this was caused by a defect in the SSD itself, but it works perfectly fine in my Thinkpad T60p laptop, both in the internal SATA drive bay and in the Ultrabay. I've installed Windows 7 RC on it to do some testing, run several benchmarks, and installed and run my usual suite of apps (Firefox, OpenOffice, cygwin, Photoshop, etc.). Disk Management identifies, partitions, and formats the SSD with no issue whatsoever. Here are some screenshots from my desktop, in case there is information I missed:
  3. Ah right, I was just reading about "head switch" delay... I did not realize it was fairly significant compared to the physical repositioning of the arm. I didn't know this either... it seemed logical to keep the linear density the same, and I couldn't think of a good reason to alter track density either. But I suppose if the outer rim of a platter is subject to more flutter and hence a greater error rate, it may be beneficial to slightly lower the areal density in that region? Anyway, I read through , which goes through the same exercise I'm doing, so I'm not going to continue any further with mine.
  4. Hrm, I think I need to take into account servo sectors (are those still used in today's drives?) as well as track-to-track latency when dealing with STRs of more than 2442K at a time...
  5. I'm preparing a tutorial that discusses how HDDs perform, how they have different characteristics from SSDs, etc. I'm using an example to describe the geometry of a typical hard drive, but in doing my calculations, I've come across a discrepancy. Here was my thought process: Assume a 3.5" drive has a platter that actually measures 3.5" across. The minimum STR of a drive is typically around half of the maximum STR. That implies the radius of the innermost track is half that of the outermost track. Thus the diameter of the innermost track would be 1.75" (half of 3.5"). Doing a bit of quick math tells us that there is approximately 7.216 sq in of recording area per side (a 1.75"-radius circle, minus a 0.875"-radius circle). With two sides to a platter, that's 14.432 sq in. A 1TB drive with 3 platters (like the Samsung Spinpoint F1) has six recording surfaces, or about 167 GB/surface (334 GB/platter). 167 gigabytes is 1333 gigabits (assuming 8 bits = 1 byte in hard drive speak). The areal density is therefore 92.39 Gbit/sq in. That sounds rather low to me, but I'll keep going.
    Linear density is the square root of areal density. A track on the platter would have 303953 bits per inch, or about 37 kilobytes per inch. The outermost track (with a radius of 1.75") is 11" long, so it contains 407 kilobytes of data. The outermost cylinder (with 6 tracks) therefore has 2442 kilobytes of data. A spindle speed of 7200 rpm is 120 revolutions per second. 2442K of data will pass under the read heads in 1/120 s, or 286.17 MB/s. So why do we only see about 100 MB/s of actual transfer rate? I realize there is sector overhead, but we're talking about roughly one-third of the theoretical transfer rate, by my calculations.
    My first assumption is that I've made a bad assumption somewhere in the calculations, or a simple arithmetic mistake. For instance, I'm assuming that a drive with multiple read heads will read from them in parallel. I don't know if that is true. On the one hand, it would seem to be the obvious thing to do. On the other hand, the Samsung 1TB drive is prized because it only has 3 platters. Yes, the areal density is higher, but why not keep the same areal density and stick in 5 platters? You could be reading 10 tracks at a time instead of only 6, it would seem. I'm missing something here!
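The arithmetic above can be sketched as a quick script. This reproduces the post's numbers rather than correcting them, but one comment flags a candidate slip: the areal-density step divides per-surface bits by the two-sided platter area, which may be why the figure looks low.

```python
import math

# Back-of-envelope HDD geometry from the post above (1 TB, 3 platters, 7200 rpm).
outer_r = 3.5 / 2            # outer track radius, inches (platter assumed 3.5" across)
inner_r = outer_r / 2        # min STR ~ half of max STR => inner radius = half of outer

side_area = math.pi * (outer_r**2 - inner_r**2)   # recordable area of one side, ~7.22 sq in
platter_area = 2 * side_area                      # both sides: ~14.43 sq in

surface_gbit = 1000 / 6 * 8                       # 1 TB over 6 surfaces, in gigabits (~1333)

# The post divides per-surface bits by the TWO-sided platter area; dividing by
# side_area instead would double this figure -- one candidate for the
# "sounds rather low" discrepancy.
areal_density = surface_gbit / platter_area       # ~92.4 Gbit/sq in

linear_density = math.sqrt(areal_density * 1e9)   # bits/inch, assuming square bits
outer_track_kb = linear_density * 2 * math.pi * outer_r / 8 / 1000  # outer-track capacity, KB
cylinder_kb = 6 * outer_track_kb                  # whole outer cylinder, 6 heads

rps = 7200 / 60                                   # 120 revolutions per second
mb_per_s = cylinder_kb * rps / 1000               # STR IF all 6 heads read in parallel
print(round(areal_density, 2), round(outer_track_kb), round(mb_per_s))
```

With less aggressive rounding than the post, this lands near 300 MB/s for the parallel-heads case; dividing by 6 (one active head at a time) gets within shouting distance of the ~100 MB/s drives actually deliver.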
  6. If you're looking at empty external enclosures that take SATA drives, ignore that "limit". The enclosure simply provides a passthrough SATA interface... it doesn't care about drive size. The reason some vendors state the limit is that, at the time of printing, there weren't any drives larger than 500GB. They probably figured some moron customer would buy the enclosure thinking it had some arbitrarily large amount of storage.
    Not necessarily true. I just bought four Seagate 1 TB 7200.11's, and they run noticeably cooler than my old WD and Maxtor 250's. Also, those Seagates work just fine in my 5-bay eSATA enclosure purchased a few years ago... just to prove my point above.
    Naw, why make life difficult? Get a decent mobile 3.5" SATA rack for less than $40 and mount your backup drive in there. When you're not using it, simply unlock it from the chassis and slide out the tray. The drive is protected inside the tray at all times, and you can plug/unplug the drive without having to shut down your computer. If $40 seems like too much, you can get a non-fancy one without the LCD display for less than $30.
    If the drive isn't easy to remove, you will be sorely tempted just to leave it in the computer the whole time. You'll make up excuses like "oh, I heard powering up and down a drive is bad for it anyway, so I'll just leave it running!" and "backups are more convenient if the drive is always accessible!", etc., etc. Trust me, it'll happen. Physically removing your backup drive ensures that you don't accidentally delete or overwrite the wrong copy of the data, that the drive isn't harmed by some electrical or thermal mishap inside your computer, that a virus doesn't somehow find its way onto it, etc. So make it as easy as possible to load/unload the drive. Keep it inside the anti-static bag it came in.
  7. You tell us... if you are looking for a fast car, is it better to test the top speed and acceleration, or the time it takes to drive from your home to the office?
  8. On what data are you basing this claim of "many" SSD failures? Some wear-levelling algorithms will relocate data (transparently to the host OS) in order to spread out writes. For wear-levelling to only employ "free" blocks would imply that the mechanism knows the difference between "free" and "used" blocks, which then implies a knowledge of the filesystem on the device and partitioning schemes in case of multiple, different filesystems on one device, etc. That's quite a bit of smarts to cram into the firmware of a drive. Wear levelling, when implemented in the SSD itself (vs a filesystem that is tuned to spread writes evenly), should be filesystem-agnostic. Have a look at this page:
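As a toy illustration of why drive-level wear levelling can be filesystem-agnostic, here is a hypothetical flash-translation-layer sketch (not any real drive's firmware): the device only ever sees logical-block writes, and steers each one to the least-worn physical block via a remap table, with no notion of files or free space in the filesystem sense.

```python
# Hypothetical toy FTL: wear levelling with zero filesystem knowledge.
class ToyFTL:
    def __init__(self, n_physical):
        self.erase_count = [0] * n_physical    # wear per physical block
        self.lba_map = {}                      # logical block -> physical block
        self.free = set(range(n_physical))     # physical blocks holding no mapping
        self.data = {}

    def write(self, lba, value):
        # Pick the least-worn free physical block (index breaks ties deterministically).
        target = min(self.free, key=lambda b: (self.erase_count[b], b))
        old = self.lba_map.get(lba)
        if old is not None:                    # stale copy gets erased and freed
            self.erase_count[old] += 1
            self.free.add(old)
            self.data.pop(old, None)
        self.free.remove(target)
        self.lba_map[lba] = target
        self.data[target] = value

    def read(self, lba):
        return self.data[self.lba_map[lba]]

ftl = ToyFTL(8)
for i in range(100):
    ftl.write(0, i)        # hammer a single logical block 100 times
print(ftl.read(0))
print(max(ftl.erase_count) - min(ftl.erase_count))  # wear spread stays within 1
```

Even though the host rewrote one logical block 100 times, the ~99 erases end up spread almost perfectly evenly across all 8 physical blocks, purely from LBA-level remapping.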
  9. Brian Tao

    New OCZ SSD

    Another quickie, this time from Hot Hardware... URL says it all:
  10. Brian Tao

    New OCZ SSD

    Another preliminary review is up:
  11. Brian Tao

    Vista Start Menu, how to get back to XP

    I actually find Vista to be far superior to XP in this regard. I now have nearly instantaneous access to every installed program (and documents too, if I choose) right from the keyboard. Try typing in a few more letters and Vista will instantly (at least on my machine) narrow down the results. It will also prioritize the stuff you use more often up at the top. For instance, if I tap the Windows key and hit just "i", the first choice is "FastStone Image Viewer". The second choice is "Internet Explorer". If I typed "in" instead, Internet Explorer would have been the default. If I want Firefox instead, I type "fir". OpenOffice Writer? Type "wri". DVD Decrypter? "dv". DVD Shrink? "sh". Etc., etc. It does real-time matching on the words in the program name, so it's quite easy to find exactly what you need with just two or three keystrokes.
    Now if you're an extremely slow typist, or just prefer something more efficient, create QuickLaunch icons for your most-used apps. This is the same QuickLaunch that's in XP, where small application icons appear on the task bar, usually on the left side of the screen, just next to the Start menu. With Vista, the Win+1 through Win+9 shortcuts are linked to those icons. In my case, Cygwin is the first icon in the QuickLaunch folder, and thus Win+1 fires up a Cygwin shell. Firefox is the second icon; that's Win+2. Photoshop is Win+3, Bridge is Win+4, etc. I don't know what version of Windows you're used to, but Win+E has always opened a new Explorer window... this has not changed with Vista.
  12. Are there any review sites that break down their drive I/O benchmarks by zone (i.e., position of the read/write head)? For instance, how does the outer 10% of a drive compare to the inner 10% of the same drive in terms of read/write throughput, access/seek times, concurrent I/Os, etc.? STR is usually the only one that is published as a function of head position. For any given benchmark, would the first, say, 30 GB on a Raptor 150 (20% of the drive) be faster or slower than the first 30 GB on a 750GB 7200.10 (4% of the drive)? Does the areal density advantage of the 7200.10 make up for the Raptor's higher spindle speed? I have not been able to find any benchmarks that break down I/O numbers by zone... I'm guessing they all assume the use of one partition that spans the entire drive?
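Absent published per-zone numbers, the short-stroke comparison above can at least be roughed out with a simple model: LBAs fill the platter from the outer edge inward, and STR scales with track radius. The peak STR figures below are illustrative placeholders, not measured values for either drive.

```python
import math

def str_at_fraction(f, str_max_mb, inner_ratio=0.5):
    """STR at LBA fraction f (0 = outermost edge, 1 = innermost track).

    Capacity between radius r and the outer radius R scales with R^2 - r^2,
    so the radius reached at fraction f of the LBA space is
    r/R = sqrt(1 - f * (1 - inner_ratio^2)); STR is assumed proportional to r.
    inner_ratio = 0.5 gives the usual ~2:1 outer-to-inner STR taper.
    """
    r_rel = math.sqrt(1 - f * (1 - inner_ratio**2))
    return str_max_mb * r_rel

# Placeholder peak STRs (NOT datasheet or benchmark values), just to show
# the shape of the comparison: slowest point within the first 30 GB of each.
raptor_slowest  = str_at_fraction(30 / 150, str_max_mb=88)   # outer 20% of a Raptor 150
seagate_slowest = str_at_fraction(30 / 750, str_max_mb=78)   # outer 4% of a 750GB 7200.10
print(round(raptor_slowest, 1), round(seagate_slowest, 1))
```

Under these made-up peaks, even the 20%-depth point on the smaller drive only gives up about 8% of its peak STR, while the 4%-depth point on the big drive gives up under 2%; which drive wins depends almost entirely on the peak values you plug in, which is exactly why per-zone benchmarks would be useful.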
  13. Brian Tao

    Where, oh where, are the Terabyte drives

    Guess not...
  14. Brian Tao

    Anyone recommend any eSATA enclosures?

    Bumpity bump... here's my review of a 5-drive external enclosure with a PM-aware eSATA module: Pretty pictures too!