Scott C.

  1. Because you were cherry-picking dates/times. The Intel drive was WAY ahead of its time, which is great. But the reality is that 18 months ago it was the ONLY drive that was any good, and it's about half the price and twice the size now compared to then. The title of this thread is a big exaggeration. I could cherry-pick dates and times too... in 18 months consumer SSDs got 4000x faster at random writes! (Sept 2008 to April 2010). These things have been advancing in good chunks for the last 2 years, and we are on the verge of another wave of capacity/price/performance improvements between now and late spring 2011. No, nothing in this thread convinces me of that at all.
    I completely disagree that they aren't good for desktop use. They ROCK for desktop use. I store my bulk stuff on a big magnetic drive (I've got ~750GB of media, archives, backups from ages ago, etc.). NONE of that stuff needs fast random access. I'm way more than the 'average goon' and all the stuff I use regularly fits in 80GB. I don't, however, use media creation software, which is one way to require more space. I've also not been gaming as much the last couple of years, which would require some more space... but gamers, for the most part, are willing to upgrade to a larger size anyway. Every now and then I have to move something I no longer use into the 'archives'. It's totally worth it to spend $150 more on an SSD for your main apps and data rather than $150 more on a faster CPU. Budget users are not so lucky, though: you can build a budget PC for $550, but only if there is no SSD. The value segment of the market stinks; a 40GB drive does nothing other than boot fast and load email/browsers fast.
    The SSD market is not strange; it's in fact very predictable and easy to explain. The primary cost is the flash chips in the drives. That cost goes down only slowly during a single manufacturing generation and then drops by almost a factor of two when the manufacturing generation changes. Performance depends on having a good controller that handles random writes well, does good wear leveling, etc. Additionally, the manufacturing capacity for these _competes_ with RAM and CPUs, so if demand goes up for those, supply may be affected a bit. So, like CPUs (and to a lesser extent, RAM), the market evolves in big step functions rather than the smooth changes we see in spinning magnetic drives. In short, I agree that the budget SSD segment sucks, but I disagree with just about everything else.
  2. Um. OK. So two years ago the Intel drives weren't out. Atrocious SSDs that could not do random writes were all that existed. In fact, I bought a 32GB JMicron-based SSD for testing at work almost exactly 2 years ago for $235. It could do 110MB/sec reads and ~60MB/sec sequential writes, but did random writes at ~6 per second! Intel's drives came out in October of 2008. The 80GB one cost $700 and sat at that price, give or take $75 or so, until ~April 2009. By the time they were available for the price you bought it for ($350), it was about 1 year ago. You're like the guy who bought a Pentium Pro when it came out and then complained 3 years later when the Pentium II came out. The X25-M was ahead of its time. I'm sure you had a ~1TB drive for the stuff that takes space. No OS-only install requires 80GB; OS + a few games and apps fits in 80GB.
    I have ~100 of these drives used in production servers. None have died. I have been hit by the firmware issues that lead to slow writes, and have had them degrade in performance and require some 'reconditioning' (write at least 50% of the drive sequentially, then re-write that and delete it -- a rough sketch is below) to get them back. I'm looking forward to using trim/discard in Linux on G2s. However, it actually has been almost 2 years for me, since I bought them for $730 at first availability in early October 2008, and the reconditioning has only recently become an event, after ~150GB of writes per drive per day for 18 months. SMART says they still have 70% of their life left. The fact is that these drives handle 200 to 600 random read iops constantly on our servers with sub-millisecond latency, and cost a lot less than anything else that can do that.
    To answer your question on where the new stuff is: 25nm production has started at Intel and elsewhere, and Intel is manufacturing the next-gen drive components now. When the inventory of old stuff gets scarce enough and the flow of new stuff is ramped up, they will release G3. This should be this fall, based on their roadmap and various rumors. These will come in 160GB, 300GB, and 600GB sizes. Along with the 25nm G3 consumer drives will be 32nm "enterprise" versions with similar capacities, but using MLC instead of SLC, and a supercapacitor option this time (to avoid losing recently written data during power loss). It is expected that the price per GB will roughly drop in half, as it did the last time the manufacturing process was upgraded. So a 300GB consumer drive will be ~$450, is my guess. The 300GB enterprise version, on the older process, will probably be ~$1000 (again, my guess). The next generation, capacity, and price drop after that will be in 2 years (it's the semiconductor schedule), and will roughly double capacity again. Doubling every two years is MUCH faster than the rate hard drives have been improving lately, but SSDs will not take over the large-capacity segment; they'll take over the performance segment.
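    A minimal sketch of that reconditioning pass, assuming the drive is mounted at /mnt/ssd (the path and sizes are placeholders for an 80GB drive; adjust to your setup), and assuming a TRIM-capable kernel and util-linux if you want the simpler discard route instead:

        # sequentially fill ~50% of an 80GB drive with one large file (1MB blocks, direct I/O)
        dd if=/dev/zero of=/mnt/ssd/fill.bin bs=1M count=40960 oflag=direct
        # re-write the same blocks once more, then delete the file
        dd if=/dev/zero of=/mnt/ssd/fill.bin bs=1M count=40960 oflag=direct conv=notrunc
        rm /mnt/ssd/fill.bin
        # on a G2 with TRIM support in the kernel and filesystem, a batch discard does the job directly:
        fstrim -v /mnt/ssd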
  3. Scott C.

    WD RE3 1TB+ with 500GB platters?!

    Another review: http://www.tomshardware.com/reviews/2tb-hd...rgy,2371-9.html Sure, a 'true' 7200 RPM drive tops the list (barely) in best access time, but these supposedly slow-RPM drives are often near the top in latency and iops -- which is all that matters. Compare to this round-up from the end of 2008: http://www.tomshardware.com/reviews/hdd-te...tb,2077-12.html The best non-'green' 7200 RPM drives score between 12.1 and 14ms latency. These new drives are 14 to 15ms. I suppose that extra 1ms or so can be attributed to additional rotational latency (quick arithmetic below). No 'true' 7200 RPM drive over 1TB scores better than 13ms either. And the sequential throughput of the new drives is rather good. In any event, if you're looking for very large storage, you shouldn't be focusing on iops exclusively -- it's part of the picture but not the whole thing. If you need that, just get an X25-M. Pairing one SSD with a HD is great.
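    As a rough sanity check on that latency gap (my own back-of-the-envelope, assuming the 'green' drives spin somewhere near 5400 RPM): average rotational latency is half a revolution.

        # half a revolution, in milliseconds, for each spindle speed
        awk 'BEGIN { printf "7200 RPM: %.2f ms\n5400 RPM: %.2f ms\n", 60000/7200/2, 60000/5400/2 }'
        # 7200 RPM: 4.17 ms
        # 5400 RPM: 5.56 ms   -> roughly 1.4 ms extra, in line with the gap in those reviews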
  4. Scott C.

    WD RE3 1TB+ with 500GB platters?!

    The description is a bit misleading. The drive does not use a variable speed during reading/writing. My understanding is that these drives use something like what Hitachi does, where the drive slows down when the heads are unloaded onto the ramp. In that sense it's variable, but many people take "variable" to mean even during reading/writing, and it's not. Heads are designed to fly in a fairly narrow range of spin speeds, which is why they're unloaded when the drive moves to a slower "idle" speed. Also, I don't think any of the GP line actually spins up to 7200 RPM, which continuum pointed out. The RE4-GP clearly has performance numbers that indicate it's at least very close to 7200 RPM when in operation. The consumer 2TB GP is FAR slower in most benchmarks, and the RE4-GP beats all but the top few 1TB drives. Based on the random seek numbers, these things must be spinning close to that rate when in operation. Exactly how often and under what conditions they can slow down for power saving is unclear. They are also using tech that lowers the power used for seeking. The actual power consumed is much less than the first 2 generations of 1TB drives.
  5. Scott C.

    WD RE3 1TB+ with 500GB platters?!

    I found one earlier while searching around (sorry, in a hurry right now). It was comparing a couple of 2TB drives to some 1TB drives, I think. Google around a bit. The Seagate 2TB was like the RE4: clearly as fast as any 7200 RPM drive in sequential and random access.
  6. Scott C.

    WD RE3 1TB+ with 500GB platters?!

    Google: 're4 gp benchmark'. Reviews have been out for a while. Here are some Iometer numbers: http://www.pcper.com/article.php?aid=703&a...xpert&pid=7 From an article on it: "The Western Digital RE4-GP uses Western Digital's "GreenPower" collection of technologies - IntelliPower, IntelliSeek and IntelliPark - to cut down on the hard drive's power consumption. IntelliPower varies the drive's spin speed to reduce power consumption when spinning up the drive and during normal operation; as a result, Western Digital doesn't provide a fixed spin speed for the drive." The GP on this drive is there to conserve power in datacenters, not to slow it down a lot. It's a RAID drive, not a consumer drive. I'm pretty sure, based on the numbers, that it spins up to 7200 RPM for most tasks, then parks the heads and slows down when it can.
  7. Scott C.

    NCQ: Best Upgrade For a Power User!

    "It looks like stale-mate here. I don't have Windows, and it seems you don't acknowledge any test "posted somewhere". In addition, your way of communication discourage me from any discussion"
    You can get ANY drive in Linux to work well. Just fiddle with the "readahead" value for your block device. See /sbin/blockdev --setra -- try perhaps 2048 blocks (1MB) of readahead and you'll see that concurrent, separate sequential reads work very well (quick example below). Fiddling with the Linux I/O scheduler can also tune the system to favor sequential or random access more (search Google for 'deadline cfq linux'). Windows, though, doesn't do enough OS-level sequential read detection or speculative readahead, so the drive firmware has to do it instead.
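    A concrete example of those knobs, assuming the drive shows up as /dev/sdb (the device name is a placeholder) and a reasonably recent 2.6 kernel:

        # readahead is in 512-byte sectors, so 2048 = 1MB
        /sbin/blockdev --getra /dev/sdb
        /sbin/blockdev --setra 2048 /dev/sdb
        # see which I/O scheduler is active, then switch it for this device
        cat /sys/block/sdb/queue/scheduler
        echo deadline > /sys/block/sdb/queue/scheduler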
  8. Scott C.

    WD RE3 1TB+ with 500GB platters?!

    The RE4-GPs are faster in all benchmarks than the 7200 RPM RE3s, and significantly faster than the 2TB consumer GP drives. Don't look at the RPM, look at the actual performance numbers -- linear transfer and random access times show that they are fine and must be spinning relatively fast (or are variable RPM). I suspect that there will be no non-GP RE4 drives. From the looks of it, there wouldn't be much of a performance difference anyway; it would be marginal, and the drive would just use more power and put out more heat when idle.
  9. Scott C.

    WD presents their new SSD models

    Because I've used several hundred MLC-based drives in servers now (high random read load, moderate writes, huge performance win), and even in a datacenter that is supposedly completely power-safe, things happen. Hence BBU caching RAID controllers, which are only a little bit safer (and I've seen these fail). So, the fewer things that can corrupt your data, the better. And if you don't have to buy a $300 to $900 RAID card to get that safety (or performance), the SSD pays for itself.
  10. Scott C.

    WD presents their new SSD models

    Interesting. Yeah, the sequential transfer rate specs aren't super hot, but if the random write performance is very good it will perform very well in the real world. I like the no-data-loss-on-power-failure feature -- only hard drives with the write cache off can claim the same, and performance usually hurts badly in that case (see the hdparm sketch below). Intel's SSDs should have the same capability, but having it as a claimed feature would be a plus (there is no write cache on them; the RAM is for LBA mapping space and other block management duties). Having tools to show you how much life is left on the drive indicates they have tested the failure scenarios well -- it probably enters read-only mode gracefully at the end. I doubt the Indilinx stuff can claim the same with their large 64MB cache. Anything else is either pricey or junk.
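    If you want to see that trade-off on a regular hard drive, a quick sketch (assumes a SATA disk at /dev/sdb -- a placeholder -- and hdparm installed; expect write performance to drop sharply with the cache off):

        # show the current write-cache setting
        hdparm -W /dev/sdb
        # turn the volatile write cache off (safer across power loss, much slower for writes)
        hdparm -W0 /dev/sdb
        # turn it back on
        hdparm -W1 /dev/sdb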
  11. All HD vendors. ALL have had occasional drive lines with problems. This whole "I'll never trust Seagate again" stuff is really funny. There is not a single vendor that hasn't had one of their drive lines have a problem in the last 5 or 6 years. Just because Seagate had issues with one sub-type of their 7200.11s (which is entirely fixed now) doesn't mean anything about their future drives. If I stopped purchasing from every manufacturer whose drives I've had to return at less than 1 year old, I would only be able to buy SSDs. I've had WD 40GB drives die on me (and then the replacement die, and then ITS replacement die). I've returned a Fujitsu, a Samsung, and multiple Hitachis. Somehow, none of my IBM Deskstars died, but they're not making any more drives. And although no Maxtor I've owned has died in its first year, one of them died right when the 3-year warranty was up, like an alarm clock. My only rule these days is to buy drives with 5-year warranties, the exception being archival mass storage where the data is protected somehow (backed up or RAID).
  12. Scott C.

    RAID5000 vs. RAID60

    If you're starting out with a large DB, it's usually best to start with RAID 10. Its write performance is best, there is no RAID 5 or 6 'write hole', and with enough disks you'll be maxing out the sequential transfer rate anyway, so RAID 5 or 6 won't do you any better than RAID 10 in that respect. Random reads are often extremely important in a database, and RAID 10 is very good at that. There is a space sacrifice for this performance, however. The comparison above pits 48 SAS drives (RAID 50 and 60) against 24 SATA drives in the RAID 10 case. It is highly unlikely that you will get full random read or write performance out of RAID 50 or 60 if your dataset is large and much of it is accessed. If it is large but a small subset is most commonly accessed, then a lot of the parity can be cached.
    If your system can handle it, four Arecas will beat two in sequential transfer rate noticeably, due to port and bus bandwidth. They will be on par for random iops if RAID 10, but the extra cache with four will help if the randomness is biased towards some data -- though usually the OS cache means the RAID card cache is more effective for write scheduling and parity caching than for read data caching. With RAID 50 or 60, the extra controllers will be an even larger factor. With external SAS expander boxes you can change your RAID card choice too.
    I highly recommend thinking about having a separate volume for the log and the data (a Postgres example is sketched below). The log will issue synchronous write commands that pollute the controller cache, and it is largely sequential (so RAID 5 or 6 is fine there). However, in my experience it is more important for the log to be on a separate file system partition than on a separate RAID volume, and the benefit is DB- and OS-specific. I haven't tuned MSSQL at DBs that size before, but it helped on the small ones. Postgres and Oracle both get big gains from log separation on big DBs on Unix/Linux. Windows has different behavior that makes the benefit smaller, but it is still there.
    Also, if you're going for I/O this big you may have some network I/O bottlenecks to consider: 1Gbps is only about 100MB/sec. And as much as the data tables are written sequentially, writes still end up somewhat fragmented, so iops will matter. And hot spare drives -- don't forget to have one or more on each controller!
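    One concrete (hypothetical) way to do that log separation with Postgres on Linux -- the paths, init script, and mount point are placeholders, and newer Postgres versions name the directory pg_wal instead of pg_xlog:

        # stop the database, move the transaction log directory to its own volume,
        # and leave a symlink behind; Postgres follows the symlink transparently
        /etc/init.d/postgresql stop
        mv /var/lib/pgsql/data/pg_xlog /logvol/pg_xlog
        ln -s /logvol/pg_xlog /var/lib/pgsql/data/pg_xlog
        /etc/init.d/postgresql start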
  13. Scott C.

    RAID 5 with SATA vs. SAS

    RAID 5 + database = bad idea. Small random writes plus RAID 5 kills performance. Generally you can put larger, slower drives in RAID 10 and get the same performance: 4 10K SAS drives in RAID 10 will beat 4 15K drives in RAID 5 in most database workloads that are I/O bound. For database performance, RAID 10 is the only way to go for the data partition. If you aren't capacity constrained, you want the most iops per $. That might mean 8 SATA drives or 4 SAS drives. Or, if you really don't need that much space and want to make it ridiculously fast, some X25-Es. If your DB mostly fits in RAM, then SATA will do fine, provided you get a controller with a battery-backed write cache. If you use X25-Es you don't need a hardware RAID controller; software RAID 1 or 10 is enough (a quick md sketch is below).
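    For the software-RAID route on X25-Es, a minimal sketch with Linux md (device names and mount point are placeholders):

        # four SSDs striped and mirrored in software; no RAID card or BBU cache required
        mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mkfs.ext4 /dev/md0        # or whatever filesystem you prefer
        mount /dev/md0 /data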
  14. Scott C.

    Intel X25-M Impressions

    There's something wrong with your driver / OS config. Also, your virus scan is probably CPU limited or something (not sure there; that doesn't jibe with results I've seen myself). The big test isn't how long it takes to virus scan, but how badly a virus scan or other background task (large directory copies) interferes with other work. Another big test is loading large video games -- level load times go way down (and unlike boot times, this DOES matter), and any game with enough textures to texture-swap gains framerate too by stuttering less. I don't have one of these on my Windows box, but I can surely tell you that the hard drive gets in the way of my gaming experience, and I'm eyeballing the price drop on these things.
    The next big test is whether you run enough apps to cause any disk swapping. Even with 4GB or 8GB, a power user can cause paging, and paging back in from a traditional drive goes at random I/O rates of a handful of MB/sec, while on this SSD it goes at 50x that rate. As a developer pulling code branches (usually two full dev environments, each on a different branch), compiling, running a DB, running a VM, 400MB+ of Firefox tabs, 10+ shells open with very long/large histories (1GB of RAM at times), maybe a copy of GIMP, building and launching the apps under development over and over and reinitializing the DB -- this thing makes a big difference. It's comparable to having a lot more RAM: it makes things a lot smoother, but like RAM, if you don't need it, it won't help much. But it goes hand in hand with lots of RAM because it helps where the RAM can't -- when things have to go to disk and aren't just cached because you're doing complicated tasks.
    Note: I have found that it is vital on Linux / Mac OS to do some file system tuning to make it perform best. On Linux, use the noop scheduler and turn dirty_background_ratio down to the equivalent of 100MB or less (a setting of 1 for larger systems; Google it if you don't know about it) -- a sketch is below. I have no experience tuning for Windows, but if there are defaults in Linux that schedule I/O assuming it comes from spinning platters, I'm sure Windows does something similar.
    There is no chance that the newer hard drives are 20% faster than your 7200.11. They are up to that much faster for sequential transfer only, and 0% faster for random access unless you get a 15K RPM SAS drive. Your comment on it not being silent is silly. Might as well say your TV's mute button doesn't provide silence because you still have to breathe and pump blood during commercials. Your copy test from another drive is clearly bottlenecked by the source drive -- what did you expect? That it would make the other drive faster too? What about other tasks running concurrently with it? Did you open PerfMon and look at the OS disk % utilization during the test? Try a directory copy within the same drive, or maybe a Windows file search for something that can't be indexed, forcing it to scan the drive. The biggest speed gains are in concurrent activity and pure random access. I agree that for many tasks it doesn't help much; other drives and the CPU bottleneck tons of stuff, and your use cases above just don't seem to be that demanding. Other SSDs would actually lead to slowdowns in a lot of things -- the Intel drive doesn't have that problem. Boot time tests are really silly for most people. They are less silly for a Mac OS X user, though, since every freaking update from Mr. Jobs forces a reboot (even just a QuickTime or iTunes update -- it's seriously like Windows NT 4 reboot-land).
    I have also never bothered to partition my X25-Ms to less than the full 80GB. I have seen the slowdowns on occasion (writes drop to 40-50MB/sec), but they have been temporary and only affect heavy-write scenarios; in fact, I've only ever seen it during or just after benchmarking. It doesn't sound like you are a heavy-write user, so there is no point in limiting it to 50GB. The drive is already internally over-provisioned to some extent anyhow.
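    The Linux tuning mentioned above, spelled out as a sketch (assuming the SSD is /dev/sdb -- a placeholder -- and a 2.6-series kernel; treat the values as starting points, not gospel):

        # no point re-ordering requests for a drive with no seek penalty
        echo noop > /sys/block/sdb/queue/scheduler
        # keep the dirty-page backlog small so write bursts don't stall foreground I/O
        sysctl -w vm.dirty_background_ratio=1
        # on kernels that support it, an absolute byte limit is more precise (~100MB here)
        # sysctl -w vm.dirty_background_bytes=104857600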
  15. Scott C.

    Multiple Controllers - One Array ?

    I've done this several times. Performance is almost always much better with two controllers and software RAID 0 on top than otherwise. In a server running a big DB, I have two Adaptec 5805s. Each of those has a 10-disk RAID 10 array (plus 2 hot spares). Then Linux 'md' software RAID 0 sits on top of that (Windows Server's software RAID would work too, even Win2k); a rough sketch is below. That gets 1200MB/sec sequential transfers. The drives are in external enclosures, and I did try putting all 20 in one RAID 10 array on one RAID card, which was not nearly as fast (800MB/sec).
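    The md layer of that setup, as a minimal sketch (here /dev/sdb and /dev/sdc stand in for the two hardware RAID 10 volumes exported by the controllers; the names are placeholders):

        # stripe the two hardware RAID 10 volumes together with Linux software RAID 0
        mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 /dev/sdb /dev/sdc
        # then create the filesystem on /dev/md0 as usual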