About hyc

  1. Nice to see that PCM is finally getting to a usable state. 3 million IOPS is great, but they didn't mention their write speed. Looking forward to seeing more details on that. I wonder when STT-MRAM is going to hit 1Gbit chips. Last thing I saw was 64Mbit.
  2. Crucial M500 SSD Swap Issues

    Cloning Windows drives can be a pretty pointless exercise. The FAT filesystem used C/H/S block addressing, so if your new drive's geometry didn't match the old one's, none of your files would be found.
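A toy illustration of the geometry problem described above. The function name and the drive geometries are made up for the example; the point is just that the same C/H/S triple resolves to different linear sectors once the heads-per-cylinder or sectors-per-track counts change:

```python
def chs_to_lba(c, h, s, heads_per_cyl, sectors_per_track):
    """Translate a C/H/S address to a linear block address.
    Sectors are conventionally 1-based, hence the (s - 1)."""
    return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

# Same C/H/S triple, two different (hypothetical) geometries:
old = chs_to_lba(10, 2, 5, heads_per_cyl=16, sectors_per_track=63)
new = chs_to_lba(10, 2, 5, heads_per_cyl=255, sectors_per_track=63)
# old and new point at entirely different sectors, so every on-disk
# pointer recorded as C/H/S on the old drive is wrong on the new one.
```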
  3. I've been waiting for NVDIMMs for 30-some years, it seems, since I used to run an Atari ST with a 4MB battery-backed RAM cartridge. Instant-on isn't really an issue (though we pretty much had that back in the ST/TT days) - today it's all about the OS page cache. PCIe SSDs still aren't as fast as the real memory bus, and the fewer times our data has to traverse that bus, the better.

     For databases you always want fast writes, and the fastest write is the one you didn't need to execute (e.g., because it was in cache and the block got overwritten by a subsequent update anyway). NVRAM inside disk controllers is still at the far end of the I/O bus from the CPU. NVDIMMs offer the perfect solution - cache everything, and don't ever worry about flushing data back to secondary storage unless the cache is actually full. If there's a power failure, after reboot you just keep going with the cache exactly as you last left it.

     The trick, of course, is that you need explicit BIOS support - at the very least to make sure the POST memory test doesn't zero out all of your precious NVDIMMs just as they're reloading themselves when power is restored. You also need explicit OS/kernel support, to use the cache structures as they already exist instead of (again) blindly initializing them to zero on a reboot.

     Whether NVDIMMs ever offer you instant-on is irrelevant. The real value, and the reason you want these in servers, is a perpetual high-speed cache to mask the slowness of your HDDs or SSDs.

     ...and on the note of databases and RAM, this is what I've been working on these days: http://symas.com/mdb/inmem/ It's a big deal for all the tech giants - Google, Facebook, Twitter, eBay, you name it. NVDIMMs can be salvation for them (with the correct database technology, of course) and a gold mine for the vendors who bring them to market first.
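The "fastest write is the one you didn't need to execute" point can be sketched with a toy write-back cache. The class and its fields are invented for illustration; the behavior it demonstrates is just write coalescing - a dirty block overwritten in cache never generates the first backing-store write at all:

```python
class WriteBackCache:
    """Toy write-back cache: dirty blocks stay in (NV)RAM, and a later
    overwrite of the same block replaces the pending version, so the
    earlier write never reaches secondary storage."""

    def __init__(self, backing):
        self.backing = backing   # dict: block number -> data
        self.dirty = {}          # pending writes, newest version only
        self.flushes = 0         # count of actual backing-store writes

    def write(self, blkno, data):
        self.dirty[blkno] = data         # coalesces with any pending write

    def read(self, blkno):
        return self.dirty.get(blkno, self.backing.get(blkno))

    def flush(self):                     # only needed when the cache fills
        for blkno, data in self.dirty.items():
            self.backing[blkno] = data
            self.flushes += 1
        self.dirty.clear()

disk = {}
c = WriteBackCache(disk)
c.write(7, b"v1")
c.write(7, b"v2")   # replaces the cached copy; v1 never hits the disk
c.flush()           # one physical write instead of two
```

With NVDIMMs backing `dirty`, a power failure would leave those pending blocks intact for the post-reboot kernel to keep using, which is the scenario the post describes.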
  4. And they seem to be targeted at industrial/military applications, which means at least 10x more expensive than consumer grade...
  5. uh huh... and you've maintained that trend... It will certainly help, especially if both drives have sufficient onboard cache and both drives have fast enough electronics to allow burst transfers at full IDE speed (133MB/sec or thereabouts). In that case, you can interleave I/Os between both drives without any delays. If the drives' burst rates aren't fast enough then you're not going to get as much benefit. Creating a RAMdisk from main memory for the purpose of housing a swap file is pure idiocy, especially on a class of machine that has only 512MB-1GB of RAM total. On a machine running WindowsXP 32bit, with more than 3GB of RAM installed, it might make sense to create a RAMdisk in the memory above 3GB, but that's only because this memory is generally inaccessible to Windows or regular apps. On a 64 bit machine, there's really no justification. And yes, Windows up to XP is notorious for aggressively paging memory out to disk. It acts as if it's allergic to RAM. Vista should be better at making full use of RAM (though to be honest, in my experience it was just as bad).
  6. SSD Recovery

    All of these drives must be writing across multiple flash chips in parallel, because individual chips have nowhere near that much speed/bandwidth. The likelihood of an entire chip failing is pretty low. If it does, I think all of your data is lost. Generally they all use ECC for 1-bit correction/2-bit detection per byte. Protection on a larger scale is pretty much up to you, e.g. RAID...
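The 1-bit-correct/2-bit-detect scheme mentioned above is classically a Hamming code plus one overall parity bit (SECDED). A minimal sketch over a single data byte, with all names invented for the example:

```python
DATA_POS = (3, 5, 6, 7, 9, 10, 11, 12)   # non-power-of-2 slots hold data

def encode(byte):
    """SECDED-encode one byte: code[1..12] is a Hamming(12,8) word,
    code[0] is an overall parity bit that upgrades SEC to SECDED."""
    code = [0] * 13
    for i, pos in enumerate(DATA_POS):
        code[pos] = (byte >> i) & 1
    for p in (1, 2, 4, 8):               # parity p covers positions with bit p set
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    code[0] = sum(code[1:]) % 2
    return code

def decode(code):
    """Return (byte, status): status is 'ok', 'corrected', or 'double'."""
    code = code[:]
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 13) if i & p) % 2:
            syndrome += p                # failing checks sum to the error position
    overall_bad = sum(code) % 2 == 1
    if syndrome and overall_bad:
        code[syndrome] ^= 1              # single-bit error: correctable
        status = "corrected"
    elif syndrome:
        return None, "double"            # two-bit error: detected, uncorrectable
    elif overall_bad:
        status = "corrected"             # the overall parity bit itself flipped
    else:
        status = "ok"
    byte = 0
    for i, pos in enumerate(DATA_POS):
        byte |= code[pos] << i
    return byte, status
```

Real SSD controllers apply this idea (or stronger codes like BCH) across much larger sectors, but the correct-one/detect-two trade-off is the same.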
  7. Read my post -- this was already explained. The mapping information goes in the cell that is being written. In simple terms, the block being written IS the mapping block -- it's self-descriptive. Ah right, I must have skimmed past that the first time 'round. I see, sorry, thanks for the explanation. It begins to remind me of old mainframe disk extents...
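One way to picture a self-descriptive mapping: each physical block's spare area records which logical block it holds and a sequence number, and the controller rebuilds the logical-to-physical map by scanning at power-up. The on-media layout and function below are hypothetical; real flash translation layers differ in detail:

```python
def rebuild_map(scan):
    """scan: (phys_no, logical_no, seq) tuples read from spare areas;
    logical_no is None for erased blocks.  Returns logical -> physical,
    keeping the highest sequence number (the newest copy) per logical block."""
    best = {}   # logical -> (seq, phys)
    for phys, logical, seq in scan:
        if logical is None:
            continue
        if logical not in best or seq > best[logical][0]:
            best[logical] = (seq, phys)
    return {logical: phys for logical, (seq, phys) in best.items()}

# Logical block 5 was written twice; the copy with seq=2 wins:
scan = [(0, 5, 1), (1, 5, 2), (2, None, 0), (3, 9, 7)]
fmap = rebuild_map(scan)   # {5: 1, 9: 3}
```

This also hints at the answer to the wear question in the next post: there are no dedicated mapping cells to wear out, since the map lives inside (and is leveled along with) the data blocks themselves.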
  8. This reminds me of an old story ... in Indian mythology, the Earth is supported on the back of an elephant. "Well, what's the elephant standing on?" "Don't be silly - it's elephants all the way down!" Where is the map stored? What is the wear life of the mapping cells?
  9. Finally I got MTRON MOBI 32GB

    Take a close look at the stats. It's MLC. While flash lifetime can be great with good wear leveling, MLC is generally about two orders of magnitude less durable. But also cheaper (virtually all cheap thumb drives and SD cards are made of it). For a laptop it will probably be fine... I'm annoyed that all the newer SSDs are only coming out in SATA; I want a 128GB 2.5" PATA SSD for my current laptop. It's going to be a while before I see a new laptop that offers any compelling reason to upgrade.
  10. Hmm, didn't quite make the July 2007 target, but MLC flash is now only about $3/GB so 128GB for $384. Seems like this is the year that SSDs really make an impact on the market.
Best 400 - 500GB IDE Drive

    Really? Bummer. I just ordered a pair of WD Caviar SE16s, figure the 16MB cache is worth something. Also they seem to have a higher minimum read rate than the Hitachis. I figure the difference in Hitachi ATA133 vs WD's ATA100 probably doesn't matter here. One of mine will be going into a Firewire case, so that's going to cap its speed anyway...
  12. Last month MOSAID announced a new controller/interconnect technology that's capable of delivering up to 800MB/sec sustained to an array of flash chips. Pretty cool. http://www.mosaid.com/corporate/news-event...2007/070507.php They call it HyperLink NAND, built on a high speed serial point-to-point daisychain layout. Makes sense, philosophically similar to HyperTransport and FBDIMMs really. Plus they also claim to have new program/erase algorithms to make writing as fast as reading. Interesting stuff.
  13. No, I'm not referring to DRAM drives; I think only non-volatile storage can be a real replacement for current HDDs. Agreed. DRAM SSDs also can't compete on raw capacity: you can cram 128GB of flash in the same space as one GB of DRAM. Yep, but look how far they've come. A few years ago flash was way slower in STR and now it is biting at the heels of HDDs. I know write is lagging a bit, but read is more important for most applications anyway. I think in about one year's time read STR will surpass desktop-level HDDs, writes will be comparable, and prices will be about half of current. Add one more year and the performance market for flash SSDs should be huge. Also, that Samsung price is heavily marked up; you can find them for sale at the $480 mark already. And yes, read speed is more important because it can't improve as effectively with caching, whereas a small amount of DRAM will make write speeds more than fast enough for real-world use.
  14. OK, PQI has shown off a 256GB 2.5" SSD. http://www.dailytech.com/PQI+Shows+Off+256...article7489.htm So finally flash has leapfrogged mechanical storage in the 2.5" form factor. Of course, 256GB worth of NAND flash chips costs around $1800 right now; I expect this drive would retail for at least $4000 if it were released today. Yeesh.
  15. New long block standard

    Wouldn't that increase memory slack space enormously? Each DLL, EXE, and other memory-mapped file would use a size rounded up to the nearest 4MB? Yes it would, if only 4MB pages were being used. As I understand it, both large and small page sizes can be used at once. E.g., in Linux the default will still be the small (4K, 8K) page size, but apps that are mmap'ing big chunks of memory can request larger page sizes. This is a nice gain for large DB apps on 64-bit machines.
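The slack-space concern above is easy to quantify. A quick sketch (the file size is an arbitrary example) showing how much rounding up to whole pages wastes at 4KB versus 4MB page sizes:

```python
def slack(file_size, page_size):
    """Bytes wasted when a mapping is rounded up to whole pages."""
    pages = -(-file_size // page_size)   # ceiling division
    return pages * page_size - file_size

KB, MB = 1024, 1024 * 1024
size = 300 * KB + 123                    # e.g. a smallish DLL
small = slack(size, 4 * KB)              # a few KB of slack
large = slack(size, 4 * MB)              # nearly 4MB of slack
```

For small mappings the 4MB-page slack approaches a full large page each, which is why mixing page sizes (small pages by default, large pages on request) is the sensible compromise the post describes.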