Gilbo

Patron
  • Content count

    1836
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Gilbo

  • Rank
    StorageReview Patron

Profile Information

  • Location
    Halifax, Nova Scotia
  1. Hey everyone, I cross-posted this at Ars Technica, and what I need, according to some helpful folks over there, is a reverse breakout cable. Linkies for Canadians: Newegg Canada, NCIX. Newegg's generally cheaper, but shipping will typically be more.
  2. I know the opposite exists: you can buy a cable to take you from SFF-8087 to 4x SATA connectors. I want to go the other direction. Do such cables exist? Can I just use the above cable backwards? (I've never seen one. Are the connections unidirectional? I think SATA is the same at both ends?) I want to use 4 mobo ports to connect to an SFF-8087-accepting hotswap bay. If the cable exists, does anyone foresee any problems?
  3. More Intel X25-M G3 SSD Specs

    Actually, OWC at least has promised 2011 for its SF-2000 implementation, not 2012.
  4. Wow. Very impressive! A lot of people seem to have success by striping across multiple cards.
  5. Nizzen, I assume that "pass through" mode is for using the controller with software RAID or individual disks? Can you have a non-software RAID in that mode? Very impressive that the access time seems to be lower than with the ICH10R!
  6. Crucial C300 Discussion

    A lot of other reviewers have had issues with the C300. Lloyd Chambers does tests on the Mac platform targeted at photographers, and he found that its performance degraded and that it was unreliable. The SandForce drives seem to perform excellently in any conditions.
  7. P.S. Hey Spod, long time no see!
  8. Like Spod's getting at, it's the reduction of the interruption factor. It's not valuable to measure the productivity advantage in net seconds. Instead you need to think about it in terms of how many times your computers' crappy performance interrupts your workflow. Instead of it being 100 times a day, it's now 10. Now you don't ALT-TAB into your IM or email and screw around for 15 minutes nearly as often. You can keep concentrating on what you're working on. I don't even use the computers I own that I haven't upgraded to SSDs anymore --it's too much of a PITA...
  9. Someone finally has an 1880? Fantastic! The 4K reads & writes are worse than I might have expected. High access times due to the controller? Crazy speeds other than that. Excellent scaling on the sequential reads & writes.
  10. SATA RAID controller

    I'd also recommend software RAID in your situation. Hardware RAID only offers performance advantages for RAID levels which implement parity. Even then, with modern multicore CPUs it's not a clear-cut call. Most software RAID implementations are better tested and MORE reliable, not less.
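    To make the parity point concrete, here's a minimal Python sketch (my own illustration, not any particular controller's implementation; the block and stripe sizes are made up) of the XOR parity a RAID 5 write has to compute --the work a hardware controller offloads, and the kind of work a modern multicore CPU handles easily:

      # Toy RAID 5 parity: the parity block is the XOR of the data blocks
      # in a stripe. Sizes are invented for the example.

      def parity_block(data_blocks):
          """XOR a list of equal-sized data blocks into one parity block."""
          parity = bytearray(len(data_blocks[0]))
          for block in data_blocks:
              for i, byte in enumerate(block):
                  parity[i] ^= byte
          return bytes(parity)

      # A three-data-disk stripe with 64 KB blocks:
      stripe = [bytes([d]) * 65536 for d in (1, 2, 3)]
      print(len(parity_block(stripe)))   # 65536 -- one parity block per stripe

    XOR over a few hundred KB per stripe is trivial work for a current CPU, which is why the offload matters far less than it used to.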
  11. Hey LesT, I can't tell when you're talking about the OCZ drive and when you're talking about the OWC. I assume from the introduction of your post that you like the OWC and don't like the OCZ, despite the contrary stuff farther down (typos I assume)?
  12. How do you upgrade a RAID 1?

    You should have no problem at all plugging in a bare drive. You may have to point the RAID controller management software to the new disk to start the rebuild. It should be usable, but may be slower than normal depending on how the controller prioritizes the rebuild. Rebuilding will definitely go slower if you're using the system at the same time. It's a small thing, but worth mentioning that if you're going this route rather than cloning to a new RAID 1, you shouldn't wipe or throw out the old disk. It's an extra backup in case the 1st disk, by extraordinarily bad luck, were to die during the rebuild.
  13. How do you upgrade a RAID 1?

    In place? This is the traditional way (back up everything first!):
    1. Pull one disk.
    2. The array becomes degraded.
    3. Put in one of your new disks.
    4. Rebuild.
    5. Repeat with the other disk.
    6. Expand the partition & filesystem to the full capacity of the new disks using a partition management tool.
    The above will work across any implementation of RAID 1. You could also clone your data to a new array and then expand the partition as you proposed. No need to do it with one disk and rebuild, though. Just clone it directly to the new RAID 1.
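    If it helps to see the flow, here's a toy Python model of the in-place upgrade (purely illustrative; the Mirror class, method names, and disk sizes are invented for this sketch, and in reality your controller or OS RAID layer does the rebuilding):

      # Toy model: replace both members of a RAID 1, one at a time, then expand.

      class Mirror:
          def __init__(self, disk_a, disk_b):
              self.disks = [disk_a, disk_b]            # each disk is a bytearray
              self.capacity = min(len(d) for d in self.disks)

          def replace(self, slot, new_disk):
              """Pull one member, insert a new disk, rebuild from the survivor."""
              survivor = self.disks[1 - slot]
              new_disk[:self.capacity] = survivor[:self.capacity]   # rebuild = full copy
              self.disks[slot] = new_disk

          def expand(self):
              """Grow the array to the smaller member (then grow the filesystem on top)."""
              self.capacity = min(len(d) for d in self.disks)

      array = Mirror(bytearray(1000), bytearray(1000))   # two old 1000-unit disks
      array.replace(0, bytearray(2000))                  # steps 1-4
      array.replace(1, bytearray(2000))                  # step 5
      array.expand()                                     # step 6
      print(array.capacity)                              # 2000 -- only now is the new space usable

    The point of the model: the extra capacity only appears after both members have been swapped and the array and filesystem are explicitly grown.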
  14. It's nearly impossible for any website to test every firmware revision of a particular flash controller, which is a shame. Combine that with the host controller interaction (6 Gbps Marvell vs ICH10R, for example), and it's really hard for a purchaser to tell what they should be buying. I have no idea how to solve those problems without a lot of money.
  15. Small block IO is faster on the 9211 in all benchmarks I've seen. In fact, you may have done quite a few of those benchmarks (over at XtremeSystems, if I recall correctly). The highest STRs I've seen are also from the 9211, in direct comparisons with the 9260. Those are with 16 Intel SLC SSDs --no question the controllers are the bottlenecks. Access time is also lower on the 9211, which, for low queue depth situations (specifically real-world desktop use), is one of the most important fundamental metrics to look at when evaluating SSD performance. I haven't seen benchmarks with FastPath enabled on the 9260 though, and I don't claim to have seen every benchmark on the net. I'm just reporting what I've seen in the research I've recently performed.

    It's worth mentioning that most desktop applications don't do multithreaded IO and don't pipeline IO. They wait to receive data before issuing more requests, and they do this in a single thread. This means turning around those requests with as little latency as possible is the only way to push data back to the application faster. If you have multiple applications issuing simultaneous requests, the OS can take advantage of command queuing, but otherwise the queue depth will stay at ~1. In my use Lightroom behaves this way. Even though it's processing multiple files in multiple threads, it's using a single thread for IO. If you want to improve the IO performance of applications like this, you need the lowest latencies possible. The 9211 will be better than the 9260 in these situations. Desktop applications were written this way for performance reasons: to avoid thrashing mechanical disks. It limits SSD performance significantly, though. Until SSDs are ubiquitous, I don't think we'll see any change.

    Write amplification in RAID 0 comes from the simple fact that you're dividing a write that might have fit into fewer erase blocks on a single drive into at least one erase block per disk. There's an interaction between stripe size and SSD erase block size here (which is itself an interaction between channels and NAND page size), but every person using RAID 0 I've observed optimizes stripe size for performance, not write amplification. To avoid this problem you need to do two things: 1) your stripe size needs to be equal to, or larger than, the SSD's erase block size, and 2) if larger, your stripe size needs to be an exact multiple of the erase block size.

    For example: let's say you have an 8 drive array of SandForce SSDs (lucky you...). You have a 64KB stripe size, which is probably the most common size, since it's 1) a common default, and 2) often the size that delivers optimum performance in some very popular benchmarks (ATTO, I'm looking at you). SandForce's erase block size is 512KB. You want to write a 2MB (2048KB) file to your array. This file will be divided into 32 stripes, 4 per drive. On a single SSD, this would have resulted in a minimum of 4 read/erase/write operations, and a maximum which depends on filesystem alignment issues and the internal fragmentation of the SSD (which can be reduced by garbage collection because it understands the OS's filesystem AND hopefully has TRIM). Now it results in 8 read/erase/write operations, and maybe, depending on internal fragmentation, many more. We can be certain that internal fragmentation will be higher in the RAID case because: 1) we lack TRIM, and 2) we lack a filesystem table for the drive's internal garbage collection to analyze.
    In this case you have a minimum of 2x write amplification, before you deal with internal fragmentation, which is going to be magnified due to the absence of TRIM and a filesystem table! Maybe overall each SSD gets less data written to it (but only before write amplification), but that's just a symptom of a higher capacity-to-data-written ratio: a single SSD with 8x the capacity would be the only way to hit your ideal 1/8 write amplification (i.e. a 400GB SSD vs. 8x 50GB in RAID, to continue the example above --the 400GB SSD would last a minimum of twice as long writing 2MB files). It's important to remember that when we talk about write amplification in relation to SSD lifetimes, we need to consider the data written vs. the capacity (including spare capacity).

    That said, RAID 0 isn't as bad as striping with parity, where your write amplification will go through the roof as you update small bits of parity all over the place. Striped parity is a disaster with SSDs. RAID 3 would be the way to go, but it's unusual to see these days. Sorry for the long post. Complicated stuff...
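    To put numbers on the example above, here's a short Python sketch of the arithmetic (sizes are the ones from the example; it counts the minimum number of erase blocks touched, and it ignores internal fragmentation and the missing TRIM, which only make the RAID case worse):

      # Minimum erase-block operations for one 2 MB write: single SSD vs 8-drive RAID 0.

      KB = 1024
      stripe_size = 64 * KB        # common RAID 0 default
      erase_block = 512 * KB       # SandForce-class erase block
      write_size  = 2048 * KB      # one 2 MB file
      drives      = 8

      def min_erase_ops(nbytes, erase_block):
          # ceiling division: touching any part of an erase block costs a full
          # read/erase/write cycle
          return -(-nbytes // erase_block)

      # Single SSD: the whole write lands on one drive.
      single = min_erase_ops(write_size, erase_block)

      # RAID 0: 2048 KB / 64 KB = 32 stripes, spread round-robin = 4 stripes
      # (256 KB) per drive, and each drive pays for a full erase block.
      per_drive_bytes = (write_size // stripe_size // drives) * stripe_size
      raid = drives * min_erase_ops(per_drive_bytes, erase_block)

      print(single, raid)          # 4 vs 8: at least 2x the erase-block work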