gfody

Member · Content count: 146 · Reputation: 0 Neutral
  1. I use a pair of SATA docks to carry my main system drive (a Crucial M500 SSD) between my laptop (2014 MBP) and my desktop PC. For the laptop I use USB 3 (gen1 on the MBP, 5Gbit) and for the desktop I use an eSATA port. I've been using this setup for years now and was thinking about how I could upgrade to a 950 Pro. It seems like a waste if I can't increase throughput. For the MBP that means using the Thunderbolt port, and I could get a Thunderbolt adapter for my PC. The problem is that there don't seem to be any Thunderbolt M.2 adapters, only USB. I found this Thunderbolt SATA dock which I could use with a SATA/M.2 adapter, but that would bottleneck the 950 Pro to SATA speeds again, defeating the purpose. A native Thunderbolt M.2 adapter would be ideal, but it doesn't seem like anybody is making them.
  2. Does anyone know of an M.2 enclosure with a Thunderbolt interface? The only ones I can find are USB. USB wouldn't be so bad, but my laptop's USB ports are only 5Gbps. I found this thing http://www.sonnettech.com/PRODUCT/fusionpcieflashdrive.html but it's not just an enclosure, and I'm not sure what sort of SSD is included. Ideally it would just take an 80mm M.2 drive.
  3. Is the fragmentation resilience really intrinsic to the new 3D layout, or is it just controller enhancements? http://www.pcper.com/reviews/Storage/Samsung-850-Pro-512GB-Full-Review-NAND-Goes-3D/Performance-Over-Time-and-TRIM
  4. I just caught the SR review that includes the 910 and shows them actually much closer than the MySQL bench: http://www.storagereview.com/fusionio_iodrive2_mlc_application_accelerator_review_12tb I think the MySQL bench was actually run on an ioDrive Duo, based on the capacity, and I guess the ioDrive2 is a bit slower than the Duo.
  5. I'm wondering if anyone has tested and compared these products. The Intel 910 800GB is ~$4k and does 180k/75k IOPS 4k random read/write; the Fusion-io ioDrive2 785GB is ~$10k and allegedly does over 900k IOPS. The only head-to-head I could find for DB load is here: http://www.ssdperformanceblog.com/2012/09/intel-ssd-910-in-tpcc-mysql-benchmark/ ...seems like the ioDrive's performance improvement is only marginal. Anybody have links to benchmarks or first-hand experience they can share?
  6. Considering that enterprise SSDs are 10-20x more expensive than consumer SSDs, is anyone using consumer SSDs in a RAID for a production DB server? If so, can you share your RAID configuration, which drives you used, what kind of performance you get, and how the SMART counters are predicting remaining drive life over time?
  7. If there were a Kickstarter for a PCIe DDR3 drive, I would pay. I'm surprised the DRAM storage vendors don't offer products like this. Actually, Violin has these things called "memory cards," but there's not much info on the page, and judging by the capacity I'd guess they're flash-based: http://www.violin-memory.com/products/velocity-pcie-cards/
  8. Areca 1882

     Has anyone tried a dual-rank 16GB DIMM in the 1882? http://www.acmemicro.com/ShowProduct.aspx?pid=10099
  9. Would be nice to see a real competitor to ioDrive.
  10. Fusion ioDrive benchmarking

      ioDrive seems perfect for use as a 2nd-level cache with FancyCache.
  11. Assuming they would all work the same way, I'm considering the MegaRAID SAS 9280. I'm curious whether this controller works like the Areca, in that you can set up one underlying RAID utilizing all of your drives and then define multiple logical volumes on top of it. It's for a new VM server, and I find this feature extremely helpful for setting up pass-through volumes for my VMs.
  12. 8 months ago I purchased 50 X25-M gen2 160GB drives here, and so far 1 has died. I didn't try to recover the failed drive; I just sent it in for RMA replacement since I didn't need the data.
  13. I suppose randman's result could actually be from the cache difference. I just noticed in the screenshot that the total test size is set to 2GB, so it's less than the cache on one card and not the other.
  14. The 1880 may have a more sophisticated RoC, but for RAID 0 performance I'm thinking raw throughput might be the dominating factor. Also, I think there's some overhead when SAS controllers talk to SATA devices. I'm interested to see the benchmarks. So far it looks like the 1880 has quite a gap to make up, and I'm not sure the additional cache will close it.
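The Intel 910 vs ioDrive2 question in post 5 above largely reduces to cost per IOPS. A minimal sketch of that arithmetic, using only the spec-sheet numbers quoted in the post (the prices and IOPS figures are the post's approximations, not measured results):

```python
# Rough $/IOPS comparison using the spec numbers quoted in post 5.
# These are vendor/forum approximations, not benchmark measurements.

def dollars_per_kiops(price_usd, iops):
    """Cost in dollars per 1,000 IOPS."""
    return price_usd / (iops / 1000)

intel_910 = {"price": 4000, "read_iops": 180_000, "write_iops": 75_000}
iodrive2 = {"price": 10_000, "read_iops": 900_000}  # "allegedly over 900k"

print(f"Intel 910 reads:  ${dollars_per_kiops(intel_910['price'], intel_910['read_iops']):.2f}/kIOPS")
print(f"Intel 910 writes: ${dollars_per_kiops(intel_910['price'], intel_910['write_iops']):.2f}/kIOPS")
print(f"ioDrive2 reads:   ${dollars_per_kiops(iodrive2['price'], iodrive2['read_iops']):.2f}/kIOPS")
```

On paper the ioDrive2 is actually cheaper per claimed read IOPS (~$11/kIOPS vs ~$22/kIOPS) despite the 2.5x sticker price, which is why the marginal gain in the linked TPC-C run is surprising: under a real DB load the spec-sheet gap seems to mostly disappear.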
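For the drive-life tracking asked about in post 6, one common back-of-envelope approach is a linear extrapolation from a SMART wear attribute (e.g. Intel's Media_Wearout_Indicator, which counts down from 100). The attribute name and the steady-workload assumption are mine, not from the post; a sketch:

```python
# Linear extrapolation of remaining SSD life from a SMART wear indicator.
# Assumes an Intel-style attribute that counts down from 100 to 0, and a
# write workload that stays roughly constant over time.

def days_remaining(wearout_now, days_in_service, wearout_start=100):
    """Estimate days until the wear indicator reaches 0."""
    used = wearout_start - wearout_now
    if used <= 0:
        return float("inf")  # no measurable wear yet
    wear_per_day = used / days_in_service
    return wearout_now / wear_per_day

# e.g. indicator at 97 after 240 days in service:
# 3 points used in 240 days -> 0.0125 points/day -> 7760 days left
print(round(days_remaining(97, 240)))
```

The raw attribute values would come from something like `smartctl -A` on each member drive; in a RAID the interesting comparison is whether the members wear evenly or one drive burns down faster than the rest.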