Everything posted by gfody

  1. I use a pair of SATA docks to carry my main system drive (a Crucial M500 SSD) between my laptop (2014 MBP) and my desktop PC. On the laptop I use USB3 (gen1 on the MBP, 5 Gbit) and on the desktop an eSATA port. I've been using this setup for years and was thinking about how I could upgrade to a 950 Pro, but it seems like a waste if I can't also increase throughput. For the MBP that means using the Thunderbolt port, and I could get a Thunderbolt adapter for my PC. The problem is that there don't seem to be any Thunderbolt M.2 adapters, only USB. I found this Thunderbolt SATA dock which I could use with a SATA/M.2 adapter, but that would bottleneck the 950 Pro to SATA speeds again, defeating the purpose. A native Thunderbolt M.2 adapter would be ideal, but nobody seems to be making them.
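The bottleneck reasoning above comes down to raw interface ceilings. A quick sketch using nominal line rates (the 0.8 factor approximates 8b/10b encoding overhead; all figures are nominal spec numbers, not measurements):

```python
# Rough usable-throughput ceilings for the interfaces discussed above.
# Nominal figures only; real-world numbers will be lower.
GBIT = 1e9 / 8  # bytes per second in one gigabit

interfaces = {
    "SATA III (6 Gbit, 8b/10b)":      6 * GBIT * 0.8,   # ~600 MB/s usable
    "USB 3.0 gen1 (5 Gbit, 8b/10b)":  5 * GBIT * 0.8,   # ~500 MB/s usable
    "Thunderbolt 2 (20 Gbit raw)":    20 * GBIT,         # ~2500 MB/s raw
    "950 Pro seq. read (spec)":       2500e6,             # ~2500 MB/s
}

for name, bps in interfaces.items():
    print(f"{name:32s} ~{bps / 1e6:6.0f} MB/s")
```

Either SATA or USB3 gen1 caps the drive at roughly a fifth of its spec sequential read, which is why only a Thunderbolt path avoids wasting the upgrade.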
  2. Does anyone know of an M.2 enclosure with a Thunderbolt interface? The only ones I can find are USB. USB wouldn't be so bad, but my laptop's USB ports are only 5 Gbps. I found this thing, but it's not just an enclosure and I'm not sure what sort of SSD is included. Ideally it would just take an 80mm M.2 drive.
  3. Is the fragmentation resilience really intrinsic to the new 3D layout or is it just controller enhancements?
  4. I'm wondering if anyone has tested and compared these products. The Intel 910 800GB is ~$4k and does 180k/75k IOPS 4k random read/write; the FusionIO ioDrive2 785GB is ~$10k and allegedly does over 900k IOPS. The only head-to-head I could find for db load is here: ...it seems like the ioDrive's performance improvement is only marginal. Anybody have links to benchmarks or first-hand experience they can share?
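The price/performance gap is easier to see in $/IOPS terms. A back-of-envelope sketch using only the vendor-quoted figures from this post (claimed read IOPS and rough street prices, so purely illustrative):

```python
# $/IOPS from the numbers quoted above (vendor claims, approximate prices).
drives = {
    "Intel 910 800GB":         {"price": 4_000,  "read_iops": 180_000},
    "FusionIO ioDrive2 785GB": {"price": 10_000, "read_iops": 900_000},
}

for name, d in drives.items():
    print(f"{name}: ${d['price'] / d['read_iops']:.4f} per claimed read IOPS")
```

By the spec-sheet numbers the ioDrive2 is actually cheaper per IOPS, which is exactly why independent benchmarks under a real db load matter: the question is whether those claimed IOPS show up in practice.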
  5. I just caught the SR review that includes the 910, and it shows them actually much closer than the mysql bench. I think the mysql bench is actually an ioDrive Duo based on the capacity, and I guess the ioDrive2 is a bit slower than the Duo.
  6. Considering that enterprise SSDs are 10-20x more expensive than consumer SSDs, is anyone using consumer SSDs in a RAID for a production db server? If so, can you share your RAID configuration, which drives you used, what kind of performance you get, and how the SMART counters are predicting remaining drive life over time?
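On predicting remaining life from SMART: one rough approach is to sample a wear indicator over time and extrapolate linearly. A hypothetical sketch (the counting-down-from-100 behavior matches Intel's Media Wearout Indicator; `days_remaining` and the sample data are illustrative, not a real API):

```python
# Linear extrapolation of a SMART wear indicator that counts down from 100.
# Illustrative only: assumes a steady write workload over the sample window.
def days_remaining(samples):
    """samples: list of (day, wear_value) tuples, oldest first."""
    (d0, w0), (d1, w1) = samples[0], samples[-1]
    rate = (w0 - w1) / (d1 - d0)   # wear points consumed per day
    if rate <= 0:
        return float("inf")        # no measurable wear in the window
    return w1 / rate               # days until the indicator hits 0

# e.g. the indicator dropped from 100 to 97 over 180 days:
print(days_remaining([(0, 100), (180, 97)]))  # -> 5820.0
```

That example works out to roughly 16 years, which matches the common finding that consumer drives under moderate db write loads wear far slower than the pricing gap suggests.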
  7. If there were a kickstarter for a PCIe DDR3 drive I would pay. I'm surprised the DRAM storage vendors don't offer products like this. Actually, Violin has these things called "memory cards", but there's not much info on the page, and judging by the capacity I'd guess they're flash based.
  8. Areca 1882

    Has anyone tried a dual-rank 16GB DIMM in the 1882?
  9. Would be nice to see a real competitor to ioDrive.
  10. Fusion ioDrive benchmarking

    ioDrive seems perfect for use as 2nd level cache with FancyCache
  11. Assuming they would all work the same way, I'm considering the MegaRAID SAS 9280. I'm curious if this controller works like the Areca in that you can set up one underlying RAID utilizing all of your drives and then define multiple logical volumes on top of it. It's for a new VM server, and I find this feature extremely helpful for setting up pass-through volumes for my VMs.
  12. 8 months ago I purchased 50 X25-M gen2 160GB here, and so far 1 has died. I didn't try to recover the failed drive; I just sent it in for RMA replacement since I didn't need the data.
  13. I suppose randman's result could actually be from the cache difference. I just noticed in the screenshot that the total test size is set to 2GB, so it's less than the cache on one and not the other.
  14. The 1880 may have a more sophisticated RoC, but for RAID0 performance I'm thinking raw throughput might be the dominating factor. Also, I think there's some overhead when SAS controllers talk to SATA devices. I'm interested to see the benchmarks. So far it looks like the 1880 has quite a gap to make up, and I'm not sure the additional cache will do it.
  15. The 1880ix is a SAS controller and the 1231 is a native SATA controller. The 1231 also has a dual-core 1.2GHz Intel chip onboard, whereas the 1880 is rocking an 800MHz PPC chip. Let us know when you get the 4GB installed; I'm suspecting the 1231 might still be faster.
  16. Infiniband is at least a switched low-level protocol. OCZ are just plumbing the PCIe lanes over these cables, hence the need for very high quality cables and short lengths. Basically this lets you take one PCIe slot and connect several RAID controllers to it. Not a bad way to RAID RAIDs IMO; it's nice that the top RAID is still in hardware, since in my plaided setups I've always had to implement the top RAID in software.
  17. The workload might be small random mixed reads and writes, which is most SSDs' worst nightmare. I think only the X25 or C300 does well with that kind of workload. I'm running intensive database apps myself on large RAIDs of X25-Ms with great success. A caching RAID controller is a must in this scenario; I recommend an Areca w/4GB.
  18. My testing was with an ARC-1280 and SATA drives. Scsiport did indeed go much, much higher, and since this was for a heavily IO-bound SQL server that regularly saw 200+ QD, I went with the scsiport driver in production. When I contacted Areca about it they said it was due to a bug in the driver, but I never followed up on that. I'm curious, is your LSI using a scsiport driver?
  19. In my testing with Areca, the storport driver doesn't scale past QD 32, whereas the scsiport scales up to 256. The storport tests showed significantly more IOPS than the scsiport at the same queue depth, though. Sorry, I don't have the data handy.
  20. ICH10R gives decent performance in soft raid 0
  21. What are the specs of your test rig? I'm curious how a RAID of X25-M 160GB G2s stacks up to the ioDrive. I can try running the same bench on a similar-spec machine here.
  22. There's an article with some benchmarks of Adaptec's MaxIQ system here. You have to use X25-Es for the cache, though; it won't let you enable it on X25-Ms. I'd test it myself, but I only have X25-Ms. Tom's conclusion is about what you'd expect: as long as the working set is under the cache size, the performance is great. The real question is what size the working set of your average desktop, gaming, or server workload is. It's hard to say.
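The working-set point can be sketched with a toy model: for a uniformly random access pattern, the cache hit rate is roughly min(1, cache size / working set size). Assuming a 32GB cache (one hypothetical X25-E) purely for illustration; real workloads are skewed, so actual hit rates would be higher:

```python
# Toy model: hit rate of a cache under a uniformly random working set.
# Illustrative numbers only; real access patterns are not uniform.
def approx_hit_rate(cache_gb, working_set_gb):
    return min(1.0, cache_gb / working_set_gb)

for ws in (16, 32, 64, 128):
    print(f"working set {ws:3d} GB -> hit rate ~{approx_hit_rate(32, ws):.0%}")
```

Even this crude model shows why the conclusion is workload-dependent: the benefit falls off quickly once the working set outgrows the cache.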
  23. OCZ Z-Drive

    Not finding much.. here's one, Sandra results :\
  24. OCZ Z-Drive

    They seem to be generally available and reasonably priced (as far as PCIe flash solutions go): 512GB for $1900 quoting 800MB/sec, 1TB for $4400 quoting 1.4GB/sec. Anyone know how these stack up against a RAID of cheaper SATA SSDs, or at least have some good iometer results?