Posts posted by vushvush


  1. Hey Kevin.... so finally I see you managed to put together a RAID array with an amazing 13 GB/s read and 6 GB/s write.... Was this basically just using Storage Spaces to create a RAID 10 array? Have you seen anything else close to this in raw data transfer performance? Is this a single volume or multiple volumes?

    I ask because it says "4 x 12 SSD Storage Spaces Mirror Pools (12 SSDs per HBA)".... were the transfers happening on four volumes or on one RAID 10 volume? As you may recall, I'm looking to build a very high-performance SMB file server, and these numbers seem very interesting.


  2. 6 GB/s on a single controller? Which drives/motherboard are you using?

    I have tried the Areca 1882 (with the v3.0 PCB), the LSI 9286, and the Adaptec 71605. I can max them out at about 3.5 GB/s, but I would really think they should be able to top that, as theoretical speeds of 4.8 GB/s per controller should be possible. I was then hoping to software-stripe the two arrays, but that often produces slower results than either array alone; a quick scaling sanity check is sketched below.
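
    Before blaming the software stripe, it's worth checking whether the two arrays even scale when driven in parallel. Below is a minimal Python sketch of that check; the E:\ and F:\ paths and the pre-created test files (each large enough to defeat the OS cache) are assumptions, and a single process will itself cap well below controller limits, so treat it as a scaling check rather than a peak-throughput measurement.

        # Sanity check: do two arrays scale when read in parallel?
        # Assumes one large pre-created test file per array (paths are
        # hypothetical placeholders).
        import threading
        import time

        FILES = [r"E:\testfile.bin", r"F:\testfile.bin"]
        BLOCK = 1024 * 1024  # 1 MiB sequential reads

        def drain(path, totals, idx):
            """Sequentially read the whole file; record bytes moved."""
            n = 0
            with open(path, "rb", buffering=0) as f:
                while True:
                    chunk = f.read(BLOCK)
                    if not chunk:
                        break
                    n += len(chunk)
            totals[idx] = n

        totals = [0] * len(FILES)
        threads = [threading.Thread(target=drain, args=(p, totals, i))
                   for i, p in enumerate(FILES)]
        start = time.perf_counter()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        elapsed = time.perf_counter() - start
        print(f"aggregate: {sum(totals) / elapsed / 1e9:.2f} GB/s")

    If the aggregate is roughly the sum of the two arrays measured individually, the arrays themselves are fine and the loss is in the stripe layer; if it isn't, the bottleneck sits upstream of the stripe (PCIe lanes, NUMA placement, or CPU).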


  3. So this is a topic I would like to start, in order to try to crack a problem I'm having, and hopefully help other people on the forum trying to accomplish this feat.

    I'm looking to build a Windows Server (2008 or 2012) machine that will have a volume with sequential throughput of 6-8 GBytes per second. I'm not talking about using IOMeter to benchmark the drives directly, but rather a real-world RAID 5, 50, or 10 setup that can get those results through the file system (a sketch of the kind of test I mean is at the end of this post). Currently it seems like I cannot really get past the 4 GB/s mark.

    I think it would carry some value if we could share information and see what the fastest volume is that can be built in such real-world scenarios.

    So far I have tried the latest round of PCIe 3.0 controllers, namely the Areca 1882 (with the v3.0 PCB), the LSI 9286, and the Adaptec 71605, all with fairly similar results.

    Let's get this done!
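
    To make "through the file system" concrete, here is a minimal Python sketch of the kind of test I mean: write a file bigger than RAM sequentially, fsync it, then read it back, timing both passes. The R:\ path is a hypothetical placeholder, and a single Python thread will likely bottleneck well before 6-8 GB/s, so at these speeds you'd run several such streams (or a real file-copy tool) in parallel; the shape of the measurement stays the same.

        # Sequential throughput through the file system, not the raw device.
        # TARGET is a hypothetical path on the array under test; SIZE should
        # comfortably exceed installed RAM so the read pass isn't served
        # from cache.
        import os
        import time

        TARGET = r"R:\bigfile.bin"
        SIZE = 64 * 1024**3      # 64 GiB total
        BLOCK = 8 * 1024 * 1024  # 8 MiB sequential blocks

        buf = os.urandom(BLOCK)  # non-compressible payload

        start = time.perf_counter()
        with open(TARGET, "wb", buffering=0) as f:
            for _ in range(SIZE // BLOCK):
                f.write(buf)
            os.fsync(f.fileno())  # make sure the data actually hit the array
        write_s = time.perf_counter() - start

        start = time.perf_counter()
        with open(TARGET, "rb", buffering=0) as f:
            while f.read(BLOCK):
                pass
        read_s = time.perf_counter() - start

        print(f"write: {SIZE / write_s / 1e9:.2f} GB/s")
        print(f"read:  {SIZE / read_s / 1e9:.2f} GB/s")
        os.remove(TARGET)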