About vushvush

  1. Are these numbers for real? Is this on some kind of ridiculously compressible data? If not... wow
  2. Says available today.... but no pricing is mentioned. Is this only available in bulk, or will there be a way to purchase these through CDW, for instance?
  3. It could be multiple files to take advantage of threading better..... but yeah, single volume. Does it have something to do with syncing the request stack?
  4. Why would it be limited to the 1 HBA? If you did a Storage Spaces pool, wouldn't it use all 4 HBAs' bandwidth?
  5. Is that code for yes too late?
  6. Is it too late to try measuring performance on a single volume?
  7. Hey Kevin.... so finally I see you managed to put together a RAID array with an amazing 13 GB/s read and 6 GB/s write.... Was this basically just using Storage Spaces to create a RAID 10 array? Have you seen anything else close to this in raw data transfer performance? Is this a single volume or multiple volumes? I ask because it says "4 x 12 SSD Storage Spaces Mirror Pools (12 SSDs per HBA)".... were the transfers happening on four volumes or one RAID 10 volume? As you may recall, I'm looking to build a very high-performance SMB file server, and these numbers seem very interesting.
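For context on whether those review figures are even plausible, here is a rough back-of-envelope sketch. The per-drive numbers it derives are assumptions for illustration, assuming all 48 SSDs contribute to reads while mirroring halves usable write bandwidth:

```python
# Plausibility check for "4 x 12 SSD mirror pools, 13 GB/s read / 6 GB/s write"
# (aggregate figures from the review; everything derived here is an estimate).
total_ssds = 4 * 12          # 12 SSDs per HBA, 4 HBAs

read_gbps = 13.0             # reported aggregate sequential read, GB/s
write_gbps = 6.0             # reported aggregate sequential write, GB/s

# A two-way mirror can service reads from either copy, so all drives help.
per_drive_read = read_gbps * 1000 / total_ssds           # MB/s per SSD

# Every write lands on two drives, so only half the drives' write
# bandwidth counts as usable capacity.
per_drive_write = write_gbps * 1000 / (total_ssds / 2)   # MB/s per SSD

print(f"~{per_drive_read:.0f} MB/s read, ~{per_drive_write:.0f} MB/s write per SSD")
```

That works out to roughly 270 MB/s read and 250 MB/s write per drive, which is well within what a single 6 Gb/s SAS/SATA SSD can sustain, so the aggregate numbers are at least arithmetically possible.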
  8. vushvush

    EchoStreams FlacheSAN1L-D Review Discussion

    It's not even the network storage layer..... it's the RAID process in general. I've still yet to see a single benchmark where any type of RAID performs writes at over 3 GB/s.
  9. vushvush

    EchoStreams FlacheSAN1L-D Review Discussion

    Hey Kevin, I've chatted with you about this before.... but it seems like the speed (MB/s) of raw drives keeps going up as newer adapters/protocols (PCIe 3.0, 12 Gb SAS, NVMe, etc.) are introduced, yet put the drives into an array and the performance drops (specifically the array's write ability). Is this something that will ever be addressed, in your mind?
  10. I understand the reads being pretty close between RAID 0 and RAID 10, but why are the write speeds nearly identical? I'd assume almost double the write performance....
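On paper the write gap should exist, since a mirror commits every block twice. A quick idealized model (the drive count and per-drive speed below are made-up example figures) shows the expected 2x difference, and also how a controller bottleneck below both ceilings would make the two levels measure the same:

```python
# Idealized sequential-write model for an N-drive array.
# per_drive_mbps and n_drives are illustrative assumptions, not benchmarks.
def raid_write_mbps(n_drives, per_drive_mbps, level):
    if level == "raid0":
        return n_drives * per_drive_mbps          # every drive gets unique data
    if level == "raid10":
        return (n_drives // 2) * per_drive_mbps   # each block written twice
    raise ValueError(level)

drives, per_drive = 16, 400   # hypothetical 16 SSDs at 400 MB/s each

r0  = raid_write_mbps(drives, per_drive, "raid0")    # 6400 MB/s ideal
r10 = raid_write_mbps(drives, per_drive, "raid10")   # 3200 MB/s ideal

# If the controller/PCIe path tops out below both ideals, both levels
# report the cap -- consistent with "nearly identical" write results.
controller_cap = 3000
print(min(r0, controller_cap), min(r10, controller_cap))  # 3000 3000
```

So near-identical RAID 0 and RAID 10 write numbers usually point at a bottleneck upstream of the disks (controller, PCIe slot, or software stack) rather than at the drives themselves.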
  11. So how do these differ from the Re?
  12. vushvush

    Ultra Fast Real world RAID drive

    OK, the new Octal notwithstanding, as its price is beyond enterprise levels, everything else is still at 2 GB/s. I don't even know if I believe their 6 GB/s claim, as it sits on PCIe 2.0, which is capped at 4 GB/s anyway. So I think there is some fluff going on with their numbers.
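The 4 GB/s ceiling follows directly from the PCIe 2.0 link math (the x8 slot width is an assumption, typical for cards of that class):

```python
# Usable PCIe 2.0 bandwidth per lane and per slot.
# The x8 lane count is assumed; PCIe 2.0 rates are per the spec.
gt_per_s = 5.0                  # PCIe 2.0 raw signaling rate, GT/s per lane
encoding = 8 / 10               # 8b/10b line coding: 20% overhead
lane_mb_s = gt_per_s * encoding * 1000 / 8    # 500 MB/s usable per lane

lanes = 8                       # typical slot width for a RAID/SSD card
slot_gb_s = lane_mb_s * lanes / 1000          # 4.0 GB/s for the whole slot

print(slot_gb_s)   # a sustained 6 GB/s claim cannot fit through this link
```

Unless the card were x16 (rare for that generation of SSD cards), a 6 GB/s sustained figure over a PCIe 2.0 x8 link is physically impossible, which supports the skepticism above.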
  13. vushvush

    Ultra Fast Real world RAID drive

    All PCIe SSD drives are still under 2 GB/s, although they usually work in parallel in custom setups (HPC, DB servers, etc.). My need is high-throughput single-volume storage.... tricky
  14. vushvush

    Ultra Fast Real world RAID drive

    I have tried the Areca 1882 (with the v3.0 PCB), the LSI 9286, and the Adaptec 71605. I can max them out at about 3.5 GB/s, but I would really think they should be able to top that, as theoretical speeds of 4.8 GB/s per controller should be possible. I was then hoping to soft-stripe the two drives, but that often produces results slower than either drive on its own.
  15. So this is a topic I would like to start, in order to try to crack a problem I'm having, and hopefully help other people in the forum trying to accomplish this feat. I'm looking to build a Windows Server (2008 or 2012) that will have a drive with a sequential throughput of 6-8 GBytes per second. I'm not talking about using IOMeter to benchmark the drives directly, but rather a real-world example of a RAID 5, 50, or 10 that can get those results through the file system. Currently it seems like I cannot really get past the 4 GB/s mark. I think it would carry some value if we could share information and see what the fastest drive is that can be built in such real-world scenarios. Currently I have tried the latest round of PCIe 3.0 controllers, like the Areca 1882 (with the v3.0 PCB), the LSI 9286, and the Adaptec 71605, all with fairly similar results. Let's get this done!
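For comparing results "through the file system" rather than with IOMeter, a minimal sequential-write probe like the following gives comparable numbers across setups. This is a sketch: the path is a placeholder, and a real run should use a file much larger than RAM so the OS cache doesn't inflate the result.

```python
import os
import time

def seq_write_mbps(path, total_mb=4096, block_mb=4):
    """Time a buffered sequential write through the file system, in MB/s."""
    block = b"\0" * (block_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # ensure the data actually reached the array
    return total_mb / (time.perf_counter() - start)

# Example (path is a placeholder for the RAID volume under test):
# print(f"{seq_write_mbps('E:/bench.tmp'):.0f} MB/s sequential write")
```

Zero-filled blocks are fine for spinning disks and plain SAS/SATA SSDs, but note they overstate results on controllers or drives that compress or dedupe data; random buffers would be the safer choice there.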