sroberts

Member
  1. My apologies for MY late reply, Kevin O'Brien. It is so hard to get a focused moment of time these days. Let's see, you asked: "Can you share how you are approaching the test and what results are concerning you?"

     I have our dual-port, dual-controller 10gE SMB 16-drive NAS connected to a 10gE switch via two CAT6 cables. Each port and portal on the NAS has its own IP address. The switch is unmanaged. From the switch I connect to two hosts: an HP Z230 with an Intel X540 dual-port NIC, and a new Mac Pro whose 10gE connection is a SanLink2. This is a closed network.

     I have tested RAID 6 and RAID 10 on the HP, and my maximum throughput is approx. 450 MB/s read and 170 MB/s write, which seems low for such an expensive piece of equipment. I only have 3 GB of DDR RAM on the device. I tested RAID 0 to see where the bottleneck was, but again I get about 465 MB/s read and 190 MB/s write. Strange. I have tested some pretty fast devices on that HP system, so I know the host is capable. Perhaps I have the NIC in the wrong PCIe slot, but I am on a gen3 PCIe slot, so...

     Then I run IOmeter. I connect to the server, which mounts the shared folder, and when I launch IOmeter I delete all the workers except one. 16 threads, 5-minute tests. My tests are as follows:
       4k random read
       4k random write
       4k sequential read
       4k sequential write
       8k random, 70% read / 30% write
       128k sequential read
       128k sequential write

     On the Mac side I have started testing with DigLloyd DiskTester IOPS tests and read-only tests in Terminal:
       4k random read:        disktester iops --xfer 4K --threads 16
       4k random write:       disktester iops --xfer 4K --threads 16
       4k sequential read:    disktester iops --sequential --xfer 4K --threads 16
       4k sequential write:   disktester iops --sequential --xfer 4K --threads 16
       8k random, 70% read / 30% write: ?
       128k sequential read:  disktester iops --xfer 128K --threads 16
       128k sequential write: disktester iops --xfer 128K --threads 16

     You offered: "If it would help, I could create an ICF file for IOMeter that you could run to mimic our FIO tests." Yes, please! I could use all the help I can get. (A rough fio equivalent of the matrix above is sketched after these posts.)

     Because I cannot saturate the 10gE pipe with one host, do you think I am going down the right path by trying to simultaneously run read or write tests on two different hosts? And what do you make of getting essentially the same read/write results for both the RAID 6 and RAID 0 setups?

     Again, thank you so much for your time and patience,
     Steve
  2. Kevin O'Brien, thank you very much for the concise information and the possible data offer.

     Just getting started with fio has been a block for me. How does it work? I have looked online and even emailed the creator (with no response), but it seems like you have to be a command-line ninja just to get it going. Any thoughts on a novice's approach to fio? Specific instructions to duplicate your script implementation? I have just been using IOmeter and hoping for the best, but the unit I am trying to wrap my head around is way underperforming in IOmeter, so I must be doing something wrong. (A minimal fio example is sketched after these posts.)

     As for the specific data you said you might be able to help me with: I am looking for data for the 8k test and the 8K 70% read / 30% write test (throughput [MB/s], average latency and max latency, at 16 threads and a queue depth of 16). Do you have any data for 1MB transfers (throughput [MB/s], average latency and max latency)? Do you ever test with a Mac host, and if so, what synthetic benchmark do you use?

     I am not even close to saturating the 10gE pipe (600 MB/s). I was thinking of using Parallels and IOmeter, but then I would just be testing a Parallels system, which tacks on more layers...

     In the vast ocean of non-responses, thank you, thank you for your replies and help,
     Steve
  3. Hello friends,

     I have really enjoyed learning from your professional performance reviews. However, I am still a novice when it comes to SAN/NAS systems. I am putting together a side-by-side comparison of the dual-controller (HA), 12-bay-or-larger (3.5" drive) NAS systems that you have reviewed, so that I have a better understanding of where each vendor sits on the performance list. I have data from your website for the EMC VNXe 3200, the EchoStreams DuraStreams DSS320 and the Quanta Mesos CB220. I have a few questions that I would greatly appreciate you answering if you can find a little time:

     1) For the 4k random 100% read/write data, the average and max latency, and the 8K and 128K sequential data: was that gathered with fio? It is not always clear whether fio was the synthetic benchmark used.
     2) Were these devices configured in a RAID set with the device's own GUI/utility? In what RAID configuration were the DuraStreams DSS320 and Mesos CB220 data gathered? RAID 5, as in the EMC VNXe3200 review?
     3) Besides the three devices mentioned above, do you have data on any other medium-sized, dual-controller (HA), 10gE NAS systems that I have missed?
     4) According to a 2013 storagenewsletter.com report, EMC shares this market mostly with NetApp. Have you reviewed any of NetApp's devices? (I see the FAS2240-2, but that is a 2.5" drive array and SSD-reliant.)
     5) For the JetStor NAS 1600S, do you have RAID 5 data (as opposed to the RAID 10, 50 and 60 data on your site), so that I can unify the results with the other reviews?
     6) I know that I am asking a lot here, but are you able to send raw data out to me for these devices? I could not possibly afford to do all this testing myself...

     Thank you in advance,
     Steven Roberts
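
A minimal fio sketch for the "how does fio work?" question in the second post above. fio is a single command-line tool (installable via Homebrew on the Mac or as a prebuilt binary on Windows), and one benchmark run is one command with a handful of flags. The mount point, file size and run length below are placeholders and assumptions, not StorageReview's actual script:

    # Minimal fio run against a file on the mounted NAS share.
    # /Volumes/nas_share, the 4g file size and the 300 s runtime are
    # placeholders; adjust them to your own mount point and test plan.
    # Note: fio's default ioengine is synchronous, so --iodepth stays at 1;
    # --numjobs=16 (16 parallel jobs) is used here to keep 16 I/Os in flight.
    fio --name=4k-rand-read \
        --directory=/Volumes/nas_share --size=4g \
        --bs=4k --rw=randread \
        --numjobs=16 --group_reporting \
        --runtime=300 --time_based

fio prints IOPS, bandwidth and latency (average, max and percentiles) for each run, which maps onto the throughput, average-latency and max-latency figures asked about above.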
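
And a sketch of how the full IOmeter matrix from the first post could be approximated with fio, again with an assumed mount point, file size and runtime rather than the actual StorageReview FIO job files:

    #!/bin/sh
    # Hypothetical fio sweep mirroring the IOmeter matrix described above:
    # 16 workers, 5-minute runs, against files on the mounted SMB share.
    TARGET=/Volumes/nas_share    # assumed mount point; change to suit
    COMMON="--directory=$TARGET --size=4g --numjobs=16 --group_reporting --runtime=300 --time_based"

    fio --name=4k-rand-read   --bs=4k   --rw=randread  $COMMON
    fio --name=4k-rand-write  --bs=4k   --rw=randwrite $COMMON
    fio --name=4k-seq-read    --bs=4k   --rw=read      $COMMON
    fio --name=4k-seq-write   --bs=4k   --rw=write     $COMMON
    fio --name=8k-70r-30w     --bs=8k   --rw=randrw --rwmixread=70 $COMMON
    fio --name=128k-seq-read  --bs=128k --rw=read      $COMMON
    fio --name=128k-seq-write --bs=128k --rw=write     $COMMON

The --rw=randrw --rwmixread=70 job covers the 8k 70% read / 30% write case that had no DiskTester equivalent, and running the same script simultaneously from the Z230 and the Mac Pro (against different target files) is one way to try the two-host saturation idea raised in the first post.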