sroberts

Data comparison for dual-controller (HA) 16+ drive NAS devices

Recommended Posts

Hello Friends,
I have really enjoyed learning from your professional performance reviews. However, I am still a novice when it comes to SAN/NAS systems. I am creating a side-by-side comparison of the dual-controller (HA), 12+ (3.5") drive NAS systems that you have reviewed so that I have a better understanding of where each vendor sits on the performance list. I have data from your website for: EMC VNXe 3200, EchoStreams DuraStreams DSS320, and Quanta Mesos CB220. I have a few questions that I would greatly appreciate your answering if you can find a little time:

1) For the 4K random 100% R/W, average and max latency, and the 8K and 128K sequential data: was that data gathered with fio? It's not always clear whether fio was the synthetic benchmark used.
2) Were these devices configured in a RAID set with the device's GUI/utility? In what RAID configuration was the DuraStreams DSS320 and Mesos CB220 data gathered? RAID 5, like the EMC VNXe3200 review?
3) Besides the three devices mentioned above, do you have data on any other medium-sized dual-controller (HA), 10GbE NAS systems that I missed?
4) According to a 2013 storagenewsletter.com report, EMC shares this market mostly with NetApp. Have you reviewed any of NetApp's devices? (I see the FAS2240-2, but that is a 2.5" drive array and SSD-reliant.)
5) For the JetStor NAS 1600S, do you have RAID 5 data (as opposed to the data on your site, which is RAID 10, 50, and 60), so that I can unify the results with the other reviews?
6) I know that I am asking a lot here, but are you able to send me the raw data from these devices? I couldn't possibly afford to do all this testing myself...

Thank you in advance,
Steven Roberts


1) All synthetic tests we perform use FIO 2.0.12.2 when Windows is on the host side (there's a rough sketch of what one of those jobs looks like at the end of this list).

2) In the DuraStreams and CB220 reviews they were configured in RAID10; the VNXe used RAID6.

3) Not yet, although we are working on a Nexenta platform that should have its review posted soon: dual HA controllers with a 36-drive pool.

4) Currently just the FAS2240-2.

5) We didn't test that platform in RAID5... generally we steer clear of RAID5 for production environments and prefer RAID6 for fault tolerance or RAID10 for performance situations.

6) I think we can work that out; any particular areas are you interested in? Our charts are built directly from that raw data... were there other data points you were looking for?
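
To give you a feel for what those jobs look like, here is a stripped-down fio job file along the lines of our 4K random and 128K sequential workloads. Treat it as a rough sketch rather than our exact production script: the target file, size, and runtime are placeholders, and on a Linux host you would swap the ioengine for libaio.

[global]
ioengine=windowsaio   ; libaio on a Linux host
direct=1
iodepth=16
numjobs=16
runtime=300
time_based
group_reporting
size=10g
filename=fio-testfile   ; placeholder - point this at a file on the mapped NAS share

[4k-random-read]
bs=4k
rw=randread

[4k-random-write]
stonewall
bs=4k
rw=randwrite

[128k-seq-read]
stonewall
bs=128k
rw=read

[128k-seq-write]
stonewall
bs=128k
rw=write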


Kevin O'Brien,

Thank you very much for the concise information and the possible data offer. Just getting started with fio has been a block for me. How does it work? I have looked online and even emailed the creator (with no response), but it seems like you have to be a command-line ninja just to get it going. Any thoughts on a novice's approach to fio? Specific instructions to duplicate your script implementation? I have just been using IOMeter and hoping for the best, but the unit that I am trying to wrap my head around is way underperforming in IOMeter, so I must be doing something wrong.

As for the specific data that you said you might be able to help me with: I am looking for data for the 8K test and the 8K 70R/30W test (throughput [MB/s], average latency, and max latency at 16 threads and a queue depth of 16). Do you have any data for 1MB transfers (throughput [MB/s], average latency, and max latency)?

Do you ever test with a Mac host? If so, what synthetic benchmark are you testing with? I am not even close to saturating the 10GbE pipe (600MB/s). I was thinking of using Parallels and IOMeter, but then I'd just be testing a Parallels system, which is tacking on more layers...

In the vast ocean of non-responses, thank you, thank you for your replies and help,

Steve.


Sorry for the late reply, had some folks in the lab Monday and Tuesday!

FIO does require you to be a bit of a Bash or PowerShell wizard to get going, but it isn't terribly hard. That said, IOMeter and FIO will both get you into the same ballpark on performance; FIO is just easier to script and has some nicer features for stressing storage. For what you are doing, IOMeter should work just fine. Can you share how you are approaching the test and what results are concerning you?

If it would help, I could create an ICF file for IOMeter that you could run to mimic our FIO tests. The FIO setup we use has a lot of scripting in place to loop through workloads and run automated in the background... stuff that wouldn't really help you out as much. If you know your way around IOMeter, though, I think it would be a good starting point for you.
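
If you do want to poke at fio directly in the meantime, a single command along these lines gets you in the neighborhood of the 8K 70/30 profile at 16 workers and a queue depth of 16. The filename, size, and runtime are placeholders, and this is a plain invocation rather than our full scripted setup (on Windows you'd use --ioengine=windowsaio):

fio --name=8k-70-30 --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 --bs=8k --iodepth=16 --numjobs=16 --runtime=300 --time_based --size=10g --filename=/mnt/nas/fio-testfile --group_reporting

The output gives you IOPS, bandwidth, and the average/max latency numbers you were asking about.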

On the Mac side there aren't a lot of utilities that work well. FIO and IOMeter can be compiled and run in that environment, but I've never found either of them stable enough for our tests.
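
If you want to experiment on the Mac anyway, Homebrew carries an fio package, so something like the following should get you going against a mounted share (the share path is a placeholder, and you may need to drop --direct=1 since OS X has no O_DIRECT, which means the page cache will flatter your numbers):

brew install fio
fio --name=8k-70-30 --ioengine=posixaio --rw=randrw --rwmixread=70 --bs=8k --iodepth=16 --numjobs=16 --runtime=300 --time_based --size=10g --filename=/Volumes/nas-share/fio-testfile --group_reporting

Just treat anything you get out of that as exploratory, for the stability reasons above.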


My apologies for MY late reply, Kevin O'Brien,

It is so hard to get a focused moment these days. Let's see, you asked:
"Can you share how you are approaching the test and what results are concerning you?"
I have our dual-port, dual-controller 10GbE SMB 16-drive NAS system connected to a 10GbE switch via two CAT6 cables. Each port and portal on the NAS has its own IP address. The switch is non-managed. From the switch I am connecting to two hosts: one HP Z230 with an Intel X540 dual-port NIC, and one new Mac Pro whose 10GbE connection is a SANLink2. This is a closed network.

I have tested RAID 6 and RAID 10 on the HP, and my max throughput is approximately 450MB/s read and 170MB/s write, which seems low for such an expensive piece of equipment. I only have 3GB of DDR RAM on the device. I tested RAID 0 to see where the bottleneck was, but again I get about 465MB/s read and 190MB/s write. Strange. I have tested some pretty fast devices on that HP system, so I know the host is capable. Perhaps I have the NIC in the wrong PCIe slot, but it is a Gen3 PCIe slot, so...

Then I run IOMeter. I connect to the server, which mounts the shared folder, and when I launch IOMeter I delete all the workers except one: 16 threads, 5-minute tests. My tests are as follows:
4k random read
4k random write
4k sequential read
4k sequential write
8k random 70%read 30%write
128k sequential read
128k sequential write
On the Mac side I have started testing with the diglloydTools DiskTester IOPS test and read-only tests in Terminal:
4k random read
disktester iops --xfer 4K --threads 16
4k random write
disktester iops --xfer 4K --threads 16
4k sequential read
disktester iops --sequential --xfer 4K --threads 16
4k sequential write
disktester iops --sequential --xfer 4K --threads 16
8k random 70%read 30%write
?
128k sequential read
disktester iops --sequential --xfer 128K --threads 16
128k sequential write
disktester iops --sequential --xfer 128K --threads 16
"If it would help, I could create an ICF file for IOMeter that you could run to mimic our FIO tests." = Yes, please! I could use all the help I can get.
Because I cannot saturate the 10GbE pipe with one host, do you think I am going down the right path by trying to run read or write tests on two different hosts simultaneously? What do you make of the same read/write results for both the RAID 6 and RAID 0 setups?
Again, thank you so much for your time and patience.
Steve


Here is the IOMeter file I whipped up. I modified the runtime a bit so the test goes longer, but that stuff is up to you. Also make sure you bump up the thread count (workers or managers) as well as the queue depth. Rename it to *.icf.

StorageReview IOMeter Sample.txt
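
If you'd rather build one by hand, the access-specification block inside an ICF is just plain text along these lines. The field order here follows the comment lines Iometer itself writes when it saves a config; double-check it against a file saved by your own Iometer build, since the layout can shift between releases:

'ACCESS SPECIFICATIONS =========================================================
'Access specification name,default assignment
	8K; 70% Read; 100% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	8192,100,70,100,0,1,0,0
'END access specifications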

Going to run through the questions in a bit, just moving through the backlog ;)

