Kevin OBrien

EchoStreams FlacheSAN2 Custom Flash Array Build Discussion

37 posts in this topic

StorageReview has started testing the FlacheSAN2, a 48-bay 2U storage array from EchoStreams. The platform features 48 densely packed 180GB Intel SSD 520s, five PCIe 3.0 LSI 9207-8i HBAs, and three dual-port Mellanox ConnectX-3 56Gb/s InfiniBand adapters, all built around a custom dual-processor Intel motherboard. Follow this log for the ongoing build process and benchmark results.
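For a sense of scale, a quick back-of-the-envelope sketch in Python from the listed specs alone (nothing measured yet, and the 56Gb/s figure is the FDR signalling rate rather than usable throughput):

    # Rough scale of the build, computed from the listed specs only.
    drives, drive_gb = 48, 180
    raw_tb = drives * drive_gb / 1000      # 8.64 TB of raw flash
    ib_ports, port_gbps = 3 * 2, 56        # three dual-port ConnectX-3 FDR cards
    fabric_gbps = ib_ports * port_gbps     # 336 Gb/s aggregate signalling rate
    print(f"{raw_tb:.2f} TB raw flash, {fabric_gbps} Gb/s aggregate IB signalling")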

We're thrilled to have this rig in the office as part of our expanding array testing. If you have questions or requests please post them here.

Looking forward to seeing the benchmark results. Is it possible for us to purchase the chassis ourselves?

Happy to connect you directly with EchoStreams about that if you want to PM me your contact info.

I'd like to see whether Shogun 2: Total War loading times are faster on this array.

lol

In fact, what you could do is some ridiculous comparisons of this ludicrous array against a single SATA mechanical HDD. Some ideas: Windows install time in a VM, data copy time.

Sure, some business-oriented benchmarks of a journaled database would look more serious and professional, but... I already know what the results would be. What I'd like to see is the fun angle.

m a r c

Fun things are always good :-)

I once installed Win7 onto a RAMDisk via VirtualBox and it finished the file-copy phase in 17 seconds.

About to add some updates today. This past week we received all of our InfiniBand networking gear and are prepping HP DL380p servers with Windows Server 2012 for testing. Looking to get the InfiniBand side going today for some baseline tests.

Thanks for the update, Kevin. Looking forward to it...

peace,

Not sure whether you've used InfiniBand before, but make sure you set up your subnet manager correctly to ensure the network comes up. You'll need one subnet manager per IB fabric.
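A note for anyone following along: on Windows Server 2012 the Mellanox WinOF package includes OpenSM, I believe, and it can run as a service. On Linux, here is a minimal sketch in Python to check whether an SM has swept your ports (device names will vary; a port only reports ACTIVE once a subnet manager has configured it):

    # Minimal sketch (Linux): read InfiniBand port state from sysfs.
    # "4: ACTIVE" means a subnet manager has configured the port;
    # "2: INIT" usually means the link is up but no SM is running on the fabric.
    from pathlib import Path

    for port in Path("/sys/class/infiniband").glob("*/ports/*"):
        state = (port / "state").read_text().strip()
        print(f"{port.parent.parent.name} port {port.name}: {state}")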

Can you also test with 10-gigabit Ethernet? I understand it's going to choke the available bandwidth, but I'm more interested in IOPS. I imagine that with small 4K I/Os, 10-gigabit should still be reasonable, especially a bonded pair.

One other question: what OS or OSes are you planning to use for the testing? Would it be possible for you to test with OpenIndiana build 151a and/or Solaris 11 to verify compatibility with your hardware stack?

We're working now with Windows Server 2012 with SMB 3.0, but we'll run some other tests on alternate OSs as well.

Currently we are working out some firmware/BIOS quirks with our NICs in PCIe 3.0 mode. Once that gets smoothed out, we will start posting numbers from the array B)

New firmware applied to our IB cards looks to have fixed an incompatibility problem we were seeing with the servers. Ten of them were flashed with the new firmware, to give you an idea of the scale of that project B)

Tomorrow we'll dedicate one of the servers to kicking off the IB fabric setup (the other servers are cranking away on other tests right now).

Sounds big, looking forward to seeing it.

Oh yeah....

http://www.storagereview.com/images/StorageReview-Mellanox-InfiniBand.jpg

http://www.storagereview.com/images/StorageReview-EchoStreams-FlacheSAN2-Rear.jpg

B)

On the 10GbE question: let's say you expect a 2M IOPS @ 4K burst like they do (which you can easily get on RAM alone locally anyway). Then you have the following:

2,000,000 IOPS × 4,096 bytes × 8 bits/byte ≈ 65.5 Gbit/s (64 Gibit/s).

Considering overhead, or better-than-expected performance, or other unknowns... I would say link aggregation of those six QSFP ports in 10GbE mode could fall short.

On the other hand, using 512-byte transfers you would still learn the 4K random performance (the IOPS figures are roughly equal) and the traffic would fit in one 10GbE link, but then that wouldn't exactly show how great the Mellanox kit and the EchoStreams barebone are.
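To make that arithmetic reusable, here is the same back-of-the-envelope calculation as a small Python sketch (the 2M IOPS burst figure comes from the posted results; protocol overhead is deliberately ignored):

    # Back-of-the-envelope: line rate needed to carry a given IOPS load.
    def gbps_needed(iops, block_bytes):
        return iops * block_bytes * 8 / 1e9    # payload bits per second -> Gbit/s

    for block in (4096, 512):
        need = gbps_needed(2_000_000, block)
        links = need / 10                      # saturated 10GbE links, ignoring overhead
        print(f"{block:>5} B blocks: {need:5.1f} Gbit/s (~{links:.1f}x 10GbE)")

At 4K that works out to roughly 65.5 Gbit/s (about six and a half 10GbE links), while at 512 bytes the same IOPS fit comfortably inside a single link.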

"The top speed with an outstanding I/O figure of 32, we hit 2,045,787 IOPS 4k read and 1,798,432 IOPS 4k write with an outstanding I/O figure of 4"

How fun is that? Really impressive results, Kevin! And all from a single 2U chassis...

Also, can you shed some light on the test configuration (i.e., workers, managers, test duration, etc.)?

I assume these drives are 'steady state'? Still planning on over-provisioning the drives? Any thoughts on how you plan to organize the array?

Thanks for sharing the results. Very impressive IOPS.

peace,

Actually, the numbers posted today are sustained, not steady-state. We are gearing this platform towards read-intensive workloads, so most of today's work was getting our baseline figures to make sure all the equipment is functional without errors. Nothing like bringing the CPU up to 99.70% utilization ;)

Hi,

What software are you running on the server? Are you able to provide the retail cost for the enclosure without the disks?

thanks,

nick

Not sure if it's generally available yet, but we'll find out what pricing is expected to look like at least.

To answer the software question: currently the SAN is running a pre-release build of Windows Server 2012 Standard, which we will be wiping to start fresh with the RTM that came out yesterday.
