Adam_a

VMware VSAN 6.2 All-Flash Review Discussion

VMware VSAN 6.2 gives organizations the chance to leverage flash in new and exciting ways, driving both efficiency and performance gains. Highlights of the new release include erasure coding (RAID-5/6 support), deduplication, and compression, all of which help get the most out of flash from a capacity standpoint. While the data reduction benefits will vary based on workload and configuration, it's reasonable to expect a 3-6x capacity gain. In practice, that means a relatively affordable 1TB drive can effectively deliver 3-6TB. This benefit alone makes 6.2 a worthy upgrade and gives flash a great place to excel.
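As a back-of-the-envelope illustration (a minimal sketch; the 3-6x ratios are workload-dependent estimates, and the 1TB drive is just the example above):

```python
# Sketch: effective capacity of a flash device under VSAN 6.2's
# deduplication + compression, using the workload-dependent 3-6x
# range quoted above. Ratios are estimates, not guarantees.
def effective_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Logical capacity delivered after data reduction."""
    return raw_tb * reduction_ratio

raw = 1.0  # the "relatively affordable 1TB drive" above
for ratio in (3.0, 6.0):
    print(f"{raw:.0f}TB raw at {ratio:.0f}x reduction ~ "
          f"{effective_tb(raw, ratio):.0f}TB effective")
```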

 

VMware VSAN 6.2 All-Flash Review

=== Disclosure: Pure Storage employee ===

Dear Storage Review Team,

I love the intent of standardized benchmark results - except you consistently omit most of the deployment details that may significantly skew the results.
Case in point with the VSAN review.
An all-flash VSAN is experiencing 50ms of latency?
Why?
Is data protection (mirror or erasure coding, and at what FTT level: 0, 1, or 2) impacting performance?
Is data reduction (deduplication and compression) enabled and causing issues?

A bit more detail would go a long way toward identifying whether the VSAN was configured for availability, affordability, or performance.
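To make the protection question concrete, here's a rough sketch of the raw-capacity multiplier each policy implies (figures are the standard VSAN 6.2 space overheads; the performance cost of each is the open question):

```python
# Rough sketch: raw-capacity multiplier implied by each VSAN 6.2
# data-protection policy. Figures are the documented space overheads
# for mirroring vs. erasure coding; the performance impact of each
# is exactly what the review should spell out.

# (policy, FTT) -> raw capacity consumed per unit of usable data
OVERHEAD = {
    ("No protection", 0): 1.0,
    ("RAID-1 mirror", 1): 2.0,            # 2 full copies
    ("RAID-1 mirror", 2): 3.0,            # 3 full copies
    ("RAID-5 erasure coding", 1): 4 / 3,  # 3 data + 1 parity
    ("RAID-6 erasure coding", 2): 6 / 4,  # 4 data + 2 parity
}

usable_tb = 10  # hypothetical usable-capacity target
for (policy, ftt), multiplier in OVERHEAD.items():
    print(f"{policy}, FTT={ftt}: "
          f"{usable_tb * multiplier:.1f}TB raw for {usable_tb}TB usable")
```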

-cheers,
v

The charts are very clear as to whether DR (data reduction) is enabled or not (it's labeled!). RAID-5/EC support is offered, but we tested the standard mirror profiles. On latency, that figure is BMF (Benchmark Factory)-reported SQL Server latency, not storage I/O latency. We have comparable numbers against traditional SAN, as well as SAN with a dedupe appliance, if you look at similar reviews. SQL latency under a 60,000-user load is a tad different than a vdbench/FIO result.

If you've played around with application testing before, you'll have seen how its latency behaves differently from synthetic IOPS/latency figures. Sysbench also shows similar values across the board.

Here is a good example of an all-flash array before and after deduplication comes into play.

[Chart: Permabit SQL Server output, average latency]

[Chart: Permabit Sysbench average latency]

Sorry to resurrect an old thread, but I thought your review was more relevant to my question than starting a new one. I'm in the process of putting together a small all-flash VSAN cluster and had some questions about the thought process that went into building your test cluster. I see that you used 4 disk groups per host and 5 capacity disks per disk group. Was there a specific reason you used 5 SSDs per disk group?

I recently had a conversation with a VMware engineer who stated that the number of disks used for the capacity tier of a disk group in an all-flash configuration should not impact read performance to a great extent. Unfortunately he didn't have any concrete numbers, but it makes me wonder if there's more benefit in using the disk slots for additional disk groups rather than for multiple capacity-tier SSDs. I believe the current maximum number of disk groups is 5, while your test cluster had 4.

Is there any way you could do a follow-up review showing how all-flash performance scales when going from 2 to 3 to 4 to 5 disk groups, and how the number of capacity-tier SSDs affects this? There's a lot of information out there stating that multiple disk groups increase performance, but I have not been able to find a reputable review showing how much of an improvement is gained by adding disk groups. And I haven't seen anything at all about the number of SSDs used in the capacity tier affecting performance.
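For context, the slot-budget side of the tradeoff is easy to sketch out; it's the performance side I can't find data on. A rough enumeration, assuming a hypothetical 24-bay host and the published limits of 5 disk groups per host and (if I'm reading the docs right) 7 capacity devices per group:

```python
# Sketch: enumerate all-flash VSAN disk-group layouts that fit a
# hypothetical 24-bay host. Each disk group = 1 cache SSD + N
# capacity SSDs. Assumed limits: at most 5 disk groups per host
# and up to 7 capacity devices per disk group.
BAYS = 24
MAX_GROUPS = 5
MAX_CAP_PER_GROUP = 7

for groups in range(2, MAX_GROUPS + 1):
    for cap in range(1, MAX_CAP_PER_GROUP + 1):
        drives = groups * (1 + cap)  # cache + capacity per group
        if drives <= BAYS:
            print(f"{groups} groups x (1 cache + {cap} capacity) = "
                  f"{drives} drives, {groups * cap} capacity SSDs total")
```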

Our disk group count was governed by the number of cache SSDs we had. We needed an even split, so that's how we ended up with 4 groups of 4+1. Building a setup from scratch, you'd probably want to leverage something that makes more sense from a cost perspective; as we saw, the SSDs weren't a limit on VSAN's performance, it was more that we hit the top-end numbers in general for the platform at the time we reviewed it.
