Brian

Why We Don't Have a Nutanix NX-8150 Review Discussion


That brief history on its own is simple fact, but in the interest of full disclosure to our readers, we think it best to provide the full details of what happened. The history is important in understanding not just the way Nutanix operated in this case, but how their attitude could adversely affect those navigating the increasingly crowded and complex world of hyper-converged solutions. Prior to delivering the systems to us, we had multiple conversations about our testing capabilities and methodology. We first deployed the Nutanix NX-8150 cluster on January 20, 2015, assisted by an on-site Nutanix representative.



Well, good work on SR's part. Consider it a lesson learned about Nutanix, though. It's a shame how they're handling it. Our Nutanix stuff works well for what it is.


We have heard several good customer stories about Nutanix. It would be great if you could share more about your implementation and config.



We've been using Nutanix for about six months now. It started for a VDI pilot, but now we've added SQL dev/test and we're planning production Exchange migration.

We deployed the NX-3460 for the VDI, using Citrix, for a pilot group of 150 users. So far we're hearing good things, but of course VDI is a lot more complex than just the infrastructure underneath it. But it's been fast and I haven't had to touch it since we deployed, so that's a win.

For the SQL stuff, we were initially skeptical about their sales team's claims, so we did some pretty extensive testing. We ended up doing a POC and swinging our existing VMs over to their 8000 series boxes. I've got to say, we were blown away by the perf, and it was a lot simpler than our existing NetApp config (which I still like as a filer). I like the flexibility to switch hypervisors, which if nothing else should help me negotiate a better VMware ELA.

All that said, it's disappointing to hear how this went down, but our experience has been really positive from sales to support, and I expect we'll be deploying more soon.


We didn't get to run SQL, sadly, but I'm sure you'll do fine. The main concern is performance under heavy load, and it doesn't sound like you're there yet. Hopefully it works out well for you.

Curious though, how do you test SQL internally?


We didn't have a great method that wasn't synthetic, so we took our reporting server VM, Storage vMotioned it over to Nutanix, and timed the report runs. Roughly 2x faster from what we saw. Hardly scientific, but the results have held, and the DBAs are happy, which is good enough for me. Load is about 75% CPU at peak on the big jobs.
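For anyone wanting to repeat this kind of before/after comparison, a minimal timing harness is easy to put together. This is only a sketch; `report_job` here is a hypothetical stand-in for whatever actually kicks off the report (a pyodbc query, an SSRS call, etc.):

```python
import statistics
import time

def median_runtime(job, runs=5):
    """Run a report job several times and return the median wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        job()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical stand-in for a real report query against the database.
def report_job():
    return sum(i * i for i in range(100_000))

before = median_runtime(report_job)  # measured on the old array
after = median_runtime(report_job)   # measured again after Storage vMotion
print(f"speedup: {before / after:.2f}x")
```

The median is used rather than the mean so a single cold-cache run doesn't skew the comparison.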


Yeah, production is usually the best indicator, to say the least. You mention CPU; generally, how much overhead do you see on the Nutanix CVM side in the day-to-day grind?


Why not download the Nutanix Community Edition and install it on four Dell R730s with NVMe SSDs, or on Supermicro servers,

for a benchmark, and at the same time compare the new Nutanix erasure coding against the old RAID 10-style method?

Alan
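As a side note on the erasure coding vs. RAID 10 comparison Alan suggests: independent of any performance testing, the capacity trade-off can be worked out with simple arithmetic. The stripe widths below are illustrative assumptions, not Nutanix's actual defaults:

```python
def usable_fraction_mirror(copies=2):
    """Two-way mirroring (RAID 10 / replication-factor style) keeps `copies`
    full copies of the data, so only 1/copies of raw capacity is usable."""
    return 1 / copies

def usable_fraction_ec(data_blocks, parity_blocks):
    """Erasure coding stores data plus parity blocks;
    usable capacity is data / (data + parity)."""
    return data_blocks / (data_blocks + parity_blocks)

print(usable_fraction_mirror())   # 0.5 -> 50% of raw capacity usable
print(usable_fraction_ec(4, 1))   # 0.8 -> 80% usable with a 4+1 stripe
```

The capacity win is the whole pitch for erasure coding; the open question a benchmark would answer is what it costs in write and rebuild performance.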


I would also like to see a performance comparison of the following hyper-converged products:

Nutanix vs. VSAN vs. Maxta vs. SimpliVity OmniCube, and any others out there.

There is little to no performance info on these systems.

Alan


Hate to hear how this went down. Our Nutanix experience has been excellent, rock solid stability, great performance, and some of the best support I've ever dealt with. We've been a customer for coming up on 2 years now, literally don't have a single complaint about them.


I would also like to see a performance comparison of the following hyper-converged products:

Nutanix vs. VSAN vs. Maxta vs. SimpliVity OmniCube, and any others out there.

There is little to no performance info on these systems.

Alan

You'd be surprised at how difficult this actually is. VSAN we have, as you know. The others are slow to engage. We are making great progress with Atlantis, however, and expect them to participate. You have to remember that most of the HC guys are pitching ease of use and have little internal depth when it comes to anything other than synthetic testing, so there's quite a bit of trepidation.

Hate to hear how this went down. Our Nutanix experience has been excellent, rock solid stability, great performance, and some of the best support I've ever dealt with. We've been a customer for coming up on 2 years now, literally don't have a single complaint about them.

That's good to hear... don't let our negativity on this situation rub off on your view of the platform. We did see some good stuff, and the multi-hypervisor support and easy-to-use GUI are a good combo. Your point about support is also a favorable one.


Hate to hear how this went down. Our Nutanix experience has been excellent, rock solid stability, great performance, and some of the best support I've ever dealt with. We've been a customer for coming up on 2 years now, literally don't have a single complaint about them.

We've heard that a lot, and it's not surprising. I think the customer interaction they have and the product they offer work really well. They just made some poor decisions when it came to letting us do an honest take on their product.

I would also like to see a performance comparison of the following Hyper Converged Products

Nutanix v VSAN v Maxta v Simplivurt Omnicube v and any others out there.

There is little to no Performance info on these systems

Alan

Saw your tweet and really preferred bringing this up in the forums... so glad you posted here! Currently we have no plans to test software of any sort without the full cooperation of the company involved. For the same reason we wouldn't test a device shipped to us by an unrelated party, we can't start testing community editions for formal reviews. We are getting closer to getting some additional HCI vendors to work with us (Atlantis is gearing up for some tests shortly after VMworld); the main struggle has been acquiring hardware for the reviews.

VMware supplied the VSAN platform (uses slightly different components for their HCL than say what Nutanix might use)

Nutanix supplied the NX-8150 platform (uses slightly different components than VSAN... see where this is going?)

We'd love to work with the other guys, but with most standardizing on similar but still unique hardware, we need the vendor to supply the gear to test. Many don't have budgets for that. While some gear might have components that could easily be swapped out, since we don't own the gear we can't repurpose it for another vendor's project without the owner's permission.


I do see where you're coming from on having to work with the vendors, but with the true SDS vendors that let you install

on standard x86 platforms, I think a true comparison would be to compare their software on the same hardware platform.

For example, Nutanix OEMs to Dell on the R730 platform, and Maxta, Atlantis, and VSAN all support that platform,

so to get a real comparison on performance, in my mind this should be done on the same hardware.

Even Supermicro will do the job: each node with two 12-core Xeon E5 v3s, 128GB RAM, two NVMe SSDs, whatever amount of NL-SAS drives, an LSI HBA, and 10GbE NICs.

I know there is a lot of SDS software out there, but the main interest must be systems that scale out and distribute the load, like VSAN, Nutanix, and Maxta.

I know you can't do SimpliVity, as they use a hardware compression card and don't sell only the software.

Alan

Edited by current77


You bring up a good point, and that's what we were trying to do in most cases: get like-for-like hardware. In the end, though, we prefer dedicated clusters rather than one set of hardware to run 10 software packages on. Having 10 dedicated clusters is better, so we can iterate testing without having to re-Foundation each time. The smaller HC guys can't support getting us hardware, though, which is also a challenge. We're talking a lot of inside baseball here, and maybe we need to, but I don't want to get people too distracted. The end result is that we're actively working with 5 HC vendors and expect at least two more in the lab for review this year.


Shame this happened in this way. We have been a Nutanix customer for several years. We started with a VDI cluster, then moved our production server cluster to Nutanix from HP chassis/blades/SAN when we built a new headquarters 9 months ago.

We have had great results, and have been very satisfied from pre-sales through implementation, and their support has been great.

The practice of vendors wanting to review and approve the testing methodologies of independent reviewers, to present their kit in the most positive light, is nothing new. Nutanix just didn't handle this particularly well, IMO. Hopefully there's not too much backlash for them, as I believe their tech is very good.


Lukas from Nutanix.

We appreciate the opportunity to work with StorageReview.com and the time they provided and spent with us. They made a great effort to make this work, and we weren't able to come through for them.

StorageReview.com had great intentions, and we have a lot of respect for their emphasis on being an independent testing team in a market filled with pay-to-play analysts and testing. We won't argue with their opinion of us; they are entitled to it. We feel blindsided, of course.

We had good intentions with this review as well. We sent our equipment at no cost to StorageReview. I wasn't personally involved in the first 4 months of this engagement, but it appears we definitely started off on the wrong foot, and it's very clear we mismanaged this situation. We didn't treat them like a customer, and that was a big mistake.



A few things I wanted to clarify quickly. We sent the following equipment to StorageReview.com:

Nutanix equipment shipped to StorageReview.com:

4 nodes of Intel Ivy Bridge v2 servers, 2x12 cores @ 2.7GHz, 30M cache, 8.0 GT/s QPI

256 GB of 1600 MT/s RAM

4x800GB SSD per node [edited: I corrected this after I had made a mistake in the initial post here; I had previously received the wrong information. Thank you for correcting me here.]

20x1TB 7200 RPM drives per node



After reviewing our specifications and learning that we were engaged with StorageReview.com, VMware and Chuck Hollis sent the following equipment to StorageReview.com.

VMware VSAN equipment shipped to StorageReview.com:

4 nodes of Intel Haswell v3 servers, 2x14 cores @ 2.6GHz, 35M cache, 9.60 GT/s QPI (later-generation processor, more cores)

256 GB of 2133 MT/s RAM (later-generation memory @ 2133 MT/s, 33% faster)

4x800GB SSD per node

20x1.2TB 10K RPM SAS drives (more expensive hard drives)



http://cpuboss.com/cpus/Intel-Xeon-E5-2697-v3-vs-Intel-Xeon-E5-2697-v2



We got uncomfortable with the difference in systems and felt like we might be getting played somehow. We expressed our concerns to Brian, and that is when the tone of the conversation changed. Our two big issues with the proposed review were the discrepancy in hardware and, as Brian mentioned, that we wanted to ensure the inclusion of filesystem behavior and feature testing scenarios (failures/rebuilds, network utilization, VDI boot storms, clones, snapshots, long-running DB tests, etc.), not just the VMmark and sysbench benchmarks, which we felt didn't show the full picture. We weren't able to come to common ground on the test plan. At no point did we ask to be tested against a competitor on a lower-spec system than what we were being tested on, only one with equivalent hardware. This seems to be a misunderstanding.



Also, we obviously want to do testing that shows the advantages of our architecture and design, and we will be releasing that to the public. Every company does. VMware invented their own benchmark with VMmark. So did NetApp. EMC and Oracle have invented plenty of them. But playing these games does take us away from customer support and engineering development. It also doesn't particularly align with our vision of being more than a storage company, and especially more than a storage company purely defined by benchmarks. We want to maintain a balance and stay focused on supporting real customer deployments and learning what we can do to make our technology a better fit for their requirements and use cases.



Our key benchmark will always be customer success. As exemplified by the feedback on this post and on reddit, we seem to be doing at least OK by many of our customers. We didn't plan to comment on this self-inflicted fiasco, but we really appreciate that plenty of our partners and customers did. It is clear that the entire industry is watching us. Fortunately, we did do Brian and Kevin at least one favor by getting more [edited: social media shares] than any recent article I could find, including their positive review of VSAN.



We wish Brian and Kevin and StorageReview.com the best of luck and will work to become a better company than this engagement has shown. Thank you again to our customers who stood up for us. We owe you better.


Edited by Lukas_Lundell


A few things I wanted to clarify quickly. We sent the following equipment to StorageReview.com:

Nutanix Equipment shipped to StorageReview.com:

4 nodes of Intel Ivybridge v2 servers, 2x12 cores @ 2.7GHz, 30M Cache, 8.0 GT/s QPI,

256 GB of 1600 MT/s RAM

4x400GB SSD per node

20x1TB 7200 RPM drives per node

After reviewing our specifications and that we were engaged with StorageReview.com, VMware and Chuck Hollis sent the following equipment to StorageReview.com

VMware vSAN equipment shipped to StorageReview.com:

4 nodes of Intel Haswell v3 servers, 2x14 cores @ 2.6GHz,35M Cache,9.60GT/s QPI (later generation processor, more cores)

256 GB of 2133 MT/s RAM (later generation memory @ 2133 MT/s 33% faster)

4x800GB SSD per node (double the flash capacity!?)

20x1.2TB 10K RPM SAS drives (more expensive hard drives)

See any issues here? I did.

Thanks for reaching out on the forums, but you are either mistaken or not being truthful about the Nutanix-supplied NX-8150s. This is our build sheet, flash drives are 800GB, matching the VSAN platform:

NX-8150-2697v2 NX-8150, w/2697v2 CPU 4

C-MEM-16GB-DDR3 Option, Memory, 16GB, RDIMM, DDR3 64

C-HDD-1TB-2.5 Option, HDD, 1TB, w/2.5" Carrier 80

C-NIC-10G-2 Option, Config, Dual Port 10GbE NIC 4

C-SSD-800GB-2.5 Option, SSD, 800GB, w/2.5" Carrier 16

C-CBL-3M-SFP+-SFP+ Option, Cable, 3m, SFP+ to SFP+ 8

Also, I don't want to argue semantics, but as we brought up multiple times, VSAN and Nutanix builds generally skew with Nutanix leaning SATA and VSAN leaning SAS. More "expensive" probably isn't the right term to use. A faster capacity tier, yes, but nearly all of the workloads we used were small enough to sit inside flash, so that argument doesn't really play out in testing.

And yes, the traffic spike at the end of the week was nice, but not really earth-shattering. We generally see 4-8x more traffic on HDD reviews than on this particular article. This isn't our first rodeo... we do actually cover lots of industry segments. HCI is growing, but it's not the entire enterprise and consumer storage market. The VSAN article reads and the Nutanix piece right now both match our VNX5200 review, and are about half our VNXe3200 review. Compared to a NAS HDD? That's about 20k views vs. upwards of 800k views for some.


Hello Kevin,

You are right, I made a mistake. I was working from an email that the engineer who installed the system sent me. I have updated my post accordingly with the correct information after pulling the factory manifest that was shipped. Thank you for correcting this.

Regards,

Lukas

Thanks for reaching out on the forums, but you are either mistaken or not being truthful about the Nutanix-supplied NX-8150s. This is our build sheet, flash drives are 800GB, matching the VSAN platform:

C-CBL-3M-SFP+-SFP+ Option, Cable, 3m, SFP+ to SFP+ 8

C-SSD-800GB-2.5 Option, SSD, 800GB, w/2.5" Carrier 16

C-NIC-10G-2 Option, Config, Dual Port 10GbE NIC 4

C-HDD-1TB-2.5 Option, HDD, 1TB, w/2.5" Carrier 80

C-MEM-16GB-DDR3 Option, Memory, 16GB, RDIMM, DDR3 64

NX-8150-2697v2 NX-8150, w/2697v2 CPU 4

Also, I don't want to argue semantics, but as we brought up multiple times, VSAN and Nutanix builds generally skew with Nutanix leaning SATA and VSAN leaning SAS. More "expensive" probably isn't the right term to use. A faster capacity tier, yes, but nearly all of the workloads we used were small enough to sit inside flash, so that argument doesn't really play out in testing.

And yes, the traffic spike at the end of the week was nice, but not really earth-shattering. We generally see 4-8x more traffic on HDD reviews than on this particular article. This isn't our first rodeo... we do actually cover lots of industry segments. HCI is growing, but it's not the entire enterprise and consumer storage market. The VSAN article reads and the Nutanix piece right now both match our VNX5200 review, and are about half our VNXe3200 review. Compared to a NAS HDD? That's about 20k views vs. upwards of 800k views for some.

Edited by Lukas_Lundell


A question for the people getting good performance on SQL workloads: are you using it in the default "devil may care" setup, or have you set it up for data consistency with proper writes to multiple nodes? My main concern with hyper-converged is that not one vendor mentions this in their material, and it could lead to serious data corruption if a node is lost after confirming writes but before copying those writes to a second controller. That setup is fine for VDI but not for SQL, and I'm told setting up for consistency reduces performance considerably, which is logical given the extra steps and processing.

I ask because I've never had the kit to test on, so I'm very interested in whether real-world performance is still great once the settings are data-safe. This is my main reason for sticking with traditional SAN.


It may be worth starting a new thread for your needs, lustyd; as you see, we don't have Nutanix SQL results, though we can divine some things from the MySQL data. We do, however, have SQL data on VSAN, and have other HC systems coming into the lab. Anyway, start a new thread with your needs and we'll see what we can do to help.


Thanks Brian, it's not a general query so much as asking those above who reported good performance whether they had set up the system to ensure consistent data. I know that performance suffers when you do this, because all of the performance on these systems stems from the assumption that we're storing locally in RAM and SSD. To be consistent, we must write off to another node, and both nodes need to write to non-volatile storage. This throws any special sauce out of the window and gives you the performance of a traditional SAN while buying twice the disks (a traditional SAN uses two controllers to all drives to achieve the same). The fact that they reported very good performance means either I'm wrong about the special sauce, or their data is massively at risk.
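The trade-off being described comes down to where in the write path the acknowledgment happens. This toy model (an illustration of the general principle, not any vendor's actual implementation) shows the two policies:

```python
def replicated_write(block, nodes, sync=True):
    """Toy replicated write path; `nodes` are lists standing in for per-node storage.

    sync=True  -> acknowledge only after every replica has persisted
                  (data-safe, but every write pays for remote persistence).
    sync=False -> acknowledge right after the local persist; a node failure
                  before the remote copy lands can lose acknowledged data.
    """
    local, *remotes = nodes
    local.append(block)                    # local persist (RAM/SSD tier)
    ack_point = "after-local-write" if not sync else None
    for r in remotes:
        r.append(block)                    # remote persist over the network
    if sync:
        ack_point = "after-all-replicas"
    return ack_point

a, b = [], []
print(replicated_write("row-1", [a, b], sync=True))   # -> after-all-replicas
```

The sync path adds a network round trip plus a second non-volatile write to every acknowledgment, which is exactly why a data-safe configuration is expected to benchmark slower than the ack-early one.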

