About lustyd
  1. Thanks Brian, it's not a general query so much as asking those above who reported good performance whether they had set up the system to ensure consistent data. I know that performance suffers when you do this, because all of the performance on these systems stems from the assumption that we're storing locally in RAM and SSD. To be consistent we must also write to another node, and both nodes need to write to non-volatile storage. This throws any special sauce out of the window and gives you the performance of traditional SAN while buying twice the disks (traditional SAN uses two controllers to all drives to achieve the same). The fact that they reported very good performance means either I'm wrong about the special sauce, or their data is massively at risk.
  2. A question for the people getting good performance on SQL workloads - are you using it in the default "devil may care" setup, or have you set it up for data consistency with proper writes to multiple nodes? My main concern with hyperconverged is that not one vendor mentions this in their material, and it could lead to serious data corruption if a node is lost after confirming writes but before copying those writes to a second controller. This setup is fine for VDI but not for SQL, and I'm told that setting up for consistency reduces the performance considerably, which is logical given the extra steps and processing. I ask because I've never had the kit to do testing on, so am very interested to know whether real-world performance is still great once the settings are data safe. This is my main reason for sticking with traditional SAN.
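
The failure window described above - a node confirming a write to the client before the copy reaches a second node - can be sketched in a few lines. This is a hypothetical toy model (the `Node`, `write`, and `read_surviving` names are mine, not any vendor's API), just to show why the "devil may care" fast path loses acknowledged data when the acknowledging node dies:

```python
# Toy model of two storage nodes; "storage" stands in for non-volatile media.
class Node:
    def __init__(self, name):
        self.name = name
        self.storage = {}
        self.alive = True

def write(key, value, primary, replica, sync_replication):
    """Write to the primary; copy to the replica before acking only if
    sync_replication is True (the 'consistent' setup)."""
    primary.storage[key] = value
    if sync_replication:
        replica.storage[key] = value  # second copy exists *before* the ack
    return "ack"  # client now believes the write is durable

def read_surviving(key, nodes):
    """Read from whichever node is still alive; None means the
    acknowledged data is gone."""
    for n in nodes:
        if n.alive and key in n.storage:
            return n.storage[key]
    return None

# Fast path: ack before replication ("devil may care").
a, b = Node("a"), Node("b")
write("row1", "committed", a, b, sync_replication=False)
a.alive = False  # node dies after the ack, before the background copy
print(read_surviving("row1", [a, b]))  # None - acknowledged write lost

# Consistent path: replicate to the second node before acking.
a, b = Node("a"), Node("b")
write("row1", "committed", a, b, sync_replication=True)
a.alive = False
print(read_surviving("row1", [a, b]))  # "committed" - write survives
```

The extra hop to the replica inside `write` is exactly the cost being discussed: the ack can't go out until a second node has the data, which is why the data-safe setup is slower.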