wiredsim

  1. I think there are some ways you could improve the value of your testing to the reader, based on the setup and the applicability of the results to a CPU scaling determination.

     Core count: you ran the same number of 16-core VMs on all CPUs, resulting in massive CPU wait-state performance impacts from over-provisioning the vCPUs on the 7302P (16C/32T) and, to a lesser extent, the 7402P (24C/48T). This largely explains the massive latency increase on the 7302P. I understand these are your usual tests, but in this instance it would have been very useful to run this test at multiple VM counts, such as 1, 2, 4 and 8 SQL VMs; we could then produce a SQL performance-per-$ metric from the results (see the sketch after this list for the arithmetic). As it stands, the review is more an exercise in showing the impact of crossing CPU over-provisioning thresholds in the VMware world with processor-intensive workloads.

     As you allude to in the closing paragraphs, many people are interested in these CPUs in single-socket designs to save on VMware licensing costs. I would love to see a review of the 7402P, 7502P and 7702P models against comparable dual-socket Intel systems, such as the 7502P vs. two Gold 6242 Xeons. If you could run those tests at 2, 4, 8 and 16 VMs, plus add in your VDI testing, that would be an extremely valuable review for many of your readers.

     Thank you for all your hard work providing independent news and reviews! StorageReview is one of my frequent visits, so keep it up!
  2. I'm struggling to find a mid-tier Synology device that supports both M.2 SSDs and 10GbE. I'm being pushed to QNAP, as they actually have products with both of these features; the client's need is high-speed shared storage for video editing, and 1GbE just doesn't cut it. I ended up setting up one environment with a DS1817+ and adding a 10GbE PCIe card, but I had to burn two of the front slots on SATA SSDs. Why oh why, Synology, would you come out with another 1GbE-limited device?
  3. It seems VDBench is better able to take advantage of the distributed performance across the cluster, whereas the SQL Server test is essentially testing a single node / single DB?
  4. Is it just me, or are the SQL Server average latency results and Sysbench average TPS results rather lackluster? Compare against the SC5020, which achieved 6-8ms, or the QSAN XCubeFAS review, which achieved 5ms across the board; 21-26ms is rather poor for the SQL latency. However, further on, the SQL, Oracle and VDI results seem far more competitive. What am I missing?
  5. Can you confirm iSCSI performance is comparable?
  6. What connectivity was used for this review, SC or iSCSI? What speeds were you connected at, and what switching were you using? Thanks!
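
Following up on the over-provisioning and performance-per-$ points in comment 1, here is a minimal sketch of the arithmetic. It assumes the review's fixed setup was 4x 16-vCPU SQL VMs (an assumption, not stated in the review); the CPU prices are approximate launch list prices, and the TPS figures are invented placeholders rather than measured results.

```python
def overprovision_ratio(num_vms: int, vcpus_per_vm: int, threads: int) -> float:
    """vCPU-to-logical-thread ratio; above 1.0 the host is over-provisioned
    and guests accumulate CPU ready/wait time."""
    return (num_vms * vcpus_per_vm) / threads

# name: (cores, threads, approximate launch list price in USD -- an assumption)
cpus = {
    "EPYC 7302P": (16, 32, 825),
    "EPYC 7402P": (24, 48, 1250),
    "EPYC 7502P": (32, 64, 2300),
}

NUM_VMS, VCPUS_PER_VM = 4, 16  # assumed fixed test setup: 4x 16-vCPU SQL VMs

for name, (_cores, threads, price) in cpus.items():
    ratio = overprovision_ratio(NUM_VMS, VCPUS_PER_VM, threads)
    flag = "over-provisioned" if ratio > 1.0 else "fits"
    print(f"{name}: {NUM_VMS * VCPUS_PER_VM} vCPUs on {threads} threads "
          f"-> ratio {ratio:.2f} ({flag})")

# Performance per dollar: aggregate TPS across all VMs divided by CPU price.
# These TPS numbers are made-up stand-ins for a measured Sysbench aggregate.
hypothetical_tps = {"EPYC 7302P": 4000, "EPYC 7402P": 7500, "EPYC 7502P": 12000}
for name, tps in hypothetical_tps.items():
    price = cpus[name][2]
    print(f"{name}: {tps / price:.2f} TPS per dollar")
```

Under these assumptions the fixed 4-VM load puts the 7302P at a 2.0 vCPU-per-thread ratio while the 7502P sits at exactly 1.0, which is the point of comment 1: the latency gap reflects over-provisioning at least as much as raw CPU speed, and running at varied VM counts would let a real TPS-per-dollar comparison fall out of the same math.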