Peter Enoch

  1. You're right, latency is also very important. I've run a test now to check the latency. I tested with 16 KB block size, 16 workers, 16 queues, 70% read, 30% write and 100% random on RAID-50 over 24 disks: 67,939 IOPS and average latency 3.77 ms.
  2. Hi, I can't remember the latency numbers from testing. I've tried with lots of different numbers of disks in RAID-0, RAID-1, RAID-5, RAID-10, RAID-50 and RAID-6 sets. I haven't seen any big differences in performance; RAID-0 over 24 disks is only 2x the RAID-50 numbers, and RAID-0 is a bit risky. Again, I love the box, but it's just sad they don't also offer an upgraded/high-end controller for this system so it can handle an all-flash solution better.
  3. I tried the test from all four VRTX blades with VMware 6.0. Each blade has the VRTX LUN attached and each blade has a Windows 2012 R2 VM on it. I tried a maybe more "real-life" performance test with 16 workers, 16 DQL, 65% read, 35% write and 45% random, still on RAID-6 over 24 SSDs. 1 blade with one VM running the IOMeter test: 37,102 IOPS. 2 blades, each with one VM running the test: 34,883 IOPS combined. 3 blades: 33,699 IOPS combined. 4 blades: 32,773 IOPS combined. The same test on a PowerEdge 730XD with 2x Dell SanDisk SSDs in RAID-1: 88,032 IOPS. The PE730XD is a physical server, so it should perform a little better than running in a VM. Again my conclusion is that if you buy the VRTX, the controller cannot handle full speed with an all-flash storage unit. It's sad that Dell doesn't have an upgraded controller for this system.
  4. I've thought about the same thing (if I understand you correctly). I would try to get all 4 blades installed with Windows 2012 R2 on local drives, connect them all to the same LUN, and run the same tests, just to see if the combined speed of all 4 tests equals the result I see now or if we get a better combined result. I know RAID-6 has a huge write hit, but over 24 drives shouldn't it still be faster than 2 drives in RAID-1? I've also tried RAID-50 and didn't see any big performance gain.
  5. The results for read after a 3-hour run: 4K 100% read, 100% random, 16 workers and 16 disk queue length on the VRTX storage for 3 hours: 41,272 IOPS (24x Dell Toshiba SSDs in RAID-6). The same on the PowerEdge 730XD for 3 hours: 128,703 IOPS (2x Dell SanDisk SSDs in RAID-1).
  6. Yes, I just need to know what the vSAN licenses will cost.
  7. Yes, I also had a look at the FX2, but picked the VRTX because we thought it could perform with the SSDs. It's hard for us to find the right storage, but it could be vSAN or EQL maybe.
  8. We have about 100 low- to medium-performance VMware VMs that we want to consolidate from about 15 1U servers to 4 blade servers. We bought and tried a Dell PowerEdge VRTX system with 25 enterprise SSDs, but sadly it seems the controllers in that system cannot perform for an all-flash solution. We currently also have a Dell M1000e blade system and 6-8 different Dell EqualLogic SANs running. I really would like the storage to perform very fast, and that's why we are looking at a hybrid or all-flash solution. From Dell I'm thinking of the EqualLogic 6210XS (hybrid SAS/SSD SAN) or the 6210S (all-flash). The budget is about 100K or less.
  9. Hi Brian, I will try to make a new thread about that subject.
  10. We can agree on this now. But we had no idea that the Shared PERC controller performed SO badly. If the controllers were great, this box would have been ideal for us and for new solutions. Any other ideas for a new solution for us? Currently we are thinking about M630 blades for our M1000e blade center and then maybe one or two EqualLogic PS6210XS, or one EqualLogic PS6210S. We just "loved" the idea of one small unit (VRTX) that could handle 4 blades and shared storage in 5U, but maybe it's not possible with any vendor?
  11. 4K 100% write, 100% random, 16 workers and 16 disk queue length on the VRTX storage for 3 hours: 41,290 IOPS (24x Dell Toshiba SSDs in RAID-6). The same on the PowerEdge 730XD for 3 hours: 132,953 IOPS (2x Dell SanDisk SSDs in RAID-1). So over 3x better write performance. I know RAID-6 has worse write performance, but shouldn't 24 enterprise drives still beat this? Again, when I have the time I will drive to our datacenter and take 2x Toshiba SSDs to try in the PE730XD system; I think the result will be even better than with the SanDisk drives from Dell. I'm testing the same for read performance now; RAID-6 should normally perform better there. I'll be back.
  12. You're right, I think I'll see the same for the 24x SSDs in the VRTX vs. the PE730XD with 2x SSDs, but I'll test it. I'd be surprised, though, if the VRTX test showed a big increase in IOPS.
  13. Hi Kevin, I can do that, but the tests I see from other reviews show almost the same thing: performance over a longer run stays very steady. Or is there something I'm missing? I think you're right about the NVMe storage, but we didn't look that way because it would mean "fixed" storage for each blade (if one PCIe device is mapped to each server). It's just sad to see that this system can't handle SSDs or high load.
  14. Correct, but how do we test differently? I can't move all the workload before I'm pretty sure it will perform fast. I've tested with 3 VMs and it was running OK, but not lightning fast, which is what we expected. I know a test isn't the real world, but if 2x SSDs in the PE730XD already see 4x the performance, then I don't think we should be satisfied with the current numbers we see from the VRTX system. Again, remember that Fault Tolerant mode on the VRTX decreases performance even more than what I've written. I hope to test 2x Toshiba SSDs in the PE730XD, just to see if they outperform the current 2x Dell (SanDisk) SSDs.
  15. The goal is to run about 100 medium-load VMs on this system, but I'm a little worried it will not handle the load. I still don't understand why it seems to underperform so much.
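As a sanity check on the figures in post 1 above: with 16 workers each keeping 16 I/Os outstanding, Little's Law (outstanding I/Os = IOPS × average latency) should roughly hold, and it does. A small Python sketch using only the numbers from the post:

```python
# Little's Law sanity check: outstanding I/Os ~= IOPS * average latency.
# Figures are taken from the IOMeter run reported in post 1.
workers = 16
queue_depth = 16
outstanding = workers * queue_depth          # 256 I/Os in flight

iops = 67_939                                # reported IOPS
avg_latency_s = 3.77 / 1000                  # reported 3.77 ms, in seconds

implied_outstanding = iops * avg_latency_s   # Little's Law: L = X * W
print(round(implied_outstanding))            # 256, matching 16 x 16
```

The two values agree, which suggests the reported latency and IOPS are internally consistent and the array was fully saturated at that queue depth.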
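On the RAID-6 write-penalty question raised in posts 4 and 11, a rough back-of-envelope model is: effective write IOPS ≈ (drives × per-drive write IOPS) / write penalty, where the penalty is 6 for RAID-6 and 2 for RAID-1. The per-drive figure below is an assumed illustrative value, not a measurement from these tests:

```python
# Rough effective-write-IOPS model: raw pool IOPS divided by the RAID
# write penalty (RAID-6 = 6 back-end ops per write, RAID-1 = 2).
def effective_write_iops(drives, per_drive_iops, penalty):
    return drives * per_drive_iops // penalty

# per_drive_iops is an assumed value for an enterprise SSD, for illustration.
per_drive_iops = 30_000

raid6_24 = effective_write_iops(24, per_drive_iops, penalty=6)  # 120000
raid1_2 = effective_write_iops(2, per_drive_iops, penalty=2)    # 30000
print(raid6_24, raid1_2)
```

Even with the full RAID-6 penalty, 24 drives should come out roughly 4x ahead of 2 drives in RAID-1 on paper, which supports the conclusion in the posts that the Shared PERC controller, not the parity math, is the bottleneck.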
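The blade-scaling numbers in post 3 can be summarized as scaling efficiency: if the controller were not the bottleneck, combined IOPS should grow with the number of blades instead of staying flat. A quick tally of the reported figures:

```python
# Combined IOPS per number of active blades, as reported in post 3.
combined = {1: 37_102, 2: 34_883, 3: 33_699, 4: 32_773}

# Efficiency versus ideal linear scaling from the single-blade figure.
for blades, iops in combined.items():
    ideal = combined[1] * blades
    print(f"{blades} blade(s): {iops} IOPS, {iops / ideal:.0%} of linear")
# Total throughput is flat (even slightly falling) as blades are added,
# which points at a shared bottleneck rather than the SSDs themselves.
```

Dropping to roughly half of linear at 2 blades and below a quarter at 4 is the classic signature of a single saturated shared resource, here the Shared PERC controller.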