Peter Enoch

VRTX SSD Performance, what to expect?


11 minutes ago, Brian said:

That may be the case, but even so, VRTX wouldn't be my first choice for an IOPS-centric all-flash deployment.

We can agree on this now :(

But we had no idea that the Shared PERC controller performed SO badly. If the controller were great, this box would have been ideal for us and our new solutions.

Any other ideas for a new solution for us?

Currently we're thinking about M630 blades for our M1000e blade center, and then maybe one or two EqualLogic PS6210XS arrays or one EqualLogic PS6210S.

We just "loved" the idea of one small unit (VRTX) that could handle 4x blades and shared storage in 5U, but maybe that's not possible with any vendor?

 



Our suggestion probably requires a new thread with a lot of detail around capacity and performance needs and a rough budget. Do that and we'll be happy to toss suggestions your way. 

5 minutes ago, Brian said:

Our suggestion probably requires a new thread with a lot of detail around capacity and performance needs and a rough budget. Do that and we'll be happy to toss suggestions your way. 

Hi Brian,

I will try to make a new thread about that subject :D


On platforms, I really like the Dell FX2. 2U, holds 4 nodes... basically the VRTX with the storage chopped off. That opens up space for a storage system in the remaining 3U compared to the VRTX.


Yes, I also had a look at the FX2, but picked the VRTX because we thought it could perform with the SSDs.

It's hard for us to find the right storage, but maybe it could be vSAN or EQL.


VSAN all-flash would deliver higher performance than you are seeing right now... with deduplication and compression helping you out on capacity.

22 hours ago, Peter Enoch said:

4K, 100% write, 100% random, 16 workers, 16 disk queue length on the VRTX storage for 3 hours: 41,290 IOPS (24x Dell Toshiba SSDs in RAID-6)

4K, 100% write, 100% random, 16 workers, 16 disk queue length on the PowerEdge 730XD for 3 hours: 132,953 IOPS (2x Dell SanDisk SSDs in RAID-1)

So over 3x better write performance. I know RAID-6 has a write penalty, but shouldn't 24x enterprise drives still beat this?

Again, when I have the time I will drive to our datacenter and take 2x Toshiba SSDs to try in the PE730XD system; I think the result will be even better than with the SanDisk drives from Dell.

I'm testing the same for read performance now; here RAID-6 should normally perform better. I'll be back :D

 

 

The read results after a 3-hour run:

4K, 100% read, 100% random, 16 workers, 16 disk queue length on the VRTX storage for 3 hours: 41,272 IOPS (24x Dell Toshiba SSDs in RAID-6)

4K, 100% read, 100% random, 16 workers, 16 disk queue length on the PowerEdge 730XD for 3 hours: 128,703 IOPS (2x Dell SanDisk SSDs in RAID-1)
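For what it's worth, here's a rough back-of-envelope reading of those numbers. This is a sketch only: the write penalties of 2 for RAID-1 and 6 for RAID-6 are textbook values, not measured on this hardware, and RAID-1 reads are assumed to be served from both mirrors.

```python
# Back-of-envelope math on the numbers above. Write penalties are the
# standard textbook values (RAID-1 = 2, RAID-6 = 6), not measured here.

def backend_per_drive(frontend_iops, penalty, drives):
    """Back-end operations per second each drive must absorb."""
    return frontend_iops * penalty / drives

# 4K 100% random write, as measured in this thread
vrtx_write = backend_per_drive(41_290, penalty=6, drives=24)   # ~10,300/drive
r730_write = backend_per_drive(132_953, penalty=2, drives=2)   # ~133,000/drive

# 4K 100% random read: no parity penalty, so penalty = 1
vrtx_read = backend_per_drive(41_272, penalty=1, drives=24)    # ~1,700/drive
r730_read = backend_per_drive(128_703, penalty=1, drives=2)    # ~64,400/drive

print(f"VRTX RAID-6 write:  ~{vrtx_write:,.0f} IOPS per drive")
print(f"730XD RAID-1 write: ~{r730_write:,.0f} IOPS per drive")
print(f"VRTX RAID-6 read:   ~{vrtx_read:,.0f} IOPS per drive")
print(f"730XD RAID-1 read:  ~{r730_read:,.0f} IOPS per drive")
```

The read case is the telling one: with no parity penalty in play, 24 SSDs are being held to roughly 1,700 IOPS each, while 2 SSDs behind the 730XD's local controller deliver around 64,000 each. That points at the Shared PERC rather than the drives.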


Have you tried running the same test, but hitting one of four LUNs from four servers and looking at the aggregate performance?

Also, you will be taking a huge performance hit with RAID-6 versus RAID-1 or RAID-10.

10 hours ago, Kevin OBrien said:

Have you tried running the same test, but hitting one of four LUNs from four servers and looking at the aggregate performance?

Also, you will be taking a huge performance hit with RAID-6 versus RAID-1 or RAID-10.

I've thought about the same thing (if I understand you correctly).

I would try to get all 4 blades installed with Windows 2012 R2 on local drives, connect them all to the same LUN, and run the same tests, just to see whether the combined speed of all 4 tests equals the result I'm seeing now, or whether we get a better combined result.

I know RAID-6 has a huge write hit, but over 24x drives shouldn't it still be faster than 2 drives in RAID-1?

I've also tried with RAID-50 and didn't see any big performance gain.
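On the 24-drives-vs-2 question, a quick hypothetical sanity check. The per-drive figure below is a made-up placeholder, not a spec for these Toshiba or SanDisk models:

```python
# Hypothetical ceiling check: if only the drives were the limit, should
# 24 SSDs in RAID-6 out-write 2 SSDs in RAID-1? PER_DRIVE_IOPS is a
# placeholder assumption, not a measured or spec value.
PER_DRIVE_IOPS = 60_000  # assumed steady-state 4K random write per SSD

def write_ceiling(drives, penalty, per_drive=PER_DRIVE_IOPS):
    """Theoretical frontend write IOPS if the drives are the only bottleneck."""
    return drives * per_drive / penalty

print(f"RAID-6 x24: {write_ceiling(24, 6):,.0f} IOPS")  # 240,000
print(f"RAID-1 x2:  {write_ceiling(2, 2):,.0f} IOPS")   #  60,000

# 24 drives / penalty 6 = 4 drive-equivalents of write capacity, versus
# 2 drives / penalty 2 = 1 drive-equivalent for the RAID-1 pair, so in
# theory the RAID-6 set should win by ~4x regardless of the per-drive
# figure assumed above.
```

So the intuition holds: all else being equal, the 24-drive RAID-6 set should win, and the measured 41K vs 133K gap can't be explained by the RAID level alone.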

 


I tried the test from all four VRTX blades with VMware 6.0. Each blade has the VRTX LUN attached, and each blade has a Windows 2012 R2 VM on it.

I tried a maybe more "real-life" performance test with 16 workers, 16 DQL, 65% read, 35% write and 45% random, still on RAID-6 over 24 SSDs:

1 blade with one VM running the Iometer test: 37,102 IOPS
2 blades each with one VM running the Iometer test: 34,883 IOPS combined
3 blades each with one VM running the Iometer test: 33,699 IOPS combined
4 blades each with one VM running the Iometer test: 32,773 IOPS combined

Same test on the PowerEdge 730XD with 2x Dell SanDisk SSDs in RAID-1: 88,032 IOPS

The PE730XD is a physical server, so it should perform a little better than running in a VM.

Again, my conclusion is that if you buy the VRTX, the controller cannot handle full speed with an all-flash storage unit. It's sad that Dell doesn't have an upgraded controller for this system :(
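A quick way to quantify that scaling behavior, as a small sketch over the figures above:

```python
# Scaling of the shared storage as blades are added (figures from the
# mixed 65/35 test above: blades -> combined IOPS).
results = {1: 37_102, 2: 34_883, 3: 33_699, 4: 32_773}

for blades, combined in results.items():
    print(f"{blades} blade(s): {combined:,} IOPS combined, "
          f"{combined / blades:,.0f} per blade, "
          f"{combined / results[1]:.0%} of the single-blade total")

# Aggregate throughput actually *drops* as blades are added, so a single
# blade's workload is already enough to saturate the Shared PERC.
```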


I'm running into a VRTX project myself currently. I hope to be able to do some testing.

What about latency during your testing? Was it any good?

I think the VRTX has a pretty good price point, and the numbers are not that bad in my opinion given what you pay. The PERC controller in the VRTX system could be hit much harder by parity calculations than some higher-end controllers, especially with 24 disks in RAID. Of course, comparing different RAID types (R1 vs R5, or even a single disk) is not fair. Did you test creating a simple RAID-1 set of 2 drives in the VRTX, or the minimum number of drives for RAID-1 sets, and see what numbers you get?


It really comes down to the workload. Most SMBs, where this machine is targeted, would be fine and are frankly probably still buying disk. Could it do more with flash? No doubt. Anyway, looking forward to your results, Jay.

17 hours ago, JayST said:

I'm running into a VRTX project myself currently. I hope to be able to do some testing.

What about latency during your testing? Was it any good?

I think the VRTX has a pretty good price point, and the numbers are not that bad in my opinion given what you pay. The PERC controller in the VRTX system could be hit much harder by parity calculations than some higher-end controllers, especially with 24 disks in RAID. Of course, comparing different RAID types (R1 vs R5, or even a single disk) is not fair. Did you test creating a simple RAID-1 set of 2 drives in the VRTX, or the minimum number of drives for RAID-1 sets, and see what numbers you get?

Hi, I can't remember the latency numbers from the testing :mellow:

I've tried lots of different numbers of disks in RAID-0, RAID-1, RAID-5, RAID-10, RAID-50 and RAID-6 sets. I haven't seen any big difference in performance.

RAID-0 over 24 disks only doubled the RAID-50 numbers. But RAID-0 is a bit risky :D

Again, I love the box; it's just sad they don't offer an upgraded/high-end controller for this system so it can handle an all-flash solution better.

 


OK, that's good to know actually. I guess there are limits then. However, latency is quite important when judging the numbers. If the 40K-ish IOPS is running at low latency, it's actually not that bad. Somehow, I'm not quite sure that's the case ...

9 hours ago, JayST said:

OK, that's good to know actually. I guess there are limits then. However, latency is quite important when judging the numbers. If the 40K-ish IOPS is running at low latency, it's actually not that bad. Somehow, I'm not quite sure that's the case ...

You're right, latency is also very important. I've run a test now to check the latency.

I tested with 16 KB block size, 16 workers, 16 queues, 70% read, 30% write and 100% random on RAID-50 over 24 disks:

67,939 IOPS with an average latency of 3.77 ms.
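One quick sanity check on that run using Little's Law (outstanding I/O = IOPS × latency), with only the numbers from the post above:

```python
# Little's Law cross-check on the RAID-50 run above.
iops = 67_939
latency_s = 3.77e-3  # 3.77 ms average

in_flight = iops * latency_s
print(f"Implied outstanding I/Os: {in_flight:.0f}")  # ~256

# 16 workers x 16 outstanding I/Os = 256 configured, which matches the
# implied in-flight count almost exactly: the array is fully saturated
# at this queue depth, so deeper queues would mostly add latency,
# not IOPS.
```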

 

 

