Toshiba MKx001GRZB Enterprise SSD Review Discussion


10 replies to this topic

#1 Kevin OBrien (StorageReview Editor, Admin, 1,396 posts)

Posted 06 January 2012 - 11:56 AM

Today we launched a new review format looking at the Toshiba MKx001GRZB SLC-based enterprise SSD.

Toshiba MKx001GRZB Enterprise SSD Review

#2 johnw42 (Member, 66 posts)

Posted 06 January 2012 - 02:00 PM

I do like the focus on steady-state performance. It is always good to know what the performance will be under near worst-case conditions. I do have a few suggestions:

1) Please document the pre-conditioning procedures used to ensure the tests were conducted at steady state. I assume you did something like continuous 4KiB random writes at QD=32 while monitoring the write speed and waiting for it to stabilize. That procedure (or whatever SR used) should be mentioned in the reviews. Ideally, a pre-conditioning graph of write speed vs. time should be included, so that readers can see at a glance that the SSDs did indeed reach steady state before testing. Such a graph also shows how initial speeds compare to steady-state speeds.

2) Please include one or two of the best consumer drives in your tests as a comparison, say the Corsair Performance Pro or one of the new Plextor models (M2P or M3S). This would be helpful for people who have enterprise-like heavy workloads but who opt to use consumer SSDs instead of enterprise SSDs. I think a lot of people will use consumer SSDs with enterprise-like heavy workloads, because enterprise SSDs often cost four to eight times what a consumer SSD costs. For such people, seeing how much they are giving up with the consumer SSDs vs. the enterprise SSDs will be helpful in deciding whether to pay the extra money for enterprise SSDs.

3) I hope at least some of the new test procedures SR used in this review will be applied to every future consumer SSD review. Perhaps the consumer SSD reviews could report the usual (non-steady-state) data, but then also include steady-state results for the 2MiB sequential, 2MiB random, and 4KiB random tests. That way SR readers could see how the consumer SSDs perform on those three sets of tests, both out of the box and at steady state. I have not seen any other SSD review site show that sort of data, and I think many consumer SSD readers would be surprised by the differences between OOB and SS performance, with some consumer SSDs having a much larger performance degradation than others. It would be good to see this test applied to the SSDs I mentioned in a previous comment: OCZ Vertex 3, Crucial m4, Samsung 830, Corsair Performance Pro. Also the Intel 320, and the Intel 520 when it is released. I think those are the most common consumer SSDs that people might choose to use for an enterprise-like heavy workload.
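For what it's worth, the stabilization check I have in mind in point 1 could be sketched in a few lines of Python. The 5-round window and the 20% excursion / 10% slope limits below are borrowed from the SNIA PTS, not from anything SR has published:

```python
# Sketch of a SNIA PTS-style steady-state check on a series of
# per-round average write speeds (MB/s). Window size and limits are
# the PTS defaults; adjust to taste.

def is_steady_state(samples, window=5, excursion=0.20, slope_limit=0.10):
    """True if the last `window` samples stay within `excursion` of
    their mean, and the best-fit line drifts by less than `slope_limit`
    of the mean across the window."""
    if len(samples) < window:
        return False
    w = samples[-window:]
    mean = sum(w) / window
    # excursion check: max-min spread relative to the window mean
    if max(w) - min(w) > excursion * mean:
        return False
    # least-squares slope over x = 0, 1, ..., window-1
    x_mean = (window - 1) / 2
    num = sum((x - x_mean) * (y - mean) for x, y in enumerate(w))
    den = sum((x - x_mean) ** 2 for x in range(window))
    slope = num / den
    # total drift across the window relative to the mean
    return abs(slope * (window - 1)) <= slope_limit * mean
```

A pre-conditioning loop would then just keep writing, append each round's average speed, and stop once `is_steady_state()` returns True.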

By the way, I notice that many of your graphs still label the axis "MB" instead of "MB/s".

Edited by johnw42, 06 January 2012 - 02:02 PM.

#3 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,396 posts

Posted 06 January 2012 - 03:23 PM

1) On the pre-conditioning front, each test was pre-conditioned with its own specific data type, since steady state varies depending on workload. You can reach steady state quicker through 2MB random writes, but you still need to pre-condition for that specific workload. We are working on a way of outlining the process; for the tech geeks or companies that inquire, we can send the IOMeter prep results. The basic process for each of these areas was to first measure the length of time needed before the drive reached steady state in a given area and then go well beyond that in the actual test (e.g., if it takes 1 hour to hit steady state, do 2 hours of conditioning and another 2-4 hours of actual testing).
2) This is one area we probably won't be able to do officially in the reviews, given the target audience of each drive. In the forums we might decide to pick a few drives to put through the same testing to see how they stack up, but for the main charts we can't compare models targeted at vastly different markets. We have no problem showing those results, just not in the main review.
3) Let's just say that great minds think alike ;). Many of those ideas are already in motion; we just want to get things streamlined so we can handle this in a semi-automated way. Right now the enterprise tests still require user input and adjustment for each step of the test (pre-conditioning monitoring) versus letting a script handle most of the benchmarking process. We definitely plan on moving past just 4K steady state on consumer products, since there has been a huge level of interest in this from readers as well as companies we are working with. The first step might be the steady-state server profiles.

On the graph front, we are actually in the process of revamping them right now. If you look at the Samsung RAID review, you can see a 1.5 version of sorts, which is slowly making its way toward 2.0, which will be completely dynamic.

#4 johnw42 (Member, 66 posts)

Posted 06 January 2012 - 04:19 PM

> 1) On the pre-conditioning front, each test was pre-conditioned with its own specific data type, since steady state varies depending on workload. You can reach steady state quicker through 2MB random writes, but you still need to pre-condition for that specific workload. We are working on a way of outlining the process; for the tech geeks or companies that inquire, we can send the IOMeter prep results. The basic process for each of these areas was to first measure the length of time needed before the drive reached steady state in a given area and then go well beyond that in the actual test (e.g., if it takes 1 hour to hit steady state, do 2 hours of conditioning and another 2-4 hours of actual testing).


Which still does not tell us what preconditioning was done. It is not difficult to describe in a few sentences. For example: WIPC (workload independent pre-conditioning) was done with 4KiB random writes QD=32 0/100 R/W mix until reaching SS. Then WDPC (workload dependent pre-conditioning) was done with 2MiB sequential writes QD=4 0/100 R/W mix for 5 rounds until reaching steady state.
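To make it concrete, here is roughly what that WIPC step would look like as an fio job file. The device path is a placeholder, and running this destroys all data on the target:

```ini
; Hypothetical WIPC job: 4KiB random writes, QD=32, 100% writes,
; time-based, run until the bandwidth log shows steady state.
[global]
ioengine=libaio
direct=1
filename=/dev/sdX   ; placeholder device -- destructive!

[wipc-4k-randwrite]
rw=randwrite
bs=4k
iodepth=32
time_based
runtime=7200
write_bw_log=wipc_bw
log_avg_msec=1000
```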

For a good example of the type of tests and charts that should be included in SSD reviews, both consumer and enterprise, check the "MLC-A Full report" and "SLC-A Full report" links at the bottom of the table on this SNIA page:

http://www.snia.org/forums/sssi/pts

#5 johnw42 (Member, 66 posts)

Posted 06 January 2012 - 04:22 PM

> 2) This is one area we probably won't be able to do officially in the reviews, given the target audience of each drive. In the forums we might decide to pick a few drives to put through the same testing to see how they stack up, but for the main charts we can't compare models targeted at vastly different markets. We have no problem showing those results, just not in the main review.


That is rather vague. Do you mean to say that you will not include less expensive SSDs for comparison in the reviews because you believe it will offend the manufacturers of the SSDs? I suppose that is the problem with relying on manufacturers to provide free review samples rather than purchasing the products to be tested.

#6 johnw42 (Member, 66 posts)

Posted 06 January 2012 - 04:29 PM

> 3) Let's just say that great minds think alike ;). Many of those ideas are already in motion; we just want to get things streamlined so we can handle this in a semi-automated way. Right now the enterprise tests still require user input and adjustment for each step of the test (pre-conditioning monitoring) versus letting a script handle most of the benchmarking process. We definitely plan on moving past just 4K steady state on consumer products, since there has been a huge level of interest in this from readers as well as companies we are working with. The first step might be the steady-state server profiles.


You may want to look at 'fio', which is basically a scripting language for performing I/O tests. I have used the Linux version, but a web search can turn up Windows binaries if you need them.

http://freshmeat.net/projects/fio/

Here is someone who wrote a shell script to do some rudimentary automation with 'fio' for SNIA SSS tests:

http://storagetuning...-specification/

Of course, that is just a beginning. Much more sophisticated tests could be done with 'fio'.
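As a trivial illustration of the kind of automation 'fio' enables, even a few lines of Python can generate the per-round invocations. Everything here (names, round counts) is made up for the example, though the fio flags themselves are standard:

```python
# Hypothetical helper that builds fio command lines for successive
# one-minute test rounds, as a starting point for automating
# SNIA-style runs (e.g., write saturation with 4KiB random writes).

def fio_round_cmd(device, rw, bs, iodepth, seconds):
    """Return a single fio invocation as a shell command string."""
    return (f"fio --name=round --filename={device} --rw={rw} "
            f"--bs={bs} --iodepth={iodepth} --direct=1 "
            f"--time_based --runtime={seconds}")

def write_saturation_rounds(device, rounds=25):
    """One-minute 4KiB random-write rounds at QD=32."""
    return [fio_round_cmd(device, "randwrite", "4k", 32, 60)
            for _ in range(rounds)]
```

A wrapper script would run each command in turn, parse the bandwidth from fio's output, and stop (or keep logging) once the values level off.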

#7 Kevin OBrien (StorageReview Editor, Admin, 1,396 posts)

Posted 06 January 2012 - 04:46 PM

> Which still does not tell us what preconditioning was done. It is not difficult to describe in a few sentences. For example: WIPC (workload independent pre-conditioning) was done with 4KiB random writes QD=32 0/100 R/W mix until reaching SS. Then WDPC (workload dependent pre-conditioning) was done with 2MiB sequential writes QD=4 0/100 R/W mix for 5 rounds until reaching steady state.
>
> For a good example of the type of tests and charts that should be included in SSD reviews, both consumer and enterprise, check the "MLC-A Full report" and "SLC-A Full report" links at the bottom of the table on this SNIA page:
>
> http://www.snia.org/forums/sssi/pts


Sorry I wasn't clearer on the pre-conditioning. Each test's preconditioning used the exact same data workload as the benchmark itself. For example, 4K random got 4K random preconditioning (100% random, QD32, etc.) until reaching steady state. On the mixed-workload tests the pre-conditioning was the exact workload used in the test as well; for example, on the Database test, a 67/33 R/W mix with 8K transfers at QD32 until reaching the steady point. In terms of intervals to gauge how a drive is responding, we first go through with 2-minute intervals, and once we know where the turning point is, set up a longer script to start recording the actual test results well past that point.

I was just stating that to reach steady state quicker in some areas you could run through 2MB sequential, but for the purposes of our enterprise reviews, all pre-conditioning is done with the exact workload the drive will receive in the benchmark that follows.
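For anyone who wants to approximate that Database preconditioning outside of IOMeter, an fio job along these lines should be close. The device path is a placeholder, and the run wipes the target:

```ini
; Approximation of the Database preconditioning: 67/33 R/W mix,
; 8KiB transfers, QD=32, time-based, run until steady state.
[global]
ioengine=libaio
direct=1
filename=/dev/sdX   ; placeholder device -- destructive!

[db-precondition]
rw=randrw
rwmixread=67
bs=8k
iodepth=32
time_based
runtime=14400
write_bw_log=db_precond_bw
log_avg_msec=1000
```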

Hope that is more clear :)

#8 Kevin OBrien (StorageReview Editor, Admin, 1,396 posts)

Posted 06 January 2012 - 04:51 PM

> That is rather vague. Do you mean to say that you will not include less expensive SSDs for comparison in the reviews because you believe it will offend the manufacturers of the SSDs? I suppose that is the problem with relying on manufacturers to provide free review samples rather than purchasing the products to be tested.


We send all the enterprise gear back anyway, but when possible we want to compare apples to apples. Putting a $200 client SSD on charts with a $3,000 enterprise-grade SSD just doesn't make sense; no one considering the purchase of one would look at the other. We do appreciate your feedback, though, and continually modify our testing plans based on feedback like yours.

#9 johnw42 (Member, 66 posts)

Posted 06 January 2012 - 05:20 PM

> No one considering the purchase of one would look at the other.


That is simply not true. I personally know of two sites that use Intel consumer SSDs and/or Crucial consumer SSDs in heavy workload environments, and by word of mouth I have heard of many others. In both cases that I am familiar with, they considered enterprise SSDs but rejected them because it did not make sense to pay 8 times as much when similar performance can be obtained by using multiple consumer SSDs in RAID.

#10 johnw42 (Member, 66 posts)

Posted 06 January 2012 - 05:21 PM

> Hope that is more clear :)


Yes, thank you for the clarification.

#11 Kevin OBrien (StorageReview Editor, Admin, 1,396 posts)

Posted 06 January 2012 - 05:28 PM

> That is simply not true. I personally know of two sites that use Intel consumer SSDs and/or Crucial consumer SSDs in heavy workload environments, and by word of mouth, I have heard of many others. In both cases that I am familiar with, they considered enterprise SSDs, but rejected them because it did not make sense to pay 8 times as much, when similar performance can be obtained by using multiple consumer SSDs in RAID.


Well, it really depends on the market each is put in, but I completely understand where you are coming from. As we expand the devices put through the enterprise testing process, you will probably see more devices compared. Perhaps once we transition over to the dynamic charts, we might have a default group of drives that can be swapped out for whichever drives are also in the database. ;)

Our main goal right now, though, is to get more drives put through the new process and figure out where they stand. If some drives stand out in particular areas, it would definitely be worth noting in that section. Areas with heavy read percentages could easily use an MLC or eMLC drive instead of the SLC variants.


