Brian

Crucial RealSSD C300 Review - CTFDDAC064MAG


Crucial currently wears the fastest-consumer-SSD crown, as seen in our recent review of the 256GB C300. Thanks to the SATA 6Gb/s interface and Marvell controller, Crucial's SSDs manage spectacular speeds. 256GB SSDs aren't for everyone, though, so Crucial has wisely expanded its capacity offerings to include 128GB and 64GB versions of the C300. In this review we dive into the 64GB model (CTFDDAC064MAG-1G1), highlighting how it differs from the 256GB patriarch of the RealSSD C300 family.

Full Review


A nice review and certainly a very interesting product! However, I'd like to comment on a few things:

1. SATA2 vs. SATA3

The difference seen here is by far the largest I have seen on the web. It's actually disturbingly large, considering there was sometimes a factor-of-2 difference while the overall transfer rate was still well below the limit of SATA2. As I understand it, it shouldn't be like this: any limitations posed by the SATA2 interface shouldn't matter much as long as you're staying below roughly 260 MB/s.

Furthermore, in other reviews the C300 drives (64, 128 or 256 GB) were attached to onboard Marvell controllers or AMD southbridges. In both cases the drives could pull ahead when their very high sequential read speeds came into play, but in real-world tests they tended to tie the Intel SATA2 controller at best - certainly not a lot faster.

So I'm wondering if your LSI RAID card is interfering here. For example, the driver could be doing some sophisticated software caching that skews the comparison. Would you mind testing one of the SATA2 SSDs on this card, as a sanity check?

2. Graph units

Including separate graphs showing IOPS and sometimes average latency is nice and modern, but personally these numbers don't tell me anything - they're just different measures/units of the same fact. To convert IOPS into something tangible I'd need to know the average transfer size; multiplying the two gives MB/s. So I'd say graphs in IOPS are only useful when the average transfer size is known, as in the 4kB and 2MB tests. For Storagemark 2010, on the other hand, it's of no use.
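To make the conversion concrete, here is a small sketch (my own illustration - the function name and figures are made up, not from the review) of turning an IOPS number into throughput once the average transfer size is known:

```python
def iops_to_mbps(iops, avg_transfer_kb):
    """Throughput in MB/s implied by an IOPS figure and the average
    transfer size per IO (in kB, counting 1 MB = 1024 kB)."""
    return iops * avg_transfer_kb / 1024.0

# For the 4kB tests the transfer size is known, so IOPS maps
# directly onto throughput:
print(iops_to_mbps(20000, 4))  # 20,000 IOPS at 4kB -> 78.125 MB/s
```

Without the average transfer size (as in the mixed Storagemark traces), the same IOPS figure could correspond to wildly different MB/s numbers, which is the point above.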

I also had a problem reading this article: on my 1920x1080 screen I couldn't read the charts in MB/s and the legend at the same time (*). What I'd like instead is a link below or above each graph for switching units between MB/s, IOPS and average ms. Maximum ms should stay a separate graph, as it adds new information.

3. Graph legend

Might I suggest you use the same color for the same drives throughout the review? As it stands, the graphs look tidy thanks to the similar color scheme in each of them. But to figure out what's actually shown, one has to study the legend and hunt for whatever new color the drive of interest was assigned in each new graph.

And the graphs take up less than 1/4 of the horizontal space of my screen. Granted, it's not exactly a small one - but there will be quite some space left on pretty much every modern monitor, especially widescreen models. Reading the graphs would be much easier for me if the drive model, i.e. the contents of each legend entry, were placed to the left or right of the corresponding line in the graph. That saves the eyes from constantly going up and down: first searching for the drive names and colors in the legend, remembering them, and then searching for the colors in the graph. Doing it this way would also automatically fix issue (*) I brought up in 2.

4. Comparison drives

I imagine I like the Sandforce drives as much as you do. But including three of them in every review? Personally I don't pay much attention to them in the graphs, and in my mind they're bundled as "the Forces". This is especially true when a drive as different as the 64GB C300 is reviewed; here the slight differences between the various Forces are not interesting. If you review another Sandforce, then of course include the others as well.

Best regards and keep up the good work!

MrS

MrSpadge, on 18 September 2010, said:

Reading the graphs would be much easier for me if the drive model, i.e. the contents of each legend entry, were placed to the left or right of the corresponding line in the graph. That saves the eyes from constantly going up and down: first searching for the drive names and colors in the legend, remembering them, and then searching for the colors in the graph.

I agree 100%, and would respectfully ask for this as well. I too spend a lot of time scrolling up and down trying to figure out where each drive is.

Otherwise, good review!

MrSpadge, on 18 September 2010, said:

[full post quoted above - snipped]

1. Good question! This came up in another thread, and we did some behind-the-scenes testing to find out what sort of boost the LSI card gave other SSDs. Note that the C300 below is the 256GB model, not the 64GB. This was the result:

First up is our Productivity Test:

All results are single drives on the LSI card (RAID0, 128k stripe):

Drive | IOPS | MB/s | Response time (ms)
C300 256GB | 9061.93 | 266.92 | 0.868
Intel 160GB | 6252.35 | 182.78 | 1.268
OWC 120GB | 8316.87 | 242.82 | 0.930
Intel 40GB | 3281.67 | 95.80 | 2.411

HTPC Trace:

Drive | IOPS | MB/s | Response time (ms)
C300 256GB | 5457.00 | 252.90 | 1.427
Intel 160GB | 2999.24 | 139.86 | 2.625
OWC 120GB | 5598.39 | 261.16 | 1.395
Intel 40GB | 1458.80 | 68.00 | 5.467

Also, as for some reviewers saying the Intel ICH SATA2 transfers were best: I have no doubt that was when they were using the Marvell controller cards for SATA3. In our testing that pile of junk was severely driver-limited, at half the speed of the Intel chipset tests. We still have the card in the office, and if I ever see word that the drivers were magically fixed, I will retest the C300 and post the results. Until then I go most days wondering whether I should drop it down the garbage disposal or not.

2. I will try to find an old post I made when we first introduced the real-world benchmarks; I believe I went into the cluster-size percentages for each of the traces. As for higher-resolution viewing: right now our charts are very tall and made by hand. Luckily this won't always be the case. We are working on dynamically created charts that will use more of the available horizontal space on the page, moving the legend to the left side of the bars. This is the next thing coming now that the site redesign is finished.

3. The dynamically created charts will solve a lot of these problems. Below is an example. I don't know if it is completely finalized at this point, but it will be really close to this:

[Attached image: post-70131-12848328064237_thumb.jpg - example of the new dynamically created charts]

4. Point taken. The new chart program will hopefully solve this as well. Users will be able to select whatever drives they want to see compared ;)

Guest Phil

According to SR's "real world" benchmarks, the Sandforce drives are three times faster than the C300 64GB on SATA II. Techreport and Anandtech show very different results.

I benchmarked these drives in my own notebook, and the C300 actually managed to outperform the Sandforce drives in some situations.

I hope SR will include some single task real world benchmarks in their reviews. That way we can get an impression of the real real world performance.


Phil - I suggest starting a new thread in site suggestions with the type of benchmark you'd like to see. Arguing that our real-world tests aren't real-world, though, is a little odd; perhaps those scenarios just don't align with what you want to see. We plan on having 8-10 traces by the time we're "complete" with the testing suite.

Guest Phil

Will do Brian.

The reason I'm calling the SR benchmarks not real-world is explained quite well by TR:

"DriveBench produces a trace file for each workload that includes all IOs that made up the session. We can then measure performance by using DriveBench to play back each trace file. During playback, any idle time recorded in the original session is ignored—IOs are fed to the disk as fast as it can process them. This approach doesn't give us a perfect indicator of real-world behavior, but it does illustrate how each drive might perform if it were attached to an infinitely fast system. "

http://techreport.com/articles.x/19079/6


Quoting Phil's post above, including the TechReport DriveBench description:

"This approach doesn't give us a perfect indicator of real-world behavior, but it does illustrate how each drive might perform if it were attached to an infinitely fast system."

Real-world behavior would mean playing back the trace file at real-world speed, i.e. the speed at which it was recorded. That would make for a hilarious and equally pointless benchmark, since every drive as fast as the originally traced drive would produce the same score, and slower drives would just be playing catch-up.

Under that definition, real-world tests would be for load/stress testing: seeing which storage arrays can handle X load near-indefinitely. You see those tests in the enterprise market when companies look at upgrading or diagnosing equipment. They have a set daily-load trace, played back at real speed to see whether the new setup can handle it, and then played back at full speed to see what the difference is. That result gives you an idea of headroom - how much additional load the system could handle.

Benchmarks by their very nature stress the drive in faster-than-real-world conditions to rank drives and see which outperform others. Some real-world benchmarks are also reaching the point where drives are so fast that if you look at single activities (load X program, play a trace back), nearly every top-tier drive will give the same score. You need much longer traces to start showing the strengths and weaknesses of particular drives.
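To illustrate the distinction being drawn here, the difference between a "real-world speed" replay and a benchmark replay boils down to whether the recorded idle time is honored. A hypothetical sketch (this is my own illustration, not how any particular trace tool is implemented; the trace format is made up):

```python
import time

def replay(trace, issue_io, realtime=False):
    """Replay a list of (timestamp_s, op, offset, size) trace entries.

    realtime=True sleeps to preserve the recorded idle gaps (load-test
    style: every sufficiently fast drive finishes in the same time).
    realtime=False issues IOs as fast as possible (benchmark style),
    which is what lets drives be ranked against each other."""
    t0 = trace[0][0]
    wall0 = time.monotonic()
    for ts, op, offset, size in trace:
        if realtime:
            # Wait until the original recording's relative timestamp.
            delay = (ts - t0) - (time.monotonic() - wall0)
            if delay > 0:
                time.sleep(delay)
        issue_io(op, offset, size)

# Example with a stub IO function that just records the calls:
issued = []
replay([(0.0, "read", 0, 4096), (0.5, "write", 4096, 4096)],
       lambda op, off, sz: issued.append(op))
print(issued)  # ['read', 'write'] - both IOs issued, idle time skipped
```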

Not all of our traces are multi-tasking. The HTPC trace multitasks as stated in the descriptions, but the Productivity and Gaming traces have a lot of single-threaded activity. The Gaming trace is completely single-threaded (not counting whatever Windows 7 does in the background during normal operation). It consisted of playing games: play game X, exit, play game Y, exit, and so on.


Hi TSullivan,

I finally found the time to browse through those numbers. They're not complete apples-to-apples comparisons due to the 128kB stripe size on the LSI card - but that shouldn't make a drive 50% faster, otherwise no one would be running the standard 4kB configs. Furthermore, the C300 256GB performs just about the same in your 128kB testing as it did in the C300 64GB review, so we should be able to compare the numbers safely. Let's condense the data a bit:

Productivity in MB/s

Name | Intel | LSI

C300 256 GB | 114 | 267

X25-M 160 GB | 120 | 183

OWC 120 GB | 212 | 243

HTPC in MB/s

Name | Intel | LSI

C300 256 GB | 140 | 253

X25-M 160 GB | 127 | 140

OWC 120 GB | 248 | 261

So the C300, which is the only one running SATA3 on the LSI, gains the most going from the Intel to the LSI. However, the other drives also gain quite some speed, especially the X25, since the OWC looks SATA2-limited anyway. Looking at the extreme case (the X25 in Productivity) we see a speed improvement of 52% going from the Intel controller to the LSI card - while running in SATA2 mode on both controllers. So I'd say something else is definitely going on here (software caching?) and skewing the C300 results in favor of SATA3, whereas in reality a significant part of the "apparent SATA3 advantage" is due to the LSI card. It's still a valid result, but it probably does not mirror what most people will experience using AMD, Intel or Marvell onboard controllers.
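The 52% figure is just the relative gain computed from the two MB/s columns above; a quick check (the helper function is my own, the numbers come from the condensed tables):

```python
def gain_pct(base, new):
    """Relative speed-up in percent going from 'base' to 'new' MB/s."""
    return (new - base) / base * 100.0

# X25-M 160GB, Productivity: 120 MB/s on the Intel vs 183 MB/s on the
# LSI, both in SATA2 mode - yet a ~52% gain.
print(round(gain_pct(120, 183), 1))  # 52.5
```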

Taking a look at the review again I just found this passage:

The LSI RAID card shows much higher speeds, but has the advantage of a healthy 512MB buffer, dedicated RAID controller, and obviously the faster 6.0Gbps interface. In this review, after much trial and error, we have tweaked the IOMeter settings when testing on the external RAID card to counteract the large buffer and give a better representation of the true speed of the SSD.

Or, to put it provocatively: you tweaked IOMeter, but the other tests were inflated - ehm, helped - by the large cache?

2. Never mind looking up the values. If you know the overall IOPS, the average transfer size is just the transfer rate divided by the number of transfers. It's not that I want this value to do calculations with - it's that I already know the interesting result in MB/s.

3. & 4. Yeah!

MrS

MrSpadge, on 26 September 2010, said:

Or, to put it provocatively: you tweaked IOMeter, but the other tests were inflated - ehm, helped - by the large cache?

Haha, it's not as bad as it sounds, I swear!

When we first started testing SSDs with IOMeter, we realized they couldn't be tested the same way we test hard drives. If you set your sample size too large, most SSDs will freak out and start running garbage collection midway through the test. After talking with a few manufacturers we settled on 1,000,000 sectors (a bit over 400MB). That is fine for anything going through the Intel chipset, but when dealing with the RAID card we ended up testing the cache speeds instead of the drive. We upped the sample size to 10,000,000 sectors (a bit over 4GB), which solved the odd spiking in the IOMeter server tests that nearly blew the scale off our previous charts.
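For reference, those sector counts translate to bytes as follows (a small sketch of my own, assuming the standard 512-byte sector):

```python
SECTOR_BYTES = 512  # standard sector size assumed

def sample_size_mib(sectors):
    """IOMeter sample size in MiB for a given sector count."""
    return sectors * SECTOR_BYTES / 2**20

print(sample_size_mib(1_000_000))   # 488.28125 MiB - "a bit over 400MB"
print(sample_size_mib(10_000_000))  # ~4883 MiB - "a bit over 4GB"
```

The larger sample comfortably exceeds the LSI card's 512MB cache, which is why it stops measuring the cache and starts measuring the drive.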


That makes a lot of sense regarding IOMeter :)

However... I don't want to badger you, and I would stop here if I didn't feel this is important. As the Storagemark 2010 Productivity test shows, a drive can gain 52% in performance when run in SATA2 mode on the LSI card rather than on the ICH10R. Since we know the Intel is an excellent controller, this is very probably caused by the large cache on the LSI card.

And this changes the conclusions and the assessment of the C300 itself. Sure, it's still a very good drive, but you attribute the combined performance gain from SATA3 and the cache to SATA3 alone. This is not something easily solved until we get the Intel 6-series chipsets, but IMHO presenting the information this way does your readers a disservice - the typical customer for a 64GB C300 will probably not be running it on a 350€ RAID card. It's going to be either a Marvell chip or an AMD or Intel southbridge.

This is also something to keep in mind for any future tests using this controller card.

MrS

