JcRabbit

Help configuring 3 Intel X25-M 80 GB SSDs in RAID 0

OK, let's put the new StorageReview board to the test. :P

First of all, I'm completely new to RAID, so please treat me kindly. :rolleyes:

I just purchased a new system with a Core i7 920, an Asus P6T Deluxe v2 motherboard (ICH10R southbridge), 3 Intel X25-M 80 GB SSDs, and Windows 7 64-bit. Because I didn't have much time, I asked the shop I purchased the system from to put it together and build the RAID 0 array for me.

This, apparently, was a mistake: I just ran HDTach 3.0 and HD Tune Pro 4.01, and the average read is about 245 MB/s, which should be the speed of a single SSD, not three in a RAID 0 array (from what I've read, SSDs in RAID 0 scale in almost perfect multiples up to the limit of the chipset, which is somewhere around 600 MB/s).

So, what is wrong? What did they do wrong and what do I have to check/undo to get the RAID 0 array working as it should?

Since I'm new to RAID, I'll probably need a step-by-step guide, sorry... :( In other words, if you need to ask me questions, as you probably will, please also tell me where I can find the answers.

Something else: I just booted into the Intel Matrix Storage Manager and this is the information it gives me:

ID: 0

Level: RAID 0(Stripe)

Strip: 128 KB

Size: 223.5 GB

Status: Normal

Bootable: Yes

The 3 SSDs are listed as Intel model SSDSA2M080, 74.5 GB, with a Type/Status of 'Member Disk (0)'.

Somebody?

Anyway, further info:

HDTach reports the burst rate to be 440.3 MB/s (rather low, no?)

The SSDs are all connected to the red ICH10R SATA connectors (SATA1 to SATA3) and the CD-ROM drive is connected to the SATA4 connector.

Perhaps it's a question of not having the proper driver installed on Win7? I've been searching the net for info on this but, so far, most of the articles I've found seem to skip all the initial basic steps.

Ahem. I think I've just shown my complete and utter ignorance on this subject, hehe.

HD Tune's default block size for benchmarks is 64 KB, which is less than the 128 KB stripe size of the RAID array. Once I set the block size to 512 KB, the average transfer rate jumped to 545 MB/s, with a 617 MB/s maximum.
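For anyone curious why, here's a toy sketch of RAID 0 striping in Python. It assumes a simple round-robin stripe layout (not necessarily how the ICH10R actually dispatches I/O); the 128 KB strip size and 3 disks are the values from the Matrix Storage Manager readout above:

# Toy model: which member disks does a single read touch in RAID 0?
# Assumes a round-robin stripe layout; the real controller may differ.

STRIPE_KB = 128   # 'Strip' size reported by the Intel Matrix Storage Manager
DISKS = 3         # three X25-Ms in the array

def disks_touched(offset_kb, length_kb):
    """Set of member disks hit by a read of length_kb starting at offset_kb."""
    first = offset_kb // STRIPE_KB
    last = (offset_kb + length_kb - 1) // STRIPE_KB
    return {stripe % DISKS for stripe in range(first, last + 1)}

print(disks_touched(0, 64))    # {0}       - a 64 KB read stays on one drive
print(disks_touched(0, 512))   # {0, 1, 2} - a 512 KB read spans all three

So at queue depth 1, a 64 KB benchmark keeps only one drive busy at a time, which is why the array reads like a single SSD.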

LOL - well, your self-thread solved itself.

I was thinking about where the bottleneck might be - possibly a bad driver - but in the end it was neither ;)

You were testing sequential reads, yes? I'm surprised that upping the read block size from 64 KB to 512 KB had such a big effect. Does HD Tune only use a queue depth of one? That would explain it.

256 MB/s at 64 KB blocks is 4,000 requests per second, or 0.25 ms per request. Maybe that's just the upper IOPS limit of the ICH10R?
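A quick sanity check of that arithmetic (using decimal MB and KB, as the benchmarks report them):

# 256 MB/s of 64 KB requests, served one at a time:
rate_mb_s = 256
block_kb = 64
iops = rate_mb_s * 1000 / block_kb    # 4000 requests per second
latency_ms = 1000 / iops              # 0.25 ms per request
print(iops, latency_ms)               # 4000.0 0.25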

No idea.

I'm assuming the reason the 64 KB read block was resulting in an average speed equal to a single SSD is that the block size was lower than the stripe size (128 KB), which meant each read could not be split among the 3 SSDs in the array.

On the other hand, I'm still not 100% convinced this is how it should be. I've seen HDTach screenshots showing 600+ MB/s speeds on SSD RAID 0 arrays, and I don't think you can set the block size in HDTach.

So, any opinions on what I am seeing and what I should be seeing are still very much welcome.

For instance, shouldn't a special Intel driver be used with SSDs? If so, how do I check that the guys at the store installed the correct driver?

Here I go again adding to my self-thread, hehe.

I searched for 'Intel' on the Start Menu and found out that the Intel Matrix Storage Console is installed - which means the 'special' Intel drivers must be installed as well. The Write-Back cache was disabled though - I enabled it because I will be running this system under the 'protection' of an APC 1000VA Smart-UPS.

I'm assuming the reason the 64 KB read block was resulting in an average speed equal to a single SSD is that the block size was lower than the stripe size (128 KB), which meant each read could not be split among the 3 SSDs in the array.

Sort of. If the bench program issues multiple requests simultaneously (making sure there are always at least 8 or 16 or however many requests outstanding), that ensures all the SSDs are working all the time, and you should see fast speeds even with that block size.
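A back-of-the-envelope sketch of that effect, assuming ~250 MB/s per X25-M (an assumed figure) and each 64 KB request landing on exactly one drive because it fits inside a 128 KB stripe:

# Rough model: array read throughput vs. queue depth for 64 KB requests
# on a 3-drive RAID 0 with 128 KB stripes. A sketch, not a measurement.

SINGLE_DRIVE_MB_S = 250   # assumed X25-M sequential read rate
DISKS = 3

def array_throughput(queue_depth):
    # Each 64 KB request occupies one drive, so at most
    # min(queue_depth, DISKS) drives can be busy at once.
    return SINGLE_DRIVE_MB_S * min(queue_depth, DISKS)

for qd in (1, 2, 3, 8):
    print(f"QD={qd}: ~{array_throughput(qd)} MB/s")
# QD=1: ~250 MB/s  QD=2: ~500 MB/s  QD=3 and up: ~750 MB/s

Which lines up with the ~245 MB/s HD Tune reported at queue depth 1.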

I still don't understand the need for the 3 x X25 RAID 0 implementation.

I suppose the short answer is 'because I can' or even 'why not?' ;)

Seriously, it was time to switch my main development system to Win7 64-bit, so I might as well upgrade the hardware too - and make it a bit 'future-proof' while I was at it (i.e., get something good enough to last me a few years).

I'm still the proud owner of a 300 GB Velociraptor that is in my old system, but the obvious storage devices to move to at this time are SSD drives. A single 160 GB SSD ($549 at Newegg) did not have enough capacity to be my primary drive, so I figured I might as well go for three 80 GB SSDs in RAID 0 for a total of 240 GB of storage space at a cost of $867 (3 x $289). It was either that or two 160 GB SSDs in RAID 0, which would cost me $1098.

I opted for the setup that costs less, provides the storage space I need without being overkill, and still manages to be faster than the alternative (a two-drive array would only have two-thirds of the theoretical sequential throughput of a three-drive one).

Sort of. If the bench program issues multiple requests simultaneously (making sure there are always at least 8 or 16 or however many requests outstanding), that ensures all the SSDs are working all the time, and you should see fast speeds even with that block size.

Exactly. Which is why I find the low results in HDTach and HD Tune with small block sizes strange. Do both programs use a queue depth of one? If so, why have I seen pictures of HDTach on the web displaying fantastic results for SSDs in RAID 0?

The Write-Back cache was disabled though - I enabled it because I will be running this system under the 'protection' of an APC 1000VA Smart-UPS.

I would be interested to know what sort of impact Write-Back makes on SSD performance (positive, negative, or none/within margin of error limits). Unlike with mechanical drives, I can see how using Write-Back on an SSD might actually have no effect, or even impede performance: writing to the cache might not be faster than writing directly to the SSD itself, and if the cache became full it might force writes to wait. It also seems fairly intuitive that, unlike a mechanical hard drive, an SSD will not benefit from any write-back optimization that 'sorts' writes to minimize movement across spinning platters (i.e., elevator-sorted writing). <_< :blink:

Could you do the same ATTO benchmark with Write Back disabled and post those results?

Also, I noticed that in your benchmark you've only set a queue depth of 4, which seems contrary to what you said in another reply, that 8 or 16 outstanding requests would ensure all SSDs were working all the time. Was that just an oversight?

Also, I noticed that in your benchmark you've only set a queue depth of 4, which seems contrary to what you said in another reply, that 8 or 16 outstanding requests would ensure all SSDs were working all the time. Was that just an oversight?

Read back, it wasn't me who said that - I'm the RAID 0 newbie, remember? ;)

Anyway, I didn't specifically choose 4; that's the default queue depth for the ATTO benchmark.

As for the write-back, that is actually a feature of the *Intel* MSM drivers - I suppose they know what they are doing. I'll try running some differential benchmarks with it enabled and disabled as soon as I have the time.

What apparently really makes a difference, from what I have been reading, is using the Intel RST driver instead of the Intel MSM driver.

As for the write-back, that is actually a feature of the *Intel* MSM drivers - I suppose they know what they are doing.

Yes, they knew what they were doing, but what they were doing was designed for hard drives, not SSDs. As of this minute, not many people actually know the effect of caching dirty bits waiting to be written on a RAID 0 SSD array.

oc

True.

In the meantime, I have some GREAT news. I checked the firmware on the 3 drives, and two of them had the very first firmware version released (2CV102G9), which meant they didn't even support TRIM. The other drive was from a later batch and had the 2CV102HA firmware - mismatched firmware revisions on a RAID array are not a good thing, I suspect.

I updated the firmware on all 3 drives to the 2CV102HD revision (the latest), and now HDTach shows a 3,413.4 MB/s burst speed (previously it was 400-something) and an average read speed of 704.8 MB/s!!! Whoohoo!

Problem solved, I guess. :D

Let us know how it goes over time. That's a surprising boost for such a simple issue...glad it's working out for you.

Read back, it wasn't me who said that - I'm the RAID 0 newbie, remember? ;) [...] As for the write-back, that is actually a feature of the *Intel* MSM drivers - I suppose they know what they are doing.

From the way you phrased your reply about keeping the SSDs busy, it seemed like you agreed with the concept, so I thought you might have made a mistake or overlooked the setting when running the benchmark, and figured I should bring it to your attention in case you had. :mellow:

As far as the write-back caching goes, I am not saying it doesn't make sense for mechanical hard drives, where it is obviously beneficial. I'm just pointing out that, for various reasons, it may not be beneficial, or may even hurt performance, when used in conjunction with SSDs. :ph34r:

From the way you phrased your reply about keeping the SSDs busy, it seemed like you agreed with the concept, so I thought you might have made a mistake or overlooked the setting when running the benchmark, and figured I should bring it to your attention in case you had. :mellow:

I was just pointing out that it was Geshel, not me, who said that a queue size of 8 or more would ensure all SSDs were busy at any one time. I'm still learning about these things. :D

Anyway, whatever it was, it was not a queue or block size problem, as the results I got after updating all 3 drives to the latest firmware prove: now both HDTach and HD Tune show over 700 MB/s in throughput, the latter after resetting the block size back to 64 KB.

As for how this was actually affecting real-world performance, I cannot really say, because I'm still prepping the system and wasn't using it that much - but let me tell you that seeing Photoshop CS2 open in 3 seconds flat (I kid you not) is a real eye-opener!

As far as the write-back caching goes, I am not saying it doesn't make sense for mechanical hard drives, where it is obviously beneficial. I'm just pointing out that, for various reasons, it may not be beneficial, or may even hurt performance, when used in conjunction with SSDs. :ph34r:

Or not. Nobody here really knows, apparently. My (uninformed) feeling is that it might not be as beneficial as it is for hard drives, sure, but it doesn't hurt either (provided your system is protected by a UPS, of course).

I would be interested to know what sort of impact Write-Back makes on SSD performance (positive, negative, or none/within margin of error limits).

I just found something very curious: remember when I said that it was the firmware updates that got HDTach to finally report 700 MB/s for the 3-SSD RAID 0 array? Well, it turns out it wasn't. I turned off the volume write-back cache yesterday to troubleshoot something here, and today, when showing the speed of the array to a friend, HDTach had reverted to the previous 200-250 MB/s. Turning the volume write-back cache back on via the Intel Matrix Storage Manager returned the data transfer rate in HDTach to 700 MB/s.

Turning the volume write-back cache back on via the Intel Matrix Storage Manager returned the data transfer rate in HDTach to 700 MB/s.

Yup, enabling write-back cache on the ICH10R does wonders for SSDs. I'm actually quite amazed at the speeds you are getting - the ICH10R is supposed to have a maximum throughput of 667 MB/s, which I hit with my 3x X25-Es in RAID 0. You somehow have managed to surpass that :-)

Turning the volume write-back cache back on via the Intel Matrix Storage Manager returned the data transfer rate in HDTach to 700 MB/s.

Is it possible that you are actually measuring "burst" (i.e., cache) speed rather than sustained speed? This could happen if the data being written fits within the size of the cache.
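One rough way to separate the two would be to time reads over far more data than any cache could hold. A sketch only: the file path below is hypothetical, and the OS file cache can still skew a warm re-run, which is why real benchmarks use unbuffered/direct I/O:

# Time sequential reads over a span much larger than any controller cache,
# so cached hits stop dominating the average. The path is hypothetical;
# pre-create a multi-GB file on the array before running this.
import time

PATH = r"D:\bench\testfile.bin"   # hypothetical multi-GB file on the array
BLOCK = 512 * 1024                # 512 KB reads, larger than the stripe

total = 0
start = time.time()
with open(PATH, "rb", buffering=0) as f:   # buffering=0 skips Python's buffer
    while chunk := f.read(BLOCK):
        total += len(chunk)
elapsed = time.time() - start
print(f"{total / elapsed / 1e6:.0f} MB/s over {total / 1e9:.1f} GB")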
