First Review of 800MB/s ioDrive from Fusion-IO

> A HW RAID card with lots of cache would not improve the SSD performance. I require a constant 25MB/sec of random 1MB writes. Caching helps with bursts, but in the steady state the disk has to sustain that rate by itself. The best steady-state write rate I got from the X25-M was about 5MB/sec, and that was with a 32-deep queue. There is absolutely nothing a caching controller can do to improve that - once the cache is full, it can't accept data faster than the SSD will write it.

I assume those 5MB/second writes were interspersed with read requests.

You are being arrogant and wrong. You haven't tested a caching RAID controller with an SSD, but your experience with Intel Matrix RAID makes you assume that it can't possibly help, so you won't bother trying.

What the caching controller can do is PRIORITIZE the requests. When it is dumping the cache, it can be smart enough not to request reads in the middle of the write.

The max sequential write rate of the Intel X25-M is about 75MB per second. 1MB files are big enough to get the full transfer rate, assuming the controller is smart enough to NOT send any other requests while the disk is writing. With four X25-M drives in RAID 0, the controller should be able to dump a full 256MB cache in under a second.

When the cache is NOT full, writing the 1MB file should take about one millisecond.
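
For what it's worth, here is the back-of-envelope arithmetic behind those two claims, as a minimal sketch in Python; the per-drive rate, member count, cache size, and host-bus rate are assumptions taken from this post, not measurements:

    # Back-of-envelope check of the burst-case numbers above. The per-drive
    # rate (75 MB/s), member count (4), cache size (256 MB), and host bus
    # rate (assumed ~1 GB/s) are illustrative, not measured.
    PER_DRIVE_MBPS = 75
    DRIVES = 4
    CACHE_MB = 256
    BUS_MBPS = 1000

    array_mbps = PER_DRIVE_MBPS * DRIVES            # ~300 MB/s across RAID 0
    print(f"full cache flush: {CACHE_MB / array_mbps:.2f} s")        # ~0.85 s
    print(f"1MB write into a non-full cache: {1000 / BUS_MBPS:.1f} ms")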

Hiya,

No, there were no read requests interspersed.

The "Max sequential write rate" of the X25-M is a published spec that can be hit in ideal conditions. It does not hold for the conditions in which I was testing the drive. That is, more or less, the entire point of this thread.

Do you have a new explanation as to how a caching controller could throw data at the drive faster than it can write it?
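
To make the steady-state point concrete, here is a minimal sketch of a write-back cache being fed faster than the drive behind it can drain. The 25MB/sec offered rate, 5MB/sec drain rate, and 256MB cache are the numbers from this thread, used purely for illustration:

    # Toy model of a write-back cache: it absorbs a burst, but once full,
    # the accepted rate collapses to whatever the drive itself sustains.
    # Rates are the ones discussed in this thread; purely illustrative.
    OFFERED_MBPS = 25.0   # what the application wants to write
    DRAIN_MBPS = 5.0      # what the SSD sustains for this workload
    CACHE_MB = 256.0

    cache = 0.0
    for t in range(1, 61):                     # sixty one-second ticks
        free = CACHE_MB - cache
        accepted = min(OFFERED_MBPS, DRAIN_MBPS + free)   # can't overfill
        cache = min(CACHE_MB, cache + accepted - DRAIN_MBPS)
        if t in (1, 13, 60):
            print(f"t={t:2d}s  cache={cache:5.1f}MB  accepted={accepted:4.1f}MB/s")

With these numbers the cache fills in roughly thirteen seconds, after which the accepted rate is pinned to the drive's 5MB/sec no matter how cleverly the controller orders requests.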

I think the flaw in your reasoning is assuming that Intel Matrix RAID was actually utilizing the Intel SSD efficiently. For some applications Intel RAID works almost as well as a dedicated card; for others the difference is huge. If you assume that the 5MB/sec write speed your application got from an X25-M connected to Intel Matrix RAID is the best the drive can do, you are probably wrong.

Actually, another factor is that on Windows NTFS, when you tell it to write a 1MB file, you are writing two other small pieces of data in addition to the 1MB block itself - a directory entry and a block-allocation map entry. With a big write-back cache, those allocation and directory updates are likely to be bunched together into a better-performing, semi-sequential write, instead of two semi-random small-block writes tacked onto every 1MB data-block write.
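
A minimal sketch of that metadata-batching effect, just counting I/O operations; the three-writes-per-file model and the batch size are simplifying assumptions, not NTFS internals:

    # Count the I/O operations for writing N one-megabyte files, with and
    # without a write-back cache that coalesces metadata updates. The
    # "3 writes per file" model (data + MFT record + allocation bitmap)
    # and the batch size are simplifying assumptions.
    N_FILES = 100
    BATCH = 32                       # assumed files coalesced per flush

    uncached = N_FILES * 3           # every file: 1 big + 2 small random writes
    batches = -(-N_FILES // BATCH)   # ceiling division
    cached = N_FILES + batches * 2   # big writes pass through; metadata merges

    print(f"no cache:   {uncached} I/Os, {N_FILES * 2} of them small and random")
    print(f"with cache: {cached} I/Os, {batches * 2} of them small, near-sequential")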

But go ahead - ask for advice and then ignore what you are told when it doesn't agree with your preconceived notions. Just like my ex-boss. I'm used to it by now.

I was running IOMeter writing directly to the Physical Disk object, bypassing NTFS.
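
For anyone who wants to reproduce that kind of test without IOMeter, here is a rough Python stand-in: random 1MB writes aimed straight at the physical disk, no filesystem involved. The device path, test span, and duration are hypothetical placeholders - point this at the wrong disk and it will destroy its contents:

    # Rough stand-in for the IOMeter workload described above: random 1MB
    # writes to the raw physical disk, bypassing NTFS. Requires
    # administrator rights on Windows. DEV is a hypothetical placeholder;
    # writing to the wrong device destroys its contents.
    import os, random, time

    DEV = r"\\.\PhysicalDrive1"      # assumed test disk, NOT the system disk
    BLOCK = 1 << 20                  # 1MB per write, as in the test
    SPAN = 8 << 30                   # assumed 8GB region to scatter writes over
    DURATION_S = 30

    buf = os.urandom(BLOCK)
    fd = os.open(DEV, os.O_WRONLY | getattr(os, "O_BINARY", 0))
    written, start = 0, time.time()
    while time.time() - start < DURATION_S:
        # seek to a random 1MB-aligned offset, then write one full block
        os.lseek(fd, random.randrange(SPAN // BLOCK) * BLOCK, os.SEEK_SET)
        written += os.write(fd, buf)
    os.close(fd)
    print(f"steady-state write rate: {written / (time.time() - start) / 2**20:.1f} MB/s")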

The only "flaw" in this sub-thread is your incorrect assumptions about how my tests were run, and failure to acknowledge that many people around the world have been seeing similar behavior from the SSDs and from the ioDrive, and how Intel has even gone on record explaining why it happens and how to "fix" it. I will not be discussing this specific point (IMR being the "cause" of bad SSD write perf) any further.
