
First Review of 800MB/S ioDrive from Fusion-IO


David,

Which extraordinary synthetic benchmark conditions are you referring to?

"A copy and paste real test, 615 mixed files (5.98GB) in 3min 27sec 92:"

That is the most basic operation... copying files. It takes a similar amount of time on my 750 GB Seagate with a similar number of files. Is this expected? I mean, to pay that amount of money... and then wait a second on each IM message, as with the OCZ Core, or do no better at the most basic drive operation than something that costs 100 times less?

This kind of use-case is quite common, and definitely not one that would be considered pathological. So, no, that performance is not to be expected.

Indeed, the Adobe Photoshop use-case that we showcased at EforALL is quite similar. It showed load and save times for multi-GB images to be between 5 and 20 times faster than with traditional disks, reducing the time to save very large images to a few tens of seconds versus minutes.

I have not seen any reports of unexpectedly low performance come through our customer support organization. Have you requested support? If so, what is the ticket number, so that I can get you some assistance.

-David


Those tests are not mine. Please follow the link and request the test files from the author so that you can investigate. There is a whole bunch of IoDrive tests there as well.

http://forum.ssdworld.ch/viewtopic.php?f=1...6ae261981c9b366


Some good information on SSDs in general, somewhat specific to Intel but applicable to all of the high-performance, high-volume random-write SSDs. From

http://www.hardware.fr/art/imprimer/731/

also:

http://www.hwupgrade.it/articoli/storage/2...orprese_10.html

and this pdf from a while back has great tech info:

http://research.microsoft.com/users/vijaya...sd-usenix08.pdf

SSDs all have what is known as an "Indirection System" (aka an LBA allocation table, similar to an OS file allocation table). LBAs are not typically stored in the same physical location each time they are written. If you write LBA 0, it may go to physical location 0, but if you write it again later, it may go to physical location 50, or 8.567 million, or wherever. Because of this, every SSD's performance will vary over time and settle to some steady-state value. Our SSD dynamically adjusts to the incoming workload to get the optimum performance for that workload, and this takes time. Other, lower-performing SSDs take less time because they have less complicated systems. HDDs take no time at all, because their logical-to-physical mapping is fixed, so their performance is immediately deterministic for any workload IOMeter throws at them.
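To make the indirection idea concrete, here is a toy sketch (not any vendor's actual FTL, just an illustration of the mapping table described above):

```python
# Toy flash translation layer (FTL): logical block addresses map to whatever
# physical page is free at write time, so the same LBA can land somewhere
# different on every rewrite. Purely illustrative, not any vendor's design.
class ToyFTL:
    def __init__(self, physical_pages):
        self.free_pages = list(range(physical_pages))   # naive free list
        self.lba_to_phys = {}                           # the indirection table
        self.stale = set()                              # pages holding superseded data

    def write(self, lba):
        new_page = self.free_pages.pop(0)
        old_page = self.lba_to_phys.get(lba)
        if old_page is not None:
            self.stale.add(old_page)                    # old copy becomes garbage
        self.lba_to_phys[lba] = new_page
        return new_page

ftl = ToyFTL(physical_pages=100)
print(ftl.write(0))   # LBA 0 lands on physical page 0
print(ftl.write(0))   # rewriting LBA 0 lands on a different physical page
```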

The Intel® Performance MLC SSD is architected to provide the optimal user experience for client PC applications; the SSD will adapt and optimize its data-location tables to obtain the best performance for any specific workload. This provides the best possible user experience, but it occasionally makes it challenging to obtain consistent benchmark results when switching from one benchmark to another, or when a benchmark does not run long enough to allow stabilization. If any benchmark is run for sufficient time, the scores will eventually approach a steady-state value; however, the time to reach that steady state is heavily dependent on the previous usage. Specifically, highly random heavy-write workloads, or periodic hot-spot heavy-write workloads (which appear random to the SSD), will condition the SSD into a state that is uncharacteristic of client PC usage and will require longer runs of characteristic workloads before the drive adapts and provides the expected performance.

When a benchmark test or IOMeter workload has put the drive into this state, which is uncharacteristic of client usage, it will take significant usage time under the new workload conditions for the drive to adapt. Until then it will produce inconsistent (and likely low) results for that and possibly subsequent benchmarks, and can occasionally show extremely long latencies. The old HDD concept of defragmentation applies, but in new ways; standard Windows defragmentation tools will not work.

SSD devices are not aware of the files written within; they are only aware of the Logical Block Addresses (LBAs) which contain valid data. Once data is written to an LBA, the SSD must treat that data as valid user content and never throw it away, even after the host "deletes" the associated file. Today, there is no ATA protocol available to tell the SSD that the LBAs from deleted files no longer hold valid data. This fact, coupled with highly random write testing, leaves the drive in an extremely fragmented state which is optimized to provide the best possible performance for that random workload. Unfortunately, this state will not immediately produce characteristic user performance in client benchmarks such as PCMark Vantage without significant usage (writing) in typical client applications, allowing the drive to adapt (defragment) back to a typical client usage condition.

In order to reset the state of the drive to a known state that will quickly adapt to new workloads for best performance, the SSD's unused content needs to be defragmented. There are two methods which can accomplish this task.

One method is to use IOMeter to sequentially write content to the entire drive. This can be done by configuring IOMeter to perform a 1-second sequential read test on the SSD with a blank NTFS partition on it. In this case, IOMeter will "prepare" the drive for the read test by first filling all of the available space sequentially with an IOBW.tst file before running the 1-second read test. This is the most "user-like" way to accomplish the defragmentation, as it fills all SSD LBAs with "valid user data" and causes the drive to quickly adapt to a typical client user workload.
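Outside of IOMeter, the same "prepare" effect can be approximated by sequentially filling a freshly formatted partition with one large file. A minimal sketch, assuming a placeholder mount point of /mnt/ssd:

```python
import errno, os

MOUNT_POINT = "/mnt/ssd"                 # placeholder: the blank partition's mount point
CHUNK = 1024 * 1024                      # 1 MiB sequential writes
buf = b"\0" * CHUNK

fd = os.open(os.path.join(MOUNT_POINT, "fill.tst"), os.O_WRONLY | os.O_CREAT, 0o600)
try:
    while True:
        os.write(fd, buf)                # keep appending until the filesystem is full
except OSError as e:
    if e.errno != errno.ENOSPC:          # ENOSPC = "no space left on device"
        raise
finally:
    os.close(fd)
```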

An alternative method (faster) is to use a tool to perform a SECURE ERASE command on the drive. This command will release all of the user LBA locations internally in the drive and result in all of the NAND locations being reset to an erased state. This is equivalent to resetting the drive to the factory shipped condition, and will provide the optimum performance.
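On Linux, the usual way to issue an ATA SECURE ERASE is through hdparm's security commands. The device name and password below are placeholders, the drive must not be security-frozen by the BIOS, and this destroys all data, so treat it as a hedged sketch rather than a procedure:

```python
import subprocess

DEV = "/dev/sdX"    # placeholder device node
PASSWORD = "p"      # throwaway password required by the ATA security protocol

# Set a temporary security password, then issue the secure erase.
subprocess.run(["hdparm", "--user-master", "u", "--security-set-pass", PASSWORD, DEV], check=True)
subprocess.run(["hdparm", "--user-master", "u", "--security-erase", PASSWORD, DEV], check=True)
```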

I also have a few comments on FusionIO:

Quote from David Flynn:

Interestingly enough, it's not just because our garbage collection is more efficient than others that we get so much better write performance. It's actually another dirty secret in the Flash SSD world - poor performance with mixed workloads.

Other SSDs get a small fraction of their read or write performance when doing a mix of reads and writes. You'd think that if one gets X IOPS on reads and Y IOPS on writes, one should get 0.5·X + 0.5·Y under a 50/50 read/write mix. In reality they typically get less than a quarter of that.

This is fundamentally because NAND is half-duplex and writes take much longer than reads. This makes it a bit tricky to interleave reads and writes. The ioDrive, on the other hand, mixes reads and writes with great efficiency.

I have been there, done that, for sure. My particular use case is the ability of a disk to sustain high random read I/O while doing some sequential writes in the background every now and then.

At first, we thought that even the el-cheapo OCZ/RiData-style drives might do this. But no: even though they can do ~4K random read IOPS and sustain 70MB/sec sequential writes, you don't get 2K random reads plus 35MB/sec when you mix them, or any linear combination of the two. You get something like 400 random read IOPS with high latency (5% of them over 1 second) and 15MB/sec or so of sequential writes. I suspect that during this activity the ratio of erase-block erasures to data written is also bad, lowering device lifetime too.
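Putting those numbers side by side makes the point: the naive 50/50 expectation versus what the drive actually delivered under the mix.

```python
# The OCZ/RiData numbers from above: naive 50/50 expectation vs. measured mix.
solo_read_iops, solo_write_mb_s = 4000, 70      # each workload run alone
mixed_read_iops, mixed_write_mb_s = 400, 15     # both workloads at once

naive_read, naive_write = solo_read_iops / 2, solo_write_mb_s / 2   # 2000 IOPS, 35 MB/s
print(f"reads under the mix:  {mixed_read_iops / naive_read:.0%} of the naive split")   # 20%
print(f"writes under the mix: {mixed_write_mb_s / naive_write:.0%} of the naive split") # ~43%
```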

Severely degraded read performance persisted even when the sequential write was throttled to an 8MB chunk every 2 seconds. Basically, read/write concurrency blows on those drives.
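For reference, a structural sketch of that kind of mixed test: one thread issues random 4KB reads while another writes an 8MB chunk every two seconds, and read latency is recorded. The file path and sizes are placeholders, and a real test would use O_DIRECT (or a tool like fio) to keep the page cache out of the measurement.

```python
import os, random, threading, time

TEST_FILE = "/mnt/ssd/testfile"          # placeholder: a pre-created large file
FILE_SIZE = 8 * 1024**3                  # assume the file is 8 GiB
READ_SIZE = 4096                         # random 4 KiB reads
WRITE_CHUNK = 8 * 1024**2                # one 8 MiB sequential chunk...
WRITE_PERIOD_S = 2                       # ...every 2 seconds, as described above

stop = threading.Event()
latencies = []

def random_reader():
    fd = os.open(TEST_FILE, os.O_RDONLY)
    while not stop.is_set():
        off = random.randrange(0, FILE_SIZE - READ_SIZE, READ_SIZE)
        t0 = time.perf_counter()
        os.pread(fd, READ_SIZE, off)
        latencies.append(time.perf_counter() - t0)
    os.close(fd)

def throttled_writer():
    fd = os.open(TEST_FILE, os.O_WRONLY)
    buf, off = b"\xaa" * WRITE_CHUNK, 0
    while not stop.is_set():
        os.pwrite(fd, buf, off % FILE_SIZE)
        os.fsync(fd)
        off += WRITE_CHUNK
        time.sleep(WRITE_PERIOD_S)
    os.close(fd)

threads = [threading.Thread(target=random_reader), threading.Thread(target=throttled_writer)]
for t in threads:
    t.start()
time.sleep(60)                            # measure for a minute
stop.set()
for t in threads:
    t.join()

latencies.sort()
p99 = latencies[int(0.99 * len(latencies))]
print(f"{len(latencies)} reads, p99 latency {p99 * 1000:.1f} ms")
```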

On some other devices, I noticed other trends. Some favored reads at the cost of extreme delays on writes, and some vice versa. I played with three different Linux I/O schedulers and about 50 variations of their tuning parameters, so that the device would see different kinds of request streams. Nothing worked.

There was only one SATA based SSD I tested (I did not test any that are priced over $750) that had a fairly remarkable ability to get better than the linear combination of the two individual tests, and do so with only a minority of the read requests slowing from the typical 0.1ms to 50ms read latency.

I have not gotten my hands on an ioDrive, because that would serve a different purpose for us, in a totally different server component; but when we reach our I/O limit there, it is certainly an option I'm going to look at. The best use I can see for it would be to combine it with Solaris/OpenSolaris and ZFS as an L2ARC cache.

I know how these things work, and anyone can read the PDF I linked above to understand the basics of how the garbage collection (or compaction, or whatever) algorithms affect SSD design and performance. I foresee that in the future you'll have a few "modes" of operation you can place a device into, to deal with various workloads better. Intel (and probably others now, or soon) already does this and dynamically tries to pick the best one, as shown above.

Additionally, if OSes gain a "deallocate" block I/O command, flash drive garbage collection algorithms will improve a great deal for any workload that deletes files (almost all real-world ones).
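As a purely conceptual sketch of what such a "deallocate" hint buys, building on the toy FTL from earlier (the real-world equivalent arrived later as the ATA TRIM command): once the mapping for a deleted LBA is dropped, garbage collection never has to copy that data forward.

```python
# Conceptual only: dropping the mapping for a deleted LBA means its page can be
# erased without copying its contents forward. Uses the ToyFTL class sketched
# earlier in this thread.
class ToyFTLWithDeallocate(ToyFTL):
    def deallocate(self, lba):
        page = self.lba_to_phys.pop(lba, None)
        if page is not None:
            self.stale.add(page)     # reclaimable with no copy-forward cost

ftl = ToyFTLWithDeallocate(physical_pages=100)
ftl.write(7)
ftl.deallocate(7)                    # LBA 7 no longer pins a physical page
```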

The OS shouldn't really do more than that: remove most OS-based scheduling and just do the simplest request merging (head and tail) in a medium-depth queue, screw the elevator algorithms, add a 'deallocate' command, and limit prefetching to smaller sizes (and mostly do it for metadata). File systems should stop caring about fragmentation past a threshold of a few MB in chunk size, and should work to reallocate recently freed space sooner rather than later. So much of the work that file systems do now exists purely because they are designed for rotating media with long seek times. A file system and OS adapted to flash would use less CPU, have lower latency on all workload types, and avoid 'pathological' issues in the drives more often.
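Part of this is already tunable today: the Linux block layer exposes its scheduler choice through sysfs, and "noop" is the closest thing to "just merge requests and skip the elevator". The device name below is a placeholder and the write requires root; schedulers available on kernels of that era were typically noop, anticipatory, deadline and cfq.

```python
# Switch a block device's elevator to "noop" (plain FIFO with basic merging).
DEV = "sdb"   # placeholder block device

path = f"/sys/block/{DEV}/queue/scheduler"
with open(path) as f:
    print("available:", f.read().strip())   # e.g. "noop anticipatory deadline [cfq]"

with open(path, "w") as f:
    f.write("noop")                          # needs root privileges
```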

Those tests are not mine. Please follow the link and request the test files from the author so that you can investigate. There is a whole bunch of IoDrive tests there as well.

http://forum.ssdworld.ch/viewtopic.php?f=1...6ae261981c9b366

Oh, interesting. This is on Windows, and through an unauthorized reseller who, I suspect, at best has early beta drivers. We have not released the Windows driver outside of the beta program we are running with large enterprise customers. No wonder we haven't received any support calls.

I'll have to jump over to that thread and follow up...

BTW: I had my email address incorrect on my prior post. It is david@fusionio.com.

-David

CTO Fusion-io

david@fusionio.com


Well Scott,

although we are not friends, so it seems, I must admit that your knowledge is... impressive.

The only thing I can say is... keep them coming.

Jeff


> With more "reserve" space (difference between physical and formatted), the garbage collector's worst-case can be improved - and performance guaranteed.

This makes sense to me, and I'd like to see the different operation modes mentioned in the high-level documentation so people don't get confused and spread FUD. I'd also like to be able to tell when the device is in the slower mode. If this device were used as a primary storage device, people would eventually run a tray-icon monitoring program to see the status of the Fusion-io device and how close it is to freaking out on them.

I look forward to unbiased reviews of Fusion-io devices with final drivers :) No such thing has come along yet.

There was only one SATA based SSD I tested (I did not test any that are priced over $750) that had a fairly remarkable ability to get better than the linear combination of the two individual tests, and do so with only a minority of the read requests slowing from the typical 0.1ms to 50ms read latency.

Do you mind revealing the identity of that mythical beast of an SSD?

Yes, finally a drive that looks like it will actually live up to the hype. I really want one. I guess I will have to go to Vista 64 to get it, and I want to boot from it too.

That is a $3K drive for 80GB. A waste of money for an end user, I believe, though good for being a guinea pig. Each and every SSD of the past two years has been billed as groundbreaking... yet we still have nothing that works.

Besides, you will need a secondary drive in your system, and bigger file operations (movies, entertainment, etc.) will land on that disk because of its capacity. Your system will still feel slow, since the majority of high-throughput file operations will be done on the second drive.

Better to wait another year while it explodes on other people's systems and matures. I would pay $1000 for 320GB. Those are stellar numbers.

Yes, this drive does appear to be a little short on the features and long on the price.

Compare it to a RAID 0 using 3 X25-M 80GB drives, an Areca ARC-1231ML-2GB caching controller with a ARC-6120 battery backup.

The RAID will be:

1) Bootable

2) 240GB

3) 10% cheaper

4) 600MB/sec sustained reads

5) >200MB/sec sustained writes

6) Expandable and capable of also accelerating a SATA array

Generally, it looks better across the board. I'm recommending a similar configuration for my company's next Oracle server.


Did I hear Oracle? :-)

Interesting config for that purpose. Redo logs and tablespaces on that volume and... tchakkka... you can use /*+ PARALLEL(xyz,128) */ :-))))))


It seems to me that the main benefit of this drive, or even the point of this drive, is the extremely low latency its interface allows.

I wonder if Fusion-io is planning to debottleneck the rest of the I/O stack and give us a block device and filesystem that allow very small, very fast reads and writes. It makes sense to have something between RAM and HD, the way things are headed.

A) RAM: tens of GB, volatile, < 0.01ms, 20GB/sec

B) Fusion: hundreds of GB, non-volatile, < 0.1ms, 1GB/sec

C) HD: thousands of GB, semi-permanent, < 1ms, 500MB/sec

With 3 physical layers behind the scenes it might actually finally be possible to create a unified abstraction layer for storage. Let complicated caching algorithms figure out what is stored where.
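A minimal sketch of that "let the caching layer decide" idea: a read-through hierarchy that promotes hot blocks to the fastest tier and demotes evictions down the stack. Tier names and sizes are illustrative only.

```python
from collections import OrderedDict

class Tier:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.blocks = OrderedDict()                  # LRU order: coldest first

    def put(self, key, value):
        self.blocks[key] = value
        self.blocks.move_to_end(key)
        if len(self.blocks) > self.capacity:
            return self.blocks.popitem(last=False)   # evict the coldest block
        return None

class TieredStore:
    def __init__(self, tiers):
        self.tiers = tiers                           # fastest first: RAM, flash, disk

    def read(self, key):
        for i, tier in enumerate(self.tiers):
            if key in tier.blocks:
                value = tier.blocks[key]
                if i > 0:                            # promote hot block to the top tier
                    del tier.blocks[key]
                    self.write(key, value)
                return value
        raise KeyError(key)

    def write(self, key, value):
        evicted = self.tiers[0].put(key, value)
        for lower in self.tiers[1:]:                 # cascade evictions downward
            if evicted is None:
                break
            evicted = lower.put(*evicted)

store = TieredStore([Tier("ram", 2), Tier("flash", 4), Tier("disk", 1000)])
for blk in range(6):
    store.write(blk, b"...")
print(store.read(0))                                  # cold block gets promoted back up
```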


8GB RAM sticks cost ~$3500 each. If someone made one of those RAM-drive things on a PCIe board, with 10 DIMM slots, the total cost for 80GB would be at least $20K. I'm assuming it would have its own battery backup.

$20K for 80GB of storage is a lot, but for the type of system it would probably be used on, it really isn't that much.


HM.

I see 2 8GB sticks together here for 1.5k€.

http://geizhals.eu/?cat=ramddr2regecc&xf=253_16384

4GB modules are below 45€ nowadays, if you don't want the fastest speeds:

http://geizhals.eu/?cat=ramddr2&xf=253_4096~256_1x

So your 40GB RAM drive would cost at least... 500€? Doesn't sound that expensive, only about twice the price per GB of an Intel SSD...
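Roughly checking that estimate with the module price quoted above. The ~$600 street price assumed here for an 80GB X25-M is my assumption, not from the linked listings, and EUR and USD are treated as comparable for a ballpark figure.

```python
# Cost check for the 40 GB RAM-drive idea, using the 45 EUR / 4 GB module price.
module_gb, module_eur = 4, 45
ramdrive_gb = 40

ram_cost_eur = (ramdrive_gb // module_gb) * module_eur   # 10 modules -> 450 EUR
ram_eur_per_gb = ram_cost_eur / ramdrive_gb              # ~11.3 EUR/GB
ssd_usd_per_gb = 600 / 80                                # ~7.5 USD/GB (assumed SSD price)
print(ram_cost_eur, round(ram_eur_per_gb / ssd_usd_per_gb, 1))  # 450, roughly 1.5x per GB
```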


But that RAM drive would not retain data when the power is off, and it would need hundreds of dollars of custom hardware just to operate. The Gigabyte i-drive is a piece of crap that can corrupt data in the blink of an eye, so to do a better job, it would have to cost more money. Now we're up to the price of a Fusion-io for something not as good. Who is going to buy your RAM drive when you can just buy more RAM for your motherboard directly, and then get a different solid-state device for your main data?


There are many cases where there is a good reason to run a very memory-hungry process on a single system. And as pointed out, halving the size of your DIMMs can cut the cost to a fraction. A board with 8 slots could hold a maximum of 64GB using 8GB DIMMs. (You can, if you have insane amounts of money, purchase 16GB DIMMs.) But for less than half the price, you could get 16× 4GB DIMMs, and if you could put eight of those in a device that used them as storage, then the performance cost of paging out would likely (depending on the application) be relatively trivial.

And when you're talking about a difference of $10K, that can mean a lot to most people.

Of course, until such a device is developed and determined to be reliable, it's all academic.

Yes, this drive does appear to be a little short on the features and long on the price.

Compare it to a RAID 0 using 3 X25-M 80GB drives, an Areca ARC-1231ML-2GB caching controller with a ARC-6120 battery backup.

The RAID will be:

1) Bootable

2) 240GB

3) 10% cheaper

4) 600MB/sec sustained reads

5) >200MB/sec sustained writes

6) Expandable and capable of also accelerating a SATA array

Generally, it looks better across the board. I'm recommending a similar configuration for my company's next Oracle server.

200MB/sec of sustained writes is not possible with these drives. Even with RAID 0, that would imply 66 MB/sec per drive, or 16K sustained write IOPS per drive at a 4K block size.
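Spelling out that arithmetic:

```python
# The arithmetic behind the objection above.
claimed_mb_s = 200        # claimed sustained write rate for the 3-drive array
drives = 3
block_kb = 4

per_drive_mb_s = claimed_mb_s / drives                  # ~66.7 MB/s per drive
per_drive_iops = per_drive_mb_s * 1000 / block_kb       # ~16,700 4K write IOPS per drive
print(f"{per_drive_mb_s:.1f} MB/s and {per_drive_iops:.0f} write IOPS per drive")
```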

These X25-M drives start off at 11K Write IOPS. When they get to the full state and start garbage collection, they drop to less than 1K IOPS. Latencies go through the roof and your Oracle system will perform very poorly if it has any sustained write activity.

Hope you get a chance to test properly before you implement this recommendation. Unless a drive claims to be Enterprise class, it should not even be considered for applications with continuous workloads.


I've heard that you can boost worst-case write performance by formatting these MLC SSDs to less than their rated capacity. So in the example above, formatting each drive to use only 60GB instead of 80 would give you a 180GB array with better worst-case write performance. I've also heard that Intel X25-Ms do not degrade as badly as other MLC drives. I'd love the chance to do proper testing, but at the place where I work now, I doubt they will give me the opportunity.
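The back-of-envelope version of that trick: under-formatting simply hands the controller a bigger scratch area for garbage collection. Exact factory reserve sizes are not public, so treat this as illustrative.

```python
# Spare-area arithmetic for the under-formatting idea: LBAs you never format
# become extra scratch space for garbage collection.
rated_gb, formatted_gb = 80, 60

spare_fraction = 1 - formatted_gb / rated_gb
array_gb = 3 * formatted_gb
print(f"extra spare area per drive: {spare_fraction:.0%}, usable array: {array_gb} GB")  # 25%, 180 GB
```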

Actually, now that the X25-E is available, it is more reasonable to try something like this with four 32GB X25-E drives, which would max out the RAID controller's throughput. The RAID would only be 128GB total and would cost around $3800 now ($3300 in Jan '09), but it could sustain over 130,000 random reads/sec and over 13,000 random writes/sec after the cache is full.

Actually, I found out that the Oracle DB they wanted maximum performance for was really small (only 7GB). By running it in a tmpfs on a Core i7 machine with 12GB of 1600MHz DDR3, I should be able to eliminate any I/O bottlenecks completely.

Yes, this drive does appear to be a little short on the features and long on the price.

Compare it to a RAID 0 using 3 X25-M 80GB drives, an Areca ARC-1231ML-2GB caching controller with a ARC-6120 battery backup.

The 200MB/sec of sustained writes is not possible with these drives. Even with RAID 0 that would imply 66 Mbyte/sec each or 16K sustained Write IOPS at 4K block size.

The X25-M drives are rated at 71MB/sec sequential write speed, so the maximum sequential throughput for three drives should be about 213MB/sec. With 2GB of controller cache sitting between the drives and the OS, even semi-random writing should get 'smoothed' into bursts of largely sequential writes.

Although it is true that worst-case continuous 'true random' writing will result in overall throughput not much better than you would get with rotating disks.

The application where this drive/controller combination would excel is a very read-heavy database with infrequent updates, where additional data periodically gets loaded into the database and re-indexed. Which happens to be exactly the type of database my company uses.


So what happened to this ground-breaking technology?

Nothing. More hype. Apparently performance of the drive decreases with use:

I have tested these Fusion cards extensively. They slow down the more you write to them. Giving them idle time allows them to regain some - but not all - of their performance.

...

However, I have a real-world workload that gradually slowed down every day for two weeks on a 160GB card, despite 12 hours a day of idle time.

Yesss... 3 minutes for a 5GB file transfer on something that costs $3K!!! If I use it for a day or two, I think it will take 3 hours to do that 5GB copy!!!

A single Intel SLC drive has about the same IOPS as the ioDrive; it depends on the access specification.

A single Intel SLC does 34,086.76 IOPS at a 512B block size, 80% read / 20% write, with 16 outstanding I/Os. At the same access specification, the ioDrive with the 1.2.0 driver managed "just" 24,515.11 IOPS. (For comparison, a Western Digital Caviar HDD does 127.63 IOPS.)

Two Intel SLCs outperform this, with double the capacity.

Technology for idiots with money!


This is so frustrating. I ran into this problem when testing the new Intel SSDs - both the M and E model. Using completely random write patterns, both got so slow that Windows / the RAID driver actually kicked them off the bus - they timed out.

On a hunch (and after "resetting" them), I tried only writing to 512kB boundaries, in 512kB chunks. My thinking here was that this would keep fragmentation to a minimum, and also allow the drive to do full-page writes instead of read-modify-writes.

That worked better, but only marginally. The M drives settled out to a rate of 5-10MB/sec writes, albeit with many writes taking several seconds to complete. Mixing any writes in with reads limited the total aggregate IO rate to 50MB/sec. I can get that performance from a single Velociraptor.

I'm hoping to get my hands on an ioDrive, but haven't yet. It sounds like it has the same issues, plus the downside that it uses host machine resources (CPU, ram) to manage the flash. That's actually very disconcerting to me, as well. The reason I need an ioDrive is because I need all of the 32GB of ram the machine has!

For my purposes (as a temporary cache for disk reads), I don't need very much write perf, but I need that fast read perf. And I can guarantee that all my writes and reads will be the same block size and aligned to that block size (e.g. 512KB or 1MB). I would hope that, given these conditions (and perhaps leaving a little bit of free space at the end of the drive), I could coax an ioDrive into sustained high performance - like 500MB/sec reads plus 50MB/sec writes. Every word from the company, here and elsewhere, claims that this should be possible. If it's not, then I don't have any idea how they can claim this to be an Enterprise device.


I've found a better solution - maximum speed across the board - 250,000 IOPS - write as fast as read - no slowdowns.

3 x 28GB Acard 9010 + Adaptec 5805 controller. It is bulky, but you can boot from it, and you can write 80TB a day to it without ever wearing it out.

http://www.acard.com/english/fb01-product....20State%20Drive

$1200 for 3

http://www.pcprogress.com/product.asp?m1=p...MDD2-533-8GBKIT

Buy 12 8GB kits for $1000 total.

$500 for an Adaptec 5805

And three 32GB CF memory cards for under $200

Note - each Acard 9010 appears as two 14.2GB SATA II drives to the RAID controller.

You can also hang two more conventional SATA drives off the 5805.

This is so frustrating. I ran into this problem when testing the new Intel SSDs - both the M and E model. Using completely random write patterns, both got so slow that Windows / the RAID driver actually kicked them off the bus - they timed out.

By the way... What RAID controller are you using? Does it have any cache?

Also, what RAID driver version?


That was just the motherboard chipset running Intel Matrix RAID. I don't know about cache, but I doubt it has much, if any. The overall steady-state write rate was so low that I don't think cache would have changed anything.

Thanks for the Acard suggestion, but that's too bulky for me.


Intel Matrix RAID is a joke. Get a real hardware SATA RAID card from Areca or Adaptec with at least 256MB of write-back cache. Areca has cards with 2GB of cache and 12 SATA2 ports for under $800 (ARC-1231ML). They also have basic cards with 8 ports and 256MB of cache for under $300.

Write-back cache would make a HUGE difference with SSDs.

Get the RAID card, configure it for RAID 0 and make sure write-back cache is enabled, and test the Intel SSD's again.

I would be very surprised if they gave you inadequate performance if properly configured.

Acard is overkill if you don't need to handle a lot of small blocksize random writes.


A HW RAID card with lots of cache would not improve the SSD performance. I require a constant 25MB/sec of random 1MB writes. Caching helps for bursts, but in the steady state the disk needs to support that rate by itself. The best steady-state write rate I got from the M disk was about 5MB/sec, and that was with a 32-deep queue. There is absolutely nothing a caching controller can do to improve that - once the cache is full, it can't accept data faster than the SSD will write it.
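In numbers, using the figures from this thread (2GB of controller cache, a 25MB/sec requirement, and the ~5MB/sec steady-state write rate observed from the M drive):

```python
# How long a write-back cache can hide the gap before steady state takes over.
cache_gb = 2              # controller cache suggested earlier in the thread
incoming_mb_s = 25        # required sustained random 1 MB write rate
drain_mb_s = 5            # best steady-state rate observed from the M drive

net_fill_mb_s = incoming_mb_s - drain_mb_s              # cache fills at 20 MB/s
grace_s = cache_gb * 1024 / net_fill_mb_s               # ~102 seconds of headroom
print(f"cache absorbs the burst for ~{grace_s:.0f} s, then writes fall to {drain_mb_s} MB/s")
```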

