jpiszcz

Has anyone here achieved > 1.0GiB/s read/write (non-cached)?


In a forum post here a while back, I recall seeing someone with 20 drives on an Areca controller who said they got 1.0 GiB/s.

Was just curious whether anyone actually drives and tunes their controllers to their maximum?

I found this very saddening:

http://www.newegg.com/Product/ProductRevie...N82E16816116045

Pros: Amazingly simple to set up.

Cons: Slower than marketing hype says. I have 16 x ST3750640AS in a RAID 6 and am not getting anywhere near the 600-700MB/s read/write performance the marketing brochure states. However, it is fast: HD Tach 240MB/s burst, 162MB/s avg read

Other Thoughts: I doubt my mb/cpu is to blame, I have the Intel 975XBX2 and QX6700.

240 MiB/s burst with 162 MiB/s average read? Sounds like a joke to me -- I have used 12-port 9550SX cards and see similar (even slower, 80-90 MiB/s) performance, and he had 16 drives.

So I am curious: has anyone found something that even achieves anything close to that?

There are various AoE solutions out there (www.coraid.com comes to mind) with 24- and 48-drive "packs" that use 10 Gb/s Ethernet, and other vendors offer iSCSI implementations.

Just curious; many people just use a 2-drive RAID 0 for a gaming box (a 4 x 74 GB Raptor ADFD RAID 0 offers 327 MiB/s read/write, in case anyone was curious), but in a lot of the reviews on the web they put 4 or 8 drives on a 16-port controller -- what's the point?

I'd like to see a REAL benchmark with 16 or 24 Raptor 150s on a RAID card; now that would really push the card and the bus I/O performance.
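For reference, when I say non-cached I mean something along these lines on Linux -- just a sketch, with /dev/md0 and /mnt/array standing in for whatever device and mount point your array actually presents. Direct I/O keeps the page cache from inflating the numbers:

# Sequential read straight off the block device, bypassing the page cache:
dd if=/dev/md0 of=/dev/null bs=1M count=16384 iflag=direct

# Sequential write to a file on the array, again with the cache out of the picture:
dd if=/dev/zero of=/mnt/array/testfile bs=1M count=16384 oflag=direct

The 16 GiB transfer size is just there so the run is long enough to be a sustained figure rather than a burst.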


I've managed 700 MB/sec with software RAID, and 320 MB/sec with hardware RAID + 5 drives, and that's with onboard SATA ports plus an el-cheapo $20 eBay SATA controller... I'm surprised the high-end $500+ cards are getting less...

Then again, HDTach is of no use whatsoever when benchmarking RAID anyway.
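If it was Linux md (I'm assuming so here), something along these lines is the usual way to stand up a stripe and measure it without the cache artifacts HDTach-style tools suffer from -- a sketch only; the member devices, chunk size, and XFS are placeholders, not my exact setup:

# Hypothetical 6-disk stripe; adjust devices, chunk size, and filesystem to taste.
mdadm --create /dev/md0 --level=0 --raid-devices=6 --chunk=64 /dev/sd[b-g]
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/test

# Size the transfer well past RAM so the page cache can't flatter the result:
dd if=/dev/zero of=/mnt/test/big bs=1M count=32768 conv=fdatasync
dd if=/mnt/test/big of=/dev/null bs=1M iflag=direct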


What kind of configuration did you achieve 700 MB/sec with, and what filesystem did you use?


I've used Areca (and 3ware, LSI, et al.) cards; the best I can get out of a single card (24-channel Areca) is about 800 MB/sec (JFS, XFS, and ext3 are all about the same, or at least within statistical error of each other). With multiple cards I can get up to ~1.2 GB/sec, but not the 1.6 GB/sec you would expect, which I'm guessing is down to the inefficiency of the southbridge chipset and bus structures on the motherboard. I only have workstation boards here (Tyan, Asus, et al.), no server ones to try on.

You will get better speeds from software RAID versus hardware RAID (parity RAID, that is), as your CPU is much faster than the RAID parity chips (mainly the Intel IOP 3xx series nowadays). What RAID cards buy you is the offload of the parity calculation from your CPU. For most people who have servers and a bunch of drives, the purpose of that system is NOT to run the array but to run an application. If you are burning 100% of your CPU calculating parity for the drives, that leaves nothing for your application to run on, which defeats the purpose. I'd rather have 5% CPU overhead for the I/O (95%+ free for applications) and let the RAID card do the rest for me.
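If anyone wants to see that overhead for themselves, something like this shows what the host CPU pays for software parity while a big write is in flight -- a rough sketch assuming a Linux md parity array; /mnt/array and the md0_raid5 thread name are placeholders that depend on your setup and RAID level:

# Long sequential write in the background to keep parity calculation busy:
dd if=/dev/zero of=/mnt/array/big bs=1M count=65536 oflag=direct &

# Watch %sys time and the md kernel thread (e.g. md0_raid5) while it runs:
top -b -n 1 | head -n 30
iostat -x 5 3
cat /proc/mdstat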


I've managed to exceed 1.0 GB/sec using Fibre Channel on a couple of occasions: four 4 Gb FC connections streaming data off SAN 10K or 15K disks. Tested using dd if=<target> of=/dev/null in Linux environments. I did some looking a year ago to see if such rates could be achieved with more workstation-oriented controllers (e.g. Areca) but didn't seem to find much of a track record.
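An aggregate number like that is often collected by running one stream per path/LUN in parallel and summing the rates by hand -- a sketch only, with made-up device names rather than the actual SAN layout described above:

# One direct-I/O reader per FC LUN (device names are placeholders):
for lun in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    dd if=$lun of=/dev/null bs=1M count=16384 iflag=direct &
done
wait

Each dd prints its own rate when it finishes; the aggregate is just the sum.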


Is it even physically possible to get 1 GiB/s read/write speeds (uncached)?

Considering that most drives (async) do 4-8 MB/s writes, you'd need about 125 drives (roughly 1 GB/s divided by 8 MB/s per drive) to be able to get that. (Not entirely sure what uncached reads would be like, but I would presume it'd be like doing a surface scan across the entire array. *eek*)

(Correct me if I'm wrong).

But I think the speeds people are actually reporting are cached in some way, shape, or form.

Even if you were able to sustain, say, a 50 MB/s read/write speed per drive, you'd still need about 20 of them to get remotely close to the 1 GiB/s mark, and that's assuming 100% efficiency and scalability.
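The rough math behind those drive counts, treating 1 GiB/s as 1024 MiB/s and assuming perfect scaling (which no real array gives you):

# Drives needed for ~1 GiB/s at a given sustained per-drive rate:
echo $(( 1024 / 8 ))     # ~128 drives at 8 MiB/s each
echo $(( 1024 / 50 ))    # ~20 drives at 50 MiB/s each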

I would also presume that if you want to get remotely close to that, chances are you're going to be using FC or SAS drives on a 10G or FC/IB backbone. I know someone who's close to saturating a 10G network using SAS drives.

