Nizzen

Areca 1880ix-24 + 8x Crucial C300 128GB


RAID 5 :ph34r:

[screenshot: RAID 5 benchmark results]

Cache test:

[screenshot: cache test results]

Test setup:

Asus SuperComputer motherboard

Intel 980X @ 4 GHz

12 GB 1600 MHz CL6

Enermax Revolution 1050

Areca 1680ix-12 + Areca 1880ix-24

I have 9x Crucial C300 + 10x Intel 160GB SSDs. More tests soon.

This is just my fun rig :P



LMAO - I'm a little surprised CDM can handle that much speed ;)

What is this rig and what the heck do you have it doing!?!?


Someone finally has an 1880? Fantastic!

The 4K reads & writes are worse than I might have expected. High access times due to the controller? Crazy speeds other than that. Excellent scaling on the sequential reads & writes.
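As a back-of-the-envelope illustration of why the 4K random numbers look so small next to the sequential speeds (the latencies below are assumptions for illustration, not figures from these results): at low queue depth, each 4 KiB transfer has to wait out the full access time, so throughput is bounded by block size divided by latency.

```python
# Back-of-the-envelope: why 4K random throughput looks small at low queue depth.
# With one outstanding I/O, each 4 KiB transfer waits out the full access time,
# so throughput is bounded by block_size / latency. Latencies below are assumed.

block_size = 4 * 1024  # 4 KiB in bytes

for latency_ms in (0.05, 0.10, 0.25):  # hypothetical per-I/O access times
    iops = 1000.0 / latency_ms          # one I/O in flight at a time
    mb_per_s = iops * block_size / 1e6
    print(f"{latency_ms:.2f} ms access time -> {iops:,.0f} IOPS, ~{mb_per_s:.0f} MB/s")
```

Under those assumed access times, even a fast array tops out at tens of MB/s for 4K QD1, which is why controller-added latency shows up so clearly in this test.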


I am testing as well, and it seems a little rough around the edges right now. I don't have the uber array that Nizzen has, though; I am running an 8R0 Vertex array. :unsure:


Nizzen, I assume that "pass through" mode is for using the controller with software RAID or individual disks?

Can you have a non-software RAID in that mode?

Very impressive that the access time seems to be lower than with the ICH10R!


LSI 9260-8i on the left, Areca 1880ix-12 on the right. The array is 8R0 Vertex.

[screenshot: 1880 vs. 9260 comparison]

4K random gets even better at PCIe 105...

[screenshot: 4K random results]

The low-QD random access is superb!


It appears to be a firmware issue of sorts. As with the initial release of most RAID cards, there are some glaring issues; it will get better as the firmware matures, though.

Just playing with the card for now; hopefully I'll use it for PCMark Vantage. :)


Hmm, it looks like at a queue depth of 32 and above the ARC-1880ix hits a wall of some sort. A driver issue, probably?

In my testing with Areca, the storport driver doesn't scale past 32, whereas the scsiport driver scales up to 256. The storport tests showed significantly more IOPS than the scsiport at the same queue depth, though. Sorry, I don't have the data handy.
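A minimal sketch of why a cap on outstanding commands would flatten the curve (the latency and cap values below are assumptions for illustration, not measurements): by Little's Law, achieved IOPS is roughly the effective queue depth divided by per-I/O latency, so once the driver stops accepting more than 32 outstanding I/Os, higher requested queue depths add nothing.

```python
# Illustrative model, not measured data: if the driver caps outstanding commands
# at 32, requested queue depths above that stop adding IOPS.
# Little's Law: achieved IOPS ~ effective queue depth / per-I/O latency.

LATENCY_S = 0.0002   # assumed 0.2 ms service time per 4K I/O
DRIVER_CAP = 32      # assumed storport-style limit on outstanding I/Os

for requested_qd in (1, 4, 16, 32, 64, 128, 256):
    effective_qd = min(requested_qd, DRIVER_CAP)
    iops = effective_qd / LATENCY_S
    print(f"QD {requested_qd:>3}: effective QD {effective_qd:>2} -> {iops:>9,.0f} IOPS")
```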


Very interesting. I haven't tested the scsiport drivers; I will be doing that shortly, though. Thanks for the tip! What drives did you test with?

EDIT: So does the scsiport driver actually go higher?


> In my testing with Areca, the storport driver doesn't scale past 32, whereas the scsiport driver scales up to 256. The storport tests showed significantly more IOPS than the scsiport at the same queue depth, though.

Ahh, that's what it is. I run an Areca myself with the storport driver, since my queue depths rarely get that high. Unfortunately I don't have sufficient resources here to benchmark production hardware, and I'm not about to futz that much with my personal box.


My testing was with an ARC-1280 and SATA drives. Scsiport did indeed go much, much higher, and since this was for a heavily I/O-bound SQL Server that regularly saw 200+ QD, I went with the scsiport driver in production. When I contacted Areca about it, they said it was due to a bug in the driver, but I never followed up on that.

I'm curious: is your LSI using a scsiport driver?


Yes, however I am having issues with it at the moment. Their latest firmware releases have had more bugs than... well, I dunno... just tons of bugs. :lol:

My results with the 1880 have mirrored your statement that the scsiport driver scales better. Thanks for that! However, the low-QD performance is not as good, as you also stated, so I am definitely staying with the storport driver.


IOPS?

[screenshot: IOPS results] :ph34r:

Wow. Very impressive!

A lot of people seem to have success by striping across multiple cards.



Reading all these reviews is very encouraging, especially given the issue I am facing. I recently upgraded from the Areca 1231ML with 4GB of cache to the 1880ix with the default 1GB of cache (planning to upgrade to 4GB). The issue I am seeing is that RAID 0 performance with 5x SATA III Crucial C300 128GB drives is lower on the 1880 than it was on the 1231. I know there is a cache difference between the two cards, but I wouldn't think that would affect the transfer rate (see the rough expectation check sketched below). I made sure that the settings on both cards matched. I even deleted the previous RAID array and created a new one on the 1880, with no difference. Does anyone have any ideas?

Thanks,

Randman

EVGA X58 4-WAY SLI

12GB Corsair RAM 8-8-8-24 1600

Asus Xonar Xense

EVGA 2x480 SLI

Areca 1880IX 1GB

5x Crucial C300 128GB SSDs

[two benchmark screenshots]
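A rough expectation check for the question above (all per-drive and controller figures here are assumptions, not measurements): sustained sequential throughput in RAID 0 should scale roughly with the number of members until a controller or bus ceiling is reached, and the cache size should not change the steady-state transfer rate once the test footprint exceeds the cache.

```python
# Rough expectation check (per-drive and controller figures are assumptions,
# not measurements): sustained sequential throughput in RAID 0 scales roughly
# with member count until a controller/bus ceiling, independent of cache size
# once the test footprint exceeds the cache.

per_drive_seq_read_mb = 350   # assumed C300 128GB sequential read, MB/s
drives = 5
controller_ceiling_mb = 2800  # rough practical limit for a PCIe 2.0 x8 card

expected_mb = min(drives * per_drive_seq_read_mb, controller_ceiling_mb)
print(f"Expected sequential read ceiling: ~{expected_mb} MB/s across {drives} drives")
```

Given the early-firmware issues mentioned earlier in the thread, firmware and driver revisions on the 1880 are worth ruling out before worrying about the cache size.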

