mattsimis

Cheapest RAID controller that can do 6-7GB/s sustained in RAID0?


Just for fun, I wanted to put together a string of SSDs in RAID0 and use it for a games drive (since all the saves are in the cloud now, redundancy isn't important). I've been picking up 480GB SATA3 SSDs for the last couple of weeks (started out as another idea). What's the cheapest PCIe 3.0 performance-oriented controller I can use for this? I've found the Dell PERC H330, which is an LSI SAS3008-based controller, for about $240... anything else out there?

Again, this is purely for fun; I'm aware "cheap + RAID0" is a bad plan for data integrity!


Those 2 connectors each "fan out" to four SAS/SATA connectors. If you wire all 8 cables to 6G SSDs, you can expect an average of about 450 MB/second (+/-) per SSD in RAID-0 mode.

For scaling comparisons, this next measurement achieved 1,879 MB/second with four Samsung 840 Pro SSDs and a cheap Highpoint RocketRAID controller:

http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q.Premium.Direct.IO.2.jpg

1,879 / 4 ~= 470 MB/second (average); 470 x 8 ~= 3,760 MB/second

The latter 3,760 is an optimistic / "best case" prediction for 8 x 6G SSDs.

You would be better off, performance-wise, to upgrade your SSDs to 12G SAS drives, but they are presently very expensive; e.g. Toshiba has a line of such enterprise-class 12G SSDs.

Also, the upstream bandwidth of an x8 PCIe 3.0 edge connector is:

x8 @ 8G / 8.125 bits per byte = ~7.88 GB/second

i.e. exactly twice the upstream bandwidth of Intel's DMI 3.0 link, and just shy of 1 GB per lane, because of the 2-bit sync header on each "jumbo frame": PCIe 3.0 uses 128b/130b encoding = 130 bits / 16 bytes = 8.125 bits per byte.
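For anyone who wants to play with these numbers, here is the same arithmetic as a small Python sketch (the encoding figures come from the PCIe specs; the ~470 MB/second per-drive average is taken from the RocketRAID measurement above):

def link_bandwidth_GBps(lanes, gt_per_s, bits_per_byte):
    # Usable GB/s for a serial link: lanes x transfer rate / encoding overhead
    return lanes * gt_per_s / bits_per_byte

# PCIe 3.0: 8 GT/s per lane, 128b/130b -> 130 bits / 16 bytes = 8.125 bits per byte
pcie3_x8 = link_bandwidth_GBps(8, 8.0, 8.125)   # ~7.88 GB/s upstream
dmi3     = link_bandwidth_GBps(4, 8.0, 8.125)   # DMI 3.0 is electrically x4 PCIe 3.0
print(pcie3_x8, dmi3)                           # ~7.88 and ~3.94 -> exactly 2:1

per_ssd = 1879 / 4                  # ~470 MB/s per 6G SSD, from the 4-drive test
print(per_ssd * 8)                  # ~3,758 MB/s "best case" for 8 drives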

Hope this helps.


p.s. The customer reviews at Newegg.com are worth reading:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118235

Edited by MRFS

To put it differently, even with almost zero controller overhead and the best 6G SSDs currently being manufactured, you should not expect to exceed 8 @ 550 MB/second = 4,400 MB/second.

Things would be much different now if the SATA standards group had adopted a "SATA-IV" policy of syncing with PCIe 3.0 chipsets, i.e. an 8G clock rate and jumbo frames, at a minimum.

Using a factor of 470/600 as the average scaling penalty, we predict:

8 "SATA-IV" SSDs @ 8G / 8.125 x (470/600) = 6.17 GB/second (your objective)

Thus, it's the combination of the 6G MAX HEADROOM of the current SATA-III standard and its 8b/10b "legacy frame" that prevents you from reaching your objective with 6G SSDs.
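If it's useful, here is that prediction as a short Python sketch. To be clear, "SATA-IV" is hypothetical (no such standard exists); the sketch just assumes SATA had synced with PCIe 3.0, i.e. an 8G clock with 128b/130b framing, and applies the 470/600 scaling penalty observed above:

SSDS = 8
print(SSDS * 550)                         # 4,400 MB/s: the hard ceiling for 8 x 6G SSDs

scaling = 470 / 600                       # observed average / theoretical 600 MB/s payload rate
sata4_per_drive = 8.0 / 8.125             # hypothetical 8G clock, 128b/130b -> GB/s per drive
print(SSDS * sata4_per_drive * scaling)   # ~6.17 GB/s, i.e. the OP's objective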

Edited by MRFS

FWIW, Intel and Areca are basically rebranding LSI. If you're concerned about support for an LSI card, I would buy an LSI-branded card directly, instead of a 3rd-party one like a Dell or Intel.


> For scaling comparisons, this next measurement achieved 1,879 MB/second with four Samsung 840 Pro SSDs and a cheap Highpoint RocketRAID controller... 1,879 / 4 ~= 470 MB/second (average); 470 x 8 ~= 3,760 MB/second

Well, as a reference point: I'm using my cheap, no-name SSDs (Kingdian P3's) and onboard Z97 Intel software RAID0. This is 3 drives:

[screenshot: Kingdian_P3_480gb_onboard_raid.png]

That's about 520MB/s each, and effectively "free": these are $99 SSDs and no hardware controller at all. With 4 it would probably do 2GB/s, at which point I think the Intel PCH hits a wall, hence the controller plan. I'll post back when I get the Dell 3008-based card; let's see if it's as woeful as the Newegg reviews suggest.

Edited by mattsimis


> With 4 it would prob do 2GB/s at which point I think the Intel PCH hits a wall

Correct. Here's the math:

Intel's Z97 chipset uses a DMI 2.0 link upstream. That's x4 PCIe 2.0 lanes @ 5 GHz / 10 bits per byte = 2.0 GB/second MAX HEADROOM upstream. That chipset also uses the 8b/10b "legacy frame": every 8 data bits are encoded as a 10-bit symbol on the wire, i.e. 10 bits per byte.

Your average for 3 x SSDs = 1,579 / 3 ~= 526 MB/second (best case). That's pretty good for a software RAID-0.
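Here's that wall in numbers, as a short Python sketch (the 1,579 MB/second figure is from your screenshot):

lanes, gt_per_s, bits_per_byte = 4, 5.0, 10   # DMI 2.0 = x4 PCIe 2.0, 8b/10b encoding
print(lanes * gt_per_s / bits_per_byte)       # 2.0 GB/s MAX HEADROOM upstream

per_ssd = 1579 / 3                            # ~526 MB/s average per SSD
print(4 * per_ssd)                            # ~2,105 MB/s: a 4th drive would saturate DMI 2.0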


Here are a few obstacles you might need to anticipate:

(a) When we tried to do a fresh install of Windows onto 4 brand-new Intel SSDs and a Highpoint RocketRAID 2720SGL, the problem turned out to be the Adaptec SFF-8087 cable: we switched to the Startech version (from Newegg) and that cable continues to work AOK. So, be on the lookout for cable incompatibility.

(b) That 2720SGL comes with INT13 ENABLED, which causes nasty results when the chipset is set to RAID mode. INT13 is Interrupt 13, which makes that card bootable. One sequence that works is to install that 2720SGL withOUT any drives connected, then flash the BIOS in that card to DISABLE INT13. Of course, if one is doing a fresh install to a bootable RAID-0 using that controller, then the INT13 default is the correct setting, but it most often requires that the motherboard BIOS setting be changed to IDE or AHCI for drives wired to integrated SATA ports. Thus, don't be surprised if installing your RAID controller results in DISABLING the motherboard's integrated SATA ports.

(c) Be sure you are using the latest device driver for the card you ordered; there may also be a need to flash the latest BIOS for that card. Nevertheless, when we tried to update to the latest device driver for the Highpoint 2720SGL, that new driver caused a lot of instability, probably because we did NOT reformat the array members. So, we continue to run with a prior device driver, which may explain why our averages are lower than your average per SSD.

(d) Flashing the BIOS in our 2720SGL was also needed to enable 6G transmission clocks on each SATA/SAS port; this limitation appears to be peculiar to that particular card. Thus, if you find that your SSDs are actually running at 3G, the solution is most probably an upgraded device driver and/or upgraded BIOS in your add-on card ("AOC").

Hope this helps.

Edited by MRFS

One more thing: just because a PCIe expansion slot is mechanically x16 (full length), the chipset may be assigning fewer logical PCIe lanes to any given slot. We've been circumventing that behavior by installing our RAID controllers in the first x16 expansion slot, which is normally where x16 video cards are inserted. (In our office, we have no need for super-high-bandwidth video.)

Since you intend to install a RAID controller with an x8 edge connector, you should be fine as long as you confirm that the chipset is also assigning x8 logical lanes to that expansion slot, NOT x4 or less. Your motherboard User Manual should have documentation on this point. And there may be a BIOS setting which controls how many lanes are assigned to the other x16 slots below the primary slot (closest to the CPU socket).

Also, very often the summary Specs published in motherboard marketing literature document the lane assignments for each PCIe slot, e.g. if your motherboard is still being sold by Newegg.com, those Specs should be in Newegg's description of that motherboard. Look for text like this: "x16 / 0" or "x8 / x8".

"x16 / 0" means x16 lanes are assigned to the first expansion slot when the second expansion slot is empty. "x8 / x8" means x8 lanes are assigned to the first expansion slot and x8 lanes are also assigned to the second expansion slot when both slots are populated. And so on.

You wouldn't want your upstream bandwidth cut IN HALF, e.g. from x8 to x4, merely because of lane assignment decisions made by the chipset without your knowledge or control.
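If you can boot Linux, one way to confirm the negotiated width is to read the standard PCIe sysfs attributes; here is a minimal Python sketch (the device address 0000:01:00.0 is only an example; substitute your controller's address from lspci):

from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:01:00.0")   # example address; find yours with lspci
for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    p = dev / attr
    if p.exists():
        print(attr, "=", p.read_text().strip())

# If current_link_width reports 4 for a card with an x8 edge connector, the
# chipset (or a populated neighboring slot) has cut the link to x4.

On Windows, a utility like HWiNFO can report the same current vs. maximum link width per device.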


So... the problem with the LSI 9341 (and 9300, 9311, etc.) is that they require Interrupt 15 support in the motherboard, which isn't in most consumer boards (it's mostly in server boards). I'm getting Code 10 in Windows (and even completely disabling onboard SATA doesn't help), as were others, who contacted LSI and were told:

=============================================
The 9341-4i and -8i are software raid controllers. It has to be able to allocate
memory during boot up or the driver will not initialize. The system board must
support Interrupt 15 (memory allocation). Many desktop and workstation
boards do not support INT15. There is no workaround it

The better option is use a hardware raid model 9361-4i or -8i

-----

Sucks big time.


Anyone know if the IBM ServeRAID 5210 is locked to IBM hardware (or otherwise not compatible with "normal" PCs)? It's an LSI SAS3108-based card.

The alternative is the Supermicro AOC SAS2308-based cards, which are nearly the same price but a generation older.


OK... before throwing more cash at it, tonight I got the 9341-8i working by crossflashing (possible via UEFI shell only!! ffs, so annoying) to a 9300-8i. I now have an IT-mode device, which isn't what I wanted.

I set up striping in Windows 10 and performance is abysmal, capped at 200MB/s. It's largely the same if just using one drive with no striping. These are the same drives as above, which do 550MB/s off the onboard Intel PCH.

Back to googling...


I believe an HBA BIOS must be compatible with the corresponding device driver. I'm not at all surprised that you got abysmal results.


> I believe an HBA BIOS must be compatible with the corresponding device driver. I'm not at all surprised that you got abysmal results.

Of course I'm using the 9300 driver! I also let Windows use its own driver (provided by LSI, dated 2015) and the latest one on the site.

The 9341 seems to use a "MegaRAID" driver, while the 9300 and 9311 use the same LSI_SAS3.sys driver. I'll try the MegaRAID driver (i.e. the one for the card pre-crossflashing) later, but I doubt it will work, from looking at the INF file.

I followed the guide here, which suggests the card is faster as a 9300; RAID0 on the 9341 was the slow option (and not an option for me, as I don't have a server board):

https://forums.servethehome.com/index.php?threads/crossflashing-of-lsi-9341-8i-to-lsi-9300-8i-success-but-no-smart-pass-through.3522/

Edited by mattsimis


Managed to get the 9311-8i IR firmware installed, including RAID0 support (which required disabling LAN, GFX and other things in the BIOS). Exactly the same: 200MB/s in CrystalMark (and a file copy in Windows drops from 2GB/s to 190MB/s after 2 seconds). Sigh.


> Wouldn't you technically need a 16x PCIe 3.0 card to get that kind of performance?

Indeed, but we have moved on to a new low of trying to break the mighty 200MB/s barrier now. :D


Ordered a SAS2308-based card today. Then I came home and had a thought: I can use the onboard GFX on my HTPC and use its single PCIe slot to test the LSI (now on SAS3008 firmware from Supermicro) in Ubuntu. And guess what... 540MB/s with one drive, 940MB/s with two. The 200MB/s limit is some sort of weird fault with my motherboard (Gigabyte Z97X Gaming 3): only 1 slot seems to work properly, the one slot I already use for my GPU.

So... there seems to be nothing wrong with the LSI card, though now I have 2x SAS HBAs!!


Is this your motherboard?

http://www.newegg.com/Product/Product.aspx?Item=N82E16813128711&Tpk=N82E16813128711

Tell us what's in the 3 PCIe x16 slots. Note: the top 2 are PCIe 3.0; the bottom PCIe x16 slot is PCIe 2.0.

You should review the BIOS options as documented in your motherboard's User Manual. See more documentation on lane assignments here:

http://www.gigabyte.com/products/product-page.aspx?pid=4966#sp

1. 1 x PCI Express x16 slot, running at x16 (PCIEX16)

* For optimum performance, if only one PCI Express graphics card is to be installed, be sure to install it in the PCIEX16 slot.


2. 1 x PCI Express x16 slot, running at x8 (PCIEX8)

* The PCIEX8 slot shares bandwidth with the PCIEX16 slot. When the PCIEX8 slot is populated, the PCIEX16 slot will operate at up to x8 mode. (The PCIEX16 and PCIEX8 slots conform to PCI Express 3.0 standard.)

3. 1 x PCI Express x16 slot, running at x4 (PCIEX4)

* The PCIEX4 slot shares bandwidth with all PCI Express x1 slots. All PCI Express x1 slots will become unavailable when a PCIe x4 expansion card is installed.

* When installing a x8 or above card in the PCIEX4 slot, make sure to set PCIE Slot Configuration (PCH) in BIOS Setup to x4. (Refer to Chapter 2, "BIOS Setup," "Peripherals," for more information.)

Edited by MRFS


It's all fine now. The middle slot and the LSI 93xx don't like each other: the LSI card isn't detected, and the mobo's BIOS got corrupted.

What did work, however, was moving the 93xx to slot 1 (the full-fat 16x PCIe 3.0 slot) and the GPU to slot 2, which brings the speed down to PCIe 3.0 8x for (physical) 16x slots 1 and 2.

This is 5x OCZ Trion 150's on the LSI SAS9341, reflashed to SAS9311 (/ Supermicro SAS3008 generic), running at PCIe 3.0 8x:

[screenshot: Capture.png]

Also, unlike the manual above (which I had found and read yesterday), this statement is incorrect: "All PCI Express x1 slots will become unavailable when a PCIe x4 expansion card is installed."

What actually happens is that all slots continue to work on the board; however, the 4x slot (electrically; it's Slot 3, a physical 16x-length slot) operates at 1x (but at PCIe 2.0 speed).

