Skouperd

DAS + Internal RAID slow speed


Hi everybody, after researching several sites on the web, the closest anybody came to solving my problem was on this site.

So, without further ado, my current setup is as follows:

1. My "Server" is a relative high end desktop (old gaming motherboard), quad core CPU, and several network cards. The server basically just acts as a file server so there is very little overhead on the CPU. The OS installed is Windows Server 2008 R2.

2. The motherboard has 5 onboard SATA ports, populated with 5x 1.5TB HDDs (1.5TB drives are currently the cheapest per GB).

3. A cheap PCI SATA card houses my OS drive and a spare drive (the card only has two SATA ports).

4. My DAS enclosure (http://www.chyangfun.com/pro01_2_3.asp) is equipped with two SIL3726 port multiplier chips.

5. I’ve plugged 5x 1.5TB hard drives into the DAS enclosure: 4 drives on one SIL3726 board and one on the second SIL3726 board. (The remaining slots have been filled with other drives.)

6. The DAS enclosure came bundled with a SIL3132 RAID controller.

I set this up as follows:

1. Using my motherboard's onboard RAID, I created a RAID 0 array with the 5x 1.5TB internal drives. I'll call this Array1.

2. Using the SIL3132 controller, I then created another RAID 0 array in my DAS, again with 5x 1.5TB drives. Let’s refer to this as Array2.

3. Using Windows Server 2008 R2's software mirroring, I then mirrored these two arrays, resulting in a RAID 0+1 setup with 7.5TB of usable space.

4. In summary, I have 10x 1.5TB hard drives: 5 in the DAS and 5 internally. (Plus several others that are not really relevant to this discussion.)

The reason I opted for RAID 0+1 and not RAID 10 is that I regularly need to extend my arrays. With a RAID 0+1 array I can break the (software) mirror, destroy one of the arrays, plug more hard drives into that array, recreate a new, larger RAID 0 array, and copy the files from the original array onto the newly enlarged one. Once the files have been copied successfully, I can break down the first array, increase its size, and simply mirror it again. I appreciate the risk involved in actually doing this, but given the hardware I have available, it is a risk I am comfortable with.

Where I currently have a problem is that the read speeds on the RAID 0+1 array were shocking: I was getting in the region of 250MB/s. That is acceptable in most scenarios, but the mere fact that I have 10 drives, each capable of around 80 to 100MB/s, tickles that “nerd” feeling in me that says I can get more.

Analysing the two arrays in more detail, I was getting around 230MB/s on Array1 (the motherboard RAID 0) but only 130MB/s on Array2 (the DAS with the SIL3132). With the mirror reading from both sources, this ties back to the 250MB/s I observed on the RAID 0+1 setup (the speed of the slower array times the number of arrays: 130 x 2 = 260, less some overhead ≈ 250MB/s). My initial hope was to get over 400MB/s (I was hoping for 500MB/s, but I am a realist).
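As a rough back-of-envelope check of that estimate, here is a small Python sketch using the figures quoted above; the assumption that reads are split roughly evenly across both halves of the mirror is mine.

```python
# Back-of-envelope check of the mirrored-read estimate above.
# Assumption: reads are split roughly evenly across both halves of the
# mirror, so the slower half effectively paces both.
array1_read = 230   # MB/s, onboard RAID 0 measured on its own
array2_read = 130   # MB/s, DAS RAID 0 via the SIL3132 measured on its own

slower = min(array1_read, array2_read)
mirror_estimate = slower * 2            # both halves held to the slower side
print(f"estimated mirror read: ~{mirror_estimate} MB/s")   # ~260 MB/s, near the observed 250 MB/s

per_drive = 80      # MB/s, conservative per-disk figure quoted above
drives = 10
print(f"raw potential of {drives} drives: ~{per_drive * drives} MB/s")   # hence the hope for 400+ MB/s
```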

From researching the problem, it appears the bottleneck is tied to the SIL3132; refer to this post here:

I am not sure whether the limitation comes from the bus speed of the PCI-Express x1 interface (250MB/s), or perhaps from eSATA itself. However, the SIL3726 is stated to be SATA 2 (3Gb/s) compliant, which means the bus speed is definitely a potential bottleneck.
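For reference, here is a quick sketch of the interface ceilings in play. The per-lane PCIe figure assumes a gen-1 lane (which is what a SIL3132 card uses), and the 8b/10b encoding overhead is my assumption for converting the 3Gb/s line rate into MB/s.

```python
# Rough ceilings of the links between the DAS and the host.
pcie1_x1_mb_s = 250            # MB/s usable per PCIe 1.x lane
sata2_line_rate_gbps = 3.0     # Gb/s line rate on the eSATA link to each SIL3726

# SATA uses 8b/10b encoding: 10 bits on the wire per byte of data.
sata2_mb_s = sata2_line_rate_gbps * 1000 / 10

print(f"PCIe 1.x x1 slot:     ~{pcie1_x1_mb_s} MB/s")
print(f"SATA II / eSATA link: ~{sata2_mb_s:.0f} MB/s")
# Either way the path into the enclosure tops out around 250-300 MB/s,
# well below what ten drives can deliver in aggregate.
```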

It is obvious that if I want to reach my goal of 400MB/s throughput, I will need to upgrade some hardware in my “server”. These are my current restrictions / objectives:

1. I have reached the limit of how many hard drives I can house directly on the motherboard, so ideally I need something that lets me expand the internal drives a bit further, or I will need to obtain another DAS at some point.

2. I live in South Africa, and we don’t get all the name brands here, such as HighPoint, Areca, etc. We do, however, get Adaptec controllers. A good indication of the cards I can get is this store: http://www.sybaritic.co.za (the rand / dollar exchange rate is around R6.50 per USD).

3. This is a home setup, so obviously I would like to keep costs as low as possible and re-use as much of my existing hardware as I can, while still having the ability to expand in future.

4. I continue to grow my array on a regular basis. I’ve managed to do just that with the RAID 0+1 setup for quite some time now without any data loss, but if one could incorporate a dynamic RAID expansion or RAID migration solution, that would be awesome.

Now, from a hardware perspective, I realise I will need to chuck the SIL3132 card if I want better throughput, but I would prefer to keep the DAS, since I reckon one should be able to get at least 250MB/s out of it (which, paired with an internal 250MB/s, should give me my 400MB/s read target).

The RAID card I have been eyeing for a very long time is the Adaptec 3805, but I cannot find any confirmation that it will work with the SIL3726 chipset in my DAS. Also, I am fairly new to dedicated RAID cards (I’ve mostly been dealing with the cheapies), so I am not exactly sure how SATA / SAS expanders work, since the SIL3726 uses eSATA cables.

If anybody asks why I am aiming for 400MB/s: truth be told, it just sounds better than 200MB/s, and I know the hard drives are definitely capable of it (and more). So if I cannot reach it, it will not be a train smash, but if I can, that will be awesome!

I would appreciate any constructive feedback and suggestions. Also, I apologise for any spelling and grammatical errors; English is not my first language.

Kind regards

Skouperd


The SIL3726 is a piece of junk and is more likely to be your bottleneck (rather than the SIL3132). In my experience it doesn't work with non-SI RAID cards either, but I haven't tried many. I wasn't able to get more than about 130MB/sec out of the SIL3726 before it caught fire.

If you want decent bandwidth out of your DAS then you should look at replacing the DAS itself or using direct connections to the drives in it (e.g. with a multiplex cable) and a controller with sufficient ports to plug into the motherboard.



Thank you for the response. (I was wondering if anybody actually read this).

With regard to your point that the SIL3726 is a piece of junk, I am tempted to agree with you. However, since my DAS is fitted with two of them (each able to run 4 drives), each using its own eSATA cable, even if I can squeeze just 100MB/s out of each one I will still be able to reach my goal of 400MB/s (200MB/s on my internal array, plus 100MB/s + 100MB/s on the DAS arrays). If I can squeeze 125MB/s out of each of the SIL3726s, that would increase my read speed to almost 500MB/s, which is more or less the realistic maximum I should expect given the cost of the DAS.

My original reasoning for opting for SATA port multiplier technology is as follows. If each SATA port multiplier (such as the SIL3726) can house 4 drives (some can house 5), then an Adaptec 3805 or 5805 should effectively be able to manage 8 of these port multipliers, which gives you 32 SATA drives (or 40 in the 5-drive case). If each port multiplier is restricted to, say, 100MB/s (you mentioned 130MB/s), that should equate to a total throughput of 800MB/s, which is approaching the bus bandwidth limit of the Adaptec 3805 (PCI Express x4 ≈ 8Gb/s ≈ 1GB/s).
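A quick sketch of that scaling argument; the 100MB/s-per-multiplier figure is the assumption from the paragraph above, and the PCIe number assumes gen-1 lanes.

```python
# Aggregate throughput if every port on an 8-port card feeds a port multiplier.
ports_on_card = 8        # e.g. an 8-port controller such as the 3805 / 5805
drives_per_pm = 4        # SIL3726-class multiplier (some take 5)
mb_s_per_pm = 100        # assumed usable throughput per multiplier link

total_drives = ports_on_card * drives_per_pm
aggregate = ports_on_card * mb_s_per_pm
print(f"drives addressable:   {total_drives}")        # 32 (40 with 5-drive multipliers)
print(f"aggregate throughput: ~{aggregate} MB/s")     # ~800 MB/s

pcie_x4_gen1 = 4 * 250   # MB/s upstream on a PCIe 1.x x4 card
print(f"PCIe x4 (gen 1) bus:  ~{pcie_x4_gen1} MB/s")  # ~1GB/s, so the bus is not yet the limit
```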

In summary, I am very curious to find out whether RAID controllers such as the Adaptec 3805 or 5805 will work with normal, el-cheapo SATA port multipliers. I know HighPoint has released a card specifically targeted at SATA port multiplier technology, as can be seen in the link below, which I suppose supports my theory that the above is a poor man's way to get both storage space and reasonably good throughput.

http://news.softpedia.com/news/HighPoint-Is-First-to-Launch-PCI-E-2-0-x16-SATA-Port-Multiplier-172734.shtml

At the end of the day, I will only be able to get either a new DAS or a new RAID controller. Ideally I would love to upgrade both at the same time, but I will need to choose between getting a proper RAID controller and getting a better DAS. Getting a better DAS on its own will not help me much, since I still need a decent RAID controller.

In any event, the above is just my thinking, but I would appreciate it if you could say what you would do in this situation. Let me know if my maths makes sense, and whether I am perhaps misunderstanding something.

Kind regards


The cheapest thing I can think of is to use just the enclosure and disregard the SIL3726 multiplier chips. Run one separate eSATA cable for each drive through the back of your case into a RAID card with eSATA ports. Why eSATA? Because eSATA gives you more cable length. If it makes you feel more comfortable, you can bundle all the cables into some sort of sleeve. This is a lot cheaper than trying to find a whole new eSATA enclosure with a better port multiplier chip.

The question is, are there any SATA RAID cards with eSATA ports instead of regular SATA ports?


While there are standards for port multipliers as part of the SATA specification, I've yet to get the SIL3726 to work properly with any non-SI device. But like I said, mine caught fire a while ago, so I haven't been able to test it with my latest hardware. Personally, I prefer SAS expanders, as they're faster, more reliable, and more manageable, but they cost a hell of a lot more.

At the end of the day though, if 400MB/sec is all you're looking for, then connect 2 drives to one multiplier and 3 drives to the other; that should get you at least 100MB/sec per multiplier, which, added to the motherboard drives, gets you to your target.
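Roughly, using the ~230MB/sec measured earlier on the internal array and the conservative 100MB/sec-per-multiplier figure above:

```python
# Rough total with the DAS drives split 2 / 3 across the two SIL3726 multipliers.
internal_array = 230     # MB/s, the onboard RAID 0 as measured earlier in the thread
per_multiplier = 100     # MB/s, conservative figure per SIL3726 link

estimate = internal_array + 2 * per_multiplier
print(f"estimated combined read: ~{estimate} MB/s")   # ~430 MB/s, past the 400 MB/s target
```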

There's a reason I went for the LSI 1068E for my setup, though: it has 8 ports and 8 PCIe lanes and can deliver well over 1.6GB/sec per card. It also has 4 external ports per card, and they only cost £25 off eBay.

In my experience eSATA is a myth. It's simply SATA with a different connector on the end. You can easily get cables with eSATA at one end and SATA at the other; the electrical signalling is the same (in theory eSATA has higher voltage tolerances for more length but in practice this is irrelevant). Standard (non-e) SATA controllers will easily do more than 2m with standard internal SATA cables, so if you ask me you can completely ignore the distinction between the two.


(in theory eSATA has higher voltage tolerances for more length but in practice this is irrelevant)

Interesting... But surely there IS some benefit to eSATA for lengths of more than 2 metres?

If I measure the output of an eSATA chip, you're saying the signalling voltage isn't any higher?


Hi guys, thank you all for your responses. With regard to the suggestion to run cables directly to the hard drives in the DAS, I agree that is actually a very good point and one that has crossed my mind. The only thing is that in order to make it work, you need a SATA card capable of handling a fair number of drives. At last count I think I have 15 drives (and growing), but the current setup should keep me out of mischief at least until I can get a decent enclosure / PM.

Thanks for all the comments people, very much appreciated.


The only thing is that in order to make it work, you need a SATA card capable of handling a fair number of drives. At last count I think I have 15 drives (and growing), but the current setup should keep me out of mischief at least until I can get a decent enclosure / PM.

What's the problem? Do it the same way you did it before?


Interesting... But surely there IS some benefit to eSATA for lengths of more than 2 metres?

If I measure the output of an eSATA chip, you're saying the signalling voltage isn't any higher?

I haven't measured the output directly, only tested operation in practice. What I did was daisy-chain five 50cm internal SATA cables, plug various drives onto the end of them, and connect the other end to various standard internal SATA controllers. Regardless of signalling voltage, all drives and controllers worked fine with 2.5m of cable, despite even the eSATA spec only allowing 2m max. So in other words, you don't "need" eSATA in practice to use longer cables; even standard SATA has no problem going to 2.5m+.

Skouperd:

Since you're just using basic RAID0 and RAID1 modes, you can easily get cheaper, older controllers that can handle the job - such as one of these

Add on a couple of multilane adapters if you want to cut down on cable clutter. These can be physically slotted into the same space as the port multiplier on modular DAS devices.

Edited by qasdfdsaq


I haven't measured the output directly, only tested operation in practice. What I did was daisy-chain five 50cm internal SATA cables, plug various drives onto the end of them, and connect the other end to various standard internal SATA controllers. Regardless of signalling voltage, all drives and controllers worked fine with 2.5m of cable, despite even the eSATA spec only allowing 2m max. So in other words, you don't "need" eSATA in practice to use longer cables; even standard SATA has no problem going to 2.5m+.

This is extrapolative testing. OK, it worked, but you are technically out of spec. Now they have power and SATA all in one cable. The new P67 boards have a port for it.

Also, the card you linked him uses some form of extended PCI... It will be a lot slower if he plugs it into a standard PCI slot. He needs something that's at least x4 or x8 PCIe.

Edited by mockingbird


What's the problem? Do it the same way you did it before?

Hi Mockingbird, it was only very recently that I needed to "upgrade" my arrays again with an additional 4 drives, which is what led me to acquire the DAS device. Before then, I connected some of the drives to just normal el-cheapo SATA controllers (2- and 4-port).

qasdfdsaq, with regard to the two links you sent me: the multilane adaptor looks very cool indeed, thanks for linking it. As for the Adaptec 16-port card, I would rather keep an eye out for one that is PCI Express.

I also received confirmation from Adaptec that none of their 3-series or 5-series RAID controllers will work with port multipliers. So it looks like the HighPoint card (here: http://news.softpedia.com/news/HighPoint-Is-First-to-Launch-PCI-E-2-0-x16-SATA-Port-Multiplier-172734.shtml) may increasingly be the option / solution for me. (Now if they would just release it!)

Thank you all for your input, it is very much appreciated.


This is extrapolative testing. OK, it worked, but you are technically out of spec. Now they have power and SATA all in one cable. The new P67 boards have a port for it.

That's kinda the point. If devices with 1m max cable length spec can easily work over cables of 2.5m length, then there's no need to "upgrade" to the 2m spec in practice if you want to use 2m long cables.

Mind you, a second look at the eSATA spec shows it doesn't actually increase the maximum transmit voltage, only modifies the minimums. So the only real difference remains the connector(s) and shielding (which, again, isn't necessary in practice IMO).

For the card, I didn't look that hard specifically; I just went on eBay and typed in "16 port SATA". Like I said, I got my 8-port HP SAS HBAs for £25 each, though that was a heavily discounted price on eBay. But there are certainly options. You can indeed put an x8 card in an x16 slot (most of the time) as well as an x1 or x4 slot (if they're open-ended slots, or if you don't mind voiding your warranty with a soldering iron). The HP SC44ge comes with 4 external ports (1 multilane connector) and 4 internal ports, and is PCIe x8; it still occasionally goes on eBay for less than £50. It also supports both SATA and SAS devices; it works with SAS expanders, but I've not tried SATA port multipliers. It does RAID 0 and 1 (and combinations thereof) only, but if that's all you need, it's cheap, fast connectivity.

If you're in the US, try this while you're at it: http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=220718653281&ru=http%3A%2F%2Fshop.ebay.com%3A80%2F%3F_from%3DR40%26_trksid%3Dp5197.m570.l1313%26_nkw%3D220718653281%26_sacat%3DSee-All-Categories%26_fvi%3D1&_rdc=1

