Darkknight

Raid 5 technical questions


I'm building a 4TB RAID 5 (5x 7K1000) on an ICH10R/Core i3 setup. I've read through quite a few topics here, and maybe I'm using the wrong search terms, but I simply cannot find what I'm looking for spelled out in a straightforward manner. I would like a solid recommendation on stripe size and cluster size, and an idea of whether I'll need to align the partition with the stripes. I was just reading a similar topic on R5 performance on nForce-based motherboards, but I'm not clear on whether that information applies to my situation. Currently, my build is running 32-bit XP, and I'd rather not upgrade to W7 unless wholly necessary: I don't want the extra license cost, nor do I want to throw away a fresh, fully tweaked XP setup. I have a separate boot/system drive outside the array, and another separate drive I use for data buffering during media rips/encodes. The array is strictly for AV media storage, and the vast majority of my space is taken up by 1.5GB-5GB video files that I stream via GbE -> 802.11n -> HTPCs around the house. There are thumbnails for each video file, if that matters at all. Music is not stored on the array. Although the array only needs to put out enough throughput for an HD stream or two, I'd still rather have it built as efficiently as possible, which is why I'm here.

I've read conflicting information (or again, I may be failing to understand it) on whether 32-bit XP can read a 4TB array. I'm not booting from the array; I'm only concerned with XP's ability to read/write the full 4TB. I think 32-bit Windows/MBR has a cluster-count limit rather than an array-size limit, so as long as the cluster size is big enough, 32-bit XP should be able to read it even if it can't boot from it. I understand the BIOS also has a size limit, but that should be on a per-disk basis, not a per-array basis. What I have read suggests XP will place the start of the partition at sector 63 or something, and it needs to be at 2048? Is that correct, or am I mincing terms anywhere above? The disks I'm using have a standard 512-byte sector size AFAIK. I think I'd like to use a cluster size of 4KB and a stripe of 16KB, but that is really just a guess on my part. Optimally, I'd like the array to be tuned for the fastest writes possible, as that is what takes the longest in my process. Rarely do I need to copy anything back off the array, and I suspect that in tuning it for fast writes, the reads will still be much faster than the destination disk anyway.
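For anyone wondering where the 2TB wall discussed later in the thread comes from: MBR stores partition start and length as 32-bit counts of 512-byte sectors, so the ceiling is fixed regardless of the NTFS cluster size you pick. A rough sketch of the arithmetic (plain Python, nothing Windows-specific):

```python
# MBR addressing arithmetic: 32-bit LBA fields x 512-byte sectors.
SECTOR_BYTES = 512          # logical sector size of these drives
MAX_SECTORS = 2 ** 32       # 32-bit start/length fields in the MBR partition table

max_mbr_bytes = SECTOR_BYTES * MAX_SECTORS
print(f"MBR ceiling: {max_mbr_bytes / 2**40:.2f} TiB "
      f"({max_mbr_bytes / 10**12:.2f} TB)")          # ~2.00 TiB (~2.20 TB)

array_bytes = 4 * 10**12    # nominal 4TB array (decimal terabytes)
print("Fits in a single MBR partition:", array_bytes <= max_mbr_bytes)  # False
```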

Suggestions? Thanks!

Edit:

TL;DR? Check the bottom of page 2. ;) 280MB/s write & 490MB/s read speeds!

Edited by Darkknight


Well, I know from experience that even Win 7 has issues with MBR partitions over 2TB. It hits a wall and won't let you address anything beyond the barrier; the disk has to be converted to GPT. I have not worked with Win XP and large arrays before, so I'm not sure I can help with that aspect.

As for stripe size, it really depends on what content you will be serving off the RAID. I can show some different stripe scenarios using our HTPC and Productivity traces; HTPC lines up pretty closely with your use case:

512K stripe
HTPC: 2495.02 IOPS, 114.46 MB/s, 3.136 ms
Productivity: 1576.54 IOPS, 46.55 MB/s, 5.053 ms

256K stripe
HTPC: 2434.50 IOPS, 111.65 MB/s, 3.250 ms
Productivity: 1724.41 IOPS, 50.90 MB/s, 4.607 ms

128K stripe
HTPC: 2279.23 IOPS, 104.53 MB/s, 3.468 ms
Productivity: 1748.34 IOPS, 51.62 MB/s, 4.552 ms

64K stripe
HTPC: 2063.87 IOPS, 94.69 MB/s, 3.816 ms
Productivity: 1691.53 IOPS, 49.95 MB/s, 4.701 ms

In that case HTPC really liked 256K and 512K, but if you serve small files off it, expect a drop in performance: as stripe size went up past 128K, the Productivity scores dropped.


Argh, RAID woes!

I finally finished building the array yesterday, and initialization finished a few minutes ago. I can definitely say that I cannot find a way to get 32-bit XP to see the full 3.8TB as a single volume. The Logical Disk Manager doesn't even see the array at all. Using a 3rd-party HDD utility, I can see and partition the array, but only up to 2048GB. I had hoped that if the array blocks were larger it wouldn't be a problem, but XP still sees 512-byte sectors. I don't think I can even make two 2TB partitions. I think if I had created two arrays when setting this up (each half the size, 2TB), Windows would have been fine with it. So I'm faced with the choice of breaking the array and spending another day re-initializing it, or upgrading Windows. :(

I have learned a few things though:

1) The Antec P182 natively only holds 6 HDDs. (Had to drive back to M/C, 45 min away, to get a 4-in-3 HDD device module.)

2) Microcenter does not stock five of any one type of inexpensive right-angle SATA cable; apparently there are 50 or so $20 EL cables available.

3) Do not ask for help finding stuff at M/C.

4) Plan ahead.

5) Intel RST only supports 128K or smaller stripes. (At least on this ICH10R board.)

Edit: I just thought of something. I believe I could run a lightweight 64-bit OS in VMware to operate the array, then partition the array from that OS into two partitions that XP would then be able to pick up. The VM would have to be running at all times to access the volume, however. This is an option I had considered before, back when I was weighing unRAID vs. IRST. The only problem I can see is that the more complicated this gets, the less reliable the array seems, which kind of defeats the purpose.

Edited by Darkknight


I admire all the work you did to set that up, but in my experience with RAID arrays over the years I have found that I like things as "standard" as possible.

Years ago, before Adaptec standardized their RAID controllers, if a controller died and you replaced it with a new one, the new controller could not read the existing RAID array of drives - even though the array itself was in PERFECT condition!

That really fried me at the time; I couldn't believe what I was hearing. Their answer, of course, was DUPLEXING, which means you buy a SECOND expensive RAID controller from them to keep the server from going down. Naturally, you still have to replace the controllers eventually (and buy two, no less); it just gives you the opportunity to delay the transition and do a current backup at your convenience.

best,

Roger.


I ended up installing W7. I spent all day considering my options, and every other option seemed less reliable and harder, if not outright impossible, to migrate out of once the array is in use. Neither of those drawbacks is acceptable just to save the few hours of installing and tweaking it takes to get W7 working the way I need. With W7 (32-bit) installed, the array shows up fine, and I have the option to format it. I had done a lot of research on this, but I've pored over so much information that I simply cannot remember what I had decided on for cluster size. My stripe size is 128K. I store mostly large files, read a lot and write a little, though writing (ripping) is the more time-consuming effort. Suggestions on allocation unit size?

FWIW, the final hardware installation looks pretty neat. I went through a considerable amount of trouble routing cables to minimize airflow obstructions around the array drives, and even made a custom power cable just to fit them. It wasn't terribly complicated, but certainly more time/effort than I expect most people to put into a home server build. The 4-in-3 HDD holder also looks pretty nifty sitting in the front of the case. I'll have to take some pictures eventually.


I've been thinking the whole time it would be easier to just upgrade, but didn't want to argue against your initial thesis ;)


Quote: "I've been thinking the whole time it would be easier to just upgrade, but didn't want to argue against your initial thesis ;)"

All read & write rates listed below are for sequential operations. It's a large-file server, not a SQL DB; max throughput is more important than IOPS for my needs.

Well, at least I'm past that point. I've been busting my... HDDs for the past two days trying to come up with a good combo of stripe & filesystem cluster size. I took the advice of another NAS builder and simply benchmarked my setup (ATTO) from a 128K stripe & 64K NTFS cluster all the way down to a 16K stripe & 2K cluster, with every option in between (there went my Saturday). When I identified the best combos for each stripe, I then used TeraCopy to move an 8GB ISO to and from the array, and then over the network, to verify real-world results. 128K/32K produces the best read results by far, at a staggering 500MB/s average read speed. The writes are an abysmal 31MB/s, however. *That's with write-back cache enabled.* Disabled, it produced 500KB/s (yes, KILOBYTE) write rates.

The best write speeds were accomplished with a 32K stripe / 4K cluster combo. With that, write speeds improve to ~50MB/s and read speeds are in the 300MB/s range, which coincidentally are both second only to the 128K/32K combo. Since this array is data storage only, and ultimately limited by GbE (125MB/s theoretical max) anyway, I felt it was better to go with the 32K/4K combo for the higher write speeds.
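In case anyone wants to repeat the real-world half of that test, here is a minimal sketch of timing a large file copy and reporting the average rate. The paths are placeholders; the source should sit on a disk outside the array, and you would run it once per stripe/cluster combo:

```python
# Time a large file copy onto the array and report the average rate.
import os
import shutil
import time

SRC = r"D:\scratch\test.iso"   # placeholder: large file on a non-array disk
DST = r"E:\bench\test.iso"     # placeholder: destination on the RAID 5 volume

size = os.path.getsize(SRC)
start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

print(f"Copied {size / 2**20:.0f} MiB in {elapsed:.1f} s "
      f"-> {size / 2**20 / elapsed:.1f} MiB/s")
os.remove(DST)                 # clean up so repeat runs start fresh
```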

The problem I'm having is that elsewhere on the interweb (I should really stop reading about this), I'm seeing people achieve 100MB/s write results on ICHxR RAID 5. Many of the benchmarked systems are even using older, smaller, slower drives! W---T---F?!

I don't know what else to try at this point. To compound the problem, local writes to the array are ~50MB/s, while network writes are 15MB/s! Network reads & writes to other single drives on the same system are ~65MB/s, and network reads from the array are ~65MB/s. 65MB/s seems to be a chipset limitation at this point, because the individual drives bench faster than that but can't read/write to each other beyond that number.

I'm pulling my hair out here and could use some (helpful) pointers to solve this conundrum. I'm definitely not in a position to fork out extra $$$ for a dedicated RAID card at the moment, given the extra expenditures just to get this rolling, so please don't suggest that. Even if that were the only solution, I'd stick with the 50MB/s writes. The point is, ICHxR has proven higher throughput than what I'm experiencing. Given that simply altering the stripe & cluster sizes produced an incredible speed boost, I really think some sort of configuration issue is holding back my sequential writes.

Basically, based on the other hardware I'm using, I'd realistically like to see 100MB/s benched writes. That's enough to max out the capability of the drive(s) the data will be copied from, and it's beyond what appears to be some sort of artificial limitation putting the brakes on at 65MB/s anyway.

Edit, the picture is even muddier now: if I push the file to the array from another system, it's 15MB/s; if I pull it onto the array from that same system, it's 53MB/s. Why would it matter where the transfer is started from? Both systems run Win 7, BTW.

Edit #2: It seems there is some inherent problem with CIFS/SMB whereby file transfer speeds are sometimes affected by the direction of initiation. There is no single fix for this; suggestions range from turning off advanced networking features (RSS, TCP chimney, SMB2, etc.) to replacing NICs, cables, and other hardware. Bottom line, I suppose, is that the use case I'm having trouble with is not common and is easily worked around. I don't like leaving things "broken", but TBH I have been down this road before unsuccessfully, and I have bigger fish to fry at the moment.
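One way to separate an SMB quirk from a genuine network or chipset limit is a raw TCP throughput test between the two machines. A minimal sketch; the port number and the 1 GiB transfer size are arbitrary choices:

```python
# Quick raw TCP throughput test, independent of SMB/CIFS.
# Run "python tcptest.py server" on one box, "python tcptest.py <server-ip>" on the other.
import socket
import sys
import time

PORT = 50007
CHUNK = 64 * 1024
TOTAL = 1024 ** 3   # send 1 GiB

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.perf_counter() - start
            print(f"{received / 2**20 / secs:.1f} MiB/s from {addr[0]}")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        sent = 0
        start = time.perf_counter()
        while sent < TOTAL:
            sock.sendall(payload)
            sent += CHUNK
    print(f"Sent {sent / 2**20:.0f} MiB in {time.perf_counter() - start:.1f} s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[1])
```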

I'd rather any potential replies focus on the R5 write speed issue. ;) Thanks!

Edited by Darkknight


All I can say is that Intel onboard RAID sucks (I've been saying this since 2006 or 2007). I did some light testing on my ICH10R system running RAID 5 about a year ago when I got it, saw about ~80MB/s, which sucked, and stopped using it. That said, I don't know where your problem lies, but 50 is pretty bad even for Intel's onboard RAID.

First off, did you partition the array under Windows 7, or under XP before you installed W7? What partitioning format is it in?

I'd point you at my ancient post about onboard RAID optimization under XP, but it's a little irrelevant under W7 (though you could still give the ideas a try), and IIRC it didn't have as much effect on ICH10R as on the NF570.

Lastly, as a long shot, you could have a faulty disk that's slowing the rest down. I'd delete the array (or boot with the ICH10R in AHCI mode) and run W7 software RAID on it so you can monitor the individual drives' performance.
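If it helps, here is a rough sketch of checking each member drive's sequential read rate once the array is broken (or the controller is in AHCI mode). It must be run as Administrator, and the \\.\PhysicalDriveN numbers are placeholders that need to be matched to the actual array members, not the boot disk:

```python
# Sequential read check on raw member drives (run as Administrator).
import time

DRIVES = [r"\\.\PhysicalDrive1", r"\\.\PhysicalDrive2"]  # placeholders: array members
CHUNK = 1024 * 1024           # 1 MiB reads, a clean multiple of the sector size
TOTAL = 512 * 1024 * 1024     # read 512 MiB from the start of each disk

for dev in DRIVES:
    with open(dev, "rb", buffering=0) as disk:
        done = 0
        start = time.perf_counter()
        while done < TOTAL:
            done += len(disk.read(CHUNK))
        secs = time.perf_counter() - start
    print(f"{dev}: {done / 2**20 / secs:.0f} MiB/s sequential read")
```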


Does anything look out of place with this? I'm not sure, but I think it means the partition is not aligned properly. It's supposed to be a single partition aligned at 1024KB, right? I checked W7's registry entry, and it's set for 1048576 bytes. If it's not aligned properly, it's possible that *all* of the benching I did yesterday is invalid and I would get different results. What is the "reserved" partition in there for? I simply used the LDM to initialize the disk and format it as a simple volume; I didn't ask for two partitions...

Can someone who knows what the offsets are supposed to be give me an idea whether this is the way it's supposed to look?

Thanks

[attached screenshot: partition layout]
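One way to read the offsets without a screenshot is WMI. A small sketch that lists each partition's starting offset and checks whether it is a whole multiple of the 128K stripe used here; that divisibility test is just the sanity check being discussed, not a guarantee of good performance:

```python
# List partition starting offsets via the wmic tool that ships with Win 7.
import subprocess

STRIPE = 128 * 1024   # the 128K stripe set on this array

out = subprocess.check_output(
    ["wmic", "partition", "get", "DiskIndex,Index,StartingOffset", "/format:csv"],
    text=True,
)

for line in out.splitlines():
    parts = [p.strip() for p in line.split(",")]
    if len(parts) != 4 or not parts[-1].isdigit():
        continue                      # skip the header and blank lines
    _, disk, index, offset = parts
    offset = int(offset)
    aligned = "aligned" if offset % STRIPE == 0 else "NOT aligned"
    print(f"Disk {disk} partition {index}: offset {offset} bytes "
          f"({offset // 1024} KiB) -> {aligned} to the {STRIPE // 1024}K stripe")
```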

Edited by Darkknight


The first (128MB) partition certainly isn't aligned properly, but that shouldn't be a problem. The second partition starts at exactly 129MiB, so its alignment looks OK. Still, as mentioned, ICH10R didn't have as many alignment issues in my experience - though I've seen write cache have paradoxical effects (faster when off, etc.).


Quote: "The first (128MB) partition certainly isn't aligned properly, but that shouldn't be a problem. The second partition starts at exactly 129MiB, so its alignment looks OK. Still, as mentioned, ICH10R didn't have as many alignment issues in my experience - though I've seen write cache have paradoxical effects (faster when off, etc.)."

I deleted the two partitions, one of which did not go quietly. I created a single new partition using diskpart with align=1024. Listing the details of the new partition shows it does in fact start at a 1024K offset. Write speeds are still slow, and now, for some reason, read speeds have plummeted. I no longer get ~65MB/s copy speeds, and ATTO looks stupidly slow on both reads and writes. I'm not even certain it's reliable at this point.
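For anyone repeating that step, a sketch that writes the diskpart commands to a script file so each realignment attempt is identical and repeatable. The disk number, alignment, and cluster size are placeholders to adjust per test; note that diskpart's align= parameter is given in KiB, and that clean wipes the selected disk:

```python
# Generate a diskpart script for one aligned-partition test run.
ALIGN_KIB = 1024        # diskpart's align= parameter is in KiB
CLUSTER = "32K"         # NTFS allocation unit size for this test
DISK = 4                # placeholder: disk number of the RAID volume
LETTER = "E"

script = "\n".join([
    f"select disk {DISK}",
    "clean",                                    # WARNING: wipes the selected disk
    f"create partition primary align={ALIGN_KIB}",
    f"format fs=ntfs unit={CLUSTER} quick",
    f"assign letter={LETTER}",
    "",
])

with open("make_array_volume.txt", "w") as f:
    f.write(script)

print("Run as Administrator:  diskpart /s make_array_volume.txt")
```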

Losing hope; someone throw me a life preserver here. Giving up on it for tonight.


Wouldn't the MSM/RST app detect a bad disk? I'll break the f'ing array again; I know you're right, I just hate the 16-hour rebuild process. If I do manage to find & fix this issue, would the rebuild time drop proportionally to the gain in write speed?
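Rough arithmetic on that question: a rebuild has to rewrite every sector of the replaced member, so the time is roughly the drive's capacity divided by whatever sustained rate the controller manages, and a faster-writing configuration should shorten it accordingly. A quick sketch with assumed rates:

```python
# Back-of-envelope rebuild-time estimate for one 1TB member drive.
DRIVE_TB = 1.0
capacity_bytes = DRIVE_TB * 10**12

for rate_mb_s in (20, 50, 100):    # assumed sustained rebuild rates
    hours = capacity_bytes / (rate_mb_s * 10**6) / 3600
    print(f"{rate_mb_s:3d} MB/s sustained -> ~{hours:.1f} h rebuild")
```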


Broke the array and am testing each drive, but repeat tests on the same drive are not giving similar results: 135MB/s write, then 145MB/s write... is ATTO that unreliable? None of them benched under 133MB/s, in any case.


I don't think there is a drive problem at all. I created a RAID 0 array with IRST to bench it, and I'm getting 500MB/s+ write & 600MB/s+ read speeds. That's not indicative of a drive or cable problem. The problem seems to lie with the RST driver's R5 implementation, I think. I've been considering all the options recently: ZFS, VSF, WHS in a VM. None of these are good solutions (for me), IMO. I have even seriously considered Windows dynamic disk spanning, just to get the volume I want. My objection isn't the FUD surrounding its use - you can actually use cheap or even free 3rd-party tools to recover any files left on the non-damaged disks of a spanned volume, you just can't do it from the LDM. I don't want to use it because it won't balance the data usage across all the drives.

Tested a different combo: a 4-drive R5 delivers 75-90MB/s writes. :(

Edited by Darkknight


Just when I was sure I had hit a dead end, I think I stumbled on the answer. Let's all raise a glass to utter, dogged determination to do things the way you want, and to the wives who believe in us no matter how many times we don't succeed!

I realigned the partition manually using different settings. Win 7's 1024KB alignment *is not*, I repeat, *is not* ideal for every RAID. Simply aligning blocks & clusters is not enough. I boosted my speeds from 50MB/s write & 350MB/s read up to 80MB/s write & 500MB/s read! I'm pretty sure I can get even more out of it. I'll have to re-bench the different cluster/block ratios again, and hand-align the damn partition each time, but this is clearly where my RAID was falling down. In fact, I think this is likely the single source of every ICHxR R5 speed complaint.

I would post the settings I've used, but if there is anything I have learned doing this, it's that every RAID is different, and you need to find the settings that work for *your* hardware through testing and research. Posting my settings would tempt you to take the lazy way out, when in fact my settings may well make things worse for you. Hopefully, for the next guy reading this with R5 problems, I've pointed you in the right direction and this might fix it for you too. It took me five solid days of research & experimentation to find the culprit. To be fair, partition alignment was the one thing I hadn't altered in my quest for better speeds. I read over & over that the 1024K alignment was fine, probably from the same people who insist ICH/hybrid RAID = bad & software RAID = fail without ever trying it, parroting what they've heard from someone else who tried it but never really tried to fix it.
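In the spirit of benchmarking rather than copying settings, here is a small helper sketch that enumerates candidate diskpart align= values as whole multiples of the full stripe width, so each realign-and-ATTO pass is systematic. The geometry below is only an example, not the poster's withheld settings:

```python
# Enumerate candidate partition offsets as multiples of the full stripe width.
STRIPE_KIB = 128                 # per-disk stripe size set in the Intel option ROM
DATA_DRIVES = 5 - 1              # RAID 5: one drive's worth of capacity is parity
stripe_width_kib = STRIPE_KIB * DATA_DRIVES   # 512 KiB full stripe in this example

print(f"Full stripe width: {stripe_width_kib} KiB")
print("Candidate offsets to benchmark (diskpart's align= is in KiB):")
for n in range(1, 9):
    print(f"  align={n * stripe_width_kib}")
```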

Edit: 280MB/s avg write speed & 490MB/s avg read, sucka! Anyone who says Intel "fake raid" sucks and that you *need* a dedicated card to get good R5 performance can kiss my ICH. ;)

Victory:

[attached screenshot: benchmark results]

Edited by Darkknight


What did you end up with for alignment, and did you end up with the 32K stripe / 4K cluster combo?

Also, you wrote "diskpart with align=1024", but to get 1MB alignment you need to use align=2048.

Edited by kickarse


Just a little update:

I had a drive fail on me about 5 days ago. It dropped out of the array with a failed-disk/SMART error ("SMART Command Failed") message. My first thought was a CCTL/TLER issue, but I tested the drive using Hitachi's drive tools and, as it turns out, the disk had developed a ton of corrupt sectors and legitimately failed. I exchanged it today at M/C for a new unit. The rebuild finished in under 5 hours, fully successful, with no loss of data (isn't R5 cool?). Had I given up on R5 before getting it working, I would have lost at least the data that was on that drive, as would have happened with the other options I was considering. As it stands, I not only didn't lose a thing, but I was able to keep using the array the entire time. The hardest, most aggravating part of the whole ordeal was just getting the replacement drive.

Edited by Darkknight


Quote: "To compound the problem, local writes to the array are ~50MB/s, while network writes are 15MB/s!"

Quote: "If I push the file to the array from another system, it's 15MB/s; if I pull it onto the array from that same system, it's 53MB/s. Why would it matter where the transfer is started from? Both systems run Win 7, BTW."

I take it all devices are connected through, or to, a Gigabit network port?

Regarding the manual alignment: when using Win 7 x64, is there any need to manually adjust the alignment?

I'm using HD204UI 4K-sector drives myself, so if you can give me some pointers on how to align these manually, that would be a big help.

Not sure if this applies though, since I read that these drives emulate 512-byte sectors...

My Results:

Alright, I did some testing and benching with AS SSD, and this is my current setup and the results I achieved:

[attached image: AS SSD results for the stripe/cluster combinations tested]

I apologize for the large image.

Long story short, my best results on my current setup:

RAID 5 / 3 disks (HD204UIs): 128K stripe size with 32K NTFS cluster size, with write-back cache on.

In that configuration:

Seq. Read: 240 MB/s

Seq. Write: 261 MB/s.

I am stunned. I've read threads all over the place of people trying to get the best out of their setup while stuck at ridiculously low write speeds, and I read on various forums that the optimal setting for a 3-disk RAID 5 is stripe size x 2 (3 disks minus 1 parity disk) equal to the cluster size; in other words, a 32K stripe paired with a 64K cluster.

As you can see in my results, with that config I only get:

Seq. Read: 219 MB/s

Seq. Write: 222 MB/s.

Furthermore, there aren't big differences between the various combos.

There are two combos that suffer heavily from having WBC turned off, but for the most part it didn't matter much for sequential writes.

The thing that stood out the most, and was kind of interesting, is that the 64K stripe / 32K cluster combo with WBC off only gives 25 MB/s sequential write, and the 128K stripe / 32K cluster gives around 38 MB/s.

With the further exception of the 128K stripe / 64K cluster combo (168 MB/s), everything is above 200 MB/s sequential write, whether WBC is on or off.

So, basically... I'm stunned.

Any comments on this? And maybe some feedback on whether 240 read / 260 write is good?

p.s.,

I didn't bother letting the benchmark run through the entire 4K / 64K tests, as at first glance they never got above 0.8 MB/s.

However, when copying a 4 GB file from my SSD to the array, I got 260-320 MB/s according to Windows, and the file was there in no time.

This practical test was also done to make sure I didn't suffer from cache/benchmark pollution. I did the tests a number of times, and also re-created the entire volume/partition on every try.

I will run a final check with ATTO on the 128K stripe / 32K NTFS cluster combo after initializing, and post the results.

@Darkknight:

I'm REALLY curious about your settings :)

I seem to have about the same write speed (260 vs 280 MB/s), but your read speed is much higher. Is that because you have 5 disks where I have 3?

Could you let me know what stripe size and NTFS cluster size you use, pretty please? ;) Even a PM would be fine. I'm just curious what I might achieve with your settings on a 3-disk array.

*And note that I didn't do *ANYTHING* to manually align partitions or any of that.

This is almost an out-of-the-box install, and frankly one has to try VERY hard to get BAD results, no matter what stripe/cluster combo you choose (see the results).

Kind regards,

Kami.

Edited by KrazeyKami


After 24 hours of initializing, here are my final results:

Onboard ICH10R (Asus P6T SE)

RAID5:

3x HD204UI

128k Stripe

32k Cluster

WBC ON

No Manual Alignment or other tweaks needed.

READ: 251 MB/s

WRITE: 265 MB/s

[attached screenshot: benchmark of the final RAID 5 configuration]

Wonder how long it stays stable ;)

For those who are interested, here is the data on CPU time vs. RAID 5 usage:

BLUE LINE: CPU USAGE %

RED LINE: ARRAY WRITE TIME %

GREEN LINE: ARRAY READ TIME %

First, the IDLE load:

[attached graph: CPU vs RAID 5 load while idle]

Next, the WRITE load, while writing a 3 GB file from my SSD to the RAID5 Array:

[attached graphs: CPU vs RAID 5 load during the write test]

And finally, the READ load, while playing the DVD "300" from my RAID5 Array:

[attached graph: CPU vs RAID 5 load during the read test]

As you can see:

IDLE: CPU and array load are both close to 0%.

WRITE: CPU load is around 10% at its peak (with array write time at 80-100%).

READ: CPU load is between 5% and 10%. The funny thing is that array read time is virtually 0, probably because all the data is already loaded into RAM.

Do note that this was tested on an i7-920 CPU with 4 cores / 8 threads (Hyper-Threading) and 12 GB of RAM.

Kind regards,

Kami.

Edited by KrazeyKami


Sorry, I don't have topic notifications turned on any more, so I only see this topic when I stop by the forum.

Win 7 x64/x86 does not have an ideal offset built in; it is simply *improved* over previous versions. The fact of the matter is, there is no magic number that will work for everyone's setup. Different hardware, different drive counts, and different drivers produce different results. I suspect there is less volatility in this formula when using a discrete, true hardware RAID card, but as I have no experience with that, I cannot say for sure.

As I subsequently discovered, RAIDs are much more complicated than they look. I had honestly expected to hook up the drives, run the IRST manager, and get awesome results... That was not the case, as this thread details.

Bottom line: Best practice is to properly align the partition to suit your setup. This requires prodigious use of Diskpart & ATTO.

RAID 5 read speeds, when properly configured, should resemble RAID 0 read speeds - i.e. they scale with drive count, with each additional drive adding roughly one drive's worth of read performance [(# drives - 1) x single-drive read = RAID 5 read speed]. There is of course overhead, so you will never quite reach that, but it's a good guideline.

Take, for example, my final setup: single-drive average linear read ~125MB/s; 5-drive (really 4x data + parity) RAID 5 average read ~490MB/s. Pretty close, if you ask me. It took nigh on 50+ benchmarks, and a crap-ton of hours logged, to get there though.
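The guideline above, worked with the numbers from this thread:

```python
# RAID 5 read scaling check: (drives - 1) x single-drive read vs. measured.
single_drive_read = 125   # MB/s, average linear read of one member drive
drives = 5
ideal = (drives - 1) * single_drive_read    # parity costs one drive's worth

measured = 490            # MB/s, the array's benchmarked average read
print(f"Ideal:    {ideal} MB/s")
print(f"Measured: {measured} MB/s ({measured / ideal:.0%} of ideal)")
```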

Write speeds are the big question mark, however, and largely come down to a combination of hardware, OS overhead, disk count, and geometry. Everything helpful that I know is already in this thread. Benchmarking is your friend.

Intel RAID (IMO) is not as bad as people make it out to be. My results are stellar as far as I'm concerned, and I didn't pay a ridiculous amount for a hardware card only to duplicate functionality I already have. I have built, broken, and even rebuilt from a failed drive. I've been using it for months, and it works great *for my needs*.

The procedure for manually aligning a partition is not complicated, but there are a lot of steps that need to be followed carefully. Rather than trying to copy/paste, I suggest you Google "Diskpart manual align" and read up. I don't mean that to be unhelpful, just that I don't think I'm the best authority to consult.

I do use GbE exclusively for my wired links, and all tests were performed wired. 4K-sector drives are a different problem entirely. Some use internal 512-byte emulation; some expose the 4K sectors directly on the interface. They have a different process for aligning partitions that I don't fully understand, not having had to do it myself. I know enough from what I studied to say that sector emulation adds another level of complexity that makes the process very different from standard 512-byte drives.

Final note: your read/write speeds look very good for a 3-drive array, IMO. I suppose the goal should be to make sure the RAID is not the limiting factor in storage transfers; beyond that, speed for the sake of speed you'll never use seems like a waste of time. In my case, my writes were repeatedly limited when multiple source drives were writing to the array concurrently on the same machine, which happens often for me. After my last adjustments, they run at about 85% of full speed when writing together, which I consider acceptable overhead.

I hope that was helpful.


This discussion is a bit old, but since I just spent the past three days getting a RAID 5 set up, partly using what was posted here, I wanted to share my lessons learned.

While I appreciate the hours/days of work by the others here, my findings were certainly different. I'm running Win 7 with the built-in RAID controller on my Intel DZ68BC motherboard, using three 2TB WD Green drives. Much/most of the information out there on successful RAID 5 builds (i.e., those without terribly slow write speeds) gives stripe sizes of 64K or 128K with 32K or 64K clusters. Then a lot of discussion is spent on getting the proper partition alignment using diskpart.exe, etc.

So, that's what I did. Keeping in mind that I generally had no idea what I was doing, I set my stripe to 64K and cluster to 32K and ran tests for at least 20-30 different partition alignments via diskpart.exe, ranging from 4KB to 64MB. MOST results yielded write speeds (via ATTO) of <10MB/s, with the random spike of >100MB/s for a particular block size. I ran some tests with a 128K stripe and 64K/32K clusters as well, with similar results. I finally settled on 64K stripe, 32K cluster, 64K alignment. In ATTO, write speeds were >120MB/s for any block size of 128K or greater. So, I thought I finally had success.

Then I tried actually copying files to my new RAID (from my solid-state OS drive as an initial test)... and found the performance highly erratic and often still terribly slow (<10MB/s). For example, two different MP4 files, each 1.4GB, would exhibit totally different behavior: one would finish copying in a few seconds, while the other would hit a wall halfway through and take minutes. Another 10GB file would also hit a wall a few GB into the copy and slow to a crawl.

This was enough for me to decide that using a "standard" stripe/cluster arrangement and finding the right partition alignment is NOT the silver bullet I thought it would be. Then I found another post which said that the stripe width should equal the cluster size. That is:

(1) stripe width = stripe size x (drives - 1)

(2) stripe width = cluster size (or block size)

So:

stripe size = cluster size / (drives - 1)
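That relationship as a quick calculation (this is the rule this poster settled on; other posts in the thread landed on different combos, so treat it as one heuristic to test, not a law):

```python
# stripe size = cluster size / (drives - 1), per the rule above.
def stripe_size_kib(cluster_kib: int, drives: int) -> float:
    data_drives = drives - 1            # RAID 5: one drive's worth of parity
    return cluster_kib / data_drives

for drives in (3, 4, 5):
    stripe = stripe_size_kib(64, drives)   # 64K is NTFS's largest cluster size here
    print(f"{drives} drives, 64K cluster -> {stripe:g}K stripe")
# Note: for 4 drives the result isn't a power of two, so the rule can't be
# followed exactly with the stripe sizes the Intel ROM offers.
```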

This meant that for a 3-disk array, with a max NTFS cluster size of 64K, the max stripe size would be 32K (lower than most recommendations I found). However, when I tried this setting, I got >100MB/s writes (via ATTO and HD Tune) for basically any partition alignment of 64K or a multiple of it, and I tested A LOT of different alignments. Then for the real-world tests:

(1) Two 1.4GB video files, each copied in a few seconds

(2) One 10GB video file transferred in ~90 seconds (>100MB/s)

(3) A collection of 14 video files ranging from 60-500MB for a total of 5.5GB took 50 seconds

(4) A 55GB backup file went at 100MB/s for the first 9GB, then 70MB/s for the next 8GB, then 55MB/s for the rest (I don't understand this behavior, but I got it repeatedly with this stripe/cluster arrangement regardless of partition alignment; regardless, it was a LOT better than all the other stripe/cluster settings I tried, which quickly slowed to <10MB/s).

I've now copied my 1.5TB of mixed data to the new RAID and consistently get writes of 70-100MB/s. In theory I should be able to do better, but since it's taken three days of trials to finally get something consistent (and consistently 10x better than what I started with), I'm happy.

Anyway, I just wanted to throw this out there. If you are like me and having lots of problems getting your RAID 5 to work, give the above a try.


I would not dare to create such a large RAID 5 set, especially with WD Green drives. Just Google "WD Green TLER".

They are known to drop out of RAIDs, and I can't even imagine the rebuild times for a 4TB array. I would just keep them as basic disks and find some sort of software that can replicate the data to more than one location.

Regarding your stripe and cluster sizes, I would probably just use the default values; you normally only change those if you see highly sequential writes, etc. In normal Windows usage, you will mostly see random reads/writes, and a lot of it will be small I/Os.

Your RAID 5 controller is probably not hardware-assisted, and it just might not be able to perform any better.

