This discussion is a bit old, but as I just spent the past three days trying to get a RAID 5 set up, partly using what was posted here, I wanted to give my lessons learned.
While I appreciate the hours/days of work by the others here, my findings were certainly different. I'm running Win7 with the built-in RAID controller on my Intel DZ68BC MoBo, using three 2TB WD Green drives. Much/most of the information out there about successful RAID 5 builds (i.e., those without terribly slow write speeds) gives stripe sizes of 64K or 128K with 32K or 64K clusters. Then a lot of discussion is spent on getting the proper partition alignment using diskpart.exe, etc.
So, that's what I did. Keeping in mind that I generally had no idea what I was doing, I set my stripe to 64K and cluster to 32K and ran tests for at least 20-30 different partition alignments via diskpart.exe, ranging from 4KB to 64MB. MOST results yielded write speeds (via ATTO) of <10MB/s, with the random spike of >100MB/s at a particular block size. I ran some tests with a 128K stripe and 64K/32K clusters as well, with similar results. I finally settled on 64K stripe, 32K cluster, 64K alignment. In ATTO, write speeds were >120MB/s for any block size of 128K or greater. So, I thought I finally had success.
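For anyone repeating these trials, the alignment check itself is just modular arithmetic. Here's a quick Python sketch of what I was effectively testing by hand; the function name is mine and the offsets are just examples from my trials, not anything diskpart gives you:

```python
# Check whether a partition offset lands cleanly on the stripe boundary
# and on a 4K boundary (WD Green drives use 4K physical sectors).
STRIPE = 64 * 1024      # 64K stripe, as I eventually settled on
SECTOR_4K = 4 * 1024    # Advanced Format physical sector size

def is_aligned(offset_bytes, boundary):
    """True if the partition offset falls exactly on the boundary."""
    return offset_bytes % boundary == 0

# Offsets from 4KB up to 64MB, the range I tested via diskpart
for offset_kb in (4, 32, 64, 1024, 65536):
    offset = offset_kb * 1024
    print(offset_kb, "KB:",
          "stripe-aligned" if is_aligned(offset, STRIPE) else "misaligned",
          "/",
          "4K-aligned" if is_aligned(offset, SECTOR_4K) else "4K-misaligned")
```

Note that every multiple of 64KB is automatically 4K-aligned, which is why the 64K-and-multiples alignments were the only ones worth sweeping.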
Then I tried actually copying files to my new RAID (from my solid-state OS drive as initial tests)...and found the performance highly erratic and often still terribly slow (<10MB/s). For example, two different 1.4GB mp4 files would exhibit totally different behavior: one would finish copying in a few seconds, while the other would hit a wall halfway through and take minutes. Another 10GB file would also hit a wall a few GB into the copy and slow to a crawl.
This was enough for me to decide that using a "standard" stripe/cluster arrangement and finding the right partition alignment is NOT the silver bullet I thought it would be. Then I found another post which said that the stripe width should equal the cluster size. That is:
(1) stripe width = stripe size x (drives - 1)
(2) stripe width = cluster size (or block size)
Combining (1) and (2): stripe size = cluster size / (drives - 1)
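The rule above is simple enough to sanity-check in a few lines of Python. This is just my own sketch of the formula (the function name is mine, and sizes are in KB), not anything from a RAID tool:

```python
def stripe_size_kb(cluster_kb, drives):
    """Per-disk stripe size so that one full stripe width equals one cluster.

    In RAID 5, one disk's worth of each stripe holds parity, so only
    (drives - 1) disks carry data.
    """
    data_disks = drives - 1
    assert cluster_kb % data_disks == 0, "cluster size must divide evenly"
    return cluster_kb // data_disks

# 3-disk array with the max NTFS cluster of 64K -> 32K stripe
print(stripe_size_kb(64, 3))   # -> 32
```

With this arrangement, every cluster-sized write fills exactly one full stripe, so the controller never has to read back old data and parity to update a partial stripe (the classic RAID 5 read-modify-write penalty), which is presumably why the writes stopped falling off a cliff.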
This meant that for a 3-disk array, with a max NTFS cluster size of 64K, the max stripe size would be 32K (lower than most recommendations I found). However, when I tried this setting, I got >100MB/s writes (via ATTO and HD Tune) for basically any partition alignment of 64K or some multiple of it. And I tested A LOT of different alignments. Then for the real world tests:
(1) Two 1.4GB video files, each copied in a few seconds
(2) One 10GB video file transferred in ~90 seconds (>100MB/s)
(3) A collection of 14 video files ranging from 60-500MB for a total of 5.5GB took 50 seconds
(4) A 55GB backup file went at 100MB/s for the first 9GB, then 70MB/s for the next 8GB, then 55MB/s for the rest. I don't understand this behavior, but I got it repeatedly with this stripe/cluster arrangement, regardless of partition alignment. Even so, it was a LOT better than all the other stripe/cluster settings I tried, which quickly slowed to <10MB/s.
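For reference, the throughput figures above are just size over time. A quick ballpark check (treating 1GB as 1024MB, so these are rough numbers only):

```python
def mb_per_s(size_gb, seconds):
    """Average transfer rate in MB/s, with 1GB taken as 1024MB."""
    return size_gb * 1024 / seconds

print(round(mb_per_s(10, 90)))    # 10GB in ~90s
print(round(mb_per_s(5.5, 50)))   # 5.5GB in 50s
```

Both work out to roughly 113-114 MB/s, consistent with the >100MB/s I was seeing in ATTO.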
I've now copied my 1.5TB of mixed data to my new RAID and consistently get writes from 70-100MB/s. In theory, I should be able to get better than this, but as it's taken three days of trials to finally get something that is consistent (and consistently 10x better than what I was getting to start with), I'm happy.
Anyway, I just wanted to throw this out there. If you are like me and having lots of problems getting your RAID 5 to work, give the above a try.