All read & write rates listed below are for sequential operations. This is a large file server, not an SQL DB, so max throughput matters more to me than IOPS.
Well, at least I'm past that point. I've been busting my .. HDDs for the past 2 days trying to come up with a good combo for the stripe & filesystem cluster size. I took the advice of another NAS builder and simply benchmarked my setup (ATTO) from a 128k stripe & 64k NTFS cluster all the way down to a 16k stripe & 2k cluster, with every option in between (there went my Saturday). Once I identified the best combos for each stripe, I used Teracopy to move an 8GB ISO to and from the array, then over the network, to verify real-world results. 128k/32k produces the best read results by far, a staggering 500MB/s average read speed. The writes are an abysmal 31MB/s, however, and that's *with* write-back cache enabled; with it disabled, the array produced 500KB/s (yes, KILOBYTE) write rates.
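In case anyone wants to sanity-check ATTO/Teracopy numbers from a plain script, here's a rough Python sketch of the kind of sequential write/read test I'm describing. The target path and sizes are placeholders, not my actual setup, and the read pass can be served from the Windows file cache unless the test file is much larger than RAM.

```python
import os
import time

TARGET = r"E:\bench_test.bin"   # placeholder path on the array
BLOCK = 1024 * 1024             # 1 MiB blocks, written sequentially
TOTAL = 8 * 1024**3             # 8 GiB, same ballpark as the ISO test

def seq_write(path, total=TOTAL, block=BLOCK):
    """Write `total` bytes sequentially and return MB/s."""
    buf = os.urandom(block)
    start = time.time()
    with open(path, "wb", buffering=0) as f:
        written = 0
        while written < total:
            f.write(buf)
            written += block
        os.fsync(f.fileno())    # make sure the data actually reached the disks
    return total / (time.time() - start) / 1024**2

def seq_read(path, block=BLOCK):
    """Read the file back sequentially and return MB/s (may hit the OS cache)."""
    start = time.time()
    total = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            total += len(chunk)
    return total / (time.time() - start) / 1024**2

if __name__ == "__main__":
    print(f"sequential write: {seq_write(TARGET):.1f} MB/s")
    print(f"sequential read:  {seq_read(TARGET):.1f} MB/s")
```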
The best write speeds came from a 32k stripe/4k cluster combo. With that, writes improve to ~50MB/s and reads are in the 300MB/s range, both second only to the 128k/32k combo. Since this array is data storage only, and ultimately limited by GbE (125MB/s theoretical max) anyway, I felt it was better to go with 32k/4k for the higher write speeds.
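For context, the GbE ceiling I keep referring to works out like this (the overhead percentage is just a ballpark I'm assuming, not something I measured):

```python
# 1 Gb/s line rate converted to MB/s, minus a rough guess at protocol overhead.
line_rate_bits = 1_000_000_000           # GbE line rate in bits per second
theoretical = line_rate_bits / 8 / 1e6   # 125 MB/s raw
overhead = 0.07                          # assumed Ethernet/IP/TCP/SMB overhead
realistic = theoretical * (1 - overhead)
print(f"theoretical: {theoretical:.0f} MB/s, realistic: ~{realistic:.0f} MB/s")
```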
The problem I'm having is that elsewhere on the interweb (I should really stop reading about this), I'm seeing people achieve 100MB/s write results with ICHxR RAID 5. Many of the benchmarked systems are even using older, smaller, and slower drives! W---T---F?!
I don't know what else to try at this point. To compound the problem, local writes to the array are ~50MB/s, while network writes are 15MB/s! Network reads and writes to other single drives on the same system are ~65MB/s, and network reads from the array are ~65MB/s. That 65MB/s looks like a chipset limitation at this point, because the individual drives bench faster than that on their own but can't read/write to each other beyond that number.
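If anyone wants to reproduce the drive-to-drive number, this is roughly how I'd time it from a script (paths are placeholders; it just measures a straight local copy between two drives so the network is out of the picture):

```python
import os
import shutil
import time

SRC = r"D:\test.iso"       # placeholder: a large file on a single drive
DST = r"E:\test_copy.iso"  # placeholder: destination on the RAID 5 array

def timed_copy(src, dst):
    """Copy src to dst and return the effective rate in MB/s."""
    size = os.path.getsize(src)
    start = time.time()
    shutil.copyfile(src, dst)
    return size / (time.time() - start) / 1024**2

print(f"drive-to-array copy: {timed_copy(SRC, DST):.1f} MB/s")
```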
Pulling out my hair here. I could use some (helpful) pointers to solve this conundrum. I'm definitely not in a position to fork out extra $$$ for a dedicated RAID card ATM due to the extra expenditures just to get this rolling, so please don't suggest that. Even if that were the only solution, I'd stick with the 50MB/s writes. The point is, ICHxR has proven capable of higher throughput than what I'm experiencing. Given that simply altering the stripe & cluster sizes produced such an incredible speed boost, I really think some sort of configuration issue is holding me back from faster sequential writes.
Basically, given the other hardware I'm using, I'd realistically like to see 100MB/s benched writes. That's enough to max out the drive(s) the data will be copied from, and it's beyond whatever artificial limitation seems to be putting the brakes on at 65MB/s anyway.
Edit: the picture is even muddier now. If I push the file to the array from another system, it's 15MB/s; if I pull it onto the array from that same system, it's 53MB/s. Why would it matter which end the transfer is started from? Both systems run Win 7, btw.
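To make the asymmetry concrete, the same copy can be timed from each end. A rough sketch (the UNC path and file names are placeholders): run it once on each machine, since a push from box A is a pull from box B's point of view, and compare the rates.

```python
import os
import shutil
import time

LOCAL = r"D:\test.iso"             # placeholder: local copy of the test file
LOCAL_BACK = r"D:\test_back.iso"   # placeholder: where the pull lands locally
SHARE = r"\\NAS\array\test.iso"    # placeholder: UNC path to the RAID 5 share

def rate(src, dst):
    """Copy src to dst and return MB/s."""
    size = os.path.getsize(src)
    start = time.time()
    shutil.copyfile(src, dst)
    return size / (time.time() - start) / 1024**2

# Writing to the share is a "push" from this machine; reading it back is a "pull".
# Running the same script on the other box swaps the roles, which is where the
# 15MB/s vs 53MB/s difference shows up.
print(f"push to share:   {rate(LOCAL, SHARE):.1f} MB/s")
print(f"pull from share: {rate(SHARE, LOCAL_BACK):.1f} MB/s")
```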
Edit #2: It seems there is some inherent problem with CIFS/SMB where file transfer speeds are sometimes affected by the direction of initiation. There is no single fix for this; suggestions range from turning off advanced networking features (RSS, TCP chimney, SMB2, etc.) to replacing NICs, cables, and other hardware. The bottom line, I suppose, is that the use case giving me trouble is uncommon and easily worked around. I don't like leaving things "broken", but TBH I've been down this road before unsuccessfully, and I have bigger fish to fry ATM.
I'd rather any potential replies focus on the R5 write speed issue. Thanks!