Tassadar

Slow write speed on Intel RAID 5 (6x WD Red 4 TB)


Hi all,

This is my first post on this forum; I hope someone can lend me a hand, since I have run out of ideas.

I've built a RAID 5 on an ASRock Z87 Extreme6 using six Western Digital Red 4 TB drives connected to the Intel controller's six SATA3 ports, with the aim of creating a 20 TB RAID 5 array. The OS is Windows 8.1 x64.

I created the RAID from the BIOS utility, selecting a 64 KB stripe size (I had the option of 64 or 128, and the utility recommended 64 KB for RAID 5). Once in Windows I formatted the RAID volume with a 20 TB partition, and the write speed was really slow (10 MB/s max), even after waiting for the RAID to finish building (it took several hours).

After reading and looking for information, I enabled the write cache and disabled write-cache buffer flushing. I also turned the "simultaneous" option on in the Intel Rapid Storage Technology panel. After doing this the write speed increased to 25-30 MB/s.

I have noticed that the physical sector size is 4096 bytes (usual on these 4 TB disks), but the logical sector size is 512 bytes:

[screenshot: sector-size report showing 4096-byte physical / 512-byte logical sectors]

Shouldn't those sector sizes match for good performance? If so, how can I change them?
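For reference, a minimal Python sketch around the stock fsutil tool can read those values back (run from an elevated prompt; D: is just an assumed letter for the RAID volume):

```python
# Minimal sketch: read the NTFS sector and cluster sizes via the
# built-in Windows fsutil tool. "D:" is an assumed drive letter for
# the RAID volume; run from an elevated prompt.
import subprocess

out = subprocess.run(
    ["fsutil", "fsinfo", "ntfsinfo", "D:"],
    capture_output=True, text=True, check=True,
).stdout

# fsutil prints "Bytes Per Sector" (logical), "Bytes Per Physical
# Sector" and "Bytes Per Cluster" among other fields.
for line in out.splitlines():
    if "Sector" in line or "Cluster" in line:
        print(line.strip())
```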

I've tried deleting the partition and creating it again with different cluster sizes. The best performance comes with 64 KB (the stripe size), but even then it's only 50-60 MB/s of real speed when copying a big MKV file from an SSD, and it doesn't change the 512-byte logical sector size shown in the capture either.

AS SSD Benchmark seems to say the partition is correctly aligned:

[screenshot: AS SSD Benchmark results]

The speed results there look OK, but as I said, the real write speed never exceeds 58-59 MB/s.
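In case anyone wants to reproduce the numbers, a raw sequential write can be timed with a small script instead of watching the copy dialog; a minimal sketch (assumed path and sizes, adjust to taste):

```python
# Minimal sketch: time a plain sequential write on the array itself,
# so the source SSD and Explorer's copy dialog are out of the picture.
# The path and sizes are assumptions; adjust for your volume.
import os
import time

path = r"D:\writetest.bin"   # a file on the RAID volume (assumed D:)
block = 64 * 1024            # 64 KB blocks, matching the stripe size
total = 1024**3              # 1 GB written in total
buf = os.urandom(block)

start = time.perf_counter()
with open(path, "wb", buffering=0) as f:
    for _ in range(total // block):
        f.write(buf)
    os.fsync(f.fileno())     # push the data past the OS write cache
elapsed = time.perf_counter() - start

os.remove(path)
print(f"sequential write: {total / elapsed / 1024**2:.1f} MB/s")
```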

I attach a screenshot of fdisk; I really don't know whether it's well or badly aligned:

[screenshot: fdisk partition listing]
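One way to sanity-check the alignment regardless of what fdisk shows is the partitions' StartingOffset, which should divide evenly by the 4096-byte physical sector and ideally by the 64 KB stripe; a rough sketch using the stock wmic query:

```python
# Rough sketch: list partition starting offsets via the stock wmic
# query and check them against the 4 KB physical sector and the
# 64 KB stripe size chosen in the BIOS utility.
import subprocess

STRIPE = 64 * 1024

out = subprocess.run(
    ["wmic", "partition", "get", "Name,StartingOffset"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines()[1:]:          # skip the header row
    parts = line.strip().rsplit(None, 1)
    if len(parts) != 2 or not parts[1].isdigit():
        continue
    name, offset = parts[0], int(parts[1])
    print(f"{name}: offset={offset}, "
          f"4K-aligned={offset % 4096 == 0}, "
          f"stripe-aligned={offset % STRIPE == 0}")
```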

ATTO Disk Benchmark:

[screenshot: ATTO Disk Benchmark results]

These six disks were previously installed in a NAS and got write speeds above 80 MB/s there, so where is the problem here?

Many thanks in advance



Which benchmark are you using to measure the performance? Firstly, software RAID 5 is never going to be as good as a dedicated hardware RAID card with some form of cache. From your benchmarks it is hard to tell how you are measuring performance; software RAID plus low-queue-depth tests will result in fairly poor numbers.

Also, you aren't leaving much room for hardware failures by going with software RAID 5 across six 4 TB drives. If a drive fails, you run a high chance of encountering a read error during the rebuild, and your data becomes a recovery problem at that point. Is this just generic scratch space?
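To put a rough number on that rebuild risk (back-of-the-envelope, assuming the usual consumer-drive spec of one unrecoverable read error per 1e14 bits; real-world rates vary):

```python
# Back-of-the-envelope: odds of hitting at least one unrecoverable
# read error (URE) while rebuilding a degraded 6x 4 TB RAID 5.
# Assumes the consumer-drive spec of <1 URE per 1e14 bits read.
ure_per_bit = 1e-14
bits_to_read = 5 * 4e12 * 8      # five surviving 4 TB drives, in bits

p_clean = (1 - ure_per_bit) ** bits_to_read
print(f"bits to read during rebuild: {bits_to_read:.1e}")
print(f"P(at least one URE): {1 - p_clean:.0%}")
```

Under those assumptions that works out to roughly an 80% chance of at least one URE during a rebuild.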


Many thanks for answering, Kevin,

The AS SSD Benchmark results look OK, as you can see in the screenshot I attached, but when I copy a big file from another disk to the RAID the speed is between 50 and 60 MB/s (Windows 8 shows the instantaneous speed while copying a file).

Can you tell me if the cluster sizes are OK? (I attached a screenshot in the first post.) Do you think this speed can't go any higher? I'd really like to get the best possible performance.

The RAID will store videos and music; important data will be backed up to another disk, but the idea was to have the safety advantage of RAID 5 instead of using individual disks.

How could I configure two partitions, with one of them more tolerant to failures at the cost of losing some space? If that's possible and not too difficult, it could be interesting.

Many thanks in advance.

I attach a screenshot of fdisk; I really don't know whether it's well or badly aligned:

Doesn't show actual sector alignment unfortunately. You may need to check again.

As a rule, larger stripe sizes are better for sequential performance, smaller ones for smaller files.
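Concretely, with 6 drives and a 64 KB stripe unit, a full-stripe write carries 5 x 64 KB = 320 KB of data plus one 64 KB parity chunk; anything smaller forces a read-modify-write. A quick sketch of the arithmetic:

```python
# Stripe arithmetic for this array: 6 drives, RAID 5, 64 KB stripe unit.
drives = 6
stripe_unit = 64 * 1024
full_stripe = (drives - 1) * stripe_unit   # data carried by one stripe

print(f"full-stripe write = {full_stripe // 1024} KB data + 64 KB parity")

# Writes that aren't a whole number of full stripes force the
# controller into read-modify-write (read old data and old parity
# before writing), a big part of why small writes crawl on parity RAID.
for size in (4 * 1024, 64 * 1024, full_stripe):
    rmw = size % full_stripe != 0
    print(f"{size // 1024:>4} KB write -> read-modify-write: {rmw}")
```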

6 drives means you can run RAID 6 (or, with software RAID on Linux, RAIDZ2) to tolerate up to two drive failures, but if you're stuck with motherboard chipset RAID you may be SOL.


You need a controller that can do RAID in hardware, or performance will lag.

So you will need to get the best support info on what the controller can do, and how fast.

Your hardware may be doing all the striping and parity calculations in software, resulting in poor write speeds.

Offhand, I would look at an LSI card for a hardware RAID 5 setup.


Doesn't show actual sector alignment unfortunately. You may need to check again.

As a rule, larger stripe sizes are better for sequential performance, smaller ones for smaller files.

6 drives means you can run RAID 6 (or, with software RAID on Linux, RAIDZ2) to tolerate up to two drive failures, but if you're stuck with motherboard chipset RAID you may be SOL.

Agreed. Ditch the fake hardware RAID, and if you're not going to use a stupidly expensive controller, use mdadm/ZFS with Linux/FreeBSD.

Be warned, the RAID 6 parity is going to destroy your processor... hopefully you're at least on a quad core if you want serious write speeds.
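For a rough feel of the raw XOR cost, a single-core NumPy sketch (illustrative only: real md/ZFS parity kernels are optimized C/SIMD, and RAID 6's second syndrome is costlier than plain XOR):

```python
# Rough single-core estimate of raw XOR parity throughput via NumPy.
# Treat it as a floor, not a prediction: real parity code is far
# faster, and RAID 6 adds a second, pricier Galois-field syndrome.
import time
import numpy as np

data = np.random.randint(0, 256, size=(5, 64 * 1024**2), dtype=np.uint8)

start = time.perf_counter()
parity = data[0].copy()
for row in data[1:]:
    parity ^= row                # XOR the five data chunks together
elapsed = time.perf_counter() - start

mb = data.nbytes / 1024**2
print(f"XORed {mb:.0f} MB in {elapsed:.3f} s -> {mb / elapsed:.0f} MB/s")
```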

Sent from my rooted HTC Supersonic using Tapatalk 2 Pro

