Slow write speed on Intel RAID 5 (6x RED 4TB)

red raid intel slow performance

4 replies to this topic

#1 Tassadar



  • Member
  • 2 posts

Posted 15 July 2014 - 07:26 AM

Hi all,


This is my first post in this forum. I hope someone can lend me a hand, since I've run out of ideas.


I've built a RAID 5 array on an ASRock Z87 Extreme6 using six Western Digital RED 4TB drives connected to the six Intel-controller SATA3 ports, with the aim of creating a 20TB RAID 5 volume. The OS is Windows 8.1 x64.


I created the RAID from the BIOS utility, selecting a 64 KB stripe size (I had the option of 64 or 128, but the utility recommended 64 KB for RAID 5). Once in Windows I formatted the RAID volume with a 20 GB partition, and write speed was really slow (10 MB/s max), even after waiting for the RAID to be completely built (it took several hours).
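For context, the stripe size chosen in the BIOS is the per-drive chunk, not the amount written per pass; a rough sketch of the geometry this setup implies (values taken from the post):

```python
# Sketch: full-stripe geometry for a 6-drive RAID 5 with a 64 KiB stripe size.
drives = 6
parity_drives = 1          # RAID 5 stores one parity block per stripe
stripe_size_kib = 64       # per-drive chunk chosen in the BIOS utility

data_drives = drives - parity_drives
full_stripe_kib = data_drives * stripe_size_kib

print(f"data drives per stripe: {data_drives}")       # 5
print(f"full stripe width: {full_stripe_kib} KiB")    # 320 KiB

# Writes smaller than a full stripe force a read-modify-write cycle to
# recompute parity, which is one reason small-block write speed collapses
# on parity RAID without a caching controller.
```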


After reading around for information, I enabled the write cache and disabled write-cache buffer flushing. I also turned the "simultaneous" option on in the Intel Rapid Storage Technology panel. After doing this, write speed increased to 25-30 MB/s.


I have noticed that the physical sector size is 4096 bytes (usual on these 4TB disks), but the logical sector size is 512 bytes:




Shouldn't those sector sizes match for good performance? If so, how do I change it?
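For what it's worth, the 512-byte logical / 4096-byte physical combination is normal for "512e" Advanced Format drives and isn't itself a problem; what usually matters for performance is that the partition's start offset is a multiple of 4096 bytes. A minimal check, using hypothetical offsets:

```python
def is_4k_aligned(offset_bytes: int) -> bool:
    """A partition is 4K-aligned if its byte offset is a multiple of 4096."""
    return offset_bytes % 4096 == 0

# Vista and later typically start the first partition at 1 MiB (1048576
# bytes), which is 4K-aligned; the old XP-era offset of 32256 bytes is not.
print(is_4k_aligned(1048576))  # True
print(is_4k_aligned(32256))    # False
```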


I've tried deleting the partition and creating it again with different cluster sizes, and the best performance comes from 64 KB (the stripe size), but actual speed is still only 50-60 MB/s when copying a big MKV file from an SSD. None of this changes the 512-byte logical sector size shown in the screenshot.


AS SSD Benchmark seems to indicate that the partition is correctly aligned:




The speed results here seem OK, but as I said, real write speed never exceeds 58-59 MB/s.


I attach a screenshot of fdisk; I really don't know if it's aligned correctly or not:




ATTO DISK Benchmark:




Those six disks were previously installed in a NAS, with a write speed higher than 80 MB/s. Where is the problem here?


Many thanks in advance

#2 Kevin OBrien


    StorageReview Editor

  • Admin
  • 1,411 posts

Posted 15 July 2014 - 07:34 AM

Which benchmark are you using to measure performance? Firstly, software RAID 5 is never going to be as good as a dedicated hardware RAID card with some form of cache. From your benchmarks it's hard to tell how you are measuring performance; software RAID combined with low-queue-depth tests will result in fairly poor numbers.


Also, you aren't leaving much room for hardware failures by going with software RAID 5 on 6x 4TB drives. If a drive fails, you run a high chance of encountering a read error during the rebuild process, and at that point your data becomes a recovery problem. Is this just generic scratch space?
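The rebuild risk Kevin describes can be roughed out numerically. A sketch, assuming the 1-error-per-10^14-bits unrecoverable read error (URE) rate often quoted on consumer drive spec sheets (an assumption, not a measured value for these specific disks):

```python
# Sketch: chance of hitting at least one URE while rebuilding this array.
ure_per_bit = 1e-14          # assumed consumer-class spec-sheet rate
drive_tb = 4
surviving_drives = 5         # data read back during a 6-drive RAID 5 rebuild

bits_read = surviving_drives * drive_tb * 1e12 * 8   # 1.6e14 bits
p_no_error = (1 - ure_per_bit) ** bits_read
p_at_least_one = 1 - p_no_error

print(f"chance of a URE during rebuild: {p_at_least_one:.0%}")  # roughly 80%
```

Real-world rates are often better than the spec-sheet worst case, but the back-of-the-envelope number illustrates why single-parity RAID gets risky at this capacity.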

#3 Tassadar



  • Member
  • 2 posts

Posted 15 July 2014 - 08:37 AM

Many thanks for answering, Kevin,


The AS SSD Benchmark results look OK, as you can see in the screenshot I attached, but when I copy a big file from another disk to the RAID, speed is between 50 and 60 MB/s (Windows 8 shows the instantaneous speed while copying a file).


Can you tell me if the cluster size is OK? (I attached a screenshot in the first post.) Do you think this speed can't go any higher? I'd really like the best possible performance.


The RAID will store videos and music; important data will be backed up on another disk. But the idea was to have the safety advantage of RAID 5 instead of using individual disks.


How could I configure two partitions, with one of them having better tolerance to failures at the cost of losing some space? If that's possible and not too difficult, it could be interesting.


Many thanks in advance.

#4 continuum



  • Mod
  • 3,531 posts

Posted 16 July 2014 - 08:41 PM

I attach a screenshot of fdisk; I really don't know if it's aligned correctly or not:

Unfortunately that doesn't show the actual sector alignment. You may need to check again.


As a rule, larger stripe sizes are better for sequential performance, smaller ones for small files.


Six drives means you can run RAID 6 (or RAIDZ2, if you're on Linux/ZFS with software RAID) to tolerate up to two drive failures, but if you're stuck with motherboard chipset RAID you may be SOL.
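The trade-off between the two levels is straightforward to put in numbers for this six-drive setup:

```python
# Sketch: usable capacity vs. fault tolerance for six 4 TB drives.
drive_tb = 4
drives = 6

raid5_tb = (drives - 1) * drive_tb   # capacity of one drive goes to parity
raid6_tb = (drives - 2) * drive_tb   # capacity of two drives goes to parity

print(f"RAID 5: {raid5_tb} TB usable, survives 1 drive failure")   # 20 TB
print(f"RAID 6: {raid6_tb} TB usable, survives 2 drive failures")  # 16 TB
```

So the second parity drive costs 4 TB of the 24 TB raw capacity in exchange for surviving a second failure (or a read error during a rebuild).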

#5 unhappy



  • Member
  • 4 posts

Posted 26 August 2014 - 09:57 PM

You need a controller that can do RAID in hardware, or performance will lag.

So you will need to dig up support info on what your controller can actually do, and how fast.

Your hardware may be doing all the striping and parity calculations in software, resulting in poor write speeds.

Offhand, I would look at an LSI card for a hardware RAID 5 setup.

