Search the Community

Showing results for tags 'stripe'.



Found 1 result

  1. [Apologies if this seems misplaced; I don't see a forum dedicated to RAID configuration.] I'm configuring a RAID array on SSDs. It happens to be 3 drives in RAID 5, but this is a fairly generic question.

     My idea is to reduce stripe read-modify-write operations and write amplification by using a segment size of 4k (which works out to a stripe size of 8k, in my case), and then building the filesystem with a block size that matches the stripe size. The only downside I can see is the overhead of such a small stripe size, if the controller is too dumb to combine a sequence of 4k reads into fewer, larger reads.

     The reason I care about small-write performance is that this filesystem will be used for software builds, among other things, which involves frequently creating large numbers of small- and medium-sized files. From what I can tell, this isn't a very common practice, but I suspect the tendency toward large stripe sizes is a legacy of mechanical disk drives and simple controllers. My RAID "controller" is Linux software RAID (mdadm); a rough sketch of the commands I have in mind is below. Any thoughts?
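
     For concreteness (device names are placeholders, and the alignment math is my own, so correct me if it's off):

       # 4k chunk ("segment") size; with 2 data disks that's an 8k data stripe
       mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=4 /dev/sda1 /dev/sdb1 /dev/sdc1

       # stride = chunk / block = 4k / 4k = 1; stripe-width = stride * data disks = 2
       mkfs.ext4 -b 4096 -E stride=1,stripe-width=2 /dev/md0

     (I'd have liked an 8k filesystem block to match the full data stripe, but ext4 block sizes can't exceed the page size, so the stride/stripe-width hints seem like the next best thing.)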