Son_of_Rambo

Member
  • Content count

    2

Community Reputation

0 Neutral

About Son_of_Rambo

  • Rank
    Member
  1. New twist for SMR drives in RAID

    It's not about which drive breaks; it can be either an SMR or a PMR drive. It's about the write speed of the drive you put in to replace it. SMR write speeds have been notoriously lower than PMR, and in a rebuild scenario that results in a very long rebuild time (hence the solid recommendation NOT to use SMR in RAID). This is about seeing whether that flaw can be fixed by using a PMR drive as the replacement. So let's say you had an array with 2 x SMR and 2 x PMR drives. If an SMR breaks, you replace it with a PMR. If a PMR breaks, you also replace it with a PMR. Over time, your array moves towards being all PMR (there's a toy simulation of that drift sketched after this list). The advantage is that you can use the cheaper SMR drives when first setting up the RAID. When doing a full upgrade (e.g. 4 x 4TB RAID -> 4 x 10TB RAID) you can again use cheaper drives in the initial build.
  2. Just a suggestion for an extra thing to look into when next benchmarking an SMR drive. I know everyone says "don't use SMR in RAID" because of the bad rebuild times - such as the ~57 hrs vs ~20 hrs from the Seagate 8TB Archive drive review. But what if you built an SMR-based RAID array and then used PMR drives as the rebuild drives? With no technical knowledge here, this would seem to fix a few issues (and may cause new ones??).

     Pro:
     * Cheaper initial RAID through the use of SMR drives ??
     * Short rebuild times - in line with PMR drives

     Con:
     * Potential for decreased future RAID performance due to mixed drive composition ??

     So I think it'd be great if we could test this in the next SMR drive review. Say an initial build of RAID 5 x 4. Whack a bunch of data on it, test performance, yank a drive, rebuild with an SMR / PMR drive, then retest performance and compare. Also compare to RAID 5 with PMR drives and a PMR rebuild (the standard setup); a rough sketch of how the rebuild timing could be scripted follows after this list. The potential to lower the entry cost could mean a bigger RAID for many people without blowing the budget, while still being protected against long rebuild times. Thoughts? (Happy to be wrong... I'd just like it proven and not just "because...".)
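
As a toy illustration of the drift-to-PMR point in post 1 above, here is a minimal Python sketch, assuming an imaginary 2 x SMR + 2 x PMR array and a made-up number of failures (none of these figures come from any review): whichever drive breaks, the replacement is always PMR, so the SMR count can only fall.

import random

# Toy simulation of the replace-with-PMR policy: whichever drive type fails,
# the replacement drive is always PMR, so the SMR count can only go down.
# The 2 x SMR + 2 x PMR starting mix and the failure count are illustrative.
def simulate_replacement_policy(n_smr=2, n_pmr=2, failures=6, seed=1):
    rng = random.Random(seed)
    array = ["SMR"] * n_smr + ["PMR"] * n_pmr
    for i in range(1, failures + 1):
        failed = rng.randrange(len(array))   # any member can be the one that breaks
        array[failed] = "PMR"                # replacement is always PMR
        print(f"after failure {i}: {array.count('SMR')} SMR / {array.count('PMR')} PMR")

if __name__ == "__main__":
    simulate_replacement_policy()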
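
And as a rough sketch of how the rebuild-timing test in post 2 could be scripted on Linux with mdadm: the device names below are hypothetical placeholders, it has to run as root on disks you are happy to wipe, and it is a sketch of the idea rather than a polished benchmark.

import subprocess
import time

ARRAY = "/dev/md0"            # hypothetical md array built for the test
FAILED_DISK = "/dev/sdb"      # member being "yanked"
REPLACEMENT = "/dev/sdf"      # swap in an SMR or PMR drive here per test run

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def rebuild_in_progress():
    # mdadm reports an active rebuild as a "recovery" line in /proc/mdstat.
    with open("/proc/mdstat") as f:
        return "recovery" in f.read()

def time_rebuild():
    # Fail and remove one member, then add the replacement and time the resync.
    run(["mdadm", ARRAY, "--fail", FAILED_DISK, "--remove", FAILED_DISK])
    run(["mdadm", ARRAY, "--add", REPLACEMENT])
    start = time.time()
    time.sleep(5)              # give the resync a moment to start
    while rebuild_in_progress():
        time.sleep(60)
    print(f"rebuild took {(time.time() - start) / 3600:.1f} hours")

if __name__ == "__main__":
    time_rebuild()

Repeating the same run with an SMR replacement, then a PMR replacement, and once more on an all-PMR array would give the three rebuild times the post asks to compare.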