Son_of_Rambo

New twist for SMR drives in RAID

4 posts in this topic

Just a suggestion for an extra thing to look into when next benchmarking an SMR drive. I know everyone says "don't use SMR in RAID" because of the bad rebuild times - such as the ~57 hrs vs ~20 hrs from the Seagate 8TB Archive drive review. 

 

But what if you built an SMR-based RAID array, but then used PMR drives as the rebuild drives? 

Now, with no technical knowledge here, this would seem to fix a few issues (and may cause new ones??). 

 

Pro: 

* Cheaper through use of SMR drives for initial RAID

?? * Short rebuild times - in line with PMR drives

 

Con: 

?? * Potential for decreased future RAID performance due to mixed drive composition??

 

Thus I think it'd be great if we could look at testing this in the next SMR drive review. Say an initial 4-drive RAID 5 build. Whack a bunch of data on it, test performance, yank a drive, rebuild with SMR / PMR, retest performance and compare. Also compare to a RAID 5 of PMR drives with a PMR rebuild (the standard setup). The lower entry cost could mean a bigger RAID for many people without blowing the budget, whilst staying protected against long rebuild times. 
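The entry-cost side of that is easy to sketch. Using purely illustrative per-TB prices (made up for the example, not real quotes), a 4 x 8TB build might compare like this:

```python
# Hypothetical street prices in $/TB - illustrative only, not quotes.
price_per_tb = {"SMR": 25.0, "PMR": 35.0}

drives, tb_each = 4, 8  # the proposed 4 x 8TB RAID 5

for tech, price in price_per_tb.items():
    total = drives * tb_each * price
    print(f"4 x {tb_each}TB {tech}: ${total:,.0f}")
```

At those made-up prices the SMR build comes in a few hundred dollars cheaper up front, which is the whole appeal - the real gap would depend on current pricing.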

 

Thoughts? (Happy to be wrong... just would like it if it was proven and not just because...)


SMR performance has gotten more than a few tweaks as time has gone on - I think AnandTech has a more recent review than the original Archive 8TB that showed better performance. I'd be curious how that handled RAID use.

As interesting as SMR + PMR for rebuilds could be, I don't think that's going to be as useful long-term, because it sounds like PMR drives will go away. (Also, as a nitpick, I'd much rather see RAID 6 testing than RAID 5, especially since UBERs start to catch up with you when you're talking volumes this big...)
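To put a rough number on that nitpick: with a consumer-class UBER of 1 error per 10^14 bits read, the odds of hitting at least one unrecoverable read error while reading the three surviving drives of a 4 x 8TB RAID 5 during a rebuild look like this (back-of-envelope Python, treating the quoted UBER as a flat per-bit rate, which is a simplification):

```python
import math

def p_ure(uber, bytes_read):
    """Probability of at least one unrecoverable read error
    while reading bytes_read bytes, with uber = errors per bit."""
    bits = bytes_read * 8
    # exact form 1 - (1 - uber)^bits; expm1/log1p keep precision
    return -math.expm1(bits * math.log1p(-uber))

# RAID 5 rebuild of a 4 x 8TB array: all 3 surviving drives
# must be read in full (~24 TB).
surviving_bytes = 3 * 8e12

for uber in (1e-14, 1e-15):  # consumer- vs enterprise-class spec
    print(f"UBER {uber:g}: {p_ure(uber, surviving_bytes):.1%}")
```

That works out to roughly an 85% chance at the 1e-14 spec versus under 20% at 1e-15 - which is exactly why RAID 6 (which can tolerate a URE mid-rebuild) matters at these capacities.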


That method sounds interesting, but unless you are breaking the drive yourself, how will you only bust the SMR drives so the rebuild activity only happens on a PMR drive?

11 hours ago, Kevin OBrien said:

That method sounds interesting, but unless you are breaking the drive yourself, how will you only bust the SMR drives so the rebuild activity only happens on a PMR drive?

It's not about which drive breaks - it can be either an SMR or a PMR drive. It's about the write speed of the drive you put in to replace it. 

 

SMR write speed has been notoriously lower than PMR, and in a rebuild scenario that results in a very long rebuild time (and thus a solid recommendation NOT to use SMR in RAID). This is about seeing whether that flaw can be fixed by using a PMR drive as the replacement. 
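The arithmetic behind that is simple: rebuild time is roughly drive capacity divided by sustained write speed. Using illustrative sustained speeds (assumed for the example - ~40 MB/s for an SMR drive bogged down by shingled rewrites, ~110 MB/s for a PMR drive), you land in the same ballpark as the ~57 hr vs ~20 hr figures from the review:

```python
# Assumed sustained rebuild write speeds in MB/s - illustrative,
# not measured specs. SMR reflects shingled-rewrite overhead.
speeds = {"SMR replacement": 40, "PMR replacement": 110}

capacity_tb = 8  # size of the replacement drive being filled

for drive, mbps in speeds.items():
    hours = capacity_tb * 1e6 / mbps / 3600  # TB -> MB, then seconds -> hours
    print(f"{drive}: ~{hours:.0f} hrs")
```

So swapping in a PMR replacement attacks the one place where SMR's write penalty really bites - the drive being rebuilt onto is write-bound, the surviving drives are only being read.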

 

So let's say you had an array with 2 x SMR & 2 x PMR drives. If an SMR drive breaks, you'd replace it with a PMR. If a PMR drive breaks, you'd replace it with a PMR. 

Over time, your array will move towards being all PMR. The advantage is that you could utilise the cheaper SMR drives when first setting up the RAID. When doing a full upgrade (e.g. 4 x 4TB RAID -> 4 x 10TB RAID) you could again utilise cheaper drives in the initial build.  

 

 

