You show 2700 4k random write IOPS for this drive. That's an order of magnitude faster than the Enterprise drive! The best 15k rpm drives barely achieve 400 IOPS. There's no way this data was written to the platters at this speed in the way regular drives do it. Either it's still in the cache, or the drive stores those writes in a clean region where it can aggregate them (and marks the region that should have been overwritten as unused). The former would simply be an improper measurement, whereas the latter would create fragmentation (which is cleaned up later by garbage collection).
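To make the second variant concrete, here's a minimal sketch in Python of how such a staging region could work. Everything in it (names, structures, the redirection map) is my own guess at the mechanism, not Seagate's actual firmware:

```python
# Hypothetical sketch of a "media cache": random writes are appended
# sequentially to a clean staging region, and the logical sectors they
# replace are merely marked stale via a redirection map.

class MediaCache:
    def __init__(self):
        self.staging = []    # (lba, data) tuples, appended sequentially
        self.redirect = {}   # lba -> index of the newest copy in staging

    def write(self, lba, data):
        # A random write becomes a fast sequential append; no seek to the
        # original platter location is needed.
        self.redirect[lba] = len(self.staging)
        self.staging.append((lba, data))

    def read(self, lba, platters):
        # Reads must consult the redirection map first.
        idx = self.redirect.get(lba)
        return self.staging[idx][1] if idx is not None else platters[lba]

    def garbage_collect(self, platters):
        # During idle time the staged writes are merged back into their
        # shingled zones (the expensive read-modify-write part), after
        # which the staging region is clean again.
        for lba, idx in self.redirect.items():
            platters[lba] = self.staging[idx][1]
        self.staging.clear()
        self.redirect.clear()
```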
If the latter is actually what happens, I think this could be of great benefit for regular desktop users on any drive. Such usage is generally very bursty, and what really hurts are the moments when HDDs slow to a crawl due to random accesses. Ideally it would be a setting switchable by the user, or the drive could detect sustained loads and revert to regular behaviour in such cases (otherwise the entire disk would soon be completely fragmented). One could call this "adapting lessons learned from SSDs to HDDs".
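Detecting a sustained load doesn't seem hard either. A toy detector, with thresholds entirely made up by me, could look like this:

```python
import time

# Toy version of the "detect sustained load and revert" idea: track the
# write rate per second and bypass the staging region once the rate has
# stayed high for a whole window. All numbers are invented for illustration.

class BurstDetector:
    def __init__(self, rate_limit=1000, window_s=5.0):
        self.rate_limit = rate_limit       # writes/s considered "sustained"
        self.window_s = window_s           # how long the rate must persist
        self.count = 0
        self.window_start = time.monotonic()
        self.sustained_since = None

    def on_write(self):
        now = time.monotonic()
        self.count += 1
        if now - self.window_start >= 1.0:
            rate = self.count / (now - self.window_start)
            self.count = 0
            self.window_start = now
            if rate >= self.rate_limit:
                self.sustained_since = self.sustained_since or now
            else:
                self.sustained_since = None   # load dropped, reset
        # Bursty traffic goes to the staging region; sustained traffic is
        # written the regular (shingled, in-place) way.
        bursty = (self.sustained_since is None
                  or now - self.sustained_since < self.window_s)
        return "media_cache" if bursty else "direct_shingled"
```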
Is this just a part of the DRAM cache? Don't regular HDDs also use that as a write cache? That's the most obvious use of this cache, apart from holding the file table etc.
You also say repeatedly that the drive doesn't perform well under sustained write activity. However, the 128k sequential tests showed very good performance (195 MB/s read and write). At this rate the writes would quickly overrun a 20 MB write cache, so the cache can't explain those results. Furthermore, from my understanding of how the drive works, sustained writes should pose no problem whatsoever. Take a new one and fill it up to 8 TB - easy. Otherwise the full sequential speed couldn't have been achieved in the 128k sequential tests. It's the rewrites that hurt, because they trigger further internal reads and rewrites of entire shingled blocks (up to the next synchronization boundary); the rough calculation below shows how badly this amplifies. Or am I misunderstanding something here? You may refer to your result of 30 MB/s for the full VM backup. Was this measured upon the initial creation of the backup, or did you overwrite an existing one? If I'm right, you should adjust those passages in the article, as they give a wrong impression of the drive.
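A back-of-the-envelope calculation, assuming 256 MB shingled bands (I don't know Seagate's actual band size), illustrates why rewrites in particular are so expensive:

```python
# Rough write amplification of an in-place rewrite on an SMR drive:
# overwriting one 4 KiB sector forces the drive to read and rewrite
# everything from that sector to the end of the shingled band.

BAND = 256 * 2**20   # assumed band size in bytes (my guess, not a spec)
WRITE = 4 * 2**10    # the user-visible 4 KiB random write

# On average, half a band sits behind the overwritten sector:
amplification = (BAND / 2) / WRITE
print(f"~{amplification:,.0f}x write amplification per 4 KiB rewrite")
# -> ~32,768x, which is why rewrites (not first-time writes) hurt
```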
This brings me to an interesting question: why is the RAID rebuild so slow? What is the NAS ordering the drive to do? Was the drive formatted before the rebuild? In the worst case (from the Archive HDD's point of view) I could imagine the NAS scanning and comparing all files on both drives in file-system order (creating lots of random accesses), and then updating any differing data on the new drive (causing lots of rewrites). Another unfavorable variant would be to copy file by file, somehow reproducing the fragmentation which is probably present on the source drive. Again this would trigger lots of rewrites of incomplete shingled sections.
A version which should IMO work very well is to just copy everything, sector by sector, from the reference HDD to a freshly formatted Archive HDD. All data would be sequential and could be grouped into large blocks, so every shingled section can be written in one pass, without any rewrites. I'm no NAS programmer, but this doesn't sound too hard.
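In Python the core of it would be little more than this (the device paths are placeholders, and a real rebuild would obviously need error handling and progress reporting):

```python
import os

CHUNK = 64 * 2**20  # 64 MiB: large, aligned writes keep the stream sequential

def clone(src_path, dst_path):
    """Stream the intact member onto the new drive, front to back."""
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        while True:
            buf = src.read(CHUNK)
            if not buf:
                break
            dst.write(buf)
        dst.flush()
        os.fsync(dst.fileno())

# e.g. clone("/dev/sdb", "/dev/sdc")  # surviving member -> fresh Archive HDD
```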
@jasrockett: I do not see why you couldn't use the Archive for long-term storage of your photos. You write them once and that's it. There may be a read access every now and then. And when you want to edit one, you'd copy it over to your PC anyway, work on it, and then transfer it back. The drive can cope with rewriting a few tens of MB occasionally. My dad does it like this: one 8 TB Archive in the PC for accessible storage of the mostly static photos, and an external one for mirroring according to his backup schedule. And if you're buying them: go straight for the 8 TB version. At 2+ TB/year you'll need the space anyway, and they're just 13% / 30€ more expensive than the 6 TB version (in Germany).