Everything posted by Hrafn

  1. As well as 'cold storage' archives, it would be interesting to see how suitable these disks are for a home computer's 'video library' -- which would generally consist of large sequential files, created and deleted as a single unit, but on timeframes more likely to be months (movies) or days (this week's television programmes) than a true cold-storage scenario. This would also make the size of the rewriting 'bands' a matter of interest. If relatively small (64-100MB) this should not adversely affect the drive's usage for this purpose; if significantly larger (e.g. 1GB), it might (moving the drive closer to purist Write-Once-Read-Many in its utilisation) -- see the back-of-envelope sketch below. I look forward to reading the review.
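     To put numbers on the band-size concern, a back-of-envelope sketch in Python (the band sizes, and the assumption that a deleted file's head and tail each share a band with other data, are my guesses rather than published specs):

     ```python
     # Worst-case read-modify-write cost of fully reclaiming one deleted
     # recording, assuming the drive must rewrite any band the file shares
     # with neighbouring data (head + tail). Band sizes are guesses.
     MB = 1024**2
     GB = 1024**3

     file_size = 700 * MB                          # one recorded programme
     for band in (64 * MB, 256 * MB, 1 * GB):
         whole = file_size // band                 # bands freed with no rewrite
         shared = 2 * band                         # head + tail bands, worst case
         print(f"band {band // MB:5d} MB: {whole:2d} bands freed outright, "
               f"up to {shared // MB:5d} MB rewritten")
     ```

     With 64MB bands the worst-case rewrite to reclaim a 700MB recording is ~128MB; with 1GB bands it is ~2GB, i.e. the drive could end up rewriting far more data than the file it is reclaiming.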
  2. Am I misinterpreting the underlying test, or does this chart (https://www.overclockers.at/files/21-februar-2015_11-19_write_201045.png) (from https://www.overclockers.at/storage_memory/review-seagate-archive-hdd-v2-8tb_242172) show reasonable sustained sequential writes (at least up until 60% of the drive is written)?
  3. Oddball question, and a potential quick-and-dirty way to see if large sequential writes bypass the cache -- what happens when you attempt to write a single file that is larger than the cache (e.g. a Blu-ray disk image) to the disk? This should force the drive to show its true colours, and give little opportunity to mistake sequential data for random (a rough test sketch follows). I would also expect that backing up large numbers of largish disk images would be a valid use scenario for a very large archive disk (meaning that if it gets indigestion doing this, it's got some problems for its core business).
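     A minimal sketch of that probe (the mount point and chunk size are placeholders, and the 20GB cache figure comes from this thread rather than a spec sheet). If per-chunk throughput collapses somewhere past the cache size, sequential data is being staged in the cache; if it stays flat, the drive is streaming straight to the shingled region:

     ```python
     import os
     import time

     TARGET = "/mnt/archive8tb/probe.bin"   # hypothetical mount point
     CHUNK = 64 * 1024 * 1024               # 64MB per write
     TOTAL = 30 * 1024**3                   # 30GB, comfortably past a 20GB cache

     buf = os.urandom(CHUNK)                # incompressible data
     written = 0
     with open(TARGET, "wb", buffering=0) as f:
         while written < TOTAL:
             t0 = time.monotonic()
             f.write(buf)
             os.fsync(f.fileno())           # force each chunk out to the drive
             dt = time.monotonic() - t0
             written += CHUNK
             print(f"{written / 1024**3:6.1f} GB  "
                   f"{CHUNK / dt / 1024**2:7.1f} MB/s")
     ```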
  4. So if the "no 'safe' write activity that can happen anywhere except the landing zone" scenario inevitably happens for sustained sequential writes, then how does the synthetic test avoid this? If the synthetic test avoids it, then avoidance must be at least possible, and the question becomes is avoidance the rule or the exception?
  5. Surely that would only happen if the drive got heavily fragmented, or almost full? On a reasonably unfragmented drive with a reasonable amount of free space, the majority of free space should consist of wholly-unwritten shingled blocks, shouldn't it? Writes would only force rewrites if every free block were already partially written -- I suppose that would be possible with a very large number of very small files and very poor garbage collection, but in reality? (A toy model follows.)
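     A toy model of that argument (the band size, file-size mix, and delete rate are all arbitrary assumptions): pack a drive with large contiguous files, delete a random third of them, and see where the free space ends up.

     ```python
     import random

     random.seed(1)
     BAND_MB = 256                                  # guessed band size
     BANDS = 4000                                   # ~1TB toy drive

     # Pack the drive end-to-end with large media files.
     files, pos = [], 0
     while True:
         size = random.choice([700, 1500, 4500])    # MB: episode/movie/rip
         if pos + size > BANDS * BAND_MB:
             break
         files.append((pos, size))
         pos += size

     survivors = [f for f in files if random.random() > 1 / 3]  # delete ~1/3

     # Tally how much of each band the surviving files occupy.
     used = [0] * BANDS
     for start, size in survivors:
         for b in range(start // BAND_MB, (start + size - 1) // BAND_MB + 1):
             lo, hi = b * BAND_MB, (b + 1) * BAND_MB
             used[b] += min(start + size, hi) - max(start, lo)

     free_empty = sum(BAND_MB for u in used if u == 0)
     free_partial = sum(BAND_MB - u for u in used if 0 < u < BAND_MB)
     print(f"free space in wholly-empty bands: {free_empty} MB")
     print(f"free space in partial bands:      {free_partial} MB")
     ```

     On these assumptions most of the freed space lands in wholly-empty bands, simply because the files are much larger than the bands; you would need masses of tiny files (or no garbage collection at all) to end up with mostly-partial bands.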
  6. I don't think he's saying they're slow under all sustained sequential writes ("And again as mentioned the synthetic test didn't show the sustained sequential drop"), just some of them ("but the single drive Veeam and separate RAID1 rebuild figures did"). My assumption (based on some ambiguous wording in the Skylight report) is that the drive may have the capability to bypass the cache and do sustained sequential writes directly to the shingled region (avoiding the cache's potential bottleneck for sustained writing), but that some circumstances lead it to treat apparently-sequential data as 'random' and still use the cache (a speculative sketch of such a heuristic follows).
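     Purely as illustration of what such a heuristic might look like (this is my guess at the kind of logic the Skylight paper hints at, not Seagate's actual firmware; the threshold is invented):

     ```python
     class WritePath:
         """Route each write to the shingled region or the persistent cache."""

         def __init__(self, min_run_mb=32):
             self.expected_lba = None               # next LBA if stream continues
             self.run_bytes = 0                     # length of current run
             self.min_run = min_run_mb * 1024 * 1024

         def route(self, lba, length):
             if lba == self.expected_lba:
                 self.run_bytes += length           # stream continues
             else:
                 self.run_bytes = length            # stream broken: start over
             self.expected_lba = lba + length
             # Only trust a run as "sequential" once it has some history;
             # everything else gets staged in the persistent cache.
             return "shingled" if self.run_bytes >= self.min_run else "cache"
     ```

     On this model, two interleaved sequential streams (say, a rebuild plus filesystem metadata updates) would keep resetting each other's run counters and both get treated as 'random', which would square with the Veeam and RAID1 figures looking so much worse than the synthetic test.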
  7. I think the on-disk write cache is in fact 20GB (and the 20MB is just a typo). I'm also wondering if the slow RAID-rebuild/Veeam speeds are indicative of general sustained sequential write speeds, or an artifact of some idiosyncrasy of the processes involved.
  8. What are the sustained sequential write stats for this drive? I don't remember seeing any in the review. I would have thought that it would bypass the cache for such writes, writing them directly to the shingled portion of the drive (and the Skylight report certainly implied that this was a likely strategy).
  9. This paper ( https://www.usenix.org/conference/fast15/technical-sessions/presentation/aghayev ) gives a more detailed explanation of the drive's behaviour, and especially its vulnerability to sustained random writes. If that's a significant part of what you want to use a drive for, then it's clear that this drive isn't for you. Whether this category covers "most end users" is an open question. One interesting idea I did see floated on another forum to improve SMR's performance even further was to make it a hybrid drive, with an onboard SSD acting as the persistent cache.
  10. I think my home video library case covers my specific interest -- 'cool' (a reasonable amount of turnover, so not-quite-cold) storage of largish (100MB+) files. Generally redundancy would be a lesser concern, as the files tend to be non-unique and can be restored by re-ripping from optical media and/or redownloading in the case of disk failure. I would suspect that such usage predominates on multi-TB disks in home computers (it's quite difficult to fill them up otherwise). Beyond that, I think there's a general interest in finding out how much of a limitation the drive's unique technology places on its utilisation. Database and RAID setups that require constant writing are clearly right out, but there's a wide continuum between that and the opposite extreme of using it as essentially a pseudo-tape-drive for pure cold storage and/or backup. There certainly seems to be quite a bit of buzz (as well as confusion/nervousness) on NAS-related forums about this drive. I'd also mention that nobody appears to have done a serious review of this drive as yet. I suspect that quite a number of people may be holding off on buying one until an expert has put it through its paces.
  11. Another week, and still no review.
  12. Any idea when the review is coming out? It was originally going to be "out next week" a couple of weeks ago.