• Content Count

  • Joined

  • Last visited

  • Days Won


Posts posted by [ETA]MrSpadge

  1. I think you're overthinking it. If you ask me, the 150MB/sec on their write charts is simply an error in their testing. A quick and dirty ATTO test I did yesterday showed write speeds of 180MB/sec+ and very similar to what I saw during ~5TB of sustained writes.

    This drive has 1.33TB platters.

    @point1: Well, matching numbers for one theory do not automatically mean it's true :D

    @point2: yes, 1.33 TB platters including the capacity boost from shingling. I'd like to know how large this boost is. Based on traditional 1 TB platters it would be 33%. I think Seagate claimed around 20%, which would match my assumption of 1.1 - 1.2 TB platters without shingling.

    And thanks for your other information!
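    The arithmetic behind that platter estimate can be written out explicitly. The ~20% shingling boost is my assumption based on Seagate's statements, not an official figure:

```python
# Estimate the per-platter capacity without shingling, assuming a ~20%
# SMR areal-density boost (my assumption, not a Seagate spec).
shingled_capacity_tb = 1.33   # per-platter capacity with SMR
smr_boost = 0.20              # assumed density gain from shingling

base_capacity_tb = shingled_capacity_tb / (1 + smr_boost)
print(f"Estimated non-shingled platter: {base_capacity_tb:.2f} TB")
# 1.33 / 1.20 ≈ 1.11 TB, consistent with the 1.1 - 1.2 TB guess
```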


  2. Thanks for the answer, Kevin.

    Regarding the 1st point: well, neither drive is SATA. As far as I understand one is PCIe AHCI and the other one is PCIe NVMe.

    Regarding the capacity vs. performance: Samsung does quote a difference in sequential write performance, taken straight from your review:

    • 512GB - Up to 1,550MB/s
    • 256GB - Up to 1,260MB/s

    and for the AHCI version, also taken straight from the SR review:

    • 512GB - Up to 1,500MB/s
    • 256GB - Up to 1,200MB/s

    This makes it even more astonishing that the NVMe version can often keep up with or beat its AHCI cousin. I know it's not easy to comment on drive sizes you don't have in-house, but this doesn't mean you should ignore known differences either.


  3. Excellent drive and a nice review, as always. And some comments from my side:

    - The previous SM951 is the AHCI version, not the PCIe version. Both of them use PCIe for the data transfer.

    - "speeds that were by in far the fastest" -> "speeds that were by far the fastest" (I know it's somewhat common and I've already seen it in several SR reviews, but the "in" makes no sense there)

    - You're comparing a 512 GB AHCI drive versus a 256 GB NVMe drive. That's OK since you're working with what you have. You also list the performance difference due to the smaller capacity in the drive specifications. Yet when you compare the 2 drives, you never mention that part of the performance difference is due to the NVMe drive having fewer NAND dies. Otherwise the AHCI version should never be faster and your results would actually look very strange. This in turn means a 512 GB NVMe drive would be even faster. From my point of view this is basic analysis of your results, which should not be left up to each reader.


  4. A few seem to be available across Europe: link. That's a surprisingly small number of shops, but on the other hand the drive has only been listed for 2 months now. BTW: the cheap "Intenso"-branded drives with 4 and 5 TB @ 7.2k rpm are also Toshiba. They won't tell you so, but no one else builds affordable 7.2k rpm drives with more than 3 TB.

    Edit: with 5 platters at 7.2k rpm it would be hard for the drives not to be relatively loud. Whether this matters depends entirely on the drive's surroundings and the user's preferences.


  5. The drive is definitely more vulnerable than others. I would not expect such extreme failure rates without the extreme environment Backblaze is putting them in. The high amount of vibration in those pods may very well harm the 1st 7.2k rpm drive with 1 TB platters. And this may well be the reason others hesitated for so long to try this density, and even Seagate themselves didn't use that technology for 4 TB.


  6. If you've got a mainboard which allows Intel SRT you may still be able to set this up for your new HDD. You'd need a spare 60 GB on some SSD, have the mainboard SATA ports switched to RAID (you already do) and then assign this space as cache. I haven't actually tried such SSD partitioning, but if you're buying the parts anyway you might as well try it. If you have a backup of your stuff, of course.


  7. Although the latency was only half as fast, the 750 showed improvement in the TPS benchmark with 6,311.93 TPS vs. 6,303.72 TPS by the P3700.

    This sounds weird. At 10 ms compared to 15 ms, the latency is reduced to 2/3, or is 50% better - whichever way you want to look at it. But there's no factor of 2. And calling it "only half as fast" implies that high values would be better, whereas it's actually the opposite.

    And considering the significant latency difference between these drives, the TPS benchmark scores seem about the same: 6312 vs. 6304 is a difference of just ~0.1%! What's the standard deviation across a few runs of this test? I'd be surprised if it's anything less than 1%.

    BTW: even with a hypothetical standard deviation of 0.1% it would make sense to report only 4 significant digits. I'm pretty sure your 5th and 6th digits are meaningless at best, and simply make the important numbers harder to read.
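    To put numbers on both points, here is the relative difference between the two scores, plus a small significant-digits rounding helper (my own helper, just to illustrate the 4-digit suggestion):

```python
from math import floor, log10

# Relative difference between the two TPS scores from the review.
tps_750 = 6311.93
tps_p3700 = 6303.72

rel_diff = (tps_750 - tps_p3700) / tps_p3700
print(f"relative difference: {rel_diff:.2%}")   # ~0.13%

def sig_round(x, digits=4):
    """Round x to the given number of significant digits."""
    return round(x, digits - 1 - floor(log10(abs(x))))

print(sig_round(tps_750), sig_round(tps_p3700))  # 6312.0 6304.0
```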

    Apart from that: thanks for testing and the article!


  8. Question is, is a 5k drive much more silent than a 7200 drive?

    It also depends on how loud the other components in your system (or the entire surroundings) are. Another factor is the mounting: if the drive is firmly attached to a rattling metal case, the noise is far louder than with a decoupled mounting. This does not directly answer your question, but what I want to say is this: if you can hear a 5k rpm drive, a 7k rpm drive is audibly louder. If not.. chances are pretty good that you won't hear the 7k rpm drive either.


  9. Toshiba is relatively cheap, indeed. And they even offer a "normally" priced 7.2k rpm 4 TB model, which you can't get from Seagate or WD.

    I've got a 500 GB model from 2013 in my work PC and I don't like it that much, but I don't know how representative that is. It vibrates more than the 640 GB 2-platter WD Black it replaced (sometimes it makes the entire case vibrate) and absolutely chokes on USB 2 transfers. That sounds like a weird software issue (Intel Z77 chipset on Win 8.1?) since the problem doesn't show with USB 3. Still, I wouldn't hesitate to buy another one of them for a "data grave" type disk.


  10. Alright, you definitely thought about backups :)

    A RAID 1 for SSDs: I read somewhere that modern Intel drivers allow TRIM to pass through to SSDs used with chipset RAID. I can't tell you more about this, though. If you go for an SSD RAID 0:

    - regular annual failure rates of reliable SSDs (like the MX100) are around 0.1%, so failure is pretty unlikely and way better than with HDDs

    - the SSD would by definition host the most frequently changing data, so if any component in your system would benefit from even more security than your current backup plan, it would probably be the system SSD
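    A quick back-of-the-envelope calculation of what that 0.1% annual failure rate means for the two RAID options, assuming independent failures (illustrative only):

```python
# Annual data-loss risk for a two-SSD array, assuming independent
# failures and a 0.1% annual failure rate per drive (the MX100-class
# figure mentioned above; illustrative, not a manufacturer guarantee).
afr = 0.001

raid0_loss = 1 - (1 - afr) ** 2   # either drive failing loses the array
raid1_loss = afr ** 2             # both must fail (ignoring the rebuild window)

print(f"RAID 0 annual loss risk: {raid0_loss:.4%}")   # ~0.2%
print(f"RAID 1 annual loss risk: {raid1_loss:.6%}")   # ~0.0001%
```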


  11. Regarding space: an SSD has absolutely no moving parts. There are videos on the Tube where they kick and throw the things and they still work. This is not recommended, but: you don't need a mounting slot for them. Just stuff them in anywhere they won't short against the mainboard and you're good to go. Duct-tape as you feel necessary. Further care is IMO only needed if you're moving the machine physically.

    And they're mature enough that there's really not much you'd need to "toy around" with. Just use them and never look back. A 120 or 250 GB Crucial MX100 is a solid starting point.

    Once you have the SSD for your system and programs, the performance requirements on the HDD are relaxed and you can save money there by going with a slower model. Not that any of them could be considered even remotely "fast" compared to SSDs - but faster HDDs quickly drive up the cost significantly.

    The others already touched upon the topic of backups. How serious you want to get ultimately depends on how valuable your data is to you. At the minimum I'd keep a weekly backup on a local external drive, disconnected from the PC when not in use. Your current 3 TB could do this. If e.g. it's only got a few months of runtime left, it would take years to reach that threshold if it's only used for cold backups. And if it really fails you simply replace it and mirror things again - no harm done. Programs like "DirSync" can be used to set up automatic incremental mirror-backups which run quickly once the bulk of data is transferred.
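    The core of such an incremental mirror is simple; here's a minimal sketch of the idea (DirSync's actual behavior will differ, and this handles no deletions):

```python
# Minimal incremental mirror: copy only files that are new or whose
# size/mtime changed since the last run. A sketch of what tools like
# DirSync do, not a replacement for them (no deletion handling shown).
import os
import shutil

def mirror(src, dst):
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            st = os.stat(s)
            # Skip files that already match by size and mtime.
            if os.path.exists(d):
                dt = os.stat(d)
                if dt.st_size == st.st_size and int(dt.st_mtime) == int(st.st_mtime):
                    continue
            shutil.copy2(s, d)  # copy2 preserves mtime for the next run
```

After the first full run, subsequent runs only touch changed files, which is why they finish quickly.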


  12. That HDTune benchmark is also somewhat interesting: your test shows a maximum sequential write speed of 190 MB/s, which matches the read speed and the data sheet. However, overclockers.at shows a constant write speed of 150 MB/s over a large capacity range. This is unphysical for an HDD, unless it was limited by the interface (it's not) or, as Brian said, the drive was actually writing to the reserved landing zone all the time. Can we conclude from this that the landing zone is not at the beginning of the platters, but further in - at the position where STR drops to ~150 MB/s? The read STR graph from overclockers.at shows this to be the case around 4.6 TB, or "roughly in the middle" of the platters. It also matches the average read speed of 152 MB/s very well. Apparently Seagate chose to optimize average access times by placing the landing zone in the middle, while sacrificing 40 MB/s of write speed. Sounds like a good general-purpose trade-off.
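    A toy model supports the "roughly in the middle" reading. Assuming transfer rate proportional to track radius and an inner/outer radius ratio of 0.5 (both my assumptions, not drive specs), the point where STR drops to 150 MB/s lands near half of the capacity:

```python
# Toy model: where on the platter does the transfer rate drop to
# 150 MB/s? Assumes speed proportional to track radius and an
# inner/outer radius ratio of 0.5 (assumptions, not drive specs).
v_max = 190.0     # MB/s at the outer edge
v_target = 150.0
r_inner = 0.5     # inner radius, with the outer edge at 1.0

r = v_target / v_max                  # radius where speed hits the target
# Capacity swept from the outer edge inward scales with 1 - r^2.
frac = (1 - r**2) / (1 - r_inner**2)
print(f"~{frac:.0%} of capacity, i.e. ~{frac * 8:.1f} TB into an 8 TB drive")
```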

    Furthermore: at 6k rpm (according to overclockers.at) a maximum speed of 190 MB/s is quite high and way above what drives with 1 TB platters can achieve. This suggests that without shingling the drive would have 1.1 - 1.2 TB platters. Or where else would this performance come from? Shingling "only" increases the track density by overlapping tracks, but doesn't change the linear density, does it?

    Oddball question, and potential quick-and-dirty way to see if large sequential writes bypass the cache -- what happens when you attempt to write a single file, that is larger than the cache (e.g. a Blu-ray disc image), to the disk? This should force the drive to show its true colours, and give little opportunity to mistake sequential data for random. I would also expect that backing up large numbers of largish disk images would be a valid use-scenario for a very large archive disk (meaning that if it gets indigestion doing this, it's got some problems for its core business).

    I agree - identifying such a case as sequential access, spanning many shingled blocks, should be trivial.

    @Kevin: do you know what happens during that RAID rebuild? Is the content of the valid drive simply copied to the new one, or are the contents of both compared and only the differences resolved? Does the RAID rebuild time of the Archive decrease if it's freshly formatted (like sending it a TRIM command)? And if you say that in a disk in real use there will never be any "clean" blocks - does this mean this drive should actually get TRIM commands to help the garbage collection?


  13. You need to remember though that in any scenario, with the exception of a drive maybe just out of the box, any sequential write will disrupt the data on a bordering track. So eventually things slow down because every write turns into a write, a read of the bordering track, and a rewrite of that track. There is no "safe" write activity that can happen anywhere except the landing zone.

    Unless an entire shingled block is overwritten at once. If this happens the drive can disregard its current contents immediately and happily write the new ones. As in your tests, we're talking about hundreds of GB here, which are all written sequentially. If there is that much free space left on the drive, then there have to be many entire blocks which are still "clean / safe". The OS is doing defragmentation and the drive is doing garbage collection in the background for a reason (and I like to think they achieve something by doing that).

    This is even more evident in the case of the RAID rebuild: why should there be any dirty / unsafe blocks? Simply wipe the drive prior to the rebuild (well, the NAS software should do this itself).

    From everything that is being said here I see no reason why this couldn't be done. Apparently it's not being done, as your measurements show, but until convinced otherwise I remain convinced that whatever happens there is extremely stupid. Or is some important aspect of how SMR works still missing?


  14. The landing zone is 20GB, correct. And again, as mentioned, the synthetic tests didn't show the sustained sequential drop but the single-drive Veeam and separate RAID 1 rebuild figures did. No magic there; all HDD vendors know this and will tell you as much with SMR drives.

    SMR handles both random and sequential bursts as sequential writes (hence the high 4k burst figure). Once it leaves the landing zone all bets are off.

    Thanks for your answer, Kevin. Do I understand correctly that the landing zone is the cache described in the article, and is a dedicated area on the platter(s)? Is it hidden from the user (as in SSDs), or does performance crash when one fills the last 20 GB?

    OK, now I understand the high 4k scores. But I'm still wondering about sequential writes. Not rewrites, mind you. If the drive is so slow under sustained sequential writes, it apparently can't write the "shingled blocks" at once. Instead it seems to be writing a part of one, then noticing the next write command to the same block, reading that block again and then overwriting the 1st contents and adding the 2nd data set.

    Seriously, this is ridiculously stupid for sequential accesses. How large are these blocks? Considering that one only loses a single track to create a "resync" boundary between shingled blocks, those blocks can't be all that large. A few MB maybe? Certainly small enough to fit comfortably into the 128 MB onboard cache. If not: enlarge the cache a bit. DRAM is dirt cheap compared to such an 8 TB drive. Then gather sequential writes until there's enough for a full shingled segment and write it at once. The drive may cost $1-2 more, but without the massive sequential performance hit many new usage scenarios and possible markets would open up for this drive. One can't fix rewrites this way, so there's still easily enough of a market left for the traditional drives.
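    The write-gathering firmware behavior I'm proposing can be sketched in a few lines. The 4 MB shingled-block size is purely my guess, not a Seagate figure:

```python
# Sketch of the proposed firmware behavior: buffer sequential writes in
# DRAM until a full shingled block has accumulated, then write it in one
# pass. The 4 MB block size is an illustrative guess, not a Seagate spec.
SHINGLE_BLOCK = 4 * 1024 * 1024

class WriteCoalescer:
    def __init__(self):
        self.buffer = bytearray()
        self.flushed_blocks = 0

    def write(self, data: bytes):
        self.buffer += data
        # Flush every complete shingled block in one sequential pass.
        while len(self.buffer) >= SHINGLE_BLOCK:
            self._flush_block(bytes(self.buffer[:SHINGLE_BLOCK]))
            del self.buffer[:SHINGLE_BLOCK]

    def _flush_block(self, block):
        # Stand-in for writing one whole block without a read-modify-write.
        self.flushed_blocks += 1

w = WriteCoalescer()
w.write(bytes(10 * 1024 * 1024))        # 10 MB of sequential data
print(w.flushed_blocks, len(w.buffer))  # 2 full blocks flushed, 2 MB pending
```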


  15. You show 2,700 4k random write IOPS for this drive. That's an order of magnitude faster than the enterprise drive! The best 15k rpm drives barely achieve 400 IOPS. There's no way this data was written to the platters at this speed and in the same way as regular drives do it. Either it's still in the cache, or the drive stores those writes in a clean region where it can aggregate them (and marks the region which should have been overwritten as unused). The former would simply be an improper measurement, whereas the latter would create fragmentation (which is cleaned up later on as garbage collection).

    If the latter is actually being used, I think this could be of great benefit for regular desktop users on any drive. Such usage is generally very bursty, and what really hurts are the moments when HDDs come to a crawl due to random accesses. Ideally it could be a setting switchable by the user, or the drive could detect sustained loads and revert to regular behaviour in such cases (otherwise the entire disk would soon be completely fragmented). I think one could call this "adapting lessons learned from SSDs to HDDs".

    The drive leverages an on-drive cache (roughly 20MB) to handle inbound writes

    Is this just a part of the DRAM cache? Don't regular HDDs also use this as a write cache? That's the most obvious use of this cache, besides handling the file table etc.

    And you're saying repeatedly that the drive doesn't perform well under sustained write activity. However, the 128k sequential tests showed very good performance (195 MB/s read and write). At this rate they'd quickly overpower a 20 MB write cache, so that can't explain these results. Furthermore, from how I understand the drive to work, sustained writes should pose no problem whatsoever. Take a new one and fill it up to 8 TB - easy. Otherwise the full sequential speed couldn't be achieved in the 128k sequential tests. It's the rewrites which hurt and trigger further internal reads and rewrites of entire shingled blocks (up to the next synchronization boundary). Or am I misunderstanding something here? You may refer to your result of 30 MB/s for the full VM backup. Was this measured upon the initial backup creation, or did you overwrite an existing one? If I'm right you should adjust those passages in the article, as they would give a wrong impression of the drive.

    This brings me to an interesting question: why is the RAID rebuild so slow? What is the NAS ordering the drive to do? Was the drive formatted before the rebuild? In the worst case (from the point of view of the Archive HDD) I could imagine the NAS scanning and comparing all files on both drives in the order of the file system (creating lots of random accesses), and then updating any differing data on the new drive (causing lots of rewrites). Another unfavorable variant would be to copy file by file, while somehow transferring the same fragmentation which is probably present on the source drive. Again this would trigger lots of rewrites of incomplete shingled sections.

    A version which should IMO work very well is to just copy everything, sector by sector, from the reference HDD to a formatted Archive HDD. All data would be sequential and could be grouped in large blocks, so any shingled section can be written at once, without any rewrites. I'm no NAS programmer, but this doesn't sound too hard.
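    Conceptually, that sector-by-sector rebuild is nothing more than streaming one device to the other in large chunks (device paths below are hypothetical; this is essentially what `dd` with a big block size does):

```python
# Conceptual sector-by-sector rebuild: stream the source drive to the
# target in large sequential chunks, so every shingled block on the
# target is written exactly once. Device paths are hypothetical.
def clone(src_path, dst_path, chunk=64 * 1024 * 1024):
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            data = src.read(chunk)
            if not data:
                break
            dst.write(data)  # purely sequential, no rewrites

# e.g. clone("/dev/sdb", "/dev/sdc"), roughly equivalent to:
#   dd if=/dev/sdb of=/dev/sdc bs=64M
```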

    @jasrockett: I do not see why you couldn't use the Archive for long-term storage of your photos. You write them once and that's it. There may be a read access every now and then. And when you want to edit one you'd copy it over to your PC anyway, work on it, and then transfer it back. The drive can cope with rewriting a few tens of MB occasionally. My dad does it like this: one 8 TB Archive in the PC for accessible storage of the mostly static photos, and an external one for mirroring according to his backup schedule. And if you're buying them: go straight for the 8 TB version. At 2+ TB/year you'll need the space anyway, and it's just 13% / 30€ more expensive than the 6 TB version (in Germany).


  16. the technology is standard Perpendicular Magnetic Recording (PMR) as opposed to the newer Shingled Magnetic Recording (SMR) drives that are starting to come to market.

    Brian, all I wanted to say is that the quoted statement only uses "newer" to differentiate between PMR and SMR. This could imply to some readers that SMR is inherently better, because that's what mostly happens in the computing world when something new comes along. We both know this is not the case, as both technologies will rightfully continue to coexist for some time. And even with heat assistance you can shingle your data or not, with the same performance implications. The only reason to drop non-shingled HDDs completely would be a competing technique (like some kind of non-volatile solid-state storage) achieving near-parity with non-shingled disks in terms of price per storage space.

    The note on the new Toshiba doesn't need to discuss this. I just wanted to point out a possible problem with the current formulation.