Everything posted by [ETA]MrSpadge

  1. Sorry, what states the above clearly? I can't see any connection to the 840 Evo here, except if you imply their pseudo-SLC write cache would be useless. Well.. I've recently built a machine using a 120 GB Evo and it screamed! They're not the best solution for enterprise workloads, or even for some power users, but for average use they provide IMO by far the best balance between performance and value. Back to the drive: WD tells us it tweaked the firmware, but they don't say this would be the only change. So there's plenty of room for better read heads etc. And don't worry: the drive cache may be 64 MB by now, but at full sequential speed this doesn't even last half a second, even if it's fully occupied by user data (which I doubt). MrS
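A back-of-the-envelope check of the half-second claim (the ~170 MB/s sequential speed is an assumed round number for a modern 7.2k rpm drive, not a spec):

```python
# How long can a 64 MB drive cache sustain full sequential speed?
# 170 MB/s is an assumed ballpark figure, not a published spec.
cache_mb = 64
seq_speed_mb_s = 170
drain_time_s = cache_mb / seq_speed_mb_s
print(f"{drain_time_s:.2f} s")  # ≈ 0.38 s, well under half a second
```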
  2. Never underestimate the power of the dark side! They'll analyze all the benchmarks done in reviews, in addition to anything that may make it into benchmarks in the future, and make their drive perform better in them. It's an extensive analysis and really evil business. They'll even admit to doing it.. just ask for "firmware tuning" MrS
  3. Performance-wise, three 2.5" drives win against one 3.5" drive from the same generation and comparable technology (not comparing 4.2k rpm 2.5" against 15k rpm 3.5"), unless your RAID setup is really bad. Power and hence cooling are probably pretty similar for these two options. Cost would usually favor the 3.5" drive, but that helium filling might change things. MrS
  4. The previous 4 TB Black had the dual-stage actuator as well. As I said, access times are vastly improved and are IMO a prime candidate for causing this speed-up (in addition to the cache tuning). What enabled them to find the tracks this much faster is a different question.. maybe "just" regular read head improvements? MrS
  5. 667 GB 2.5" platters?! Wow, I didn't expect this without any prior announcements! Using the same technology they should be able to pack ~1.3 TB/platter into 3.5" by now. Assuming it's not the vibration at the outer radii which keeps them from increasing the density further. Any word on how they accomplished this? Shingled recording, maybe? MrS
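The ~1.3 TB figure follows from simple area scaling (a rough estimate that ignores the unused inner hub region and any difference in usable radius):

```python
# Same areal density, bigger platter: capacity scales with platter area,
# i.e. with the diameter squared. Crude estimate; hub region ignored.
capacity_25in = 667  # GB per 2.5" platter, per the announcement
area_ratio = (3.5 / 2.5) ** 2
capacity_35in = capacity_25in * area_ratio
print(f"{capacity_35in:.0f} GB")  # ≈ 1300 GB, i.e. ~1.3 TB per 3.5" platter
```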
  6. I think they fixed whatever problem the previous 4 TB Black had with random access, in addition to what they did to improve caching efficiency. Just look at how good the 2 TB Black is compared to the old 4 TB one in random workloads. This even translates into real applications, to the point that even there the 4 TB Black is mostly slower than the age-old 2 TB model. The new model looks as superior as a WD Black should be! What about the platter density of the smaller models? It will be difficult to hit 1 TB with 800 GB platters.. MrS
  7. Seagate Desktop SSHD Review Discussion

    Ha, the specs of the 4 TB model are already listed on their homepage. It says 4 platters, yet maximum STR is down from 210 MB/s to 180 MB/s, which suggests it's actually 5 x 800 GB. Side note: look at the access times: 1 TB: 8.5 ms, 2 TB: 9.5 ms, 4 TB: 12 ms. This seems to support my suspicion that the additional vibration from more platters at higher rpm is what keeps them from releasing 4-platter 1 TB-per-platter drives. And not even WD's Black with dual-stage actuators can get around this.. its first-gen access time at 5 x 800 GB is significantly worse than that of previous models. MrS
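The 5 x 800 GB guess can be sanity-checked with a simple model: sequential throughput scales roughly with linear bit density, i.e. with the square root of the areal density per platter (a simplification that assumes equal rpm and track layout):

```python
import math

# If 210 MB/s corresponds to 1000 GB platters, what should 800 GB platters
# deliver under the sqrt(areal density) approximation?
str_at_1000gb = 210.0  # MB/s
predicted_str_800gb = str_at_1000gb * math.sqrt(800 / 1000)
print(f"{predicted_str_800gb:.0f} MB/s")  # ~188 MB/s, close to the quoted 180
```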
  8. WD Green 3TB Observations

    A few or even a few tens of bad sectors aren't bad for an older drive (in a non-mission-critical environment). I'd only consider replacing a drive if their number starts to increase. Having said that.. the only two drives I've experienced doing this are a WD Green 640 GB with just a few hundred power-on hours (but >3 years since buying it) at work and my sister's external WD Elements 1.5 TB - also a Green inside; the drive was infrequently used with mostly static content and just out of warranty. Both went over 1000 defective sectors when trying to format them and became really slow - so not even useful for unimportant data any more. Of course other drives have failed too, but these were usually really old, with the usage to match. And none showed this failure mode. Well.. it's just limited personal experience, but it fits "WD Green Observations". MrS
  9. Seagate Desktop SSHD Review Discussion

    If the NAND just becomes inactive once it's worn out, that would be the ideal solution (apart from making it replaceable via a small socket or slot), but I'd like to have an official statement regarding this issue. If I were you I'd ask the manufacturers that question over and over again for every new hybrid model, until I'd hopefully get an answer. You could also point it out in your reviews.. although this would make it less likely you'd get new hardware in the future :/ Regarding the 4 TB.. well, the number of shops listing it increased from 4 to 9 in the few days since my last post, so a launch may be imminent. But the dreaming part probably referred to it using 1 TB platters instead of 800 GB, didn't it? BTW: the price premium on these seems to be quite reasonable, about 30€ independent of capacity. This would make the 4 TB model far superior to e.g. the WD Black, which costs ~40€ more than the 4 TB hybrid Seagate in that price comparison. MrS
  10. Seagate Desktop SSHD Review Discussion

    In Germany a 4 TB model appears in the price lists, although it's not available yet. Apparently it doesn't change anything (still only 8 GB cache), but keeps the rather small price premium over regular models. I wonder if it may finally be the first 4 TB drive with 1 TB platters at 7.2k rpm. BTW: Brian, did you ever get any statement about what happens to these drives once the MLC write cycles are used up? I know that's not an issue for regular SSDs.. but a cache receives more writes, which is especially true for such a small one. And it can't easily be replaced.. so I wonder: what happens? Will I be able to continue using the drive like a regular one (fine), or would I have to throw it away? MrS
  11. WD Green 3TB Observations

    Thanks for posting, Brian. I can understand your point here, and even support it: to see meaningful failure rates within 1 - 2 years you'd have to stress-test at least 1000 units. For every model.. which should make it pretty clear that this is just impossible. But Marcin's point is also very important: if an HDD has a significantly higher failure rate than others, all its other characteristics pale in comparison. Especially now with the slow evolution of HDDs, which keeps older models useful for far longer - if they haven't failed yet. In the past SR had the reliability database to get around the problem of "too few samples". Personally I haven't used it in years.. not sure what state it's in. MrS
  12. You're almost right here. What's missing is that copying small files, even from the same directory, will automatically include some random access too. The files being read may be spread across the disk, they may be written to different locations, filling up holes in the current file structure (wherever the OS sees fit), and the MFT may be accessed. That's why multi-threaded copying at higher queue depths still improves throughput: the disk can arrange the accesses better through NCQ and reduce access times. BTW: if the folders you're copying are often the same, I'd look into incremental sync'ing with e.g. DirSync (nice freeware). Not sure it can increase QD, but it certainly saves time not to transfer untouched files again. And I'm not a fan of buying large SSDs for storage; that's often a waste of money (IMO). I'd rather use the SSD for temporary storage and as an automatic cache. If you're concerned with many small files, an SSD would be ideal. And if the SSD cache also buffers writes you may see a massive speed increase. The cache capacity would also be huge compared to the amount of storage required for small files. MrS
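A minimal sketch of what such a multi-threaded copy could look like (the directory layout and function name are made up for illustration; keeping several copies in flight gives NCQ more requests to reorder):

```python
import shutil
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor

def copy_tree_threaded(src: Path, dst: Path, workers: int = 4) -> int:
    """Copy all files under src to dst with several copies in flight."""
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(p: Path) -> None:
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)  # copy data and timestamps

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, files))  # consume to surface any exceptions
    return len(files)
```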
  13. The higher queue depth would probably benefit random transfers much more than sequential ones. Apparently sequential 4k access is really uncommon.. MrS
  14. Unityole, that makes more sense to me now. Are you already doing incremental backups? Not needing to transfer files is of course the fastest way. It would also help a lot to pack them into simple zip files.. although in that case they have to be ready for archiving and no longer frequently worked on. MrS
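Packing a finished project folder into a zip can be sketched like this (folder names and the helper are hypothetical; once archived, the many small files travel as one large sequential transfer):

```python
import zipfile
from pathlib import Path

def pack_folder(src: Path, archive: Path) -> int:
    """Pack every file under src into one zip archive; returns the file count."""
    count = 0
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in sorted(src.rglob("*")):
            if p.is_file():
                zf.write(p, p.relative_to(src))  # store paths relative to src
                count += 1
    return count
```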
  15. I agree with continuum here. And.. don't try to overtweak things. MrS
  16. For a long time manufacturers have preferred 5x800 GB over 4x1000 GB at 7.2k rpm. Even Seagate does 3x1000 at 7.2k rpm and 4x1000 only at 5.9k rpm. It could well be that the vibration generated by spinning multiple platters currently becomes too much at 4x1000 and high rpm. This would lead to pretty bad access times, which is something the Black and RE can't afford. Well, actually the 1st-gen 4 TB Black had significantly worse access times than its 2 TB predecessor. Uhm.. no! At any given point in time data is only read/written from/to a single platter in traditional HDDs. No interleaving or parallel access happens, since they've only got 1 motor for the actuators anyway. In the 90's Seagate tried several motors, but it was far too complex and expensive and quickly overtaken by simpler drives with denser platters. It's a vague description, for sure, but I have no problem understanding it as "the higher processing power in the new controller enables more accurate tracking, which improves the drive's performance". It will be interesting to see if they finally got back to the random performance of the Blacks with 500 GB/platter. BTW: that 48% must be something other than STR, which obviously can't change much with just a switch to AF. @Unityole: sequential 4k access? Any software doing this (and failing to bundle those requests into larger blocks) should probably be kicked right into its.. code. Random 4k at QD=1, on the other hand, happens sometimes in the real non-server world. MrS
  17. So, as always, no word on platter density and count. But they packed more processing power in there and hence increased performance drastically. Which, on the other hand, means their current Blacks for hundreds of $ are horribly limited by their controllers and could actually perform much faster. What a massive fail on their end! May I suggest that most of the increased performance actually stems from finally going from 500 - 800 GB platters to 1 TB platters? The increased tracking accuracy they're talking about is probably just necessary to enable this density at 7.2k rpm. MrS
  18. Seagate Ships 1MM SMR Drives Discussion

    This depends on whether you carry it around or not. If this is used to increase the capacity of 3.5" low-rpm mass-storage drives (for NAS, backup, etc.) the disk placement could be very static. MrS
  19. Seagate Ships 1MM SMR Drives Discussion

    That depends on how large the independent blocks are. The PSU can buffer at least a few tens of ms.. MrS
  20. Seagate Ships 1MM SMR Drives Discussion

    This technique looks very suitable for archiving large media files - which is what the bulk of today's massive storage is used for anyway. To use these drives efficiently I think the shingle bands should be exposed to the OS/driver. Some possible optimizations that come to mind:

    - relax defragmentation by the OS
    - bundle writes more liberally before pushing them to the disk
    - rather write to a new band than squeeze data into an existing hole in an almost-filled band (and cause lots of re-writes)
    - upon file modifications: rather than always overwriting starting from where the modification happened (in the worst case overwriting the entire band), start at the beginning of a band if the modification falls within the first 50% of the band. This should halve the average performance hit
    - align logical block sizes with band sizes to restore write speeds to almost-normal levels, trading in some capacity (OK for large files)

    And thinking about this.. what's the state of linear overlap between bits in HDDs? I've heard that in BluRays they're already overlapping 8 bits sequentially and triggering / coding the data on the signal flanks. This requires more states than 0 and 1 to be distinguishable. Is this already being done in HDDs? Can it be done, or is there some fundamental limitation? It could increase the linear density further, which yields nice STR increases. The drawback would probably be more fragile data and of course a more complicated read-out. But if it can be handled.. MrS
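The re-write penalty behind these ideas can be illustrated with a toy model (band size normalized to 1; real SMR firmware is far more involved): modifying data at offset x inside a band forces everything from x to the band's end to be rewritten, which averages out to half a band for uniformly random offsets.

```python
import random

random.seed(42)  # make the simulation repeatable
band = 1.0
trials = 100_000
# Rewrite cost for a modification at random offset x: the tail from x to 1.
avg_rewrite = sum(band - random.random() for _ in range(trials)) / trials
print(f"{avg_rewrite:.2f} bands")  # ≈ 0.50 bands rewritten on average
```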
  21. Serious HD problem

    And for the next time: don't trust important data to any single drive - backup is your friend! MrS
  22. You can also set quotas for NTFS folders. I have never used them, but this could help you limit yourself to 20 GB (or whatever turns out to be appropriate) of temporary / downloaded files. On the other hand, I'm not sure this throttle is really needed. The OS will complain about a full drive but continues to work. And the downloads stop anyway. There are also tools which monitor things like disk usage and can drop you an email if a given threshold is reached. Some freeware is probably available. MrS
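A disk-usage watchdog along those lines is trivial to sketch (the default path and threshold are made-up examples; a real tool would add the e-mail part):

```python
import shutil

def drive_nearly_full(path: str = "C:\\", threshold: float = 0.9) -> bool:
    """True if the volume holding `path` is more than `threshold` full."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total > threshold
```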
  23. laptop drives for RAID 0

    Well.. most reviewers are not impressed by hybrid drives because they don't increase (single-run) benchmark scores all that much. And you often find comments like "In practice, the difference over a regular HDD was huge. Just not as large as with a full SSD." If given the choice I'd rather have one hybrid drive than two regular ones in RAID 0. The largest performance "problem" with the hybrids is that they usually don't cache writes and that every now and then you're reading uncached data. I don't expect this situation to change in any meaningful way with RAID 0. And if I had 2 drive bays in my laptop I'd rather go for one moderately priced SSD, like the 120 GB Samsung 840 Evo, plus a 1 TB regular HDD than for 2 HDDs (hybrid or not, RAID 0 or not). MrS
  24. @Rod: Yes, the Evo and Pro are pretty different drives, as their price difference reflects. Hence I'd rather go with the vanilla 840 than the Pro, if the Evo is not yet available (in Germany shops expect to get them within 1 to 2 weeks). The benefits of the Pro are higher performance and longer write endurance, neither of which matters for typical desktop / laptop use: the performance difference should be unnoticeable without a stopwatch, and depending on your laptop you might already be CPU-bound with slower SSDs. Or you might be limited to SATA2, which would make both drives perform similarly. As for write endurance: it's absolutely safe at regular desktop workloads, and even more so at 500 GB capacity! If you can afford the Pro it's surely a good drive.. but I don't think it makes any sense to spend that money over a vanilla / Evo, even if you have it. Don't bother with partitioning unless you want a D: drive anyway. Just never fill the one partition / the drive completely.. I'd say leave at least 10 GB free. Performance will be somewhat reduced compared to an empty drive, but you're not suddenly going to write tens of GB of very small files randomly to the drive, are you? Regarding the capacity choice: I don't think it makes all that much sense to store large videos on SSDs either. I'd rather archive them on an external 2.5" HDD, which could easily be taken along with the laptop. For the price difference between a 250 and a 500 GB SSD you could easily afford another external 1+ TB HDD, in addition to the 500 GB you'll already have after the upgrade. MrS
  25. NTFS also has compression, although it only works on 64 KB chunks at a time and has to be quick, i.e. it can't compress as much as WinRAR or alternatives. MrS
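The effect of small-chunk compression is easy to demonstrate with any general-purpose compressor (zlib here stands in for NTFS's own LZ variant; the data is synthetic): restarting the compressor every 64 KB forfeits redundancy across chunk boundaries and adds per-chunk overhead.

```python
import zlib

# Highly repetitive input, so cross-chunk redundancy matters.
data = b"some repetitive log line, over and over again\n" * 20_000
CHUNK = 64 * 1024

whole_file = len(zlib.compress(data))
per_chunk = sum(len(zlib.compress(data[i:i + CHUNK]))
                for i in range(0, len(data), CHUNK))
print(whole_file < per_chunk)  # restarting per 64 KB chunk compresses worse
```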