Search the Community

Showing results for tags 'smr'.




Found 2 results

  1. Just a suggestion for an extra thing to look into when next benchmarking an SMR drive. I know everyone says "don't use SMR in RAID" because of bad rebuild times - such as the ~57 hrs vs ~20 hrs from the Seagate 8TB Archive drive review. But what if you built an SMR-based RAID array and used PMR drives as the rebuild drives? With no technical knowledge here, this would seem to fix a few issues (and may cause new ones??).

     Pros:
     • Cheaper entry through the use of SMR drives for the initial RAID ??
     • Short rebuild times - in line with PMR drives

     Cons:
     • ??
     • Potential for decreased future RAID performance due to the mixed drive composition??

     Thus I think it'd be great if we could look at testing this in the next SMR drive review. Say an initial 4-drive RAID 5 build: whack a bunch of data on it, test performance, yank a drive, rebuild with an SMR / PMR drive, retest performance and compare. Also compare to RAID 5 with PMR drives and a PMR rebuild (the standard setup); a rough sketch of that test flow follows below. The potential of lowering the entry cost could mean a larger RAID for many people without blowing the budget, whilst still being protected against long rebuild times? Thoughts? (Happy to be wrong... I'd just like to see it proven rather than taken as given...)
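     A minimal sketch of that test flow, assuming Linux software RAID built with mdadm - the device names are hypothetical placeholders, and the data-fill and benchmark steps are left as comments:

     ```python
     import subprocess
     import time

     # Hypothetical device names - substitute your own. /dev/sd[b-e] form the
     # initial 4-drive RAID 5; REPLACEMENT is the drive used for the rebuild
     # (point it at an SMR or a PMR drive to compare the two cases).
     MEMBERS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]
     REPLACEMENT = "/dev/sdf"
     ARRAY = "/dev/md0"

     def run(cmd):
         print("+", " ".join(cmd))
         subprocess.run(cmd, check=True)

     def rebuild_in_progress():
         # /proc/mdstat reports "recovery" while a rebuild is running
         with open("/proc/mdstat") as f:
             return "recovery" in f.read()

     # 1. Build the initial array (all-SMR or all-PMR, depending on the test leg)
     run(["mdadm", "--create", ARRAY, "--level=5", "--raid-devices=4"] + MEMBERS)

     # ... fill the array with data and run the baseline benchmarks here ...

     # 2. Simulate a failure by kicking one member out of the array
     run(["mdadm", "--manage", ARRAY, "--fail", MEMBERS[0]])
     run(["mdadm", "--manage", ARRAY, "--remove", MEMBERS[0]])

     # 3. Add the replacement drive and time the rebuild
     start = time.time()
     run(["mdadm", "--manage", ARRAY, "--add", REPLACEMENT])
     time.sleep(5)  # give md a moment to kick off the recovery
     while rebuild_in_progress():
         time.sleep(60)
     print(f"Rebuild took {(time.time() - start) / 3600:.1f} hours")
     ```

     Running the same script twice on otherwise identical arrays - once with an SMR drive as the replacement, once with a PMR drive - would give the rebuild-time comparison directly.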
  2. I recently bought a new 8TB Archive disk (ST8000AS0002). Unfortunately, writing large amounts of data to this disk almost always results in timeouts: the write process stalls for minutes, and eventually the OS decides the disk is no longer accessible, resets the SATA bus, or even deactivates the SATA port (until the next reboot). This of course results in corrupted files. I have already changed cables and the disk's location in my PC, and actually had the disk itself replaced too. SMART information and self-tests of the disk do not show any failures or errors.

     I know the design of SMR drives requires an internal reorganization when large amounts of data are being written, but I would have expected this to result in slower but somewhat constant write rates, not write stalls lasting more than a minute. Is this behaviour intentional? If so, is there a maximum allowed stall time that I can configure somewhere? Thank you!

     Hardware: MSI-7817 (Haswell chipset), i5-4570 CPU, 8GB RAM
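     For what it's worth, the stalls can be made visible with a simple timed-write loop. A minimal sketch, assuming a Linux host and a placeholder mount point; O_SYNC forces each write to actually reach the disk, so a stall shows up as a slow write() call:

     ```python
     import os
     import time

     # Hypothetical paths and sizes - adjust for your system.
     TARGET = "/mnt/archive/stall_test.bin"   # a file on the ST8000AS0002
     CHUNK = 64 * 1024 * 1024                 # 64 MiB per write
     TOTAL = 200 * 1024 ** 3                  # write 200 GiB in total
     STALL_THRESHOLD = 5.0                    # seconds before we call it a stall

     buf = os.urandom(CHUNK)
     written = 0
     # O_SYNC makes each write block until the data is on the drive, so the
     # drive's internal reorganization pauses become directly measurable.
     fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_SYNC)
     try:
         while written < TOTAL:
             start = time.time()
             os.write(fd, buf)
             elapsed = time.time() - start
             written += CHUNK
             if elapsed > STALL_THRESHOLD:
                 print(f"stall: {elapsed:.1f}s at offset {written / 1024**3:.1f} GiB")
     finally:
         os.close(fd)
     ```

     As for a configurable maximum stall time: on Linux the closest knob is probably the per-device command timeout in /sys/block/sdX/device/timeout (30 seconds by default). If the drive's reorganization pause exceeds that, the kernel's error handler steps in with exactly the bus resets described above, so raising the value may keep the OS from giving up on the disk.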