123stevea

Member
  1. Some comments - ALL disk device interfaces are block oriented, and all disk I/O is in terms of full blocks (a disk block has been 512 bytes for decades, though in the past several years drives with 4 KiB blocks have appeared, with corresponding OS changes). Modern SSDs must erase an entire PAGE at a time; PAGEs are typically on the order of 0.5 to 4 MB (perhaps several thousand 512 B blocks), and a page erase takes a considerable amount of time (milliseconds). FLASH bits generally cannot be re-written without erasing the page: once a bit is programmed from a one (the erased state) to a zero (the programmed state), the only way to get the one back is to erase the entire page. In addition, the modern high-density FLASH chips used in SSDs come from the manufacturer with several percent of bits already failed (along with some 'extras' and some on-chip masking features that can only handle a limited amount of failed bits). Bits continue to fail during operation, and as the cells 'wear' - after a few hundred to a few thousand write cycles - they fail at a high rate. These are peculiarities of the media. The design of an SSD controller is a study in making reliable systems from unreliable & awkward components.
To that end, blocks inside an SSD can be categorized as 1/ free (clean, erased, ready to write), 2/ in-use (written with data), 3/ dirty (written, and then either subsequently TRIMed or marked dirty when the same LBA is re-written), or sometimes 4/ failed (enough FLASH bits have failed to make the block unusable). The controller maintains a MAP from every in-use LBA to an on-FLASH physical block address (PBA). This LBA->PBA MAP changes every time an LBA is written. The SSD controller cannot work out on its own which PBAs are actually unused (potentially 'dirty') without information from the OS. For example, when you delete a 1000-block file, your OS filesystem (which the controller does not understand) may be modified by re-writing a directory-structure block and a few bitmap blocks, but it doesn't generally zero the 1000 file blocks or otherwise tell the drive they are no longer needed. Instead the OS *may* send a TRIM command declaring those 1000 blocks unused (transitioning them from in-use to 'dirty'), after which reads of those LBAs generally return zeros.
When a particular LBA has no MAP entry (it has never been written, or has been TRIMed), a read finds no LBA->PBA mapping and the drive returns zeros (nearly all modern drives do). A write to that LBA takes a PBA from the free list, writes the data there, and creates a new LBA->PBA MAP entry; this transitions that PBA from free to in-use. When an LBA already has a MAP entry, a read returns the stored data, and a write a/ writes the data to some PBA from the free list and points the MAP at that new PBA, and b/ removes [unMAPs] the old LBA->PBA entry. This transitions the new PBA from 'free' to 'in-use' and the old PBA from 'in-use' to 'dirty'.
As the drive fills with 'dirty' and 'in-use' blocks and relatively few free blocks remain, the controller must perform garbage collection to allow new writes (i.e. to grow the free list). Generally the controller finds a PAGE with many dirty blocks and few in-use or free blocks and prepares to erase it. To do that it must first copy off (read, write to another page, and re-MAP) all of the in-use blocks; then the FLASH page is erased (at a large time expense). A toy sketch of this MAP / free / dirty / garbage-collection bookkeeping appears after this post.
This problem, where one more write forces the controller to copy, clean and erase a page, is characterized as 'write amplification': on a nearly full SSD, a single block write effectively causes many physical writes (the copy-off plus a page erase). Write amplification is a serious performance problem as the drive fills with in-use blocks (a full disk), and it increases drive wear as well. For this reason many drives are built with "overprovisioning". You may have noticed that SSDs have peculiar accessible sizes: your nominal 512 GB drive may actually expose only 500 GB or 480 GB to the bus. Since the 480 GB drive can only ever have 480 GB of blocks 'in-use', the remaining 32 GB (512 - 480) of blocks can only be in the 'free', 'dirty' or 'failed' states. Having a large pool of blocks that are neither in-use nor failed lets the controller garbage collect much more effectively with far less write amplification, improving performance. You can also create an unused partition and TRIM it, to add extra user-level overprovisioning. (A rough back-of-envelope for this effect is sketched after this post.)
An SSD controller also has to even out "wear" across blocks; in high-density FLASH each bit can only be written a few hundred to a few thousand times before failing! The controller has another very important function - it manages FLASH errors. Brand-new FLASH chips are extremely high density and come from the factory with several percent of cells failed!! Cells continue to fail as the drive is used, and as the wear limit is approached the failure rate increases. There are on-chip features that allow a limited amount of masking of bad bits, but the controller must also manage unrecoverable units of memory. As errors accrue the amount of overprovisioning declines (the controller marks units of FLASH as inaccessible and just uses the rest). Some enterprise SSD controllers drop into a read-only mode once the wear or write count exceeds the OEM limit.
== There are, generally speaking, two OS policies with respect to TRIM. Most typically, the OS filesystem code can (optionally) TRIM blocks as they are released: the moment you delete a file, the OS will, for example, write the directory-structure and bitmap blocks and TRIM the no-longer-needed file blocks. The alternative policy is for the OS to periodically (daily, weekly) TRIM all the space within a partition that is not part of the active filesystem block set. TRIM commands do use some bus (e.g. SATA) bandwidth, so scheduling this for an off-time may have value in some high-demand server situations. Also, some early SSD controllers behaved rather poorly with TRIM commands (they would sync all I/O around the TRIM, causing a bottleneck & poor performance). I don't use Windows, and aside from an understanding that Win8 and later automatically recognize SSDs, that NTFS & the FAT family of filesystems can support as-released TRIM, and that the 'defrag' utility (and some disk optimize utility) has been overloaded to issue TRIM - I have no idea what policy options exist. On Linux and most BSD variants the admin has explicit control: all Linux-kernel-supported filesystems, and AFAIK all BSD-supported filesystems, can perform 'as released' TRIM. One of the SSD controller chip manufacturers was spewing some marketing slime several years ago claiming that you no longer needed TRIM with their magical drive controller. That is marketing hyperbole (aka a lie).
Their chip would compress small sub-units of blocks (which makes the compression inefficient), manage the FLASH space in these smaller units (extra overhead), and then use the little bit of space saved as extra over-provisioning (assuming your file blocks were sufficiently compressible). Extra (probabilistic) overprovisioning is a good thing for performance, but it is NOT a substitute for TRIM. To be fair, this chip-design house, now part of Seagate, made good SSD controller chips, but their marketing was dishonest IMO. That's why real-world tests, like those on this website, are so important. If you are writing/re-writing a significant fraction of your disk size per day, that might push you toward an "as released" TRIM policy; if you need optimal bus performance for I/O, then periodic TRIM during off-hours might be better. The distinction has become less important as drive sizes and bus bandwidth have increased. In any case, DO use TRIM - it is a 'must' for performance and reduced drive wear. You have no direct control over how/when garbage collection occurs within the drive.
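
A minimal Python sketch of the mechanics described in the post above - the LBA->PBA MAP, the free / in-use / dirty block states, TRIM, and garbage collection. This is a toy model for illustration only; the names (SimpleFTL, BLOCKS_PER_PAGE) are made up here, and real controller firmware is far more involved.

# Minimal, illustrative flash-translation-layer (FTL) model of the MAP /
# free / dirty / garbage-collection bookkeeping described above.  It is a
# toy sketch, not how any real controller firmware works; all names here
# (SimpleFTL, BLOCKS_PER_PAGE, ...) are invented for illustration.

FREE, IN_USE, DIRTY = "free", "in-use", "dirty"
BLOCKS_PER_PAGE = 4              # real erase units hold thousands of blocks

class SimpleFTL:
    def __init__(self, num_pages):
        # state of every physical block address (PBA), grouped into erase pages
        self.state = {(p, b): FREE
                      for p in range(num_pages) for b in range(BLOCKS_PER_PAGE)}
        self.data = {}           # PBA -> stored block contents
        self.map = {}            # LBA -> PBA, only for written, un-TRIMed LBAs

    def read(self, lba):
        # an unmapped LBA (never written, or TRIMed) reads back as zeros
        pba = self.map.get(lba)
        return self.data[pba] if pba is not None else b"\x00"

    def write(self, lba, block):
        new_pba = self._free_pba()
        self.data[new_pba] = block
        self.state[new_pba] = IN_USE          # free -> in-use
        old_pba = self.map.get(lba)
        if old_pba is not None:
            self.state[old_pba] = DIRTY       # overwritten copy: in-use -> dirty
        self.map[lba] = new_pba

    def trim(self, lba):
        # the OS declares this LBA unused; its old copy becomes garbage
        pba = self.map.pop(lba, None)
        if pba is not None:
            self.state[pba] = DIRTY

    def _free_pba(self):
        for pba, st in self.state.items():
            if st == FREE:
                return pba
        self._garbage_collect()               # no free blocks: must reclaim a page
        return next(pba for pba, st in self.state.items() if st == FREE)

    def _garbage_collect(self):
        # pick the page with the most dirty blocks, relocate its live blocks,
        # then erase the whole page back to FREE (the costly step on real flash)
        def dirty_blocks(p):
            return sum(self.state[(p, b)] == DIRTY for b in range(BLOCKS_PER_PAGE))
        victim = max({p for (p, _) in self.state}, key=dirty_blocks)
        survivors = []
        for b in range(BLOCKS_PER_PAGE):
            pba = (victim, b)
            if self.state[pba] == IN_USE:
                lba = next(l for l, q in self.map.items() if q == pba)
                survivors.append((lba, self.data[pba]))
                del self.map[lba]             # will be re-mapped by the copy below
        for b in range(BLOCKS_PER_PAGE):      # page erase
            self.state[(victim, b)] = FREE
            self.data.pop((victim, b), None)
        for lba, contents in survivors:       # copy-off = write amplification
            self.write(lba, contents)

if __name__ == "__main__":
    ftl = SimpleFTL(num_pages=3)
    for i in range(8):
        ftl.write(0, bytes([i]))   # re-writing one LBA keeps leaving dirty copies
    ftl.trim(0)
    print(ftl.read(0))             # b'\x00' after TRIM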
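
How much overprovisioning helps can be put in rough numbers. The back-of-envelope below assumes, pessimistically, a logically full drive with uniformly spread writes and a garbage-collected page holding the average fraction of live data; under those assumptions write amplification is roughly 1 / (1 - u), where u is the advertised-to-raw capacity ratio. It shows the trend only, not any vendor's actual behaviour.

# Crude back-of-envelope for how overprovisioning limits write amplification.
# Simplifying (pessimistic) assumptions: the drive is logically full, writes
# are spread uniformly, and a garbage-collected page holds the average
# fraction of still-live data.  Freeing one page of B blocks then yields
# (1 - u) * B slots for new host data but costs u * B copy-writes, where
# u = advertised capacity / raw flash capacity, so WA is roughly 1 / (1 - u).
# Real controllers (greedy victim choice, hot/cold separation, TRIMed space)
# do better; this only shows the trend, not any vendor's actual behaviour.

def write_amplification(advertised_gb: float, raw_flash_gb: float) -> float:
    u = advertised_gb / raw_flash_gb      # fraction of flash that can hold live data
    return 1.0 / (1.0 - u)

for advertised in (500, 480, 440, 400):   # different ways to sell 512 GB of raw flash
    wa = write_amplification(advertised, 512)
    print(f"{advertised} GB advertised on 512 GB of flash -> WA roughly {wa:.1f}x")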
  2. This site has great router (& NAS) reviews, including I/O rates to USB 3 & eSATA attached drives. The Asus AC2900 and RT-AC68U look pretty nice to me, but read the full reviews. https://www.smallnetbuilder.com
  3. Network drives and multiple users?

    You'll find a lot of good reviews/data here. http://www.smallnetbuilder.com/
  4. Your disk has a raw contiguous read speed of ~100 MB/sec, so there is no way you are getting anything close to that rate through a filesystem with all the seeks - more likely 35-45 MB/sec on a good day. On top of that you have the overlap with network performance. What is probably happening is that the files have already been read and are sitting in the disk buffer cache when you read them back, so the read comes from the buffer, is much faster, and is limited only by the Ethernet speed. When you write, there is no buffered-data advantage, and the disk speed through the filesystem is the limiting factor (a rough slowest-stage illustration follows below). Also note that SMB write performance to network servers is sometimes problematic due to misconfiguration - Google it.
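
To put the read/write asymmetry in rough numbers: a pipelined copy runs no faster than its slowest stage. The figures below are illustrative assumptions (gigabit Ethernet at roughly 117 MB/sec of payload, the 35-45 MB/sec filesystem rate mentioned above, and an arbitrary RAM-cache rate), not measurements.

# Rough model of why reads from a network share can look faster than writes:
# a pipelined copy runs no faster than its slowest stage.  All numbers are
# illustrative assumptions, not measurements.

GIGABIT_ETHERNET = 117.0   # MB/s, approximate payload limit of 1 GbE
DISK_THROUGH_FS  = 40.0    # MB/s, seek-heavy filesystem access (35-45 in the post)
BUFFER_CACHE     = 3000.0  # MB/s, re-reading a file already in the server's RAM

def pipeline_rate(*stage_rates):
    """End-to-end throughput is limited by the slowest stage."""
    return min(stage_rates)

print("cached read over the network:", pipeline_rate(BUFFER_CACHE, GIGABIT_ETHERNET), "MB/s")
print("write to the server's disk  :", pipeline_rate(DISK_THROUGH_FS, GIGABIT_ETHERNET), "MB/s")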