srue

Member
  • Content Count: 5
Profile Information

  • Location
    Salem, OR
  1. Yesterday was my friend's birthday, and I forgot to get him a gift. Winning this will help make me feel better!
  2. Cool! Sounds like fun. Thanks for doing a giveaway!
  3. srue

    Random thoughts on cache

    This is only a problem if you are unable to pass the price difference on to the consumer. In that case, you could simply offer a bigger-cache model at a price premium. This makes sense to me. The question then is whether the perceived advantage of a bigger cache will outweigh the deterrence of the higher sticker price. Actual performance never even enters the equation for the average consumer (what a low opinion we have of this hypothetical average consumer). Makes me wonder why manufacturers even bother improving performance at all. All they should have to do is throw big numbers on the spec sheet and wait for the lemmings to hand over their cash. In any case, this discussion has been enlightening. Thanks for the comments, everyone.
  4. srue

    Random thoughts on cache

    Excellent points. I am certain that the OS cache does not completely replace the HD cache; the HD cache, after all, can have model-specific enhancements. However, I would still argue that there is a great deal of overlap, and the non-overlapping functions probably do not require large amounts of RAM. I would love to see tests on a 64 MB or 128 MB cache hard drive. I'd be particularly interested in the difference in performance between cache sizes (from 8 MB to 16 MB up to 128 MB) with the OS cache turned off.

    Good point about the marketing advantage. Some people still refuse to buy non-native SATA solutions (or whatever random feature they find crucial: NCQ, SATA300, etc.), even if the drives perform perfectly well. I imagine there will be some people who buy on cache size alone. Fortunately we have excellent sites like SR, so we don't have to buy on mere stats.

    Finally, while I agree that the risk of data corruption from power loss increases with write cache size, the extra memory could be limited to read caching, or more likely some balance of risk and performance could be found. Consumers are already somewhat acclimated to this risk, since it is already present in the OS cache. Larger write caches at the HD level might represent only a small increase in actual risk.
  5. OK, long-time reader, first-time poster. Recently I was wondering to myself why HD manufacturers do not offer a line of drives with massive caches. I'm thinking along the lines of 32, 64, or even 128 MB.

    The first reason that jumped out at me was cost. Larger buffers would clearly increase the cost of the drives: 32 MB of RAM must cost more than 16 MB, and 64 MB would cost even more. But how much more? A normal 1 GB PC3200 DIMM costs about $100. A typical two-sided DIMM has 16 memory chips of 64 MB each, so each chip costs roughly $6.25 at retail. It would presumably be even cheaper straight from the manufacturer and not attached to a PCB. I would happily pay $6.25 more for a hard drive with a 64 MB cache.

    But it gets better: the memory doesn't even have to be as fast as PC3200. As far as bandwidth goes, it only has to saturate whatever bus connects the HD, and SATA only gets up to 300 MB/s. Even good old PC100 beats that. Latency is another concern, but again, PC100 is still an order of magnitude ahead of most hard drives.

    Could it be that larger caches produce an unacceptable penalty for cache misses? 64 MB of cache takes longer to search than 16 MB. But it seems that manufacturers could minimize this by keeping track of what is in the cache; then the drive would only have to scan its allocation table (or whatever) to determine whether the requested data is present. Granted, the table for a 64 MB cache would be bigger than for a 16 MB cache, but it seems to me this entire lookup could be done while the read heads are still moving into position.

    So here's my next best guess: the OS disk cache performs most of the function that a HD cache performs (hereinafter, "OS cache" refers to caching in system RAM and "HD cache" refers to caching on the HD PCB). Using Windows as an example, the OS already creates a sizeable disk cache depending on how much RAM you have. A typical system might have 1 GB of RAM, and it would not be unusual for several hundred MB of that to be used as disk cache if there is plenty of free memory. Anything that is going to be in a HD cache will also be in the OS cache, so it seems like most of the time a large HD cache would go unused. Windows even does read-ahead caching.

    So it seems to me HD caching quickly reaches a point of diminishing returns as the size increases. The benefits it grants now probably come from the highly specialized nature of the cache: facilitating internal operations, reordering reads/writes/transfers, more efficient algorithms, more or less aggressive read-ahead, etc. I have to believe that if greatly increasing the HD cache size would provide a substantial increase in performance, manufacturers would be doing it. RAM prices are low enough that the premium would be a small percentage of the drive cost. Therefore, the performance benefits of a larger cache simply must not outweigh even a small increase in price.

    I appreciate any comments. -Stuart
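The cost and bandwidth arithmetic in the post can be checked in a few lines of Python (the dollar figures and bus speeds are the post's own mid-2000s numbers, used purely for illustration):

```python
# Cost estimate from the post: a 1 GB PC3200 DIMM at ~$100 retail,
# built from sixteen 64 MB chips on a two-sided module.
dimm_price = 100.00                # USD, retail (figure from the post)
chips_per_dimm = 16
price_per_chip = dimm_price / chips_per_dimm
print(f"~${price_per_chip:.2f} per 64 MB chip")    # ~$6.25 per 64 MB chip

# Bandwidth sanity check: even old PC100 SDRAM outruns the SATA link.
pc100_bw = 100 * 10**6 * 8         # 64-bit bus at 100 MHz -> 800 MB/s
sata300_bw = 300 * 10**6           # SATA II link: 300 MB/s
print(pc100_bw // 10**6, "MB/s (PC100) vs", sata300_bw // 10**6, "MB/s (SATA)")
```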
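The diminishing-returns argument can also be sketched in software. Below is a toy LRU block-cache simulation under a heavy-tailed synthetic access pattern (an assumption for illustration; real drive firmware uses proprietary replacement and read-ahead policies):

```python
import random
from collections import OrderedDict

def lru_hit_rate(accesses, capacity_blocks):
    """Replay a block-access trace through an LRU cache; return the hit rate."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)        # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > capacity_blocks:
                cache.popitem(last=False)   # evict the least recently used
    return hits / len(accesses)

# Synthetic heavy-tailed trace: a few blocks are very hot, most are cold.
random.seed(0)
trace = [int(random.paretovariate(0.2)) - 1 for _ in range(50_000)]

for mb in (8, 16, 32, 64, 128):
    blocks = mb * 1024 // 4                 # cache capacity in 4 KB blocks
    print(f"{mb:>3} MB cache: {lru_hit_rate(trace, blocks):.1%} hit rate")
```

Because LRU has the stack-inclusion property, the hit rate can only rise (or stay flat) as the cache grows; the interesting question, as the post argues, is how little it rises past the first few megabytes.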