OK, long-time reader, first-time poster.
Recently I was wondering why HD manufacturers don't offer a line of drives with massive caches. I'm thinking along the lines of 32, 64, or even 128 MB. The first reason that jumped out at me was cost: larger buffers would clearly increase the price of the drive. 32 MB of RAM must cost more than 16 MB, and 64 MB more still. But how much more? A normal 1 GB PC3200 DIMM costs about $100. A typical double-sided DIMM has 16 memory chips of 64 MB each, which means each chip costs roughly $6.25 at retail. It would presumably be even cheaper straight from the manufacturer, without the PCB. I would happily pay $6.25 more for a hard drive with a 64 MB cache.
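The back-of-the-envelope math, for anyone who wants to check it (these are the retail figures above, my rough estimates rather than actual manufacturer pricing):

```python
# Rough cost-per-chip estimate for commodity DRAM (retail figures from above).
dimm_price_usd = 100.00   # typical 1 GB PC3200 DIMM
dimm_size_mb = 1024       # 1 GB
chips_per_dimm = 16       # double-sided DIMM, 16 chips

chip_size_mb = dimm_size_mb // chips_per_dimm     # 64 MB per chip
chip_price_usd = dimm_price_usd / chips_per_dimm  # $6.25 per chip

print(f"{chip_size_mb} MB chip costs about ${chip_price_usd:.2f} at retail")
```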
But it gets better: the memory doesn't even have to be as fast as PC3200. As far as bandwidth goes, it only has to be able to saturate whatever bus connects the drive. SATA tops out at 300 MB/s, and even good old PC100, with a theoretical peak of 800 MB/s, beats that. Latency is another concern, but there PC100 is several orders of magnitude ahead of any mechanical hard drive.
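To put numbers on the bandwidth claim (the PC100 figure assumes the standard 64-bit SDRAM bus; this is just my arithmetic, not a benchmark):

```python
# Peak bandwidth: old PC100 SDRAM vs. a SATA II link.
pc100_clock_mhz = 100     # PC100 runs at 100 MHz
bus_width_bytes = 8       # standard 64-bit SDRAM data bus

pc100_peak_mb_s = pc100_clock_mhz * bus_width_bytes  # 800 MB/s theoretical
sata2_mb_s = 300                                     # SATA II max transfer rate

print(f"PC100 peak: {pc100_peak_mb_s} MB/s vs SATA II: {sata2_mb_s} MB/s")
```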
Could it be that larger caches impose an unacceptable penalty on cache misses? 64 MB of cache takes longer to search than 16 MB. But it seems manufacturers could minimize this by keeping a table of what is in the cache; then the drive would only have to check that table (or whatever structure they actually use) to determine whether the requested data is present. Granted, the table for a 64 MB cache would be bigger than the one for a 16 MB cache, but it seems to me this entire lookup could be done while the read heads are still moving into position.
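What I'm imagining is basically a hash map from block address to cache slot, so a hit-or-miss check stays effectively constant-time no matter how big the cache gets. A minimal sketch; the class, the block size, and all the names here are my own invention, not how any real drive firmware works:

```python
# Hypothetical on-drive cache directory: a hash map from logical block
# address (LBA) to the cached copy of that block.
BLOCK_SIZE = 512  # bytes; assumed, real firmware may use larger cache lines

class DriveCache:
    def __init__(self):
        self.directory = {}  # LBA -> block data

    def store(self, lba, data):
        self.directory[lba] = data

    def lookup(self, lba):
        # Constant-time membership check, independent of total cache size;
        # no scan of the whole cache is needed to detect a miss.
        return self.directory.get(lba)  # None on a miss

cache = DriveCache()
cache.store(1000, b"\x00" * BLOCK_SIZE)
hit = cache.lookup(1000)   # the cached block
miss = cache.lookup(2000)  # None
```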
So here's my next best guess: the OS disk cache performs most of the function an HD cache performs (hereinafter "OS cache" means caching in system RAM and "HD cache" means caching on the HD's PCB). Using Windows as an example, the OS already maintains a sizeable disk cache, scaled to how much RAM you have. A typical system might have 1 GB of RAM, and it would not be unusual for several hundred MB of that to be used as disk cache when there is plenty of free memory. Anything that would be in an HD cache will also be in the OS cache, so most of the time a large HD cache would go unused. Windows even does read-ahead caching.
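To convince myself of this, here's a toy simulation of the two caches stacked: the on-drive cache only ever sees requests that already missed in the (larger) OS cache, so its hit rate collapses. All the sizes and the access pattern are made-up assumptions, not measurements of a real system:

```python
import random
from collections import OrderedDict

class LRUCache:
    """Simple LRU cache tracking hit statistics."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # block -> True, most recent at the end
        self.hits = self.lookups = 0

    def access(self, block):
        self.lookups += 1
        if block in self.entries:
            self.entries.move_to_end(block)
            self.hits += 1
            return True
        self.entries[block] = True
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return False

random.seed(0)
os_cache = LRUCache(400)  # blocks of system RAM used as disk cache
hd_cache = LRUCache(64)   # on-drive buffer; only sees OS-cache misses

for _ in range(100_000):
    block = random.randrange(500)   # working set of 500 blocks
    if not os_cache.access(block):  # OS cache miss -> request reaches drive
        hd_cache.access(block)

print(f"OS cache hit rate: {os_cache.hits / os_cache.lookups:.0%}")
print(f"HD cache hit rate: {hd_cache.hits / hd_cache.lookups:.0%}")
```

The point of the toy model: by the time a block falls out of the big OS cache and gets requested from the drive again, it has long since been evicted from the small HD cache, so the HD cache barely ever hits.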
So it seems to me HD caching quickly reaches a point of diminishing returns as the size increases. Whatever benefit the cache grants now probably comes from its highly specialized nature: facilitating internal drive operations, reordering reads and writes, more efficient caching algorithms, more or less aggressive read-ahead, and so on.
I have to believe that if greatly increasing the HD cache size provided a substantial performance boost, manufacturers would already be doing it. RAM prices are low enough that the premium would be a small percentage of the drive's cost. Therefore, the performance benefit of a larger cache simply must not be worth even a small increase in price.
I appreciate any comments.