Disk Write Optimization?
I'm working on a scenario where I update an object several times over a short period. After the first retrieval, I can safely assume the object is in memory, so subsequent retrievals should come from the cache. For each inbound event, I'll retrieve and update -- every time -- replacing the cached image.
So my question is about the write-behind log. Let's say I've got 1M key/value pairs queued to write to disk, but only 500K of the keys are unique -- there are several values for the same key waiting in line. At that point, the entries earlier in the queue are already stale -- only the last one for each key matches the current cache image.
Is there a mechanism in place that optimizes the write-behind queue so that the earlier, stale images get removed before they hit disk?
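To make concrete the kind of coalescing I'm asking about, here's a minimal sketch (my own illustration, not anything from the actual product): a queue keyed by object key, where a new write for an already-pending key replaces the stale image instead of appending a second entry.

```python
from collections import OrderedDict

class CoalescingWriteBehindQueue:
    """Write-behind queue that keeps only the latest value per key.

    Enqueuing a key that is already pending replaces its stale value,
    so a drain only ever sees the most recent image of each object.
    """

    def __init__(self):
        self._pending = OrderedDict()

    def enqueue(self, key, value):
        # Replacing an existing entry discards the stale image;
        # move_to_end keeps flush order matching last-write order.
        if key in self._pending:
            self._pending.move_to_end(key)
        self._pending[key] = value

    def drain(self):
        """Return the (key, value) pairs to write to disk, then reset."""
        batch = list(self._pending.items())
        self._pending.clear()
        return batch

q = CoalescingWriteBehindQueue()
for i in range(6):       # six updates...
    q.enqueue(i % 3, i)  # ...but only three unique keys
print(len(q.drain()))    # 3 -- the earlier images were coalesced away
```

With this shape, my 1M-update / 500K-unique-key scenario would produce only 500K disk writes, at the cost of a lookup per enqueue. I'm asking whether the write-behind implementation does something equivalent.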