My favorite Varnish branch these days is the persistence branch. Persistence was introduced in Varnish Cache 2.1 as an experimental feature. The 2.1 code was never feature complete, as the code that was supposed to handle a full cache was never written. So what would happen was that Varnish would more or less self-destruct when the cache hit 100% full: it would throw away everything stored in the cache and start over. Not very useful. However, not many people were asking for it either.
A couple of months back we talked to some people from the Wikimedia Foundation. They are still running Squid and have been eager to migrate to Varnish, but have been unable to do so because Varnish doesn't persist its cache. They are afraid they won't be able to get back up again if they should ever suffer a catastrophic event such as a power outage. So they're stuck with Squid until we get persistence feature complete. A chance to deploy our software on one of the five largest websites in the world isn't something we want to waste, so we decided to finally write the missing bits of the persistence code.
Persistence in Varnish
Persistence in Varnish is somewhat different from the typical database or filesystem approach to persisting data. Since we're a cache, we're allowed to lose some data if that helps performance. When you start Varnish with a storage backend, that backend is called a silo. Persistence splits that silo up into segments and starts adding objects to one of them. When a segment is full it is sealed and synced to disk, and a new segment is opened. If Varnish experiences an unexpected exit it will discard all the data from the open segment, while the content of the closed, read-only segments is kept. That way we keep the number of synchronous operations, which are usually the limiting factor in performance, to a minimum. Smart, eh?
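To make the mechanics a bit more concrete, here is a minimal C sketch of that segment lifecycle. This is not Varnish's actual code; the names (silo, segment, silo_store and so on) and the fixed segment size are made up for illustration, and the fsync() of the real implementation is reduced to a comment.

#include <stdlib.h>
#include <string.h>

#define SEG_SIZE (1 << 20)               /* hypothetical 1 MB segments */

struct segment {
    char   data[SEG_SIZE];
    size_t used;
};

struct silo {
    struct segment **sealed;             /* read-only, already synced to disk */
    size_t           nsealed;
    struct segment  *open;               /* the one segment we write into */
};

/* Seal the open segment and open a fresh one. In a real implementation
 * sealing would end with an fsync() of the backing file; that is the
 * only synchronous operation, and it happens once per segment. */
static void seal_and_rotate(struct silo *s)
{
    s->sealed = realloc(s->sealed, (s->nsealed + 1) * sizeof *s->sealed);
    s->sealed[s->nsealed++] = s->open;
    s->open = calloc(1, sizeof *s->open);
}

/* Append an object to the open segment, rotating when it fills up. */
static void silo_store(struct silo *s, const void *obj, size_t len)
{
    if (s->open->used + len > SEG_SIZE)
        seal_and_rotate(s);
    memcpy(s->open->data + s->open->used, obj, len);
    s->open->used += len;
}

/* After an unexpected exit only the sealed segments are trusted;
 * whatever was in the open segment is simply discarded. */
static void silo_recover(struct silo *s)
{
    memset(s->open, 0, sizeof *s->open);
}

int main(void)
{
    struct silo s = { 0 };
    s.open = calloc(1, sizeof *s.open);
    silo_store(&s, "hello", 5);          /* lands in the open segment */
    seal_and_rotate(&s);                 /* now it would survive a crash */
    silo_recover(&s);                    /* open segment wiped, sealed kept */
    return 0;
}

The design choice is visible in the sketch: the write path is just a memcpy into the open segment, and the expensive synchronous work is amortized over an entire segment's worth of objects.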
How far along are we?
The current code is feature complete. When the cache gets full it will start discarding whole segments of data until we're back at a comfortable level. That is a bit brutal for some workloads, as there might be objects in those segments that still have a week or a month left of their TTL. But for other workloads, where the time to live of most objects is more or less the same, it is perfect.
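A sketch of that eviction behavior, reusing the hypothetical structs from the sketch above. The low-water mark and the oldest-first order are my assumptions for illustration, not the actual heuristics in the branch.

/* When the silo fills up, whole sealed segments are dropped, oldest
 * first, until usage is back under a comfortable low-water mark.
 * Per-object TTLs are not consulted: an object with a month of TTL
 * left goes down with its segment. */
static void silo_make_room(struct silo *s, size_t *used, size_t low_water)
{
    size_t drop = 0;

    while (*used > low_water && drop < s->nsealed) {
        *used -= s->sealed[drop]->used;
        free(s->sealed[drop]);
        drop++;
    }
    memmove(s->sealed, s->sealed + drop,
        (s->nsealed - drop) * sizeof *s->sealed);
    s->nsealed -= drop;
}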
Future improvements
The code will then be refined by adding support for copying objects from one segment to another in order to avoid backend requests. That way we could copy the still-valuable objects out of the segment we're about to kill. Another optimization would be to have Varnish more or less automatically store objects with different TTLs in different segments. Then, when the cache is full, there shouldn't be much need to copy objects once we decide to kill off a segment.
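As a rough illustration of the bucketing idea, again entirely hypothetical and with made-up boundaries:

/* Pick a segment family by TTL so that each segment mostly contains
 * objects that expire around the same time. Killing a whole segment
 * then throws away little remaining TTL, and few objects would need
 * to be copied out first. */
static int ttl_bucket(double ttl)
{
    if (ttl < 60.0)      return 0;   /* short-lived objects */
    if (ttl < 3600.0)    return 1;   /* up to an hour */
    if (ttl < 86400.0)   return 2;   /* up to a day */
    return 3;                        /* long-lived objects */
}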
3.0 backport and packages?
If there is demand we should be able to backport this to the 3.0 branch. Would you find that useful?
Where is the sauce?
It lives in this GitHub repo, in the branch named persistence. Details about the implementation can be found in the Varnish Trac.