Most people know that “cache invalidation” is one of the two hard things in computer science (the other being “naming things”), and a strong cache invalidation strategy is as important as your caching strategy itself. You need to be able to free memory in your cache to make room for new content (purging), and you may need to ban content that should no longer be served, either because it is outdated or because it was cached in error and should never have been in cache at all (mistakes happen).
Nowadays an increasing number of websites use more than a single caching instance: it is best practice to run at least a pair of Varnish instances in High Availability to ensure reliability and availability for your end users. Many of Varnish Software’s customers also use clusters of Varnish servers, for example groups located in the same place (such as “Production” and “Testing” groups), or groups of servers placed in different regions, which is the typical private CDN approach.
No matter how distributed your architecture is or how many servers you are using, content will need to be invalidated at some point. To keep every cache instance up to date and consistent with the other caching servers, you need to be smart about your cache invalidation.
For example, if you have two Varnish instances running in a High Availability cluster, then whenever a piece of content is evicted from one of the two nodes, you want it, for the sake of consistency, to be nuked from the other node as well, so that your end users get the same content no matter which Varnish instance serves it. This is just one, admittedly simple, example of the scenarios you have to plan for. The difficulty of getting cache invalidation right grows with your architecture as soon as you have more than a single Varnish cluster, and perhaps several caching layers as well.
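To make the problem concrete, here is a sketch of what invalidation looks like without a broadcaster: you have to send the same PURGE request to every node yourself. The hostnames and ports below are hypothetical placeholders, and the function only prints the curl commands (a dry run) rather than sending them.

```shell
# Without a broadcaster, consistent invalidation means sending the same
# PURGE to every node yourself. Hostnames below are hypothetical.
purge_all_nodes() {
    # $1 is the path to purge; "echo" makes this a dry run --
    # remove it to actually send the requests.
    for node in varnish-a.example.com:6081 varnish-b.example.com:6081; do
        echo curl -is "http://${node}$1" -X PURGE
    done
}

purge_all_nodes /foo/bar
```

This works for a pair of nodes, but the script (and every place it is invoked from) has to be kept in sync with your topology, which is exactly the bookkeeping the broadcaster removes.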
That’s why we developed the Varnish Broadcaster, which replicates requests to multiple Varnish caches from a single entry point. This makes it easy to purge or ban content across multiple Varnish instances by running a single command.
The broadcaster exposes a REST API, which receives invalidation requests and distributes them to all configured caches.
Varnish Broadcaster ships as a package, making installation straightforward: use either “yum install varnish-broadcaster” or “apt-get install varnish-broadcaster”, depending on your OS.
# this is a comment
[Europe]
First = 188.8.131.52:9090
Second = 184.108.40.206:6081
Third = example.com

[US]
Alpha = http://[1::2]
Beta = 220.127.116.11
You will then need to configure the Varnish caches the broadcaster can send requests to. In this example we have two groups, one in Europe and the other in the US, and within each group you define a name for each Varnish instance along with its address.
Both purging and banning are done by sending an HTTP request to the broadcaster.
curl -is http://localhost:8088/foo/bar -H "X-Broadcast-Group: prod" -X PURGE
We are purging the “/foo/bar” page from every Varnish instance that is part of the group “prod”.
curl -is http://localhost:8088/ -H "Xkey-purge: a1b2c3d4e5f6" -X PURGE
Purge by xkey is also supported as specified here.
curl -is http://localhost:8088/foo -H "X-Broadcast-Group: testing" -X BAN
We are banning the page “/foo” from every Varnish instance that is part of the “testing” group.
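Requests like these are easy to wrap in a small helper. The sketch below assumes the broadcaster listens on localhost:8088, as in the examples above; the function only prints the curl commands it would run (remove the “echo” to actually send them).

```shell
# Tiny wrapper around the broadcaster endpoint used in the examples above.
# BROADCASTER defaults to localhost:8088; override it in your environment.
BROADCASTER="${BROADCASTER:-localhost:8088}"

invalidate() {
    # usage: invalidate PURGE|BAN <group> <path>
    # "echo" makes this a dry run -- remove it to send the requests.
    echo curl -is "http://${BROADCASTER}$3" -H "X-Broadcast-Group: $2" -X "$1"
}

invalidate PURGE prod /foo/bar
invalidate BAN testing /foo
```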
Varnish Broadcaster simplifies the whole cache invalidation process, making your caching layer more performant and easier to manage. It reduces the time sysadmins and devops engineers have to spend writing scripts or other logic to automate the purging/banning workflow, freeing that time for other activities and making Varnish even more devops friendly.
Documentation for the Varnish Broadcaster lives on our documentation website. If you'd like to learn more, you can also join us for a live webinar on the 29th of August.