We just released the latest and greatest version of Varnish High Availability (or VHA as we like to refer to it), our cache replicator, with two main features: multi-cluster support for distributed replication and dynamic backends integration for easier and more versatile configuration.
As our last release was quite a while ago, let me give you a tour of VHA to remind you what it's all about. If you already know what VHA is and does, you can jump directly to the aptly named "What is new in VHA" section where I explain the new features.
Varnish High Availability is a cache replicator: it watches your Varnish traffic (much like varnishlog and varnishncsa do) and pushes new content to fellow Varnish nodes if they don't have it yet. You may be wondering why we took the time to develop such a solution when pure VCL solutions already exist. The reason is straightforward: they just weren't good enough.
VCL-based replication works, but it only works well in a few precisely scoped, basic scenarios, and each new feature requires an outsized amount of extra work, marring the VCL with exceptions and special cases. For example, most VCL-based replication setups work well enough for two nodes but become a nightmare to manage once you want to add a third node or support ESI.
So we set out to create something that would require virtually no configuration or maintenance, allowing us to keep our sanity and our hard-earned sleep hours. We started small and simple, and, release after release, here we are with a much larger feature set:
As mentioned in the introduction, one of the big features in this release is the multi-cluster support. Before 2.0, VHA would replicate to all known nodes to keep things simple. This translated, for clusters, into something like this:
So we got smarter, and now VHA 2.0 will react like this:
VHA requires light VCL integration to deliver its full potential, and in previous versions we made things easy by generating the VCL glue from the VHA configuration file (nodes.conf). The annoying part was that when a node went up or down, you had to:
Now, we don't have to care about the red lines anymore, further decoupling VHA from Varnish for better maintainability. Updating nodes.conf is easily automated using DNS, AWS services, or Consul, for example, partly because the configuration format is super simple. The three clusters in our previous examples are described by this nodes.conf:
[FRANCE]
FR1 = https://1.1.1.1
FR2 = https://1.1.1.2
FR3 = https://1.1.1.3
[USA]
USA1 = 2.2.2.1:8080
USA2 = 2.2.2.2:8080
USA3 = 2.2.2.3:8080
[CHINA]
CHN1 = http://[3:3:3::1]
CHN2 = http://[3:3:3::2]
CHN3 = http://[3:3:3::3]
The notable point here is that this configuration is the same for all nodes, in all clusters, and the best part is that VHA is resilient to desynchronization: nodes can be updated one by one or all at once without any problems.
The reason this is possible stems from our dynamic backends vmod: goto. Upon receiving a replication request, VHA will look up its callback address, resolve it, and use it as a backend. Easy! And to keep replication secure, we offer multiple checks to choose from: ACL, IP, port, and token. These criteria can be combined for more fine-grained control.
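To make the ACL criterion a bit more concrete, here is a minimal, purely illustrative VCL sketch (not the glue VHA generates): it assumes replication requests can be spotted via a hypothetical X-VHA-Origin header, and simply rejects those that don't come from a peer listed in the FRANCE cluster above.

# Purely illustrative sketch -- the real checks are enforced by VHA and vmod goto;
# the X-VHA-Origin header is a hypothetical marker, not part of VHA's API.
vcl 4.1;

# Placeholder backend so the snippet compiles on its own.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

acl replication_peers {
    "1.1.1.1";
    "1.1.1.2";
    "1.1.1.3";
}

sub vcl_recv {
    # Only accept requests flagged as replication traffic when they come
    # from a node listed in the ACL.
    if (req.http.X-VHA-Origin && client.ip !~ replication_peers) {
        return (synth(403, "Replication not allowed"));
    }
}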
Of course, we have also been refining the code and continuing our VSL (Varnish Shared Log) integration to keep expanding the possibilities, notably:
std.log("vha-instruction: forbid");
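To show where that instruction might fit, here is a small, hedged VCL sketch; the Cache-Control condition is just an example trigger we made up, and the exact subroutine you log from may differ in your setup.

vcl 4.1;

import std;

# Placeholder backend so the snippet compiles on its own.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # Example trigger (our own): ask VHA not to replicate objects the
    # origin marked as private.
    if (beresp.http.Cache-Control ~ "private") {
        std.log("vha-instruction: forbid");
    }
}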
".All this makes VHA 2.0 the most solid and easiest version to date, increasing your control over cache replication without losing any of the sweet performance boost we already provided. Request a free trial today if you'd like to try out VHA as part of Varnish Cache Plus.