Francisco Velazquez

Recent Posts

Varnish modules (VMODs): An overview

This is a general overview of what Varnish modules (VMODs) are available, what they do and what kind of documentation is available for each. Upcoming blog posts will cover these VMODs individually in greater detail. Be sure to subscribe to the blog so you don't miss any of these posts.

One important thing to keep in mind about VMODs: they are powerful and convenient and help you do a lot of things. However, unless you are using Varnish Plus, you have to retrieve, build, install and update VMODs yourself. Varnish Plus customers receive prebuilt packages, which are much easier to handle and manage, saving considerable time and effort. Naturally, this convenience can be as important as the features the VMODs enable.

The Varnish Configuration Language (VCL) is the domain-specific language of Varnish. This language is very powerful and efficient for most tasks a cache server performs. However, sometimes you need more functionality, e.g., looking up an IP address in a database, or modularizing your code. Varnish modules (VMODs) exist for precisely these kinds of reasons.

VMODs are C code compiled as shared libraries, which can be called from VCL code. Compared to inline C, VMODs are much easier to maintain and are more secure. Every VMOD must come with a .vcc documentation file that can be compiled into a manual page or online documentation. This, together with the fact that VMODs are modules, makes it easier to share code and easier to debug in collaboration with other developers.

Table 1 presents an overview of the available VMODs we package and support in Varnish Plus. The second column, Varnish Plus only, marks those VMODs that are available only for Varnish Plus customers. The third column, Varnish Cache VMODs in Varnish Plus packages, marks those modules that are also available in Varnish Cache and that we bundle in Varnish Plus (Varnish Plus customers also receive support for these VMODs).

Table 1: Availability of Varnish modules (VMODs)
VMOD Varnish Plus Varnish Cache

Besides the VMODs presented here, there are many open source VMODs, and an ongoing effort to describe them is underway. Next is a short description of the VMODs listed in Table 1.

1. acl

This module allows you to match IP addresses against ACLs, similar to the built-in VCL ACL implementation. The key difference is that your ACLs do not need to be bound to the active VCL; they can be stored as strings in a separate store or even in backend responses.

Currently, only IPv4 addresses and subnets are supported. Entries can be prepended with the exclamation mark ! for a negative match.
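
A minimal sketch of how this could look in VCL. Note that the match() signature and the ACL string format shown here are illustrative assumptions, not necessarily the exact module API:

```vcl
import acl;

sub vcl_recv {
    # Hypothetical: the ACL is a plain string, e.g. fetched from a
    # backend response or a key-value store, instead of being compiled
    # into the active VCL. "!" prefixes an entry for a negative match;
    # only IPv4 addresses and subnets are supported.
    if (acl.match(client.ip, "192.0.2.0/24; !192.0.2.23; 198.51.100.17")) {
        return (synth(403, "Forbidden"));
    }
}
```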

Further reading

2. bodyaccess

This VMOD is primarily used for including the request body in the hashing function. We have made a tutorial that demonstrates this.

Further reading

3. cookie

This module makes it easier to handle HTTP cookies (without regex) in VCL. It parses a cookie header into an internal data store, where per-cookie get/set/delete functions are available. A filter_except() method removes all but a given comma-separated list of cookies; a filter() method removes a given comma-separated list of cookies.

A convenience function for formatting the Expires directive (in the Set-Cookie HTTP response header) is also included. If there are multiple Set-Cookie headers, vmod-header should be used.
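
As a sketch, a typical use of vmod-cookie looks like the following (assuming the parse/filter_except/get_string function names from the module's documented interface):

```vcl
import cookie;

sub vcl_recv {
    # Parse the Cookie header into the internal data store,
    # keep only the cookies the backend actually needs,
    # then write the cleaned-up header back.
    cookie.parse(req.http.Cookie);
    cookie.filter_except("SESSIONID,lang");
    set req.http.Cookie = cookie.get_string();
}
```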

4. debug

This module contains a set of utility functions for the developers of Varnish and VMODs. You should not use this module unless you have a deep understanding of Varnish internals; it is typically used only by Varnish core developers.

5. directors

This module enables backend load balancing in Varnish and implements a set of basic load balancing techniques.

For VMOD writers, it serves as an example of how one could extend the load balancing capabilities of Varnish.

6. edgestash

This module is a real-time templating engine for JSON objects. Varnish can assign a JSON object to each response, which enables truly dynamic responses. JSON objects can either be fetched from a backend or generated in VCL.

An Edgestash template is fetched and compiled into response optimized byte code and then stored into cache. This byte code is then executed on delivery, efficiently streaming the response to the client using a zero-parse and zero-copy algorithm. JSON is also stored into cache as a fast search index.

Edgestash currently supports the full Mustache spec, and is available in Varnish Cache Plus 4.1.6r2 and later.
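
A rough sketch of the flow in VCL. The parse_response() and add_json_url() names are assumptions for illustration; consult the vmod-edgestash man page for the exact interface:

```vcl
import edgestash;

sub vcl_backend_response {
    # Compile responses for templated URLs into Edgestash byte code.
    if (bereq.url ~ "\.html$") {
        edgestash.parse_response();
    }
}

sub vcl_deliver {
    # Hypothetical: bind a cached JSON object to the template on delivery.
    if (req.url ~ "\.html$") {
        edgestash.add_json_url("/data.json");
    }
}
```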

Further reading

7. goto

This module enables dynamic backends in Varnish, which allows dynamic routing on a per-request basis. Dynamic backends are backends defined at request time rather than being predefined.

The goto VMOD can use any HTTP request header as the source to define backends. A clear advantage of this is the possibility of deploying a DNS-backed server pool that scales up and down to accommodate request load. This feature fits well with autoscaling cloud deployments.
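
For example, a backend can be resolved from DNS at request time. This sketch uses the dns_backend() call from the vmod-goto documentation, though details may vary between versions:

```vcl
import goto;

sub vcl_backend_fetch {
    # Resolve the backend dynamically from the Host header.
    # DNS changes (e.g. from autoscaling) are picked up without
    # reloading the VCL.
    set bereq.backend = goto.dns_backend(bereq.http.Host);
}
```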

8. header

This module manipulates duplicated HTTP headers, for instance multiple Set-Cookie headers.

Further reading

9. http

This module allows Varnish to control HTTP communication. It can be found in the vmods-extras package. The module supports both synchronous and asynchronous communication, and it has automatic loop detection and prefetch URL generation capabilities. The prefetch capability is particularly useful for video on demand (VoD) and live video streaming servers.

Further reading

10. kvstore

This module allows you to manage a high-performance key-value store from VCL. The database is in-memory only, and supports setting a TTL (Time To Live) when a key-value pair is inserted.

kvstore is a good example of a VMOD that fundamentally extends the capabilities of VCL. With kvstore you can, for example, create a hash table to solve data organization problems that would otherwise require a high-level language.
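
A small sketch, assuming the init/get/set interface described in the vmod-kvstore documentation (the "maintenance" flag is our own illustrative convention):

```vcl
import kvstore;

sub vcl_init {
    # An in-memory key-value store, shared across requests.
    new flags = kvstore.init();
}

sub vcl_recv {
    # get(key, default): returns the stored value, or the default
    # on a miss. The flag could be set elsewhere in VCL with
    # flags.set("maintenance", "on", 300s), TTL included.
    if (flags.get("maintenance", "off") == "on") {
        return (synth(503, "Down for maintenance"));
    }
}
```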

11. leastconn

This VMOD implements the least connections director for Varnish. With this director, Varnish will send client requests to the backend with the least number of active connections.

Each backend has an associated weight. A backend with, e.g., weight 10 will receive twice the number of client requests as a backend with weight 5.

12. paywall

This module contains functions to ease the configuration of the Varnish Paywall.

Further reading

13. rewrite

This module aims at reducing the amount of VCL code dedicated to URL and HTTP header manipulation. The usual way of handling this in VCL is a long list of if-else clauses:

sub vcl_recv {
        if (req.url ~ "pattern1") {
                set req.url = regsub(req.url, "regex1", "substitute1");
        } else if (req.url ~ "pattern2") {
                set req.url = regsub(req.url, "regex2", "substitute2");
        }
}

Using vmod_rewrite, the VCL boils down to:

sub vcl_init {
        new rs = rewrite.ruleset("/path/to/file.rules");
}

sub vcl_recv {
        set req.url = rs.replace(req.url);
}

where file.rules contains:

"regex1" "subtitute1"
"regex2" "subtitute2"

14. rtstatus

This module lets you query your Varnish server for a JSON object containing counters. For example, visiting the URL /rtstatus.json on Varnish will produce an application/json response in the following format:

  "uptime" : "0+16:12:58",
  "uptime_sec": 58378.00,
  "hitrate": 30.93,
  "load": 2.73,
  "varnish_version" : "varnish-plus-4.1.2r1 revision 4d86388",
  "server_id": "ubuntu-14",
    {"server_name":"boot.default", "happy": 0, "bereq_tot": 13585163,"beresp_tot": 290463715,"pipe_hdrbytes": 0,"pipe_out": 0,"pipe_in": 0,"conn": 82094,"req": 82094},
    {"server_name":"boot.server1", "happy": 0, "bereq_tot": 0,"beresp_tot": 0,"pipe_hdrbytes": 0,"pipe_out": 0,"pipe_in": 0,"conn": 0,"req": 0},
    "VBE.boot.default.happy": {"type": "VBE", "ident": "boot.default", "descr": "Happy health probes", "value": 0},
  "VBE.boot.default.bereq_hdrbytes": {"type": "VBE", "ident": "boot.default", "descr": "Request header bytes", "value": 13585163},
  "VBE.boot.default.bereq_bodybytes": {"type": "VBE", "ident": "boot.default", "descr": "Request body bytes", "value": 0},

Whereas visiting the URL /rtstatus on Varnish will produce an application/javascript response.

15. saintmode

This module lets you deal with a backend that is failing in random ways for specific requests. It maintains a blacklist per backend, marking the backend as sick for specific objects. When the number of objects marked as sick for a backend reaches a set threshold, the backend is considered sick for all requests. Each blacklisted object carries a TTL, which denotes the time it will stay blacklisted.
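
In VCL this could look like the following sketch, based on the vmod-saintmode interface; the backend definition, threshold and TTL values are arbitrary examples:

```vcl
import saintmode;

backend be1 { .host = "192.0.2.10"; }

sub vcl_init {
    # Wrap the real backend; consider it fully sick once 10 objects
    # are on its blacklist.
    new sick_be = saintmode.saintmode(be1, 10);
}

sub vcl_backend_fetch {
    set bereq.backend = sick_be.backend();
}

sub vcl_backend_response {
    # Blacklist this object on this backend for 20 seconds,
    # then retry the fetch elsewhere.
    if (beresp.status >= 500) {
        saintmode.blacklist(20s);
        return (retry);
    }
}
```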

16. session

This module lets you manipulate session-local variables. This can be used to improve setups where Varnish acts both as a backend for Akamai and as a normal server for regular clients. At the time of writing, only the idle timeout can be accessed. Example:

sub vcl_recv {
        if (...) {
                # adjust the session's idle timeout here
        }
}

Further reading

17. softpurge

This module implements a cache invalidation strategy that lets Varnish reply to client requests with the cached object while updating it in parallel. softpurge reduces the TTL of a resource but keeps its grace value.
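
For example, softpurge can be hooked into a custom PURGE method. This is a sketch; the softpurge() call is the module's documented entry point, while the PURGE convention is our own:

```vcl
import softpurge;

sub vcl_hit {
    if (req.method == "PURGE") {
        # TTL drops, grace is kept, so stale content can still be
        # served while the object is refreshed in the background.
        softpurge.softpurge();
        return (synth(200, "Softpurged"));
    }
}

sub vcl_miss {
    if (req.method == "PURGE") {
        softpurge.softpurge();
        return (synth(200, "Softpurged"));
    }
}
```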

18. std

This is the standard VMOD, where the Varnish developers have added various utility functions. Most VCLs use at least one function from this VMOD. Function definitions are the following:

VOID cache_req_body(BYTES)
VOID collect(HEADER)
BOOL file_exists(STRING)
INT integer(STRING, INT)
INT port(IP)
STRING querysort(STRING)
INT real2integer(REAL, INT)
TIME real2time(REAL, TIME)
VOID rollback(HTTP)
VOID set_ip_tos(INT)
INT time2integer(TIME, INT)
REAL time2real(TIME, REAL)
VOID timestamp(STRING)
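
For instance, two of the most commonly used calls are std.collect(), which folds duplicate headers into one, and std.querysort(), which normalizes query parameters to improve the hit rate:

```vcl
import std;

sub vcl_recv {
    # Merge multiple Cookie headers into a single header.
    std.collect(req.http.Cookie);
    # Sort query parameters so /p?a=1&b=2 and /p?b=2&a=1
    # hash to the same cache object.
    set req.url = std.querysort(req.url);
}
```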

19. tcp

This module allows for changing attributes of TCP connections in your VCL code. The primary use is for rate limiting (pacing) using the fair queuing (FQ) network scheduler on recent Linux systems.
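
A minimal sketch. The set_socket_pace() argument is in KB/s according to the vmod-tcp man page, but verify the exact unit for your version; the URL pattern is an example:

```vcl
import tcp;

sub vcl_recv {
    # Pace delivery of video segments to roughly 1 MB/s per connection.
    if (req.url ~ "\.ts$") {
        tcp.set_socket_pace(1000);
    }
}
```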

Further reading

20. var

This module implements basic variable support in VCL. It supports strings, integers and real numbers. There are methods to get and set each variable.

There are global and local variables. Global variables have a lifespan that extends across requests and VCL programs, for as long as the VMOD is loaded. Local variables are cleaned up at the end of a request.
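
For example, using the documented get/set functions (global variables use the corresponding global_get/global_set calls):

```vcl
import var;

sub vcl_recv {
    # Local variable: lives for the duration of this request.
    var.set("orig_url", req.url);
    set req.url = regsub(req.url, "\?.*$", "");
}

sub vcl_deliver {
    # Read the value back later in the request flow.
    set resp.http.X-Orig-Url = var.get("orig_url");
}
```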

21. vsthrottle

This module limits the traffic rate on a Varnish server. It offers a simple interface for throttling traffic on a per-key basis to a specific request rate. Keys can be specified from any string, e.g., based on client.ip, a specific cookie value, or an API token.

The request rate is specified as the number of requests permitted over a period. Internally, this VMOD implements a token bucket algorithm.
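
For example, limiting each client to 15 requests per 10 seconds. is_denied() is the module's documented entry point; the numbers are arbitrary, and client.identity defaults to the client IP unless you set it yourself:

```vcl
import vsthrottle;

sub vcl_recv {
    # Arguments: key, max requests, period. Returns true when the
    # token bucket for this key is empty.
    if (vsthrottle.is_denied(client.identity, 15, 10s)) {
        return (synth(429, "Too Many Requests"));
    }
}
```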

Further reading

22. xkey

This module adds a secondary hash key, called xkey, to cached objects. This enables purging across all objects sharing the same xkey. You can use xkey to indicate relationships; think of it as a tag. It is possible to assign more than one key to an object.

Two good use cases are:

  1. news sites that add an xkey to all articles about the same event, or
  2. e-commerce sites that add an xkey to each stock keeping unit (SKU) of the products presented on a page.
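
A purge endpoint could then look like this sketch. xkey.purge() returns the number of invalidated objects; the xkey-purge request header name is our own convention here, and the keys themselves are set by the backend in a response header:

```vcl
import xkey;

sub vcl_recv {
    if (req.method == "PURGE") {
        # Invalidate every object tagged with the given key(s).
        set req.http.n-gone = xkey.purge(req.http.xkey-purge);
        return (synth(200, "Invalidated " + req.http.n-gone + " objects"));
    }
}
```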

23. cookie-plus

This module is an advanced cookie VMOD for interacting with request and response cookies. vmod-cookieplus allows you to:

  • get, add, delete, and keep Cookie and Set-Cookie values
  • remove unused client-side cookies
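
As an illustrative sketch only: the function names below mirror the vmod-cookie functions above and are assumptions; check the vmod-cookieplus man page for the exact interface:

```vcl
import cookieplus;

sub vcl_recv {
    # Hypothetical: keep only the session cookie on the request side,
    # dropping all other client-side cookies before the cache lookup.
    cookieplus.keep("SESSIONID");
}
```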

24. json

This module parses JSON strings. It can also create JSON contexts by parsing request and response bodies.

Further reading

Visit the Varnish documentation to learn more, and stay tuned and subscribe to the blog for more detailed posts about the VMODs and what they can do for you.


Photo by Raphael Koh on Unsplash


11/7/17 12:44 PM
by Francisco Velazquez

Top 10 things you need to know about Varnish

Whether we meet seasoned or novice users of Varnish, the caching software we all love, there are a number of questions we hear again and again. This is a list of questions that have popped up numerous times. If multiple users are curious, maybe you are, too.

1. How to terminate TLS/SSL connections with Varnish Cache and Varnish Plus?

The most convenient way to terminate SSL/TLS connections in Varnish is by using Hitch. To install Hitch, you can follow the steps in the Hitch repository. If you are a Varnish Plus customer, we recommend that you install it from the Varnish Plus repository as described in the Varnish Book or Varnish Software Customer Guide. The three main sources to answer most Hitch- and PROXY-related questions are the Hitch documentation, the hitch man page, and the vcl man page.

Hitch implements the PROXY protocol, which provides a convenient way to safely transport connection information, such as a client's address, across multiple layers of NAT or TCP proxies. This protocol allows Hitch to pass the client connection information on to Varnish and the backend in a protocol header that precedes the HTTP traffic. The PROXY header describes which IP address and port were used to connect to the proxy (Hitch), and which IP address and port it connected to. We recommend that you read this blog post, which explains in detail how to deal with the PROXY protocol.

Other useful links with documentation about Hitch are:

2. How to review and improve your VCL code?

To improve your Varnish Configuration Language (VCL) code, we recommend that you read the blog posts about best practices in Varnish. If your Varnish configuration doesn't behave as expected, there are many steps you can take to verify or find the problem in your VCL code.

2.1. Test your VCL in varnishtest

varnishtest is a script-driven program that helps you to test your Varnish configuration. varnishtest allows you to define mock clients and servers, and test your VCL against an automatically created temporary Varnish server. You can test external VCL files as in the examples below:

/etc/varnish/default.vcl:

vcl 4.0;

backend default {
    .host = "";
    .port = "8080";
}

sub vcl_recv {
    if (req.url ~ "/admin") {
        return (pass);
    }
}

b00001.vtc:

varnishtest "Test external VCL"

varnish v1 -vcl {
  include "/etc/varnish/default.vcl";
} -start

client c1 {
  # First client request with VXID=1001
  # Request misses. Varnish creates backend request with VXID=1002.
  # /index.html is cached from transaction VXID=1002
  txreq -url "/index.html"
  expect resp.http.X-Varnish == "1001"

  # Second client request with VXID=1003.
  # Request misses. Varnish creates backend request with VXID=1004.
  # /admin is passed
  txreq -url "/admin"
  expect resp.http.X-Varnish == "1003"

  # Third client request with VXID=1005
  # Request hit. Varnish builds response from resource cached in transaction
  # VXID=1002
  txreq -url "/index.html"
  expect resp.http.X-Varnish == "1005 1002"

  # Fourth client request with VXID=1006.
  # Request misses. (Varnish creates backend request with VXID=1007.)
  # /admin is passed
  txreq -url "/admin"
  expect resp.http.X-Varnish == "1006"
} -run
varnishtest b00001.vtc
# top TEST b00001.vtc passed (1.613)

varnishtest documentation is available in the Varnish Book, and in Varnish Cache documentation. In addition, look at the handful of varnishtest examples in your Varnish installation or in the Varnish Cache repository, and don't forget to consult man varnishtest.

2.2. Test compilation errors

To test your VCL code against compilation errors, run the command:

sudo varnishd -C -f <filename>

2.3. Contact Varnish support

You can speed up the review process of your VCL code by running the varnishgather script. varnishgather is a simple script designed to gather as much relevant information as possible on a Varnish setup. varnishgather collects various statistics and metrics from your setup. This information includes:

  • Any VCL files found in /etc/varnish/ (*.vcl, so /etc/varnish/secret is not included)
  • Output of dmesg, netstat, ip, iptables, sysctl, free, vmstat, df, mount, lsb_release, uname, and varnishstat -1
  • Any bans currently active
  • Any loaded vcl file

All details about varnishgather and its usage are available in its repository. Once you have the output of the script, please upload it to filebin, and send the URL of your upload to Varnish support.

3. How to calculate needed space in MSE and properly configure it?

The Varnish Massive Storage Engine (MSE) is an enhanced storage method for Varnish Plus. MSE is designed to store and handle over 100TB of data with persistence, which makes it very useful for video streaming. If you want your storage to be persistent, MSE needs a bookkeeping file, which is created with the mkfs.mse utility.

In order to calculate the size of the bookkeeping file in MSE, please follow the size recommendations in the Varnish Plus documentation. We also advise you to take a look at the MSE-specific counters in man varnish-counters, so you can monitor disk utilization. In addition, you might find the man page of mkfs.mse useful for getting a description of all MSE size-related parameters.

4. How to debug VHA and vha-agent?

Varnish High Availability (VHA) has a content-replicator agent called vha-agent. vha-agent reads the varnishlog, and for each object insertion detected in server A, it sends an HTTP HEAD request to a Varnish server B. If server B does not have the object, it requests it from server A. As a result, the same object is cached in both servers with only one single backend fetch.

If the URL length of your web resources is very long, you might need to adjust the http_req_hdr_len parameter of varnishd, so that the URLs are not truncated. vsl_reclen is another parameter to consider with very long HTTP header fields. These and other parameters are available in the man pages of varnishd and vha-agent.

As an additional troubleshooting action, we advise you to ensure that you have the latest versions of VHA and Varnish Cache Plus. Instructions for installing and upgrading VHA are available in the VHA installation guide.

Finally, if the issue you are facing persists, please do not forget to run the varnishgather script, upload the output to filebin, and send the link of your upload when you contact Varnish support. All details about varnishgather and its usage are available in its repository.

5. How to debug VCS?

Varnish Custom Statistics (VCS) is a data stream management system (DSMS) implementation for Varnish. VCS allows you to analyze the traffic from multiple Varnish servers in real-time to compute traffic statistics and detect critical conditions. This is possible by continuously extracting transactions with the vcs-key tags in your VSL. Thus, VCS does not slow down your Varnish servers.

VCS collects traffic information from one or more Varnish servers. A typical problem is due to wrong port configuration or firewall rules blocking traffic on the configured ports. You can check connectivity issues by running the vstatdprobe in the foreground with debugging info:

vstatdprobe -Fg -n <varnish instance name> -p <port vstatd server> <IP address vstatd server>

Details about all possible parameters of vstatdprobe are available in its man page.

Another typical configuration issue arises when VCS is configured to analyze data spanning many hours. If you are experiencing high memory usage, you can estimate the expected footprint by multiplying the value of bucket_len by the number of vcs-keys and the average size of a vcs-key in bytes. A common average vcs-key size is around 12KB. For example, a bucket_len of 3600 with 1,000 vcs-keys of 12KB each amounts to roughly 43GB. If your calculation confirms that the size of bucket_len is the problem, you should reduce it.

As an additional troubleshooting action, we advise you to double check that you have the latest version. For that, please see the VCS installation instructions.

6. How to configure Varnish repositories?

We advise you to use the Varnish Software repository and verify whether you have the most recent Varnish version installed. Instructions on how to configure your repository are available in the Varnish Book or Varnish Software Customer Guide. Then, to verify you have the latest Varnish version in Ubuntu, you can use the commands below to update your Varnish installation.

apt-get update
apt-cache policy varnish-plus

You might also want to read the major changes between versions. To find the files describing changes, you can run commands like the following:

dpkg -L varnish-plus | grep changes

7. How to get help with VMODs?

A Varnish module (VMOD) is a shared library that can be called from VCL code. VMODs are an excellent way to modularize (and hopefully share) your code.

At Varnish Software we are working to better document the VMODs available for Varnish Cache and Varnish Plus in Varnish Plus packages. If you encounter a problem with a VMOD packaged in Varnish Plus, you can get help from Varnish support. The first recommendation for solving problems with VMODs packaged in Varnish Plus is to make sure you have the most recent varnish-plus and varnish-plus-vmods-extra packages installed. For installation instructions, please visit the Varnish Plus documentation.

VMODs can be very different from one another. The main advice is to first look at the manual page of the VMOD you are working with; manual pages follow the naming scheme man vmod_<name>. If the documentation is not enough to solve your issue, please run the varnishgather script, upload the output to filebin, and send the link of your upload to Varnish support.

8. How to upgrade from Varnish 3 to Varnish 4?

In the Varnish Software lab, we are working on the vcl-migrator project. We will publish a blog post when this is ready. In the meantime, the varnish3to4 script is the main recommendation to syntactically migrate VCL code from Varnish version 3 to version 4.

varnish3to4 is a script that assists you in migrating a VCL file from Varnish 3 to 4. The script aims to replace most of the syntactical changes in VCL code from Varnish 3 to Varnish 4, but it is not exhaustive. You can download the script from its repository, where usage and up-to-date details are also documented.

If your VCL code is complex and the varnish3to4 script is not enough, you can get help from our experts by using the Varnish Professional services.

9. How to debug sick backends?

Health checks and Saint mode are mechanisms in Varnish to check the health of backends or specific resources, and mark them as healthy or sick. Health checks are done with probes. Failing probes may lead to sick backends, and when all backends from a director are sick you may encounter the no backend selected error. To identify when the probes are failing, we advise you to analyze the health probes with the following command:

varnishlog -g raw -q 'vxid == 0' -i Backend_health
varnishlog -d -q FetchError

The format of the result of a backend health probe is described in the manual page of vsl. Another useful command to debug health checks is varnishadm backend.list -p. The manual page of varnish-cli describes this and related commands.

If a backend is sick, serving stale objects is of great help. Stale objects are served using grace mode. An additional source to understand how stale objects are served is our blog on stale while revalidate.

If the commands in this post are not enough to diagnose the backend issues you are facing, please do not forget to include the output of the varnishgather script when contacting Varnish support.

10. How to expand the cache storage capacity on-the-fly in Varnish?

Varnish has different mechanisms to store the cache, namely malloc, file, and mse (Varnish Plus only). If you change the storage configuration, you have to restart Varnish for the changes to take effect. However, the persistence functionality of mse (Massive Storage Engine) allows you to implement an easy workaround to effectively expand your storage on-the-fly.

The workaround consists of configuring the parameters of the Varnish daemon varnishd so that you have two storage backends, the old one and the new one. Then you modify your VCL to instruct Varnish to use both storage backends based on any condition you wish. For example:

sub vcl_backend_response {
  if (condition) {
    set beresp.storage_hint = "MSE_old";
  } else {
    set beresp.storage_hint = "MSE_new";
  }
}

Finally, you will have to restart Varnish for the changes to take effect. Since you are using MSE with persistence enabled, the cache of the previous configuration is preserved. This workaround allows you to increase or decrease your storage capacity without overloading your backend.



9/25/17 1:00 PM
by Francisco Velazquez

Comparing the performance of parallel and serial ESI

Parallel ESI is a performance-enhanced version of open source ESI. Parallel ESI issues all the ESI includes in parallel, upfront, so a single slow include does not slow down other operations. Open source ESI processes includes serially, one command at a time. If a single include is slow, the whole delivery pipeline is stalled.


7/25/17 1:44 PM
by Francisco Velazquez

What is varnishtest?

Have you used varnishtest? Maybe not, but we’ve just made it a lot easier for you to adopt and put into practice!


1/7/16 9:13 AM
by Francisco Velazquez

Varnish Software Blog

The Varnish blog is where our team writes about all things related to Varnish Cache and Varnish Software...or simply vents.


