Ending the pipe dream: when to use pipe in Varnish

As you may know, part of my job consists of helping Varnish users, both customers and community users (join us on IRC!), often looking at their VCL to see if there's anything wrong. And sometimes, there is. After all, VCL is a programming language and mistakes happen. However, one point stands out as being badly understood, and I feel it deserves a short blog post to set the record straight.

So today we're going to talk about piping requests to the backend. Are you ready?

Let's get cracking on pipe!

First, here's what you may see in a VCL:

sub vcl_recv {
    if (req.url ~ "^/admin/") {
        return (pipe);
    }
}

The idea is that everything under "/admin" is uncacheable, and that returning "pipe" will bypass the cache, making Varnish only a load-balancer/router for this subset of the URL space.

The good

Piping indeed accomplishes what you want, i.e. bypassing the cache.

The bad (and the ugly)

It sadly works in a way that is either damaging for security or for performance. Yikes!

When Varnish is asked to pipe, it will:

  • find a backend
  • send it the request headers that it parsed and possibly modified
  • plug the client socket and the backend one together and let them talk freely until the TCP connection is cut

The issue here is that you lose sight of what goes through Varnish until the connection is done (either the client or the backend closes it, or a Varnish timeout cuts it short), meaning you have no idea what the server returns: you can't check the status code, can't filter the headers, can't do anything, really; piping is the last thing you'll do in VCL. It's not great for accounting either, since you can't split the header bytes from the body bytes.

As for security, remember that HTTP can reuse connections, meaning a client can send further HTTP requests before the connection is closed. So an attacker could request the URL "/admin/" just to get piped, acquire a direct connection to your backend, and use subsequent requests to exploit it.

This is obviously bad, and the Varnish developers thought of it, which is why, before piping, Varnish unsets the "Connection" header, effectively disabling connection reuse for piped requests. The previous exploit is no longer possible, since the backend will close the connection once the response is sent.
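If you want to make that behavior explicit in your own configuration (a sketch; recent Varnish versions already do the equivalent internally, as described above), you can set the header yourself in "vcl_pipe":

sub vcl_pipe {
    # Force the backend to close the connection after this response,
    # so a client cannot smuggle extra requests through the pipe.
    set bereq.http.Connection = "close";
    return (pipe);
}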

Great? Not really: since your client and backend connections are now tied together, every time a user accesses "/admin/", they lose connection keep-alive. And it turns out that with TCP, establishing the connection is the most expensive part (because of the back-and-forth of the three-way handshake: SYN, SYN-ACK, ACK). By securing the setup, you also shot performance in the foot.

Do not take a pass on pass

What to do then? What you want is actually "passing" and it's used the same way as piping:

sub vcl_recv {
    if (req.url ~ "^/admin/") {
        return (pass);
    }
}

There's only one (huge) difference from pipe: passing still goes through the VCL state machine, meaning you keep control of what is sent to the backend, what is retrieved, and what you send back to the client.

With it, sockets are not glued together anymore and Varnish keeps its overseer role, analyzing each request. The big benefit is that you can still use keep-alive both on the client side and on the backend side since they are not coupled anymore. Hurray!

Note: when using Varnish as a load-balancer, this is exactly what happens: "vcl_recv" systematically returns "pass".
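A cache-free load balancer is then just a one-liner (a minimal sketch, assuming your backends are declared elsewhere in the VCL):

sub vcl_recv {
    # Never cache: every request still goes through the full
    # state machine, but always fetches from a backend.
    return (pass);
}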

What is pipe good for, then?

As passing takes care of all our HTTP(S) needs, and Varnish only accepts HTTP(S) traffic, piping is actually an escape hatch, allowing us to handle requests that get upgraded away from HTTP.

The only real use case is pretty much WebSockets. That's it. If your service doesn't support them, chances are that you don't need pipe at all.
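The canonical WebSocket setup looks like this (a sketch following the pattern from the Varnish documentation: pipe only when the client asks for a protocol upgrade, and copy the relevant headers over in "vcl_pipe"):

sub vcl_recv {
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }
}

sub vcl_pipe {
    if (req.http.Upgrade) {
        # The Upgrade/Connection headers must reach the backend
        # for the protocol switch to happen.
        set bereq.http.Upgrade = req.http.Upgrade;
        set bereq.http.Connection = req.http.Connection;
    }
}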

Worth noting: the builtin.vcl (which runs after your code if you didn't return from it) does return pipe if the request method isn't one of the standard ones. I would go a bit further and forbid pipe unless you know you really need it.
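For reference, the relevant part of the builtin.vcl looks roughly like this (paraphrased; check the exact file shipped with your Varnish version):

sub vcl_recv {
    if (req.method != "GET" &&
        req.method != "HEAD" &&
        req.method != "PUT" &&
        req.method != "POST" &&
        req.method != "TRACE" &&
        req.method != "OPTIONS" &&
        req.method != "DELETE") {
        # Non-standard method (e.g. CONNECT): hand it off blindly.
        return (pipe);
    }
}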

Update: the astute Rodrigo notes in the comments a use case that I had totally forgotten about: passing large files to slow clients. When you pass, Varnish slurps data from the backend as fast as it can, and discards it as soon as it's no longer needed. But if you have a lot of clients, or large files, or a combination of both, that data can take up a lot of memory. Piping, on the other hand, only grabs data from the backend as fast (or as slow) as the client consumes it, so this isn't an issue any more.

Ending cargo-culting too

I think this is one of the shortest pieces I've written here, but as you can see, it's pretty straightforward. As "pass" became more useful and the documentation clearer about it, we have seen new setups "forget" about "pipe". But as you know, the IT world is not made only of new setups, and piping still lurks around, ported over from old config files or copy-pasted out of blog posts from 2009.

So, let's do some spring cleaning (a bit early, I admit) and check for antiquated practices in our VCL files!

Read the Varnish Book to get more highlights about Varnish best practices.


Topics: VCL, VCL best practices, varnish mistakes


14/03/18 14:00 by Guillaume Quintard
