There is a term in Varnish Cache that every Varnish Cache user should know: “Hit for pass”. Like other Varnish Cache terms it is not self-explanatory, and to understand what it is you’ll need to understand some of the mechanics of caching.
Varnish caches. Caching is its sole purpose in life. One of the lesser known facts about Varnish Cache is that it also caches the fact that it isn’t caching something. Wait, what?
Let’s wind back a bit. One of the really nice features of Varnish Cache is that it will only send a single request for a given URL to your backend. This is sometimes referred to as request coalescing or request collapsing.
What happens is that as a request is sent to the backend, a “busy object” is created. If another client comes along and requests the same page Varnish will see the busy object and put the request on a waiting list for a corresponding response. When the response returns Varnish will distribute the response to all the clients on the waiting list. I’m oversimplifying this as there are certain situations where things get a bit more complicated.
But what happens if the backend response is uncacheable? If this Varnish installation is busy there might be hundreds of requests for this particular URL coming in every second. And since we’re only allowing a single one to access the backend we’re essentially serializing access to the URL. This would obviously be silly so we don’t do that.
Instead what happens is this. When Varnish is expecting a cacheable object and an uncacheable one arrives, it creates a hit-for-pass object. This is essentially an internal note to Varnish itself saying “Don’t cache this URL”. Whenever the next request for that specific URL arrives, Varnish knows right away that the content cannot be cached and sends the request straight to the backend without waiting for any outstanding requests to finish. This way we’re allowing multiple parallel requests to an uncacheable object.
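If you know up front that a URL can never be cached, you can also sidestep both the waiting list and the hit-for-pass machinery by passing it explicitly in vcl_recv. A minimal sketch, assuming a hypothetical /checkout/ path:

sub vcl_recv {
    # Hypothetical example: everything under /checkout/ is assumed to be
    # uncacheable, so pass it right away. These requests never enter the
    # waiting list and never create hit-for-pass objects.
    if (req.url ~ "^/checkout/") {
        return (pass);
    }
}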
Pitfalls with hit-for-pass objects
There is one problem with hit-for-pass objects. They are read-only. Since they are internal “notes” rather than cached objects, there is no way to list or delete them. So, I would recommend keeping the TTLs low for any URL that can potentially return an uncacheable response. The VCL built into Varnish Cache does this. Look at the following VCL:
sub vcl_backend_response {
    if (beresp.ttl <= 0s ||
        beresp.http.Set-Cookie ||
        beresp.http.Surrogate-control ~ "no-store" ||
        (!beresp.http.Surrogate-Control &&
          beresp.http.Cache-Control ~ "no-cache|no-store|private") ||
        beresp.http.Vary == "*") {
        /*
         * Mark as "Hit-For-Pass" for the next 2 minutes
         */
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
    }
    return (deliver);
}
As you can see, we’re overriding the TTL whenever we notice that the backend gives us something uncacheable. If you are overriding the default actions in vcl_backend_response, make sure to adjust the TTL so you don’t get stuck with a hit-for-pass object that will live on for weeks.
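For instance, a custom vcl_backend_response could look something like this sketch; the 30-second hit-for-pass TTL is just an illustrative value, not a recommendation:

sub vcl_backend_response {
    # ... your own caching logic above ...

    if (beresp.http.Set-Cookie ||
        beresp.http.Cache-Control ~ "no-cache|no-store|private") {
        # Remember the "don't cache this" decision, but only for 30 seconds
        # so a temporarily uncacheable page recovers quickly.
        set beresp.ttl = 30s;
        set beresp.uncacheable = true;
        return (deliver);
    }
}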
How to get out of a hit-for-pass pitfall?
Say you’ve gotten a hit-for-pass object on your front page. Your backend servers are probably on fire. What do you do? You could restart Varnish and have it flush its cache completely. If you don’t want to do that, you have the option of altering the hashing for that particular object.
Let’s say you’ve gotten a hit-for-pass object on the front page and the page has since become cacheable again. The following VCL will bypass the stuck object.
sub vcl_hash {
    hash_data(req.url);
    if (req.url == "/") {
        # Workaround for a stuck hit-for-pass object:
        # just add an "x" to the hash.
        hash_data("x");
    }
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (lookup);
}
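The extra hash_data("x") gives the front page a new cache key, so incoming requests no longer match the stuck hit-for-pass object; the next response from the backend is stored under the new key and, assuming it is cacheable again, served from cache as usual. Once the old hit-for-pass object has expired, remove the workaround so the front page goes back to its original key.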
When you look around online, you can see a number of threads asking what exactly “hit-for-pass” is and does. While most websites use caching and straightforward caching is well understood, some sites eventually become complex, large and/or heavily trafficked, and a basic understanding is no longer adequate. We see this frequently in media or e-commerce sites, for example, where the benefits and effects of caching are essential (and obvious). But for these kinds of businesses, there are so many other moving parts and things to focus on that digging deeper into optimizing caching (into things like hit-for-pass or more complex caching concepts) might be better left in expert hands. We are seeing a lot of companies coming to us for professional support (such as a large German e-commerce platform that started using Varnish Plus almost entirely on the strength of the expert support team).
Varnish Cache is a powerful tool that works for a lot of companies and their sites; Varnish Plus takes it a step further, particularly because of the expertise of the support team, which knows caching and the flexibility of the VCL language inside-out.
Ready to test what Varnish Plus can do for you? Sign up for a free trial.
Image is (C) 2015 Stephen Donaghy, used under a Creative Commons license.