Varnish is a caching server, and a great one at that; that much we already know. But what about the content you don't want to cache? For example, those shopping cart requests and other completely uncacheable API calls?
We can of course handle it, but we've got to be wary of the sirens of the cargo cult because you will often see something like this on the internet:
sub vcl_backend_response {
    # check if the backend response header named
    # "cache-control" contains the word "private"
    if (beresp.http.cache-control ~ "private") {
        # if so, don't cache by limiting the Time-To-Live
        # period to 0 seconds
        set beresp.ttl = 0s;
    }
}
This is both pretty intuitive and also very wrong. In this post, we'll explore why it's a bad idea, how to do better, and along the way, we'll try to shine some light on a couple of lesser-known features of Varnish.
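As a preview, here is a sketch of the commonly recommended alternative. It mirrors what Varnish's built-in VCL does for uncacheable responses: instead of zeroing the TTL, it marks the object as uncacheable, which makes Varnish remember that this URL shouldn't be cached (a "hit-for-miss" marker). The 120-second duration here is the built-in VCL's default, not a requirement:

```vcl
sub vcl_backend_response {
    # same trigger as before: the backend said the response is private
    if (beresp.http.cache-control ~ "private") {
        # keep a positive TTL for the hit-for-miss marker itself
        set beresp.ttl = 120s;
        # mark the object uncacheable: for the next 120 seconds,
        # requests for this URL bypass the cache and, crucially,
        # are not serialized waiting on each other
        set beresp.uncacheable = true;
        return (deliver);
    }
}
```

The rest of the post digs into why this distinction matters.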