In a recent webinar, we shared some of the big dos and don’ts of Varnish use. After outlining the don’ts in an earlier blog post, we promised to follow up with a rundown of the five Varnish dos as presented in the webinar. Obviously, you should watch the webinar to get the full story on why and how you should take these actions in Varnish, but here’s a quick taster of what you’ll learn.

Without further ado…

 


 

Do these Varnish dos

1. Use Varnish’s shared memory logging 

Varnish has some powerful logging abilities (VSL - Varnish Shared memory Logging) that can provide a lot of troubleshooting and performance insight. Our webinar hosts have both written about shared memory logging in the past: I wrote about it on the Varnish blog, and Thijs provided some in-depth tutorial action on his own blog. We also have a webinar dedicated to logging coming up soon. Stay tuned.

Meanwhile, the logging tools are something we are all passionate about. Log output is quite verbose, but that’s part of what makes it as good as it is -- a massive trove of valuable information. Because of the level of detail you get, it is important to understand how to query and filter the log to get the information that is most valuable for your specific use cases. 

Two tips of note: there are tutorials on the Varnish documentation site to help you find your way around the logging system, and varnishlog has a JSON output - very useful if you need to feed the output to an external tool.
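
To give a flavor of how this fits together (a minimal sketch - the “trace:” label is just an example name, not a convention), you can even write your own records into the shared memory log from VCL with std.log() and then filter them back out with a VSL query:

    import std;

    sub vcl_recv {
        # Write a custom record into the shared memory log (VSL).
        # It shows up under the VCL_Log tag, so it can be filtered with e.g.:
        #   varnishlog -g request -q 'VCL_Log ~ "trace:"'
        std.log("trace: " + req.method + " " + req.url);
    }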

Do use shared memory logging -- even if you’re not planning to use Varnish for caching -- and familiarize yourself with the shared memory logging functions.  

 

2. Do memory management

Web traffic, as we all know, is unpredictable, and sizing for that lack of predictability is tricky. When you size your cache process, you have to take into account that the server might have, for example, 90GB available, but numerous factors eat into that availability: the runtime cost of Varnish itself, per-object overhead, and so on. Essentially there are a lot of moving parts, many of them unpredictable variables. We have generally recommended allocating no more than 80% of the RAM available on your system to compensate for not having a complete overview of everything, i.e. to leave a safe cushion against catastrophe. An unexpected spike in traffic, for example, will consume memory, and you have no way of knowing in advance that it is coming.
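
As a back-of-the-envelope illustration of that rule of thumb (the numbers are examples, not recommendations for your setup): on a server with 90GB available you would give the cache roughly 72GB and leave the rest as headroom, e.g.

    # ~80% of 90GB for the cache itself; the remaining ~18GB is headroom
    # for transient storage, per-object overhead and sudden traffic spikes.
    varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,72G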

In Varnish Enterprise, we offer the Memory Governor, a module that is part of the Massive Storage Engine (MSE). The Memory Governor answers two competing questions at once: can we use the machine’s memory more efficiently and reduce the headroom we need, while still protecting ourselves against huge, sudden traffic spikes?
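
The exact setup is an MSE detail that can differ between Varnish Enterprise releases, so treat the following as a hedged sketch rather than copy-paste material (check the MSE documentation for your version): the governor takes over when you let MSE size its memory cache automatically, and the overall footprint is then steered with a memory target parameter.

    # mse.conf (sketch - consult the MSE docs for the exact directives)
    env: {
        id = "mse";
        memcache_size = "auto";   # let the Memory Governor manage the memory cache
    };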

 

3. Do edge computing 

Edge computing (and computing on the edge) are buzzwords that definitely mean something for us -- and for performance. 

“The edge” is the outer tier of the architecture - a Varnish layer, a CDN layer - that lets you deliver content from as close to the user as possible, as well as cache the uncacheable.

Computing on the edge is one thing. Here, for example, we could process a cookie on the edge in order to deliver personalized content on the fly. The personal data is the only part of the page that is not cached: a placeholder and a template can be cached, and as soon as we can inspect the cookie and fetch the missing bit, we can insert it at the edge, thereby serving personalized content and “caching the uncacheable”. A number of VMODs come into play to make computing on the edge of your architecture work, as the sketch below illustrates.
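
As a simplified sketch of that idea (the cookie name and the X-Username header are made up for the example), vmod_cookie can pull the personal bit out of the request at the edge, while ESI lets the cached template keep a placeholder for it:

    import cookie;

    sub vcl_recv {
        # Inspect the cookie at the edge and keep only the piece we need,
        # so the rest of the page stays cacheable.
        cookie.parse(req.http.Cookie);
        set req.http.X-Username = cookie.get("username");
        unset req.http.Cookie;
    }

    sub vcl_backend_response {
        # Process ESI placeholders in the cached template so the personalized
        # fragment can be fetched and inserted per request.
        set beresp.do_esi = true;
    }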

Edge computing is a bit different. Varnish is, after all, CDN software itself. It offers several caching layers: one lives close to the origin, protecting backends and shielding them from overload, while other layers live closer to end users, at the edge of the architecture.

Right now, 5G is a good example of how we’re using edge computing: a few caching instances live close to the origin data centers, while many smaller caching instances are distributed across the infrastructure, close to end users, to push content delivery as close to zero latency as possible. You are probably making use of the edge already, but you could probably be doing more.

 

4. Use custom error messages

Here’s a simple one: replace the default error messages with your own. You can do this by editing the synthetic response in VCL - the vcl_synth and vcl_backend_error subroutines - so that the message, look and feel match your website and branding.
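
For example (a minimal sketch - the markup is obviously yours to brand), a synthetic response generated by Varnish can be rewritten in vcl_synth, and the same idea applies to backend fetch errors in vcl_backend_error:

    sub vcl_synth {
        # Replace the default synthetic page with branded markup.
        set resp.http.Content-Type = "text/html; charset=utf-8";
        synthetic({"<!DOCTYPE html>
            <html>
              <body>
                <h1>Sorry, something went wrong</h1>
                <p>Please try again in a moment.</p>
              </body>
            </html>"});
        return (deliver);
    }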

 

5. Do TLS 

Plain HTTP used to be the standard connection, but HTTPS has become the new standard: it’s secure and it performs better than plain HTTP. This is why, in 2015, we started a project called Hitch to build an open source, high-performance TLS proxy.
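
A minimal Hitch configuration looks roughly like this (the certificate path and the ports are placeholders for your own setup):

    # /etc/hitch/hitch.conf
    frontend = "[*]:443"                    # terminate TLS on port 443
    backend  = "[127.0.0.1]:8443"           # hand decrypted traffic to Varnish
    pem-file = "/etc/hitch/example.com.pem" # certificate plus private key
    write-proxy-v2 = on                     # preserve the client IP via the PROXY protocol

On the Varnish side you would then add a listener that accepts the PROXY protocol, for example with -a :8443,PROXY.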

We offer official Hitch packages as well as in-core TLS within Varnish Enterprise. Watch the webinar to learn more about the TLS options and how in-core TLS can achieve 150+ Gbps per server.

 

There you have it… the five dos for using Varnish… for now. We will be back with more dos and don’ts over time. Get in touch if you have questions or suggestions on Varnish dos and don’ts you think we should include.

 

WATCH THE WEBINAR