Welcome to your freshly installed Varnish instance!


In order for you to start caching with Varnish, you just need to go through a few things:

  1. Backend configuration
  2. Customizing caching behavior
  3. Object lifetime & cache invalidation
  4. What about SSL/TLS?
  5. Parameter tuning
  6. Support
  7. Taking it to the next level
  8. Rate & review us
 
 

Select your subscription: the first set of instructions below applies to Varnish Enterprise, the second to Varnish Cache (open source).

1. Backend configuration

Varnish is a reverse caching proxy, which means it sits in front of your origin servers. You’ll need to register the hostname and port of your backend to allow Varnish to fetch and cache content.


To do this, open up your Varnish Configuration Language (VCL) file, which is located under /etc/varnish/default.vcl on your Varnish server, and edit the following section:

# Default backend definition. Set this to point to your content server.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

Replace the .host and .port properties with the hostname and the TCP listening port of your origin server. This can be an individual server, or the endpoint of a load balancer.
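For example, if your origin is reachable at origin.example.com on port 80 (a hypothetical hostname used here purely for illustration), the definition would become:

backend default {
    .host = "origin.example.com";
    .port = "80";
}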

To activate these settings, run the following command on the server:

sudo systemctl reload varnish.service

Varnish can also connect to multiple backends. These backends can be created and assigned in your VCL logic. Load balancing requests between multiple backends is also possible by grouping them into directors.
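Here's a minimal sketch of load balancing with the directors VMOD (the backend names and addresses are assumptions; replace them with your own origins):

import directors;

backend web1 {
    .host = "192.0.2.10";
    .port = "8080";
}

backend web2 {
    .host = "192.0.2.11";
    .port = "8080";
}

sub vcl_init {
    # Group both origins into a round-robin director
    new origin_pool = directors.round_robin();
    origin_pool.add_backend(web1);
    origin_pool.add_backend(web2);
}

sub vcl_recv {
    # Send requests to the director instead of a single backend
    set req.backend_hint = origin_pool.backend();
}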

Varnish Enterprise also has a dynamic backends module that allows you to define backends on the fly.

Regular backend definitions are static: they are explicitly defined in your VCL file and will be processed when Varnish starts up.

The dynamic backends feature doesn't require hardcoded backend definitions: backends can be resolved and changed at runtime.

In some cases it is tough to anticipate which backend you’ll connect to. Having the ability to just proxy to a hostname adds a lot of flexibility. 

Here’s an example of dynamic backends:

import goto;
sub vcl_backend_fetch {
    set bereq.backend = goto.dns_backend("example.com");
}

Further information on backend selection, load balancing and backend health checks can be found on our documentation site.

▸ Backend documentation

▸ Dynamic backends documentation

 


 

2. Customizing caching behavior

The out-of-the-box behavior of Varnish might not comply with what your application expects. Chances are that certain pages cannot be cached due to the use of cookies, or maybe certain pages must be cached, despite the use of cookies.

Either way, the built-in Varnish Configuration Language allows you to fully customize the behavior of Varnish. Our documentation site features a user guide that describes the syntax and the usage of VCL.

We also provide a full reference manual that covers all VCL variables.

Customizing Varnish behavior happens in the same VCL file that we mentioned earlier: /etc/varnish/default.vcl.
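For example, a common customization for the cookie scenario mentioned above is to ignore cookies for static assets so they can still be cached. Here is a minimal sketch (the list of file extensions is an assumption; adjust it to your application):

sub vcl_recv {
    # Cookies normally make a request uncacheable; strip them for static assets
    if (req.url ~ "\.(css|js|png|jpg|gif|svg|woff2?)$") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    # Also drop Set-Cookie on those responses so Varnish will store them
    if (bereq.url ~ "\.(css|js|png|jpg|gif|svg|woff2?)$") {
        unset beresp.http.Set-Cookie;
    }
}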

To commit these changes, please reload the Varnish instance using the following command:

sudo systemctl reload varnish.service

▸ VCL reference manual

 


 

3. Object lifetime and cache invalidation

The lifetime of a cached object is defined by the value of the Cache-Control header that is sent by the application. If no such header is present, Varnish will use its default_ttl configuration parameter, which is set to 120 seconds by default.

Here’s an example where the application issues a Cache-Control to cache an object for an hour:

Cache-Control: public, s-maxage=3600

Here’s an example where the application is instructing Varnish not to cache the object:

Cache-Control: private, no-cache, no-store

If, for some reason, your application cannot send these headers, you can also override the lifetime of an object by writing VCL code.

Here’s an example where we set the Time To Live of an object to an hour if the request URL was “/about”:

sub vcl_backend_response {
    if (bereq.url == "/about") {
        set beresp.ttl = 1h;
    }
}

Storing objects in the cache with the proper lifetime is important, but getting them out of the cache in time is equally important. One can wait until the Time To Live has expired, but in a lot of cases, cached content needs to be purged explicitly when content changes on the backend.

 

Varnish Enterprise customers can use the typical purging and banning mechanisms that all versions of Varnish support.

However, purging and banning are URL-based. This means you need to know the URL of every occurrence of the content that needs to be invalidated.

For simple content this is perfectly viable, but for content that appears on many different URLs, this can get complicated.

Varnish Enterprise customers can benefit from a tag-based invalidation mechanism where the URL doesn’t really matter: objects can be tagged with multiple tags and by invalidating a tag, all the corresponding objects are removed from cache, regardless of the URL.

The following snippet of code can be added to your VCL file to enable both tag-based invalidation and purge support:

import ykey;

acl purge {
    "localhost";
    "192.168.55.0"/24;
}

sub vcl_recv {
    if (req.method == "PURGE") {
        # Only clients matching the purge ACL may invalidate content
        if (client.ip !~ purge) {
            return (synth(403, "Forbidden"));
        }

        if (req.http.Ykey-Purge) {
            # Tag-based invalidation: remove every object carrying one of the listed tags
            set req.http.n-gone = ykey.purge_header(req.http.Ykey-Purge, sep=", ");
            return (synth(200, "Invalidated " + req.http.n-gone + " objects"));
        } else {
            # Fall back to a regular URL-based purge
            return (purge);
        }
    }
}

sub vcl_backend_response {
    # Register the tags sent by the backend in the Ykey response header
    ykey.add_header(beresp.http.Ykey);
    # Explicitly tag all images under /content/image/
    if (bereq.url ~ "^/content/image/") {
        ykey.add_key("image");
    }
}

Replace the acl purge values with the hostnames, IP addresses and subnets that are allowed to perform purges on your Varnish server.

Executing a purge can be done by calling the URL with a custom HTTP PURGE method instead of a regular HTTP GET method. In this example we’re using curl to purge the /about page:

    curl -XPURGE http://example.com/about

If you want to invalidate all objects containing a hello and world tag, you can use the following statement:

curl -XPURGE -H"Ykey-Purge: hello, world" http://example.com

Tagging objects can be done explicitly in VCL, as illustrated in the example above with the ykey.add_key("image") statement. But you can also add the custom Ykey header to your HTTP responses to tag your objects:

HTTP/1.1 200 OK
Ykey: hello, world
Content-Type: text/html; charset=UTF-8

Hello world


▸ Tag-based invalidation documentation

 


 

4. What about SSL/TLS?

As a Varnish Enterprise user, you have end-to-end SSL/TLS support available.

We’ve pre-installed Hitch, our TLS proxy that is built for the job.

Hitch can proxy requests to Varnish over TCP/IP and Unix Domain Sockets, guaranteeing low latency.  It listens on port 443, it terminates the SSL/TLS connection and it forwards the unencrypted traffic to Varnish directly.

Both Hitch and Varnish support the PROXY protocol, which makes sure the original client IP address is automatically sent to Varnish. Varnish will store this IP address in the X-Forwarded-For header.

We’ve provided a dummy certificate, but it is important that you load the correct certificates into Hitch. Certificates can be placed into /etc/hitch/certs. Further TLS configuration can be done in the Hitch configuration, which is located in /etc/hitch/hitch.conf.
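As a rough sketch, a minimal /etc/hitch/hitch.conf could look like this (the certificate filename and the Varnish listening address are assumptions; match them to your own setup):

# Accept TLS traffic on port 443
frontend = "[*]:443"

# Forward decrypted traffic to Varnish, assumed here to accept PROXY traffic on 127.0.0.1:8443
backend = "[127.0.0.1]:8443"

# Certificate and private key bundle placed in /etc/hitch/certs
pem-file = "/etc/hitch/certs/example.com.pem"

# Use the PROXY protocol so Varnish sees the original client IP
write-proxy-v2 = on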

▸ Learn more about Hitch 


 

5. Parameter tuning

Your Varnish instance has been pre-configured with the default settings, which suits most Varnish users. We advise you to have a look at them and update the configuration according to your needs.
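One way to have a look at the current values is through varnishadm on the running instance:

sudo varnishadm param.show
sudo varnishadm param.show default_ttl

The first command lists all runtime parameters and their current values; the second shows the details of a single parameter, in this case the default object lifetime.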

Our documentation site has a reference section with all available runtime options. This will help you to properly configure the Varnish process.

To change the runtime options, run the following command on your Varnish instance:

sudo systemctl edit varnish.service
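For example, the drop-in file opened by that command could override the varnishd command line like this sketch (the listening address, cache size and default_ttl value are assumptions; tune them to your workload):

[Service]
# The first empty ExecStart clears the ExecStart inherited from the packaged unit
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,2G -p default_ttl=3600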

Edit accordingly and save the file. To commit these changes, run the following command:

sudo systemctl daemon-reload

And finally, it’s a matter of restarting Varnish, by running the following command:

sudo systemctl restart varnish.service

▸ Varnish runtime options reference manual

 


 

6. Support

Varnish Software offers a “getting started” online session that includes an introduction to the software, best practices, tips for configuration and an overview of available support agreements.

 

Cloud customers need to register to get access to software updates. You can register for updates, and optionally request the online introduction, by filling out the form available here.

 

Standard and premium support levels are available through an additional support subscription. This subscription gives access to the Varnish Software Support Service, which offers up to 24/7 access, defined SLAs (down to 2 hours response time), and assistance in identifying solutions or workarounds to problems with the Software.

 

Read here for more about support services or contact sales@varnish-software.com for more details.

 


 

7. Taking it to the next level

As a Varnish Enterprise 6 user, you have a collection of extra features at your disposal. 

In a Cloud environment where elasticity and scalability matter, you might want to use Varnish High Availability (VHA). VHA will make sure that the cache is synchronized in a setup with multiple Varnish instances.

We also have an auto-discovery service that keeps track of dynamic Varnish inventories. When you scale your Varnish cluster up or down, varnish-discovery will make sure VHA is aware of these changes.

▸ Learn more about Varnish High Availability

There are a lot more extra features you can leverage - see the full list here

 


 

8. Rate your experience

Thanks for trying out Varnish Cloud. We hope that the information we provided on this page, and the references to documentation material, will help you accelerate your web project.

We take your satisfaction to heart. Please let us know about your Varnish Cloud experience by leaving a short review.

▸ Review us on AWS

 ▸ Rate us on Azure

1. Backend configuration

Varnish is a reverse caching proxy, which means it sits in front of your origin servers. You’ll need to register the hostname and port of your backend to allow Varnish to fetch and cache content.


To do this, open up your Varnish Configuration Language (VCL) file, which is located under /etc/varnish/default.vcl on your Varnish server, and edit the following section:

# Default backend definition. Set this to point to your content server.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

Replace the .host and .port properties with the hostname and the TCP listening port of your origin server. This can be an individual server, or the endpoint of a load balancer.

To activate these settings, run the following command on the server:

sudo systemctl reload varnish.service

Varnish can also connect to multiple backends. These backends can be created and assigned in your VCL logic. Load balancing requests between multiple backends is also possible by grouping them into directors.
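Backend health checks can also be defined in VCL by attaching a probe to a backend. Here is a minimal sketch (the /health URL and the timing values are assumptions; adapt them to your origin):

probe healthcheck {
    .url = "/health";
    .interval = 5s;
    .timeout = 1s;
    .window = 5;
    .threshold = 3;
}

backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .probe = healthcheck;
}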

Further information on backend selection, load balancing and backend health checks can be found on our documentation site.

▸ Backend documentation

▸ Dynamic backends documentation

 


 

2. Customizing caching behavior

The out-of-the-box behavior of Varnish might not comply with what your application expects. Chances are that certain pages cannot be cached due to the use of cookies, or maybe certain pages must be cached, despite the use of cookies.

Either way, the built-in Varnish Configuration Language allows you to fully customize the behavior of Varnish. Our documentation site features a user guide that describes the syntax and the usage of VCL.

We also provide a full reference manual that covers all VCL variables.

Customizing Varnish behavior happens in the same VCL file that we mentioned earlier: /etc/varnish/default.vcl.

To commit these changes, please reload the Varnish instance using the following command:

sudo systemctl reload varnish.service

▸ VCL reference manual

 


 

3. Object lifetime and cache invalidation

The lifetime of a cached object is defined by the value of the Cache-Control header that is sent by the application. If no such header is present, Varnish will use its default_ttl configuration parameter, which is set to 120 seconds by default.

Here’s an example where the application issues a Cache-Control to cache an object for an hour:

Cache-Control: public, s-maxage=3600

Here’s an example where the application is instructing Varnish not to cache the object:

Cache-Control: private, no-cache, no-store

If, for some reason, your application cannot send these headers, you can also override the lifetime of an object by writing VCL code.

Here’s an example where we set the Time To Live of an object to an hour if the request URL was “/about”:

sub vcl_backend_response {
    if (bereq.url == "/about") {
        set beresp.ttl = 1h;
    }
}

Storing objects in the cache with the proper lifetime is important, but getting them out of the cache in time is equally important. One can wait until the Time To Live has expired, but in a lot of cases, cached content needs to be purged explicitly when content changes on the backend.

The purging mechanism in Varnish allows you to explicitly purge objects from the cache, based on the URL.

The following snippet of code can be added to your VCL file to enable purge support:

acl purge {
    "localhost";
    "192.168.55.0"/24;
}

sub vcl_recv {
    # allow PURGE from localhost and 192.168.55...
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return (synth(405, "Not allowed."));
        }
        return (purge);
    }
}

 

Replace the acl purge values with the hostnames, IP addresses and subnets that are allowed to perform purges on your Varnish server.

Executing a purge can be done by calling the URL with a custom HTTP PURGE method instead of a regular HTTP GET method. In this example we’re using curl to purge the /about page:

    curl -XPURGE http://example.com/about


▸ Invalidation documentation

 


 

4. What about SSL/TLS?

Varnish Cache doesn’t natively support SSL/TLS encryption, but that doesn’t mean you can’t still protect your application with an SSL/TLS certificate.

If you run a load balancer in front of your Varnish servers, you can terminate TLS on the load balancer and host your certificates there.

You can also install Hitch, a powerful SSL/TLS proxy, on your instance.
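On most Linux distributions Hitch is available from the standard package repositories; for example (which command applies depends on your distribution, and on RHEL/CentOS the EPEL repository is assumed to be enabled):

sudo apt-get install hitch     # Debian/Ubuntu
sudo yum install hitch         # RHEL/CentOS (EPEL)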

Hitch can proxy requests to Varnish over TCP/IP and Unix Domain Sockets, guaranteeing low latency.  It listens on port 443, it terminates the SSL/TLS connection and it forwards the unencrypted traffic to Varnish directly.

Both Hitch and Varnish support the PROXY protocol, which makes sure the original client IP address is automatically sent to Varnish. Varnish will store this IP address in the X-Forwarded-For header.

We’ve provided a dummy certificate, but it is important that you load the correct certificates into Hitch. Certificates can be placed into /etc/hitch/certs. Further TLS configuration can be done in the Hitch configuration, which is located in /etc/hitch/hitch.conf.

▸ Learn more about Hitch 


 

5. Parameter tuning

Your Varnish instance has been pre-configured with the default settings, which suits most Varnish users. We advise you to have a look at them and update the configuration according to your needs.

Our documentation site has a reference section with all available runtime options. This will help you to properly configure the Varnish process.

To change the runtime options, run the following command on your Varnish instance:

sudo systemctl edit varnish.service

Edit accordingly and save the file. To commit these changes, run the following command:

sudo systemctl daemon-reload

And finally, it’s a matter of restarting Varnish, by running the following command:

sudo systemctl restart varnish.service

▸ Varnish runtime options reference manual


 

6. Support

Help is provided through community documentation, IRC channels and forums: https://www.varnish-cache.org/support/

 


 

7. Taking it to the next level

You are currently running Varnish Cache version 6 (LTS), which is the open-source version of Varnish. We also offer an enterprise version of Varnish, which is also available in the cloud provider marketplaces.

Varnish Enterprise 6 (LTS) is the enterprise equivalent of the version you’re currently running and has many more features that are beneficial in the Cloud.

Some notable enterprise features:

  • High Availability
  • Massive Storage Engine
  • End-to-end SSL/TLS protection
  • Data encryption
  • Tag-based object invalidation
  • Request & response body manipulation
  • Throttling & rate limiting
  • Edgestash
  • JSON parsing
  • Built-in HTTP client

▸ Get started with Varnish Enterprise

 


 

8. Rate your experience

Thanks for trying out Varnish Cloud. We hope that the information we provided on this page, and the references to documentation material, will help you accelerate your web project.

We take your satisfaction to heart. Please let us know about your Varnish Cloud experience by leaving a short review.

▸ Review us on AWS

 ▸ Rate us on Azure

 


 

The Varnish Book

The Varnish Book is really the comprehensive, nitty-gritty technical “bible” for all things Varnish Cache and Varnish Solutions.

Get familiar with VCL (the Varnish Configuration Language) and caching principles, and become an expert in all things Varnish with this manual.

 
Read on…

Check out our blog where our team regularly shares their expertise and useful tips and tricks that will help you solve challenges you may run into.
