March 22, 2024
11 min read time

Varnish Cache 7.5 Is Now Available

March 15th just passed, which means we can expect a new release of Varnish Cache. As a quick reminder: every year on March 15th and September 15th a new version of Varnish Cache is released.

This time around, the actual release date was March 18th, which marks the availability of version 7.5.

Just like any of the “fresh” releases, there are new features, bug fixes, and other improvements. Allow me to highlight a few:

New features

Here are some of the new features in Varnish Cache 7.5:

  • Tunable parameters, new counters, and a VMOD to mitigate the HTTP/2 rapid reset vulnerability
  • A new HTTP/2 flow control window timeout and a counter to measure it
  • A feature flag to interrupt request processing once the client is gone and a counter to measure abandoned requests
  • The ability to disable timeouts in VCL and through runtime parameters
  • A new timeout to limit the duration of pipe transactions
  • Reverted behavior for handling ESI error conditions
  • A default format label to facilitate extending the format of varnishncsa

Let’s have a look at these features in some more detail and focus on the HTTP/2 rapid reset vulnerability first.

HTTP/2 rapid reset vulnerability mitigation

In late 2023, a vulnerability in the HTTP/2 protocol was discovered, and was reported as CVE-2023-44487. The vulnerability exposes a potential denial of service attack where a large volume of streams can be created, which are immediately reset without ever reaching the maximum number of streams.

This causes Varnish, and other affected HTTP/2 implementations, to consume unnecessary server resources for requests for which the response will never be delivered.

Because the HTTP/2 protocol itself has no provisions to limit these so-called “rapid resets”, every implementation of HTTP/2 had to find an appropriate fix.

We reported the issue as VSV00013 and added a set of new runtime parameters to identify and limit rapid resets.

Although the issue was fixed with a security release (Varnish Cache 7.3.1 and 7.4.2) in late 2023, it takes a new twice-yearly release to end up on the “what’s new” list.

  • The new h2_rapid_reset runtime parameter defines the duration below which a reset is considered rapid.
  • The new h2_rapid_reset_period runtime parameter defines the sliding period in which rapid resets are measured.
  • The new h2_rapid_reset_limit runtime parameter defines the number of rapid resets that are allowed within the predefined period before rate limiting kicks in.

By identifying what a rapid reset is in terms of timing, and by defining how many rapid resets can occur within a given timeframe, Varnish will protect itself by closing the client connection.
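As an illustration, these thresholds can be tuned at runtime with varnishadm; the values below are examples, not recommendations:

```shell
# Consider a reset "rapid" when it arrives within 1 second (example value)
varnishadm param.set h2_rapid_reset 1.000

# Measure rapid resets over a sliding window of 60 seconds (example value)
varnishadm param.set h2_rapid_reset_period 60.000

# Allow at most 100 rapid resets within that window (example value)
varnishadm param.set h2_rapid_reset_limit 100
```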

In varnishlog, you will see the following lines appear when the rapid reset attack is detected:

Error          H2: Hit RST limit. Closing session.
SessClose      RAPID_RESET 0.754

The Hit RST limit error and the RAPID_RESET session close reason are also new. As explained earlier: although the vulnerability was fixed in Varnish Cache 7.3.1 and 7.4.2, the mitigation is still considered a new feature because the documentation only tracks the twice-yearly releases.

There is also a new MAIN.sc_rapid_reset counter in varnishstat that counts the number of sessions that were closed because the rapid reset limit was exceeded.
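To keep an eye on this counter, you can filter the varnishstat output down to the relevant field; a quick sketch:

```shell
# One-shot output, restricted to the rapid reset counter
varnishstat -1 -f MAIN.sc_rapid_reset
```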

Varnish Cache 7.5 even has a new h2 VMOD that lets you tune the rapid reset parameters in VCL on a per-request basis. There is also a function called h2.is() that returns true if HTTP/2 is used.

Here’s an example of a VCL override of the rapid reset parameters:

vcl 4.1;
import h2;
backend default {
    .host = "server.example.com";
}

sub vcl_recv {
    if (h2.is()) {
        h2.rapid_reset(2s);
        h2.rapid_reset_limit(20);
        h2.rapid_reset_period(1m);
    }
}

Detecting “broke” streams in HTTP/2

Another new feature that is related to the HTTP/2 protocol is the detection of so-called “broke” streams.

Because HTTP/2 supports multiplexing, multiple requests can be sent over a single connection in parallel. This requires some negotiation between client and server on what resources need to be sent when.

The HTTP/2 protocol uses so-called flow control “window update” frames to announce the number of bytes a peer is ready to receive in subsequent data frames. This is used by both clients and servers.

In Varnish Cache 7.5 a new h2_window_timeout runtime parameter is introduced that defines how long an HTTP/2 stream can stall its delivery while waiting for a flow control window update.

If the timeout is exceeded, the stream is considered broke, and is reset. If all streams for the connection are broke, the entire connection is considered bankrupt and the new MAIN.sc_bankrupt counter is increased in varnishstat.
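Assuming the default does not fit your traffic, the timeout can be tuned and the resulting counter inspected at runtime; the 5-second value below is purely illustrative:

```shell
# Reset a stream that stalls for more than 5 seconds waiting
# for a flow control window update (illustrative value)
varnishadm param.set h2_window_timeout 5.000

# Check how many connections were closed as "bankrupt"
varnishstat -1 -f MAIN.sc_bankrupt
```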

Interrupt request processing when the client is gone

The new vcl_req_reset feature flag, which is on by default in Varnish Cache 7.5, will interrupt request processing when it detects that the client is gone. After all, why waste server resources processing a request and serving a response that will never be delivered?

This new feature will interrupt the stream and will internally log the interruption with a fake HTTP 408 status code. It’s a fake response because there’s no active stream left to receive a real one.

This new feature flag can save a significant amount of server resources, and its occurrence is measured in varnishstat using the new MAIN.req_reset counter.

The new feature is particularly useful when a rapid reset attack is detected or when an HTTP/2 stream is considered broke. Rather than riding it out, the vcl_req_reset feature will proactively interrupt request processing and close the stream or connection.

It is possible to disable this feature flag at runtime by using the following command:

varnishadm param.set feature -vcl_req_reset

However, it is advised to keep vcl_req_reset enabled, to conserve server resources in case an HTTP/2 client is gone.

Disabling timeouts

Varnish offers a whole range of timeouts which can be set using runtime parameters. It’s also possible to override them in VCL.

In Varnish Cache 7.5 it is now possible to disable these timeouts, instead of assigning a zero value.

If for example you want to disable the first_byte_timeout at runtime, you can use the following command:

varnishadm param.set first_byte_timeout never

It’s also possible to disable the timeout at startup by adding the following parameter to varnishd:

-p first_byte_timeout=never

You can also disable timeouts in VCL by unsetting them, potentially on a per-request basis. Here’s how to do that:

unset bereq.first_byte_timeout;
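A minimal VCL sketch of such a per-request override; the backend address and the URL pattern are hypothetical:

```
vcl 4.1;

backend default {
    .host = "server.example.com";
}

sub vcl_backend_fetch {
    # Hypothetical example: disable the first-byte timeout only
    # for a slow reporting endpoint, keep the default elsewhere
    if (bereq.url ~ "^/reports/") {
        unset bereq.first_byte_timeout;
    }
}
```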

Limiting the duration of pipe transactions

A “pipe” transaction is triggered when Varnish doesn’t recognize the incoming request as a valid HTTP request. Instead, it opens a TCP connection to the backend and sends the bytes without any HTTP-related processing.

This means “pipe” transactions are not subject to the HTTP-related timeouts that Varnish supports.

In Varnish Cache 7.5 a new timeout was introduced to limit the duration of “pipe” transactions. This is not an idle timeout, but rather a deadline that is enforced, regardless of the activity in either direction.

The new timeout is called pipe_task_deadline and defaults to “never” to preserve backward compatibility.

You can set the runtime parameter using varnishadm param.set or by adding it with -p to varnishd at startup. However, you can also override its value using the bereq.task_deadline variable in VCL, which is available within the vcl_pipe subroutine.
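Here’s a minimal sketch of such a VCL override; the two-minute deadline and the backend address are arbitrary example values:

```
vcl 4.1;

backend default {
    .host = "server.example.com";
}

sub vcl_pipe {
    # Abort pipe transactions that run longer than two minutes,
    # regardless of activity in either direction (example value)
    set bereq.task_deadline = 120s;
}
```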

Reverted behavior for handling ESI error conditions

In version 7.3 of Varnish Cache, processing error conditions of ESI subrequests was introduced, allowing Varnish to process the “onerror” attribute of an ESI tag, as illustrated below:

<esi:include src="/some-page" onerror="continue"/>

This allows Varnish to continue processing the parent request, even though the ESI subrequest has failed. The standard behavior in 7.3 was to consider responses with status codes other than 200 and 204 to be erroneous.

It required enabling the esi_include_onerror feature flag and the explicit onerror="continue" attribute to successfully handle other status codes.

In Varnish Cache 7.5 that behavior was reverted and ESI subrequests are now processed regardless of the status code that is returned.

When the esi_include_onerror feature flag is enabled, only the 200 and 204 status codes are considered successful, and the “onerror” attribute of an ESI tag is processed.

Here’s how you enable the esi_include_onerror feature flag at runtime:

varnishadm param.set feature +esi_include_onerror

It’s also possible to enable the feature flag at startup by adding the following parameter to varnishd:

-p feature=+esi_include_onerror
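Note that the “onerror” attribute only matters once ESI processing is enabled for the response. A minimal sketch, with a hypothetical backend address:

```
vcl 4.1;

backend default {
    .host = "server.example.com";
}

sub vcl_backend_response {
    # Parse <esi:include> tags (and their onerror attributes)
    # in the response body
    set beresp.do_esi = true;
}
```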

Using the default format label in varnishncsa

varnishncsa is a program that returns Varnish logs in a single-line NCSA-style format.

The standard format for a log line goes as follows:

%h %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i"

This includes the following fields:

  • The remote host
  • The remote log name
  • The remote user
  • The timestamp
  • The first line of the request
  • The status code
  • The response size
  • The value of the “Referer” header
  • The value of the “User-agent” header

This format is called the “NCSA combined log format”, but it can be extended further. Extending the log format can be quite tricky, and it’s easy to forget one of the fields of the default format.

That’s why a %{Varnish:default_format}x label was introduced in Varnish Cache 7.5. It captures all the fields of the NCSA combined log format and makes extending the format easier.

So if you want to extend the default log format in varnishncsa, and for example add a field that captures how Varnish handled a request, you could use the following command:

varnishncsa -F "%{Varnish:default_format}x %{Varnish:handling}x"
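The label can be combined with any other varnishncsa format fields. For example, %D (the time taken to serve the request, in microseconds) could be appended as well:

```shell
# Combined log format, plus cache handling and request duration
varnishncsa -F "%{Varnish:default_format}x %{Varnish:handling}x %D"
```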

More changes

More changes, fixes and features that are part of Varnish Cache 7.5 can be found in the release notes on https://varnish-cache.org/docs/trunk/whats-new/changes-7.5.html.

The full documentation can be accessed here. 

Downloading Varnish Cache 7.5

Do you want to give Varnish Cache 7.5 a try? Download the source code.

There are also packages available at https://packagecloud.io/varnishcache/varnish75.

There is also an official Docker image for Varnish Cache 7.5, which you can pull using docker pull varnish:7.5. Find more information about our official Docker images here.

Which versions of Varnish are now supported?

The Varnish Cache community only supports the two latest versions. Now that Varnish Cache 7.5 is out, version 7.3 is end-of-life. Varnish 7.4 is still supported though.

Additionally, Varnish Software still maintains a long-term supported version of Varnish Cache 6 called Varnish Cache 6.0 LTS. See here for LTS download and install instructions.
