At Varnish Software, we maintain a steady cadence of updates to Varnish Enterprise: small, regular improvements that reinforce a stable core, plus a few larger efforts cooking in the background. If you're running Varnish Enterprise in production, here’s a look at what’s landed recently and what’s coming soon!
So far this year, we’ve released nine updates to Varnish Enterprise. Many of these are minor (bug fixes, optimizations, small feature additions) but they reflect an ongoing focus: keeping Varnish fast and stable in the messy reality of production, not just under idealized lab conditions.
For the full list of recent updates, see our detailed changelog.
One recent addition with broader implications is varnish-json, a new tool that outputs Varnish logs in structured JSON format. It’s a small utility, but a key dependency for something bigger: OpenTelemetry integration.
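To give a sense of why structured output matters, here is what consuming JSON-formatted log lines might look like. The field names below are invented for illustration; they are not varnish-json's actual schema.

```python
import json

# Hypothetical varnish-json output: one JSON object per log line.
# These field names are illustrative assumptions, not the tool's real schema.
log_line = '{"method": "GET", "url": "/products/42", "status": 200, "ttfb_ms": 3.1}'

record = json.loads(log_line)

# Structured logs can be filtered programmatically instead of with regexes.
if record["status"] >= 500:
    print("backend error:", record["url"])
else:
    print(record["method"], record["url"], "->", record["status"])
```

Because each line is a self-describing object, downstream tools can index, filter, and aggregate fields without fragile text parsing.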
OpenTelemetry is fast becoming the standard for collecting logs, metrics, and traces across distributed systems. We’re in the process of integrating it into Varnish Enterprise to allow teams to export performance and behavior data in a consistent, backend-agnostic format. That means you’ll be able to wire Varnish into your existing observability stack, whether that’s Prometheus, Grafana, or a commercial APM platform.
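As a rough sketch of what backend-agnostic export means in practice, the mapping below translates a cache transaction into OpenTelemetry-style span attributes using the HTTP semantic-convention names. How the Varnish integration actually structures its data is an assumption here, and `varnish.cache.hit` is a hypothetical custom attribute.

```python
# Sketch: mapping a cache transaction onto OpenTelemetry-style span attributes.
# Attribute names follow OTel HTTP semantic conventions; the shape of the
# input transaction dict is an illustrative assumption.
def to_span_attributes(txn: dict) -> dict:
    return {
        "http.request.method": txn["method"],
        "url.path": txn["path"],
        "http.response.status_code": txn["status"],
        "varnish.cache.hit": txn["hit"],  # hypothetical custom attribute
    }

attrs = to_span_attributes({"method": "GET", "path": "/", "status": 200, "hit": True})
```

The point of a consistent attribute vocabulary is that any OTel-compatible backend can consume the data without Varnish-specific parsing.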
Our OpenTelemetry components are currently experimental, but they are available for early access. Reach out via your account manager if you'd like to test them ahead of our next launch.
We’re also building a new global rate limiting system, a distributed mechanism that enforces limits across multiple Varnish nodes.
Traditional rate limiting works locally, per instance. That’s fine until you need to apply limits across a cluster. Our approach uses NATS, a lightweight messaging system, to let Varnish nodes share state and coordinate enforcement. The result: a single logical rate limit, enforced consistently across your fleet.
This feature is still in development, but shaping up to support scenarios like per-user API quotas or global concurrency limits without requiring custom middleware.
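The coordination pattern can be sketched with a toy message bus standing in for NATS: each node counts requests locally, broadcasts its count, and admits a request only while the fleet-wide total is under the limit. Class names and the broadcast protocol below are illustrative, not Varnish Enterprise's implementation.

```python
from collections import defaultdict

class Bus:
    """Toy stand-in for a NATS subject: deliver every message to all subscribers."""
    def __init__(self):
        self.subscribers = []

    def publish(self, msg):
        for callback in self.subscribers:
            callback(msg)

class Node:
    """One Varnish node enforcing its share of a single logical limit."""
    def __init__(self, name, bus, limit):
        self.name = name
        self.limit = limit                # global requests allowed per window
        self.counts = defaultdict(int)    # latest known count per node
        self.bus = bus
        bus.subscribers.append(self.on_message)

    def on_message(self, msg):
        node, count = msg
        self.counts[node] = count         # learn what the other nodes have served

    def allow(self):
        if sum(self.counts.values()) >= self.limit:
            return False                  # fleet-wide budget exhausted
        self.counts[self.name] += 1
        self.bus.publish((self.name, self.counts[self.name]))
        return True

bus = Bus()
a, b = Node("a", bus, limit=5), Node("b", bus, limit=5)
results = [a.allow() for _ in range(3)] + [b.allow() for _ in range(3)]
# Node b rejects its third request even though it only served two itself:
# both nodes are drawing from the same logical budget of 5.
```

A real implementation has to handle delivery lag and node failure, but the essential idea is the same: shared counters turn many local limiters into one global one.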
A major project under active development is our Edge Computing Framework, internally nicknamed slEDGEhammer. The idea is simple: give users the ability to run WebAssembly programs directly inside Varnish, in a sandboxed environment, to extend behavior beyond what’s possible in VCL.
To support this, we've built the runtime plumbing needed to load and execute sandboxed WebAssembly programs inside Varnish. This opens the door to more advanced logic like custom request filters, dynamic content routing, token processing, and even application-layer extensions, all running at the edge, safely isolated.
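The request-filter pattern might look roughly like this. In the real framework the guest would be a compiled WebAssembly module running in a sandbox; the Python function and its interface (fields, action names) are purely an assumption for illustration.

```python
# Conceptual sketch of an edge function: a sandboxed program receives request
# data and returns a decision for the cache to act on. The dict-based
# interface here is hypothetical, not the framework's actual ABI.
def edge_filter(request: dict) -> dict:
    # Block a private path prefix outright.
    if request["path"].startswith("/internal/"):
        return {"action": "deny", "status": 403}
    # Rewrite a legacy route before cache lookup.
    if request["path"] == "/old-home":
        return {"action": "rewrite", "path": "/home"}
    # Everything else proceeds normally.
    return {"action": "pass"}

decision = edge_filter({"path": "/internal/metrics"})
```

The appeal of the WebAssembly approach is that logic like this can be written in any language that compiles to WASM, while the sandbox keeps it isolated from the cache process.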
We’re also building a cluster key-value store, accessible from within WebAssembly programs, to let you track state across requests and across nodes. Think of it as lightweight stateful coordination between edge scripts.
The cluster key-value store offers a uniform interface, and currently supports Redis as its backend, with other pluggable backend adapters coming in the future.
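A pluggable design like that can be sketched as one interface with swappable adapters. The class and method names below are illustrative, not Varnish's API; the Redis adapter only shows the shape (it would need the `redis` client package), while the in-memory backend keeps the example self-contained.

```python
from abc import ABC, abstractmethod

class KVBackend(ABC):
    """Uniform interface every storage adapter must implement."""
    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def set(self, key, value): ...

class MemoryBackend(KVBackend):
    """Trivial in-process backend, used here so the sketch is runnable."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

class RedisBackend(KVBackend):
    """Shape of a Redis adapter: wrap a client behind the same interface."""
    def __init__(self, client):
        self._client = client
    def get(self, key):
        value = self._client.get(key)
        return value.decode() if value is not None else None
    def set(self, key, value):
        self._client.set(key, value)

class ClusterKV:
    """What edge programs would call; the backend is swappable at construction."""
    def __init__(self, backend: KVBackend):
        self._backend = backend
    def get(self, key):
        return self._backend.get(key)
    def set(self, key, value):
        self._backend.set(key, value)

store = ClusterKV(MemoryBackend())
store.set("session:42", "active")
```

Because callers only ever see the uniform interface, swapping Redis for a future backend is a construction-time change, not a rewrite of edge logic.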
All four of the major projects above (OpenTelemetry integration, cluster-wide rate limiting, the WebAssembly framework, and the cluster key-value store) are slated for wider availability in our Fall launch. Until then, we're continuing to ship smaller releases every few weeks to keep production systems fast and robust.
If you want to test any of the experimental features mentioned here, or want more technical detail on upcoming capabilities, get in touch via your account manager.
Documentation covering Varnish Enterprise's features and capabilities is available, along with a detailed changelog.
If you're not yet using Varnish Enterprise, you can start a trial to explore these capabilities in your own environment.