February 28, 2024
4 min read time

Delivering Powerful Web Functionality at the Edge

In this blog post, we’ll explore the evolving landscape of content delivery. We’ll delve into the shift to edge computing, use cases you can deploy at the edge today, and what you need to succeed at the edge in terms of hardware, Infrastructure-as-a-Service (IaaS), and content delivery software.

The hunt for exceptional Quality of Service at scale, and at low cost, is pushing workloads to the network edge. Moving compute resources to the edge improves existing services and enables new latency-sensitive, bandwidth-intensive use cases. Achieving this, however, requires infrastructure capable of handling the performance, compute, scalability and flexibility demands of edge delivery. 

The Shift to Edge Computing

Let’s start with a very abridged history of content delivery on the internet. In the “old days,” you would have one Point of Presence (PoP), usually located wherever the people who maintained it happened to be. No cloud, just a monolithic single PoP to cater to all users. As the web went global, this became an issue, with every user request needing to make a time-consuming roundtrip to that one location for every resource accessed.

How was this issue solved? By adding an “edge,” although it wasn’t called that at the time: basic server resources positioned closer to end users, able to handle the delivery of some categories of content rather than every request going back to the origin. Essentially, content that was cacheable and reusable could be stored at these rudimentary edge nodes and served to users in their locations, saving a lot of time and enabling a better user experience. This is, of course, basically what a content delivery network is.

After that, as services grew in complexity, there was a move towards isolating logic, splitting components and adopting a microservices-based model. Now our origin isn’t just a monolith; it’s an army of smaller programs, databases and other components. Everything is simpler, and maintenance and upgrades are faster.

Users often still need to go back to the origin for more complex requests, so why not take some of the intelligence we’ve fragmented and isolated and deploy it in the edge PoPs, further reducing the number of times clients need to go back to the origin? This is where edge computing has come from and where it’s going: minimal latency, faster services.

Lower latency is one benefit, but there are many others:

  • Less strain on core networks
  • More efficient scaling
  • Reduced buffering
  • New services enabled
  • Lower data transit costs

All of this becomes possible by processing data closer to users.

But Varnish Only Caches Content, Right?

Hang on, why is Varnish Software talking about edge computing? Isn’t Varnish a reverse proxy? A web accelerator? Well, the Varnish ambition is to be the fastest, most efficient HTTP engine. A key tool for this is Varnish Configuration Language (VCL), a uniquely powerful state machine-based configuration language that lets you apply logic to every HTTP transaction and execute it in real time, on the fly, giving you extreme control and flexibility. With VCL you unlock a whole range of logic at the edge; a short sketch follows the list below. Edge computing is what Varnish has been doing for the past 15 years:

  • Log generation
  • Origin health checks
  • Paywalls
  • Token (JWT) validation and generation
  • Response body manipulation
  • Device detection
  • Geolocation and customizable actions
  • Custom rate limiting
  • Content stitching
  • Authentication
  • Image optimization
  • Request tracing
  • Request saving
  • A/B testing
  • Advanced HTTP routing
  • Per-transaction storage selection
  • Request and response scrubbing
  • Cookie filtering

When the Varnish project began, the term “edge compute” wasn’t in general usage, but this capability has been built into Varnish from the beginning.
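
To make this concrete, here is a minimal VCL sketch of the kind of per-transaction logic described above: device detection, caching and cache-status reporting. The backend host, the TTL and the header names here are illustrative assumptions, not recommendations:

    vcl 4.1;

    # Hypothetical origin; host and port are placeholders.
    backend default {
        .host = "origin.example.com";
        .port = "80";
    }

    sub vcl_recv {
        # Edge logic: classify the client device from the User-Agent,
        # normalized into a header for later use (e.g. content variation).
        if (req.http.User-Agent ~ "(?i)mobile") {
            set req.http.X-Device = "mobile";
        } else {
            set req.http.X-Device = "desktop";
        }
    }

    sub vcl_backend_response {
        # Cache fetched responses at the edge (an illustrative TTL).
        set beresp.ttl = 5m;
    }

    sub vcl_deliver {
        # Report to the client whether the response came from the edge cache.
        if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
        } else {
            set resp.http.X-Cache = "MISS";
        }
    }

The same pattern extends to the heavier items on the list; JWT validation or image optimization, for example, are typically handled with Varnish modules (VMODs) rather than plain VCL.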

What You Need to Succeed at the Edge

Edge capabilities like these certainly make greater demands on infrastructure, with more computing power and throughput required to handle the increased request flow and task complexity. The Varnish philosophy is that simply throwing more hardware at the problem isn’t always the best idea. It’s about efficiency: making the most of available processing power and doing as much as possible with as little overhead as possible.

Infrastructure advancements are enabling the scale, performance, cost-effectiveness and energy efficiency needed for delivering edge workloads. 

Vital to successful edge infrastructure deployments, however, is close collaboration between software and hardware providers, so that the two stay aligned and software extracts the maximum possible performance from the silicon in the most efficient way. An ongoing partnership between Varnish Software and Intel has made great strides here, matching software optimization to advancements in edge infrastructure to fully utilize the hardware and meet the demands of edge workloads while reducing energy and resource needs. Successful edge software combines deep knowledge of the kernel, the processor, and the capabilities of CPU, bus, memory and persistent storage to use system resources effectively, yielding highly performant services, efficiency gains and resource savings. Together, Intel and Varnish have set world-leading benchmarks for throughput and energy efficiency at the edge, with per-server milestones of 1.3 Tbps and 1.1 Gbps per watt.

These benchmarks were achieved on a Supermicro 2U CloudDC server with Gen4 NICs, 4th Gen Intel Xeon processors and Varnish Enterprise 6, serving a synthetic video load with software-based TLS termination, no hardware accelerators and everything in memory, a profile ideal for live streaming and caching use cases.
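
For context, if both milestones refer to the same configuration, they imply a total draw of roughly 1.3 Tbps ÷ 1.1 Gbps/W ≈ 1,200 W for the whole server.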

Watch our webinar on-demand for a deeper dive into delivering powerful web functionality from the edge.
