When we originally launched the Varnish API Engine, we sought performance. We’ve continued to develop our API engine with a focus on continuous improvement as we’ve built Varnish API Engine 2.0. We know performance and scalability are underdeveloped areas in the API management space, both in terms of how API gateways will match current API volumes and how they’ll cope with the massive increase in traffic ahead. We took the engine out for a spin under several different conditions and wrote up our test findings to share upon launch of the latest Varnish API Engine.
As we’ve prepared to launch the Varnish API Engine, we’ve talked to customers and done a lot of testing, discovering that performance is a critical but often neglected aspect of API management. It’s no secret that API-driven development is starting to run the show in many companies and supplies the backbone of many burgeoning business development ideas. If any of that is going to work, APIs are going to have to scale to deliver on performance expectations.
Over the last couple of years we’ve seen an explosion in the use of HTTP-based APIs. They’ve gone from being a rather slow and useless but interesting technology fifteen years ago to today’s high-performance RESTful interfaces that power much of the web and most of the app space.
Varnish Cache has been used for HTTP-based APIs since its inception. The combination of caching, high performance and the flexibility brought by VCL makes it an ideal proxy for APIs. We’ve seen people doing rather complex protocol negotiations in VCL to do interesting things like matching frontend and backend protocols.
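As a minimal sketch of the kind of frontend/backend matching VCL makes possible, consider mapping a version header sent by clients onto the URL scheme a backend expects. The header name, backend address, and path prefixes below are illustrative assumptions, not part of any real deployment:

```vcl
vcl 4.0;

backend api {
    .host = "127.0.0.1";   # assumed backend address
    .port = "8080";        # assumed backend port
}

sub vcl_recv {
    # Hypothetical negotiation: clients signal an API version in a
    # header, while the backend routes versions by URL prefix.
    if (req.http.X-API-Version == "2") {
        set req.url = "/v2" + req.url;
    } else {
        # Default to the v1 path for older or unversioned clients.
        set req.url = "/v1" + req.url;
    }
}
```

Because the rewrite happens in `vcl_recv`, each version of a resource is cached separately under its rewritten URL, so the cache stays correct without any extra Vary logic.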