February 15, 2016
6 min read time

API Performance is More Important than You Think

This post was written by Varnish Software's Denis Brækhus and originally published on API.Report on 8 February 2016.

When an API is born, its purpose in life might be well-defined and its expected usage limited. The creators of the API solve a particular problem, and they do it well enough to handle the initial volume of calls. Maybe the API is designed to scale with a certain projected increase in usage. At this point, performance does not seem like an important consideration.

For many APIs the initial use case will be enough: the lifecycle goes as expected, and the API delivers its responses to its pre-defined consumers until it is no longer needed.

In today’s environment, however, the assumptions made when initially creating the API might only hold true for so long. The API that was created to serve a handful of internal applications can suddenly become interesting to, and eventually be opened up to, a set of consumers that is orders of magnitude larger.

The evolving use of APIs

Let me illustrate this with an example that is close to me: real-time public transport information in Oslo. I have lived in, and close to, Oslo (Norway) for over 20 years, and I have followed the evolution of the concept known as real-time traffic information during my years as a public-transport user.

In the beginning, real-time departure information was limited to displays at the busiest stops. I am going to assume that one or more APIs were created at this point to deliver this information. The developers responsible for this project could very well have been aiming for a specific number of stops (a relatively well-defined target with limited uncertainty), and the project most likely had a planned roll-out strategy to grow the service and the number of stops gradually. The visible evidence was that we got more and more real-time signs around the city over the years.

The platform was not highly reliable early on, but it was mostly able to give better information than the static timetables of yesteryear. Over the years the service (and, I assume, its backend APIs) seemed to grow more stable.

A web application for querying the traffic information was released at some point, and I still remember the early WAP version of said service. It was not very user friendly, and I expect its reach was limited.

Then, something changed. In 2008, the first iteration of the iOS app was released to the public. Suddenly the number of consumers of the same real-time API was no longer limited by the number of displays around the city; the potential user base grew to everyone owning a device capable of running the app. In 2008 there were not that many smartphones, but as we all know, that number pretty much exploded. The combination of the user-interface advances the app offered over the early mobile web and the ubiquity of the mobile network made it an invaluable tool for users of the public transport system.

Today I would say it’s safe to assume that regular commuters (and other inhabitants in and around Oslo) are more likely than not to have and use one of the many versions of the real-time app.

So how has this affected the reliability of the service? As an outsider I can only assume that the project has been challenging for the developers and operations people responsible for the API services. To this day the platform seems to work well most of the time, but when enough routes and lines in the public transport system are disrupted, it still appears to struggle with the load.

When you think about it, it’s not that weird. If there are large-scale schedule deviations, suddenly a large number of people will pick up their smartphone and poll the API for updates, most likely within a short interval.

It would have been fascinating to know more details about the API platform behind the service, but one thing is for sure: the original assumptions about the usage of this real-time service were heavily disrupted by the advent of the smartphone app.

Having discussed APIs with numerous people lately, I have realized that this is not at all a unique story. The usage of APIs is more likely to change over time than that of traditional HTTP-based applications, and more and more APIs that were initially private or limited in usage are being opened up for wider consumption.

Coping with the load

Given a scenario where an API was created with limited use in mind, there are a few useful strategies for handling a big increase in usage. Often, the additional traffic will come in the form of an increased number of read-only calls: more people asking for departure times, more people wanting to know which buses leave from a certain stop, and so on.

Here, caching responses is extremely effective.

Even in the real-time traffic case, caching for half a minute would potentially save a lot of work for the backend systems and drastically improve response times for users. When you consider how many orders of magnitude faster reading cached data from RAM is than computing a new result (possibly involving disk/SSD reads and multiple network round trips), it makes sense to cache any data that will be accessed more than once via an API.
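To make this concrete, here is a minimal sketch in Python (the endpoint, stop data, and helper function are invented for illustration) of an API that marks its responses as cacheable for 30 seconds, so that a cache sitting in front of it, such as Varnish, can answer repeat requests from memory:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def lookup_departures(stop_id):
    # Stand-in for the real backend work: database queries, upstream calls...
    return [{"line": "18", "destination": "Rikshospitalet", "minutes": 4}]

@app.route("/departures/<stop_id>")
def departures(stop_id):
    response = jsonify({"stop": stop_id,
                        "departures": lookup_departures(stop_id)})
    # A 30-second TTL: every client polling this stop within that window is
    # answered by the cache instead of hitting the backend.
    response.headers["Cache-Control"] = "public, max-age=30"
    return response
```

The backend does the expensive lookup at most twice a minute per stop, no matter how many thousands of commuters are polling during a disruption.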

For APIs that require some form of authentication, offloading and centralizing the authentication handling is also a major boon. On the one hand, you avoid complexity in the actual API codebase; on the other, you ensure that the API is only hit by already-validated requests, shielding it from invalid or malicious calls.
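To illustrate what such offloading can look like, here is a small Python sketch: a WSGI gateway that validates an API key before requests ever reach the application. The header name and the in-memory key store are assumptions made for the example; a real deployment would consult a shared auth service:

```python
VALID_KEYS = {"example-key-123"}  # hypothetical key store for illustration

class AuthGateway:
    """Wraps a WSGI app so that only validated requests reach it."""

    def __init__(self, app):
        self.app = app  # the actual API, which stays free of auth logic

    def __call__(self, environ, start_response):
        key = environ.get("HTTP_X_API_KEY")  # the X-Api-Key request header
        if key not in VALID_KEYS:
            start_response("401 Unauthorized",
                           [("Content-Type", "text/plain")])
            return [b"invalid or missing API key\n"]
        return self.app(environ, start_response)

# Usage: wrap the API application once, at the edge:
# application = AuthGateway(my_api_app)
```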

In some scenarios, the finite capacity of an API is known, and to prevent overloading the service you need to be able to limit the rate of requests. Again, the better way to do this is to offload it to infrastructure purpose-built for high-volume transaction processing, such as an API management solution.
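One common building block for this kind of limiting is a token bucket. Below is a minimal Python sketch with invented capacity numbers; an API management layer would apply the same logic in purpose-built, high-performance code:

```python
import time

class TokenBucket:
    """Allows `rate` requests per second on average, bursting up to `burst`."""

    def __init__(self, rate, burst):
        self.rate = rate          # tokens (requests) refilled per second
        self.capacity = burst     # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the caller should answer 429 Too Many Requests

# e.g. protecting a backend known to cope with roughly 100 calls per second:
bucket = TokenBucket(rate=100, burst=200)
```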

The average API management tool, created for a bygone era of slow APIs, struggles to handle 200 API calls per second. Clearly, these tools are not up to the high-volume API usage now required. In fact, heavily trafficked media and entertainment applications are processing more than 10,000 API calls per second, and the API management tools that support them need to reach much higher performance levels to meet predicted volumes of API calls.

APIs are here to stay, and the only certain thing is that predicting the future usage of your API is close to impossible. Sudden events could trigger massive traffic increases to your APIs practically overnight. Having strategies for ensuring current and future API performance is crucial, no matter what the situation looks like today.

Ready to learn more about API management strategies and the Varnish API Engine? Watch our API strategy webinar to learn more.

Watch the webinar.

Photo © 2011 Oriol Salvador, used under Creative Commons license.