In the wake of the ongoing Covid-19 crisis, many things have changed - suddenly. We have seen unprecedented disruption to business, and daily life has changed in unpredictable, and often tragic, ways for most of the world’s population. While the consequences of the virus continue to unfold, its far-reaching effects are undeniable.
Hammering home the performance, performance, performance theme again and again, we’ve told the story of caching and protecting your backend many times. We’ve talked a lot about the ways in which Varnish Streaming Server lets you scale up to serve live and VoD streaming content no matter the demand on your platform. But a big part of this story is maximizing the power of the capacity you already have. Ultimately, that is what caching is all about: you make copies of the content you need to deliver, so those copies can be served much faster than roundtrips to the backend would allow, while the backend itself is protected from overload.
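To make that idea concrete, here is a minimal, illustrative sketch of the caching pattern in Python. This is not Varnish VCL and not how Varnish Streaming Server is implemented; the fetch_from_origin function, the SegmentCache class and the TTL value are all hypothetical stand-ins, chosen only to show how serving a cached copy avoids repeated origin roundtrips.

```python
import time

# Hypothetical stand-in for the origin/backend fetch; in a real streaming
# setup this would be an HTTP request for a video segment.
def fetch_from_origin(segment_url: str) -> bytes:
    time.sleep(0.2)  # simulate a slow backend roundtrip
    return f"bytes of {segment_url}".encode()

class SegmentCache:
    """Tiny in-memory cache: serve copies instead of re-fetching from origin."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self.store: dict[str, tuple[float, bytes]] = {}

    def get(self, segment_url: str) -> bytes:
        now = time.monotonic()
        hit = self.store.get(segment_url)
        if hit and now - hit[0] < self.ttl:
            return hit[1]                      # cache hit: no origin roundtrip
        body = fetch_from_origin(segment_url)  # cache miss: one origin fetch
        self.store[segment_url] = (now, body)
        return body

cache = SegmentCache(ttl_seconds=10)
cache.get("/vod/movie/seg-001.ts")  # slow: goes to the origin once
cache.get("/vod/movie/seg-001.ts")  # fast: served from the cached copy
```

The same principle scales up in a real cache: every request served from a stored copy is a request the backend never sees, which is both where the speed and where the protection come from.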
As consumers, we expect to access any content we want, on any device, with near-instant availability, on demand. We don’t usually think about what is required technically to make this work. One of the biggest components in this equation is storage and the process of retrieving selected content from storage in order to serve it. With massive libraries of both in-demand and long-tail content available, the challenges of fast, highly available and consistent delivery are growing.
We talk and write a lot about video streaming and how we can help companies achieve the vaunted trifecta of streaming performance: speed, reliability and flexible scalability. But aside from the ambitious descriptors, what do we actually mean when we break streaming down into its component parts?