We talk and write a lot about video streaming and how we can help companies achieve the vaunted trifecta of streaming performance: speed, reliability and flexible scalability. But ambitious descriptors aside, what do we actually mean by streaming when we break it down into its component parts?
The bare bones: What you need to set up your streaming solution
Whether you’re a media business working in broadcasting, distribution or any other field, you’ll have large volumes of video content to serve at whatever scale users demand. Broken down, the core of the streaming proposition consists of two primary use cases: origin protection and points of presence (PoPs) at the edge. With this set-up, you’ll be looking at a two-tier streaming solution, which really comes down to the right software to manage the specific hardware and storage behind it: the edge provides the bandwidth, while the storage tier holds the vast content libraries most companies now stream. That covers the “what you need for high-performance, reliable streaming” question without going into specifications, because those will be tailored to you.
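To make the “origin protection” half of that two-tier picture concrete, here is a toy sketch of the idea: an edge tier answers repeated requests for the same chunk from its own cache, so the origin is hit only once per chunk no matter how many viewers ask for it. This is a conceptual illustration in Python, not Varnish code; the paths and data are made up.

```python
# Toy illustration of origin protection at an edge tier: repeated
# requests for the same chunk are served from an in-memory cache,
# so the origin sees one fetch per chunk, not one per viewer.
# (A sketch of the concept only; not how Varnish is implemented.)

origin_hits = 0

def fetch_from_origin(path: str) -> bytes:
    """Stand-in for an HTTP fetch from the origin server."""
    global origin_hits
    origin_hits += 1
    return f"<origin data for {path}>".encode()

cache: dict[str, bytes] = {}

def edge_request(path: str) -> bytes:
    """Cache hit: serve from memory. Cache miss: fetch, store, serve."""
    if path not in cache:
        cache[path] = fetch_from_origin(path)
    return cache[path]

# A thousand viewers requesting the same chunk produce one origin fetch:
for _ in range(1000):
    edge_request("/video/segment1.ts")
print(origin_hits)  # 1
```

However many PoPs sit in front of it, the origin only ever does the work once per object, which is what lets the edge tier absorb audience-scale demand.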
How streaming works
Over time, the way video is streamed has changed. In the past it was done with progressive download: the player downloads the file from start to finish and begins playback as soon as enough data has arrived. The trouble is that when the playback rate is faster than the download rate, the player runs out of data. And what does that mean? Buffering. Yes, waiting for the content you want to play until more data is downloaded.
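The stall is simple arithmetic: the buffer drains at the difference between the playback bitrate and the download rate. A toy calculation, with entirely made-up numbers, shows when a progressive download runs dry:

```python
# Toy illustration of why progressive download stalls: if the player
# consumes bits faster than the network delivers them, the buffer
# eventually empties and playback pauses ("buffering").
# All numbers here are illustrative assumptions, not measurements.

def seconds_until_stall(bitrate_mbps: float, download_mbps: float,
                        startup_buffer_s: float) -> float:
    """Return how long playback runs before the buffer empties.

    The buffer drains at (bitrate - download) megabits per second;
    if the download keeps up, playback never stalls.
    """
    deficit = bitrate_mbps - download_mbps
    if deficit <= 0:
        return float("inf")  # network keeps up: no stall
    # Megabits buffered at startup, divided by the drain rate.
    return startup_buffer_s * bitrate_mbps / deficit

# A 5 Mbps video over a 4 Mbps link, with 10 seconds pre-buffered:
print(seconds_until_stall(5.0, 4.0, 10.0))  # 50.0 seconds, then buffering
```

A mere 1 Mbps shortfall empties a 10-second head start in under a minute, which is why progressive download is so sensitive to network conditions.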
Streaming media is instead divided into chunks: smaller fragments get delivered one at a time based on the user’s request. Chunked video is ideal for streaming - even more so when streaming live content.
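Because each chunk is independently addressable, a player is essentially a loop that requests segments from a playlist, and seeking is just starting that loop at a different index. A minimal sketch, assuming a hypothetical playlist of segment names (real players parse HLS or DASH manifests instead):

```python
# Minimal sketch of chunk-by-chunk delivery. The playlist, segment
# names and fetch function are all hypothetical stand-ins.

from typing import Iterator, List

def fetch_segment(name: str) -> bytes:
    """Stand-in for an HTTP GET of one chunk; here we fabricate bytes."""
    return f"<data for {name}>".encode()

def play(playlist: List[str], start_at: int = 0) -> Iterator[bytes]:
    """Request segments one at a time, starting anywhere in the video.

    Because each chunk is independently addressable, seeking is just
    starting the loop at a different index; no full download is needed.
    """
    for name in playlist[start_at:]:
        yield fetch_segment(name)

playlist = [f"segment{i}.ts" for i in range(5)]
# Seek straight to the fourth chunk without fetching the first three:
for chunk in play(playlist, start_at=3):
    print(chunk)
```

For live content the playlist simply keeps growing at the end, and the same loop follows it, which is why chunking suits live streaming so well.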
Is streaming as simple as that?
Yes and no.
The chunked data is served continuously, which makes for a smoother end-user experience and lets the user skip to different parts of the requested video without waiting for the whole file to download. But in order to deliver these chunks, we have to package them, and here we run into a conflict between packaging formats. The two most frequently used are HLS, developed by Apple, and MPEG-DASH, an open standard supported by most everyone else. Unfortunately, reaching all devices means packaging and storing every video in both formats, so roughly twice the resources are consumed for a single video. (This is beginning to change with better cooperation between Apple and the rest of the industry on a common format called CMAF, which lets one set of chunks serve both.)
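The resource cost of that dual packaging is easy to estimate. A back-of-the-envelope calculation, using an illustrative library size and bitrate rather than real figures, shows how storing both HLS and DASH renditions roughly doubles storage compared with a single shared (CMAF-style) set of chunks:

```python
# Back-of-the-envelope dual-packaging cost. The library size (hours)
# and bitrate below are illustrative assumptions, not real data.

def library_storage_gb(hours: float, bitrate_mbps: float,
                       formats: int) -> float:
    """Storage needed for one copy of the library per packaged format."""
    megabits = hours * 3600 * bitrate_mbps   # total content in megabits
    return formats * megabits / 8 / 1000     # Mb -> MB -> GB

hls_plus_dash = library_storage_gb(1000, 5.0, formats=2)
cmaf_only = library_storage_gb(1000, 5.0, formats=1)
print(hls_plus_dash, cmaf_only)  # 4500.0 2250.0
```

For a 1,000-hour library at 5 Mbps, packaging both formats costs 4.5 TB where one shared format would cost 2.25 TB; multiply that across bitrate ladders and the duplication adds up fast.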
Many of the issues companies - and indeed end-users - face with streaming come down to the network being used. Networks often cannot keep up with the amount of data we want to exchange over an HTTP connection. While computational power and server capacity can be scaled to address latency and availability issues, there is no fast-track solution for network slowness. That’s part of why having optimally sized and configured software and hardware, coupled with strategically placed PoPs, is important: it makes sure all the conditions you can control are ideal for streaming, even if the network itself isn’t.
Varnish Streaming Server - Your high-performance HTTP logic box
We've designed our Streaming Server solution from the ground up to ensure you can take advantage of the kind of ideal-scenario settings we've laid out above. We call it a "high-performance HTTP logic box", but all you need to know is that with the power and flexibility of the reverse caching proxy, Varnish Streaming Server offers a powerful, low-resource, stable way to ensure you’re always delivering the three keys to streaming success - speed, reliability and flexible scalability.
Want to learn more?