With another CDN announcing its exit from the industry this quarter, the field of affordable, performant, and consistent content delivery options gets a little smaller. As more organizations seek reliable alternatives to traditional CDNs, it’s becoming clear that the performance and cost-effectiveness of the solution matter more than adhering to these “traditional” approaches. This is where innovative solutions like pairing Storj with Varnish Software emerge. Storj provides secure, distributed, S3-compatible object storage with global access, and Varnish provides an intelligent caching layer that enhances speed and efficiency. Together, we create an approach to content delivery that lets businesses cache more, store more, and deliver faster.
So why are we doing this?
Varnish and Storj first met up at IBC 2024, and it was basically love at first sight. For one, there was no toe-stepping: they focus on storage, we focus on delivery. Our shared goal of delivering the best combination of cost, performance, and reliability was the cherry on top. In fact, when we at Varnish told Storj that using us in front of their service could reduce egress fees for their customers, they responded with, “If that makes our customers even happier, good!”
You might be wondering when and why you would front a distributed global network like Storj with a CDN. The great thing about Storj as a CDN origin is that it’s everywhere. This means that in the event of a cache miss, the performance impact is negligible, allowing assets to be retrieved very quickly. Additionally, Storj excels at handling large file sizes. However, as file sizes decrease, the impact of latency becomes more noticeable, and this is where fronting Storj with Varnish becomes an ideal solution.
To demonstrate this, engineers at Varnish Software and Storj teamed up to run some tests and show how it all works!
The Tests:
Setting up our tests
Before we could conduct any testing, the first thing we needed to do was create a Storj S3 bucket, drop in a set of files ranging in size from 100KB to 50MB, and then set up our Varnish S3 Shield. For more information on that, you can watch my 5-minute demo video or follow along with our blog post documenting the process.
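If you prefer to script that preparation rather than click through it, here is a minimal sketch of what it can look like using boto3 against Storj's hosted S3 gateway. The endpoint, credential environment variables, bucket name, and exact size list are placeholders for illustration, not our actual test configuration.

```python
# Sketch: create a test bucket on Storj's S3-compatible gateway and upload
# dummy objects of increasing size. Endpoint, credentials, bucket name, and
# the size list are placeholders -- adapt them to your own setup.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.storjshare.io",  # Storj hosted S3 gateway
    aws_access_key_id=os.environ["STORJ_ACCESS_KEY"],
    aws_secret_access_key=os.environ["STORJ_SECRET_KEY"],
)

bucket = "varnish-s3-shield-test"  # hypothetical bucket name
s3.create_bucket(Bucket=bucket)

# Test objects from 100 KB up to 50 MB, filled with random bytes.
sizes_kb = {"100KB": 100, "1MB": 1024, "10MB": 10 * 1024, "50MB": 50 * 1024}
for name, kb in sizes_kb.items():
    s3.put_object(Bucket=bucket, Key=f"test-{name}.bin", Body=os.urandom(kb * 1024))
```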
We wanted to test Storj performance around the world and compare S3 delivery with and without an S3 shield (caching layer) in front. To do so, we placed our Varnish servers in San Francisco, Amsterdam, and Singapore, and then set up three more servers acting as load generators. To get a control of sorts, we first had the load-generating servers run the test directly against the Storj gateway and our S3 bucket; this shows what users might experience without a caching layer. After that, we had the load generators run the same test against our Varnish servers. The before-and-after story really shows the value of using Varnish alongside Storj, but we will get into that in the Results section.
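Conceptually, the comparison boils down to running the same fetches against two base URLs and timing them. Here is a rough Python stand-in for that control-versus-shield idea; the hostnames and object path are hypothetical, and the real tests used a proper load-generating client (more on that below).

```python
# Sketch of the control-vs-shield comparison: time the same set of fetches
# directly against the Storj gateway and then against a Varnish server in
# front of it. Hostnames and the object path are hypothetical.
import time
import requests

DIRECT = "https://gateway.storjshare.io/varnish-s3-shield-test/test-1MB.bin"
SHIELD = "https://varnish-sfo.example.com/test-1MB.bin"  # Varnish in front of Storj

def timed_fetches(url: str, count: int = 50) -> float:
    """Fetch `url` `count` times and return the total elapsed seconds."""
    start = time.perf_counter()
    for _ in range(count):
        requests.get(url).raise_for_status()
    return time.perf_counter() - start

direct_s = timed_fetches(DIRECT)
shield_s = timed_fetches(SHIELD)
print(f"direct: {direct_s:.2f}s, via Varnish: {shield_s:.2f}s, "
      f"improvement: {(direct_s / shield_s - 1) * 100:.0f}%")
```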
What were we testing?
The tests focused purely on latency. We did not want to overwhelm the Storj gateway, so we used smaller machines (4 GB memory / 2 Intel vCPUs / 120 GB disk, aka the $32-a-month boxes on DigitalOcean 🙂). And yes, this is a production environment (sort of; as noted, the Varnish servers and load generators are DigitalOcean Droplets)! We used the same access point as a normal Storj customer and didn’t want to be a noisy neighbor, harm production, or worry their network team with what is essentially a DDoS attack. While Varnish obviously loves handling huge loads of traffic (see our 1.5 Tbps throughput world record), and I as an engineer love breaking things, we wanted to keep the comparison between Storj and Varnish consistent. It wouldn’t be fair to give Varnish a huge machine and throw 10x the traffic at it, so we used the exact same machine size for Varnish as for our load-generating servers, and the exact same script for traffic to Varnish as for traffic direct to Storj.
For each test, we used Siege as the load-generating client, requesting files ranging in size from 100KB to 50MB. Since we wanted to avoid stressing Storj, we used just 10 connections, with each making 50 requests, for a total of 500 requests per file size per test. We ran each test twice to demonstrate consistency, and the final results shown here are the averages of the two runs. Because we are looking at latency, the results captured are the elapsed time for all 500 requests, the average response time per request, and how many times faster Varnish makes Storj in both elapsed time and throughput.
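For illustration only, here is a Python stand-in for one such run (not the Siege invocation we actually used, just the same shape): 10 concurrent connections, 50 requests each, reporting the same two per-run metrics. The target URL is a placeholder.

```python
# Python stand-in for one load-test run: 10 concurrent connections, each
# making 50 requests (500 total), reporting total elapsed time and the
# average response time per request. The target URL is a placeholder.
import time
import statistics
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "https://varnish-sfo.example.com/test-100KB.bin"  # placeholder target
CONNECTIONS, REQUESTS_EACH = 10, 50

def worker(_: int) -> list:
    """One 'connection': issue REQUESTS_EACH GETs and record each latency."""
    latencies = []
    with requests.Session() as session:
        for _ in range(REQUESTS_EACH):
            t0 = time.perf_counter()
            session.get(URL).raise_for_status()
            latencies.append(time.perf_counter() - t0)
    return latencies

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONNECTIONS) as pool:
    per_request = [t for result in pool.map(worker, range(CONNECTIONS)) for t in result]
elapsed = time.perf_counter() - start

print(f"elapsed time for {len(per_request)} requests: {elapsed:.2f}s")
print(f"average response time: {statistics.mean(per_request) * 1000:.1f} ms")
```

Running the same script once directly against the Storj gateway and once against the Varnish node, then dividing the two elapsed times, gives the "times faster" figure described above.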
We also want to highlight that these numbers can be improved even further by using bigger machines, tuning the configuration, and adding more load generators or traffic (more connections or requests, for example); we just don’t want to put that load on Storj for the reasons listed above. The more traffic you have coming out of S3, the better a fit this solution is for you!
Results:
Okay, now the fun stuff! If it wasn’t obvious, Varnish was going to make things a lot faster, but we have a lot of numbers to look at, so brace yourself.
Elapsed Time (Time for 500 Requests):
Response Time Average:
Time and Throughput Improvements:
In Summary:
Our testing highlighted the strong performance of Storj’s S3 gateway and the additional value Varnish can bring as a caching layer. By working with Storj, our Varnish nodes were able to significantly enhance delivery speeds for users around the world, demonstrating how the combination of distributed storage and intelligent caching can provide the best experience for your users.
Here are some quick highlights from our testing locations:
- San Francisco: Varnish worked seamlessly with Storj to optimize delivery times. Smaller file sizes saw a dramatic 4,677% improvement, while even the largest files benefited from a 93% speed-up.
- Amsterdam: Storj has excellent performance in Europe (click here for a map of Storj’s node locations) with a large concentration of access points, so it’s no surprise the Varnish impact was not as large as in the other regions. Still, we helped deliver the smaller files 1,977% faster and the larger files 41% faster.
- Singapore: Singapore also showed how Varnish excels with smaller file sizes, where it was up to 4,387% faster. Even our larger 50MB files were delivered 157% faster on average.
All of these benefits come “out of the box” with the Varnish S3 Shield: there’s no additional configuration or VCL knowledge required, and it can be set up in just 5 minutes.
If we do want to add a little configuration, though, we can push things even further! For example, if large video files are your bread and butter, we can use VMOD_Slicer to make range requests and slice large video segments into smaller chunks, helping deliver those 50MB files even faster than shown here!
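To make the slicing idea concrete, here is a client-side sketch using plain HTTP range requests. This is not the VMOD_Slicer VCL API (that runs inside the Varnish layer); it is only an illustration of what breaking a 50MB object into smaller byte ranges looks like on the wire, against a placeholder URL.

```python
# Client-side illustration of the range-request idea behind slicing: fetch a
# large object in fixed-size byte ranges instead of one monolithic GET. This
# is NOT the VMOD_Slicer API; it only shows what "slice a 50MB file into
# smaller chunks" means at the HTTP level.
import requests

URL = "https://varnish-sfo.example.com/test-50MB.bin"  # placeholder target
CHUNK = 1024 * 1024  # 1 MiB slices

total = int(requests.head(URL).headers["Content-Length"])
body = bytearray()
for offset in range(0, total, CHUNK):
    end = min(offset + CHUNK, total) - 1
    resp = requests.get(URL, headers={"Range": f"bytes={offset}-{end}"})
    resp.raise_for_status()
    body.extend(resp.content)

print(f"fetched {len(body)} bytes in {(total + CHUNK - 1) // CHUNK} range requests")
```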
To see just how easy this is, check out this blog, then contact us for a live demo where we can show you exactly how Varnish and Storj can help.