Wasabi Hot Cloud Storage is known for its impressive performance and simplified pricing, especially its policy of zero egress fees, a spicy difference in today’s cloud landscape. You can pull data from Wasabi without incurring any charges from Wasabi itself. However, cloud providers like AWS or GCP may still charge for the network transfer when that data enters their infrastructure, so network costs can still accumulate even though Wasabi doesn’t bill for the egress.
That’s where caching comes in. Even with Wasabi’s strong baseline performance, caching can play a critical role in reducing both cloud network costs and end-user response times at scale. In this blog, we explore how Varnish Enterprise, deployed as a high-performance cache in front of Wasabi storage, can help turn up the heat without adding to the bill—especially in scenarios with high throughput or repeated content access.
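To make the setup concrete: Varnish simply treats the Wasabi bucket’s S3-compatible endpoint as an HTTPS backend. The exact configuration we ran isn’t reproduced here, but a minimal sketch looks something like the VCL below. The region endpoint and TTL are placeholders, it assumes a publicly readable bucket with path-style URLs (no request signing), and it relies on Varnish Enterprise’s native backend TLS support.

```vcl
vcl 4.1;

# Wasabi's S3-compatible endpoint as a plain HTTPS backend.
# The region endpoint is a placeholder; substitute your bucket's region.
backend wasabi {
    .host = "s3.eu-central-1.wasabisys.com";
    .port = "443";
    .ssl = 1;    # Varnish Enterprise native backend TLS
    .host_header = "s3.eu-central-1.wasabisys.com";
}

sub vcl_backend_response {
    # Cache every object for an hour for the benchmark; tune for production.
    set beresp.ttl = 1h;
}
```

With path-style addressing, a request for /your-bucket/50KB/file-01.bin proxies straight through to the bucket on a miss and is served from cache on every subsequent hit.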
To test this setup in the wild, we ran a series of benchmarks across three global regions: the U.S. West Coast, Amsterdam, and Singapore. Wasabi’s strong performance stood out in every region, consistently saturating the standard networking available on our smaller DigitalOcean droplets. Once we raised the network limit and introduced caching, we observed clear improvements in both speed and efficiency. The sections that follow dive into how the tests were structured, the results we saw, and what they mean for performance-conscious teams looking to lower costs without sacrificing delivery speed.
As mentioned above, we wanted to benchmark Wasabi’s performance with and without caching in three regions: the US West Coast, Amsterdam, and Singapore. In each region I created a Varnish Enterprise node, a load-generating server, and a Wasabi bucket. To each bucket I uploaded ten files at each of several sizes ranging from 50 KB to 50 MB. I then had the load generator, in this case Siege, request each file size 5,000 times: 50 repetitions across 100 concurrent connections.
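We haven’t published the raw scripts, but each run boiled down to a handful of commands like these. The bucket name, endpoint, and hostnames are illustrative, and any S3-compatible client would work in place of the AWS CLI.

```sh
# Upload ten objects of one size to the bucket (repeat per size, 50 KB to 50 MB).
for i in $(seq -w 1 10); do
  aws s3 cp 50KB.bin "s3://benchmark-bucket/50KB/file-${i}.bin" \
    --endpoint-url https://s3.eu-central-1.wasabisys.com
done

# 100 concurrent connections x 50 repetitions = 5,000 requests per file size.
# Point at the Varnish node for the cached run, or at Wasabi for the baseline.
siege -b -c 100 -r 50 "https://varnish-node.example.com/benchmark-bucket/50KB/file-01.bin"
```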
Giving credit where credit is due, Wasabi performed really well in the first round of testing under DigitalOcean’s standard network limits. At 2 Gbps, both Wasabi and Varnish completely saturated the link. Since 2 Gbps isn’t a particularly high ceiling and Wasabi could clearly handle more, I raised the network limit to 10 Gbps, and that is where the benefits of caching began to show. With this change in place, I ran each test three times and averaged the results, producing the data shown below.
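The averaging step needs nothing more than a small wrapper. As a sketch, assuming Siege’s default summary output (which reports a Throughput line in MB/sec on stderr), the three runs could be collected like this:

```sh
# Run the same Siege test three times and average the reported throughput.
URL="https://varnish-node.example.com/benchmark-bucket/50KB/file-01.bin"  # placeholder
for run in 1 2 3; do
  siege -b -c 100 -r 50 "$URL" 2>&1 | awk '/^Throughput/ {print $2}'
done | awk '{sum += $1} END {printf "average: %.2f MB/sec\n", sum / NR}'
```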
In Amsterdam, Varnish delivered a substantial performance boost across all file sizes when caching content from Wasabi. For small to mid-sized files (50 KB to 1 MB), Varnish reduced response times by over 95% and achieved throughput improvements exceeding 8,000% for our smallest objects. Even for larger objects, such as the 10 MB and 50 MB files, caching significantly improved performance, cutting elapsed times by over 60% and boosting throughput to over 9 Gbps, more than 90% network saturation.
In San Francisco, Varnish consistently improved performance over direct access to Wasabi storage, especially for smaller objects. Response times for 50 KB and 100 KB files dropped by up to 99%, and throughput increased by over 700% and 500%, respectively. While improvements for larger files were more modest, Varnish still boosted throughput across the board—reaching over 9.4 Gbps on large object delivery.
And finally, in Singapore, Varnish again delivered clear performance gains across all file sizes. Small object delivery saw standout improvements, with response times for the smallest files dropping to near-instantaneous and throughput climbing dramatically. Medium-sized objects (100 KB to 1 MB) also benefited from significant reductions in response time and healthy throughput boosts. At the larger end, with objects from 5 MB to 50 MB, Varnish maintained over 9.4 Gbps of throughput, more than 94% network saturation.
These benchmarks confirm what performance-minded teams have long known: strategic caching doesn’t just save money, it unlocks speed. Wasabi’s hot cloud storage delivers impressive performance on its own, and its zero-egress model makes it an even more compelling solution. But when paired with Varnish Enterprise, performance climbs dramatically, especially for high-frequency, high-throughput scenarios.
Across Amsterdam, San Francisco, and Singapore, Varnish consistently reduced response times and maximized network throughput, exceeding 90% network saturation even on large object delivery. Small files loaded nearly instantly, mid-sized objects saw sharp latency drops, and large files streamed faster thanks to Varnish’s ability to fully utilize available network capacity.
For organizations delivering video, software, or static content at global scale, this combination offers a compelling recipe: reduced latency, lower costs, and delivery that keeps pace with demand, no matter where your users are.