October 22, 2024
10 min read time

S3 Object Delivery with Varnish Software, Equinix, and VAST Data

There are more options than the Cloud

Content delivery is expensive, and with every business in the industry looking to cut costs, we’ve noticed a trend: companies are trying to minimize their cloud usage wherever possible. Egress fees from major cloud providers are driving organizations to reevaluate their options, and that’s where we’ve been able to help. Whether by completely replacing their cloud architecture or optimizing it to reduce cloud usage, we've helped our customers lower costs while improving performance.

But rather than just discuss this, we teamed up with two of our partners, Equinix and VAST Data, to run some tests and demonstrate the results. Equinix, with 260+ data centers around the world, has a well-earned reputation for trust, reliability, and global reach. VAST Data provides a high-performance, multi-protocol, software-defined storage solution with excellent performance and availability. In this partnership, VAST Data supplies the origin/S3 storage. Equinix provides its bare metal service, Equinix Metal, along with its global, software-defined private network, Equinix Fabric, to power our instances and points of presence. And Varnish provides the caching software, serving as the delivery engine on our Equinix nodes. Together, we offer a more cost-efficient solution for content delivery. And beyond the data we’ll cover here, the existing overlap in our customer bases speaks for itself.

Designing our CDN (and Tests)

For these tests, we are focusing on S3 delivery with low latency. If you want to read about Varnish’s world record-breaking throughput, you can do that here, but that was not the goal this time around. While we love saturating NICs and networks at Varnish, we don’t have to stress-test hardware every time we SSH into it.

That being said, we designed our tests to demonstrate how caching can accelerate S3 object delivery and reduce costs. In effect, this meant building our own CDN, with an origin, several cache nodes, and clients. Our VAST Data S3 bucket is in LA, and we used siege as a load-generating client in NY. In between, we deployed Varnish in three common Equinix regions: LA, Dallas, and NY. With our clients on one side of the US, our VAST Data origin on the other, and Varnish on Equinix caching requests at different points in between, we can see how distance affects latency and your users’ experience. But to make things easier for those of us who are visually inclined, let’s make a diagram:

[Diagram: map of the test setup, with siege clients in NY, the VAST Data S3 origin in LA, and Varnish on Equinix nodes in LA, Dallas, and NY. The red line shows the direct client-to-origin path.]
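Configuration details vary per node, but the core idea is simple: point Varnish at the S3 endpoint as its backend and let it cache objects in memory. A minimal sketch of what that can look like on an Equinix Metal node (the hostname and cache size below are placeholders, not our exact lab setup):

```bash
# Minimal sketch (hostname and cache size are placeholders):
# run Varnish on an Equinix Metal node with the VAST Data S3
# endpoint as its backend, caching objects in RAM.
#   -a  address/port to listen on for client traffic
#   -b  backend host:port (the S3 origin)
#   -s  cache storage type and size
varnishd -a :80 -b s3-origin.example.internal:80 -s malloc,64g
```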

 

The Tests Themselves

For the tests themselves, we wanted to cover a variety of use cases, from small API calls to requests for large, high-resolution video segments, all while demonstrating how we can accelerate and offload S3 delivery. To do so, we created objects of different sizes, ranging from 100 KB to 50,000 KB. We then used siege, our load-generating client, to request each object size and recorded the results.
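As a rough sketch of how a test corpus like this might be generated and uploaded (the bucket name and endpoint URL are placeholders, and any S3-compatible client would work against a VAST Data endpoint; here we show the AWS CLI):

```bash
# Sketch only: generate random objects at each test size (in KB) and
# upload them to an S3 bucket on the origin. The bucket name and
# endpoint URL are placeholders.
for size_kb in 100 500 1000 2000 5000 8000 10000 16000 50000; do
    dd if=/dev/urandom of="obj_${size_kb}k.bin" bs=1K count="${size_kb}"
    aws s3 cp "obj_${size_kb}k.bin" s3://test-bucket/objects/ \
        --endpoint-url https://s3-origin.example.internal
done
```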

As mentioned above, we are focusing on S3 offload and latency, not throughput. In fact, we had network saturation limits in place to keep throughput around 5 Gbps. This is part of why we chose siege as our load-generating tool: it is easy and efficient to configure for lower-stress tests. We also used tc to set traffic control limits on the bonded interfaces of the servers involved. These tests were performed in a shared lab environment, and we had to be respectful of our neighbors.
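For the curious, the kind of traffic control we mean looks roughly like this (the interface name and exact numbers are illustrative, not our lab configuration):

```bash
# Illustrative only: cap egress on the bonded interface with a token
# bucket filter so the tests stay around the 5 Gbps target.
tc qdisc add dev bond0 root tbf rate 5gbit burst 32mbit latency 400ms
```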

In total, we ran the tests across 4 scenarios. The first scenario was a control without caching, where our clients in New York connect directly to the VAST Data S3 bucket in LA (the red line in the diagram above). The other 3 scenarios add caching, with our Varnish on Equinix nodes in LA, Dallas, and NY sitting between the origin and the clients. For each scenario, we ran a test with 100 concurrent clients requesting each file size 500 times, for a total of 50,000 requests per scenario and object size.
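Each individual test then boils down to a siege invocation along these lines, with the URL swapped for whichever endpoint was under test (the VAST Data origin directly, or one of the Varnish nodes); the hostname below is a placeholder:

```bash
# Sketch of one test run (hostname is a placeholder):
#   -c 100  100 concurrent clients
#   -r 500  500 repetitions each, i.e. 50,000 requests in total
#   -b      benchmark mode, no delay between requests
siege -c 100 -r 500 -b "http://cache-ny.example.internal/objects/obj_1000k.bin"
```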

 

The Results

Finally, let’s take a look at the data and some graphs. Since we are focusing on latency, the data points of interest are Elapsed Time (the total for all 50,000 requests) and Response Time (the average per request), both in seconds.

Elapsed Time for all 50,000 requests (seconds)

File Size (MB)   VAST Data Direct   LA Varnish   Dallas Varnish   NY Varnish
0.1              171.31             169.95       87.7             2.12
0.5              239.45             237.68       122.85           5.03
1                273.4              271.79       141.88           8.52
2                308.45             306.53       167.83           15.62
5                389.99             367.23       363.91           36.55
8                639.97             578.68       581.24           58.32
10               802.76             720.41       728.6            73.14
16               1301.8             1155.36      1165.07          118.3
50               4119.72            3638.15      3627.99          444.11

Average Response Time per request (seconds)

File Size (MB)   VAST Data Direct   LA Varnish   Dallas Varnish   NY Varnish
0.1              0.34               0.34         0.18             0
0.5              0.48               0.47         0.25             0.01
1                0.54               0.54         0.28             0.02
2                0.61               0.61         0.33             0.03
5                0.78               0.73         0.72             0.07
8                1.27               1.15         1.16             0.11
10               1.6                1.43         1.45             0.14
16               2.59               2.29         2.32             0.23
50               8.2                7.25         7.2              0.88

Unsurprisingly, the closer we bring the object to the client, the faster the request completes. When Varnish is in New York, the latency is so low that it barely registers on our graphs, except for the larger file sizes. The trend in response times across the different file sizes, however, is clear and exactly as expected. Looking at the smaller sizes, the impact of distance on response times, and with it the value of caching, becomes even more evident.

Dallas sits roughly halfway between LA and NY, and the tests served from Dallas took about half as long as the direct NY-to-LA tests. When we bring our cache even closer to the clients and serve from Varnish in NY, latency plummets further still. If we know where our clients are, we can build a solution tuned to support them.

Not only does this deliver faster performance; by keeping traffic outside of the cloud and serving responses from a local cache, it also reduces your cloud bill. The cloud can’t charge you for a request that never came into its network. So whether we migrate your S3 storage to a more cost-effective solution like VAST Data, leverage Cloud Adjacent Storage on Equinix Metal, or simply reduce pulls from your existing cloud S3 buckets by deploying Varnish nodes outside the cloud, we offer a number of ways to help control costs.
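If you want to see that offload directly on a Varnish node, comparing cache hits to backend requests is a quick way to do it; the counters below are standard Varnish counters, and the actual hit rate will depend on your traffic and caching policy:

```bash
# Every cache hit is a request that never reached the S3 origin.
# Comparing hits against backend requests quantifies the offload.
varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss -f MAIN.backend_req
```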

As we can see, the value here is in the configurability, customization, and cost savings a partnership like this provides. When you can create nodes where your users are, configure them to an affordable and performant storage solution, and craft your own caching strategy, you're no longer at the mercy of one-size-fits-all cloud services. Instead, you're in control of how and where your content is delivered, optimizing for both performance and cost.