How Varnish Improves Artifactory Performance and Reduces Operational Toil Caused by Bottlenecks

 

Varnish is a software-defined solution for all kinds of web performance challenges. As flexible caching software, it can cache and accelerate the delivery of anything HTTP, including APIs. While Varnish is commonly deployed for web acceleration, streaming delivery or custom CDN purposes, it is also very successful when applied to other workflows, including internal business applications.

Take Artifactory for example. As a binary repository manager, Artifactory acts as a single source of truth for packages, binaries, container images and Helm charts. It gives developers and DevOps engineers one place from which to track artifacts from development to production.

In large organizations that use Artifactory, hundreds or even thousands of engineers could be accessing the same Artifactory platform at the same time, trying to pull the same content. What enterprise users of Artifactory find is that this demand puts pressure on the platform, leading to bottlenecks, degraded performance and delays in DevOps workflows. For mission-critical development processes, that means added complexity, missed deadlines and extra costs. It’s a similar situation to a website attracting vast amounts of traffic for specific content, such as web pages or video. In other words, an ideal use case for powerful caching software.


How bottlenecks in Artifactory occur

Artifactory runs on Apache Tomcat, whose HTTP connector handles incoming requests by allocating a worker thread to each one, up to the maximum specified by maxThreads. What happens when the number of concurrent requests exceeds this value? Requests are queued and experience delays until a thread becomes available. The maximum number of requests allowed to wait in that queue is specified by acceptCount. If this limit is also reached, any further requests are rejected until space frees up in the queue, meaning new uploads, downloads or updates will fail.

The values of maxThreads and acceptCount can be increased to a certain extent in Tomcat, depending on hardware, but this may not be enough to prevent bottlenecks, or it may require spending on more hardware to reach adequate capacity and lower the risk of latency or failure. Hitting these limits means slower responses for requests (GET, search, certain API calls) and, worse, dropped connections, connection pool depletion and timeouts.
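For reference, both settings live on the HTTP connector defined in the server.xml of the Tomcat instance bundled with Artifactory. The values below are purely illustrative, and the exact port and defaults vary by Artifactory version:

    <!-- Illustrative only: raising these limits trades memory and CPU for extra concurrency -->
    <Connector port="8081" protocol="HTTP/1.1"
               connectionTimeout="20000"
               maxThreads="200"
               acceptCount="100" />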

 

Benefits of using Varnish in an Artifactory setup

Reduce the number of requests hitting Artifactory instances

As a software-based caching layer, Varnish sits in front of Artifactory instances and caches popular artifacts, delivering them to users instead of relying on Artifactory to do so. This achieves multiple benefits: 

  • Faster delivery of the requested objects, because they are served by Varnish caching software that is optimized for performance.
  • Far fewer requests hitting the backend, so many more clients can request content simultaneously without being bottlenecked by Artifactory’s own capacity limits. Varnish is capable of handling many thousands of requests per second, so Artifactory can run at a heavily reduced thread count. A minimal VCL sketch of this setup follows below.
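As an illustration, the sketch below shows Varnish fronting a single Artifactory instance. The hostname, port and TTL are assumptions to adjust for your own environment; write traffic is passed straight through so that Varnish only absorbs the read requests that cause most of the load.

    vcl 4.1;

    # Assumed Artifactory instance behind Varnish (adjust host and port).
    backend artifactory {
        .host = "artifactory.internal";
        .port = "8081";
    }

    sub vcl_recv {
        # Only cache safe, idempotent requests; uploads and API writes go straight to Artifactory.
        if (req.method != "GET" && req.method != "HEAD") {
            return (pass);
        }
    }

    sub vcl_backend_response {
        # Cache downloaded artifacts; immutable binaries can safely be kept for longer.
        if (bereq.url ~ "\.(jar|war|pom|zip|tgz|whl)$") {
            set beresp.ttl = 1h;
        }
    }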

 

Reduction in Artifactory license and hardware costs 

Because of the huge reduction in requests hitting your Artifactory setup, fewer instances are required to cater to demand. A Varnish caching layer in front of a package manager can lead to an immediate reduction in the license and hardware overheads required to give DevOps teams the tools they need.

 

Protect mission-critical content with Mutual TLS 

Varnish Enterprise can employ Mutual TLS (mTLS) to ensure that the clients and servers at each end of an Artifactory pipeline are who they claim to be, with each side presenting a certificate and proving possession of the corresponding private key. Often used in Zero Trust security frameworks, Mutual TLS is well suited to Artifactory workflows that handle critical data and intellectual property. It ensures that no user, device or network traffic is trusted by default, and it can also help keep APIs secure.

 

Lower dependency resolution times in Gradle

Artifactory supports a whole range of build tools and plugins, and Varnish can help improve the performance of these integrations too. Varnish can lower dependency resolution times in Gradle, for example, thanks to significantly lower latencies. Package managers like npm also run with greatly improved performance, particularly as they make many small requests to the repository, so there is a lot of benefit in caching these requests and offloading the traffic to Varnish.
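As a rough sketch, extending the VCL above, npm traffic can be split into frequently changing metadata and immutable tarballs and cached accordingly. The URL pattern assumes Artifactory’s usual /artifactory/api/npm/<repo> layout, and the TTLs are arbitrary examples:

    sub vcl_backend_response {
        if (bereq.url ~ "^/artifactory/api/npm/") {
            if (bereq.url ~ "\.tgz$") {
                # Published tarballs are immutable, so they can be cached for days.
                set beresp.ttl = 7d;
            } else {
                # Package metadata changes whenever a new version is published; keep it fresher.
                set beresp.ttl = 5m;
            }
        }
    }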

 

Easy revalidation to avoid content staleness

It’s crucial, of course, to ensure Artifactory content is kept up to date in the cache. With a range of cache invalidation tools, Varnish makes it easy to keep cached content current. Using Varnish Configuration Language (VCL), the domain-specific language Varnish uses to control request handling, many invalidation and revalidation options are available and can be applied on the fly.
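For instance, a standard purge handler in VCL lets a CI pipeline or repository webhook evict a stale artifact the moment a new version is pushed. The allowed-hosts list below is a placeholder:

    # Hosts allowed to send PURGE requests (for example, the CI system).
    acl purgers {
        "127.0.0.1";
    }

    sub vcl_recv {
        if (req.method == "PURGE") {
            if (client.ip !~ purgers) {
                return (synth(405, "PURGE not allowed"));
            }
            # Drop the cached object for this URL so the next request fetches a fresh copy.
            return (purge);
        }
    }

A deployment script can then send an HTTP PURGE request for the affected artifact path whenever a new version is published.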

Application delivery and acceleration use cases are the Varnish “bread and butter.” With experience solving enterprise-scale DevOps challenges and content delivery workloads, we are on hand to discuss your needs.

 


4/5/22 3:21 PM by Ian Vaughan
