Testing the persistent storage engine

Last week I spent quite a bit of time toying with the Varnish persistent storage engine. The long-term goal of the storage engine is to become the default choice in Varnish. Currently it has a few limitations, so it isn't quite ideal for some use cases. It doesn't really have proper LRU yet; it has something akin to FIFO and will discard the oldest objects whenever it runs out of storage. If most of your objects are more or less the same age and you have a bit of space allocated to it, it should work quite well.

Back to performance. The tests I did were focused on a typical video-on-demand workload with 1,000 objects of 1 MB each. I used a somewhat modified version of Spew to do the tests, adding random requests and a limit on the number of transactions it would perform. I asked it to do 50,000 requests before stopping.
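For reference, the corpus itself is easy to recreate. A sketch like the following generates 1,000 one-megabyte objects for nginx to serve as static files; the directory and naming scheme are my assumptions, not the actual test setup:

```shell
# Generate 1000 objects of 1 MB each for the VOD-style workload.
# /tmp/vod-objects and the obj-N.bin naming are illustrative assumptions.
mkdir -p /tmp/vod-objects
for i in $(seq 1 1000); do
  dd if=/dev/zero of="/tmp/vod-objects/obj-$i.bin" bs=1M count=1 2>/dev/null
done
```

The modified Spew then picks objects from this set at random and stops after 50,000 requests.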

I ran these tests on my two-year-old desktop machine, sporting an i5 CPU and an Intel X25-M SSD. It has 6 GB of memory, so the working set fits nicely into memory.

Baseline performance - malloc

Since it all fits into memory, using malloc gives me a good overview of what is theoretically possible to achieve. Varnish would fetch the data over the loopback interface from nginx. The page cache would make sure that the data fetched from nginx is in memory.
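A minimal malloc invocation looks something like this; the listen address, backend address and cache size here are my assumptions, not the exact flags used in the test:

```shell
# In-memory malloc storage backend; 4G comfortably holds the 1 GB
# working set. Addresses are illustrative assumptions.
varnishd -a :6081 -b localhost:8080 -s malloc,4G
```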

I ran the test three times. The benchmark took:

  • 12.3s
  • 13.0s
  • 12.4s

So roughly 12.5 seconds. 

File performance 

  • 18.2s
  • 16.0s
  • 17.2s

Writing stuff to disk slows down file quite a bit, as you can see.
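Switching to the file backend is a one-flag change at startup; a hedged example, with the path and size as assumptions:

```shell
# The file backend mmap()s a file on disk; the kernel decides which
# pages stay resident. Path, size and addresses are assumptions.
varnishd -a :6081 -b localhost:8080 -s file,/var/lib/varnish/cache.bin,4G
```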

How does persistent fare?

  • 14.2s
  • 14.4s
  • 16.3s

As you can see, it adds a couple of seconds compared to how malloc is performing. Since there is no way we can be faster than malloc, we have to look at the performance relative to malloc. So the interesting figure is how much slower the various backends are compared to malloc.
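Persistent is configured the same way as file, just with a different storage type; unlike file, its contents survive a restart. The path, size and addresses below are assumptions:

```shell
# The experimental persistent backend: cache contents on disk
# survive a Varnish restart. Path and size are assumptions.
varnishd -a :6081 -b localhost:8080 \
    -s persistent,/var/lib/varnish/persist.bin,4G
```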

File is 4.6 seconds slower and the new persistent code is 2.4 seconds slower. That's not bad; in a way you could say it is almost twice as fast. I expect persistent to become quite a bit faster in the time to come.

I also ran the benchmarks with a 100% cache hit rate. There is no measurable difference in performance between the storage engines. That's reassuring.

High speed vampire resurrection

(Best header ever!)

The important thing about persistent storage is that it is able to resurrect objects from disk after Varnish has been down. It does this by creating vampire objects that contain the bare minimum Varnish needs to identify them, and resurrecting them into full-blown objects whenever they are requested.

So, in order to test this I filled up the cache, killed Varnish and blew away the Linux VM page cache. Then I started Varnish back up and ran the benchmark again. The interesting question is whether resurrecting the objects from disk is faster than getting them over the loopback interface - in theory a 3 or 4 gigabit interface with almost zero latency.
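The cold-restart procedure can be sketched roughly as follows; the way Varnish is stopped and restarted here, and the storage path, are assumptions:

```shell
# Stop Varnish, then drop the Linux page cache so the subsequent
# resurrection reads really hit the SSD (requires root).
pkill varnishd
sync
echo 3 > /proc/sys/vm/drop_caches

# Restart against the persisted storage file; objects are revived
# as vampire objects on demand. Path and addresses are assumptions.
varnishd -a :6081 -b localhost:8080 \
    -s persistent,/var/lib/varnish/persist.bin,4G
```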

The benchmark ran in 4.8 seconds - significantly faster than fetching the data over loopback. I believe we were pretty much I/O bound here, as my two-year-old X25-M probably isn't capable of delivering data much faster.

So, as things stand, I'm pretty happy with persistent. Anders Nordby from Schibsted ran some tests on it and reported it to be stable, but with higher CPU and I/O use compared to malloc, which doesn't really surprise me at this point.


The image of the train engine is (c) 2008 pagedooley. The vampire is (c) 2005 Derrick Tyson. Both used under CC license. 

Topics: VCL, Persistence

04/07/2012, 20:30 by Per Buer
