Here are the questions we got along with their answers.
How well does Varnish work with NGINX (web server)/PHP-FPM?
Very well. Varnish operates at the HTTP level, and as such is compatible with everything that speaks HTTP. The only things you have to keep in mind are (see the sketch after this list):
- Cookies
- TTL
- Cache invalidation
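As a rough illustration of the first two points, here is a minimal Varnish 3.x VCL sketch for sitting in front of NGINX/PHP-FPM: strip cookies from requests for static assets so they become cacheable, and override the TTL. The file-extension pattern and the one-hour TTL are made-up example values, not recommendations.

```
sub vcl_recv {
    # Cookies make a request effectively uncacheable; static assets rarely need them.
    if (req.url ~ "\.(png|gif|jpg|css|js)$") {
        unset req.http.Cookie;
    }
}

sub vcl_fetch {
    if (req.url ~ "\.(png|gif|jpg|css|js)$") {
        # A Set-Cookie on the response would also prevent caching.
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 1h;   # example TTL; tune per application
    }
}
```

Cache invalidation is covered further down.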
Can you comment on the product roadmap?
The 4.0 release is underway and should be released after the summer, hopefully with a prerelease during the summer. The main new features are:
- New logging framework (see this blog post https://www.varnish-software.com/blog/varnishlog-query-language)
- Increased performance (multiple acceptor threads, amongst others)
- HTTP 2.0 compatible architecture. HTTP 2.0 is still far away, but the internals are being reworked to fit what the spec is expected to look like
- Directors in VMODs. You can now write a director as a VMOD, which should make it a lot easier to write new directors.
In addition, there are numerous improvements that will allow us to add features more quickly.
Could you explain cache revalidation when ESI is not used?
Basically, there is no cache revalidation. See the next question.
Does Varnish reinspect the origin's resources periodically?
This is an important question and the answer is no. Varnish will never talk to the backend unless it is to fetch a new object. This is why cache invalidation (purging) is important; a minimal purge setup is sketched below.
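A minimal sketch of HTTP purging in Varnish 3.x VCL; the ACL below is an example and should list whichever hosts are allowed to issue purges:

```
acl purgers {
    "127.0.0.1";   # example: only allow purges from localhost
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    # Purge on miss too, so other variants of the object are invalidated as well.
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}
```

From an allowed address you can then invalidate an object with something like `curl -X PURGE http://yoursite.example/some/page`.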
How does one cache ads as per country code?
Add the GeoIP VMOD and add the country code to the hash. Or, synthesize a "Country:" request header and add "Vary: Country" to the backend response; a sketch of the latter follows.
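A sketch of the Vary approach in Varnish 3.x VCL. The import name and the country_code() call are assumptions for illustration; the exact function depends on which GeoIP VMOD you install, and the Vary header can equally well be set by the backend itself.

```
import geoip;   # assumed VMOD name; check the VMOD you actually install

sub vcl_recv {
    # Synthesize a normalized request header from the client address.
    # The function name below is illustrative; real GeoIP VMODs differ.
    set req.http.Country = geoip.country_code(client.ip);
}

sub vcl_fetch {
    # Tell the cache to keep one variant per country. If the backend already
    # sends a Vary header, you would append "Country" to it instead.
    set beresp.http.Vary = "Country";
}
```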
What is the optimal method of purging across several, same-tier Varnish caches?
The VAC does this and does it quite well. It's been tested running tens of thousands of purges over multiple datacenters. If you don't have the VAC and don't want to spend the cash on it, I suggest one of two alternatives:
- Build something on top of Varnish Agent 2.0. It should take a web API call and distribute it to the varnishes. The tricky bit is the error handling.
- Publish purges through RSS, a bit like cache channels. Then have a client on each varnish server that pulls from the central source.
Are there any preferred OS platforms for running Varnish? And is there any additional penalty for running it in virtualized environments (besides the known OS virtualization overhead)?
Most people run on Linux, so that is probably the best understood platform. FreeBSD also has quite a few users and seems to run very well. It also runs on OS X, but I don't think anyone uses it in production. Only 64-bit kernels are supported; 32-bit runs, but not very well.
Varnish runs well on virtualized servers. However, note that Varnish can be an I/O hog when it runs out of memory and needs to use disk. Disk operations tend to have quite a bit of overhead in virtualized environments, so be careful.
For any "serious" website I would recommend using physical servers.You'll shave off quite a few ms on each request and since Varnish is more or less stateless managing it is rather easy.
How can I get Varnish to serve nicer error pages?
You customize vcl_error. The following sketch is modeled on the default vcl_error from the VCL shipped with Varnish (varnish-cache.org); adapt the synthetic markup to your own branded error page:
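```
sub vcl_error {
    set obj.http.Content-Type = "text/html; charset=utf-8";
    set obj.http.Retry-After = "5";
    synthetic {"
<!DOCTYPE html>
<html>
  <head>
    <title>"} + obj.status + " " + obj.response + {"</title>
  </head>
  <body>
    <h1>Error "} + obj.status + " " + obj.response + {"</h1>
    <p>"} + obj.response + {"</p>
    <p>XID: "} + req.xid + {"</p>
  </body>
</html>
"};
    return (deliver);
}
```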
If I have a dying server behind Varnish, is there a way to configure 'background retries'?
No. There is no such thing. We dropped it from the design pretty early and I'm glad we did. The semantics are a nightmare. You should have a look at "saint mode", however. Properly configured it can retry several servers several times and serve stale content if the servers fail. Pretty awesome.
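A minimal saint mode sketch for Varnish 3.x; the status check, the 10-second blacklist window and the 5-minute grace are example values:

```
sub vcl_fetch {
    if (beresp.status >= 500) {
        # Blacklist this object on this backend for 10 seconds and retry;
        # with a director in front, the restart can hit another backend.
        set beresp.saintmode = 10s;
        return (restart);
    }
    # Keep delivered objects past their TTL so stale content can be served
    # while the backends are failing.
    set beresp.grace = 5m;
}
```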
Is it possible to serve 100k client requests for a URL within 5 seconds using Varnish Cache?
Yes. :-)
How do you solve the thundering herd problem using Varnish?
Varnish is pretty robust against thundering herds. But if you have a problem with it, deploy grace mode; it was designed exactly for that purpose. The idea is that you serve clients slightly stale content instead of piling them up while a fresh object is fetched. You would only need a couple of seconds of grace to do so, maybe 10. A sketch follows.
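A grace sketch in Varnish 3.x VCL; the 10-second values are examples:

```
sub vcl_recv {
    # How stale an object we are willing to deliver while a fresh one is fetched.
    set req.grace = 10s;
}

sub vcl_fetch {
    # Keep objects this long past their TTL so they are available for grace.
    set beresp.grace = 10s;
}
```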
For those of you that missed the "You can cache everything" webinar, you can watch the on-demand version here.