MSE at your fingertips - controlling where your objects are stored through the MSE VMOD

 

Usually when we talk about the Massive Storage Engine (MSE), we’re talking about its main features and use cases. After all, storage is a kind of hidden, unsexy necessity that powers, in part, many of the conveniences we take for granted: ubiquitous on-demand streaming services, for example. The instant-delivery accessibility of these content libraries is not magic even if it sometimes seems like it. And our MSE discussion normally revolves around the efficiency and speed enabled by a smart storage and caching setup designed for high-performance video distribution, CDNs, and large-cache use cases.

But this post is not about Massive Storage Engine features. Instead, it’s about a new VMOD (called the MSE VMOD), which lets users control how MSE works. You can dig into more Massive Storage Engine detail in the MSE documentation if you are not sure what MSE is or does, or if you just want to find out more before reading on. The background may help because we’re going to dive right in to explain how you can start using the MSE VMOD.

 

The MSE VMOD

What is the MSE VMOD?

The MSE VMOD lets you influence how MSE works on a per-request basis, directly from VCL. It is possible (and reasonable) to use MSE without using the VMOD. However, if you need more fine-grained control than what the configuration file gives you, the MSE VMOD provides it.

Note that this VMOD and its functionality were not available before the release of Varnish Enterprise 6.0.5r1.

How can I start to use it?

To use the functions described in this blog post, you need to import the MSE VMOD into your VCL configuration. This is done by including a line:

import mse;

near the top of your VCL file.
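A minimal skeleton could look like the following sketch (the call to mse.set_weighting(size) is just one of the functions covered below):

```vcl
vcl 4.1;

import mse;

sub vcl_backend_response {
    # MSE VMOD functions can now be called here, for example:
    mse.set_weighting(size);
}
```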

Memory only objects

When an object goes into MSE, it will typically go into an MSE store, which is an on-disk file specified in the mse.conf configuration file. However, an object can also become memory only if (at least) one of the following is true:

  • The MSE is configured without any stores.
  • The IO queue for the selected store is too long. (This happens very rarely.)
  • The object is short lived.
  • In VCL, mse.set_stores("none"); was called.
  • In the configuration file, default_stores was set to "none", and mse.set_stores was not called.

The first three points are independent of the MSE VMOD, while the last two depend on it.
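As a sketch of the VCL-controlled case, the following forces memory-only storage for responses matching a hypothetical header condition (the X-Transient header is just an illustration; any VCL condition works):

```vcl
import mse;

sub vcl_backend_response {
    # "none" means the object bypasses the on-disk stores
    # and lives in memory only.
    if (beresp.http.X-Transient) {
        mse.set_stores("none");
    }
}
```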

Storage selection

The main functionality of this VMOD is to let the system administrator influence how MSE selects a store from the list of stores defined in an MSE configuration file.

When a (disk) store is needed for a cache insertion, it is selected from a set of candidates. By default the candidates are all the stores in the MSE, and they are picked in a round-robin fashion. This is fine for homogeneous setups, but if you have stores of varying sizes, you will want to override the default.

The set of candidates is overridden by calling the function mse.set_stores(). You can also set a default set of stores in the configuration file through the parameter default_stores.

Once the MSE VMOD has been invoked, either by calling one of its functions or by setting default_stores, the round-robin algorithm is replaced by a weighted random selection among the candidates.

The candidates can be weighted unevenly. By default the weighting is by store size, but it can be overridden for individual objects in VCL by calling the function mse.set_weighting().
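For reference, the default set of stores is declared in mse.conf. The following is a minimal sketch assuming the libconfig-style layout of MSE configuration files; the exact keys, including the tags attribute used to tag a store, are described in the MSE documentation and may differ in your version:

```
env: {
    id = "myenv";

    books = ( {
        id = "book1";
        directory = "/var/lib/mse/book1";

        stores = ( {
            id = "store1";
            filename = "/var/lib/mse/stores/store1.dat";
            size = "100G";
            tags = ( "fast" );   # assumed key for tagging stores
        } );
    } );

    # Objects become memory only unless VCL calls mse.set_stores().
    default_stores = "none";
};
```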

 

A few examples

Use the store size as weights

If you have stores of different sizes, and you want them to be picked at a rate that is proportional to their sizes, the following example is enough:

import mse;

sub vcl_backend_response {
    mse.set_weighting(size);
}

This means that if you apply it to a set of empty stores, the expected time for each store to fill up (to a given level) will be the same.

 

Fill up a new store

A related use case is an MSE configuration that has been updated with a new book and store. The existing stores are close to full, while the new one is empty. In this case it is probably a bad idea to put all new content in the new store, but you do want to pick the empty store more often than the others. There is a special selection mode for this specific use case, called smooth. In this mode, the weight of a store is its available space plus its total size:

import mse;

sub vcl_backend_response {
    mse.set_weighting(smooth);
}

In the code snippet above, using mse.set_weighting(available); would fill up the empty store much faster, but in most cases smooth will work better. For example, with smooth an empty 1 TB store gets a weight of 2 TB (1 TB available plus 1 TB size), while a nearly full 1 TB store gets a weight just above 1 TB, so the empty store is preferred without receiving all new insertions. Once the stores have filled up, smooth becomes practically equivalent to size, so there is no need to switch back to weighting by store size.

 

Different types of stores

For short-lived objects it is smart to make them memory only. If an object has a TTL of a minute and the server is restarted, the object will probably be expired, or almost expired, by the time MSE reads its database back from disk.

It is also natural to send different types of objects to different stores in a heterogeneous setup. For example, one might store small objects on NVMe drives and bigger objects on slower SATA drives. This example combines the two:

import mse;
import std;

sub vcl_backend_response {
    if (beresp.ttl < 120s) {
        # Short-lived objects stay in memory only.
        mse.set_stores("none");
    } else if (beresp.http.Transfer-Encoding ~ "chunked"
        || std.bytes(beresp.http.Content-Length, 0B) > 1MB) {
        mse.set_stores("sata");
    } else {
        mse.set_stores("fast");
    }
}

The example shows how to send objects with chunked transfer encoding and objects bigger than one megabyte to the SATA stores, but it is possible to use other criteria for this selection.

The above requires the SATA stores to have a tag or name equal to "sata", and the fast stores to have a tag "fast". Our documentation page on MSE describes how to add tags to stores. If the configuration does not have any stores that match "fast", then the corresponding insertions will fail. It is possible to check the return value of mse.set_stores("fast") to detect this.
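Assuming mse.set_stores() returns a boolean indicating whether any store matched (as the paragraph above suggests), a defensive sketch could fall back to memory-only storage rather than let the insertion fail:

```vcl
import mse;

sub vcl_backend_response {
    # If no store is tagged "fast", fall back to memory-only
    # storage instead of failing the insertion.
    if (!mse.set_stores("fast")) {
        mse.set_stores("none");
    }
}
```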

Note that rotating media should only be used for MSE in very special circumstances, where read/write performance can be sacrificed for storage size.

 

Introducing this VMOD in an existing setup

Since this VMOD simply modifies the behavior of MSE, no additional action is needed when you start using it: there is no need for a restart, and you can simply load a new VCL with storage selection logic, which takes effect on the next request.

The exception is if you want to add a default_stores parameter to your configuration file. That change requires a restart, but no mkfs.mse invocation.

 

Further reading

The Massive Storage Engine is an important component of Varnish Enterprise, enabling the caching of huge video on demand (VoD) catalogs. At the same time, Varnish is also great for streaming live video to huge numbers of concurrent clients. Read more by clicking on the banner below, or contact us to speak to an expert.

3 Features that make Varnish ideal for streaming

Topics: streaming, MSE, Varnish Massive Storage Engine

19/11/19 16:45 by Pål Hermunn Johansen
