Configuration: Decrease upstream traffic

Origin configuration


To set up Apache you will need to create a virtual host configuration file, for instance an 'origin.conf' configuration file in the /etc/apache2/sites-enabled directory.

<VirtualHost *:82>
  ServerAdmin webmaster@localhost

  KeepAliveTimeout 65

  DocumentRoot /var/www/vod
  <Directory />
    Options FollowSymLinks
    AllowOverride None
  </Directory>

  <Location />
    UspHandleIsm on
  </Location>

  ErrorLog /var/log/apache2/

  # Possible values include: debug, info, notice, warn, error, crit,
  # alert, emerg.
  LogLevel warn

  CustomLog /var/log/apache2/ common
</VirtualHost>

Apache's default KeepAliveTimeout is 5 seconds, after which Timeout (default 300 seconds) applies. To avoid a race condition between Apache as origin and Nginx as shield cache, the value here has been raised above the Nginx default of 60 seconds (see below).

Using a higher value is motivated by the fact that the shield cache is there to protect the origin by lowering the number of requests, so relying more on reusing open connections makes sense.
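The reasoning above boils down to a simple invariant: the client-side (shield) keepalive timeout must be strictly lower than the server-side (origin) one. A minimal Python sketch of that check (the function name and values are illustrative, not part of any Apache or Nginx API):

```python
def keepalive_is_safe(shield_timeout_s: int, origin_timeout_s: int) -> bool:
    """Return True when the shield cache will never reuse a connection
    that the origin may already have closed."""
    return shield_timeout_s < origin_timeout_s

# Values from this tutorial: Nginx upstream default keepalive_timeout (60 s)
# versus Apache's raised KeepAliveTimeout (65 s).
assert keepalive_is_safe(60, 65)        # safe: origin outlives the client
assert not keepalive_is_safe(65, 60)    # unsafe: origin may close first
```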

At this point you will have Apache configured as an origin with /var/www/vod as DocumentRoot. A good way to test that everything works is to use the setup from Verify Your Setup.

Running this tutorial will create a directory called /var/www/usp-evaluation with content, but it can be renamed to /var/www/vod. Then adjust the URLs in 'index.html' (inside /var/www/vod) to '' and make sure your /etc/hosts file maps your IP address to

You can then start the web server as follows:


sudo service apache2 restart

Apache should now be listening on port 82 as an origin.

Shield cache configuration


Set up a cache and a log directory for Nginx:

sudo mkdir /var/cache/nginx
sudo chown -R www-data:www-data /var/cache/nginx

sudo mkdir /var/log/nginx
sudo chown -R www-data:www-data /var/log/nginx


Nginx will listen on port 80 and sit in front of the Apache origin. You will need the following configuration (in /usr/local/nginx/conf):

# Compile with --with-http_ssl_module to proxy upstream https://
# Compile with --with-http_slice_module to enable slice caching

# same user as Apache
user www-data;

# most sources suggest 1 per core
worker_processes 1;

working_directory /var/www;
error_log /var/log/nginx/error.log;
pid /var/tmp/;

# worker_processes * worker_connections = maxclients
events {
  worker_connections 256;
}

http {
  include mime.types;
  default_type application/octet-stream;
  client_max_body_size 0;
  large_client_header_buffers 4 32k;

  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;

  log_format cache '***$time_local '
                   '$upstream_cache_status '
                   'Cache-Control: $upstream_http_cache_control '
                   'Expires: $upstream_http_expires '
                   '"$request" ($status) '
                   '"$http_user_agent" ';

  access_log /var/log/nginx/cache.log cache;

  proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge-cache:10m inactive=20m max_size=1g;
  proxy_temp_path /var/cache/nginx/tmp;

  upstream origin {
    server;
    keepalive 100;
    # keepalive_timeout default is 60 seconds
  }

  server {
    listen;
    server_name edge;

    location / {
      proxy_pass http://origin;
      proxy_cache edge-cache;

      proxy_http_version 1.1;
      proxy_set_header Connection "";

      proxy_cache_lock on;
      proxy_cache_background_update on;
      proxy_cache_use_stale updating;

      proxy_cache_key "$request_uri$slice_range";
      proxy_cache_methods GET HEAD POST;

      proxy_cache_valid 200 302 10m;
      proxy_cache_valid 301      1h;
      proxy_cache_valid any      1m;

      proxy_buffering on;
      proxy_buffers 8 32k;
      proxy_buffer_size 32k;

      proxy_bind;

      slice 1m;
      proxy_set_header Range $slice_range;

      add_header X-Cache-Status $upstream_cache_status;
      add_header X-Handled-By $proxy_host;
    }
  }
}


viewers -> cdns -> nginx:80 -> origin:82

If all is correct, you now have an edge-origin setup with caching and cache locks through Nginx. Start Nginx as follows:


sudo /usr/local/sbin/nginx

(Assuming you installed nginx in /usr/local/sbin - see the Nginx documentation).

Additional Nginx settings

To set up the Nginx cache lock properly you need the following settings.

  • In the upstream section of the nginx.conf add this line:

keepalive CONNECTIONLIMIT;

where CONNECTIONLIMIT is the maximum number of idle keepalive connections you want to allow. This value should be chosen based on sizing and expected traffic patterns, i.e., how many simultaneous requests Nginx will be making to the upstream.

The default for keepalive_timeout in the upstream is 60 seconds; this default has been kept, so it is not set explicitly in the configuration above.

The keepalive_timeout value of the shield cache must be lower than that of the origin. Otherwise a race condition can occur where the client (Nginx, acting as shield cache) sends a request on what it thinks is a still-valid connection, while the origin/upstream (Apache) has already closed it. If the client value is lower than the origin/upstream value this cannot happen.
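As a back-of-the-envelope aid for choosing CONNECTIONLIMIT, the comment in the configuration above (worker_processes * worker_connections = maxclients) can be turned into a small sketch. The function names, the headroom factor, and the example numbers are our own illustrative assumptions, not Nginx directives:

```python
def max_clients(worker_processes: int, worker_connections: int) -> int:
    """Upper bound on simultaneous connections Nginx can hold,
    per the comment in the configuration above."""
    return worker_processes * worker_connections

def suggest_keepalive(expected_upstream_concurrency: int,
                      headroom: float = 1.25) -> int:
    """Idle-connection cache sized slightly above expected upstream load
    (a hypothetical sizing rule, not an Nginx default)."""
    return int(expected_upstream_concurrency * headroom)

assert max_clients(1, 256) == 256      # values used in this tutorial
assert suggest_keepalive(80) == 100    # matches 'keepalive 100;' above
```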

  • In the location section add these two lines:

proxy_http_version 1.1;
proxy_set_header Connection "";

This allows connections in the TIME_WAIT state to be closed much more quickly, and if CONNECTIONLIMIT is reached, Nginx will close the oldest idle connections.

  • To mitigate the thundering herd problem for the same proxy_cache_key and to enforce the collapsing of multiple client requests for the same content, we add the following three lines:

proxy_cache_lock on;
proxy_cache_background_update on;
proxy_cache_use_stale updating;

This ensures that only a single request for the same content is sent to the upstream server, and that a background request is generated when a cached element has expired. While the cache is updating the content, Nginx will serve stale content with the header value X-Cache-Status: UPDATING.
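To illustrate what the cache lock achieves, here is a small Python simulation of request collapsing (a toy model, not Nginx code): many concurrent requests for the same cache key result in exactly one fetch from the upstream.

```python
import threading

class CollapsingCache:
    """Toy model of proxy_cache_lock: concurrent misses for the same key
    trigger exactly one upstream fetch; the other requests wait for it."""

    def __init__(self, fetch):
        self._fetch = fetch          # function: key -> content
        self._cache = {}
        self._locks = {}
        self._mutex = threading.Lock()

    def get(self, key):
        with self._mutex:
            if key in self._cache:
                return self._cache[key]
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                   # only one thread fetches per key
            with self._mutex:
                if key in self._cache:
                    return self._cache[key]
            value = self._fetch(key)
            with self._mutex:
                self._cache[key] = value
            return value

upstream_hits = []
def upstream(key):
    upstream_hits.append(key)        # record every request to the origin
    return f"content-for-{key}"

cache = CollapsingCache(upstream)
threads = [threading.Thread(target=cache.get, args=("/video/seg1.m4s",))
           for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()

assert len(upstream_hits) == 1       # ten client requests, one origin fetch
```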


We recommend using the previous Nginx configuration for a single active Origin Shield Cache which sits in front of Unified Origin, instead of having multiple Nginx cache servers hit Unified Origin directly.

The other values define which methods are cached, as well as how long certain responses are cached. The buffering settings allow Nginx to respond to the client as soon as possible, and 'slicing' is enabled as well (which provides more effective caching of large responses, e.g. media files). The cache key is explicitly set to URL + range.
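The combination of slice 1m and proxy_cache_key "$request_uri$slice_range" can be illustrated with a short Python sketch of how a requested byte offset maps to an aligned slice range and cache key (the helper names are our own, not Nginx internals):

```python
SLICE_SIZE = 1024 * 1024  # mirrors 'slice 1m' in the configuration above

def slice_range(offset: int, size: int = SLICE_SIZE) -> str:
    """Align a requested byte offset to its slice boundary,
    modelling what $slice_range resolves to."""
    start = (offset // size) * size
    return f"bytes={start}-{start + size - 1}"

def cache_key(request_uri: str, offset: int) -> str:
    """Model of proxy_cache_key "$request_uri$slice_range"."""
    return request_uri + slice_range(offset)

# Two offsets in the same 1 MB slice share one cache entry...
assert cache_key("/video.mp4", 10) == cache_key("/video.mp4", 500_000)
# ...while an offset in the next slice yields a different entry.
assert cache_key("/video.mp4", 10) != cache_key("/video.mp4", 2_000_000)
assert slice_range(0) == "bytes=0-1048575"
```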

Varnish Cache

By default, both Varnish Cache and Varnish Cache Enterprise support request collapsing. Both Varnish versions will only apply request collapsing to requested objects with a TTL greater than zero (e.g., req.ttl > 0) or when Cache-Control headers from the backend are present. This mechanism therefore allows a media workflow of Unified Origin and Varnish Cache to scale. Unified Origin provides HTTP cache response headers based on the VoD or Live use case; this is explained in more detail in HTTP Response Headers.