My understanding of proxy_cache_lock is that only one request should be
passed to the proxied server for a given uri, even if many requests for the
same uri/key are hitting nginx while the cache element is being populated.
When the cache folder specified in proxy_cache_path is empty, it works
well and behaves as I described above.
However, if the element already exists in the cache but is expired
(according to the proxy_cache_valid configuration), all concurrent requests
hit the proxied server and the resource is downloaded multiple times.
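For reference, one way to observe this (a sketch, not part of my config
below; the log path and format name are just placeholders) is to log
$upstream_cache_status for each request, which shows EXPIRED for every
concurrent request that went to the origin:

    # in the http block: log the cache status per request
    log_format cache_status '$remote_addr [$time_local] "$request" '
                            '$status $upstream_cache_status';

    # in the server or location block
    access_log /var/log/nginx/cache_status.log cache_status;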
Here is my config:
proxy_cache_path /usr/share/nginx/cache levels=1:2 keys_zone=CACHE:10m max_size=2g inactive=1440m;

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    server_name localhost;

    location /videos {
        proxy_cache CACHE;
        proxy_cache_valid 200 15s;
        proxy_cache_revalidate on;
        proxy_cache_lock on;
        proxy_cache_lock_timeout 30s;
        proxy_cache_lock_age 30s;
        proxy_cache_use_stale error timeout invalid_header
                              http_500 http_502 http_503 http_504;
        proxy_pass http://origin_server/videos;
    }
}
Basically, what I want is to take advantage of “proxy_cache_revalidate on”
to force an If-Modified-Since request, but only one request should fetch
the new element from the proxied server, even if multiple requests are
coming in for the same uri/key while the cache is expired.
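In directive terms, this is the combination I am counting on (same
directives as in my config above, annotated with what I expect each one to
do; the comment on proxy_cache_lock is my assumption and is exactly what I
am asking about):

    proxy_cache_valid 200 15s;    # entry becomes stale after 15 seconds
    proxy_cache_revalidate on;    # refresh it with If-Modified-Since instead of an unconditional GET
    proxy_cache_lock on;          # hoped for: only one of the concurrent requests performs that refresh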
To be more specific, in my case the downloaded resources are videos between
1 MB and 10 MB in size, so they take some time to download, saving
bandwidth is important, and only one request should go to the proxied
server, not several.
Using “proxy_cache_use_stale updating” is also not an option, since I want
all requests arriving simultaneously to wait for the new resource and use
it once it is returned from the proxied server.
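For completeness, the variant I am ruling out would only add the updating
flag to the directive I already have, roughly like this:

    # rejected alternative: serve the stale copy to everyone while one request refreshes it
    proxy_cache_use_stale updating error timeout invalid_header
                          http_500 http_502 http_503 http_504;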
Is there something I am doing wrong, or is this the expected behavior? Is
there a way to do what I am trying to do with nginx?
I am using nginx 1.8.1 on Ubuntu Server 14.04 x64.
Regards,
Jeeeff