I use Nginx as a reverse proxy, and I would like to rate limit the requests to the origin server, but only limit requests whose cache status is EXPIRED. I just tested with a map that sets $cache_key, and the rate limiting doesn't work: $cache_key was logged as an empty string. But changing the map's source variable from $upstream_cache_status to a non-upstream variable like $remote_addr, and adding an IP match value, makes the rate limiting work. I defined the zone like so:
limit_req_zone $cache_key zone=cache_host:1m rate=1r/m;

map $upstream_cache_status $cache_key {
    EXPIRED $host;
    default "";
}
I enabled the cache settings in nginx.conf, and one of my server blocks uses the rate limit zone like below:

limit_req zone=cache_host burst=1;
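For reference, the variant that did make the limit trigger swapped the map's source variable to $remote_addr with an IP match; the address below is just an example value:

map $remote_addr $cache_key {
    192.0.2.1 $host;
    default "";
}

With that map, $cache_key is already known when limit_req is evaluated, before the request is proxied.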
Is this because the $upstream_cache_status value is only set after the request has been sent to the origin server and the response received, while $cache_key is used in the rate limit zone, which is checked before the request is sent to the origin server? If so, is there a recommended way to implement rate limiting only for requests with a specific cache status?
I assume the $upstream_cache_status variable is set after the request is sent and the response is received. But is there a way to do rate limiting that ignores the cache? I really appreciate any help on this.
On Wed, Jul 20, 2016 at 02:03:44PM -0400, linnading wrote:
Hi there,
> I assume the $upstream_cache_status variable is set after the request is
> sent and the response is received. But is there a way to do rate limiting
> that ignores the cache? I really appreciate any help on this.
I’m afraid that, having read the mails, I’m not at all sure what kind
of limiting you want to do.
If 10 requests come in at the same time to-or-from the same something,
you want the last few requests to be delayed or rejected.
It is “to the same upstream server” that I care about: I would like to limit the request rate to the same upstream server. The scenario is like this: 10 requests arrive at the same time for the same upstream server, but the upstream server should only receive requests at a rate of 1r/m. The last few requests will be delayed or rejected. However, some of those last few requests can be served from the cache, and they should not be delayed/rejected.
If non-stock is ok for you, possibly one of those can work?
> The scenario is like this: 10 requests arrive at the same time for the
> same upstream server, but the upstream server should only receive
> requests at a rate of 1r/m. The last few requests will be delayed or
> rejected. However, some of those last few requests can be served from
> the cache, and they should not be delayed/rejected.
I think that the limit_* directives' implementation is such that the choice is made before the upstream server is chosen; and there are no explicit limits on the connections to an upstream. That is likely why the third-party module was created.
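If staying with stock nginx is acceptable, one approximation worth sketching is to let the cache itself collapse concurrent misses: proxy_cache_lock allows only one request at a time to populate a new cache element, and proxy_cache_use_stale updating means that when an element has expired, only the request that is updating it goes to the upstream while the others are served the stale copy. All directive names below are stock nginx; the zone name, paths, timeout, and backend address are placeholders:

proxy_cache_path /var/cache/nginx keys_zone=cache_zone:10m;

server {
    listen 80;

    location / {
        proxy_cache cache_zone;
        proxy_cache_valid 200 1m;

        # only one request at a time may populate a new cache element
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;

        # when a cached element has expired, serve the stale copy to
        # other clients while a single request refreshes it upstream
        proxy_cache_use_stale updating;

        proxy_pass http://upstream.example.com;
    }
}

This does not enforce a strict 1r/m toward the upstream, but it does mean that for a given cache key, a burst of EXPIRED requests results in a single upstream request while the rest are answered from the cache.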