0.7.67 - limit_req problems

Hello all,

There must be something I’m missing when trying to set up rate limiting
on our 0.7.67 installation.

We are using nginx as a load balancer for a Rails application, and
recently we have been hit by a very badly written bot that generates up
to 10 req/s without checking the results. Much like an e-mail harvesting
bot, very badly written …

The first thing I did was make sure their requests weren’t forwarded
to the backend, issuing a 444 and hoping that they would notice and just
stop doing what they are unsuccessfully trying to do :)

That didn’t stop them. So currently I am blacklisting their IPs each
time I see the bot reappearing. Guess what … they are on EC2
instances and seem to be very flexible when it comes to changing their
IP address :)

I grew a bit tired after twelve attempts, so I’m looking for alternative
approaches. Note that we have some legitimate traffic on the same
server that can reach that amount of requests, so I can’t just blindly
apply global rate limiting or add rules to the firewall.

Their IP can change, but they are using a very specific user-agent and
are hitting a very specific location that isn’t normally available on
this server, so I came up with the following config:

http {
  include        /opt/nginx/conf/mime.types;
  default_type   application/octet-stream;

  geoip_country  /opt/nginx/conf/GeoIP.dat;
  geoip_city     /opt/nginx/conf/GeoLiteCity.dat;

  limit_req_zone $binary_remote_addr zone=one:10m rate=2r/s;
  limit_req_zone $http_user_agent zone=blackhole:10m rate=1r/m;
  limit_zone     two $binary_remote_addr 10m;
  ...
  server {
  ...
    location ^~ /api/ {
      limit_req zone=blackhole;
      limit_conn two 1;
      return 444;
    }
 ...

But it doesn’t seem to work; my server is still hit with many req/s,
far more than what I specified in the configuration.

What am I doing wrong here? Do limit_* directives really work in
“location” directives?

What I ultimately would like to do is implement some kind of blackhole
for those guys:

  • make their requests sit there idling for a long time
  • make sure they don’t take up too many simultaneous connections, so we
    have workers available for legitimate requests (limit_conn)

Is there such a module available?

Thanks a lot in advance,

Erwan

Hello!

On Wed, Jun 16, 2010 at 09:46:21AM -0400, zuzur wrote:

> The first thing I did was make sure their requests weren’t forwarded
> [...]
> server that can reach that amount of requests, so I can’t just blindly
> [...]
>   limit_req zone=blackhole;
>   limit_conn two 1;
>   return 444;
> }
> [...]
>
> But it doesn’t seem to work; my server is still hit with many req/s,
> far more than what I specified in the configuration.

In the /api/ location you explicitly return 444 for all users. It’s
not clear what effect you expect from limit_req/limit_conn there,
but most likely it will be negative: requests will be delayed, or 503
will be returned, instead of the connection being closed immediately.

> What am I doing wrong here? Do limit_* directives really work in
> “location” directives?

Yes they do.
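
For example, a minimal location where limit_req does take effect, because
the request goes on to be processed after the pre-access phase (the
upstream, rate and burst here are only placeholders, not your real config):

  http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=2r/s;

    upstream backend {
      server 127.0.0.1:8080;
    }

    server {
      location /app/ {
        limit_req  zone=one burst=5;   # enforced in the pre-access phase
        proxy_pass http://backend;     # request is then actually processed
      }
    }
  }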

Maxim D.

Maxim D. wrote:

> requests will be delayed, or 503 will be returned, instead of the
> connection being closed immediately.

Hello Maxim, thanks for your reply.

In fact, just the inbound requests from a single host generate a
sustained 600kb/s on our server. As we pay for this bandwidth, and no
legitimate user should access this /api URI, I would really like to slow
them down. If I could just make their requests sit there for 24 hours, I
would do that. :)

> > What am I doing wrong here? Do limit_* directives really work in
> > “location” directives?
>
> Yes they do.

So any idea why they don’t with my configuration? I see tens of
requests/s when my ‘blackhole’ limit_req_zone is set to allow 1 request
per minute! I must be doing something wrong, I just can’t find what …
I didn’t provide the full configuration as it was very long, but if
needed I can paste it …

Thanks,

Erwan

Hello!

On Wed, Jun 16, 2010 at 12:01:01PM -0400, zuzur wrote:

> > not clear what effect you expect from limit_req/limit_conn there, but
> > most likely it will be negative: requests will be delayed, or 503 will
> > be returned, instead of the connection being closed immediately.

Sorry, I missed this: “return 444;” is executed during the rewrite
phase, while limit_req/limit_conn are executed during the pre-access
phase, which comes later. So limit_req/limit_conn actually do nothing
in your config.
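
To illustrate, here is your location again, annotated with the phase in
which nginx runs each directive:

  location ^~ /api/ {
    limit_req  zone=blackhole;   # pre-access phase - never reached
    limit_conn two 1;            # pre-access phase - never reached
    return 444;                  # rewrite phase - runs first and closes the connection
  }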

> In fact, just the inbound requests from a single host generate a
> sustained 600kb/s on our server. As we pay for this bandwidth, and no
> legitimate user should access this /api URI, I would really like to slow
> them down. If I could just make their requests sit there for 24 hours, I
> would do that. :)

So you want to keep such connections open instead of immediately
closing them, right? limit_req will delay requests as long as the burst
limit isn’t reached yet, so try setting a big enough burst.

And add some actual request processing to the picture; with
“return 444;” you’ll see no effect.
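
Something along these lines, for example (the zone key, burst value and
the empty_gif handler are only placeholders to sketch the idea, not a
tested configuration):

  http {
    limit_req_zone $http_user_agent zone=blackhole:10m rate=1r/m;
    limit_zone     two $binary_remote_addr 10m;

    server {
      location ^~ /api/ {
        # no "return 444;" here - the request has to reach real processing
        # (the content phase) for the pre-access limits to take effect
        limit_req  zone=blackhole burst=1000;   # excess requests are delayed
        limit_conn two 1;                       # at most one connection per address
        empty_gif;                              # cheap handler so something is served
      }
    }
  }

With rate=1r/m and a large burst, each queued request is delayed until it
conforms to the rate, so the bot’s requests end up sitting in the queue
for minutes at a time; anything beyond the burst is rejected with 503.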

> I didn’t provide the full configuration as it was very long, but if
> needed I can paste it …

See above.

Maxim D.