Is there a way to monitor the busyness of my dedicated server's CPU cores
and stop serving new connections past a given CPU load threshold?
Put another way, what is the standard approach/technique to dynamically
limit the maximum number of connections my machine can cope with?
Bear in mind that one IP can be many eyeballs. I use the module with a
setting of 10 connections per IP. I set the firewall to a higher limit to
allow some non-web services, but not an unlimited number. This can fend
off a very unsophisticated DoS attack. A real DoS is distributed, so the
per-IP limit won't help there.
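A minimal sketch of that kind of per-IP cap, assuming the module in
question is nginx's ngx_http_limit_conn_module; the zone name and size
below are illustrative:

    http {
        limit_conn_zone $binary_remote_addr zone=peraddr:10m;

        server {
            # Reject a client once it already holds 10 concurrent connections.
            limit_conn peraddr 10;
        }
    }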
I had a document hit Twitter, and their servers hammered my lowly VPS.
Besides an IP limit, I suggest a rewrite rule to eliminate hotlinking,
which is effectively what Twitter does. If they tweet a link to a webpage,
no problem: that forces each of those Twitter users to load the page
individually in their own browser, which spreads out the requests. Out of
paranoia, I blocked all of Twitter's IP space, and the same for Facebook.
The eyeballs can still reach the site through their own ISPs by following
a link; I'm just not comfortable with social media companies hitting my
server directly, since they have huge bandwidth.
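As a rough illustration of the hotlinking rewrite, an nginx fragment like
the one below refuses hotlinked assets and denies selected networks; the
hostname is a placeholder and the CIDR blocks are RFC 5737 documentation
ranges, not the real Twitter or Facebook address space:

    # Refuse requests for assets whose Referer is not one of ours.
    location ~* \.(jpg|jpeg|png|gif|pdf)$ {
        valid_referers none blocked server_names *.example.com;
        if ($invalid_referer) {
            return 403;
        }
    }

    # Placeholder ranges standing in for the blocked networks.
    deny 192.0.2.0/24;
    deny 198.51.100.0/24;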
That leaves large corporations and universities as the cases where one IP
really is many eyeballs. A connection limit of 10 will occasionally be too
low there, but you have to set the limit somewhere.
Original Message
From: Anoop A.
Sent: Friday, May 20, 2016 11:26 AM
To: Nginx
Reply To: [email protected]
Subject: Re: CPU load monitoring / dynamically limit number of
connections to server
Does that really limit things per the OP's request? Reading the
documentation, it seems like a monitoring module, not something that
limits nginx. For example, ulimit is just a Linux command. Maybe you could
set a cron job to sniff the load based on this monitoring and adjust
system parameters accordingly. (Beware of positive feedback loops!)
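A hedged sketch of that cron-job idea (the threshold, file names, and
paths are all invented for illustration): read the one-minute load
average and swap between a strict and a normal nginx limits include,
reloading only when something actually changes so the feedback loop stays
damped.

    #!/bin/sh
    # Hypothetical load-watcher to run from cron every minute.
    THRESHOLD=4.0
    ACTIVE=/etc/nginx/conf.d/limits.conf

    LOAD=$(cut -d ' ' -f1 /proc/loadavg)
    if [ "$(echo "$LOAD > $THRESHOLD" | bc)" -eq 1 ]; then
        WANT=/etc/nginx/limits-strict.conf
    else
        WANT=/etc/nginx/limits-normal.conf
    fi

    # Reload nginx only when the active include really changes.
    if ! cmp -s "$WANT" "$ACTIVE"; then
        cp "$WANT" "$ACTIVE"
        nginx -t && nginx -s reload
    fi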
This was my method for setting limits on nginx. First, download httperf
from http://www.labs.hpe.com/research/linux/httperf/. I found my server
could handle 3000 requests per minute. Note that this was running httperf
ON the server itself, so it does not include the connection to the
Internet, but now you know how much abuse the server itself can take.
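For anyone repeating the experiment, a local httperf run looks roughly
like this (the rate and connection counts are illustrative, not the exact
invocation used above):

    # Hammer the local server: 1000 connections, 1 request each, 50 new
    # connections per second, 5 second timeout.
    httperf --server 127.0.0.1 --port 80 --uri /index.html \
            --num-conns 1000 --num-calls 1 --rate 50 --timeout 5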
Then I ran httperf over the Internet from a number of different ISPs. The
performance was more like 100 requests per second at best, so clearly the
pipe I share is the limiting factor. I picked a somewhat arbitrary limit
of 10 users at a time, which matches real life most of the time. I'm not
CNN or a porn site. So I set the request limit to 10 per second. Remember
that every photo, document, etc. takes one request.
This is barely scientific, but it is better than nothing.
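In nginx terms, a 10-requests-per-second per-client cap is usually
expressed with ngx_http_limit_req_module; a sketch, with the zone name,
size, and burst value as assumptions:

    http {
        limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

        server {
            # Allow a short burst (a page plus its images) before rejecting.
            limit_req zone=perip burst=20 nodelay;
        }
    }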
Original Message
From: ohenley
Sent: Saturday, May 21, 2016 12:37 PM
To: [email protected]
Reply To: [email protected]
Subject: Re: CPU load monitoring / dynamically limit number of
connections to server