I’m currently investigating the limits of nginx as a reverse proxy on a
Debian box.
My current setup is nginx configured as an HTTP/HTTPS web server (for
static content) and as a reverse proxy for Node.js processes, currently
on the same server and in the future on other Debian boxes.
While reading the documentation I was unable to find any real hard
limit for nginx in this setup, except ephemeral port exhaustion.
This may be a concern, as Node.js applications usually open a websocket
that stays connected for as long as the user stays connected to the
application.
If I understood everything correctly, this means that in this setup
nginx will not be able to handle more than about 64k client connections.
Am I right? What can be done if I would like to go over this 64k limit?
Could you please suggest a solution?
On Tue, May 10, 2016 at 12:26:59PM +0200, Artur wrote:
Am I right? What can be done if I would like to go over this 64k limit?
Could you please suggest a solution?
As long as you are using TCP/IP and have only one backend (ip +
port), and only one local address on the nginx side, you are
limited by the number of local ports nginx can use. The theoretical
limit is 64k; the practical one is usually smaller: on Linux systems
it depends on net.ipv4.ip_local_port_range.
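For reference, a quick way to inspect and widen that range on Linux
(the values shown are only illustrative):

```shell
# Show the current ephemeral port range used for outgoing connections
sysctl net.ipv4.ip_local_port_range

# Widen it (as root); add the same line to /etc/sysctl.conf to persist
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
```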
The most natural solution is to add more backends. Under normal
conditions you will add more backend servers as your system grows,
so you’ll never hit the problem in the first place. If you
need to handle more than 64k connections to a single backend
server, consider listening on multiple addresses in your backend
application (e.g., listening on multiple ports).
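A sketch of that idea in nginx configuration, assuming the backend
application can be started on several ports (the names, ports and
WebSocket headers below are illustrative, not from this thread):

```nginx
upstream node_backend {
    # One entry per (ip, port) pair the backend listens on;
    # each pair gives nginx its own pool of ~64k local source ports.
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_backend;
        # Required for proxying WebSocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```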
Other available solutions are:
- use UNIX domain sockets (this works when you have everything on
  a single host);
- add more local addresses on the nginx side and use proxy_bind to
  balance users between these addresses.
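Sketches of both options (socket path and addresses are illustrative):

```nginx
# Option 1: UNIX domain socket -- no TCP source ports are consumed at
# all; only possible when nginx and the backend share a host.
upstream node_unix {
    server unix:/var/run/node/app.sock;
}

# Option 2: an extra local address on the nginx side; every distinct
# (local address, backend) pair gets its own ~64k source-port pool.
server {
    listen 80;

    location /app {
        proxy_bind 127.0.0.2;
        proxy_pass http://127.0.0.1:3000;
    }
}
```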
As long as you are using TCP/IP and have only one backend (ip +
port), and only one local address on the nginx side, you are
limited by the number of local ports nginx can use.
I currently have nginx running on the same host as my backends.
I have 4 of them listening on different ports on 127.0.0.1.
In this situation, may I expect 4 times 65000 simultaneous connections?
add more local addresses on nginx side and use proxy_bind to
balance users between these addresses.
Yes, I’ve seen this; however, I didn’t catch how to dynamically assign
a value to proxy_bind from a pool of IP addresses in nginx (not Nginx
Plus).
balance users between these addresses.
Yes, I’ve seen this; however, I didn’t catch how to dynamically assign
a value to proxy_bind from a pool of IP addresses in nginx (not Nginx Plus).
All of the proxy_bind functionality is available in the open source
version of nginx. See the docs: Module ngx_http_proxy_module
add more local addresses on nginx side and use proxy_bind to
balance users between these addresses.
Yes, I’ve seen this; however, I didn’t catch how to dynamically
assign a value to proxy_bind from a pool of IP addresses in nginx
(not Nginx Plus).
The blog post I sent has nothing Plus-specific in this area.
On Tue, May 10, 2016 at 05:37:55PM +0200, Artur wrote:
Thanks for your answer.
Le 10/05/2016 à 17:04, Maxim D. a écrit :
As long as you are using TCP/IP and have only one backend (ip +
port), and only one local address on the nginx side, you are
limited by the number of local ports nginx can use.
I currently have nginx running on the same host as my backends.
I have 4 of them listening on different ports on 127.0.0.1.
In this situation, may I expect 4 times 65000 simultaneous connections?
Yes.
(Note, though, that this may not be true on all OSes. And this
also won’t be true when using proxy_bind, as the OS will have to
choose a local port before the destination address is known.)
add more local addresses on nginx side and use proxy_bind to
balance users between these addresses.
Yes, I’ve seen this, however I didn’t catch how to dynamically assign a
value to proxy_bind from a pool of IP addresses in nginx (not Nginx Plus).
The blog post linked by Maxim K. uses the split_clients
module; it’s not nginx-plus specific. See here for details:
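A sketch of that split_clients approach (the IP addresses and
percentages are illustrative; note that proxy_bind accepts variables
only in nginx 1.11.2 and later):

```nginx
# Hash each client onto one of several local source addresses,
# so each address contributes its own pool of local ports.
split_clients "$remote_addr$remote_port" $split_ip {
    25%  127.0.0.2;
    25%  127.0.0.3;
    25%  127.0.0.4;
    *    127.0.0.5;
}

server {
    listen 80;

    location / {
        proxy_bind $split_ip;
        proxy_pass http://127.0.0.1:3000;
    }
}
```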
Depending on your particular case, there may be even easier
solutions. E.g., if you have two URIs in your application with
more or less equal load, you can use two locations with distinct
addresses configured statically:
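A minimal sketch of that statically configured variant (the URIs,
addresses and backend port are assumptions for illustration):

```nginx
server {
    listen 80;

    # Roughly half the load: outgoing connections bound to one local
    # address, giving this location its own source-port pool.
    location /app1 {
        proxy_bind 127.0.0.2;
        proxy_pass http://127.0.0.1:3000;
    }

    # The other half: a different local address, hence a separate pool.
    location /app2 {
        proxy_bind 127.0.0.3;
        proxy_pass http://127.0.0.1:3000;
    }
}
```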