I had nginx 0.6.32 and just upgraded to 0.7.61, and I still have the same
memory-leak issue.
What happens is that as soon as I start nginx, it starts using RAM and
keeps consuming more and more; over a 16-hour period it consumes 8 GB of
RAM plus all of the swap, and then it errors out because it can't
allocate any more.
This never happened until recently, and the only difference at all in
the config is more server entries.
I know, but this problem never occurred until recently… Once the
request is done, it should free the memory allocation, but it looks
like maybe it isn't?

The only difference between a month ago and now is that we have more
server entries, and more requests per second. It used to not even use a
gig of RAM doing 200 requests/sec, and now it keeps using more and more
RAM until it fills the entire RAM and swap and errors out at 8 GB+…

What would you suggest?
On Mon, Aug 31, 2009 at 04:17:03PM -0400, Paul wrote:
> I know, but this problem never occurred until recently… Once the
> request is done, it should free the memory allocation, but it looks
> like maybe it isn't?

No. These workers have been running since Aug 7 and handle up to 3,800 r/s:

> The only difference between a month ago and now is that we have more
> server entries, and more requests per second. It used to not even use
> a gig of RAM doing 200 requests/sec, and now it keeps using more and
> more RAM until it fills the entire RAM and swap and errors out at
> 8 GB+…
>
> What would you suggest?
With buffers that size, just 250 simultaneous proxied connections take
8 GB (250 × 32 MB ≈ 8 GB), and 250 connections is not much in the
modern world. Why have you set up such huge buffers at all?
We had some issues with people uploading/downloading files before on
0.6, and the larger buffers fixed it…

What would you suggest I change the config to? I'll try it and let you
know if I still see the problem.

Thank you.
> I know, but this problem never occurred until recently… Once the
> request is done, it should free the memory allocation, but it looks
> like maybe it isn't?
>
> The only difference between a month ago and now is that we have more
> server entries, and more requests per second. It used to not even use
> a gig of RAM doing 200 requests/sec, and now it keeps using more and
> more RAM until it fills the entire RAM and swap and errors out at
> 8 GB+…
>
> What would you suggest?
There are many more factors than simply requests per second and the
number of server entries:

- the speed of the proxied backends: if a backend is slow, that memory
  will be held onto for much longer.
- the speed of clients: slow clients stretch out the request-processing
  cycle, again causing memory to be held onto for longer.
- resource contention: more backends means slower overall performance,
  as the OS can't cache as efficiently (both CPU and disk) and must
  handle more context switches, more disk seeks, etc.
In short, one resource shortage (say CPU or I/O) can cause systemic
resource shortages, because other resources can't be freed in a timely
fashion (in turn causing even more shortages), so the problem quickly
spirals out of control.
These problems are pretty much unavoidable on a single server; the only
question is at what point you will encounter them. Basically, there's a
breaking point in the scalability of any particular system, and tuning
is the art of pushing that breaking point out as far as possible.
Because you've got excessively large buffers, you've almost certainly
brought that breaking point upon yourself much earlier than necessary.
As Igor mentions, 32 MB seems really, really excessive (an application
that generates responses of that size calls for one or more dedicated
servers, and brain surgery for the developer). Maybe you should try
something closer to 8 kilobytes and see if that addresses your issues.
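As a concrete starting point, here is a minimal sketch of what that
might look like, assuming the 32 MB came from the proxy buffer
directives (the actual directives and values in your config may differ):

    # Hypothetical starting values for proxy buffering; tune from here.
    proxy_buffer_size        8k;    # buffer for the backend response header
    proxy_buffers            8 8k;  # up to 8 buffers of 8k each per connection
    proxy_busy_buffers_size  16k;   # portion that may be busy sending to the client

That caps per-connection buffer memory at roughly 64-72 KB instead of
32 MB, so even a few thousand simultaneous proxied connections would
stay well under a gigabyte.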
Yes, it was timeouts, but that could have been fixed in the newer nginx
versions. I'm just wondering why the RAM usage never occurred before
and is happening now… It never used to use much at all, and now it's
using an exorbitant amount of RAM with nothing major changed… That's
why I figured something was wrong.

It takes about 12 hours or so to use 6-7 GB of RAM. That is another
reason why I thought something might be wrong.

I'll try changing the buffers, watch the RAM usage again, and see what
happens.

Thank you, and Igor, for your input.
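In case it's useful for watching the growth, a small shell sketch for
logging total worker memory over time (Linux/procps assumed):

    # Log the summed RSS of all nginx processes once a minute.
    while true; do
        date
        ps -C nginx -o rss= | awk '{sum += $1} END {print sum " KB resident"}'
        sleep 60
    done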
Oh, I also noticed another issue: when RAM usage is high like this and
I reload nginx, it does this:
root 5776 0.0 0.1 19528 4876 ? Ss Aug30 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c conf/ng1.conf
www 5777 2.3 2.3 184060 94364 ? S Aug30 28:53 nginx: worker process is shutting down
www 5778 2.3 2.3 315500 95496 ? S Aug30 29:23 nginx: worker process is shutting down
www 5781 2.2 3.8 342904 154144 ? S Aug30 28:08 nginx: worker process is shutting down
www 5784 2.4 4.9 508552 199036 ? S Aug30 30:17 nginx: worker process is shutting down
www 5785 2.4 3.0 349364 122764 ? S Aug30 30:18 nginx: worker process is shutting down
www 5786 2.3 5.8 511264 234928 ? R Aug30 29:27 nginx: worker process is shutting down
www 5787 2.3 2.7 209608 111840 ? S Aug30 29:29 nginx: worker process is shutting down
www 5788 2.3 6.5 406672 263352 ? S Aug30 29:24 nginx: worker process is shutting down
www 5789 2.3 2.4 275580 98532 ? S Aug30 29:25 nginx: worker process is shutting down
www 5790 2.3 3.5 316852 145448 ? S Aug30 29:10 nginx: worker process is shutting down
www 5791 2.3 6.5 505276 265980 ? S Aug30 29:13 nginx: worker process is shutting down
www 5792 2.4 1.5 255980 62228 ? S Aug30 30:05 nginx: worker process is shutting down
www 5793 2.3 5.4 546796 221996 ? S Aug30 29:34 nginx: worker process is shutting down
www 5794 2.3 2.8 250940 116300 ? S Aug30 28:39 nginx: worker process is shutting down
www 5795 2.3 2.1 246528 86908 ? S Aug30 29:08 nginx: worker process is shutting down
www 5796 2.4 3.9 340724 161644 ? S Aug30 30:20 nginx: worker process is shutting down
www 6347 2.8 0.7 43356 28640 ? S 17:10 0:13 nginx: worker process
www 6348 2.7 0.6 42472 27636 ? S 17:10 0:13 nginx: worker process
www 6349 2.4 0.6 40776 25672 ? S 17:10 0:11 nginx: worker process
www 6350 2.3 0.6 41864 27104 ? S 17:10 0:11 nginx: worker process
www 6351 2.3 0.6 42728 28028 ? S 17:10 0:11 nginx: worker process
www 6352 2.3 0.6 43040 27828 ? S 17:10 0:11 nginx: worker process
www 6353 2.6 0.7 43972 29360 ? S 17:10 0:12 nginx: worker process
www 6354 2.4 0.7 46104 30892 ? S 17:10 0:11 nginx: worker process
www 6355 2.0 0.6 40252 25776 ? S 17:10 0:09 nginx: worker process
www 6356 2.9 0.7 46408 31396 ? S 17:10 0:14 nginx: worker process
www 6357 2.2 0.6 41600 26816 ? S 17:10 0:10 nginx: worker process
www 6358 2.3 0.6 42536 27960 ? S 17:10 0:11 nginx: worker process
www 6359 2.4 0.6 40564 26120 ? S 17:10 0:11 nginx: worker process
www 6360 2.4 0.6 42272 27656 ? S 17:10 0:11 nginx: worker process
www 6361 2.8 0.7 46764 30336 ? S 17:10 0:14 nginx: worker process
www 6362 2.1 0.6 40224 25636 ? S 17:10 0:10 nginx: worker process
and the processes stay in 'worker process is shutting down' for a LONG
time… I've waited quite a while before doing a restart instead of a
reload to clear them out. Not sure what would cause this?
Most of my connection timeouts are set between 60 and 120 seconds. The
exception is the SSL session timeout, which is 10m, but I've waited
hours for the old processes to shut down.
I know this… but there aren't any slow requests going on, or there
shouldn't be, anyway. It's just odd that it never used to take more
than a few minutes; it must be a new site we added or something that is
keeping the connections open that long.
On Mon, Aug 31, 2009 at 06:10:16PM -0400, Paul wrote:
> Oh, I also noticed another issue: when RAM usage is high like this
> and I reload nginx, it does this:
>
> root 5776 0.0 0.1 19528 4876 ? Ss Aug30 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c conf/ng1.conf
> www 5777 2.3 2.3 184060 94364 ? S Aug30 28:53 nginx: worker process is shutting down
> www 5778 2.3 2.3 315500 95496 ? S Aug30 29:23 nginx: worker process is shutting down
> […]
>
> and the processes stay in 'worker process is shutting down' for a
> LONG time… I've waited quite a while before doing a restart instead
> of a reload to clear them out. Not sure what would cause this?
>
> Most of my connection timeouts are set between 60 and 120 seconds.
> The exception is the SSL session timeout, which is 10m, but I've
> waited hours for the old processes to shut down.
nginx worker processes wait for all in-progress requests to finish
before exiting. That can take hours if there are active but slow
requests, or requests with big responses.
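If you want to watch how many connections the old workers are still
holding, one option is the standard stub_status module; a minimal
sketch, assuming your nginx was built with
--with-http_stub_status_module:

    # Expose nginx's built-in connection counters on localhost only.
    location /nginx_status {
        stub_status on;   # reports active, reading, writing, waiting counts
        allow 127.0.0.1;
        deny all;
    }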
You may use normal network diagnostic tools (netstat, tcpdump,
…) to see what actually happens on the wire.
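For example, a couple of hedged diagnostic commands (Linux flags shown;
the backend port 8080 is just an assumed placeholder):

    # List established TCP connections owned by nginx processes.
    netstat -tnp 2>/dev/null | grep nginx
    # Capture traffic to an assumed backend on port 8080 to spot slow responses.
    tcpdump -n -i any port 8080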
Maxim D.