Nginx response time increases with more upstream ports

Dear All,
I am running into an issue that I have not been able to resolve.

Nginx is able to route requests, but routing slows down as I increase the
number of upstream ports.

I have an app which responds in under 500 ms. When I route requests through
nginx to about 50 ports, the nginx response time increases to 1500-2500 ms,
even though the app can handle a large number of concurrent requests and
there are as many as 50 ports available.
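
For reference, here is a stripped-down sketch of roughly what the setup
looks like (the upstream name, addresses and port range below are
placeholders, not my real config):

upstream app_backend {
    # ~50 instances of the app, each listening on its own port
    server  127.0.0.1:9000;
    server  127.0.0.1:9001;
    # ... and so on up to 127.0.0.1:9049
}

server {
    listen  80;

    location / {
        # every request is handed off to one of the app instances above
        proxy_pass  http://app_backend;
    }
}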

I have tried this with both the fair and the e-y load balancers, but with
similar results.

I have also tuned the nginx configuration for response times, timeouts, etc.,
but I still see similar results.

Can anyone advise me if they have experienced a similar problem before and
whether they have a solution for it?

Thanks in advance

Asif

On Tue, Jun 16, 2009 at 11:50:16AM +0530, Asif A. wrote:

I have tried this with both the fair and the e-y load balancers, but with
similar results.

I have also tuned the nginx configuration for response times, timeouts, etc.,
but I still see similar results.

Can anyone advise me if they have experienced a similar problem before and
whether they have a solution for it?

What do you mean by upstream ports? Something like this:

upstream backend {
    server  backend:9000;
    server  backend:9001;
    server  backend:9002;
    server  backend:9003;
    ...
}

In this case you run 50 backend processes and they compete for CPU.

Do all the upstreams run on the same machine?

2009/6/16 Igor S. [email protected]:

In this case you run 50 backend processes and they compete for CPU.

Hmm. I have a total of 40 unique upstream server:port pairs listed, across
about 12 different upstream{} blocks.

Does this mean I should run 40 nginx child processes to get the most
efficiency? Or a number that divides evenly into 40 (10 or 20)?

2009/6/16 Igor S. [email protected]:

No, I was talking about backend processes, not about nginx workers.

Oh, my apologies!
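
Just so I have the distinction straight, a rough sketch of where each one
lives in the config (the numbers below are only examples, not
recommendations):

# nginx's own worker processes: typically one per CPU core
worker_processes  4;

events { }

http {
    # each "server" line here is a separate backend process that must
    # already be running and listening on that port; nginx does not start
    # or manage these processes
    upstream backend {
        server  backend:9000;
        server  backend:9001;
        server  backend:9002;
        server  backend:9003;
    }
}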

On Tue, Jun 16, 2009 at 11:26:46AM +0100, Avleen Vig wrote:

In this case you run 50 backend processes and they compete for CPU.

Hmm. I have a total of 40 unique upstream server:port pairs listed, across
about 12 different upstream{} blocks.

Does this mean I should run 40 nginx child processes to get the most
efficiency? Or a number that divides evenly into 40 (10 or 20)?

No, I was talking about backend processes, not about nginx workers.

Yes - that's what I meant… the upstreams are pointing to different web
server ports.
Running too many of them causes nginx to delay while routing.

Any reason why this is happening?

Regards,

Asif Ali

Avleen,

Are you able to post your full config?

Cheers

Dave

On Wed, Jun 17, 2009 at 06:49:39PM +0530, Asif A. wrote:

Yes - that's what I meant… the upstreams are pointing to different web
server ports.
Running too many of them causes nginx to delay while routing.

Any reason why this is happening?

If all 50 ports are on the same host, then you are running 50 copies of the
web server on that host and they compete for CPU. This is not related to
nginx. Why do you run 50 copies of the web server at all?
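
If the goal is simply to keep the CPUs busy, and assuming the backends are
CPU-bound and the host has, say, 8 cores (both of these are assumptions on
my part), then something closer to this should be enough:

upstream backend {
    # one backend process per CPU core instead of 50 on the same host
    server  127.0.0.1:9000;
    server  127.0.0.1:9001;
    server  127.0.0.1:9002;
    server  127.0.0.1:9003;
    server  127.0.0.1:9004;
    server  127.0.0.1:9005;
    server  127.0.0.1:9006;
    server  127.0.0.1:9007;
}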