Nginx proxy overhead with / without keep-alive requests

I have a test setup to measure the nginx overhead when placed in front
of a JBoss Tomcat server.
In Tomcat I have deployed a test.jsp and use ab to measure the
performance in the following scenarios.

Without the ab keep-alive option:
ab → tomcat:          ab -n 5000 -c 5 http://jboss.tomcat.url/test.jsp
ab → nginx → tomcat:  ab -n 5000 -c 5 http://nginx.proxy.url/test.jsp

With the ab keep-alive option (-k):
ab → tomcat:          ab -n 5000 -c 5 -k http://jboss.tomcat.url/test.jsp
ab → nginx → tomcat:  ab -n 5000 -c 5 -k http://nginx.proxy.url/test.jsp

The performance numbers WITHOUT keep-alive are almost the same, BUT WITH
the keep-alive option they differ greatly: via nginx the run takes about
5 seconds more than when the page is accessed directly on Tomcat, which
takes only 0.503 seconds on average.

Why should there be so much deviation with keep-alive? Is there
anything I am missing?

-------------Tomcat perf result using ab keep-alive (ab -n 5000 -c 5 -k
http://jboss.tomcat.url/test.jsp)-----------------------

Server Software: Apache-Coyote/1.1
Server Hostname: jboss.tomcat.host
Server Port: 9080

Document Path: /test.jsp
Document Length: 301 bytes

Concurrency Level: 5
Time taken for tests: 0.503430 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 4952
Total transferred: 2944760 bytes
HTML transferred: 1505000 bytes
Requests per second: 9931.87 [#/sec] (mean)
Time per request: 0.503 [ms] (mean)
Time per request: 0.101 [ms] (mean, across all concurrent
requests)
Transfer rate: 5710.82 [Kbytes/sec] received

-------------Nginx perf result using ab keep alive (ab -n 5000 -c 5 -k
http://nginx.proxy.url/test.jsp) --------------------------

Server Software: nginx
Server Hostname: nginx.proxy.url
Server Port: 80

Document Path: /test.jsp
Document Length: 301 bytes

Concurrency Level: 5
Time taken for tests: 5.440499 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 4952
Total transferred: 2654760 bytes
HTML transferred: 1505000 bytes
Requests per second: 919.03 [#/sec] (mean)
Time per request: 5.440 [ms] (mean)
Time per request: 1.088 [ms] (mean, across all concurrent
requests)
Transfer rate: 476.43 [Kbytes/sec] received

-------------Tomcat perf result WITHOUT ab keep-alive (ab -n 5000 -c 5
http://jboss.tomcat.url/test.jsp)-----------------------

Server Software: Apache-Coyote/1.1
Server Hostname: jboss.tomcat.host
Server Port: 9080

Document Path: /test.jsp
Document Length: 301 bytes

Concurrency Level: 5
Time taken for tests: 4.658429 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 2920000 bytes
HTML transferred: 1505000 bytes
Requests per second: 1073.32 [#/sec] (mean)
Time per request: 4.658 [ms] (mean)
Time per request: 0.932 [ms] (mean, across all concurrent
requests)
Transfer rate: 612.01 [Kbytes/sec] received

-------------Nginx perf result WITHOUT ab keep alive (ab -n 5000 -c 5
http://nginx.proxy.url/test.jsp) --------------------------

Server Software: nginx
Server Hostname: nginx.proxy.url
Server Port: 80

Document Path: /test.jsp
Document Length: 301 bytes

Concurrency Level: 5
Time taken for tests: 4.916966 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 2630000 bytes
HTML transferred: 1505000 bytes
Requests per second: 1016.89 [#/sec] (mean)
Time per request: 4.917 [ms] (mean)
Time per request: 0.983 [ms] (mean, across all concurrent
requests)
Transfer rate: 522.27 [Kbytes/sec] received

=========Configuration=================

Tomcat: the connector's maxKeepAliveRequests is set to 100

Nginx:
keepalive_requests 100;
worker_processes 4;
worker_connections 4098;
use epoll;
multi_accept on;

Posted at Nginx Forum:

Hello!

On Tue, Jun 12, 2012 at 04:30:51PM -0400, paphillon wrote:

Why should there be so much of deviation with Keep alive? Is there
anything I am missing?

Unless you’ve configured keepalive to upstreams (see the
ngx_http_upstream_module documentation), nginx will not use keepalive
connections to Tomcat. So the expected result is: nginx on par
with Tomcat accessed directly without keepalive.

Numbers extracted from your data for clarity:

tomcat + keepalive: 9931.87 r/s
tomcat w/o keepalive: 1073.32 r/s
nginx + keepalive: 919.03 r/s
nginx w/o keepalive: 1016.89 r/s

All nginx numbers are on par with Tomcat's without keepalive, as
expected (see above).

The limiting factor is clearly Tomcat's connection establishment
cost, which drops performance from ~10k r/s to ~1k r/s. You may
want to configure upstream keepalive to cope with it if it really
matters in your case (i.e. if real requests are as fast as the test
one you've used; usually real request processing takes much longer
than connection establishment). See the link above for details.

Maxim D.
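The upstream keepalive setup referred to above looks roughly like this (a
minimal sketch, assuming a single Tomcat backend on port 9080 as in the
tests; the upstream name `tomcat_backend` is illustrative):

```nginx
http {
    upstream tomcat_backend {
        server jboss.tomcat.host:9080;
        # Keep up to 100 idle connections to the backend open per worker.
        keepalive 100;
    }

    server {
        location /test.jsp {
            # HTTP/1.1 and a cleared Connection header are required
            # for keepalive connections to the upstream.
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://tomcat_backend;
        }
    }
}
```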

Hi Maxim,

Thanks for the insight! I had a hunch about the keepalive setting from
nginx to Tomcat, but did not really find anything about it in the
documentation. I will give it a try and let you know the results. I am
not sure whether I will be able to use this keepalive in production, as
it requires an upstream config and differs from the way we have
architected our nginx configuration.

I am using nginx as a dynamic proxy which routes client calls
(primarily web service calls) to different servers depending on a key
the client passes via a cookie or HTTP headers. The URLs are kept in a
map of the following form:

map $http_void $header_based_url {
    default "no_http_header";
    key1 server_instance_1_url;
    key2 server_instance_2_url;
    ......
}

location /xxx {
    proxy_pass $header_based_url;
}
And to make the server URLs simpler to add and maintain, they are
stored in a flat file.

I am not sure how I can use keepalive here other than through an
upstream block, or how to make this upstream config compatible with our
setup.

Regards

Posted at Nginx Forum:

Upstream with keepalive gives results almost comparable with Tomcat,
so yes, keepalive between nginx and Tomcat really does the trick. :)

Unfortunately we cannot use upstream as explained in my previous
post, unless upstream can offer something like the below:

map $http_void $header_based_url {
    default "no_http_header";
    # These key => URL mappings are currently stored in a flat file
    key1 server_instance_1_url_host;
    key2 server_instance_2_url_host;
    ......
}

upstream http_backend {
    server $header_based_url_host;
    keepalive 100;
}

location /xxx {
    proxy_pass http_backend;
}
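One pattern that does work with stock nginx (a sketch, not tested against
this setup: it assumes the backend hosts are known when the config is
generated, e.g. rendered from the same flat file, and the names
`backend1`/`backend2` are illustrative) is to declare one upstream block
per backend and map the key to the upstream name rather than to a raw
URL. When proxy_pass is given a variable, nginx first tries to resolve
its value as the name of a defined upstream group:

```nginx
map $http_void $backend_upstream {
    default "no_http_header";
    # Key => upstream-name mappings, generated from the flat file.
    key1 backend1;
    key2 backend2;
}

upstream backend1 {
    server server_instance_1_host:9080;
    keepalive 100;
}

upstream backend2 {
    server server_instance_2_host:9080;
    keepalive 100;
}

location /xxx {
    # Required for keepalive connections to the upstream.
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    # The variable's value is matched against the named upstream groups.
    proxy_pass http://$backend_upstream;
}
```

The trade-off is that adding a backend means regenerating the upstream
blocks as well as the map, but each mapped backend then gets its own
keepalive pool.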
