Nginx performance with artificial latency - help?

Hello All!

I am attempting to stress test a custom proxy server. Basically, the
scenario looks like the following:

performance testing software (client) -> proxy server -> nginx (server).

The nginx server is an Ubuntu 14.04 install. I have done a small amount of
reading on tuning nginx for performance, so I have set worker_processes to
match my CPU core count, set worker_rlimit_nofile very high, and told nginx
to use epoll, sendfile, tcp_nopush, and tcp_nodelay. I have also increased
worker_connections to 4000, though that made no difference compared to the
previous default of 768. The nginx server is serving static content only.
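
In case it helps, the tuning described above looks roughly like this in
nginx.conf (a rough sketch from memory, not my exact file):

worker_processes  4;              # matches my CPU core count
worker_rlimit_nofile  100000;     # "very high" open file limit

events {
    use epoll;
    worker_connections  4000;
}

http {
    sendfile     on;
    tcp_nopush   on;
    tcp_nodelay  on;
    # server block for the static content omitted
}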

Under line-speed circumstances, everything works well and the nginx server
appears to be able to match (if not exceed) the abilities of the proxy
server. With the performance issues and bugs worked out of the proxy
server, I decided to simulate artificial latency. To do this, I used the
Linux tc qdisc netem, with the following command to add artificial latency
between the nginx server and the proxy server:

tc qdisc add dev eth0 root netem delay 30ms 20ms distribution normal

Prior to running this command, I could fully saturate the 1 Gbps
connection of the nginx server. Now that there is latency, it makes sense
that the throughput would diminish; it dropped to a maximum of 450 Mbps.
To increase the speed, I therefore increased the number of connections. To
my surprise, this had no effect: the speed stayed at 450 Mbps. I don’t get
any error messages on either end, the connections just take longer to
complete. However, the load level on the nginx server does not change.

Thinking that netem itself had caused the line speed to drop, I used iperf
to test the connection speed between the client and the server, and I got
the full 1 Gbps.
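
The iperf check was roughly along these lines (the host name is a
placeholder, and -P just runs a few parallel streams):

on the nginx server:  iperf -s
on the client:        iperf -c <nginx-server> -t 30 -P 4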

At this point, I bypassed the proxy software, thinking it was the problem.
The (Linux-based) hardware the proxy server runs on was still routing the
traffic. Bypassing the proxy software did not change the performance.

Does anyone have any suggestions on how I can solve this? I would have
expected an increase in the number of connections to yield an increase in
speed, but that was not the result. It seems that with the artificial
latency nginx is only able to handle approximately 3,000 requests per
second, whereas without the latency I saw over 7,000.

Thank you all for your help, and I will try to provide more information if
that will help anyone help me solve this problem.

Hello!

On Thu, Nov 20, 2014 at 09:49:36PM -0600, Jacob Burkamper wrote:

[…]

Does anyone have any suggestions on how I can solve this? I would have
expected an increase in the number of connections to yield an increase in
speed, but that was not the result. It seems that with the artificial
latency nginx is only able to handle approximately 3,000 requests per
second, whereas without the latency I saw over 7,000.

Have you checked your OS limits on network buffers? If there
are not enough buffers, or the buffers are undersized, this may have
a negative effect on bandwidth once latency is increased.
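
As a rough back-of-the-envelope check (numbers only indicative): to keep
a 1 Gbit/s link busy across 30 ms of added delay, each connection needs a
socket buffer of at least the bandwidth-delay product,

    1 Gbit/s x 30 ms = (10^9 / 8) bytes/s x 0.03 s ≈ 3.75 MB,

and default Linux buffer limits and TCP autotuning often start well below
that, capping what each connection can reach at this latency.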


Maxim D.
http://nginx.org/

This has been a bit of a learning experience for me, and as such I am not
currently too well versed in tweaking the Linux kernel’s TCP code. However,
if the buffers were incorrect, I would expect to see dropped packets,
correct? I took a packet capture of a simple ApacheBench test exhibiting
the problem, but I couldn’t find any evidence of dropped packets anywhere.
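
For reference, the test and the check for retransmissions looked roughly
like this (file name and URL are placeholders):

ab -n 10000 -c 100 http://<nginx-server>/test.html
tshark -r capture.pcap -Y "tcp.analysis.retransmission"   # list retransmitted segments
netstat -s | grep -i retrans                               # kernel retransmission counters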

Even so, I’m willing to try changes. I don’t suppose you could suggest
some improvements or point me in the direction of good reading? I’d
appreciate it.

Hello!

On Fri, Nov 21, 2014 at 01:33:43PM -0600, Jacob Burkamper wrote:

This has been a bit of a learning experience for me, and as such I am not
currently too well versed in tweaking the Linux kernel’s TCP code. However,
if the buffers were incorrect, I would expect to see dropped packets,
correct? I took a packet capture of a simple ApacheBench test exhibiting
the problem, but I couldn’t find any evidence of dropped packets anywhere.

AFAIR, Linux networking code starts to use smaller socket buffers at some
point, so it’s quite possible that you won’t see any errors.
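
If you want to see whether the kernel is under TCP memory pressure while
the test runs, something like this should show it (just a pointer of where
to look, not a diagnosis):

cat /proc/net/sockstat        # the "mem" field is TCP memory in use, in pages
sysctl net.ipv4.tcp_mem       # low / pressure / high thresholds, in pages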

Even so, I’m willing to try changes. I don’t suppose you could suggest some
improvements or point me in the direction of good reading? I’d appreciate
it.

Something like “linux network buffers tuning” gives reasonable
results when googled.
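
The usual knobs those guides discuss are the socket and TCP buffer
limits; the values below are only illustrative, not a recommendation:

sysctl -w net.core.rmem_max=16777216                  # per-socket receive buffer cap
sysctl -w net.core.wmem_max=16777216                  # per-socket send buffer cap
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"     # TCP receive: min / default / max
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"     # TCP send: min / default / max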


Maxim D.
http://nginx.org/