10 000 req/s: tpd2 - why is it so fast?

Hi all,

I have come across an article whose author claims that his web server
can handle 10 000 requests per second. Since I am far from an expert on
high-performance web servers, I would like to ask you: does his
benchmark have any flaw, or is his web server simply too simple to be slow?

I never thought that a Lisp-based implementation could be as fast as
C-based implementations such as Nginx. Is there any magic here?

You can read his article here: “John Fremlin's blog: teepeedee2 achieves 10k requests/second”

Thanks

pcdinh

That benchmark shows that you can send a “hello world” message over a
localhost connection very fast. This is an interesting result on its own,
but it has nothing to do with the real-life workloads of HTTP servers.

Denis.

On Fri, Oct 09, 2009 at 05:30:19PM +0200, Dinh P. wrote:

I have come across an article whose author claims that his web server
can handle 10 000 requests per second. Since I am far from an expert on
high-performance web servers, I would like to ask you: does his
benchmark have any flaw, or is his web server simply too simple to be slow?

I never thought that a Lisp-based implementation could be as fast as
C-based implementations such as Nginx. Is there any magic here?

You can read his article here: “John Fremlin's blog: teepeedee2 achieves 10k requests/second”

On a notebook running FreeBSD 7.0 on a Pentium M 1.70GHz, using this
configuration:

server {
    listen      8000;
    return      404;
    access_log  off;
}

I got these results via localhost:

ab -n 30000 -c 10       ~8200 r/s
ab -n 30000 -c 10 -k    ~20000 r/s

This means that this microbenchmark mostly tests TCP connection
establishment via localhost: keepalive is about 2.4 times faster
(20000 / 8200 ≈ 2.4).

On Fri, Oct 09, 2009 at 08:26:32PM +0400, Igor S. wrote:

You can read his article here: “John Fremlin's blog: teepeedee2 achieves 10k requests/second”

I got these results via localhost:

ab -n 30000 -c 10       ~8200 r/s
ab -n 30000 -c 10 -k    ~20000 r/s

This means that this microbenchmark mostly tests TCP connection
establishment via localhost: keepalive is about 2.4 times faster.

BTW, using embedded perl:

server {
    listen      8010;
    access_log  off;

    location = /test {
        perl 'sub {
            my $r = shift;
            $r->send_http_header("text/html");
            $r->print("<h1>Hello ", $r->variable("arg_name"), "</h1>");
            return OK;
        }';
    }
}

“ab -n 30000 -c 10 -k” got ~7800 r/s.
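For what it's worth, the handler reads the “name” query argument via
$r->variable("arg_name"), so a matching run would look something like
this (URL assumed from the "listen 8010" / "location = /test" config):

ab -n 30000 -c 10 -k "http://127.0.0.1:8010/test?name=world"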

FWIW, in our testing (using something slightly more sophisticated than
ab), we can sustain about 70k requests per second in nginx. (Obviously:
perfect network, small objects, fast clients, etc.) This is on normal
“server” hardware.

On Oct 09, Akins, Brian wrote:

FWIW, in our testing (using something slightly more sophisticated than ab),
we can sustain about 70k requests per second in nginx. (Obviously, perfect
network, small objects, fast clients, etc.) This is on normal “server”
hardware.

Could you tell us what you used instead of ab? I’ve always found the
benchmarking tool itself to be the limiting factor when performing such
tests with nginx.

On 10/24/09 12:56 PM, “Arvind Jayaprakash” [email protected] wrote:

Could you tell us what you used instead of ab? I’ve always found the
benchmarking tool itself to be the limiting factor when performing such
tests with nginx.

It’s a benchmarker that we wrote based on curl. It takes in a large
list of URLs, forks, and makes lots of requests.

I believe http://curl-loader.sourceforge.net/ is what he’s referring to.
I have usually found it to be fairly client-heavy. Siege works quite
well for my needs, or httperf.
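For example, a roughly comparable siege run against the test server
above could look like this (URL assumed; -b disables siege's default
inter-request delay):

siege -b -c 10 -t 60S http://127.0.0.1:8000/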

Hello,

On 10/10/2009 01:42 AM, Igor S. wrote:

BTW, using embedded perl:
[…]
“ab -n 30000 -c 10 -k” got ~7800 r/s.

In case you are curious, John has posted an update
comparing teepeedee2 vs the above perl module on his laptop.
Here is the link:

http://john.freml.in/teepeedee2-vs-nginx

Hello!

On Thu, Oct 29, 2009 at 09:50:25AM +0300, Igor S. wrote:

ab -n 30000 -c 10 -k    ~20000 r/s
[…]
Why can he not run the benchmark with keepalive?
Well, it’s the “useless benchmarks about nothing” game, as presented by
Alex Kapranoff at the last Highload++ conference. It’s not about server
speed; it’s about useless numbers and fun. The key thing is to keep the
benchmarks as equal as possible, so using keepalive here is not an
option, since he didn’t use it in the previous benchmarks.

Using “-c1” instead of “-c10” (as used in the original post) looks
like a bug which renders the new results completely irrelevant, so
there is nothing to talk about.

Maxim D.

On Thu, Oct 29, 2009 at 11:38:17AM +0900, Zev B. wrote:

[…]
John has posted an update comparing teepeedee2 vs the above perl module
on his laptop. Here is the link:

John Fremlin's blog: teepeedee2 vs nginx's perl module

For some reason he ran “ab -c1” instead of “ab -c10”, while nginx may
run perl in two workers on a Core2 Duo (if worker_processes is 2). I
believe that would double the benchmark result. Second, he still mostly
tests TCP connection establishment via localhost instead of server
speed. Why can he not run the benchmark with keepalive?
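For reference, the directive in question (the value shown is simply the
two-worker setup described above):

worker_processes  2;    # one worker per core on a Core2 Duo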

On Thu, Oct 29, 2009 at 01:38:24PM +0300, Maxim D. wrote:

[…]

I meant: why can he not re-run the whole benchmark with keepalive?

On Thu, Oct 29, 2009 at 01:38:24PM +0300, Maxim D. wrote:

[…]

Using “-c1” instead of “-c10” (as used in the original post) looks
like a bug which renders the new results completely irrelevant, so
there is nothing to talk about.

BTW, the benchmark is really strange: first he mentions the C10K problem
(10,000 simultaneous connections), but then talks about a record of
10,000 requests per second over just 10 simultaneous connections. These
are very, very different things. I believe only Varnish and nginx in
this set are even able to handle C10K. As for Varnish, I do not
understand what it is doing in the benchmark at all. As I understand it,
Varnish is only a caching proxy server and cannot generate dynamic
responses (except error pages).
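To make the “caching proxy” role concrete, here is what the equivalent
setup looks like in nginx terms (a minimal sketch, not Varnish syntax;
the ports and cache path are hypothetical):

proxy_cache_path  /var/cache/nginx  keys_zone=cache:10m;

server {
    listen      8020;
    access_log  off;

    location / {
        proxy_pass   http://127.0.0.1:8080;
        proxy_cache  cache;
    }
}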

On 10/30/09 2:53 AM, “Igor S.” [email protected] wrote:

I believe only varnish and nginx in this set are ever
able to keep C10K.

FWIW, in the interest of not spreading FUD: apache2 with the worker MPM
can comfortably handle 10k+ simultaneous connections. It takes a ton of
RAM, but I see it done every day.

On Fri, Oct 30, 2009 at 09:16:46AM -0400, Akins, Brian wrote:

On 10/30/09 2:53 AM, “Igor S.” [email protected] wrote:

I believe only varnish and nginx in this set are ever
able to keep C10K.

FWIW, in the interest of not spreading FUD: apache2 with the worker MPM
can comfortably handle 10k+ simultaneous connections. It takes a ton of
RAM, but I see it done every day.

Yes, Apache2 should be able to handle C10K using threads.
BTW, is the ton of RAM virtual memory (for the thread stacks) or
physical memory?
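For scale, a worker MPM configuration sized for that order of magnitude
might look roughly like this (a sketch with illustrative numbers, not
anyone's actual settings):

<IfModule mpm_worker_module>
    ServerLimit         100
    ThreadLimit         100
    ThreadsPerChild     100
    MaxClients        10000     # ServerLimit * ThreadsPerChild
    ThreadStackSize   1048576   # 1 MB reserved per thread stack
</IfModule>

With 10 000 threads, the stacks alone reserve about 10 GB of address
space, though only the pages actually touched consume physical RAM.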

On Fri, Oct 30, 2009 at 10:29:01AM -0400, Akins, Brian wrote:

On 10/30/09 9:29 AM, “Igor S.” [email protected] wrote:

Yes, Apache2 should be able to handle C10K using threads.
BTW, is the ton of RAM virtual memory (for the thread stacks) or physical memory?

Both. I can get some numbers when I get back into office.

It would be interesting.
What type of load is it: dynamic, static, or both?
Have you tried the prefork MPM under this load?


On 10/30/09 10:42 AM, “Igor S.” [email protected] wrote:

It would be interesting.
What is type of load : dynamic, static or both ?

Both, with some proxying thrown in as well. We use fastcgi and/or proxy
to an app server for the “really” dynamic stuff.

Have you tried prefork MPM for the load ?

Yes, we had memory issues. Also, some of the “custom” stuff we do is
more suited to a “few” processes with “many” threads.

Obviously, I’m very interested in nginx :) I’ve been impressed so far.

What is the maximum request rate that nginx can handle, assuming it is
making a reverse-proxy call to an application that responds in less
than 100 ms?

Let’s say, hypothetically, that it has 500 ports to send the requests
to…
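Assuming “500 ports” means 500 concurrent upstream connections,
Little's law gives a back-of-the-envelope ceiling:

max throughput = concurrency / latency
               = 500 connections / 0.1 s
               = 5 000 requests per second

Going by the ~70k r/s figure earlier in this thread, nginx itself
should not be the bottleneck at that rate; the upstream concurrency
would be.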

regards

Asif