I have come across an article whose author claimed that his web server can handle 10,000 requests per second. Because I am far from an expert on high-performance web servers, I would like to ask you whether his benchmark has any flaws, or whether his web server is simply too simple to be slow. I would never have thought that a LISP-based implementation could be as fast as C-based implementations such as Nginx. Is there any magic here?
That benchmark shows that you can send a "hello world" message over a localhost connection very fast. This is an interesting result on its own, but it has nothing to do with the real-life workloads of HTTP servers.
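For context, this kind of microbenchmark usually boils down to pointing ApacheBench at a trivial handler over loopback, something like the following (hypothetical port and request counts, not the article's actual command line):

    # 100,000 requests over 10 concurrent loopback connections to a "hello world" handler
    ab -n 100000 -c 10 http://127.0.0.1:8080/

All that is exercised here is connection setup and a tiny in-memory response: no disk, no slow clients, no real network.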
On Fri, Oct 09, 2009 at 05:30:19PM +0200, Dinh P. wrote:
I have come across an article whose author claimed that his web server can handle 10,000 requests per second. [...]
FWIW, in our testing (using something slightly more sophisticated than ab), we can sustain about 70k requests per second in nginx. (Obviously: perfect network, small objects, fast clients, etc.) This is on normal "server" hardware.
Could you tell us what you used instead of ab? I've always found the benchmarking tool to be a limitation when performing such tests with nginx.
Why can he not run the benchmark with keepalive?
Well, it's the "useless benchmarks about nothing" game, as presented by Alex Kapranoff at the last Highload++ conference. It's not about server speed; it's about multiple useless numbers and fun. The key thing is to keep the benchmarks as equal as possible, so using keepalive here is not an option, as he didn't use it in the previous benchmarks.
Using "-c1" instead of "-c10" (as used in the original post) looks like a bug which rendered the new results completely irrelevant. So there is nothing to talk about.
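For reference, the two ab knobs being argued about are concurrency and keepalive; a hedged illustration (hypothetical URL, not the benchmark's actual command lines):

    # 1 vs. 10 concurrent connections, no keepalive: every request pays for a new TCP connection
    ab -n 100000 -c 1  http://127.0.0.1:8080/
    ab -n 100000 -c 10 http://127.0.0.1:8080/

    # -k enables HTTP keepalive, amortizing connection setup,
    # so the results are no longer comparable to non-keepalive runs
    ab -k -n 100000 -c 10 http://127.0.0.1:8080/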
On Thu, Oct 29, 2009 at 11:38:17AM +0900, Zev B. wrote:
[nginx server block with an embedded perl "hello world" handler elided]
This means that this microbenchmark mostly tests TCP connection establishment, comparing teepeedee2 vs. the above perl module on his laptop. Here is the link:
For some reason, he ran "ab -c1" instead of "ab -c10", while nginx may run perl in 2 workers on a Core2 Duo (if worker_processes is 2). I believe that would double the benchmark result. Second, he still mostly tests TCP connection establishment via localhost instead of server speed. Why can he not run the benchmark with keepalive?
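The quoted configuration did not survive in this excerpt, but a minimal nginx embedded-perl "hello world" handler of the kind being described might look like the sketch below (assuming nginx built with --with-http_perl_module; the file names and port are illustrative, not the original poster's config):

    # nginx.conf (sketch)
    worker_processes  2;             # one worker per core on a Core2 Duo

    events { }

    http {
        perl_modules  perl/lib;
        perl_require  hello.pm;

        server {
            listen  8080;

            location / {
                perl  hello::handler;
            }
        }
    }

    # perl/lib/hello.pm (sketch)
    package hello;
    use nginx;

    sub handler {
        my $r = shift;
        $r->send_http_header("text/plain");
        $r->print("hello world");
        return OK;
    }

    1;
    __END__

With two workers, "ab -c10" can keep both busy, which is why running "ab -c1" is said to roughly halve the attainable numbers.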
On Thu, Oct 29, 2009 at 01:38:24PM +0300, Maxim D. wrote:
Using "-c1" instead of "-c10" (as used in the original post) looks like a bug which rendered the new results completely irrelevant. So there is nothing to talk about.
BTW, the benchmark is really strange: first he mentions the C10K problem (10,000 simultaneous connections), but then talks about a record of 10,000 requests per second over just 10 simultaneous connections. These are very, very different things. I believe varnish and nginx are the only servers in this set that could handle C10K at all. As for varnish, I do not understand what it is doing in the benchmark at all. As I understand it, varnish is only a caching proxy server and cannot generate dynamic responses (except error pages).
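A quick back-of-the-envelope illustration of why these are different (assumed numbers, just to make the distinction concrete): 10 connections each completing roughly 1,000 requests per second already gives 10,000 requests/s with only 10 sockets open, whereas C10K means 10,000 sockets open at the same time, many of them slow or idle, which stresses the server's connection handling rather than its raw request throughput.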
I believe varnish and nginx are the only servers in this set that could handle C10K at all.
FWIW, in the interest of not spreading FUD: apache2 with the worker MPM can comfortably handle 10k+ simultaneous connections. It takes a ton of RAM, but I see it done every day.
Yes, Apache2 should be able to handle c10k using threads.
BTW, is the ton of RAM virtual memory (for the thread stacks) or physical memory?
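For context, a hedged sketch of the kind of worker MPM sizing this usually involves (illustrative values for Apache 2.2, not taken from the thread); ThreadStackSize controls the per-thread stack reservation, which is mostly virtual address space unless the stacks are actually touched:

    <IfModule mpm_worker_module>
        ServerLimit        40
        ThreadLimit        256
        ThreadsPerChild    256
        MaxClients         10240      # 40 processes x 256 threads = 10,240 connections
        StartServers       4
        MinSpareThreads    256
        MaxSpareThreads    2048
        ThreadStackSize    1048576    # 1 MB per thread instead of the typical 8 MB Linux default
    </IfModule>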