Performance

Hi,

I read this article:
http://mongrel.rubyforge.org/docs/how_many_mongrels.html
and decided to play around a little bit…

I didn’t test it against Mongrel, but against Nginx (on the default page
saying “Welcome to nginx”)… just to get a feel for the numbers. I’m on a
Slicehost VPS with 256 MB of RAM.

Here are the results:

$ time httperf --server www.server.com --port 80 --num-conns 25
httperf --client=0/1 --server=unebo --port=80 --uri=/ --send-buffer=4096
--recv-buffer=16384 --num-conns=25 --num-calls=1
Maximum connect burst length: 1

Total: connections 25 requests 25 replies 25 test-duration 10.048 s

Connection rate: 2.5 conn/s (401.9 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 362.1 avg 401.9 max 451.5 median 402.5 stddev
15.4
Connection time [ms]: connect 204.2
Connection length [replies/conn]: 1.000

Request rate: 2.5 req/s (401.9 ms/req)
Request size [B]: 56.0

Reply rate [replies/s]: min 2.4 avg 2.4 max 2.4 stddev 0.0 (2 samples)
Reply time [ms]: response 197.6 transfer 0.1
Reply size [B]: header 216.0 content 151.0 footer 0.0 (total 367.0)
Reply status: 1xx=0 2xx=25 3xx=0 4xx=0 5xx=0

CPU time [s]: user 0.53 system 9.00 (user 5.3% system 89.6% total 94.9%)
Net I/O: 1.0 KB/s (0.0*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

real 0m10.059s
user 0m0.528s
sys 0m9.013s

$ time httperf --server www.server.com --port 80 --num-conns 25 --rate 2.5
httperf --client=0/1 --server=unebo --port=80 --uri=/ --rate=2.5
--send-buffer=4096 --recv-buffer=16384 --num-conns=25 --num-calls=1
Maximum connect burst length: 1

Total: connections 25 requests 25 replies 25 test-duration 10.005 s

Connection rate: 2.5 conn/s (400.2 ms/conn, <=3 concurrent connections)
Connection time [ms]: min 323.6 avg 411.9 max 822.9 median 394.5 stddev
90.9
Connection time [ms]: connect 214.4
Connection length [replies/conn]: 1.000

Request rate: 2.5 req/s (400.2 ms/req)
Request size [B]: 56.0

Reply rate [replies/s]: min 2.4 avg 2.4 max 2.4 stddev 0.0 (2 samples)
Reply time [ms]: response 197.5 transfer 0.1
Reply size [B]: header 216.0 content 151.0 footer 0.0 (total 367.0)
Reply status: 1xx=0 2xx=25 3xx=0 4xx=0 5xx=0

CPU time [s]: user 0.46 system 7.59 (user 4.6% system 75.9% total 80.4%)
Net I/O: 1.0 KB/s (0.0*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

real 0m10.014s
user 0m0.456s
sys 0m7.596s

Since the article says to “start with 100”, I’m a little bit worried,
because the test already takes 10 s with only 25 connections. Is this
normal? Can someone make more sense of these numbers so we can have a
fruitful discussion? I’m really interested in understanding more about the topic.

Thank you!
david

Probably has something to do with the fact that that page recommends you
have the server next to you, with a single switch between you and the
server. Your test is over the internet, so you will have massive lag
compared to that scenario. The page you mention is basically benchmarking
just the server, whereas you are benchmarking the server plus every
router and switch between it and you. No wonder your times suck compared
to a direct 100 Mbps setup.
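
You can actually see that in your numbers. Without --rate, httperf opens the
connections back to back (note the “<=1 concurrent connections”), so 25
connections at ~402 ms each comes to about 10 s; in the second run, --rate 2.5
makes 25 connections take 10 s by construction (25 / 2.5 = 10). And roughly
200 ms of each connection is spent in the TCP connect alone, which is
basically one network round trip to the slice. A quick way to separate network
latency from server time (assuming the slice answers ping, you can SSH into
it, and nginx also listens on localhost) is to measure the RTT from your
client and rerun httperf on the server itself:

$ ping -c 10 www.server.com
$ httperf --server localhost --port 80 --uri / --num-conns 25 --num-calls 1

If the localhost run comes back with connection times in the low milliseconds,
the ~400 ms you’re seeing is almost entirely the path between you and the VPS,
not nginx.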


Anthony

:)
