Caucho Resin: faster than nginx?

FYI:
http://www.caucho.com/resin-application-server/press/resin-java-web-server-outperforms-nginx/

" Using industry standard tool and methodology, Resin Pro web server was
put to the test versus Nginx, a popular web server with a reputation for
efficiency and performance. Nginx is known to be faster and more
reliable
under load than the popular Apache HTTPD. Benchmark tests between Resin
and
Nginx yielded competitive figures, with Resin leading with fewer errors
and
faster response times. In numerous and varying tests, Resin handled 20%
to
25% more load while still outperforming Nginx. In particular, Resin was
able to sustain fast response times under extremely heavy load while
Nginx
performance degraded. "

On Saturday 18 August 2012 03:18:32 Adam Z. wrote:

[…]

What nginx configuration was used during the testing? Did they tune it?
Did Resin use an equivalent level of logging? What build options were used
to build nginx? Why did they test on a 1k page? I don’t think the average
size of a typical web page and its elements is about 1 KB. Does that mean
Resin cannot effectively handle larger files? What about memory usage?
And after all, why did they use the latest version of Resin and a
relatively old version of nginx?

wbr, Valentin V. Bartenev

P.S.

vbart@vbart-laptop ~/Development/Nginx/tests/wrk $ curl -i http://localhost:8000/1k.html
HTTP/1.1 200 OK
Server: nginx/1.3.5
Date: Sat, 18 Aug 2012 03:10:13 GMT
Content-Type: text/html
Content-Length: 1063
Last-Modified: Sat, 18 Aug 2012 02:40:43 GMT
Connection: keep-alive
ETag: "502f00ab-427"
Accept-Ranges: bytes

0 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
0123456789 0123456789 0123456789
1 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
0123456789 0123456789 0123456789
2 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
0123456789 0123456789 0123456789
3 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
0123456789 0123456789 0123456789
4 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
0123456789 0123456789 0123456789
5 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
0123456789 0123456789 0123456789
6 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
0123456789 0123456789 0123456789
7 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
0123456789 0123456789 0123456789
8 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
0123456789 0123456789 0123456789
9 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
0123456789 0123456789 0123456789
vbart@vbart-laptop ~/Development/Nginx/tests/wrk $ cat ../build/test.conf
#error_log logs/error.log debug;

worker_processes 2;
worker_priority -5;
worker_cpu_affinity 1000 0010;

events {
    accept_mutex off;
}

http {
    sendfile on;
    access_log off;
    tcp_nopush on;

    open_file_cache max=16;
    open_file_cache_valid 1h;

    server {
        location / { }
    }
}
vbart@vbart-laptop ~/Development/Nginx/tests/wrk $ grep "model name" /proc/cpuinfo | uniq
model name : Intel(R) Core(TM) i3 CPU M 350 @ 2.27GHz
vbart@vbart-laptop ~/Development/Nginx/tests/wrk $ ./wrk -r 3m -c 10 -t 1 --pipeline 100 http://localhost:8000/1k.html
Making 3000000 requests to http://localhost:8000/1k.html
1 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency      5.79ms   50.47us   6.03ms   75.42%
    Req/Sec    170.72k   450.75   171.00k    72.03%
3000005 requests in 17.54s, 3.63GB read
Requests/sec: 171078.30
Transfer/sec: 212.25MB
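As a quick sanity check, the wrk summary above is internally consistent: dividing the totals by the elapsed time reproduces the reported per-second rates to within rounding (the small residual comes from the elapsed time being printed rounded to 10 ms). A sketch, using only the figures printed above:

```python
# Sanity-check the wrk summary line against its own totals.
requests = 3_000_005          # total requests completed
elapsed_s = 17.54             # elapsed time as printed (rounded)
read_gb = 3.63                # total data read
reported_rps = 171_078.30
reported_mb_s = 212.25

rps = requests / elapsed_s
mb_s = read_gb * 1024 / elapsed_s

print(f"{rps:.1f} req/s vs {reported_rps} reported")
print(f"{mb_s:.1f} MB/s vs {reported_mb_s} reported")

# Agreement to well under 1%; the slack is the rounded elapsed time.
assert abs(rps - reported_rps) / reported_rps < 0.001
assert abs(mb_s - reported_mb_s) / reported_mb_s < 0.005
```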

On Saturday 18 August 2012 07:17:27 Valentin V. Bartenev wrote:
[…]

3000005 requests in 17.54s, 3.63GB read
Requests/sec: 171078.30
Transfer/sec: 212.25MB

All the same, except:

  • clang 3.1 → gcc 4.7.1
  • removed all unused modules

vbart@vbart-laptop ~/Development/Nginx/tests/wrk $ ./wrk -r 3m -c 10 -t 1 --pipeline 100 http://localhost:8000/1k.html
Making 3000000 requests to http://localhost:8000/1k.html
1 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency      4.70ms  232.75us   5.39ms   76.29%
    Req/Sec    206.94k   281.90   207.00k    94.85%
3000008 requests in 14.46s, 3.63GB read
Requests/sec: 207533.34
Transfer/sec: 257.29MB

vbart@vbart-laptop ~/Development/Nginx/tests/build $ sbin/nginx -V
nginx version: nginx/1.3.5
built by gcc 4.7.1 (Gentoo 4.7.1 p1.0, pie-0.5.3)
configure arguments: --prefix=/home/vbart/Development/Nginx/tests/build --with-cc-opt='-O3 -march=native'
--without-http-cache --without-http_charset_module --without-http_gzip_module --without-http_ssi_module
--without-http_userid_module --without-http_access_module --without-http_auth_basic_module
--without-http_autoindex_module --without-http_status_module --without-http_geo_module
--without-http_map_module --without-http_split_clients_module --without-http_referer_module
--without-http_rewrite_module --without-http_proxy_module --without-http_fastcgi_module
--without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module
--without-http_limit_conn_module --without-http_limit_req_module --without-http_empty_gif_module
--without-http_browser_module --without-http_upstream_ip_hash_module
--without-http_upstream_least_conn_module --without-http_upstream_keepalive_module

Hey,

Why did they test on 1k page?

Because in Resin “Small static files are cached in memory, improving
performance by avoiding the filesystem entirely. Small files like 1-pixel
images can be served with little delay.” (source:
Web Server: Static Files - Resin 4.0 Wiki).

Biased benchmark (Resin serving from memory vs. nginx opening, reading,
serving and closing files).
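To be fair, nginx can narrow that gap for small hot files without an application-level cache: the `open_file_cache` directives (already present in Valentin's test config earlier in the thread) keep file descriptors and metadata of recently served files open, skipping the open/stat/close round-trip on repeat hits. A minimal sketch; the sizes and timeouts below are illustrative assumptions, not values from any test in this thread:

```nginx
# Illustrative values only -- tune to the working set being benchmarked.
http {
    # Keep FDs, sizes and mtimes of up to 1024 recently used files,
    # dropping entries not touched for 20s.
    open_file_cache          max=1024 inactive=20s;
    open_file_cache_valid    60s;   # revalidate cached metadata every 60s
    open_file_cache_min_uses 2;     # only cache files requested twice
    open_file_cache_errors   on;    # also cache lookup failures
}
```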

Best regards,
Piotr S. < [email protected] >

which version of resin did they use, the open source or pro version?
mike

On Fri, Aug 17, 2012 at 11:18 PM, Adam Z. [email protected] wrote:

[…]


Adam
[email protected]


nginx mailing list
[email protected]
nginx Info Page


James Michael DuPont
Member of Free Libre Open Source Software Kosova http://flossk.org
Saving wikipedia™ articles from deletion
http://SpeedyDeletion.wikia.com
Contributor FOSM, the CC-BY-SA map of the world http://fosm.org
Mozilla Rep https://reps.mozilla.org/u/h4ck3rm1k3

More details:
http://blog.caucho.com/2012/07/05/nginx-120-versus-resin-4029-performance-tests/

On Fri, Aug 17, 2012 at 10:14 PM, Mike Dupont wrote:

We are making an nginx benchmark under a 10GbE network. For an empty page,
we get about 700k rps with nginx, compared with about 100k rps with Resin
Pro.

In Caucho’s test, they use an i7 with 4 cores / 8 HT, 2.8 GHz, 8 MB cache,
8 GB RAM, and I use dual Intel E5645. I think the result can be improved
through some tuning.

We tuned the server configuration and the nginx configuration, but didn’t
tune much on Resin. We didn’t find any configuration for Caucho’s testing,
neither nginx nor Resin. So I wonder: how to make the rps of Resin go
above 100k?
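For reference, the nginx side of this kind of empty-page benchmark usually comes down to a handful of settings. A hedged sketch of the typical knobs; every value below is an illustrative assumption, not a figure from this setup:

```nginx
worker_processes  24;                # roughly one per core actually serving
events {
    accept_mutex  off;               # as in Valentin's test config above
}
http {
    access_log  off;                 # logging dominates at several 100k rps
    sendfile    on;
    tcp_nopush  on;
    keepalive_requests  1000000;     # don't tear down connections mid-test
    server {
        listen  8000 backlog=65535;  # needs net.core.somaxconn raised too
        location / { }               # serve a small static file from docroot
    }
}
```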

On Sat, Aug 18, 2012 at 3:26 PM, Mike Dupont [email protected] wrote:

Resin Pro 4.0.29, so what’s the point? We are talking about open source
software here, no?
mike

On Sat, Aug 18, 2012 at 6:39 AM, Adam Z. [email protected] wrote:

[…]

I use a 32-core Xeon E5 CPU with a 10G NIC; for an empty page I get nearly
400-500k rps on nginx 1.2.4. How to get 700k rps or more (1000k rps)?

Arnold
Liu Lantao wrote:

[…]

Posted at Nginx Forum: