Is this normal performance?

I’m starting to benchmark my application and was wondering if these are
normal results / degradation when new layers are added.

Basically I was worried about the ~35% decline when it hits mongrel, and
the following 40% decline when it hits a blank rails app. Getting just
over a third of the theoretical max seems pretty awful when it’s just a
blank page.

Can anyone suggest either some config settings, or where to look to see
what improvements can be made?

This is from httperf - it was run on my desktop pointed at a webserver
in another room (2 hops away max). I ran a 5-connection warm-up prior
to running the real test.

(apache - gif image)
httperf --server=test --port=80 --uri=/bg.gif --num-conns=10000

Reply rate [replies/s]: min 625.1 avg 653.0 max 687.0 stddev 31.4 (3
samples)
Reply rate [replies/s]: min 450.0 avg 495.7 max 545.6 stddev 50.9 (4
samples)
Reply rate [replies/s]: min 494.8 avg 520.7 max 546.6 stddev 36.6 (2
samples)
Reply rate [replies/s]: min 554.6 avg 581.6 max 600.8 stddev 24.1 (3
samples)

avg: 562

(mongrel - gif image)
httperf --server=test --port=3000 --uri=/images/bg.gif --num-conns=8000
Reply rate [replies/s]: min 347.2 avg 362.5 max 370.4 stddev 10.8 (4
samples)
Reply rate [replies/s]: min 337.4 avg 368.6 max 382.4 stddev 21.0 (4
samples)
Reply rate [replies/s]: min 351.6 avg 361.6 max 378.8 stddev 11.9 (4
samples)
Reply rate [replies/s]: min 344.0 avg 361.7 max 376.4 stddev 14.7 (4
samples)

avg: 363

% = 64.5% of Potential Max

(mongrel - blank controller / index file, no global layout etc.)
httperf --server=test --port=3000 --uri=/test --num-conns=4000
Reply rate [replies/s]: min 205.6 avg 210.9 max 217.0 stddev 5.7 (3
samples)
Reply rate [replies/s]: min 193.5 avg 205.3 max 215.4 stddev 11.1 (3
samples)
Reply rate [replies/s]: min 205.8 avg 215.9 max 226.2 stddev 10.2 (3
samples)
Reply rate [replies/s]: min 192.6 avg 209.1 max 218.2 stddev 14.3 (3
samples)

avg: 210

37% of Potential Max
58% of Potential Mongrel Max

On Nov 26, 2007 12:59 PM, Bob Br wrote:

I’m starting to benchmark my application and was wondering if these are
normal results / degradation when new layers are added.

Yup. There is quite a bit of Ruby executed per request, and Ruby is a
slow language.
By the time you add a real application, though (with typical
round-trips of 20-50 msec), all this overhead becomes something like
~10-20% of your total CPU time, so it’s not really a big deal.

Can anyone suggest either some config settings, or where to look to see what improvements can be made?

Two common pitfalls that make a big difference in the tests of [render
:text => ‘Hello, World’] variety are:

  • running Rails in the development environment (forgetting to run Rails
    with -e production)
  • storing sessions on the file system (use the database or memcached
    instead, or turn sessions off altogether; see the sketch below)
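
A minimal sketch of both fixes, Rails 1.x era (from memory, so
double-check against your version):

  # start mongrel in production mode
  mongrel_rails start -e production

  # config/environment.rb: move sessions out of PStore files
  Rails::Initializer.run do |config|
    config.action_controller.session_store = :active_record_store
  end

  # or turn sessions off where you don't need them
  class TestController < ApplicationController
    session :off
  end

(:active_record_store also needs a sessions table; recent Rails versions
have a rake db:sessions:create task for that, if memory serves.)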


Alexey V.
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]

Thanks for the reply.

I made sure to be in production mode, though I am using the default
session setup for Rails (PStore). I was considering running memcache,
but read on the forum that it won’t store ActiveRecord info, and I am
currently putting some AR stuff in there.

If I’m misinformed, great! The server has 10+ GB of RAM that would
easily handle the workload. Otherwise, what would you suggest for
sessions? This is a SUSE Linux box connected to an MS SQL database using
the ODBC connection. DB sessions, ActiveRecord, something else?

The other thing I am trying to fix is that my app is chatty, with some
ajax magic + frequent saves to the database. We are trying to get a
mongrel cluster setup, which should help with some of this, but I didn’t
know if there were other bottlenecks to check out.

On Nov 26, 2007 3:05 PM, Bob Br wrote:

Otherwise what would you suggest for sessions?
Go with database as a well-rounded default. Switch to memcached if
database becomes the bottleneck.

but read on the forum that it won’t store activerecord info, and I am currently putting some AR stuff in there.
Not sure if I read it right, but if the above means “serialized model
instances in the session”, you shouldn’t do that. Sessions should hold
IDs.
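
Roughly, the pattern looks like this (a sketch; the names are made up):

  # at login, store only the id
  session[:user_id] = user.id

  # on each request, re-fetch the record
  class ApplicationController < ActionController::Base
    protected

    def current_user
      @current_user ||= User.find(session[:user_id]) if session[:user_id]
    end
  end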

connected to a MS SQL database using the ODBC connection.
Duh… not that SQLServer is a bad product… but the Rails community
basically doesn’t care too much about it. I wonder if you have to
solve some crappy little problems because of that. That was definitely
my experience with the Rails+SQLServer combo a year ago, and
Rails+Oracle just recently.


Alexey V.
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]

On Nov 26, 2007 3:48 PM, Bob Br wrote:

Just for fun, I changed the environment to active record sessions, and
it dropped from 200ish to ~25.

Whoa! This is what, 40 msec for a simple “retrieve row by primary key”
type operation? You have much bigger problems than just session
storage, if that’s the case in your prod environment.
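
(The math: 25 replies/sec is 40 msec per request; at 200ish/sec each
request was taking ~5 msec, so the session read/write alone would be
adding ~35 msec to every hit.)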


Alexey V.
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]

Well, very slightly in its defense, it did have to create a couple
thousand new rows… yeah, who am I kidding, the thing is pokey.

Me and a partner are trying to get this thing in shape, but neither of
us is a ruby / webserver guru. We realize something isn’t working quite
right here, and while we are looking to get mongrel clustering going and
this will be load balanced between 2 machines, there is something else
going on. Until we figure out what it is, extra hardware won’t really
solve the problem.

For my testing, I’m pointing at a specific machine, and only have a
single mongrel process going.

Expanding on what I posted earlier, it’s a SUSE 10 Enterprise machine;
we had to set up the ODBC connection to MS SQL Server using FreeTDS, and
I think there might be one other kludge used in there. My buddy was the
one who got the servers talking, but I think it’s pretty much a stock
install from the gems. I’ll double check if there was much else done to
these machines.

Regardless, do you have any other suggestions on where we might look to
either see what problems are occurring, or tuning suggestions? With the
current setup we are not able to leave the MS SQL environment, so some
of this is here to stay.

Bob

Yeah, the db appears to be a bit of a bottleneck as we have to jump
through some hoops to get this stuff connected.

Just for fun, I changed the environment to active record sessions, and
it dropped from 200ish to ~25. I switched it back, got 200, tried AR
again and got the 20s again. So I don’t think I’ll be trying
database solutions. I’ll try and rip apart the code to see how many AR
objects I was storing in the session (just 1 or 2 I think) and see about
switching that out so I can use memcache.


On Mon, 26 Nov 2007 23:05:21 +0100, Bob Br wrote:

Thanks for the reply.

I made sure to be in production mode, though I am using the default
session setup for Rails (PStore). I was considering running memcache,
but read on the forum that it won’t store ActiveRecord info, and I am
currently putting some AR stuff in there.

I’ll comment on your perf numbers separately, but don’t use PStore. If
you read somewhere that PStore is the fastest store (or that using
sessions, which are stored in the database, is faster than the database)
then please tell me where you found that advice so I can have it killed.

Thanks.


Zed A. Shaw

On Mon, 26 Nov 2007 23:48:40 +0100, Bob Br wrote:

Yeah, the db appears to be a bit of a bottleneck as we have to jump
through some hoops to get this stuff connected.

Just for fun, I changed the environment to active record sessions, and
it dropped from 200ish to ~25. I switched it back, got 200, tried AR
again and got the 20s again. So I don’t think I’ll be trying
database solutions. I’ll try and rip apart the code to see how many AR
objects I was storing in the session (just 1 or 2 I think) and see about
switching that out so I can use memcache.

When you run multiple processes, you’ll wish you’d avoided the PStore.
(Each session is a separate marshalled file on disk, and every mongrel
in the cluster has to be able to read, write, and lock those files.)


Zed A. Shaw

On Mon, 26 Nov 2007 20:59:39 +0100, Bob Br wrote:

I’m starting to benchmark my application and was wondering if these are
normal results / degradation when new layers are added.

Basically I was worried about the ~35% decline when it hits mongrel, and
the following 40% decline when it hits a blank rails app. Getting just
over a third of the theoretical max seems pretty awful when it’s just a
blank page.

What are your performance goals for the application, in req/sec and
kbytes/sec?

Optimize to meet those after the application is working. A working
application can be made faster with more hardware, a non-working
application doesn’t do shit.

Reply rate [replies/s]: min 625.1 avg 653.0 max 687.0 stddev 31.4 (3 samples)
avg: 562

This seems pathetic for an Apache config, and the wide range in your
averages says that your box sucks or you’re testing it over a crappy
connection.

I’m sorry Bob, but your poor little server just doesn’t cut it.

If you believe anything Brian McCallister says, he gets 18,000 req/sec
out of his magical Apache stack (riiiiiiiiight). At
http://programming.reddit.com/info/60pkl/comments/ he posts the
following:

Apache HTTPD, minimal mod_hello_world module which just prints hello
world: 18,823.58
Apache HTTPD, server-scoped mod_wombat handler: 17,856.76
Apache Tomcat 5.5.20, JSP: 17,644.40
Jetty 6.1.1, JSP: 12,449.36
Mongrel HttpHandler, not Rails: 2,378.05

Now, those have to be the most bogus unbelievable numbers ever, but
let’s assume that they’re true and he’s getting 18k req/sec out of
Apache. What you should be asking (apart from how the fuck did he do
that without a giant cluster of machines):

  • I’ve never seen Apache Tomcat approach a speed that close to a
    fine-tuned Apache server ever. If he’s got a Tomcat that goes as fast as
    Apache, then why bother using Apache? Tomcat is about 75% of the code
    and only uses like 200% of the RAM. Instead of piling everything
    through Apache to Tomcat, cut out the middle man.
  • If he’s got Wombat at near Apache Tomcat speeds, then again, why the
    hell bother with Apache at all? This means you could again cut out the
    middle man and potentially do your own similar server that’d be much
    faster even.
  • Jetty is known for being lightning fast compared to Tomcat, so when
    did this change? I fired up a Jetty and Tomcat on my laptop and Jetty
    was about 10% faster. What’d they do to their Tomcat that’s so much
    better?
  • Considering Mongrel is about 3,000 lines of code and Apache is about 3
    million, I’ll take that as a win. :) But I’m sure Brian still has it
    stuck in his head that no, Mongrel’s speed has nothing to do with Ruby’s
    shitty GC, threads, and IO, but it’s all about his pet peeve dumbass
    feature: pipelined requests and keep-alives. 'Cause all those browsers
    doing only 4 requests max per server per connection are totally using
    his little magic hack keep-alive bullshit.
  • Wait a minute, if JSP on Tomcat is that close to Apache, then WHAT THE
    FUCK is every Java framework in the world doing? I bust out just a
    simple page that uses a JSP, Hibernate, and a Controller and I maybe get
    10 req/sec without serious tunage. If Brian’s got some kickass cluster
    going, then there’s no way he set up mongrel right for this test. He
    probably put Mongrel on his laptop, then put the apache on a 200x200
    cluster with 300 gig per node and slapped the Tomcat on a 300-way Azul.

Finally, the big question to ask, since I’ve never seen a Tomcat beat
a well-tuned Apache server on any similar hardware, is how’d they
cripple their apache to make it as slow as Tomcat? :)

(mongrel - gif image)
httperf --server=test --port=3000 --uri=/images/bg.gif --num-conns=8000
avg: 363

If that’s mongrel serving a file then that could be a good speed. 1/2
of your apache speed is actually rocking fast. It’s usually much less.
In fact, I’d seriously say that your measurement method is probably
wrong.

% = 64.5% of Potential Max

(mongrel - blank controller / index file, no global layout etc.)
httperf --server=test --port=3000 --uri=/test --num-conns=4000
avg: 210

Mongrel doesn’t have “controllers” so if this is a Rails action in a
controller then that’s not so bad compared to your apache performance.
If this is a mongrel handler then I’d say re-test as your measurements
are probably wrong.

Posted via http://www.ruby-forum.com/.

Why do people post anonymously? Seriously, that’s really irritating.
I’m sure Bob has a reason, but just come out and say who you are.
Nobody cares if Super Secret Microsoft Supported Company.com is using
Mongrel. Hell, Oracle finally came clean.

BTW, they get 400 req/sec on a finely tuned JRuby setup. You should
have seen me laugh at that, especially since I can’t get a moderately
sized test suite to run in less than 10 minutes.


Zed A. Shaw

On Tue, 27 Nov 2007 03:20:36 +0100, Bob Br wrote:

Expanding on what I posted earlier, it’s a SUSE 10 Enterprise machine;
we had to set up the ODBC connection to MS SQL Server using FreeTDS, and
I think there might be one other kludge used in there. My buddy was the
one who got the servers talking, but I think it’s pretty much a stock
install from the gems. I’ll double check if there was much else done to
these machines.

Hehe, wait, you’re running your AR through ODBC off Windows to a remote
server? Oh man, you are so screwed.

First, work on your apache performance (as mentioned before). That
should find all the places you’re having machine level performance
problems, then work on just getting mongrel fast.

I’ll tell ya right now, Mongrel on Windows blows for perf, so if it
matters to your company, demand they deploy to Linux no matter what. If
perf don’t matter, then tell them you’re done. :)

If I read you wrong and Mongrel is in some other config, but has to talk
to the database through ODBC off Unix, then sure as hell do NOT put the
session in that DB. Google around, as I think you can store just the
session in a real man’s database for the speed, or do as Alex suggests
and try memcache (although that’s not super reliable for large setups).
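
If you do try memcache, the Rails 1.x wiring is roughly this (a sketch
from memory; check the memcache-client docs before trusting it, and the
server address and namespace are made up):

  # config/environment.rb
  require 'memcache'
  Rails::Initializer.run do |config|
    config.action_controller.session_store = :mem_cache_store
  end
  # hand the session store a client to talk through
  ActionController::Base.session_options[:cache] =
    MemCache.new('127.0.0.1:11211', :namespace => 'myapp-sessions')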


Zed A. Shaw

Naw, it’s not quite that bad off.

The entire setup is more like so:
W2k3 server running SQL 2000 with a few databases, doesn’t do anything
else. Can’t really get away from this machine as we need to regularly
pull in new info from some of the other databases on there, and MS does
not play nice with other dbs.

2 webservers that are load balanced. Each is a quad-core 2.6+ GHz
Opteron server with 11-12 GB of RAM, using a SAN back end - so there
is the slight headache of getting the Ruby log files to work, since both
machines point at shared storage and use the same config script to
boot up. I tried using an environment variable holding the hostname to
redirect the log directory, but no dice (roughly what I tried is
sketched after this paragraph).
These servers will also be hosting a couple other low to moderate
traffic php sites using apache, but right now they are idling while we
are configuring them, waiting for the switchover later this week.
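
For the record, the hostname hack was along these lines (the paths are
made up):

  # config/environment.rb
  Rails::Initializer.run do |config|
    # give each box its own log file on the SAN
    config.log_path = "/san/shared/log/#{ENV['HOSTNAME']}/production.log"
  end

(Possibly Socket.gethostname would be more reliable than the HOSTNAME
variable, since the shell doesn’t always export it to daemon processes.)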

So Mongrel, Ruby, etc. are all on the Linux boxes; we just have the
headache of connecting them to the MS database. The sessions are stored
on the hard drives, as is the default for Rails, but just for S&G I
decided to put the sessions in the DB for a few minutes to see how bad
it would get. I was actually surprised, figuring it would be 1/2 or 1/3
as fast… not 1/10!

As you said, I should spend a little time getting apache tweaked, then
mongrel, and work my way down. Any way to get a ballpark of what the
apache max performance ought to be, so I know when I’m getting close and
can move on?

The sessions are stored
on the hard drives as is the default for rails

This default is damn slow, compared to the database, once it has to
look up a file in a directory containing 20 thousand other files.

to get a ballpark of what the apache max performance ought to be
Several thousand hits/sec.


Alexey V.
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]

On Nov 26, 2007 8:41 PM, Zed A. Shaw wrote:

Oracle finally came clean.
Yup, although they’ve been running a Rails app with several thousand
employees as users for some months before that. :)

BTW, they get 400 req/sec on a finely tuned JRuby setup. You should have seen me
laugh at that, especially since I can’t get a moderately sized test suite to run in less than 10 minutes.
Clustered in-memory session storage, rather simple page, and a
database server on a separate box. And Oracle’s J2EE thingie is an
in-process Apache 1.3 mod (Oracle App Server is “just” Apache 1.3 with
some extra-special mods). So, several hundred hits/sec is not
extraordinary. You could get similar numbers with Mongrel and MRI on
that rig.

Frankly, it’s not like we had any technical reason to run Mix under
OAS, other than to show that it can be done. It turned out easier than
anyone expected. In the right kind of circumstances
(Java/Oracle-dominated IT shop), I would consider using the same setup
for a normal project. Session clustering stuff in OAS is downright
slick.

A funny problem with JRuby is a huge difference between cold and warm
performance (before and after JIT compiler). Running test suites is
mostly cold, hence it is slooooooooow. Web app performance, on the
other hand, is roughly on par with MRI, but only after a warmup
period. It improves significantly after the first 20 hits (when JRuby
AST-to-bytecode compiler kicks in), and then keeps gradually getting
better for the next several thousand hits (as JVM’s bytecode-to-native
JIT compiler is working its magick).

Mongrel … has to talk to the database through ODBC off unix
That’s what Bob was talking about, yes.

store just the session in a real man’s database
SQLServer (since version 2000, when it stopped being Sybase under
another name) is not bad at all. Although in a market where MySQL 5
goes for $0, selling SQLServer for some number of kilobucks should
be… a challenge, I guess. It is definitely fast enough. The Rails
adapter => FreeTDS => SQLServer combo may not be, however.

This reminds me… I sometimes wonder if JRuby may be much faster than
MRI on Windows. :)


Alexey V.
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]

Hey Zed, didn’t realize you had posted a handful of replies there; I’d
just caught the last one when I sent that reply, so here’s a little
backtracking.

PStore – using it because it is the default Rails uses, not because it
was reported to be super efficient. As mentioned previously, the DB
version is a no-go (unless we use our mysql db just for the sessions)
and memcache will require a little bit of a rewrite of the code, but
should be worth it. Up until about a month ago, everything was going
more or less fine for our users. We have gotten complaints about
unresponsiveness on the server and had planned a hardware upgrade
anyway, but decided to do some benchmarking prior to switching over.
The results were not good, signaling some other problem – like a stock
apache config, for instance. We will be switching away from PStore asap,
now that you’ve shown me the light.

What are your performance goals for the application, in req/sec and
kbytes/sec?

The performance goals are actually pretty modest. Perhaps 50 page
requests per second when the database is being hit, perhaps even less.
Running a mongrel cluster should buy some extra performance, but I’d
still like to get that baseline improved.

This seems pathetic for an Apache config, and the wide range in your
averages says that your box sucks or you’re testing it over a crappy
connection.

As described in the prior post, the server is a decent machine. If I
recall correctly, I’m hitting the switch on a 10/100 card, and
everything after that is running on gig connections. I’m in the same
subnet as these servers, so there shouldn’t be more than a hop or two
between us. What I didn’t mention is that I was testing from a Linux VM
running on my desktop, with a core all to itself.
Didn’t think it would be that resource intensive, but I’ll get httperf
on real hardware on Tuesday and see how much of a difference that makes.

(mongrel - blank controller / index file, no global layout etc.)
That was an error on my part – I meant to say ruby through mongrel,
without any actual code being processed. It was shorthand from when I
was benchmarking, and I just copied it over.

Why do people post anonymously?
Bob B., nice to meet you. Didn’t mean to irritate, just trying to
avoid excess spam and whatnot. Thought I was doing better then most by
registering for an actual forum account as opposed to using guest.

Nobody cares if Super Secret Microsoft Supported Company.com is using
Mongrel.
Not a MS shop here, we’d go another route and use OSS whenever we can,
but with some of our enterprise / specialized software we don’t have
another choice. Ya work with what you got.

It’s been eye-opening so far. Going to be a busy couple of days while I
dig into all of this!

Bob B. - Not to be confused with an anonymous poster :)

On Mon, 26 Nov 2007 22:25:34 -0700, “Alexey V.” wrote:

Frankly, it’s not like we had any technical reason to run Mix under
OAS, other than to show that it can be done. It turned out easier than
anyone expected. In the right kind of circumstances
(Java/Oracle-dominated IT shop), I would consider using the same setup
for a normal project. Session clustering stuff in OAS is downright
slick.

Way to go. Gotta blow the big guys to get ahead eh?

A funny problem with JRuby is a huge difference between cold and warm
performance (before and after JIT compiler). Running test suites is
mostly cold, hence it is slooooooooow. Web app performance, on the
other hand, is roughly on par with MRI, but only after a warmup
period. It improves significantly after the first 20 hits (when JRuby
AST-to-bytecode compiler kicks in), and then keeps gradually getting
better for the next several thousand hits (as JVM’s bytecode-to-native
JIT compiler is working its magick).

Yes, and with Sun and the Fowlbots making their money from servers only,
we probably won’t see an improvement in start-up speed any time soon.

Mongrel … has to talk to the database through ODBC off unix
That’s what Bob was talking about, yes.

store just the session in a real man’s database
SQLServer (since version 2000, when it stopped being Sybase under
another name) is not bad at all. Although in a market where MySQL 5
goes for $0, selling SQLServer for some number of kilobucks should
be… a challenge, I guess. It is definitely fast enough. The Rails
adapter => FreeTDS => SQLServer combo may not be, however.

I was more referring to the ODBC nightmare, but also SQLServer is known
for being yet another of those databases that only a grand expert could
ever speed up.

This reminds me… I sometimes wonder if JRuby may be much faster than
MRI on Windows. :)

Actually, it is, since it runs about the same speed everywhere.


Zed A. Shaw

This may be just focusing on one part of your deployment, but why not
try out http://litespeedtech.com/ to replace the mongrel+apache layer
in one step?

We use it for a couple of our sites which are fairly well trafficked
(millions of uniques per month between them):

http://www.medications.com

Sure, it’s not open source, but there is a free version and you can get
the paid version when it’s worth it. We use the paid version for
funadvice.com and the free version for medications.com.

We’ve noticed much snappier response times, and we no longer have
to worry about how many mongrels to run, etc., as that’s taken care of.
Sure, it won’t make your core Rails app faster, but at least now, with a
fairly mature layer to manage the web serving part, you can focus your
efforts on your app. Moreover, there’s nice real-time monitoring built
into litespeed itself, so you can see how many requests per second
you’re getting, the memory and load you’re using, etc. For multiple
servers there’s the load balancer as well. It also handles apache-style
mod-rewrite statements quite well.

Nothing’s perfect, but believe me, many of the memory and uptime issues
go right away. I no longer have to run monit to watch a pack of
mongrels. We still use mongrels for development and pre-production
chores, but for production it’s litespeed all the way.

Anyway… just my two cents (and hey, I don’t work for litespeed).

On Nov 27, 2007 6:54 AM, Zed A. Shaw wrote:

“Alexey V.” wrote:

In the right kind of circumstances (Java/Oracle-dominated IT shop),
I would consider using the same setup for a normal project.
Way to go. Gotta blow the big guys to get ahead eh?

Duh… get off your high horse, will ya? “Java/Oracle-dominated IT
shop” => “Oracle is already there”. “I would consider using the same
setup for a normal project” => “it actually works”. Commercial
closed-source middleware is a Bad Thing ™, but it helps to know
what works and what doesn’t. JRuby/Rails on OAS does.

It also has a couple of particularly useful features: in-memory
session clustering that doesn’t have a single point of failure, and
ESI caching.

but also SQLServer is known for being yet another of those databases that only a grand expert could ever speed up.
I’ve heard the same thing about it, but it doesn’t match my experience.

This reminds me… I sometimes wonder if JRuby may be much faster
than MRI on Windows. :)
Actually, it is, since it runs about the same speed everywhere.
If that is true, it has interesting implications for the original
poster. As in “if you find out that FreeTDS/ODBC is the bottleneck,
try running your app under JRuby on Windows”.


Alexey V.
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]

On Tue, 27 Nov 2007 10:14:55 -0700, “Alexey V.” wrote:

closed-source middleware is a Bad Thing ™, but it helps to know
what works and what doesn’t. JRuby/Rails on OAS does.

Grow a fucking sense of humor, will ya? Especially if you’re going to
whore yourself out for that all-elusive “synergy” that was probably
tossed around 100 times during the meeting between TW and ORA.

but also SQLServer is known for being yet another of those databases that only a grand expert could ever speed up.
I’ve heard the same thing about it, but it doesn’t match my experience.

This reminds me… I sometimes wonder if JRuby may be much faster
than MRI on Windows. :slight_smile:
Actually, it is, since it runs about the same speed everywhere.
If that is true, it has interesting implications for the original
poster. As in “if you find out that FreeTDS/ODBC is the bottleneck,
try running your app under JRuby on Windows”.

Yep, and deploying Java to Windows is totally easy. Maybe you could get
TW to offer him up 100 programmers with only 2 weeks of training as
“Rails Experts” to get him a sweet Oracle setup going?


Zed A. Shaw

Well, I apologize for wasting a bit of your time. By putting httperf on
one of the new machines and pointing it towards the other webserver, the
results are a lot better. Since the tool was so small, it didn’t occur
to me how much of a little piggy it could be. Won’t happen again; my
quota for looking like a total idiot in a public forum is once per
application.

I don’t know if it will make a large change in the results, but I added
the --hog flag since the number of connections is a fair bit larger.
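
(For reference, --hog just tells httperf to use as many TCP ports as it
needs instead of the default ephemeral range, so large runs don’t starve
for source ports.)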

<new hardware, no load>
httperf --server=test --port=80 --uri=/bg.gif --num-conns=31000 --hog
Reply rate [replies/s]: min 2699.0 avg 2748.7 max 2798.4 stddev 70.3 (2
samples)
Reply rate [replies/s]: min 2691.0 avg 2732.1 max 2773.2 stddev 58.1 (2
samples)
Reply rate [replies/s]: min 2629.0 avg 2645.0 max 2661.0 stddev 22.6 (2
samples)
Reply rate [replies/s]: min 2669.0 avg 2694.4 max 2719.8 stddev 35.9 (2
samples)

httperf --server=test --port=3000 --uri=/images/bg.gif --num-conns=25000 --hog
Reply rate [replies/s]: min 1317.3 avg 1323.4 max 1327.7 stddev 5.4 (3
samples)
Reply rate [replies/s]: min 1324.5 avg 1326.7 max 1329.9 stddev 2.8 (3
samples)
Reply rate [replies/s]: min 1323.5 avg 1333.5 max 1340.1 stddev 7.3 (4
samples)
Reply rate [replies/s]: min 1321.5 avg 1334.8 max 1346.3 stddev 11.6 (4
samples)

(hitting a blank ruby controller & blank page)
httperf --server=test --port=3000 --uri=/test --num-conns=6000 --hog
Reply rate [replies/s]: min 301.8 avg 320.9 max 336.2 stddev 17.5 (3
samples)
Reply rate [replies/s]: min 318.0 avg 332.1 max 344.2 stddev 13.2 (3
samples)
Reply rate [replies/s]: min 320.6 avg 321.7 max 322.8 stddev 1.1 (3
samples)
Reply rate [replies/s]: min 328.4 avg 333.7 max 338.6 stddev 5.1 (3
samples)

<existing hardware, possible light load, testing just apache>
httperf --server=old --port=80 --uri=/bg.gif --num-conns=8000 --hog
Reply rate [replies/s]: min 706.6 avg 707.0 max 707.6 stddev 0.9 (2
samples)
Reply rate [replies/s]: min 663.2 avg 669.7 max 675.9 stddev 8.6 (2
samples)
Reply rate [replies/s]: min 627.0 avg 660.5 max 693.2 stddev 47.1 (2
samples)

<existing hardware, possible light load, same blank controller / code as
above>
httperf --server=old --port=3000 --uri=/test --num-conns=3000 --hog
Reply rate [replies/s]: min 138.6 avg 157.0 max 162.6 stddev 8.4 (3
samples)
Reply rate [replies/s]: min 138.2 avg 153.7 max 163.2 stddev 13.6 (3
samples)
Reply rate [replies/s]: min 147.0 avg 156.9 max 162.2 stddev 8.6 (3
samples)

Alexey,

to get a ballpark of what the apache max performance ought to be
Several thousand hits/sec.

is the 2.7k replies from the straight apache about what you would
expect, or is it still on the low side?

The performance dropped by about 1/2 when going through Mongrel, which
Zed said is very good, so assuming that Apache is working well, that’s
reassuring.
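
(Checking the math: ~1,330/sec through Mongrel is about 48% of the
~2,750/sec raw Apache number, and the blank Rails action at ~330/sec is
about 12% of it.)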

Bob B.