Load balancers and mongrel

We have a load balancer sending requests to one of X boxes and one of
N mongrel processes on that box.

Since each mongrel process is multi-threaded but has a mutex
around the section that calls Rails, we end up with several requests
queued up waiting when they could have gone to another box with a
free process.

For example, take boxA and boxB:

boxA has mongrels 1 through 10
boxB has mongrels 11 through 20

The load balancer sends a request to boxA mongrel 5.
The load balancer sends a request to boxB mongrel 12.
The load balancer sends a request to boxA mongrel 5 again.
That request has to wait for the first one, which is still running on boxA mongrel 5.

How can we help the load balancer send the request to one of the
many other free mongrels instead of queuing up threads that have to
wait?

Thanks.

On 10/26/07, Andrew A. [email protected] wrote:

boxA has mongrels 1 through 10

That sounds like bad logic in your balancer.

If it’s a hardware balancer, check whether there are scheduling
algorithms you can tweak.

For a software-based one, check the configuration file for a
parameter that controls how requests are spread / distributed.

Also, it could help to interleave the box A and box B members in the
cluster list:

A: 192.168.0.15
B: 192.168.0.16

A-mongrel1
B-mongrel11
A-mongrel2
B-mongrel12

etc…
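
For what it’s worth, here is a rough sketch of that interleaving in
nginx upstream syntax, assuming the IPs above and made-up ports
(8001/8002 on box A, 8011/8012 on box B); whether the ordering
actually matters depends on your balancer’s algorithm:

    upstream mongrel_cluster {
        # alternate between the two boxes instead of listing
        # all of box A first and then all of box B
        server 192.168.0.15:8001;   # A-mongrel1
        server 192.168.0.16:8011;   # B-mongrel11
        server 192.168.0.15:8002;   # A-mongrel2
        server 192.168.0.16:8012;   # B-mongrel12
        # ... and so on for the rest of the mongrels
    }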


Luis L.
Multimedia systems

Leaders are made, they are not born. They are made by hard effort,
which is the price which all of us must pay to achieve any goal that
is worthwhile.
Vince Lombardi

You’re coming in just after a discussion about this – the problem of
request queuing in Mongrel. It occurs when a round-robin style load
balancer cycles back to a Mongrel that still hasn’t finished processing
the request that the load balancer had sent in its previous cycle
through the pool.

If you’re running Apache’s mod_proxy_balancer, then see this:

http://rubyforge.org/pipermail/mongrel-users/2007-October/004145.html

and a correction here:
http://rubyforge.org/pipermail/mongrel-users/2007-October/004146.html
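
I can’t reproduce those posts here, but for orientation, a
mod_proxy_balancer setup for a mongrel pack has roughly this shape;
the member parameters shown (max=1 and acquire) are an assumption
about the kind of tuning those threads discuss, and the addresses
and ports are made up:

    <Proxy balancer://mongrel_cluster>
        # one BalancerMember per mongrel; max=1 is meant to cap each
        # member at a single connection so Apache tries a free member
        # instead of stacking requests on a busy one (note that with
        # prefork the cap is per Apache process, not global)
        BalancerMember http://192.168.0.15:8001 max=1 acquire=3000
        BalancerMember http://192.168.0.16:8011 max=1 acquire=3000
    </Proxy>

    ProxyPass / balancer://mongrel_cluster/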

We’ve also found it beneficial to partition the mongrel cluster into
separate functional groups. For example, we run three sets of
mongrels.

The first set handles dynamic pages generated by the app and is as
fast as the db can find and serve the information. The second
generates images from thumbnails we store on disk; these have a
higher CPU cost, but are also constrained by the time taken to
send_file the resulting data back to the client (don’t forget this
has to happen before the Rails lock is released).

The third mongrel group handles image uploads from our members and
can take a very long time to process an image if it is large or
poorly formed.

This separation allowed us to channel requests with similar resource
usage into different pools. Using a smarter web front end like nginx
also lets the request queue at the web server rather than at the
mongrel, which means that when the upload request for the 100MB TIFF
blows up, it doesn’t take 20 other requests sitting in the same
mongrel’s queue down with it.

We found Apache’s mod_proxy_balancer useless at distributing requests
properly, even after we had segregated the mongrels. My advice is to
switch to nginx, or to some smart hardware if you can afford it.
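
To make the grouping concrete, in nginx it amounts to one upstream
pool per functional group, something like this (the pool names and
ports are invented for illustration):

    # general app pool for dynamic pages
    upstream app_mongrels {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

    # thumbnail / image generation mongrels
    upstream thumbnail_mongrels {
        server 127.0.0.1:8101;
        server 127.0.0.1:8102;
    }

    # image upload mongrels
    upstream upload_mongrels {
        server 127.0.0.1:8201;
    }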

Cheers

Dave

Hi Dave,

On 10/28/07, Dave C. [email protected] wrote:

We’ve also found it beneficial to partition parts of the mongrel
cluster into separate functional groups. For example we run 3 sets of
mongrels.

I’m still learning a lot about system administration and deployment
for Ruby on Rails applications; however, I’m puzzled as to how you
could divide the mongrels into three sets based on function (dynamic
pages, thumbnails, and image uploads).

Could you please shed some light?

We do it based on the URL. We know from the URL which controller we
expect the request to land on, so using the URL and the HTTP method,
nginx can route the request to a particular mongrel group.
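
As a sketch of what that looks like in an nginx config (the paths
and pool names are made up to match the three groups described
earlier, and the real rules also take the HTTP method into account):

    server {
        listen 80;

        # uploads get their own pool so a slow 100MB TIFF doesn't
        # queue behind, or in front of, ordinary page requests
        location /uploads {
            proxy_pass http://upload_mongrels;
        }

        # thumbnail / image generation pool
        location /thumbnails {
            proxy_pass http://thumbnail_mongrels;
        }

        # everything else goes to the general app pool
        location / {
            proxy_pass http://app_mongrels;
        }
    }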

Cheers

Dave

Thank you. Simple solutions are always beautiful.