I’m worried about queuing.
I’m working to remediate a Rails app (not mine) that can take as long as
a minute to generate certain pages.
Consider the case where mod_proxy_balancer sends a request to an application
server that’s 1 second into a 60-second rails page. With queuing, wouldn’t
the second request sit in the queue for 59 seconds while waiting for the
first request to complete?
Yes. This brings up an interesting issue, though. 1 minute to render
a page is horrific, and if everything is that slow, there are no good
solutions. But if the app only has certain pages that are that slow,
while the rest are fast, it presents an interesting, solvable
scaling quandary.
The current workaround is to have a hundred Rails app server instances
chewing up huge amounts of RAM. I would quite happily trade that in for the
memory and performance overhead of whatever multiplexing or multithreading
scheme the app server uses. All I need is a guarantee that the app
server 1) will not call select() while it’s already working on a request, and
2) won’t open and close a socket for a request it’s not prepared to handle.
If the app doesn’t call accept(), Apache will gracefully move on to another
app server.
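That single-accept discipline can be sketched in a few lines of Ruby (illustrative only, not any real app server’s code): the server only calls accept() when it’s idle, so while it’s busy a new connection just sits in the kernel’s listen backlog, and a proxy whose connect attempt can’t get through is free to try another backend.

```ruby
require "socket"

# Hypothetical single-threaded app server: it calls accept() only when
# idle. While serve_one is handling a request, nothing else is accepted;
# pending connections wait in the (deliberately tiny) listen backlog.
def serve_one(server)
  client = server.accept           # blocks until a connection arrives
  client.gets                      # read the request line
  client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
  client.close
end

server = TCPServer.new("127.0.0.1", 0)
server.listen(1)                   # tiny backlog: overflow fails fast
port = server.addr[1]

# Serve exactly one request so the sketch can be exercised end to end.
t = Thread.new { serve_one(server) }
sock = TCPSocket.new("127.0.0.1", port)
sock.write "GET / HTTP/1.0\r\n\r\n"
response = sock.read
t.join
```

A real server would run serve_one in a loop; the point is only that between accept() calls the process advertises no readiness, which is what lets the balancer move on.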
(nod) Mongrel accepts requests as they come in. If it reaches its
request limit, it starts killing things. That request limit, unless
you have set it lower, is the select() limit of 1024 file handles.
In a case where you have some actions that are very, very slow, a
mongrel that is in this situation probably just crawls and takes
forever to render anything. It also uses a ton of RAM.
If one is using an evented_mongrel, you still have the problem that
you mention, that a fast action sitting behind a slow one has to wait
for the slow one to finish. The slow one will, however, finish a lot
faster, and the overall RAM usage will be a LOT lower, too.
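To illustrate that trade-off (a toy simulation, not EventMachine or evented_mongrel code): in a single-threaded evented server, each handler runs to completion on the one thread, so a fast action queued behind a slow one still waits for it, just without the per-connection process overhead.

```ruby
# Toy single-threaded run-to-completion loop. The sleep durations are
# made up; "slow" stands in for the 60-second page from the question.
queue = [
  { name: "slow", work: 0.05 },
  { name: "fast", work: 0.01 },
]

order = []
queue.each do |req|
  sleep req[:work]        # the handler monopolizes the one thread
  order << req[:name]     # fast can only finish after slow does
end
```

After the loop, order is ["slow", "fast"]: the fast action was delayed by the slow one, which is exactly the head-of-line problem described above.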
With Swiftiply, a fast action won’t ever be queued up behind a slow
action unless there are no available backends to take any actions at
all. Then the fast action will wait until one of the backends
finishes its unit of work and returns a response. As soon as that
happens, the fast action that has been waiting for a backend will be
picked up and handled.
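The dispatch policy described above can be sketched roughly like this (my own toy model, not Swiftiply’s actual implementation): requests go only to idle backends, and when none are idle they wait until the first backend to finish picks up the next one.

```ruby
# Hypothetical idle-backend dispatcher. Backend and request names are
# invented for illustration.
backends = { "a" => :idle, "b" => :idle }
waiting  = []

dispatch = lambda do |req|
  name, _ = backends.find { |_, state| state == :idle }
  if name
    backends[name] = req   # hand the request to an idle backend
  else
    waiting << req         # no idle backend: the request waits
  end
end

finish = lambda do |name|
  backends[name] = :idle   # backend returned its response
  dispatch.call(waiting.shift) unless waiting.empty?
end

dispatch.call("slow")      # backend a takes the slow action
dispatch.call("fast")      # b is idle, so fast is never queued behind slow
dispatch.call("extra")     # no idle backends left: extra waits
finish.call("b")           # b finishes fast and immediately takes extra
```

The fast action only ever waits when every backend is busy, which matches the behavior described for Swiftiply above.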
I’ll look at Swiftiply – this is the first I’ve heard of it.
swiftiply.swiftcore.org
#swiftcore on irc.freenode.net
Kirk H.