Steve M. wrote:
irrespective of their availability, leaving some mongrels idle and some
overloaded.)
If you are using mod_proxy_balancer then I’m fairly sure that the
“slow-returning mongrel” challenge has been solved using the “max” and
“acquire” parameters to BalancerMember.
It’s working for me, and I tested it against a pool of Nitro servers using
a controller method that sleeps. I was hoping that Rafael G. would
report back with his results on this for his issue (though with “conn
refused” I think his issue may be a little different).
Here’s the product of my divinations and experiments:
# note: no trailing slash on the balancer URL
<Proxy balancer://myserverpool>
  BalancerMember http://192.168.10.10:10000 keepalive=on max=1 lbset=0 acquire=1 timeout=1
  BalancerMember http://192.168.10.10:10001 keepalive=on max=1 lbset=1 acquire=1 timeout=1
  BalancerMember http://192.168.10.10:10002 keepalive=on max=1 lbset=0 acquire=1 timeout=1
</Proxy>
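As I read the mod_proxy docs, max=1 caps each member at one in-flight
connection and acquire=1 waits only a millisecond for that connection to
come free before the balancer moves on, which is what keeps requests from
queuing behind a busy Mongrel. For anyone copying this: the pool also has
to be referenced from the vhost with something along the lines of
ProxyPass / balancer://myserverpool/ (and a matching ProxyPassReverse),
adjusted to your own paths.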
If I understand what’s being discussed (not a given), the system Robert
is proposing would mean that a mongrel would only get requests from
port 80 when it was ready to call “receive()” again from the IP stack.
So mongrels would consume what they can off a common stack.
Yes. Minor nit – each process is in an accept() loop. Accept a
connection, process the request (it’s in processing that the read
occurs). After processing and then closing the socket to the client,
enter accept() to wait for another request.
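In case a concrete picture helps, here’s a bare-bones C sketch of that
loop (a toy of my own, not Mongrel’s code): the parent opens one listening
socket, forks a few workers, and every worker blocks in accept() on that
same inherited socket. The kernel hands each incoming connection to
exactly one of the waiters, and a worker that’s still busy processing
isn’t sitting in accept(), so it can’t be handed new work.

/* Toy sketch of the pattern under discussion (my own, not Mongrel's code):
 * several pre-forked workers all block in accept() on one shared listening
 * socket; the kernel hands each new connection to exactly one of them. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define PORT    8080   /* arbitrary port for the sketch */
#define WORKERS 3

static void worker_loop(int listen_fd)
{
    char buf[4096];
    const char *reply = "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";

    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);  /* wait for a request */
        if (conn < 0)
            continue;
        printf("worker %d took a connection\n", (int)getpid());
        (void)read(conn, buf, sizeof(buf));        /* "processing" happens here */
        (void)write(conn, reply, strlen(reply));
        close(conn);                               /* then back into accept() */
    }
}

int main(void)
{
    struct sockaddr_in addr;
    int on = 1;
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);

    if (listen_fd < 0 ||
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(listen_fd, 128) < 0) {
        perror("setup");
        return 1;
    }

    for (int i = 0; i < WORKERS; i++)  /* each child inherits listen_fd */
        if (fork() == 0) {
            worker_loop(listen_fd);
            _exit(0);
        }

    for (;;)
        pause();                       /* parent just keeps the children alive */
}

Run it and hit it with a few parallel curls; the getpid() line shows
connections landing on whichever workers happen to be free, and a worker
stuck in a slow request simply drops out of the rotation until it
finishes.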
This would mean that I don’t have to balance all my Rails processes to
keep them returning results at roughly equal intervals. If my servers
are heavily loaded now, it seems like a slow-returning mongrel can
cause havoc even if it’s just slow by a couple of seconds…
Servers within the pool would auto-balance. You’d still have to balance
across pools if they’re running on multiple hosts. But even so, yes,
this is an improvement.
Swiftiply, as I understand it, also addresses the Mongrel/Rails
request-queuing problem; I haven’t researched it yet. The max/acquire
settings are working for us, and I experimented with a sleeping
controller method to verify that they work.
So giving the mongrels a common pool of requests to consume from, and
using a pool as low-level as an IP port, seems great, and I’d vote for
it. The fact that my development box doesn’t run against the same code
is totally OK.
Is there a cheap or demo version of Visual Studio I could try this on?
I’ve got XP Home Edition running under VMware, and could code a simple
test to see if this works there. While I’m at it, I could verify that
this works on Mac OS X as well…
I’m sure Windows people who run in production will have other opinions,
but this idea, if it’s practical, seems very elegant and useful. +1
Steve
It may be a horrendous hack for reasons I’m unaware of. But it worked,
and in a pretty demanding scenario, essentially unchanged for eight
years and counting (ZDNet, then CNET ad servers), so I didn’t research
it too deeply. I looked for problems like immediate error returns
from accept(), deadlocks, and delays with growing queues of accept()
callers, but didn’t find any. I keep meaning to read through Linux’s
TCP/IP stack to find out why it behaves so well. Maybe there’s a
non-standard quirk in there that makes it all just dumb luck… darn… now
I need to know…