We have been using nginx in combination with 3 mongrel instances on our servers. Recently we began testing haproxy as a middleman between them, to handle our request queues more efficiently. When nginx was talking directly to our mongrels we had no issues, but now with haproxy in place we are seeing some 502 errors. We monitor our mongrels with monit, and when a mongrel instance’s memory footprint grows too large we have monit restart that instance. It appears that the 502 is thrown on the last request the mongrel handles before it is restarted (this is how it appears, but we can’t be certain this is correct).
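For reference, the monit rule is roughly along these lines (the process name, pid file, command paths, and memory threshold below are placeholders rather than our exact values):

    check process mongrel_8000 with pidfile /var/run/mongrel.8000.pid
      start program = "/usr/bin/mongrel_rails start -d -p 8000 -P /var/run/mongrel.8000.pid -c /var/www/app"
      stop program  = "/usr/bin/mongrel_rails stop -P /var/run/mongrel.8000.pid -c /var/www/app"
      # restart this instance once its memory footprint grows too large
      if totalmem > 110 MB for 2 cycles then restart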
The haproxy log seems to imply that it was waiting for “complete, valid response HEADERS from the server” when it threw the 502. Is it possible that mongrel sends its response headers back differently on the last request before a restart? We do know that the last request is actually processed: a user who was trying to create a record received a 502 but found that the record had been created successfully, so the problem seems to lie solely in the response headers sent after the request has been processed.
If you are interested, you can see the breakdown of the haproxy error here: http://pastie.org/private/nfvbtq236x8jp9ynj4dg
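For context, the haproxy backend in front of the mongrels is set up roughly like this (the backend name, addresses, ports, and maxconn value here are illustrative rather than our exact settings):

    backend mongrels
        balance roundrobin
        # queue requests in haproxy rather than at each mongrel,
        # sending only one request at a time to a given instance
        server mongrel1 127.0.0.1:8000 maxconn 1 check
        server mongrel2 127.0.0.1:8001 maxconn 1 check
        server mongrel3 127.0.0.1:8002 maxconn 1 check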
I don’t know if this is an issue with haproxy or with mongrel, so I have also asked the folks at haproxy to look at the issue.

Any insight you can offer from the mongrel point of view would be greatly appreciated. Thanks!
Ben F.