This weekend, for scalability, we tried putting our nginx servers behind Amazon's Elastic Load Balancer and came across a roadblock: it does not transparently pass the user IP and header information to nginx. This caused issues with several pieces of nginx we use, including the IP allow/deny rules, the limit_req module, and the limit_conn module. Has anyone successfully put nginx behind a load balancer? Any ideas on how to make this work?
On Fri, Nov 25, 2011 at 09:54:14AM -0500, Rami E. wrote:
Hi Maxim,
We implemented the module and still had some trouble. A lot of the connections would return "503 Service Temporarily Unavailable". Our configuration works fine without the load balancer but then gives these 503 errors behind the load balancer.
nginx itself will only return 503 if it hits either limit_conn or limit_req. If you see this returned by nginx, and it only happens with the load balancer, this may indicate that you haven't configured the realip module properly (or your load balancer doesn't provide the appropriate headers) and you are hitting per-IP limits because all requests appear to come from the load balancer.
Check whether the client IP that gets logged is really the client's, and not the IP of your load balancer.
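For reference, a minimal realip sketch, assuming the ELB sets X-Forwarded-For and connects from somewhere in 10.0.0.0/8 (adjust set_real_ip_from to the addresses your load balancer actually uses):

http {
    # Trust X-Forwarded-For only on connections coming from the load
    # balancer's address range (assumed to be 10.0.0.0/8 here).
    set_real_ip_from  10.0.0.0/8;
    real_ip_header    X-Forwarded-For;
    …
}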
Looking into the error logs, I notice a lot of these errors both with and without the load balancer: "connect() failed (111: Connection refused) while connecting to upstream". Could this be the reason that we are having issues?
Unlikely, but it's a good idea to track this down and fix it in any case.
Thanks for the help, Maxim! We disabled our limit_req and that seems to have fixed the problem. Looking at the logs, it seems that only 1/3 of the requests are correctly getting the new IP assigned via the realip module; the remainder are still logging the load balancer IP. This is probably more of an issue with the Amazon load balancer, but do you have any idea what may be going on?
Also, where would you recommend as a place to start tracking down and fixing the other issue?
You have not established that the load balancer is setting X-Forwarded-For or some other header to pass the real IP.
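One quick way to check (just a sketch; the log path and format name are placeholders) is to log the raw header next to the connection address and compare:

# Log the TCP peer address and the raw X-Forwarded-For header
# to see exactly what the ELB sends.
log_format  xff  '$remote_addr - $http_x_forwarded_for "$request"';
access_log  /var/log/nginx/xff.log  xff;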
Does the Amazon pseudo-device have enough sockets available to connect to nginx?
What do you expect from adding this layer? Were you hitting
performance limits? Has it improved or degraded performance?
As a quick update, it looks like this has happened before. The load balancer sometimes bounces through several internal IPs, and nginx picks up only the last one. Does anyone know of a workaround to remove the last two trusted IPs from the X-Forwarded-For header?
Looking at the $proxy_add_x_forwarded_for variable, I believe the load balancer is in fact passing the values, but nginx is taking the wrong one. Here is what I get from the variables:
$proxy_add_x_forwarded_for: 217.27.244.18, 10.160.43.200, 10.160.43.200
$remote_addr: 10.160.43.200
Does this mean that nginx is taking the last value in the X-Forwarded-For?
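For what it's worth, $proxy_add_x_forwarded_for is the incoming X-Forwarded-For with $remote_addr appended, so the header the load balancer sent here was "217.27.244.18, 10.160.43.200". With real_ip_header X-Forwarded-For, the realip module by default replaces the client address with the last address in that header, which is why you end up with the internal 10.160.43.200. Later nginx versions add a real_ip_recursive directive that skips trusted addresses from the right and keeps the last non-trusted one (217.27.244.18 in this case); a sketch, with the 10.0.0.0/8 range again an assumption:

set_real_ip_from   10.0.0.0/8;       # assumption: ELB internal address range
real_ip_header     X-Forwarded-For;
real_ip_recursive  on;               # skip trusted hops, keep the client address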
To answer your other question: yes, the Amazon pseudo-device has plenty of sockets available to connect to nginx.
The reason we added this was to be able to utilize Amazon's auto-scaling feature for redundancy and scalability. We have not been able to test performance yet since we are still trying to make it work, but as soon as we do I will be sure to report back.
Thanks for the note and the clever workaround. We were able to tweak it to work, but it still left a lot of the other functions we were using, such as deny/allow, limit_conn, etc., not working. Instead we went back to Amazon, and it turns out they were able to correct the behavior of their load balancer.
I wanted to report back on the performance of putting nginx behind an ELB. We compared the ELB to HAProxy, and on Amazon's cloud we got better performance through the ELB than we did through an HAProxy instance. There is minimal impact on end-user performance from adding this extra step. We did this for redundancy, to allow us to automatically fail over to another zone if the current zone or instance goes down.
Happy to answer any questions about the setup.
Rami
Hi Rami,
We are looking to implement this on our web servers - did you have to
change the default configuration of the ELB?
Would you be able to copy and paste the final configuration of nginx
that works?
$remote_addr (OSI layer 3) and $http_x_forwarded_for (OSI layer 7) are two different variables. Behind a load balancer, maybe you can use a config like this:
http {
    # $proxy_add_x_forwarded_for starts with the original client address
    # when the LB sets X-Forwarded-For, so this map acts like allow/deny.
    map $proxy_add_x_forwarded_for $xff_pass {
        default             "xff403";
        ~^61\.152\.90\.8    "xffpass";
        ~^61\.172\.241\.    "xffpass";
    }

    # Rate-limit on the forwarded client address rather than
    # $remote_addr, which would be the load balancer's address.
    limit_req_zone $http_x_forwarded_for zone=one:10m rate=200r/s;
    …

    server {
        if ($xff_pass !~* xffpass) {
            return 403;
        }
        limit_req zone=one;
        …
    }
}
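One general caveat about this approach: keying limits and access checks on $http_x_forwarded_for only makes sense when nginx can be reached exclusively through the load balancer, since a client that can connect to nginx directly can put anything it likes into that header.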