On Monday 19 May 2014 17:32:07 newnovice wrote:
I tried with "proxy_pass_header Connection;", which should do this according
to the documentation: "Permits passing otherwise disabled header fields from a
proxied server to a client." (from upstream_server to client)
No, it shouldn't. If you follow the link from "otherwise disabled", you will
see the full list of headers that can be specified.
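(For context, proxy_pass_header only re-enables the headers nginx hides by
default, i.e. "Date", "Server", "X-Pad" and "X-Accel-..."; a minimal example,
with "backend" as a placeholder upstream name:)

    location / {
        proxy_pass http://backend;   # placeholder upstream
        # Pass the upstream's own Server and Date headers to the client
        # instead of nginx's defaults; "Connection" is not on this list.
        proxy_pass_header Server;
        proxy_pass_header Date;
    }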
I was speaking in general, not about the "Connection" header.
But this did not pass the Connection header coming from the upstream_server
through to the client of this proxy.
Valentin, can you please elaborate on how you suggest doing this: "I can try
by using combination of add_header and $upstream_http_* variables."?
Are you saying this variable can be used to pass on the
upstream_http_connection header in the response?

Sorry, typo: s/I can/You can/.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_http_
wbr, Valentin V. Bartenev
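(For reference, a rough sketch of the add_header + $upstream_http_* combination
being suggested; "backend" is a placeholder upstream name, and whether the
client-facing "Connection" header can actually be overridden this way is not
guaranteed, since nginx manages that header itself:)

    location / {
        proxy_pass http://backend;   # placeholder upstream
        # $upstream_http_connection holds the Connection header value received
        # from the upstream; expose it to clients under a custom name.
        add_header X-Upstream-Connection $upstream_http_connection;
    }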
thanks/
Use the lua-module; it is possible.
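(If the lua route is taken, a minimal untested sketch with a reasonably recent
lua-nginx-module / OpenResty; the header name is illustrative:)

    location / {
        proxy_pass http://backend;   # placeholder upstream
        # Copy the upstream's Connection header value into a custom response
        # header during the header filter phase.
        header_filter_by_lua_block {
            ngx.header["X-Upstream-Connection"] = ngx.var.upstream_http_connection
        }
    }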
On 19.05.2014 18:47, Valentin V. Bartenev wrote:
Any update on this? Seeing this problem with nginx leaving connections open
on AWS Beanstalk using nginx to proxy to puma/ruby on rails.
nginx version: nginx/1.6.2

NAME="Amazon Linux AMI"
VERSION="2015.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2015.03"
PRETTY_NAME="Amazon Linux AMI 2015.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2015.03:ga"
HOME_URL="AWS | Amazon Linux AMI"

Amazon Linux AMI release 2015.03

Linux ip-172-31-12-138 3.14.42-31.38.amzn1.x86_64 #1 SMP Wed May 13 20:33:05
UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
On Monday 13 July 2015 12:02:44 dgobaud wrote:
Any update on this? Seeing this problem with nginx leaving connections open
on AWS Beanstalk using nginx to proxy to puma/ruby on rails.
[…]
Do you have cache or proxy store configured?
wbr, Valentin V. Bartenev
Don't think so - the config is below and the relevant URL is /dashboard_stream.
upstream my_app {
    server unix:///var/run/puma/my_app.sock;
}

server {
    listen 80;
    server_name _ localhost;  # need to listen to localhost for worker tier

    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_pass http://my_app;  # match the name of the upstream directive defined above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /dashboard_stream {
        proxy_pass http://my_app;  # match the name of the upstream directive defined above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffering off;
        proxy_cache off;
        proxy_http_version 1.1;
    }
}
Hello!
On Tue, Jul 14, 2015 at 12:43:53PM -0400, dgobaud wrote:
I also just tested regular http and it is having the same problem.
First of all, please clarify what problem you are seeing.
That is, please provide details/stats/logs that show the problem.
–
Maxim D.
http://nginx.org/
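(For reference, one way to produce such stats is the stub_status module - a
minimal sketch, assuming ngx_http_stub_status_module is compiled into the
nginx build:)

    server {
        listen 127.0.0.1:8080;
        location = /nginx_status {
            # Reports active connections plus accepted/handled/request counters
            # and the current Reading/Writing/Waiting breakdown.
            stub_status on;
            access_log off;
            allow 127.0.0.1;
            deny all;
        }
    }

Running curl http://127.0.0.1:8080/nginx_status on the instance over time would
then show whether the connection counts keep growing.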
I also just tested regular http and it is having the same problem.
Hi! Thanks for replying. It appears the issue might actually be on the AWS
side. AWS is looking into it. I think it's an issue with the elastic load
balancer.
Any update on this issue? Is there a fix I can apply on the AWS side?
Yes, the answer is: on the elastic load balancer you must use protocol TCP or
SSL, not HTTP or HTTPS. The HTTP/HTTPS listeners keep the connections open
for reuse…
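(For anyone scripting that change instead of clicking through the console, a
rough, untested .ebextensions sketch - the classic-ELB namespace and option
names below are assumptions to verify against the Elastic Beanstalk docs, and
the file name is hypothetical:)

    # .ebextensions/elb-tcp-listener.config
    option_settings:
      # Switch the port-80 listener from HTTP to TCP so the ELB stops
      # managing/reusing backend HTTP connections itself.
      - namespace: aws:elb:listener:80
        option_name: ListenerProtocol
        value: TCP
      - namespace: aws:elb:listener:80
        option_name: InstanceProtocol
        value: TCP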