Nginx behind a reverse proxy sending 499

We have a Java-based reverse proxy (developed in-house) that talks to nginx, which in turn proxy_passes to a gunicorn server (Python/Django). The HTTP request flows from the Java reverse proxy (JRPxy) to nginx to gunicorn. All of these servers run on the same machine.

Previously JRPxy sent Connection: keep-alive to nginx so that connections could be reused. We then decided to send a Connection: close header instead and use a new connection for every request. Since making this change we see nginx returning a 499 status code.

I debugged JRPxy at my end. Each time, we write the request headers and body, and the very next moment, when we try to read the nginx response, we get 0 (no bytes) or -1 (EOF) as the number of bytes read. When we get 0, we eventually get -1 as well (EOF after reading no bytes).

From the code's perspective, we call Socket.shutdownOutput() (see the Socket javadoc for Java SE 7) each time we send the Connection: close header. In Java's terms, it indicates to the remote peer that this side is done sending data (see the "purpose of socket.shutdownOutput()" question on Stack Overflow). If I comment out this line alone, while still sending the Connection: close header, I get a valid 200 OK response.
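
To make the half-close semantics concrete, here is a tiny self-contained demo (this is not our proxy code; the class and variable names are made up for illustration). The client writes a line, calls shutdownOutput(), and can still read a reply that the peer sends after seeing EOF:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class HalfCloseDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread peer = new Thread(() -> {
                try (Socket s = server.accept()) {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(s.getInputStream(), StandardCharsets.US_ASCII));
                    String line;
                    while ((line = in.readLine()) != null) {     // null = EOF, i.e. the client's FIN
                        System.out.println("peer got: " + line);
                    }
                    // The peer can still answer after seeing EOF: the connection is only half-closed.
                    s.getOutputStream().write("reply after EOF\n".getBytes(StandardCharsets.US_ASCII));
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            peer.start();

            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                client.getOutputStream().write("hello\n".getBytes(StandardCharsets.US_ASCII));
                client.shutdownOutput();                         // sends FIN: "done sending", read side stays open
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream(), StandardCharsets.US_ASCII));
                System.out.println("client got: " + in.readLine());  // still readable after the half-close
            }
            peer.join();
        }
    }
}

In other words, a half-close still allows the peer to respond, which is what I expected nginx to do once the request has been fully written.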

I have captured the netstat output to see the connection state. When we do Socket.shutdownOutput(), we see TIME_WAIT on nginx's side (local port 8888), indicating that nginx performed the active close of the socket and is now waiting out the TIME_WAIT period.


tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 127.0.0.1:8888          127.0.0.1:47342         TIME_WAIT   -                   timewait (59.17/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 127.0.0.1:8888          127.0.0.1:47342         TIME_WAIT   -                   timewait (58.14/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 127.0.0.1:8888          127.0.0.1:47342         TIME_WAIT   -                   timewait (57.12/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 127.0.0.1:8888          127.0.0.1:47342         TIME_WAIT   -                   timewait (56.09/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 127.0.0.1:8888          127.0.0.1:47342         TIME_WAIT   -                   timewait (55.07/0/0)

However, if I comment out the Socket.shutdownOutput() call, I see the netstat output reversed: this time the JRPxy side (the ephemeral port) is in the TIME_WAIT state, indicating that it initiated the socket close.


tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 127.0.0.1:47379         127.0.0.1:8888          TIME_WAIT   -                   timewait (59.59/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 127.0.0.1:47379         127.0.0.1:8888          TIME_WAIT   -                   timewait (58.57/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 127.0.0.1:47379         127.0.0.1:8888          TIME_WAIT   -                   timewait (57.54/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 127.0.0.1:47385         127.0.0.1:8888          TIME_WAIT   -                   timewait (59.87/0/0)
tcp6       0      0 127.0.0.1:47379         127.0.0.1:8888          TIME_WAIT   -                   timewait (56.52/0/0)
tcp6       0      0 :::8888                 :::*                    LISTEN      12156/docker-proxy  off (0.00/0/0)
tcp6       0      0 127.0.0.1:47385         127.0.0.1:8888          TIME_WAIT   -                   timewait (58.85/0/0)

Is Socket.shutdownOutput() by any chance signalling to nginx that the client is closing the connection, and is that why nginx logs the 499? If so, shouldn't nginx treat this as a half-close and still send back the response data?

My other theory is that nginx responds very quickly and closes the socket immediately, even before JRPxy gets a chance to read from it. This seems less likely, as there are delays from gunicorn's processing.

Posted at Nginx Forum:

To debug the issue further, I wrote a simple Java-based HTTP client. This client opens a socket to the nginx server, writes the request line (GET / HTTP/1.1), writes the Host header, writes the Connection: close header and then commits the request, i.e. writes the CRLF sequence a second time to send the blank line that terminates the headers.
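
For reference, this is roughly what that test client looks like (a simplified sketch; the port comes from the netstat output above, while the Host value and the halfClose toggle are placeholders of mine):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class NginxTestClient {
    public static void main(String[] args) throws Exception {
        boolean halfClose = true;                    // set to false to skip shutdownOutput()
        try (Socket socket = new Socket("127.0.0.1", 8888)) {
            OutputStream out = socket.getOutputStream();
            String request = "GET / HTTP/1.1\r\n"
                    + "Host: 127.0.0.1:8888\r\n"
                    + "Connection: close\r\n"        // also tested with keep-alive
                    + "\r\n";                        // blank line committing the request
            out.write(request.getBytes(StandardCharsets.US_ASCII));
            out.flush();

            if (halfClose) {
                // With a proxy_pass configured, this call is what produces the 499
                // in the nginx access log instead of a 200 response.
                socket.shutdownOutput();
            }

            // Dump whatever nginx sends back until EOF.
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
            System.out.flush();
        }
    }
}

Toggling halfClose and the Connection header value gives the combinations described below.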

In this setup, I tested both with a proxy_pass to www.google.com and without a proxy_pass (nginx serving the default index.html).

Without a proxy_pass, nginx never gives a 499 status code. Even if I call Socket.shutdownOutput(), nginx gives a valid 200 response, irrespective of the Connection header (keep-alive/close).

With a proxy_pass, I get a valid response if I don't call Socket.shutdownOutput(). But if I do call it, I get a 499 irrespective of the Connection header (keep-alive/close). This implies that nginx treats the client's Socket.shutdownOutput() as the client closing the connection, even though all the request data has been written to the socket.

Posted at Nginx Forum:

Hello!

On Sun, Jan 11, 2015 at 01:37:31AM -0500, ppwm wrote:

> irrespective of the Connection header (keep-alive/close).
>
> With a proxy_pass, I get a valid response if I don't call
> Socket.shutdownOutput(). But if I do call it, I get a 499 irrespective
> of the Connection header (keep-alive/close). This implies that nginx
> treats the client's Socket.shutdownOutput() as the client closing the
> connection, even though all the request data has been written to the
> socket.

In HTTP, it's generally a bad idea to shut down the socket before you've got the response. While not strictly prohibited, the server will likely think that the client got bored waiting for a response and closed the connection, so there is no need to return any response. Google for something like "http tcp half-close" for more details.

The "proxy_ignore_client_abort" directive can be used if you want nginx to be compatible with such clients for some reason; see here:

http://nginx.org/r/proxy_ignore_client_abort
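
For example, a minimal location block might look like this (the upstream address is just a placeholder for the gunicorn backend):

location / {
    proxy_pass                 http://127.0.0.1:8000;   # placeholder backend address
    proxy_ignore_client_abort  on;                      # keep proxying even if the client closes its side early
}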


Maxim D.
http://nginx.org/