Proxy module buffering and timeouts

Given the following nginx config:


master_process off;
daemon off;

events {
    worker_connections 16384;
}

http {
    error_log stderr debug;
    access_log on;
    log_not_found on;
    client_body_buffer_size 64k;
    client_body_in_single_buffer on;

    upstream nodes {
        server 127.0.0.1:8000 max_fails=0;
        server 127.0.0.1:8001 max_fails=0;
        server 127.0.0.1:8002 max_fails=0;

        keepalive 16384;
    }

    server {
        listen *:7070 backlog=16384 reuseport;
        keepalive_requests 2147483647;

        location /demo {
            proxy_pass http://nodes;
            proxy_connect_timeout 1ms;
            proxy_read_timeout 1ms;
            proxy_send_timeout 1ms;
            proxy_buffering on;
        }
    }
}

When requesting the nginx server, I get either a response from the upstream
server:

curl -i http://localhost:7070/demo

HTTP/1.1 200 OK
Server: nginx/1.9.4
Date: Fri, 04 Sep 2015 12:33:00 GMT
Content-Type: text/plain
Content-Length: 27
Connection: keep-alive

{"message": "Hello World!"}

or a timeout response:

curl -i http://localhost:7070/demo

HTTP/1.1 504 Gateway Time-out
Server: nginx/1.9.4
Date: Fri, 04 Sep 2015 12:24:34 GMT
Content-Type: text/html
Content-Length: 182
Connection: keep-alive

<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.9.4</center>
</body>
</html>

but I also sometimes randomly get partially cut responses:

curl -i http://localhost:7070/demo

HTTP/1.1 200 OK
Server: nginx/1.9.4
Date: Fri, 04 Sep 2015 12:24:35 GMT
Content-Type: text/plain
Content-Length: 27
Connection: keep-alive

curl: (18) transfer closed with 27 bytes remaining to read

How can this be fixed? Since proxy buffering is on, I expect nginx to
always return either a 504 error page on upstream timeout or a full
response from the upstream.

Here’s a simple Node.js script for the upstream nodes to reproduce the case:

var http = require('http');
var util = require('util');
var cluster = require('cluster');

var SERVER_COUNT = 8;
var HELLO_WORLD = '"message": "Hello World!"}';
var LOCALHOST = '127.0.0.1';
var PORT = 8000;

function simple(request, response) {
    response.writeHead(200, {
        'Content-Type': 'text/plain',
        // +1 accounts for the opening '{' written separately below
        'Content-Length': HELLO_WORLD.length + 1
    });
    // The body is written in two chunks, so the header and the first byte
    // may reach nginx before the rest of the body does.
    response.write('{');
    response.end(HELLO_WORLD);
}

function createServer(port, handler) {
    http.createServer(handler).listen(port, LOCALHOST);
    util.log(util.format('Server running at http://%s:%d/', LOCALHOST, port));
}

if (cluster.isMaster) {
    for (var c = 0; c < SERVER_COUNT; c++) {
        cluster.fork({
            port: PORT + c
        });
    }
} else {
    createServer(process.env.port, simple);
}


Hello!

On Fri, Sep 04, 2015 at 08:42:54AM -0400, donatasm wrote:

[…]

> curl: (18) transfer closed with 27 bytes remaining to read
>
> How can this be fixed?

By fixing your backend to return a full response or not return it
at all.

> Since proxy buffering is on, I expect nginx to always return either
> a 504 error page on upstream timeout or a full response from the
> upstream.

No, this is a wrong expectation. Buffering means that nginx will
avoid doing extra work for partially filled body buffers, and
it will be allowed to buffer parts of a response in the filter
chain. No attempt will be made to obtain the full response and
check its length (and such behaviour is not possible at all if
the response is big enough). As soon as a response header is
received from the upstream server, it will be passed to the
client, and then nginx will start proxying the response body. An
error can be returned only if the header has not yet been passed
to the client.
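
Incidentally, if you want the toy backend from your script to behave
this way, one option (an untested sketch, nothing nginx itself
requires) is to hand the whole body to a single end() call, so the
status line, headers and body normally leave the process in one write:

var body = '{"message": "Hello World!"}';

// hypothetical drop-in replacement for the simple() handler above
function simpleAtomic(request, response) {
    response.writeHead(200, {
        'Content-Type': 'text/plain',
        'Content-Length': Buffer.byteLength(body)
    });
    // the headers are typically sent together with the body in a single
    // write, so nginx either receives the whole response or nothing
    response.end(body);
}

With the 1 ms timeouts this will still mostly produce 504s, but it
should no longer produce a 200 header followed by a truncated body.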


Maxim D.
http://nginx.org/

Ok, thanks!
