What does the proxy module do in that case? You said earlier HTTP lacks flow control too. So what is the difference?
Posted at Nginx Forum:
On Jul 20, 2013, at 9:05 , momyc wrote:
What does the proxy module do in that case? You said earlier HTTP lacks flow control too. So what is the difference?
The proxy module stops reading from the backend, but it does not close the backend connection.
It starts reading from the backend again once some buffers have been sent to the slow client.
–
Igor S.
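
For illustration, the behaviour Igor describes, pausing reads without closing the upstream connection, maps naturally onto an event loop. A minimal sketch using epoll, with illustrative names rather than nginx's actual internals:

    #include <sys/epoll.h>

    /* Pause: keep the upstream fd registered but request no read events.
     * The TCP connection stays open, so once the socket buffers fill up
     * the backend sees a zero receive window, not a dead peer. */
    static void pause_upstream(int epfd, int upstream_fd)
    {
        struct epoll_event ev = { .events = 0, .data.fd = upstream_fd };
        epoll_ctl(epfd, EPOLL_CTL_MOD, upstream_fd, &ev);
    }

    /* Resume: re-arm read events once buffers have drained to the client. */
    static void resume_upstream(int epfd, int upstream_fd)
    {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = upstream_fd };
        epoll_ctl(epfd, EPOLL_CTL_MOD, upstream_fd, &ev);
    }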
On Jul 20, 2013, at 8:41 , momyc wrote:
OK, it probably closes the connection to the backend server. Well, in the case of multiplexed FastCGI, Nginx should do two things:
- send FCGI_ABORT_REQUEST to the backend for the given request
- start dropping records for the given request if it still receives them from the backend
Suppose a slow client. Since nginx receives data quickly, the backend will send data quickly too, because it does not know about the slow client.
At some point the buffered data surpasses the limit and nginx has to abort the connection to the backend. That does not happen if the backend knows the real speed of the client.
–
Igor S.
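
To make the record framing concrete (taken from the FastCGI specification, not from any existing nginx code): every FastCGI record starts with an 8-byte header, and FCGI_ABORT_REQUEST (type 2) carries no body, so aborting one request on a multiplexed connection costs a single 8-byte record naming that request's id, while the connection itself stays open:

    #include <stdint.h>

    enum { FCGI_ABORT_REQUEST = 2 };

    typedef struct {
        uint8_t version;            /* FCGI_VERSION_1 == 1 */
        uint8_t type;               /* record type, e.g. FCGI_ABORT_REQUEST */
        uint8_t request_id_hi;      /* requestId, big-endian */
        uint8_t request_id_lo;
        uint8_t content_length_hi;  /* contentLength, big-endian */
        uint8_t content_length_lo;
        uint8_t padding_length;
        uint8_t reserved;
    } fcgi_header;

    static fcgi_header make_abort_record(uint16_t request_id)
    {
        fcgi_header h = {
            .version = 1,
            .type = FCGI_ABORT_REQUEST,
            .request_id_hi = (uint8_t)(request_id >> 8),
            .request_id_lo = (uint8_t)(request_id & 0xff),
            /* content and padding lengths stay zero: no body */
        };
        return h;
    }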
What do you mean by “stop reading”? Oh, you just stop checking if anything is ready for reading. I see. Well, this is crude flow control, I’d say. The proxied server could unexpectedly drop the connection because it would think Nginx is dead.
There is a nice feature, I don’t remember exactly what it’s called, where some content can be buffered on Nginx (in proxy mode), with a strict limit on how much can be buffered before it goes to a file. This is what could be used for that case. If a buffer overflow happens, close the client, abort the backend, and drop records for that request. Keep the connection and keep receiving and de-multiplexing records for the good requests.
Posted at Nginx Forum:
“abort backend” meant “abort request”
Posted at Nginx Forum:
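
A rough sketch of the de-multiplexing behaviour momyc proposes (keep the shared connection, abort one request, drop its late records), assuming hypothetical helpers request_is_aborted() and deliver_to_request(): each incoming record is routed to its request by id, and records belonging to already-aborted requests are silently dropped while the connection keeps serving the others:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    bool request_is_aborted(uint16_t id);                     /* hypothetical */
    void deliver_to_request(uint16_t id, const uint8_t *p, size_t n);

    /* rec points at one complete FastCGI record: 8-byte header plus body. */
    static void demux_record(const uint8_t *rec)
    {
        uint16_t id  = (uint16_t)((rec[2] << 8) | rec[3]);    /* requestId */
        size_t   len = (size_t)((rec[4] << 8) | rec[5]);      /* contentLength */

        if (request_is_aborted(id)) {
            return;   /* drop records for requests we already aborted */
        }
        deliver_to_request(id, rec + 8, len);
    }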
On Jul 20, 2013, at 9:23 , momyc wrote:
What do you mean by “stop reading”? Oh, you just stop checking if anything is ready for reading. I see. Well, this is crude flow control, I’d say. The proxied server could unexpectedly drop the connection because it would think Nginx is dead.
TCP will tell the backend that nginx is alive. It can drop the connection only after some timeout.
There is a nice feature, I don’t remember exactly what it’s called, where some content can be buffered on Nginx (in proxy mode), with a strict limit on how much can be buffered before it goes to a file. This is what could be used for that case. If a buffer overflow happens, close the client, abort the backend, and drop records for that request. Keep the connection and keep receiving and de-multiplexing records for the good requests.
Yes, but it is useless to buffer a long polling connection in a file.
–
Igor S.
If it’s time to close the backend connection in a non-multiplexed configuration, just send FCGI_ABORT_REQUEST for that particular request and start dropping records received from the backend for that request.
Please shoot me any other questions about problems with implementing that feature.
Posted at Nginx Forum:
it is useless to buffer a long polling connection in a file.
For Nginx there is no difference between a long-polling request and any other request. It wouldn’t even know. All it should care about is how much to buffer and how long to keep those buffers before dropping them and aborting the request. I do not see any technical problem here.
Posted at Nginx Forum:
Yes, but it is useless to buffer a long polling connection in a file.
Buffering some data on the web server is fine as long as the client receives whatever the server has sent, or the client gets a closed connection. If sending is not possible once the buffers are full, dropping the client connection and aborting the request is not a problem. Problems like that should be dealt with at a higher level of abstraction.
Posted at Nginx Forum:
On Jul 20, 2013, at 11:52 , momyc wrote:
it is useless to buffer a long polling connection in a file.
For Nginx there is no difference between a long-polling request and any other request. It wouldn’t even know. All it should care about is how much to buffer and how long to keep those buffers before dropping them and aborting the request. I do not see any technical problem here.
There is no technical problem. There is an issue of the practical utility of such a backend. There are two types of backends:
The first type uses a large amount of memory to process a request. It should send a generated response as soon as possible and then move on to the next request. nginx can buffer thousands of such responses and send them to clients. A persistent connection between nginx and the backend, together with nginx buffering, helps in this case. Multiplexing just complicates the backend logic without any benefit. The bottleneck here is not the number of connections to a single listen port (64K) but the amount of memory.
The second type of backend uses a small amount of memory per request, can process thousands of clients simultaneously, and does NOT need buffering at all. Multiplexing helps such backends, but only together with flow control.
–
Igor S.
I would also like to add a vote for FCGI multiplexing.
There is no obligation for backends, since non-implementing backends can
indicate FCGI_CANT_MPX_CONN in response to a FCGI_GET_VALUES request by
nginx. The other poster has already mentioned FCGI_ABORT_REQUEST and
dropping response packets from dangling requests.
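
For reference, the FastCGI specification splits this across two mechanisms: the multiplexing capability itself is advertised through the FCGI_MPXS_CONNS value ("0" or "1") in a FCGI_GET_VALUES_RESULT record, while FCGI_CANT_MPX_CONN is the protocolStatus a backend returns in FCGI_END_REQUEST when it rejects a second concurrent request on one connection. A sketch of the reply a multiplexing-capable backend might build:

    #include <stdint.h>
    #include <string.h>

    enum { FCGI_GET_VALUES_RESULT = 10 };   /* management record, requestId 0 */

    /* Writes the FCGI_MPXS_CONNS = "1" reply into buf (at least 26 bytes);
       returns the total record size. Lengths < 128 use one-byte prefixes. */
    static size_t build_mpxs_reply(uint8_t *buf)
    {
        static const char name[]  = "FCGI_MPXS_CONNS";
        static const char value[] = "1";    /* "0" would mean: cannot multiplex */
        size_t nlen = sizeof(name) - 1, vlen = sizeof(value) - 1;
        size_t clen = 2 + nlen + vlen;      /* two length bytes + name + value */

        buf[0] = 1;                          /* version FCGI_VERSION_1 */
        buf[1] = FCGI_GET_VALUES_RESULT;     /* type */
        buf[2] = 0; buf[3] = 0;              /* requestId 0 = management */
        buf[4] = (uint8_t)(clen >> 8);       /* contentLength, big-endian */
        buf[5] = (uint8_t)(clen & 0xff);
        buf[6] = 0;                          /* paddingLength */
        buf[7] = 0;                          /* reserved */
        buf[8] = (uint8_t)nlen;
        buf[9] = (uint8_t)vlen;
        memcpy(buf + 10, name, nlen);
        memcpy(buf + 10 + nlen, value, vlen);
        return 8 + clen;
    }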
My scenario is that I have a variety of requests: some take a while, but others are a quick URL rewrite culminating in an X-Accel-Redirect. This rewrite involves complicated logic which is part of my overall backend application, which I would rather not factor out and rewrite into an nginx module. The actual computation for the URL rewrite is minuscule compared to the overhead of opening/closing a TCP connection, so FCGI request multiplexing would be of great help here.
If the overhead of a multiplexed FCGI request starts to approach that of doing the work directly in an nginx module, it would give a valuable alternative to writing modules. This would avoid the pitfalls of writing modules (code refactoring, rewriting in C, jeopardizing the nginx worker process, etc.).
Posted at Nginx Forum:
Hello!
On Fri, Oct 04, 2013 at 09:43:41AM -0400, DevNginx wrote:
application, which I would rather not factor out and rewrite into an nginx module. The actual computation for the URL rewrite is minuscule compared to the overhead of opening/closing a TCP connection, so FCGI request multiplexing would be of great help here.
If the overhead of a multiplexed FCGI request starts to approach that of doing the work directly in an nginx module, it would give a valuable alternative to writing modules. This would avoid the pitfalls of writing modules (code refactoring, rewriting in C, jeopardizing the nginx worker process, etc.).
Your use case seems to be perfectly covered by keepalive connections support, which is already here. See the ngx_http_upstream_module documentation.
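
A minimal configuration along those lines (the directives are real; the upstream name and address are placeholders): an upstream block with keepalive, plus fastcgi_keep_conn so nginx does not close the FastCGI connection after each request:

    upstream fcgi_backend {
        server 127.0.0.1:9000;
        keepalive 8;                  # idle connections kept open per worker
    }

    server {
        location / {
            include fastcgi_params;
            fastcgi_pass fcgi_backend;
            fastcgi_keep_conn on;     # reuse the connection instead of closing
        }
    }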
–
Maxim D.
http://nginx.org/en/donation.html
Your use case seems to be perfectly covered by keepalive connections support, which is already here. See the ngx_http_upstream_module documentation.
OK, yeah that would work for me. Thanks.
There is still the possibility that long-running requests could clog the connections, but I can work around that by listening on two different ports and having nginx route the quickies to their dedicated port.
Posted at Nginx Forum:
Question about FastCGI: how does it handle simultaneous connections with one process when PHP itself is a blocking language? What if I have something like “sleep(100)”? Won’t it block the process for the other users?
Thanks
Maxim D. wrote in post #1101079:
Hello!
On Sat, Mar 09, 2013 at 10:43:47PM +0800, Ji Zhang wrote:
But I also find an interesting article on how great this feature is, back to 2002: FastCGI — The Forgotten Treasure
This article seems to confuse FastCGI multiplexing with event-based programming. Handling multiple requests in a single process is great - and nginx does so. But you don’t need FastCGI multiplexing to do it.
… and perform asynchronously. Does my point make sense? or some other more substantial reasons?
You are correct: since FastCGI is used mostly for local communication, multiplexing on the application level isn’t expected to be beneficial. Another reason is that multiplexing isn’t supported (and probably will never be) by the major FastCGI application - PHP.
There were several discussions on FastCGI multiplexing here, and the general consensus seems to be that FastCGI multiplexing might be useful to reduce the cost of multiple long-polling connections to an application, as it would reduce the number of sockets the OS has to maintain. It’s yet to be demonstrated, though.
–
Maxim D.
http://nginx.org/en/donation.html
Hello!
On Sun, Sep 14, 2014 at 09:22:48PM +0200, Wter S. wrote:
Question about FastCGI: how does it handle simultaneous connections with one process when PHP itself is a blocking language? What if I have something like “sleep(100)”? Won’t it block the process for the other users?
Thanks
FastCGI doesn’t imply PHP (and, actually, PHP doesn’t imply blocking either - there are some event-driven PHP frameworks out there).
As of now, the implementation of the FastCGI protocol in PHP doesn’t support FastCGI multiplexing at all, and that’s one of the reasons why nginx doesn’t implement FastCGI multiplexing either. Quoting the message you’ve replied to:
… Another reason is that multiplexing isn’t
supported (and probably will never be) by the major FastCGI
application - PHP.
–
Maxim D.
http://nginx.org/
Hello!
On Mon, Sep 15, 2014 at 06:16:58PM +0200, Wter S. wrote:
Then how is Nginx able to handle thousands of simultaneous requests (where some of them contain blocking IO operations) with only one process (or let’s say 10 processes)?
That’s because nginx is an event-driven server and uses non-blocking IO whenever possible.
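
As a small generic illustration of that pattern (plain POSIX, not nginx source): every socket is switched to non-blocking mode, so a read that would otherwise stall returns EAGAIN and the worker just moves on to whichever connection is ready next:

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        return flags < 0 ? -1 : fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    /* Returns >0 for data, 0 for EOF, -1 for a real error,
       and -2 when the socket simply has nothing to read yet. */
    static ssize_t try_read(int fd, char *buf, size_t n)
    {
        ssize_t r = read(fd, buf, n);
        if (r < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return -2;   /* would block: leave the fd to the event loop */
        return r;
    }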
–
Maxim D.
http://nginx.org/
Maxim D. wrote in post #1157635:
Hello!
On Sun, Sep 14, 2014 at 09:22:48PM +0200, Wter S. wrote:
FastCGI doesn’t imply PHP
As of now, the implementation of the FastCGI protocol in PHP doesn’t support FastCGI multiplexing at all, and that’s one of the reasons why nginx doesn’t implement FastCGI multiplexing either.
Maxim D.
http://nginx.org/
Then how is Nginx able to handle thousands of simultaneous requests (where some of them contain blocking IO operations) with only one process (or let’s say 10 processes)?
Thanks!