How to disable buffering when using FastCGI?

Is there a way to disable buffering when using FastCGI? (for a Comet
style application)

Is there an option “fastcgi_buffering off”, working like the option
“proxy_buffering off” in the HTTP proxy module?

Thanks,

Nicolas Grilly

Nicolas Grilly wrote:

Is there a way to disable buffering when using FastCGI? (for a Comet
style application)

Is there an option “fastcgi_buffering off”, working like the option
“proxy_buffering off” in the HTTP proxy module?

Have you tried:

fastcgi_buffers 0 0;

?

Phillip B Oldham
ActivityHQ
[email protected] mailto:[email protected]



Hello!

On Tue, Oct 13, 2009 at 12:17:05PM +0200, Nicolas Grilly wrote:

Is there a way to disable buffering when using FastCGI? (for a Comet
style application)

Is there an option “fastcgi_buffering off”, working like the option
“proxy_buffering off” in the HTTP proxy module?

No, there is no such option. Buffering can’t be disabled for
fastcgi.

Maxim D.

2009/10/13 Phillip O. [email protected]:

Nicolas Grilly wrote:
Is there a way to disable buffering when using FastCGI? (for a Comet
style application)

Is there an option “fastcgi_buffering off”, working like the option
“proxy_buffering off” in the HTTP proxy module?

Have you tried:

fastcgi_buffers 0 0;

Yes, I have tried this option, but it disables buffering only for the
FastCGI module, not for the output filters (especially gzip and SSL
modules). So, this option is not enough to completely disable
buffering.

Hello Nicolas,

Tuesday, October 13, 2009, 9:20:00 PM, you wrote:

Hello Maxim,

Is there no such option just because nobody implemented it? Or is it
because of some kind of technical constraint?

Yes. It’s because of FastCGI protocol internals. The protocol splits the
“stream” into blocks of at most 32KB each, and each block has a header
(how many bytes it contains, etc.), so nginx can’t send content to the
client until it gets the whole block from upstream.
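(For reference: the block header Denis mentions is the fixed 8-byte record header defined by the FastCGI specification. A minimal Python sketch of that framing — the helper names here are mine, not part of nginx or any FastCGI library:)

```python
import struct

# Fixed 8-byte FastCGI record header, per the spec:
# version, type, requestId (2 bytes), contentLength (2 bytes),
# paddingLength, reserved.
FCGI_HEADER = struct.Struct(">BBHHBx")
FCGI_STDOUT = 6  # record type carrying body data from the application

def pack_record(request_id, payload, rtype=FCGI_STDOUT):
    """Wrap payload bytes in a single FastCGI record (no padding)."""
    return FCGI_HEADER.pack(1, rtype, request_id, len(payload), 0) + payload

def unpack_header(data):
    """Parse the 8-byte header; returns (type, request_id, content_length, padding)."""
    _version, rtype, request_id, clen, pad = FCGI_HEADER.unpack(data[:8])
    return rtype, request_id, clen, pad
```

The contentLength field is what forces the wait Denis describes: the web server cannot forward a record’s payload until all contentLength bytes have arrived.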

Do you recommend that people developing Comet-style applications use
HTTP proxying instead of FastCGI?

Yes. nginx can establish a pipe between the backend and the client right
after the headers are sent, provided proxy_buffering is off and gzip is off.
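Denis’s recommendation translates into something like the following sketch (the location and backend address are placeholders, not taken from the thread):

```nginx
location /comet {
    proxy_pass http://127.0.0.1:8000;  # hypothetical Comet backend
    proxy_buffering off;               # stream the response as it arrives
    gzip off;                          # gzip would re-introduce buffering
}
```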

Is it difficult to implement the option “fastcgi_buffering off”, using
the same technique as in the source code of the HTTP proxy module?

It is not possible. See comment above.

Thanks for your advice,

On Tue, Oct 13, 2009 at 13:19, Maxim D. [email protected] wrote:

On Tue, Oct 13, 2009 at 12:17:05PM +0200, Nicolas Grilly wrote:

Is there a way to disable buffering when using FastCGI? (for a Comet
style application)

Is there an option “fastcgi_buffering off”, working like the option
“proxy_buffering off” in the HTTP proxy module?

No, there is no such option. Buffering can’t be disabled for
fastcgi.

Hello Maxim,

Is there no such option just because nobody implemented it? Or is it
because of some kind of technical constraint?

Do you recommend that people developing Comet-style applications use
HTTP proxying instead of FastCGI?

Is it difficult to implement the option “fastcgi_buffering off”, using
the same technique as in the source code of the HTTP proxy module?

Thanks for your advice,

Nicolas Grilly

Hello!

On Tue, Oct 13, 2009 at 04:20:00PM +0200, Nicolas Grilly wrote:

Hello Maxim,

Is there no such option just because nobody implemented it? Or is it
because of some kind of technical constraint?

Something like this. FastCGI requires buffer processing which
isn’t compatible with current code for unbuffered connections.

Do you recommend that people developing Comet-style applications use
HTTP proxying instead of FastCGI?

For now you should either close & reopen connections, or use HTTP
proxy instead.

Is it difficult to implement the option “fastcgi_buffering off”, using
the same technique as in the source code of the HTTP proxy module?

The current “proxy_buffering off” implementation is somewhat weird
and should be nuked, IMHO. The same implementation for FastCGI
just won’t work.

I believe buffering control in the upstream module (which includes
fastcgi, proxy and memcached) should be changed to something more
flexible. In particular, fastcgi should be aware of FastCGI
record boundaries, and shouldn’t try to buffer too much once it
has got a full record.

I’ve posted some preliminary patches for this as a part of backend
keepalive support work, but they are a bit stale now.

Maxim D.

Hello again Maxim,

2009/10/13 Maxim D. [email protected]:

On Tue, Oct 13, 2009 at 04:20:00PM +0200, Nicolas Grilly wrote:

Is there no such option just because nobody implemented it? Or is it
because of some kind of technical constraint?

Something like this. FastCGI requires buffer processing which
isn’t compatible with current code for unbuffered connections.

Understood.

Do you recommend that people developing Comet-style applications use
HTTP proxying instead of FastCGI?

For now you should either close & reopen connections, or use HTTP
proxy instead.

So, for now, I guess my best bet is to use HTTP proxying :-)

record boundaries, and shouldn’t try to buffer too much once it
has got a full record.

I’ve posted some preliminary patches for this as a part of backend
keepalive support work, but they are a bit stale now.

It would be a perfect solution! If the fastcgi module is aware of
FastCGI record boundaries and stops buffering after having received a
full record, then the problem is solved. This gives the FastCGI
backend complete control over the amount of buffering: it can send
short records in order to limit buffering, or long records (around
8KB) for normal buffering. Is that your plan for the future of the
upstream module?

Cheers,

Nicolas

Hello Denis!

Thanks for your explanations.

2009/10/13 Denis F. Latypoff [email protected]:

Tuesday, October 13, 2009, 9:20:00 PM, you wrote:

Is there no such option just because nobody implemented it? Or is it
because of some kind of technical constraint?

Yes. It’s because of FastCGI protocol internals. The protocol splits the
“stream” into blocks of at most 32KB each, and each block has a header
(how many bytes it contains, etc.), so nginx can’t send content to the
client until it gets the whole block from upstream.

I agree that when a FastCGI backend sends a record to the web server,
the web server must wait for the complete record before forwarding it
to the client. This implies a lot of buffering if the records sent by
the FastCGI backend are very long.

Alternatively, the FastCGI backend can choose to send very short
records (for example 50 bytes) and then the web server must be able to
forward each record immediately after reception, without any
buffering.

Source: the FastCGI specification
(http://www.fastcgi.com/drupal/node/6?q=node/22) and its Python
implementation
(http://trac.saddi.com/flup/browser/flup/server/fcgi_base.py)

But even when the records sent by the FastCGI backend are very short
(around 50 bytes), Nginx doesn’t send them immediately. Nginx seems to
buffer over FastCGI record boundaries. Am I correct?
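The behavior Nicolas is asking for — forward each record as soon as it is complete, and no later — can be sketched in a few lines of Python. This is an illustration of the idea only, not nginx code; all names are mine:

```python
import struct

# Fixed 8-byte FastCGI record header (version, type, requestId,
# contentLength, paddingLength, reserved), per the spec.
FCGI_HEADER = struct.Struct(">BBHHBx")

def iter_records(feed):
    """Yield (type, payload) as soon as each complete record arrives.

    `feed` is any iterable of byte chunks (e.g. successive socket reads);
    nothing is held back once a record's contentLength + padding bytes
    are in hand, so short records are passed on immediately.
    """
    buf = b""
    for chunk in feed:
        buf += chunk
        while len(buf) >= 8:
            _ver, rtype, _rid, clen, pad = FCGI_HEADER.unpack(buf[:8])
            total = 8 + clen + pad
            if len(buf) < total:
                break  # record incomplete; wait for more data
            yield rtype, buf[8:8 + clen]
            buf = buf[total:]
```

A forwarder built this way would buffer at most one record at a time, which is exactly the per-record control over latency that short records would give the backend.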

Is it difficult to implement the option “fastcgi_buffering off”, using
the same technique as in the source code of the HTTP proxy module?

It is not possible. See comment above.

Maybe we can force nginx to send data back to the client just after
it has received each FastCGI record? Is that possible?

Thanks a lot for your advice,
Cheers,

Nicolas Grilly

Hello Maxim,

2009/10/13 Maxim D. [email protected]:

to optimize brain-damaged backends. But as long as fastcgi has
finished a record and not started another one, it’s probably a good
idea to pass the data we have so far downstream. And the current
approach won’t work with keepalive connections anyway.

But please keep in mind that I’m not Igor.

Thank you for your explanations! I will keep an eye on the evolution
of the FastCGI buffering, but I understand this is a complex topic. In
the meantime, I will use HTTP proxying.

Nicolas Grilly

Hello!

On Tue, Oct 13, 2009 at 08:24:53PM +0200, Nicolas Grilly wrote:

[…]

sending long records (around 8KB) for normal buffering. Is it your
plan for the future of the upstream module?

Complete control isn’t really a good thing, as it limits the ability
to optimize brain-damaged backends. But as long as fastcgi has
finished a record and not started another one, it’s probably a good
idea to pass the data we have so far downstream. And the current
approach won’t work with keepalive connections anyway.

But please keep in mind that I’m not Igor.

Maxim D.