Hello
In our setup we have NGINX as a front-end and several back-ends.
The problem is our load profile: we have a lot of simple and fast HTTP
requests, and very few requests that are very heavy in terms of time and
back-end CPU.
So my idea is to use proxy_next_upstream for simple requests as usual,
and it works perfectly.
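For the simple-request path, something like the following minimal sketch; the upstream name "backend" and the server addresses are placeholders, not from the original setup:

```nginx
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # If one back-end fails or times out, retry the request
        # on the next server in the group.
        proxy_next_upstream error timeout;
    }
}
```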
And for heavy requests, selected by URL, I want to pass them through to
the back-end with the lowest CPU load by specifying a small
proxy_connect_timeout and using proxy_next_upstream timeout.
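The heavy-request idea could be sketched like this; the location path /heavy and the timeout values are hypothetical, chosen only to illustrate the approach:

```nginx
location /heavy {
    proxy_pass http://backend;
    # A short connect timeout, so a back-end that is too loaded to
    # accept the connection quickly is skipped.
    proxy_connect_timeout 100ms;
    # Heavy requests may legitimately run long once accepted.
    proxy_read_timeout 60s;
    # On a timeout, move on to the next server in the group.
    proxy_next_upstream timeout;
}
```

Note that the "timeout" condition in proxy_next_upstream does not distinguish connect timeouts from read timeouts, which is exactly the problem described below.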
But if the whole system is overloaded with heavy requests, I don't want
them to travel through all the back-ends, because proxy_read_timeout is
about 1 minute.
I was hoping to set a request timeout to the same value as
proxy_read_timeout; from my point of view this should prevent heavy
requests from travelling across all upstreams on proxy_read_timeout, but
they do.
I've found a similar topic, but the proposal there was to add two new
options to proxy_next_upstream, such as timeout_tcp and timeout_http or
something similar.
On Wed, Mar 06, 2013 at 05:28:12AM -0500, yvlasov wrote:
> But in case of all system overload with heavy requests i don’t want them to
> travel through all BEs because of proxy_read_timeout is about 1 minute.
> I was hoping to set a request_timeout to the same value as
> proxy_read_timeout and from my point of view this should prevent heavy
> requests to travel all upstreams based on proxy_read_timeout, but they do.
> I ve found a similar topic but the proposition was to make two new options
> to proxy_next_upstream such as timeout_tcp timeout_http or something
> similar.
> Thanks for your future advices and comments.
I think that some aggregate upstream timeout, which would prevent
switching to the next upstream server once it has passed, would be a
better solution to this problem.
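For what it's worth, later nginx releases (1.7.5 and newer) added aggregate limits of exactly this kind: proxy_next_upstream_timeout and proxy_next_upstream_tries. A sketch, reusing the hypothetical /heavy location and placeholder values from above:

```nginx
location /heavy {
    proxy_pass http://backend;
    proxy_next_upstream timeout;
    # Aggregate limits (nginx 1.7.5+): stop trying further servers
    # once 60 seconds have elapsed in total across all attempts,
    # or after 2 attempts, whichever comes first.
    proxy_next_upstream_timeout 60s;
    proxy_next_upstream_tries 2;
}
```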