I’m new to nginx and have been migrating my organization’s web server
away from Apache/mod_php to nginx 1.0/php-fpm (PHP 5.3.6).
We use Amazon Web Services and proxy requests to the nginx servers from
our HAProxy load balancers running on AWS.
On the HAProxy built-in admin stats page, I see ~200 current sessions for
the server, but I only see ~80 php-fpm processes running (via ps -ef, and
also by checking php-fpm.log).
Here’s what I have configured in…
nginx.conf:
worker_processes 8;
worker_connections 1024;
php-fpm.conf:
pm.max_children = 200
pm.start_servers = 10
pm.max_spare_servers = 15
pm.max_requests = 500
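For reference, here is how those two nginx directives are nested in
nginx.conf (worker_processes sits at the top level, while
worker_connections must live inside the events block):

    worker_processes  8;

    events {
        worker_connections  1024;
    }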
Given these numbers, I would think that php-fpm should be getting more
requests, but I’m not certain whether:
- Nginx is a bottleneck due to my configuration (maybe some other
settings I’m not aware of)
- There’s inherent latency from AWS to nginx when proxying incoming
requests, which is outside of nginx/PHP’s control
- There isn’t a direct 1:1 correlation between php-fpm processes and
requests
I just want to make sure that I’m not introducing a bottleneck with my
configuration, and that we’re handling all the requests we should be.
Thoughts?
Thanks in advance.
> On the HAProxy built-in admin stats page, I see ~200 current sessions
> for the server, but I only see ~80 php-fpm processes running (via ps -ef,
> and also by checking php-fpm.log).
A session doesn’t really mean that everything is PHP-related - there
might be static content fetches (for example if the same nginx/backend
also serves images) and/or keepalive connections. Besides, a single PHP
child, if the application code is optimal, can complete far more than one
request per given time period (1 second).
Personally I feel that 80 PHP processes for 200 req/sec is somewhat too
much - I try to keep it to around 200-300 req/sec per 10-20 children, and
bash the programmers for every page which doesn’t load in 0.0x secs.
You didn’t post the FPM manager line (eg something like pm = dynamic).
With that, the php-fpm master process spawns extra children when there is
a need (and also notes in the log file if the max setting is too low) and
kills the unneeded ones.
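For dynamic mode the pool section would look something like this (the
values are the ones from your post; pm.min_spare_servers is required when
pm = dynamic, and the 5 here is only an assumed example):

    pm = dynamic
    pm.max_children = 200
    pm.start_servers = 10
    pm.min_spare_servers = 5   ; assumed value - required in dynamic mode
    pm.max_spare_servers = 15
    pm.max_requests = 500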
> I just want to make sure that I’m not introducing a bottleneck with my
> configuration, and that we’re handling all the requests we should be.
For sure nginx won’t be the bottleneck.
The PHP part depends on the specific application - eg there can easily be
code that completes in nano/micro-seconds, while at the same time some
people manage to write endless loops etc. (btw, php-fpm has a nice
feature to monitor/backtrace such scripts and forcibly kill them if they
take too long).
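In the pool config that pair of features looks something like this - a
sketch with assumed paths and timeouts:

    ; log a backtrace for any request running longer than 5 seconds
    request_slowlog_timeout = 5s
    slowlog = /var/log/php-fpm/slow.log   ; assumed path

    ; forcibly terminate a child whose request runs longer than 30 seconds
    request_terminate_timeout = 30s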
But to make 100% sure, HAProxy offers a neat status page where you can
see the connection/request error counts, which will indicate whether
there is a problem at the backends.
For extra debugging you can always enable logging on both instances
(frontend/backend) and compare the incoming, forwarded, and served
requests.
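On the nginx side, something like the following (a sketch - the format
name and log path are just examples) records both the total request time
and the time the PHP backend took, so the two can be compared per request:

    log_format timed '$remote_addr [$time_local] "$request" $status '
                     '$request_time $upstream_response_time';
    access_log /var/log/nginx/access.log timed;

On the HAProxy side, option httplog in the frontend/backend sections
gives per-request log lines to match against.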
rr
> A session doesn’t really mean that everything is PHP-related - there
> might be static content fetches (for example if the same nginx/backend
> also serves images) and/or keepalive connections.
Ah yes, our nginx only handles PHP requests, but we do get random
requests for non-existent js and image files that are rejected (visible
in the nginx error.log), so those must contribute to the 200+ sessions
listed on the HAProxy stats but never get assigned to a PHP process.
So a more accurate count of valid PHP requests would be (# of concurrent
sessions - # of errors) at any given moment on the HAProxy stats page,
right?
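For reference, the same stats page can be pulled as CSV for scripting
(this assumes the stats uri is /haproxy?stats - adjust to whatever is
configured; the first line of the output names each column):

    # scur = current sessions, ereq = request errors
    # (fields 5 and 13 in the HAProxy 1.4 CSV format)
    curl -s 'http://localhost/haproxy?stats;csv' | awk -F, '{ print $1, $2, $5, $13 }'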
I’m still not quite certain how the worker_processes value of 8 comes
into play when optimizing nginx… I know you should set it to the number
of cores, but with AWS, their accounting of cores doesn’t directly map to
the number of physical cores on the box. But beyond that, is it just a
matter of experimenting with values to see what results are better? Maybe
that’s out of the scope of this thread for now.
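For what it’s worth, worker_processes is usually matched to whatever the
instance actually exposes to the OS, which can be checked with:

    grep -c ^processor /proc/cpuinfo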
> Besides, a single PHP child, if the application code is optimal, can
> complete far more than one request per given time period (1 second).
> Personally I feel that 80 PHP processes for 200 req/sec is somewhat too
> much - I try to keep it to around 200-300 req/sec per 10-20 children,
> and bash the programmers for every page which doesn’t load in 0.0x secs.
I don’t think HAProxy is reporting 200 req/sec, but rather 200 concurrent
requests at the moment the stats page refreshes. I ran a quick benchmark
on the CLI using “ab -n 200 -c 100 http://localhost/…” and see we’re
doing about 400+ req/sec, given my conf file.
I’m assuming that by having max_children = 200 in the conf file, I’m
setting a max of 200 PHP requests at any given moment (a 1:1
child-to-request ratio).
But anyway, which numbers should I be tweaking in the conf files, if
possible, to achieve better throughput? Isn’t the spawning of the 80
processes done automatically, with the only thing I can control being the
cap (max_children)?
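One way to verify the real PHP concurrency (a sketch - it assumes
php-fpm >= 5.3.3 with the backend listening on 127.0.0.1:9000; adjust to
the actual socket) is php-fpm’s built-in status page:

    ; in the php-fpm pool config
    pm.status_path = /status

    # in the nginx server block, exposed to localhost only
    location = /status {
        allow 127.0.0.1;
        deny all;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }

Its output includes active/idle process counts, which shows how many of
the ~80 children are actually busy at any moment.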
> You didn’t post the FPM manager line (eg something like pm = dynamic).
> With that, the php-fpm master process spawns extra children when there
> is a need (and also notes in the log file if the max setting is too low)
> and kills the unneeded ones.
I have it set as dynamic.