I’m setting up a brand new second server for my Rails applications,
and am having a problem with server load getting out of control.
The machine is a quad-core 2GHz with 4GB RAM and a single 500GB disk.
This is a web/app server with no database.
When I direct traffic to the server and it's stable, the thing runs
great - low load for a few hundred Rails requests per minute.
If I up the load (either more users show up, or I deploy with
Capistrano), the load average increases to 1-2, and then never comes
back down.
With enough traffic going to the server, the load sometimes gets out
of control, slowly increasing until I direct the traffic back to my
other server.
I'm having real trouble diagnosing where the load is coming from. CPU
usage is really low; disk peaks during deploy but goes back down;
network traffic isn't huge.
Anyone got any ideas on what I can try next?
This is a brand new server, so there may be something broken or badly
configured.
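One thing I plan to check next: Linux load average counts processes in
uninterruptible sleep (D state) as well as runnable ones, so load can
climb while the CPU sits idle. A quick sketch to list any D-state
processes:

```shell
# Linux load average includes tasks in uninterruptible sleep (state D),
# so load can rise while CPU stays idle. List any D-state processes:
ps -eo state,pid,comm | awk '$1 == "D" { print }'
```

If Mongrels or nginx workers keep showing up here, they're blocked in
the kernel (usually on disk I/O), which would explain load rising with
no CPU usage.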
I have been using atop, and it shows spikes in disk usage during
deployment, then again after deployment (there's a lot of cache on the
machine, so I assume it's being flushed then).
The processes are not using any more CPU during this time, but the
load is slowly climbing, and response times are getting longer. CPU
wait time isn't going up either.
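Another thing I want to rule out is the box dipping into swap after a
deploy churns the page cache - swapped-out Mongrels would give exactly
this pattern (idle CPU, rising load, slower responses). A rough check:

```shell
# Show memory/swap usage in MB, then cumulative pages swapped in/out
# since boot; if pswpin/pswpout keep growing between runs, the box
# is actively swapping.
free -m
grep -E '^pswp(in|out) ' /proc/vmstat
```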
The stack is: Ubuntu 7 Server, Nginx, Rails 2.0.2, Mongrel 1.1.4, and
the MySQL 2.7 driver talking to a remote MySQL server. This is the
same configuration as my server that's running OK.
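Since the trouble starts right after a Capistrano deploy, I also want
to confirm the old Mongrels are actually being killed rather than
piling up. Listing them with elapsed run time should show stale ones
(just a generic ps sketch, not specific to my cluster config):

```shell
# List Mongrel processes with elapsed run time; any whose ELAPSED
# predates the last deploy are leftovers still holding memory/ports.
# The [m] trick keeps grep from matching its own process.
ps -eo pid,etime,args | grep '[m]ongrel_rails' \
  || echo "no mongrel processes running"
```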
There’s nothing notable in any log I can find.
I think there's a hardware problem, but the disk diagnostics I ran
came up clean. I'm having my server provider take a look. This
performance is clearly unacceptable - a quad-core, 4GB RAM box
shouldn't be unable to run 3 requests per second of my Rails app
without the load going over 10 after a deploy.