Server timeouts

Hi guys,

Does anyone know if mongrel has a built-in timeout for requests that
take too long? If so, how do you increase it?

I’ve found that some server requests are timing out. Complicated
requests are already being farmed out to asynchronous processes. But
some of the more basic requests are timing out when we have our peak
server load in the mornings (the bottleneck is the database server).

There are a couple of possible locations for the timeout I’m seeing:

  1. Mongrel
  2. Lighttpd
  3. Client browser / application

On my local development server, it does not time out, which leads me to believe it’s not #3.

Any feedback is appreciated.

Thanks,
Pete

Pete,

What about the DB is the bottleneck? I’m assuming you have already
looked at indexing and the like.

Mike B.


Yeah, all the foreign keys are indexed. I’m also caching what I can in our Ruby app’s memory.

It seems to be the database writes that are taking up a lot of the time. The other offenders are the multiple JOINs we’re doing.

I’ve ordered more processors for the database server, and am clearing
off a second machine so I can get a little MySQL cluster going. Any
advice on this is welcome since I’ve never scaled a database server
before.

As the database gets faster, it will help reduce the timeouts. But demand + server load has a way of sneaking up on you. I’m looking for a way of making sure our server is less likely to time out on users when we run into system bottlenecks (now + in the future).

Thanks,
Pete

Pete,

I’ve never done that either… what about memcached, or is that already similar to what you are doing in memory?

We use it on one of our non-Rails apps and it seems to do a good job of improving performance. I don’t know a huge amount about it, but thought I would bring it up.

Maybe one of the “Smarter Folks” would be able to help out with that.

Mike B.

Hi Pete,

If you suspect queries are your pain point, then I would do the following to get a sense of what your app is doing under the hood.

Get a copy of mysqlreport (https://hackmysql.com/archive/mysqlreport/) and run it against your db. It can assist in tweaking InnoDB or MyISAM table configs.

Turn on slow query logging: add the lines

    log_slow_queries = /usr/local/mysql/data/mysql-slow.log
    log-queries-not-using-indexes

to your /etc/my.cnf (adjust the path as necessary) and restart the db. This will log any query that takes longer than 1 second to run (it will only spot the really heinous ones; in a busy web app a 1-second query is game over). It also logs queries which do not use an index, and hence scale badly as your tables grow.
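Pulling that together, the relevant chunk of my.cnf would look something like this (long_query_time is the setting that gives you the 1-second threshold; the path is just an example):

    [mysqld]
    long_query_time = 1
    log_slow_queries = /usr/local/mysql/data/mysql-slow.log
    log-queries-not-using-indexes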

Recent versions of Linux have an excellent vmstat and iostat; have them running in a window during busy times of the day and see how your db machine handles it. Is it really out of CPU, or does your dataset not fit in memory, so you’re pulling huge blobs of data off the disk because of poorly specified queries, or, worse, burning time creating many temporary tables?
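Something like the following in a couple of terminals during the morning peak is enough to tell (iostat is part of the sysstat package on most distros):

    vmstat 5      # CPU, run queue, swap and context-switch activity, sampled every 5 seconds
    iostat -x 5   # extended per-device stats: utilisation, queue size, wait times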

I have found that time spent optimising db queries and rethinking the data model to be more SQL-efficient pays off in the long run versus throwing hardware at the problem.

Cheers

Dave

Thanks Mike. You seem plenty smart enough to me…

I used memcached for a while, but found that the amount of non-changing stuff I had was really small. So I switched this small amount of data to my Rails process memory and got a nice speed boost out of that.

If I had larger amounts of constant data, then memcached would be a
good bargain, because it would allow me to save memory in my
individual processes.
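For anyone weighing the same trade-off, here’s a rough sketch of the two approaches using the memcache-client gem. The server address, the ‘settings’ key, and the Setting model are just placeholders, not our actual code:

    require 'memcache'   # memcache-client gem

    # Option 1: per-process memory -- fastest lookups, but every Mongrel
    # holds its own copy, so it only pays off for small, constant data.
    SMALL_CONSTANTS = { 'plans' => %w[free pro] }   # placeholder data

    # Option 2: memcached -- one shared copy serving all Mongrel processes.
    CACHE = MemCache.new('127.0.0.1:11211')

    def cached_settings
      CACHE.get('settings') || begin
        settings = Setting.find(:all)          # placeholder Rails 1.x finder
        CACHE.set('settings', settings, 600)   # cache for 10 minutes
        settings
      end
    end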

-Pete

Thanks for the DB profiling tips Dave. From some basic top
monitoring, I’ve seen that my server is topping out at 100% CPU.
I’ll check out mysqlreport and learn more about why it’s using this
kind of power.

-Pete

Thanks Wayne. After some further investigation, my suspicion falls on the lighttpd web server.

When I run in development mode with Mongrel or WEBrick (no lighttpd in front), the timeouts don’t happen at all. This indicates it’s not a Mongrel or Rails/ORM issue.

It makes sense that web servers would have some timeouts for long-running requests. Now I just have to find the switch where I can tweak them.
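If it helps anyone following along, these look like the relevant idle-timeout settings in lighttpd.conf. I haven’t confirmed yet that these are what’s cutting off the proxied Mongrel requests, so treat the values below as guesses to experiment with:

    # lighttpd.conf -- connection idle timeouts, in seconds
    server.max-read-idle       = 60    # waiting for the client to send data
    server.max-write-idle      = 360   # waiting while writing the response back out
    server.max-keep-alive-idle = 30    # idle keep-alive connections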

Thanks,
Pete


Pete,
Run the Mongrels in debugging mode and see what you can glean from this. If this is DB-caused then the timeouts are more likely caused by an ORM timeout.
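If memory serves, the debugging switch on mongrel_rails is -B, but double-check mongrel_rails start --help on your version before relying on it. Something like:

    # assumes -B is the debug flag on your Mongrel build (verify with --help);
    # debug output should land under log/mongrel_debug/ if I remember right
    mongrel_rails start -e production -p 8000 -B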

Let us know,

~Wayne

On 11/7/07, Pete DeLaurentis [email protected] wrote:

Now I just have to find the switch where I can tweak them.

Pete,

I am glad you are making progress. We would appreciate it if you let us know what the issue is and how it was resolved once you figure it out, so that we can better assist people in the future :slight_smile:

Thank you!

~Wayne

Thanks Wayne. I will definitely let the group know.

This community is an awesome resource. Trying to get feedback via other software forums (in this case lighttpd’s) really makes me appreciate the difference.

Cheers,
Pete