Hello,
On 3 Jan 2008, at 17:45, Evan W. wrote:
> Building on the last messages about Fastthread, can we get a detailed
> survey of the different ways people are deploying their applications?
> It will help with near-future Mongrel development.
We have a few machines with different setups depending on when they
were deployed - some stuff from 2005 is still on Apache2/FastCGI,
which is slightly upsetting, but the most recent apps are as follows,
and new ones will be similar, though perhaps with the addition of
swiftiply.
> Please include the following things:
> - Framework, if any (Camping, Merb, Rails, Nitro, Ramaze, IOWA,
> Rack…)
Rails, currently 1.2.3. Looking at Merb right now.
> - Mongrel version
1.1.3
> - Mongrel handlers used (rails, dirhandler, camping, cgiwrapper…)
rails
> - How many mongrel routes and handlers per route registered (if you
> don’t know, it’s probably <= 2)
So probably <= 2
> - Any Mongrel plugins used (mongrel_upload_progress, mongrel_gzip,
> mongrel_cow_cluster, mongrel_experimental…)
None
> - Mongrel runners used (mongrel_rails, mongrel::cluster,
> mongrel_service, RV, others… please be very specific about which
> options of the runner you use. For example, some people use
> mongrel::cluster but only for the --clean functionality, not for the
> clustering).
mongrel::cluster for managing all mongrels.
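For reference, the cluster config is the stock mongrel_cluster YAML; ours looks roughly like this (paths, ports, and counts here are illustrative, not our real values):

```yaml
# Illustrative mongrel_cluster config, e.g. config/mongrel_cluster.yml
cwd: /var/www/app/current       # app root the mongrels run from
environment: production
address: 127.0.0.1              # bind locally; nginx proxies in
port: "8000"                    # first port; servers count up from here
servers: 8                      # one mongrel per port, 8000-8007
pid_file: tmp/pids/mongrel.pid  # per-instance pidfiles get the port appended
```

Then `mongrel_rails cluster::start -C config/mongrel_cluster.yml` brings the pack up.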
> - Number of mongrels per server per app
This is still an open question for us; we’re trying to work out the best
profile per app, as we do a lot of short-lived campaign-based sites.
We had an app get whacked a few weeks ago with 16 mongrels running
behind nginx, and the process load was getting really high. The initial
thought was to up the number of mongrels so they could cut through the
queue quicker, but of course all this did was increase memory usage
significantly: with 32 mongrels each eating up to 230MB of RAM, the
system started thrashing pretty quickly. So we killed them, cut the
cluster down to 8, and things actually seemed a lot smoother, but by
then the load had dropped significantly, so it’s hard to tell.
In summary, we’ll most likely start out with 8 and see how it goes.
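The arithmetic there is stark (using the ~230MB-per-mongrel figure above; how much RAM the box actually has isn’t stated, so the 4GB mentioned in the comment is an assumption):

```ruby
# Back-of-envelope cluster sizing from the ~230MB resident-size figure above.
MB_PER_MONGREL = 230

def cluster_footprint_mb(mongrels)
  mongrels * MB_PER_MONGREL
end

cluster_footprint_mb(32)  # => 7360MB, far past a hypothetical 4GB box, hence the thrashing
cluster_footprint_mb(8)   # => 1840MB, comfortably within RAM
```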
> - Monitoring system (runit, monit, god…)
Ahem … just getting going with monit
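For what it’s worth, the per-mongrel monit stanza we’re heading towards looks something like this (hypothetical paths, ports, and thresholds; the 230MB limit just mirrors the figure above):

```
# Hypothetical monit stanza for one mongrel instance on port 8000
check process mongrel_8000 with pidfile /var/www/app/current/tmp/pids/mongrel.8000.pid
  start program = "/usr/bin/mongrel_rails cluster::start -C /var/www/app/current/config/mongrel_cluster.yml --clean --only 8000"
  stop program  = "/usr/bin/mongrel_rails cluster::stop -C /var/www/app/current/config/mongrel_cluster.yml --only 8000"
  if totalmem > 230 MB for 3 cycles then restart
  if cpu > 80% for 5 cycles then restart
```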
> - Proxy or software loadbalancer, if any (apache mod_proxy_balancer,
> nginx, pen…)
latest nginx, 0.5.xx branch
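For context, the proxy side is the usual upstream-of-mongrels setup, something like this (ports and names are illustrative, not our actual config):

```nginx
# Sketch of an nginx 0.5.x upstream proxying to a mongrel pack
upstream mongrels {
  server 127.0.0.1:8000;
  server 127.0.0.1:8001;
  # ... one entry per mongrel in the cluster
}

server {
  listen 80;
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://mongrels;
  }
}
```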
> - HW loadbalancer, if any (Netscaler…)
n/a
> - Caching strategy (memcached fragments, memcached object, squid,
> rails page cache, rails page fragments, ESI)
memcached objects, rails page cache as appropriate.
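The object-caching side is just the usual read-through pattern, roughly this (FakeCache is a stand-in for illustration; memcache-client’s MemCache exposes the same get/set shape):

```ruby
# Read-through object caching: return the cached value if present,
# otherwise compute it, store it, and return it.
def cache_fetch(store, key, ttl = 300)
  hit = store.get(key)
  return hit unless hit.nil?  # cache hit
  value = yield               # cache miss: compute...
  store.set(key, value, ttl)  # ...and store for next time
  value
end

# Tiny in-memory stand-in for a MemCache client, for illustration only.
class FakeCache
  def initialize; @data = {}; end
  def get(key); @data[key]; end
  def set(key, value, _ttl); @data[key] = value; end
end
```

Usage is e.g. `cache_fetch(cache, "user:1") { User.find(1) }`, so the expensive lookup only runs on a miss.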
> - Whether you serve media assets via mongrel itself, as opposed to
> through a webserver
Any flat files go out directly through nginx; there doesn’t seem to be
any point in loading mongrel with this when nginx is sooo fast.
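In case it helps anyone, the relevant nginx stanza is roughly this (illustrative paths; it assumes an upstream block named mongrels, and 0.5.x predates try_files, so the -f test idiom was the usual approach):

```nginx
# Serve anything that exists on disk straight from nginx,
# and only proxy the rest through to the mongrels.
location / {
  root /var/www/app/current/public;
  if (!-f $request_filename) {
    proxy_pass http://mongrels;
  }
}
```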
> - Operating system including distribution or version (OS X 10.4.10,
> Ubuntu/Linux 7.10, WinXP SP2, OpenBSD 4.1…)
CentOS 5
> - Architecture, via ‘uname -a’ preferably (x86, x86_64, Sparc, PPC,
> Arm (ha), JRuby)
Linux xxx.xxx.xxx 2.6.9-42.ELsmp #1 SMP Sat Aug 12 09:39:11 CDT 2006
i686 i686 i386 GNU/Linux
dual-core Xeon 3.0GHz
The database is on an identical separate machine in the instance I’m
getting stats from; it might be on the same host for other apps.
> - Ruby version including custom distribution patches,
> (1.8.6p110+threadhooks, 1.8.5, JRuby 1.1b1, Rubinius trunk… also
> note where you got it, in case it isn’t clear, for example, OS X 10.5
> built-in, Ubuntu apt, Instant Rails, direct compile from source)
ruby 1.8.6 (2007-03-13 patchlevel 0) [i686-linux] - compiled from
source.
After reading the recent threads this is going to move up to a later
patchlevel!
> - Rubygems (yes/no, version)
yes, recently upgraded from 0.9.4 to 1.0.1
> Please mention anything else about your system that’s kind of weird,
> and anything that’s been particularly troublesome regarding mongrel
> deployment.
To be honest, since mongrel_cluster came out it’s been relatively
trouble-free. Well, compared to Apache, anyway.
Starting and stopping can be a bit temperamental when instances have
died or been killed outside mongrel_cluster, but I guess that’s to be
expected, and as I understand it that’s a cluster issue rather than a
mongrel issue per se.
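As I understand it, the --clean option works by discarding pidfiles whose process is gone before starting; the check amounts to something like this (a sketch, not mongrel_cluster’s actual code):

```ruby
# A pidfile is stale when it names a process that no longer exists.
def stale_pidfile?(path)
  return false unless File.exist?(path)
  pid = File.read(path).to_i
  begin
    Process.kill(0, pid)  # signal 0 probes for existence without killing
    false                 # process is alive, pidfile is valid
  rescue Errno::ESRCH
    true                  # no such process, pidfile is stale
  rescue Errno::EPERM
    false                 # alive but owned by another user
  end
end
```

With that in hand, cleanup is just deleting the stale pidfiles before starting the cluster.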
The only real issue is memory usage. When we see 230MB mongrels as
above while most content is being served out of memcached, leaks spring
to mind, but so far we’ve been unsuccessful in establishing whether
it’s in our code, a plugin, the framework, or something related to the
unpatched ruby, etc …
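One crude way to narrow that down is to snapshot live object counts between batches of requests and diff them; classes whose counts only ever grow are leak suspects. A sketch using ObjectSpace, which works on plain MRI:

```ruby
# Count live objects per class; diff two snapshots taken before and
# after a batch of requests to see which classes are accumulating.
def object_counts
  counts = Hash.new(0)
  ObjectSpace.each_object { |obj| counts[obj.class] += 1 }
  counts
end

def count_diff(before, after)
  (before.keys | after.keys).inject({}) do |diff, klass|
    delta = after[klass] - before[klass]
    diff[klass] = delta unless delta.zero?
    diff
  end
end
```

Running GC.start before each snapshot makes the numbers less noisy; it won’t point at the exact line, but it can at least tell our code apart from a plugin or the framework by what kind of objects pile up.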
> PS. You can get some of the Ruby information via the ‘tattle’ gem:
> $ gem install tattle --ignore-dependencies
> $ tattle report
user_key,
prefix, /poke/software/server/install/ruby-1.8.6
ruby_version, 1.8.6
host_vendor, pc
ruby_install_name, ruby
build, i686-pc-linux-gnu
target_cpu, i686
arch, i686-linux
rubygems_version, 1.0.1
SHELL, /bin/sh
host_os, linux-gnu
report_time, Fri Jan 04 09:48:29 +0000 2008
host_cpu, i686
LIBRUBY, libruby-static.a
LIBRUBY_SO, libruby.so.1.8.6
target, i686-pc-linux-gnu
Incidentally the site that got hammered is
http://www.goodthingsshouldneverend.co.uk/
It’s supposed to be a “never-ending web page” - imagine my delight
when I heard that brief for the first time!
Cheers guys, keep up the great work.
Igor
--
Igor C. // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749
5355 // www.pokelondon.com