Nitro on Apache

I’m trying for the first time to actually deploy a Nitro app on my
shared hosting account. I’m using FastCGI, since that’s what I had
success with for Rails. It took some tweaking of the FastCGI adapter;
it seems unmaintained.

I’m mainly stealing ideas from Rails here: I created a dispatch.fcgi
that just loads Nitro, sets the mode to :live and requires app.rb. If I
run it, it gives these two green lines:

03/10 05:13:11 INFO: Og uses the Mysql store.
03/10 05:13:11 INFO: Starting Fastcgi server in live mode, listening at
0.0.0.0:9000
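
For reference, dispatch.fcgi is roughly the following sketch (the exact
Nitro calls are assumptions on my part; the general shape, load Nitro,
set :live, require app.rb, is what I actually do):

  #!/usr/bin/env ruby
  # dispatch.fcgi -- rough sketch; exact Nitro entry points may differ
  require 'rubygems'
  require 'nitro'

  Nitro.mode = :live   # assumption: how the :live mode switch is spelled
  require 'app'        # app.rb sets up the app and the FastCGI adapter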

My .htaccess should be good. Now if I visit the site I get a server 500
error. The error log mentions:

[Wed Oct 03 05:11:59 2007] [error] [client 61.225.17.58] FastCGI:
incomplete headers (0 bytes) received from server
"/home/arnebrasseur/subdomains/vocab/dispatch.fcgi"

Have other people gotten this to work? Do I have other options, given
that I have only limited control over the Apache configuration?

Thanks for any pointers!

(ab)


Arne B.
http://www.arnebrasseur.net
[email protected]

I use Apache + mod_proxy_balancer + Mongrel.

I will try to prepare a short tutorial.

-g.

fcgi is not worth the trouble.

I suggest running Nitro standalone and using Apache’s reverse proxying
features as a front-end.

With Apache 2.2 there’s mod_proxy_balancer, which is very nice.

With both Apache 2.0 and 2.2 you can use mod_rewrite like this:

<VirtualHost *:80>
  ServerName robmela.com
  ServerAlias www.robmela.com
  ServerAlias blog.robmela.com
  RewriteEngine On
  RewriteRule ^/admin(.*) https://nb.robmela.com/admin$1 [L,R]
  RewriteRule ^/(.*) http://127.0.0.1:9001/$1 [L,P]
  #ProxyRequests off
  #ProxyPass / http://127.0.0.1:9001/
  #ProxyPassReverse / http://127.0.0.1:9001/
  #ProxyPreserveHost on
</VirtualHost>

There is one caveat for heavily loaded sites or applications with
long-running pages… which I’ll get into later as I’m writing an Apache
/ Mongrel how-to that covers some gotchas.

Can I do this from .htaccess?
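
Something like this is what I’m hoping for (an untested guess on my
part; the [P] flag only works if the host has mod_proxy loaded, which I
can’t verify):

  # .htaccess (hypothetical; needs mod_rewrite and mod_proxy on the host)
  RewriteEngine On
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteRule ^(.*)$ http://127.0.0.1:9001/$1 [P,L]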

(ab)

Robert M. wrote:

<VirtualHost *:80>





On 10/3/07, Robert M. [email protected] wrote:

NB: There’s a major problem with the way Mongrel 1.0.1 handles
connection limits. If it’s over its configured connection limit, it
accepts, then immediately closes with no processing. Things would work
much better with mod_proxy_balancer if Mongrel were modified (perhaps
configurably) to simply not call accept() when it’s reached its
configured connection limit.

Use evented_mongrel or Swiftiply with swiftiplied_mongrel; that will
help with this. It was an oversight on my part that evented mongrel
doesn’t use epoll if it is available. This has been patched and will be
available in the next version, which means you could have more than
1024 (the limit on select()) connections queued up without any
problems, if you had to (on Linux 2.6.x based systems).

The problem is less likely to be triggered if Mongrel is running Nitro,
but it’s still a flaw. It’s a MAJOR headache on a busy Rails site with
slow-loading pages.

evented_mongrel will let you queue up lots of requests without
incurring the RAM and performance killing overhead of threads in
Mongrel.

Swiftiply with swiftiplied_mongrel will load balance your slow
requests across your backends optimally, with no crowding on any one
backend, while again ensuring that you don’t have RAM and performance
killing overhead of threads in Mongrel.

George had said that he was going to make sure Nitro 0.50 had builtin
support for using both of these. I am not sure of the status of that
or if he needs anything from me. George?

Kirk H.

I’m preparing a lengthy how-to. I’ve spent many pressured hours at
http://httpd.apache.org/docs/2.2/mod/mod_proxy.html

NB: There’s a major problem with the way Mongrel 1.0.1 handles
connection limits. If it’s over its configured connection limit, it
accepts, then immediately closes with no processing. Things would work
much better with mod_proxy_balancer if Mongrel were modified (perhaps
configurably) to simply not call accept() when it’s reached its
configured connection limit.

The problem is less likely to be triggered if Mongrel is running Nitro,
but it’s still a flaw. It’s a MAJOR headache on a busy Rails site with
slow-loading pages.

For details on mod_proxy and mod_proxy_balancer, look at
http://httpd.apache.org/docs/2.2/mod/mod_proxy.html

There’s also the mod_rewrite trick for handling static files – it cuts
request times from tens of milliseconds to tenths of milliseconds, and
with a lot less CPU – very important on asset-heavy, busy sites… see
the RewriteCond:

RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f
RewriteRule (.*) $1 [L]

# NB: trailing slash on both ProxyPass and ProxyPassReverse
ProxyPass / balancer://myserverpool/
ProxyPassReverse / balancer://myserverpool/
ProxyPreserveHost on

# NB: no trailing slash on the balancer:// URL here
<Proxy balancer://myserverpool>
  BalancerMember http://192.168.10.10:10000 keepalive=on max=10 lbset=0 timeout=1
  BalancerMember http://192.168.10.10:10001 keepalive=on max=10 lbset=1
  BalancerMember http://192.168.10.11:10000 keepalive=on max=10 lbset=0 timeout=1
</Proxy>

I’ve also got an Apache module that I’m polishing up that very simply
handles very large hashed directory caches (millions of entries). Very
useful for Rails on large sites; cache entries live at paths like:

/myappdir/public/image/1/7/8/2/1782.gif.cache
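
As an illustration, the layout implied by that path is the id’s digits
becoming nested directories, so no single directory accumulates
millions of entries. A guess at the scheme in Ruby (not the module’s
actual code):

  # Hypothetical sketch of the hashed cache layout implied above.
  def hashed_cache_path(root, id, ext)
    digits = id.to_s.split('')          # "1782" => ["1", "7", "8", "2"]
    File.join(root, digits, "#{id}.#{ext}.cache")
  end

  hashed_cache_path('/myappdir/public/image', 1782, 'gif')
  # => "/myappdir/public/image/1/7/8/2/1782.gif.cache"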

Hi,

thank you for adding the tutorial (it is, however, quite short and
might have fit better as a tip).

I had to clean up a little due to a slight misunderstanding between
you and oxy, which was more like ‘wtf’ ;).
It led me on a small debugging trip to find out why there wasn’t
any content.

Anyway, for tutorials, the “description” field is supposed to be a
small entry on what the tutorial contains. This is shown on the
front page.

Inside the tutorial there should then be one or more pages, which
you can add. This is available to help structure your tutorial into
easily digestible chunks.

Anyway, thanks for using oxywtf. :)

Feel free to change my hastily added description.

Jo



Have you ever tried to make an app thread-safe? The dependencies are
so bizarre that the testability of the app vastly increases in
complexity. Bugs are subtle and don’t happen often – even then, they
are not readily reproducible. Worse, they may not even be in your code!

I’ve been a proponent (in the past) of some kind of MT Rails, but the
more I think about injecting mutexes or bracketing sections of code
with semaphores, hoping I’ve made my code reentrant, the less I like it
compared to the multiprocess model. Note also that a good deal of any
web app is made up of instance objects which are typically not
long-lived and don’t benefit from MT. So you get to think about a whole
different set of things: where do I store app-global data, and how do I
store thread-local data?
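
In Ruby the usual answers are a lock around the globals and
Thread.current for per-thread state; a minimal sketch (names are mine,
purely illustrative):

  require 'thread'  # Mutex on Ruby 1.8

  # App-global data: shared across threads, so guard every access.
  APP_STATS      = {}
  APP_STATS_LOCK = Mutex.new

  def bump_hits
    APP_STATS_LOCK.synchronize do
      APP_STATS[:hits] = (APP_STATS[:hits] || 0) + 1
    end
  end

  # Thread-local data: one value per thread, no lock needed.
  def with_request(req)
    Thread.current[:request] = req  # visible only to this thread
    yield
  ensure
    Thread.current[:request] = nil
  end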

This can be a nightmare for request/response cycles that typically
take milliseconds and for which a multiprocess alternative exists.

Just my $.02

I’m worried about queuing.

I’m working to remediate a Rails app ( not mine ) that can take as long as
a minute to generate certain pages.

Consider the case where mod_proxy_balancer sends a request to an application
server that’s 1 second into a 60-second Rails page. With queuing, wouldn’t
the second request sit in the queue for 59 seconds, waiting for the
first request to complete?

Yes. This brings up an interesting issue, though. One minute to render
a page is horrific, and if everything is that slow, there are no good
solutions. But if the app only has certain pages that are that slow,
while the rest are fast, it brings up an interesting, solvable
scaling quandary.

The current workaround is to have a hundred Rails app server instances
chewing up huge amounts of RAM. I would quite happily trade that in for the
memory and performance overhead of whatever multiplexing or multithreading
scheme is used in the app server. All I need is a guarantee that the app
server 1) will not call accept() when it’s already working on a request and
2) won’t open and close a socket for a request it’s not prepared to handle.
If the app doesn’t call accept(), Apache will gracefully move on to another
app server.

(nod) Mongrel accepts requests as they come in. If it reaches its
request limit, it starts killing things. That request limit, unless
you have set it lower, is the select() limit of 1024 file handles.

In a case where you have some actions that are very, very slow, a
mongrel that is in this situation probably just crawls and takes
forever to render anything. It also uses a ton of RAM.

If one is using an evented_mongrel, you still have the problem that
you mention, that a fast action sitting behind a slow one has to wait
for the slow one to finish. The slow one will, however, finish a lot
faster, and the overall RAM usage will be a LOT lower, too.

With Swiftiply, a fast action won’t ever be queued up behind a slow
action unless there are no available backends to take any actions at
all. Then the fast action will wait until one of the backends
finishes its unit of work and returns a response. As soon as that
happens, the fast action that has been waiting for a backend will be
picked up and handled.

I’ll look at swiftiply – this is the first I’ve heard of it.

swiftiply.swiftcore.org

#swiftcore on irc.freenode.net

Kirk H.

Kirk H. wrote:

Yes. This brings up an interesting issue, though. One minute to render
a page is horrific, and if everything is that slow, there are no good
solutions. But if the app only has certain pages that are that slow,
while the rest are fast, it brings up an interesting, solvable
scaling quandary.

I didn’t write it… just wanna make that clear… I inherited it and
have solved enough of it that at least users can get on to the system,
which gives them a chance to grumble about the speed.

This is the offending code in gems/mongrel-1.0.1/lib/mongrel.rb. I’ve
set num_processors in mongrel_cluster.yml to 1, and it seems a fairly
straightforward workaround to replace accept (line 722) with a polling
loop:

def accept_workaround
  loop do
    reap_dead_workers
    return @socket.accept if @workers.list.length < @num_processors
    sleep 0.1  # more properly: wait for a barrier condition
  end
end

Gets me out of my current hell. A proper fix would have a barrier
condition replacing the sleep, but compared to what I’m up against now,
a 0.1-second polling loop is nothing.
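
The shape of that proper fix would presumably be a condition variable
signalled whenever a worker finishes; a sketch only, not tested against
Mongrel’s internals:

  require 'thread'  # Mutex, ConditionVariable

  @accept_lock = Mutex.new
  @worker_done = ConditionVariable.new

  # Each worker thread signals when it finishes its request.
  def worker_finished
    @accept_lock.synchronize { @worker_done.signal }
  end

  # Accept only when there is capacity; otherwise sleep until signalled.
  def accept_when_ready
    @accept_lock.synchronize do
      while @workers.list.length >= @num_processors
        reap_dead_workers
        @worker_done.wait(@accept_lock)
      end
    end
    @socket.accept
  end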

Swiftiply looks good – scanning the docs now. As an alternative, the
above, unless I’m missing something, would also mesh seamlessly with
mod_proxy_balancer.

  • query worker_list.length and, if it’s greater than or equal to
    num_processors, back off instead of calling accept()

    721 while true
    722   begin
    723     client = @socket.accept
    724
    725     if $tcp_cork_opts
    726       client.setsockopt(*$tcp_cork_opts) rescue nil
    727     end
    728
    729     worker_list = @workers.list
    730
    731     if worker_list.length >= @num_processors
    732       STDERR.puts "Server overloaded with #{worker_list.length} processors (#@num_processors max). Dropping connection."
    733       client.close rescue Object
    734       reap_dead_workers("max processors")
    735     else

On 10/3/07, s.ross [email protected] wrote:

Have you ever tried to make an app thread-safe? The dependencies are so
bizarre that the testability of the app vastly increases in complexity. Bugs
are subtle and don’t happen often – even then, they are not readily
reproducible. Worse, they may not even be in your code!

Are you talking to me or to Robert? You attribute the quote at the
bottom of your post to me, but those aren’t my words – they are
Robert’s.

I have made apps threadsafe. IOWA is threadsafe, and IOWA apps are
generally threadsafe. It’s not a terrible nightmare, but often there
are simpler alternatives, as you suggest.

This can be a nightmare for request/response cycles that typically take
milliseconds and for which a multiprocess alternative exists.

It can be a nightmare for a lot of reasons. The main complaints about
the multiprocess alternative are the management issues: managing your
pool of backend processes, monitoring their individual health,
stopping/restarting, sizing the pool to the loads, etc.

These are all issues that I am working towards solving with Swiftiply,
regardless of what framework one uses.

Kirk H.

For sure – it’d be labor intensive and tedious to take a legacy
framework with all its associated plugins from disparate sources and
make it thread-safe. You’d have to make the default mode
single-threaded and allow a year or two for plugins and new development
to catch up.

There are actually very few cases where a framework user would need to
consider concurrency. Most of the shared resources are things like
session stores, database connections – and all that’s taken care of by
the framework. Presumably a developer advanced enough to create a
shared resource is also someone who has a clue as to what needs to be
done.

This seems the way to go, but unfortunately mod_proxy is not an option
for me. I’m on shared hosting with little chance of changing Apache’s
configuration, except by .htaccess files. It seems both the CGI and
FCGI adapters are simply broken because of changes to the adapter
interface. E.g. what used to be class methods are now instance methods,
but that’s not all.
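
Roughly this kind of mismatch, illustrated (my reconstruction, not
actual Nitro source):

  # Illustrative only. The old interface was a class method:
  #   FastcgiAdapter.start(server)
  # while the current interface expects an instance:
  #   FastcgiAdapter.new.start(server)
  class FastcgiAdapter
    def start(server)   # was: def self.start(server)
      # ...
    end
  end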

Would it be hard to fix them? Or just one of them? I’m not too concerned
about performance at this point; if it runs, I’d be happy. If they’re not
being fixed, please add a big BROKEN sign or remove them altogether.

Thank you (once again)!

(ab)

George M. wrote:

First to Kirk: Sorry about the attribution – my mistake. Second,
this is more a general comment on the desirability and cost/benefit
of using threads in short-duration processes such as request/response
cycles. The cost of tracking down and fixing a concurrency bug may be
higher than adding a mongrel or whatever. If swiftiply solves some or
much of this problem, I’ll be very happy. In the meantime, the cheap
way to remove blocking is add servers in separate processes.

Rob, if Rails is any example, features and stability will trump
concurrency for a long time. Certainly, threading is worth keeping on
the radar, but then you have to open the discussion of whether green
threads really do what you want and which of the next-generation Ruby
engines are likely to succeed in providing an efficient threading
model – efficient enough to make thread-safety a priority.

I believe one philosophy that typically works is the “sin no more”
philosophy. If you know you didn’t get concurrency right in the first
place, at least don’t knowingly create new problems.

Hmmmm?

George had said that he was going to make sure Nitro 0.50 had builtin
support for using both of these. I am not sure of the status of that
or if he needs anything from me. George?

I am about to bring live a first (preview) version of my current
project. I most certainly want to experiment with your code ;)
This will happen sooner rather than later. So please, be a bit more
patient.

-g.

PS: And thanks for your great work and willingness to help me make this a
great deployment option for Nitro apps.

They run on Debian. I don’t have access to the main Apache config. I can
only use a web panel to select a directory as document root, the version
of PHP, and whether I want to enable FastCGI on that domain. That’s it.
The rest has to be done with .htaccess files.

They make a big thing out of supporting Ruby/Rails, but in the end the
support totally sucks. They have a watchdog process that seemingly
randomly kills user processes, so every few pages you get a “server 500”
because the FCGI process got killed.

But they’re cheap, and I paid two years in advance, so I’ll have to live
with it a little longer. It’s Dreamhost, BTW; don’t go there if you’re
serious about using Ruby stuff.

I’ve been looking at it for the last few days and have more or less
functioning CGI/FastCGI adapters. They need more testing. After that
I’ll send a patch and write something on oxywtf about deploying with
FCGI.

Thanks for responding!

(ab)

Robert M. wrote:

apache’s configuration, except by .htaccess files. It seems both cgi

(ab)





Here’s an initial cracking of the nut. Needs much refinement.

I have no idea whether this handles POSTs or not – my hope is that all
the gnarly details are now handled by handle_context in adapter.rb.

If the approach is acceptable, then there’s probably a lengthy method in
raw/cgi.rb that can be removed.

require 'raw/adapter'

# No multi-threading.
Og.thread_safe = false if defined?(Og) and Og.respond_to?(:thread_safe)

module Raw

  # A plain CGI adapter. To be used only in development
  # environments; this adapter is extremely slow for
  # live/production environments. It is provided for the
  # sake of completeness.
  class CgiAdapter
    include AdapterHandlerMixin

    # For a server, in the context of CGI this is start, middle and end!
    def start(server)
      @application = server # expected by handle_context in adapter.rb
      context = Context.new(server)
      context.env = ENV
      uri = ENV['REQUEST_URI']
      script_name = ENV['SCRIPT_NAME']
      context.env['REQUEST_URI'] = uri.sub(/#{Regexp.escape(script_name)}/i, '')
      handle_context(context)
      puts "Content-type: #{context.content_type}"
      context.response_headers['Content-length'] = context.output_buffer.length
      context.response_headers.each { |k, v| puts "#{k}: #{v}" }
      puts "\n"
      puts context.output_buffer
    end
  end
end
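
For the FastCGI variant, the same handle_context path would presumably
just move inside ruby-fcgi’s request loop. A sketch only, with the
Context wiring elided:

  require 'fcgi'
  require 'raw/adapter'

  # Sketch: the real adapter must build a Context from req.env
  # (not ENV) and write the headers plus context.output_buffer
  # to req.out, as the CGI adapter above does with puts.
  FCGI.each do |req|
    req.out.print "Content-type: text/plain\r\n\r\n"
    req.out.print "hello from fcgi"  # placeholder for handle_context output
    req.finish
  end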

Patience, I’m almost done with this.

(ab)

Robert M. wrote:

class CgiAdapter
handle_context( context )

If they’re not being fixed please add a big BROKEN sign or remove





OK, this should fix FastCGI (and also CGI).

Cgi.process has been factored out into a separate handler class and
adapted to the way things currently work.

I’ve tested both CGI and FastCGI with Lighttpd and wrote a tip on OxyWTF
on how to configure Lighttpd for FastCGI.

(ab)
