I was then able to run my application on both servers of a Mongrel
Cluster, started from the command line, like this:
$ mongrel_cluster_ctl start
But when I run my cluster as a service, the two Mongrel servers start,
yet my application logs the following error message:
Oracle/OCI libraries could not be loaded: libclntsh.so.11.1: cannot
open shared object file: No such file or directory - /usr/local/lib/
ruby/site_ruby/1.8/i686-linux/oci8lib.so
This library file exists and is readable by everyone, and the path to
it is set in LD_LIBRARY_PATH.
I’d appreciate it if someone could help me out with this. Thanks,
Here’s more: I started a mongrel_cluster by hand (the application
works fine) and compared it to the one started as a service. According
to “ps -ef”, the two are identical, EXCEPT for the priority level (5 by
hand, 0 as a service). Could this be causing my problem?
Mongrel_Cluster started by hand:
[chris@localhost log]$ ps -ef | grep mongrel
chris 4781 1 5 05:55 ? 00:00:02 /usr/local/bin/ruby /usr/local/bin/mongrel_rails start -d -e production -a 0.0.0.0 -c /home/chris/kitry/FDS_Server --user chris --group chris -p 4001 -P log/mongrel.4001.pid -l log/mongrel.4001.log
chris 4784 1 5 05:55 ? 00:00:02 /usr/local/bin/ruby /usr/local/bin/mongrel_rails start -d -e production -a 0.0.0.0 -c /home/chris/kitry/FDS_Server --user chris --group chris -p 4002 -P log/mongrel.4002.pid -l log/mongrel.4002.log
Mongrel_Cluster started as a service:
[chris@localhost ~]$ ps -ef | grep mongrel
chris 2759 1 0 06:04 ? 00:00:02 /usr/local/bin/ruby /usr/local/bin/mongrel_rails start -d -e production -a 0.0.0.0 -c /home/chris/kitry/FDS_Server --user chris --group chris -p 4001 -P log/mongrel.4001.pid -l log/mongrel.4001.log
chris 2762 1 0 06:04 ? 00:00:02 /usr/local/bin/ruby /usr/local/bin/mongrel_rails start -d -e production -a 0.0.0.0 -c /home/chris/kitry/FDS_Server --user chris --group chris -p 4002 -P log/mongrel.4002.pid -l log/mongrel.4002.log
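For what it’s worth, “ps -ef” only shows the command line, not the environment. A more direct comparison is to dump each process’s actual environment from /proc and diff the results; this sketch uses the shell’s own pid ($$) as a stand-in for the real mongrel_rails pids above:

```shell
# Dump a process's environment as it was at startup. Substitute the
# pid of a mongrel_rails process (e.g. 4781 or 2759) for $$ below.
PID=$$
tr '\0' '\n' < /proc/$PID/environ | sort > /tmp/env.dump
# Diffing the dumps for the hand-started and service-started processes
# will show exactly which variables (LD_LIBRARY_PATH included) differ.
grep LD_LIBRARY_PATH /tmp/env.dump || echo "LD_LIBRARY_PATH not set"
```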
What script are you using to start mongrel_cluster as a service? My
guess is that the script in question runs as root, and then switches
to the proper user prior to starting the mongrel cluster. Any time a
process switches to another user this way, the environment it runs in
is considerably more “crippled” than usual. Things like .bashrc, for
example, won’t get sourced, so the environment ends up missing lots of
important details. The environment you see when logged in as chris and
the environment the service sees when it runs as chris are two very
different things.
A good start would be to look at the top of the init script that is
getting executed for the service. Depending on whether it calls /bin/
bash or /bin/sh, there are two different sets of files that you might
need to edit to provide an appropriate environment. (This assumes
that your default shell is bash, and that sh is a symlink to bash.)
Look in the “INVOCATION” section of the bash man page for more
details.
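Concretely, a minimal sketch of what the top of such an init script might look like. The Oracle library path below is an assumption; use whatever directory actually contains libclntsh.so.11.1:

```shell
#!/bin/sh
# Sketch of an init script preamble: init scripts are not login shells,
# so /etc/profile, /etc/profile.d/*, and ~/.bashrc are never sourced.
# Set the environment explicitly before starting the cluster.
LD_LIBRARY_PATH=/usr/lib/oracle/11.1/client/lib   # assumed path
export LD_LIBRARY_PATH

# ...then start the cluster as usual, e.g.:
# mongrel_cluster_ctl start
```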
Hope that helps.
–
Alex Malinovich
Director of Deployment Services
PLANET ARGON, LLC
design // development // hosting
Anyway, I’m already doing what you suggest, that is: exporting
LD_LIBRARY_PATH from a shell script in /etc/profile.d. And although
the environment variable is indeed set when I open a terminal window,
it is not set when the mongrel_rails processes are started as a
service; if I remove the related line from the mongrel_rails script,
they can no longer access the database.
Does anyone have an idea how to set an environment variable so that it
is visible to a service at startup?
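One option (a sketch; the library path is an assumption) is to set the variable only for the service’s process tree via env(1), in whatever init/wrapper script launches the cluster, rather than relying on profile scripts:

```shell
# env(1) sets a variable for just the command it launches, so the
# wrapper could start the cluster like this (path is assumed):
#   env LD_LIBRARY_PATH=/usr/lib/oracle/11.1/client/lib mongrel_cluster_ctl start
# Demonstration that the launched child actually sees the variable:
env LD_LIBRARY_PATH=/usr/lib/oracle/11.1/client/lib \
    sh -c 'echo "$LD_LIBRARY_PATH"'
```

A system-wide alternative that avoids per-service environments altogether is to add the library directory to the dynamic linker’s configuration (/etc/ld.so.conf or a file under /etc/ld.so.conf.d/) and run ldconfig as root.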