Justin C. wrote:
Alex Y. wrote:
Why does it behave like this? Is it traditional for mutexes (mutices?)
to be designed this way?
–
Alex
As far as “traditional” behavior, it can go either way. Sometimes
counting semaphores are used, which may be acquired multiple times and
must be released a corresponding number of times. POSIX threads, for
example, provide both options. In this case, however, it is just a
binary semaphore: either it is locked or not. The Mutex#synchronize call
checks whether the lock is available. If not (even if it is the current
thread that holds it), it blocks, as you noticed. That is its defined
behavior in Ruby.
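A minimal sketch of that behavior (note: on MRI 1.9+ the interpreter detects the single-threaded recursive lock and raises ThreadError instead of hanging forever; on 1.8 it would simply deadlock):

```ruby
m = Mutex.new

error = nil
begin
  m.synchronize do
    # The same thread tries to take the lock it already holds...
    m.synchronize { }
  end
rescue ThreadError => e
  # ...and MRI 1.9+ flags the recursive lock rather than blocking.
  error = e
end

puts error.class  # => ThreadError
```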
What would be broken if, hypothetically, the mutex behaviour were
changed not to block if the current thread already holds the lock? Would
any possible implementation introduce a race condition?
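For what it's worth, Ruby's standard library already ships a reentrant lock along those lines: Monitor (from the monitor stdlib) records the owning thread, so re-entry from that thread just increments a count instead of blocking:

```ruby
require 'monitor'

m = Monitor.new
m.synchronize do
  # Monitor tracks which thread holds it, so this inner call
  # does not block the way a plain Mutex would.
  m.synchronize { puts "re-entered without blocking" }
end
```

Other threads still block on it as usual, so the reentrancy only relaxes the self-deadlock case, not mutual exclusion between threads.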
For a bit of background, this came up while writing a (fairly complex)
Capistrano recipe. Capistrano uses #set and #fetch methods to allow the
user to give lazily evaluated settings, like this:
set(:foo) { File.join(fetch(:deploy_to), "foo") }
set(:deploy_to) { "/tmp" }
puts fetch(:foo) # <-- this is where the previous two blocks get called
#set and #fetch both guard the inner variable with the same mutex (one
per setting key, that is). This bit me when I was trying to set up a
lazily evaluated hash, like this:
def add_to_settings(key, &val)
  set(:hsh, {}) unless exists?(:hsh)
  set(:hsh) do
    fetch(:hsh).merge(key => val.call)
  end
end
The idea is that I’d build up a stack of procs which would be called
when, some time later, I called fetch(:hsh), to return a fully merged
hash. Unfortunately, because the inner fetch tries to synchronize on the
same mutex that the outer set already holds, this deadlocks. I’ve had to
sidestep set and fetch and forgo any thread safety because of this, and
so far it hasn’t been a problem.
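The shape of the problem can be reproduced without Capistrano. Here is a toy model (the Settings class and its locking are illustrative stand-ins, not Capistrano's actual internals) where fetch of one lazy value re-enters the lock via fetch of another:

```ruby
# Toy model of set/fetch guarding lazy values with a single Mutex.
class Settings
  def initialize
    @values = {}
    @mutex  = Mutex.new
  end

  def set(key, &block)
    @mutex.synchronize { @values[key] = block }
  end

  def fetch(key)
    @mutex.synchronize { @values[key].call }
  end
end

s = Settings.new
s.set(:a) { "base" }
# The block for :b calls fetch(:a) when it is finally evaluated...
s.set(:b) { s.fetch(:a) + "/b" }

begin
  s.fetch(:b)  # ...so fetch(:b) re-enters the same mutex via fetch(:a)
rescue ThreadError => e
  # MRI 1.9+ detects the recursive lock instead of hanging.
  puts "deadlocked: #{e.message}"
end
```

With the non-reentrant Mutex this dies exactly where the recipe did; swapping the Mutex for a Monitor would make the nested fetch legal.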
Given that I’ve got a workaround, it’s more interesting than annoying,
but I’m intrigued by the design decision.
–
Alex