I’m making a site which allows subscribers to regularly scrape their unnamed
social network profile for changes using BackgrounDRb. I can:
A) Create one worker which will update all subscriber profiles at once. I
like this because it’s simple. I don’t like this because it will create a
server-intensive traffic spike.
B) Instantiate a new worker for each person when they sign up, which then
runs every 24 hours. I like this because it eliminates the spike but still
updates everyone. But I’m not sure what effect having a worker running for
each user will have on server memory.
you definitely don’t want one worker running for each user. Each
worker is a new process and takes up resources. Try it first with
just one worker that gets started on bdrb server start and loops,
works on jobs in the queue, then sleeps and does it again.
Here is a simple example; you may want to add pending and executed
flags to the queue items:
class PublishWorker < BackgrounDRb::Worker::Base
  def do_work(args = {})
    # Loop forever: pull a batch of queue items, process them, sleep, repeat.
    loop do
      urls_to_publish = PublishQueue.find(:all, :limit => 20)
      urls_to_publish.each do |url|
        # code here to work with urls
      end
      sleep args[:sleep]
    end
  end
end
PublishWorker.register
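If you do add the pending and executed flags, the inner query and update
might look roughly like this (a sketch only; it assumes PublishQueue is an
ActiveRecord model with boolean pending and executed columns, and those
column names are illustrative):

# Sketch: only pull items that haven't been handled yet, then mark them.
# Assumes boolean :pending and :executed columns on PublishQueue.
urls_to_publish = PublishQueue.find(:all,
                                    :conditions => ["pending = ?", true],
                                    :limit      => 20)
urls_to_publish.each do |item|
  # ... fetch and process the item here ...
  item.update_attributes(:pending => false, :executed => true)
end

Marking rows as they are processed also keeps a restarted worker from
re-doing items it has already handled.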
Cheers-
– Ezra Z.
– Lead Rails Evangelist
– [email protected]
– Engine Y., Serious Rails Hosting
– (866) 518-YARD (9273)
For the first proof of concept for the project I am currently working on,
I took the easy route of spawning a backgroundrb worker for each task
request. That, of course, ran into issues as the number of jobs grew
(in that case, the 1.? GHz/1GB laptop it was running on was struggling
after a couple of dozen tasks). For the prototype I am currently building,
I am using rq for both of those reasons and to scale easily across several
machines (currently, I am farming the tasks out to 6 machines).
Yeah I have the best luck with backgroundrb when I run a set number
of immortal workers that just loop and pull jobs from a queue.
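A minimal sketch of that pattern, assuming the old-style
MiddleMan.new_worker call and the PublishWorker shown earlier (the pool
size and sleep interval here are arbitrary):

# Sketch only: start a small, fixed pool of long-lived workers once
# (e.g. when the app boots) rather than one worker per user.
# Assumes the classic MiddleMan.new_worker API; the numbers are made up.
POOL_SIZE = 3
POOL_SIZE.times do
  MiddleMan.new_worker(:class => :publish_worker,
                       :args  => { :sleep => 300 })
end

With more than one worker pulling from the same table, you would also want
some form of row claiming or locking so two workers don’t grab the same
batch.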
How is this different from B? Why would rq be necessary for this?
it wouldn’t be. i’ll be bundling rq for rails in the next week or
two. one advantage (i think) is that rq is durable across machine
reboots and also allows commandline interaction. it’s a different
beast than backgrounddrb though.
Ezra Z. wrote:
…worker is a new process and takes up resources. Try it first with
just one worker that gets started on bdrb server start and loops,
works on jobs in the queue, then sleeps and does it again.
Two things:
-I went through your tutorial and documentation but I couldn’t figure
out how to start workers on load. I thought it was via
background_schedules.yml but on start it didn’t run.
-I’m not understanding the purpose of the sleep in this case. Is the worker
doing something once the interval has elapsed that requires it to sleep?