comp.lang.ruby

Slave not removing slave_proc_* files

Jos Backus

8/17/2007 6:00:00 PM

Hi. In order to be able to run swiftiply_mongrel_rails under daemontools I
patched swiftiply_mongrel_rails to use Ara's Slave library. The patch does
essentially this:

slaves = []
3.times do |i|
  require 'slave'
  require "#{File.dirname(__FILE__)}/mongrel_rails"
  slaves << Slave.object(:async => true) {
    Mongrel::Runner.new.run(args) # args == similar to mongrel_rails
                                  # command line args
  }
end
slaves.each {|t| t.join}

(See http://rubyforge.org/pipermail/swiftiply-users/2007-August/0...
for the actual patch).

Note that Mongrel::Runner.new.run never returns, hence the use of the :async
option. It is just a wrapper around mongrel_rails.

When a SIGTERM is sent to the swiftiply_mongrel_rails process, the following
output is seen:

http://pastie.cabo...

This shows swiftiply_mongrel_rails as well as its slaves exiting, and
daemontools subsequently restarting swiftiply_mongrel_rails.

Now for the problem: after this, /tmp holds 3 slave_proc_* UNIX domain sockets
that have not been unlinked. Subsequent restarts yield more slave_proc_*
sockets, and so on.

So my question is: what could cause these sockets not to be removed? Am I
using Slave incorrectly? I instrumented slave.rb to make sure the
Kernel.at_exit method is called, and it is - it's just that the associated
block isn't executed in the SIGTERM case. It works fine when a SIGUSR2 signal
is sent - the sockets are cleaned up as expected. A simple test script
suggests that Kernel.at_exit works okay on the platform (ruby 1.8.5 from the
CentOS testing repo on CentOS 4).
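
(For reference, a minimal at_exit sanity check along the lines of the test
script mentioned above - a hypothetical reconstruction, not the actual script:)

at_exit { STDERR.puts "at_exit block ran (pid #{Process.pid})" }

trap('USR2') { exit }   # exit normally on SIGUSR2
STDERR.puts "pid #{Process.pid}: send SIGTERM or SIGUSR2 and watch for the at_exit line"
sleep                   # block until a signal arrives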

Any help is appreciated...

--
Jos Backus
jos at catnook.com

7 Answers

ara.t.howard

8/18/2007 4:03:00 PM



On Aug 17, 2007, at 11:59 AM, Jos Backus wrote:

> Hi. In order to be able to run swiftiply_mongrel_rails under
> daemontools I
> patched swiftiply_mongrel_rails to use Ara's Slave library. The
> patch does
> essentially this:
>
> slaves = []
> 3.times do |i|
> require 'slave'
> require "#{File.dirname(__FILE__)}/mongrel_rails"
> slaves << Slave.object(:async => true) {
> Mongrel::Runner.new.run(args) # args == similar to mongrel_rails
> # command line args
> }
> end
> slaves.each {|t| t.join}

what is this code supposed to do exactly? slave.rb puts an object in
another process which cannot outlive its parent, but this object is
meant to be used: a handle to it is expected. if
Mongrel::Runner.new.run never returns then all those slaves are
essentially half baked: they will not have a reference to their
objects - in particular the lifelines (sockets) have not been set up
completely. so essentially all this code is close to simply

fork { Mongrel .... }

and never setting up collection of the child. however i'm not 100%
clear on how mongrel implements run nor what you are really trying to
do here?
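
(for concreteness, a rough sketch of what the patch then reduces to - an
assumed simplification, not the actual swiftiply or slave.rb code:)

pids = (1..3).map do
  fork do
    sleep   # stand-in for Mongrel::Runner.new.run(args), which never returns
  end
end
# no Process.waitpid/Process.waitall here, so children that exit are never
# reaped, and the slave-side setup never completes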

>
> (See http://rubyforge.org/pipermail/swiftiply-users/20...
> 000054.html
> for the actual patch).
>
> Note that Mongrel::Runner.new.run never returns, hence the use of
> the :async
> option. It is just a wrapper around mongrel_rails.

all async does is wrap your code with a thread.

>
> When a SIGTERM is sent to the swiftiply_mongrel_rails process, the
> following
> output is seen:

> http://pastie.cabo...


yeah - that makes sense: slave.rb is trying to exit because the
parent has died - the question is why SystemExit is being rescued.
perhaps a blanket 'rescue Exception' in swiftiply?


>
> This shows swiftiply_mongrel_rails as well as its slaves exiting, and
> daemontools subsequently restarting swiftiply_mongrel_rails.
>
> Now for the problem: after this, /tmp holds 3 slave_proc_* UNIX
> domain sockets
> that have not been unlinked. Subsequent restarts yield more
> slave_proc_*
> sockets, and so on.
>
> So my question is: what could cause these sockets not to be
> removed? am I
> using Slave incorrectly? I instrumented slave.rb to make sure the
> Kernel.at_exit method is called and it is - it's just that the
> associated
> block isn't executed in the SIGTERM case. It works fine when a
> SIGUSR2 signal
> is sent - the sockets are cleaned up as expected. A simple test script
> suggests that Kernel.at_exit works okay on the platform (ruby 1.8.5
> from the
> CentOS testing repo on CentOS 4).
>
> Any help is appreciated...

for SIGTERM the socket should be cleaned up by the parent process -
however it seems like the normal exit chain is being interrupted by
swiftiply since SystemExit is being rescued. can you find/show the
code that is rescuing SystemExit and see what's happening there: are
the normal exit handlers getting called in the case of SystemExit
being thrown? in other words the code should look vaguely like this

cfp:~ > cat a.rb
begin
  exit 42
rescue Exception => e
  p e.class
  p e.status
  exit e.status if SystemExit === e
  p :something_else
end


cfp:~ > ruby a.rb
SystemExit
42

regards.


a @ http://draw...
--
we can deny everything, except that we have the possibility of being
better. simply reflect on that.
h.h. the 14th dalai lama




Jos Backus

8/18/2007 6:54:00 PM


Hi Ara,

On Sun, Aug 19, 2007 at 01:03:10AM +0900, ara.t.howard wrote:
>
> On Aug 17, 2007, at 11:59 AM, Jos Backus wrote:
>
>> Hi. In order to be able to run swiftiply_mongrel_rails under daemontools I
>> patched swiftiply_mongrel_rails to use Ara's Slave library. The patch does
>> essentially this:
>>
>> slaves = []
>> 3.times do |i|
>> require 'slave'
>> require "#{File.dirname(__FILE__)}/mongrel_rails"
>> slaves << Slave.object(:async => true) {
>> Mongrel::Runner.new.run(args) # args == similar to mongrel_rails
>> # command line args
>> }
>> end
>> slaves.each {|t| t.join}
>
> what is this code supposed to do exactly?

Mongrel::Runner.new.run(args) will start running relative to the parent,
meaning that (since we are talking about the swiftiplied Mongrel here) it will
connect to the swiftiply proxy and handle any requests sent by it. The parent
isn't interested in making any method calls to the slaves; the slaves are
meant to run forever.

> slave.rb puts an object in
> another process which cannot outlive it's parent, but this object is meant
> to be used: a handle to it is expected. iff Mongrel::Runner.new.run never
> returns then all those slaves are essentially half baked: they will not
> have a reference to their objects - in particular the lifelines (sockets)
> have not been setup completely. so essentially all this code is close to
> simply
>
> fork { Mongrel .... }
>
> and never setting up collection of the child. however i'm not 100% clear
> on how mongrel implements run nor what you are really trying to do here?

So my guess that I am using Slave wrong appears correct. The above explains
why the sockets remain. Since Slave.object returns a thread I figured I could
just join them to block the parent.

My idea is to have a simple way to host a bunch of Rails apps/multiple
instances of the same Rails app. So I am trying to start N mongrel_rails
processes managed by swiftiply_mongrel_rails, which in turn will be managed by
daemontools. To accomplish this, I added the change mentioned in the post below
to mongrel_rails.rb so that rather than using backticks I can just call this
Mongrel::Runner.run method (maybe that is the wrong approach and I should just
use backticks/Kernel.system()).

Slave seemed like an easy way to manage the children and avoid zombies. Is
there a correct way of doing this with Slave? Or is it the wrong tool for the
job? In that case I'll use fork/exec, Process.waitall and a SIGTERM handler in
the parent which kills the children, etc. to get the job done.
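
(For illustration, a minimal sketch of that fallback - fork/exec plus a SIGTERM
handler and Process.waitall; the mongrel_rails arguments are placeholders:)

pids = (5000..5002).map do |port|
  fork { exec('mongrel_rails', 'start', '-p', port.to_s) }
end

trap('TERM') do
  # forward SIGTERM to the children; waitall below then reaps them
  pids.each { |pid| Process.kill('TERM', pid) rescue nil }
end

Process.waitall   # collect all children so none are left as zombies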

>>
>> (See
>> http://rubyforge.org/pipermail/swiftiply-users/2007-August/0...
>> for the actual patch).
>>
>> Note that Mongrel::Runner.new.run never returns, hence the use of the
>> :async
>> option. It is just a wrapper around mongrel_rails.
>
> all async does is wrap your code with a thread.
>
>>
>> When a SIGTERM is sent to the swiftiply_mongrel_rails process, the
>> following
>> output is seen:
>
>> http://pastie.cabo...
>
>
> yeah - that makes sense: slave.rb is trying to exit because the parent has
> died - the question is why SystemExit is being rescued. perhaps a blanket
> 'resuce Exception' in swiftiply?

Honestly, I don't know. Maybe Kirk can comment. The only relevant `rescue
Exception' in swiftiply (in src/swiftcore/{evented,swiftiplied,}_mongrel.rb)
doesn't seem involved (to check, I added a warn which isn't seen upon sending
SIGTERM).

>> This shows swiftiply_mongrel_rails as well as its slaves exiting, and
>> daemontools subsequently restarting swiftiply_mongrel_rails.
>>
>> Now for the problem: after this, /tmp holds 3 slave_proc_* UNIX domain
>> sockets
>> that have not been unlinked. Subsequent restarts yield more slave_proc_*
>> sockets, and so on.
>>
>> So my question is: what could cause these sockets not to be removed? am I
>> using Slave incorrectly? I instrumented slave.rb to make sure the
>> Kernel.at_exit method is called and it is - it's just that the associated
>> block isn't executed in the SIGTERM case. It works fine when a SIGUSR2
>> signal
>> is sent - the sockets are cleaned up as expected. A simple test script
>> suggests that Kernel.at_exit works okay on the platform (ruby 1.8.5 from
>> the
>> CentOS testing repo on CentOS 4).
>>
>> Any help is appreciated...
>
> for SIGTERM the socket should be cleaned up by the parent process - however
> it seems like the normal exit chain is being interupted by swiftiply since
> SystemExit is being rescued. can you find/show the code that is rescuing
> the SystemExit call and see what's happening there: are the normal exit
> handlers getting called in the case of SystemExit being thrown? in
> otherwords the code should look vaguely like this
>
> cfp:~ > cat a.rb
> begin
> exit 42
> rescue Exception => e
> p e.class
> p e.status
> exit e.status if SystemExit === e
> p :somethig_else
> end
>
>
> cfp:~ > ruby a.rb
> SystemExit
> 42

There are no references to SystemExit in swiftiply-0.6.1:

# pwd
/usr/lib/ruby/gems/1.8/gems/swiftiply-0.6.1
# grep -r SystemExit .
#

Thanks for your help, Ara.

--
Jos Backus
jos at catnook.com

ara.t.howard

8/18/2007 9:04:00 PM



On Aug 18, 2007, at 12:54 PM, Jos Backus wrote:

> Hi Ara,

how do.

>>
>> what is this code supposed to do exactly?
>
> Mongrel::Runner.new.run(args) will start running relative to the
> parent,
> meaning that (since we are talking about the swiftiplied Mongrel
> here) it will
> connect to the swiftiply proxy and handle any requests sent by it.
> The parent
> isn't interested in making any method calls to the slaves; the
> slaves are
> meant to run forever.

does mongrel fork to accomplish that? if not why not simply

threads << Thread.new{ Mongrel::Runner.new.run(args) }

>
> So my guess that I am using Slave wrong appears correct. The above
> explains
> why the sockets remain. Since Slave.object returns a thread I
> figured I could
> just join them to block the parent.
>

you could *if* mongrel returned an object. the fact that it never
returns leaves slave half baked...


> My idea is to have a simple way to host a bunch of Rails apps/
> multiple
> instances of the same Rails app. So I am trying to start N
> mongrel_rails
> processes managed by swiftiply_mongrel_rails which in turn will be
> managed by
> daemontools. To accomplish this, I added the change in the post below
> mentioned to mongrel_rails.rb so that rather than using backticks I
> can just
> call this Mongrel::Runner.run method (maybe that is the wrong
> approach and I
> should just use backticks/Kernel.system()).
>
> Slave seemed like an easy way to manage the children and avoid
> zombies. Is
> there a correct way of doing this with Slave? Or is it the wrong
> tool for the
> job? In that case I'll use fork/exec, Process.waitall and a SIGTERM
> handler in
> the parent which kills the children, etc. to get the job done.
>

i think the LifeLine class of slave is exactly what you need and that
you can re-use it. basically the idea is this

socket = Socket.pair


cid = fork

if cid
  stuff
else
  Thread.new{ socket.read rescue exit }
  stuff
end

in summary, the child reads from a pipe to the parent. if the read
ever returns the parent has died - so exit. this is the 'zombie'
preventer of slave.rb. if you look at slave.rb and grep for LifeLine
and @lifeline you'll see the usage. summary is:

lifeline = LifeLine.new

cid = fork

unless cid # child
  lifeline.catch
  lifeline.cling
else
  lifeline.throw
end


and the poor man's version:



cfp:~ > cat a.rb
r, w = IO.pipe
cid = fork

unless cid ### child
  w.close

  Thread.new{
    begin
      r.read
    ensure
      STDERR.puts 'parent died... exiting!'
      Kernel.exit
    end
  }

  sleep and 'pretend some processing is going on...'
else
  r.close
  sleep 2 and 'pretend the parent exited'
end



cfp:~ > ruby a.rb
parent died... exiting!
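
(and, roughly, how the same trick might map onto the mongrel case - a sketch
under the assumptions in this thread; args is a placeholder for the
mongrel_rails-style arguments:)

r, w = IO.pipe

3.times do
  fork do
    w.close                             # child keeps only the read end
    Thread.new{ r.read; Kernel.exit }   # read returns EOF once the parent is gone
    Mongrel::Runner.new.run(args)       # never returns
  end
end

r.close           # parent keeps only the write end
Process.waitall   # reap the children; when the parent dies, w closes and they exit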


> There are no references to SystemExit in swiftiply-0.6.1:
>
> # pwd
> /usr/lib/ruby/gems/1.8/gems/swiftiply-0.6.1
> # grep -r SystemExit .
> #
>

right. but there *should* be *if* blanket Exceptions are rescued.

> Thanks for your help, Ara.

always good to have testers. er, i mean users ;-)

a @ http://draw...
--
we can deny everything, except that we have the possibility of being
better. simply reflect on that.
h.h. the 14th dalai lama




Jos Backus

8/18/2007 10:07:00 PM


On Sun, Aug 19, 2007 at 06:03:34AM +0900, ara.t.howard wrote:
>
> On Aug 18, 2007, at 12:54 PM, Jos Backus wrote:
[snip]
> does mongrel fork to accomplish that? if not why not simply
>
> threads << Thread.new{ Mongrel::Runner.new.run(args) }

AfaIk, Mongrel doesn't fork so this won't work. (Hey, that rhymes!)

>>
>> So my guess that I am using Slave wrong appears correct. The above
>> explains
>> why the sockets remain. Since Slave.object returns a thread I figured I
>> could
>> just join them to block the parent.
>>
>
> you could *if* mongrel returned an object. the fact that i never returns
> leaves slave half baked...

Yeah, I see that now. Otoh, if it returned it would mean that it would have
stopped running, which is, uh, undesirable. ;-)

>> Slave seemed like an easy way to manage the children and avoid zombies. Is
>> there a correct way of doing this with Slave? Or is it the wrong tool for
>> the
>> job? In that case I'll use fork/exec, Process.waitall and a SIGTERM
>> handler in
>> the parent which kills the children, etc. to get the job done.
>>
>
> i think the LifeLine class of slave is exactly what you need and that you
> can re-use it. basically the idea is this
>
> socket = Socket.pair
>
>
> cid = fork
>
> if cid
> stuff
> else
> Thread.new{ socket.read rescue exit }
> stuff
> end
>
> in summary, the child reads from a pipe to the parent. if the read ever
> returns the parent has died - so exit. this is the 'zombie' preventer of
> slave.rb. if you look at slave.rb and grep for LifeLine and @lifeline
> you'll see the usage. summary is:
>
> lifeline = LifeLine.new
>
> cid = fork
>
> unless cid # child
> @lifeline.catch
> @lifeline.cling
> else
> @lifeline.throw
> end

I tried this and couldn't get it to work. For some reason the Mongrel children
never get as far as connecting to the swiftiply proxy. Not sure why.

> and the poor man's version:
>
>
>
> cfp:~ > cat a.rb
> r, w = IO.pipe
> cid = fork
>
> unless cid ### child
> w.close
>
> Thread.new{
> begin
> r.read
> ensure
> STDERR.puts 'parent died... exiting!'
> Kernel.exit
> end
> }
>
> sleep and 'pretend some processing is going on...'
> else
> r.close
> sleep 2 and 'pretend the parent exited'
> end
>
>
>
> cfp:~ > ruby a.rb
> parent died... exiting!

The poor man's version appears to work great. Thanks! I'll post a new patch to
the swiftiply list.

>> There are no references to SystemExit in swiftiply-0.6.1:
>>
>> # pwd
>> /usr/lib/ruby/gems/1.8/gems/swiftiply-0.6.1
>> # grep -r SystemExit .
>> #
>>
>
> right. but there *should* be *if* blanket Exceptions are rescued.
>
>> Thanks for your help, Ara.
>
> always good to have testers. er, i mean users ;-)

Surely one day soon there'll be an opportunity for me to use Slave, but this
doesn't seem to be the right fit, contrary to my earlier impression.

Thanks again for the neat blocking-pipe-read-in-thread trick, Ara. I can see
myself using it frequently in the future.

Cheers,
--
Jos Backus
jos at catnook.com

ara.t.howard

8/19/2007 12:22:00 AM



On Aug 18, 2007, at 4:07 PM, Jos Backus wrote:

> AfaIk, Mongrel doesn't fork so this won't work. (Hey, that rhymes!)

no that's good - if it doesn't fork this might work great - if it did
it could not. might want to give it a whirl...

ack on the rest of your post!

cheers.

a @ http://draw...
--
we can deny everything, except that we have the possibility of being
better. simply reflect on that.
h.h. the 14th dalai lama




Jos Backus

8/19/2007 1:40:00 AM


On Sun, Aug 19, 2007 at 09:22:10AM +0900, ara.t.howard wrote:
>
> On Aug 18, 2007, at 4:07 PM, Jos Backus wrote:
>
>> AfaIk, Mongrel doesn't fork so this won't work. (Hey, that rhymes!)
>
> no that's good - if it doesn't fork this might work great - if it did it
> could not. might want to give it whirl...

I'm not sure I understand. Are you suggesting that swiftiply_mongrel_rails run
all these Mongrels (which themselves use threads) in threads instead of forking?

If Mongrel doesn't fork, how does this give me multiple independent Mongrel
instances? If this Mongrel executes a blocking db call, won't this block the
whole process and prevent other threads from executing? I thought the idea was
to fork multiple Mongrels so that each can execute blocking calls without
affecting other requests being served by other Mongrels.

Can you elaborate please?

Cheers,
--
Jos Backus
jos at catnook.com

ara.t.howard

8/19/2007 5:43:00 AM



On Aug 18, 2007, at 7:40 PM, Jos Backus wrote:

>
> I'm not sure I understand. Are you suggesting that
> swiftiply_mongrel_rails run
> all these Mongrels (who themselves use threads) in threads instead
> of forking?
>
> If Mongrel doesn't fork, how does this give me multiple independent
> Mongrel
> instances? If this Mongrel executes a blocking db call, won't this
> block the
> whole process and prevent other threads from executing? I thought
> the idea was
> to fork multiple Mongrels so that each can execute blocking calls
> without
> affecting other requests being served by other Mongrels.
>
> Can you elaborate please?

i was just wondering if it might work - i've been surprised both by
the things that work, and also the things that do not work, within
ruby threads - i'd guess that it would not work but was brainstorming
out loud.

back to your regularly scheduled programming...

;-)

a @ http://draw...
--
we can deny everything, except that we have the possibility of being
better. simply reflect on that.
h.h. the 14th dalai lama