comp.lang.ruby

ability to run finalizers at a given point of a program?

Guillaume Cottenceau

4/12/2005 4:44:00 PM

Hi,

I'm exploring the possibility of running finalizers at a given point
in a program. I have written the following program, and I'm seeing
the following unexpected behaviour: I call GC.start in the hope that
the finalizers of out-of-scope objects will be run, but it seems they
aren't.

http://www.zarb.org/~gc/t/prog/destructor/nodes...

I have written the following program in ocaml, whose GC
implementation is closer to Ruby's than perl's or python's (those use
reference counting, ocaml doesn't), and it seems that contrary to
Ruby, finalizers are run as expected when a GC run is forced (however,
when the closure used as the finalizer references anything inside the
object, that counts as a reference, so the object is always reachable;
this makes it not really useful as a finalizer/destructor anyway).

http://www.zarb.org/~gc/t/prog/destructor/des...

Do you have any insight into this Ruby behaviour? I use 1.8.2 on Linux.

--
Guillaume Cottenceau - http://zar...


25 Answers

ts

4/12/2005 5:01:00 PM


>>>>> "G" == Guillaume Cottenceau <gcottenc@gmail.com> writes:

G> http://www.zarb.org/~gc/t/prog/destructor/nodes...

The code is

G> class Foo
G>   def initialize
G>     puts "*** constructor"
G>   end
G> end
G>
G>
G> def scopeme
G>   foo = Foo.new
G>   ObjectSpace.define_finalizer(foo, proc { puts "*** pseudo-destructor" })
G> end
G>
G> scopeme
G> puts "Foo was out-scoped."
G>
G> GC.start
G> puts "Gc was run."

you must know that the gc is a little special: a finalizer proc
created in the same scope as the object captures that scope, so the
object stays reachable and is never collected. Try it with

svg% cat b.rb
#!/usr/local/bin/ruby
class Foo
  def initialize
    puts "*** constructor"
  end
  def self.final
    proc { puts "*** pseudo-destructor" }
  end
end

def scopeme
  foo = Foo.new
  ObjectSpace.define_finalizer(foo, Foo.final)
end

a = scopeme
puts "Foo was out-scoped."

GC.start
puts "Gc was run."

svg%

svg% b.rb
*** constructor
Foo was out-scoped.
*** pseudo-destructor
Gc was run.
svg%
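The reason, sketched below, is that a proc created in the same scope
as foo captures that whole scope, foo included. The sketch uses
Binding#local_variable_get, which needs a much newer Ruby than the
1.8 discussed here, purely to make the captured reference visible:

```ruby
# Sketch: why a proc created next to the object pins it.
class Foo; end

def scopeme
  foo = Foo.new
  # A proc built here captures the enclosing local scope, foo included:
  proc { "pseudo-destructor" }
end

fin = scopeme
# The proc's binding still holds a live reference to foo, which is why
# a finalizer proc defined this way keeps its own object alive:
pinned = fin.binding.local_variable_get(:foo)
puts pinned.class
```

Building the proc in a class method, as above, avoids this: the proc
then captures nothing but its own (empty) scope.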




Guy Decoux


Robert Klemme

4/12/2005 5:55:00 PM



"Guillaume Cottenceau" <gcottenc@gmail.com> schrieb im Newsbeitrag
news:dc3bf8580504120943240e01e4@mail.gmail.com...
> Hi,
>
> I'm considering the possibility to run the finalizers at a given point
> of a program. I have written the following program, and I'm
> experiencing the following unexpected behaviour: I call GC.start in
> the hope that finalizers of unscoped objects that have one will be
> run, but it seems they aren't.
>
> http://www.zarb.org/~gc/t/prog/destructor/nodes...

Apart from what Guy wrote already, why do you need to determine the point in
time when finalizers are called? The whole idea of GC and finalization is
that you *don't* care when it happens. If you need to ensure (hint, hint)
that some cleanup code is invoked at some point in time then the
transactional pattern employed by File.open() and others might be more
appropriate:

def do_work
  x = create_x_somehow
  begin
    yield x
  ensure
    # always called, even in case of exception
    x.cleanup
  end
end

do_work do |an_x|
  puts an_x.to_u
end
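A runnable sketch of the same transactional pattern (all names here
are illustrative, not from the post above), showing that the ensure
clause fires whether the block returns normally or raises:

```ruby
# Minimal sketch of the transactional pattern: cleanup is guaranteed
# on both the normal and the exceptional path.
CLEANED = []

def with_resource
  res = "handle"        # stand-in for a real resource
  yield res
ensure
  CLEANED << res        # always runs, even when the block raises
end

with_resource { |r| r.upcase }
begin
  with_resource { |r| raise "boom" }
rescue RuntimeError
  # the exception propagated, but cleanup already ran
end
puts CLEANED.size
```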

And another remark: as opposed to Java finalizers, Ruby finalizers are
guaranteed to be invoked, even on program exit:

$ ruby -e 'ObjectSpace.define_finalizer(Object.new){ puts "called" }'
called
$ ruby -e 'o=Object.new;ObjectSpace.define_finalizer(o){ puts "called" }'
called

Btw, you can also define exit handlers with at_exit:
http://www.ruby-doc.org/core/classes/Kernel.ht...

Kind regards

robert

Guillaume Cottenceau

4/12/2005 6:05:00 PM


What version of Ruby are you running? With your example I can see:

*** constructor
Foo was out-scoped.
Gc was run.
*** pseudo-destructor

(which tends to prove that GC.start did not trigger the finalizer -
just the same as my program, actually)

What difference do you claim your program makes compared to mine?

--
Guillaume Cottenceau - http://zar...


ES

4/12/2005 6:13:00 PM


Guillaume Cottenceau wrote:
> Hi,
>
> I'm considering the possibility to run the finalizers at a given point
> of a program. I have written the following program, and I'm
> experiencing the following unexpected behaviour: I call GC.start in
> the hope that finalizers of unscoped objects that have one will be
> run, but it seems they aren't.

If you really want to somehow 'delete' the objects (rather than just
free some memory) at a certain point, you might want to use ensure.
It works 'outside' exception handling, too.

def foo()
  # Something

ensure
  # Finalize
end

> http://www.zarb.org/~gc/t/prog/destructor/nodes...
>
> I have written the following program in ocaml, which has a closer GC
> implementation than perl or python (these have reference counting,
> ocaml doesn't) and it seems that contrary to Ruby, finalizers are run
> as expected when a GC run is forced (however, when the closure used as
> finalizer has a reference to anything inside the object, this counts
> as a reference thus object is always scoped, so this is not really
> useful as a finalizer/destructor anyway).
>
> http://www.zarb.org/~gc/t/prog/destructor/des...
>
> Do you guys have any insight with ruby behaviour? I use 1.8.2 on Linux.

E



Guillaume Cottenceau

4/12/2005 6:19:00 PM


> Apart from what Guy wrote already, why do you need to determine the point in
> time when finalizers are called? The whole idea of GC and finalization is
> that you *don't* care when it happens. If you need to ensure (hint, hint)

Yes. This was partly an academic question, partly because I feel in
my guts that the java people claiming "it's not a problem that we
don't have multiple inheritance, because you don't need it" and "it's
not a problem that we don't have destructors, because you don't need
them" are plain wrong (I do know this makes the implementation easier,
so I'd like people to admit that) (and I am a bit sorry that ruby,
otherwise a great language, follows this path), and partly because
I've bumped into a similar problem in a java program I work on for a
living (the need to free DB connections associated with out-of-scope
java objects).

In other words, in my opinion there are cases where IO can be freed
with a try|begin/catch|rescue/finally|ensure, but other cases where,
for example, an object is a wrapper around some IO, and in such
circumstances it makes good sense to free this IO when the object is
out of scope instead of explicitly calling a close/free method,
especially when several locations in your program make use of such an
object. With a reference-counting GC (Perl, Python) the use of
destructors for such a matter is immediate (and that may explain why
they provide destructors, btw), but with a mark & sweep or another
"asynchronous" GC it becomes a problem; this problem can possibly be
worked around by explicitly calling the GC from carefully crafted
locations ("when a new request enters" comes to mind when you deal
with a server-based service), however I admit this is far from ideal.
But even that seems impossible with ruby (and java) according to the
results of my short program.
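[To make the wrapper case concrete, here is a sketch; the class and
method names are illustrative, not from the thread. The finalizer proc
is built in a class method so it captures only the IO handle, never
the wrapper itself, which would otherwise stay reachable forever:]

```ruby
require 'stringio'

# Hypothetical wrapper around an IO whose finalizer closes the handle.
class IOWrapper
  def initialize(io)
    @io = io
    # Build the finalizer in a different scope: a proc created here
    # would capture self and keep the wrapper alive.
    ObjectSpace.define_finalizer(self, self.class.closer(io))
  end

  # Returns a proc that closes the io; it captures only io.
  def self.closer(io)
    proc { io.close unless io.closed? }
  end
end

io = StringIO.new("data")
IOWrapper.new(io)
# When the GC eventually collects the wrapper (or at program exit),
# the proc closes io; the timing is entirely up to the collector.
```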

> And another remark: as opposed to Java finalizers, Ruby finalizers are
> guaranteed to be invoked, even on program exit:
>
> $ ruby -e 'ObjectSpace.define_finalizer(Object.new){ puts "called" }'
> called
> $ ruby -e 'o=Object.new;ObjectSpace.define_finalizer(o){ puts "called" }'
> called

Yes, and this is a very good point, I know that.

Thanks for your message.

--
Guillaume Cottenceau - http://zar...


gabriele renzi

4/12/2005 6:43:00 PM


Guillaume Cottenceau wrote:

<snip>
> In other words, in my opinion there are cases where IO can be freed
> with a try|begin/catch|rescue/finally|ensure but other cases where for
> example an object is a wrapper around some IO, and in such
> circumstances it makes good sense to free this IO when the object is
> out of scope instead of explicitly calling a close/free method,
> especially when there can be several locations of your program that
> make use of such an object.

I think you slightly misunderstood the previous message.
In ruby, whenever you want this kind of "create & use & destroy quickly"
idiom, you don't call a free/close method explicitly; you rely on
methods that handle it for you, say:

open('file') do |f|
  # bla bla
end

it is the #open call that takes care of freeing the resource; there is
no need to handle it yourself.

In java you don't have blocks, so you have to always use "finally".
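[Writing such an open-style method yourself is straightforward; here
is a sketch, using Tempfile purely for illustration. The method owns
the resource's lifetime and yields it to the caller, as File.open
does:]

```ruby
require 'tempfile'

# A user-defined open-style method: the resource lives exactly as
# long as the block, even if the block raises.
def with_tempfile(name = "demo")
  f = Tempfile.new(name)
  yield f
ensure
  f.close! if f   # close and delete the file on every exit path
end

path = nil
with_tempfile do |f|
  f.write("hello")
  path = f.path
end
puts File.exist?(path)   # the file is gone once the block returns
```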


Glenn Parker

4/12/2005 9:22:00 PM


ts wrote:
>
> you must know that the gc is a little special...

I'd like to hear a little bit more about *why* the Ruby GC is special.

--
Glenn Parker | glenn.parker-AT-comcast.net | <http://www.tetrafoi...


Robert Klemme

4/12/2005 9:23:00 PM



"Guillaume Cottenceau" <gcottenc@gmail.com> schrieb im Newsbeitrag
news:dc3bf85805041211175838456@mail.gmail.com...
>> Apart from what Guy wrote already, why do you need to determine the point
>> in
>> time when finalizers are called? The whole idea of GC and finalization
>> is
>> that you *don't* care when it happens. If you need to ensure (hint,
>> hint)
>
> Yes. This was partly an academic question, partly because I feel in my
> guts that java people pretending that "it's not a problem we don't
> have multiple inheritance because you don't need it" and "it's not a
> problem we don't have destructors because you don't need it" are plain
> wrong (I do know this is making implementation easier so I'd like that
> people admit that) (and I am a bit sorry that ruby, otherwise a great
> language, follows this path),

Yes, but with significant differences: 1. finalizers are guaranteed to
be invoked (unlike in Java) and 2. you cannot resurrect an object from
the finalizer (a really odd property of Java). Plus, there are other
elegant means to deal with automated resource deallocation (method +
block).

> and partly because I've bumped into a
> similar problem in a java program I work on for a living (the need to
> free DB connections associated with out of scope java objects).

Use "finally". You can as well mimic Ruby behavior by defining a callback
interface (which in Ruby would be a block) like this:

interface Action {
  public void doit(Connection tx) throws SQLException;
}

class DbPool {
  void doit(Action action) throws SQLException {
    Connection tx = getFromPool();
    try {
      action.doit( tx );
    }
    finally {
      returnToPool( tx );
    }
  }
}

That is just slightly more verbose than the Ruby equivalent but just
as safe (i.e. cleanup is always done properly).

> In other words, in my opinion there are cases where IO can be freed
> with a try|begin/catch|rescue/finally|ensure but other cases where for
> example an object is a wrapper around some IO, and in such
> circumstances it makes good sense to free this IO when the object is
> out of scope instead of explicitly calling a close/free method,

As I tried to explain in my last post, you don't need to invoke the
cleanup explicitly, because you can encapsulate it in a method that
takes a block.

> especially when there can be several locations of your program that
> make use of such an object. With a reference counting implementation
> of a GC (Perl, Python) the use of destructors for such a matter is
> immediate (and that may explain why they provide destructors, btw) but
> with a mark & sweep or another "asynchronous" GC it becomes a problem;
> this problem can possibly be worked around by explicitly calling the
> GC from carefully crafted locations ("when a new request enters" comes
> to mind when you deal with a server-based service), however I admit this
> is far from ideal. But even that seems impossible with ruby (and java)
> according to the results of my short program.

No, explicitly invoking the GC is definitely *not* the solution for
this. In Ruby there is "ensure", used either directly or from a method
that receives a block. Even if some instance is a wrapper around an IO
instance, this pattern can be applied - and it's the most appropriate
one.

Another reason not to use GC for this is that you don't have access to the
GC'ed instance in the finalizer which makes things overly complicated.

I really think you are trying to use the wrong tool for the problem at hand.

>> And another remark: as opposed to Java finalizers, Ruby finalizers are
>> guaranteed to be invoked, even on program exit:
>>
>> $ ruby -e 'ObjectSpace.define_finalizer(Object.new){ puts "called" }'
>> called
>> $ ruby -e 'o=Object.new;ObjectSpace.define_finalizer(o){ puts "called" }'
>> called
>
> Yes, and this is a very good point, I know that.
>
> Thanks for your message.

You're welcome.

Cheers

robert

Guillaume Cottenceau

4/12/2005 10:15:00 PM


> > In other words, in my opinion there are cases where IO can be freed
> > with a try|begin/catch|rescue/finally|ensure but other cases where for
> > example an object is a wrapper around some IO, and in such
> > circumstances it makes good sense to free this IO when the object is
> > out of scope instead of explicitly calling a close/free method,
>
> As I tried to explain in my last post, you don't need to invoke the cleanup
> explicitly, because you can encapsulate it in a method that takes a block.

As I tried to explain as well, let's try not to fall back on the usual
"your algorithm is broken" answer, and actually consider the problem
(you might want to treat it as a purely academic question, if that
helps).

Ok, since I know that no one will want to do that without a more
precise example, here it is: what happens when the resource is
allocated and worked on first, then, in a totally different part of
the program, much later, results are extracted from it (and this
extraction can possibly be performed multiple times), and then again
later (laaaaaater) the object is collected? Does this block trick
still work? It seems not, if I understand it correctly. And, may I
add, "destructor semantics" apply perfectly to such circumstances:
i.e. putting in the object's class itself some code to be run when the
object disappears, whenever and under whatever circumstances that
happens.

--
Guillaume Cottenceau - http://zar...


Glenn Parker

4/12/2005 10:51:00 PM


Guillaume Cottenceau wrote:
>
> Ok, since I know that no one will want to do that without a more
> precise example, here it is: what happens when the resource is
> allocated and worked on first, then in a totally different part of the
> program, much later, results are extracted from it - and this
> extraction can also possibly be performed multiple times, then again
> later (laaaaaater) the object is collected? Does this block trick
> still work? It seems not, if I understand it correctly.

Fair enough, long-lived objects are not suitable for the block-wrapping
trick. But that still doesn't explain the nature of the work you want
to do in a finalizer.

Do you know when the last "extraction" has been done (making it safe for
your finalizer to run)? Can you use that knowledge to explicitly run
the finalization method, instead of waiting for the GC?

--
Glenn Parker | glenn.parker-AT-comcast.net | <http://www.tetrafoi...