Robert Klemme
4/12/2005 9:23:00 PM
"Guillaume Cottenceau" <gcottenc@gmail.com> schrieb im Newsbeitrag
news:dc3bf85805041211175838456@mail.gmail.com...
>> Apart from what Guy wrote already, why do you need to determine the
>> point in time when finalizers are called? The whole idea of GC and
>> finalization is that you *don't* care when it happens. If you need to
>> ensure (hint, hint)
>
> Yes. This was partly an academic question, and partly because I feel in
> my guts that Java people claiming "it's not a problem that we don't
> have multiple inheritance, because you don't need it" and "it's not a
> problem that we don't have destructors, because you don't need it" are
> plainly wrong (I do know this makes the implementation easier, so I'd
> like people to admit that) (and I am a bit sorry that Ruby, otherwise a
> great language, follows this path),
Yes, but with significant differences: 1. finalizers are guaranteed to be
invoked (unlike in Java) and 2. you cannot resurrect an object from the
finalizer (a really odd property of Java). Plus, there are other elegant
means to deal with automated resource deallocation (method + block).
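A minimal Ruby sketch of that method + block idiom (the Resource class and its method names are hypothetical, just for illustration): the acquiring method yields the resource and guarantees cleanup via ensure, so callers can never forget to free it.

```ruby
# Hypothetical resource wrapper illustrating the "method + block" idiom.
class Resource
  # Acquire a resource, yield it to the caller's block, and always
  # release it afterwards - even if the block raises.
  def self.open
    res = new
    begin
      yield res
    ensure
      res.close   # cleanup is guaranteed by ensure
    end
  end

  def close
    @closed = true
  end

  def closed?
    !!@closed
  end
end

# Usage: the caller never has to remember to free the resource.
r = nil
Resource.open { |res| r = res }
r.closed?   # => true
```

The same shape works for files, sockets, DB connections: the "constructor" takes a block and owns the cleanup.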
> and partly because I've bumped into a
> similar problem in a java program I work on for a living (the need to
> free DB connections associated with out of scope java objects).
Use "finally". You can also mimic the Ruby behavior by defining a callback
interface (which in Ruby would be a block) like this:
interface Action {
    public void doit(Connection tx) throws SQLException;
}

class DbPool {
    void doit(Action action) throws SQLException {
        Connection tx = getFromPool();
        try {
            action.doit(tx);
        }
        finally {
            returnToPool(tx);   // always runs, even if doit() throws
        }
    }
}
That is just slightly more verbose than the Ruby equivalent, but just as
safe (i.e. cleanup is always properly done).
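For comparison, a hypothetical Ruby sketch of that same pool (the class, method names, and the stand-in connection object are my own, not from any real library):

```ruby
# Hypothetical Ruby counterpart to the Java DbPool above: the pool method
# yields a connection and ensures it is returned, whatever the block does.
class DbPool
  def initialize
    @pool = []
  end

  def with_connection
    conn = @pool.pop || Object.new   # stand-in for creating a real connection
    begin
      yield conn
    ensure
      @pool.push(conn)               # returned to the pool even on error
    end
  end
end

pool = DbPool.new
pool.with_connection { |tx| tx } # work with tx; no explicit release needed
```

The block replaces the whole Action interface, which is where the Java verbosity comes from.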
> In other words, in my opinion there are cases where IO can be freed
> with a try|begin/catch|rescue/finally|ensure, but other cases where for
> example an object is a wrapper around some IO, and in such
> circumstances it makes good sense to free this IO when the object is
> out of scope instead of explicitly calling a close/free method,
As I tried to explain in my last post, you don't need to invoke the cleanup
explicitly because you can encapsulate it in a method that takes a block.
> especially when there can be several locations in your program that
> make use of such an object. With a reference-counting implementation
> of a GC (Perl, Python) the use of destructors for such a matter is
> immediate (and that may explain why they provide destructors, btw), but
> with a mark-and-sweep or another "asynchronous" GC it becomes a problem;
> this problem can possibly be worked around by explicitly calling the
> GC from carefully crafted locations ("when a new request enters" comes
> to mind when you deal with a server-based service), however I admit this
> is far from ideal. But even that seems impossible with Ruby (and Java)
> according to the results of my short program.
No, explicitly invoking the GC is definitely *not* the solution for this. In
Ruby there is "ensure", either used directly or from a method that receives a
block. Even if some instance is a wrapper around an IO instance, this
pattern can be applied - and it's the most appropriate one.
Another reason not to use GC for this is that you don't have access to the
GC'ed instance in the finalizer, which makes things overly complicated.
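To make that last point concrete: a Ruby finalizer proc receives only the object's id, never the object itself, and capturing the object in the proc would keep it alive forever. The usual idiom is to build the proc in a class method so it closes over just the state needed for cleanup (the Wrapper class here is a hypothetical illustration):

```ruby
# Hypothetical IO wrapper showing why finalizers are awkward: the proc
# gets an object id, not the object, so cleanup state (the io) must be
# captured separately.
class Wrapper
  def initialize(io)
    ObjectSpace.define_finalizer(self, self.class.finalizer(io))
  end

  # Built in a class method so the proc cannot accidentally close over
  # self, which would prevent the instance from ever being collected.
  def self.finalizer(io)
    proc { |object_id| io.close unless io.closed? }
  end
end
```

Compare the contortions here with a simple `ensure` block, and the "wrong tool" point below becomes obvious.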
I really think you are trying to use the wrong tool for the problem at hand.
>> And another remark: as opposed to Java finalizers, Ruby finalizers are
>> guaranteed to be invoked, even on program exit:
>>
>> $ ruby -e 'ObjectSpace.define_finalizer(Object.new){ puts "called" }'
>> called
>> $ ruby -e 'o=Object.new;ObjectSpace.define_finalizer(o){ puts "called" }'
>> called
>
> Yes, and this is a very good point, I know that.
>
> Thanks for your message.
You're welcome.
Cheers
robert