
comp.lang.ruby

Re: Managing metadata about attribute types

Austin Ziegler

11/6/2003 3:28:00 AM

On Wed, 5 Nov 2003 18:08:04 +0900, Ryan Pavlik wrote:
> Austin Ziegler <austin@halostatue.ca> wrote:
> <big snip>
>> It isn't inferior, and you don't need type info. Remember -- an
>> object should validate or transform its own data.
> <big snip>
> I'm trying to stay out of this because I mostly disagree. This however
> warrants addressing.
>
> This is demonstrably wrong.

Actually, it's demonstrably correct. It's exactly to the point and
perfectly accurate regarding how one should deal with data in a
dynamically typed language such as Ruby. In Text::Format, I have a
method #hyphenator= which accepts any object that responds to
#hyphenate_to with an arity of 2 or 3. I explicitly reject any other
object. In documentation, I make it clear that #hyphenate_to should
return an array of two objects. In this way, I don't care what
*class* an object is, I just care that its type is a hyphenator (as
defined above).
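The duck-typed check described above might look like this minimal sketch (Formatter and SimpleHyphenator are hypothetical stand-ins for illustration, not the actual Text::Format code):

```ruby
class Formatter
  # Accept any object that quacks like a hyphenator: it must respond
  # to #hyphenate_to with an arity of 2 or 3. Its class is irrelevant.
  def hyphenator=(obj)
    unless obj.respond_to?(:hyphenate_to) &&
           [2, 3].include?(obj.method(:hyphenate_to).arity)
      raise ArgumentError, "#{obj.class} does not act like a hyphenator"
    end
    @hyphenator = obj
  end
end

class SimpleHyphenator
  # Naive hyphenation: just split the word at the size limit and
  # return the two pieces, as the documented contract requires.
  def hyphenate_to(word, size)
    [word[0, size], word[size..-1]]
  end
end

f = Formatter.new
f.hyphenator = SimpleHyphenator.new   # accepted: it quacks correctly
```

Any object lacking #hyphenate_to (or with the wrong arity) is rejected up front, which is the whole of the "type" check.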

> An object cannot validate and transform its own data in this
> context in any reasonably general manner.

Your StrongTyping module doesn't help with this, either, Ryan. It's
not a conversion module.

> It's simple when you're addressing a few basic types... String,
> Float, Integer, Hash and Array.

Not really. If I were to do the following:

class Foo
  def foo=(x)
    @foo = x.to_f
  end
end

Foo.new.foo = [4, 5]

I'd get an error. That much is clear. I can program defensively in
Foo#foo= by catching the case where x does not respond to #to_f
(either as an exception or by #respond_to?(:to_f)), but I'll still
want to raise an error. But your StrongTyping module doesn't help
here, either:

class Foo
  def foo=(x)
    expect(x, Numeric)
    @foo = x
  end
end

Whee. You've semi-automated the error checking. Rather than
attempting to convert x to a float, you require that it be a
numeric. Great. Foo#foo= still is expected to work as a float, but
if you're given a Fixnum, you won't get the expected results
necessarily.
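The defensive version of Foo#foo= described above might be sketched like this (a minimal illustration under the stated assumptions, not actual Text::Format code):

```ruby
class Foo
  # Convert whatever we are given into a Float up front; reject
  # anything that cannot act like a number at all.
  def foo=(x)
    unless x.respond_to?(:to_f)
      raise TypeError, "cannot treat #{x.class} as a float"
    end
    @foo = x.to_f
  end

  attr_reader :foo
end

f = Foo.new
f.foo = "3.5"   # a String converts cleanly; an Array raises TypeError
```

A Fixnum passed here becomes 4.0 rather than staying an integer, which is exactly the conversion the expect(x, Numeric) version omits.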

Even your "academic" example on the StrongTyping page falls down
with a bit of defensive programming:

def sendMsg(bridge, n, m)
  sleep(n)
  bridge.open
  sleep(m)
ensure
  bridge.close
end

If you know you're going to be doing something that could leave
things in an intermediate state, ensure that they don't get left in
such a state. The bridge is closed when you pass "m" as a string.

> This isn't general, though. What if I want a Foo, and you give me
> a Bar? Foo was from one module (which shouldn't know about Bar),
> and Bar was from another module (which shouldn't know about Foo).
> There is no #to_foo (which may be fortunate depending on
> your culinary preferences). This leaves us with only a few
> options:

> * We just don't allow it. This does no good for us.

But this is *exactly* what StrongTyping does, Ryan. It doesn't allow
types that it doesn't know how to deal with.

> * We convert through an intermediary type. This is inefficient and
> may lose data or not work, either.

> * We decide the approach is wrong and do something else.

> Using #to_* methods is the ruby equivalent of type casting. The
> point in this case is not to _convert_ types, it's to provide the
> right type in the first place. Instead of giving the attribute a
> string and expecting it to be parsed, we want to create the
> desired type and hand that off.

Oh, bollocks. Item's @cost is expected to work as a float. Not an
integer, a float. If I want to ensure that it works as a float,
Item#cost= should attempt to *make* the provided parameter into a
float. If someone is going to pass me something that can't be
converted to what I expect, they're going to get an error. If I'm
expecting a Foo, though, then I should probably do something like:

def foo=(x)
  @x = Foo(x)
end

That's *if* a Foo can be converted from other types. Or, maybe, I'm
just looking for a particular method call, in which case I can
defensively program as I did with Text::Format. Can it still be
bitten by someone who accepts the parameters in a different order or
expects different things? Sure. But that's not exactly something
that I can program against in any case ... unless I'm using strong
typing, and IMO that does the wrong thing.

Note that I've done much the same thing with Ruwiki recently -- I've
abstracted out what a Ruwiki::Wiki::Token needs to know into a
Handler. It responds to certain methods. As the needs for what a
Wiki Token needs to know increase, the Handler will increase as
well. I'm not even checking to see if the token handler is a
specific class. I just expect that it will respond to those methods.

As Rich Kilmer said, are you checking for behaviour or namespace?

> It has nothing to do with the #attr= function. Strict type
> checking at that point is merely a convenience. It's all about
> getting the input into a useful format without writing n^2
> functions (for n classes). This is the primary reason I wrote
> StrongTyping in fact; the strict checking has merely helped with
> debugging a whole lot.

It has *everything* to do with the #attr= function in the case given.
The OP is dealing with an XML document, which means that everything
is a string and #to_f works. The OP has a library of objects where
certain things are expected to be floats -- but nothing is done to
ensure that. It could be done through StrongTyping, or it could be
done through the proc-based attr_accessor that I posted. Or, the two
could be combined:

attr_accessor proc { |x| expect(x, [Numeric, String]); x.to_f }, :cost

or

attr_accessor proc { |x| expect(x, Float); x }, :cost

The problem here is ultimately that your objects have to know what
they expect and how they are expected to be used *in general*. If
you've got Item#cost, you can expect that it will be used in
mathematic operations. You'll probably do such operations yourself.
So, why not do some sort of conversion on the data to make it into
what you expect it will be when you're defining your attribute
accessor?
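The proc-based accessor mentioned above might be implemented along these lines (attr_accessor_via is a hypothetical name for this sketch; it is not the code Austin actually posted):

```ruby
class Module
  # Define reader/writer pairs whose writer runs the incoming value
  # through a filter proc before storing it.
  def attr_accessor_via(filter, *names)
    names.each do |name|
      define_method(name) { instance_variable_get("@#{name}") }
      define_method("#{name}=") do |value|
        instance_variable_set("@#{name}", filter.call(value))
      end
    end
  end
end

class Item
  attr_accessor_via proc { |x| x.to_f }, :cost
end

item = Item.new
item.cost = "19.95"   # stored as the Float 19.95, not the String
```

The conversion is declared once, at the accessor definition, so every caller gets the same normalization for free.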

A lot of people coming from strict typing backgrounds forget that
this is done implicitly by the compiler ... or rather, because the
type is strictly specified it isn't necessary to do such
conversions. They're done automatically when the types can be
converted.

-austin
P.S. It was suggested that:
attr_accessor proc { |x| x.to_f }, :cost
is overkill. Thus, the version I attached last night includes:
attr_accessor [:cost] => :to_f
I may add other enhancements to that mechanism (as providing
multiple method symbols in an array and chaining them).
--
austin ziegler * austin@halostatue.ca * Toronto, ON, Canada
software designer * pragmatic programmer * 2003.11.05
* 10.54.20



3 Answers

Simon Kitching

11/6/2003 6:06:00 AM


On Thu, 2003-11-06 at 16:27, Austin Ziegler wrote:
> On Wed, 5 Nov 2003 18:08:04 +0900, Ryan Pavlik wrote:
> > Austin Ziegler <austin@halostatue.ca> wrote:
> > <big snip>
> >> It isn't inferior, and you don't need type info. Remember -- an
> >> object should validate or transform its own data.
> > <big snip>
> > I'm trying to stay out of this because I mostly disagree. This however
> > warrants addressing.
> >
> > This is demonstrably wrong.
>
> Actually, it's demonstrably correct. It's exactly to the point and
> perfectly accurate regarding how one should deal with data in a
> dynamically typed language such as Ruby. In Text::Format, I have a
> method #hyphenator= which accepts any object that responds to
> #hyphenate_to with an arity of 2 or 3. I explicitly reject any other
> object. In documentation, I make it clear that #hyphenate_to should
> return an array of two objects. In this way, I don't care what
> *class* an object is, I just care that its type is a hyphenator (as
> defined above).
>
> > An object cannot validate and transform its own data in this
> > context in any reasonably general manner.
>
> Your StrongTyping module doesn't help with this, either, Ryan. It's
> not a conversion module.

I don't think that the concept of strict typechecking deserves quite
such a roasting :-)

For me, when looking at a method like
def foo(param)
  ...
end
the question is "what contract is the param required to adhere to?".
And maybe "what is the contract of the returned object".

If the code is strictly-typed, like
void foo(Map map) { ... }
then I know exactly what contract the param must adhere to: it's
documented in the javadoc on the Map class. [Isn't that the definition
of "type"? A contract of behaviour?]

Ok, there are flaws to strict typing. The first is that the contracts
tend to over-specify. The foo method probably only wants a few of the
methods from the Map class, but the concrete class of the object I pass
must implement them *all*. In fact, the Java collections class has an
ugly hack to resolve this issue: methods are allowed to throw an
UnsupportedOperationException, which means an object might only partially fulfil
the required contract. However 90% is probably good enough.

And for java's Object-based collections, you lose part of the contract:
what is the object type *in* the map. Generics (templates for Java) will
resolve this issue (I hope).

In programming languages which are always distributed with
non-obfuscated source code, the source can be inspected to determine the
contract. This isn't too bad a solution, provided the library author
writes clean code and comments it well.

Some developers are well enough disciplined to put the contract in
comments. This is fairly rare, though. And there is no guarantee that
those comments are actually correct and up-to-date.

And then there is "try it and see", where the user has to discover the
contract by trial and error. I'm not so fond of this.

In the end, the problem is simply one of human<->human communication.
The author of a library needs to tell the user of the library about
various contracts. Strict typing is one end of the spectrum of this
communication. It results in well-specified code, but at the cost of
extra developer labour. Ruby is the other end; it results (generally) in
completely unspecified code (see "read the source" above) but imposes
little overhead on the developer.

Compiler-assisted programming, where the compiler tells the user when
they got it wrong is just a bonus. I wouldn't miss this as much as the
lack of *communication* about types.

I don't think that a true measure of the effectiveness of strict typing
can be had by saying "I wrote a big application and didn't need strict
typing". I suggest you try *using* a big library someone else wrote, and
see if you miss it :-). Then deliver that app to a customer and see if
they turn up any cases where you made a mistake about the contract of a
method or parameter.

Obviously I'm biased; my experience is mainly in strictly-typed
languages. For smallish apps I can clearly see the benefits of Ruby.

In fact, given a combination of good unit-tests and well-modularised
code (so I can see all the places an object is manipulated and therefore
deduce its contract), I could be convinced of Ruby for large projects
too. But I think I will always wish that the *type* of parameters (and
return values!) were simply documented via type declarations like Java.


Cheers,

Simon


Ryan Pavlik

11/6/2003 6:17:00 AM


On Thu, 6 Nov 2003 12:27:43 +0900
Austin Ziegler <austin@halostatue.ca> wrote:

> On Wed, 5 Nov 2003 18:08:04 +0900, Ryan Pavlik wrote:
<snip>
> > This is demonstrably wrong.
>
> Actually, it's demonstrably correct. It's exactly to the point and
> perfectly accurate regarding how one should deal with data in a
> dynamically typed language such as Ruby. In Text::Format, I have a
> method #hyphenator= which accepts any object that responds to
> #hyphenate_to with an arity of 2 or 3. I explicitly reject any other
> object. In documentation, I make it clear that #hyphenate_to should
> return an array of two objects. In this way, I don't care what
> *class* an object is, I just care that its type is a hyphenator (as
> defined above).

This is a specific case that does not generalize.

> > An object cannot validate and transform its own data in this
> > context in any reasonably general manner.
>
> Your StrongTyping module doesn't help with this, either, Ryan. It's
> not a conversion module.

Actually it does, but not the way you think. The point in using the
ST module is not the type verification; many ruby people get too hung
up on the type checking bit.

The real area of interest is the fact it _documents_ what you want;
you can ask it what type it is expecting with the various query
functions the ST module provides.

> > It's simple when you're addressing a few basic types... String,
> > Float, Integer, Hash and Array.
>
> Not really. If I were to do the following:

<big typechecking examples snipped>

> If you know you're going to be doing something that could leave
> things in an intermediate state, ensure that they don't get left in
> such a state. The bridge is closed when you pass "m" as a string.

Again, this misses the point. The original question was "how do I ask
what a particular thing wants?" My answer was to use the ST module,
because you do exactly that:

class Foo
  def foo=(x); expect(x, Numeric); @x = x; end
end

Or more conveniently:

class Foo
  attr_accessor_typed Numeric, :foo
end

Now you can simply say:

foo = Foo.new
t = get_arg_types(foo.method(:foo=)) # => [[Numeric]]

Now, how is this useful? We can do "third-party" input processing,
where it properly belongs. Having this code in either the source
class (String) or the destination class (for the XML) is wrong, since
it results in unnecessary coupling, and thus limits extensibility.

As a third party, we can have an interface for extending the input
processing from any position.

> > * We just don't allow it. This does no good for us.
>
> But this is *exactly* what StrongTyping does, Ryan. It doesn't allow
> types that it doesn't know how to deal with.

No, Austin, this isn't what it's about. This is not about type
checking arguments passed to a method---that's just a handy side
effect in this case. It's about querying beforehand.

<snip>
> > Using #to_* methods is the ruby equivalent of type casting. The
> > point in this case is not to _convert_ types, it's to provide the
> > right type in the first place. Instead of giving the attribute a
> > string and expecting it to be parsed, we want to create the
> > desired type and hand that off.
>
> Oh, bollocks. Item's @cost is expected to work as a float. Not an
> integer, a float. If I want to ensure that it works as a float,
> Item#cost= should attempt to *make* the provided parameter into a
> float. If someone is going to pass me something that can't be
> converted to what I expect, they're going to get an error. If I'm
> expecting a Foo, though, then I should probably do something like:
>
> def foo=(x)
> @x = Foo(x)
> end

Er, that works in C++, but not ruby, unless you define a Foo()
function. I'm not sure ruby will allow both a Foo class and a Foo
function, but either way it's not any sort of working standard.

However, given Foo(x) makes a Foo out of x, this is definitely not a
good solution. We want a Foo. Any Foo should work...

class Bar < Foo
:
end

...even if it's a Bar. Inheritance is one of the big three of OOP;
doing the sort of re-creation you have here defeats the point. Even
C++ typecasting preserves the identity of a thing.

> That's *if* a Foo can be converted from other types. Or, maybe, I'm
> just looking for a particular method call, in which case I can
> defensively program as I did with Text::Format. Can it still be
> bitten by someone who accepts the parameters in a different order or
> expects different things? Sure. But that's not exactly something
> that I can program against in any case ... unless I'm using strong
> typing, and IMO that does the wrong thing.

I'm not sure what you mean by "the wrong thing"; since you've been
focusing solely on strict type checking, that may be it. I fail to
see how this is "the wrong thing", though.

In any case, you could conceivably use ST to handle this case. I've
been contemplating a module that augments this and provides call
context and semantics in addition to types. Duck typing people may
like this more, since it would do implicit type conversions. You
wouldn't care what arguments a method took; you'd provide it with the
data at hand, coupled with its semantic documentation:

def foo(*args)
  x, y = find_in_context([:x, Float, :point],
                         [:y, Float, :point])
  :
end

We could call it in a number of ways:

# Specify semantics explicitly:
foo(:x => 1, :y => 2)

# "Point" would have semantic tagging for :point, :x, and :y:
point = Point.new(1, 2)
foo(point)

# Have one in context:
context_push Point.new(0, 0)

Circle.new # Both gather their point
foo # from the existing context

Ambiguity and insufficient data would both raise exceptions.
Conceivably these could raise to the user level, and have the user
provide discernment or additional data.

Programming with a context pool is exceedingly useful in a number of
areas, but I'm straying far from the original point. One of the bits
of information here is still type, and for the _same_reason_ I
discussed above: so a third party can query and handle things.

> Note that I've done much the same thing with Ruwiki recently -- I've
> abstracted out what a Ruwiki::Wiki::Token needs to know into a
> Handler. It responds to certain methods. As the needs for what a
> Wiki Token needs to know increase, the Handler will increase as
> well. I'm not even checking to see if the token handler is a
> specific class. I just expect that it will respond to those methods.

I tried a similar thing once (actually it was quite a bit different in
application, but similar in form), but found it leads to far too much
information redundancy. I should be able to say, once, "this is what
this means, this is what this wants, this is what it does". I
shouldn't have to write it again for every class. Granted, if you
only have one class (Ruwiki::Wiki::Token), then it's not much of a
problem.

But the ability to query the type it wants lets you solve the problem
in a general manner, so you don't have to do it again for _any_ class,
and you can extend the general case with a minimal amount of
additional code.

Basically, when I have 200 classes, I don't want another 200 classes
to do interpretation. It makes the idea of writing that 201st class
not seem very pleasant.

> As Rich Kilmer said, are you checking for behaviour or namespace?

OK, let's look at it this way. Say we write things in a duck-typed
manner. We could provide something similar to ST for efficient
checking:

def foo(h)
  expect_duck h, :[], :[]=
  :
  h["something"] = ...
  :
end

Right? We treat it not by type, but what it acts like. It provides
us with #[] and #[]=, so (ignoring any semantic issues) it's enough of
a duck for us.

Now, this may begin to get tedious:

def foo(a, b)
  expect_duck a, [:first, :last, :[], :each],
              b, [:first, :last, :[], :each]
  :
end

So, after a time, we start defining common sets so we don't have to do
that every time:

ARRAY_DUCK = [:first, :last, :[], :each]

def foo(a, b)
  expect_duck a, ARRAY_DUCK, b, ARRAY_DUCK
  :
end

Now, you probably see where I'm going with this. Where I've already
gone, actually. This is a simple reinvention of classes, just lacking
important semantic information.

You can also treat a class (or module) itself as the behavioral
specification, and with it you gain the important semantic
differentiation between the proverbial ducks and platypuses.
You can simply ask, "is this a Foo?", and know that it will act like
you want. When you make a thing that acts like a Foo, you can
subclass or include Foo, to show what you mean.

> > It has nothing to do with the #attr= function. Strict type
> > checking at that point is merely a convenience. It's all about
> > getting the input into a useful format without writing n^2
> > functions (for n classes). This is the primary reason I wrote
> > StrongTyping in fact; the strict checking has merely helped with
> > debugging a whole lot.
>
> It has *everything* to do with the #attr= function in the case given.
> The OP is dealing with an XML document, which means that everything
> is a string and #to_f works.

Austin, as above, not every type may be a simple string, float,
integer, array, or hash. There are no other standard #to_* functions.
(Even array and hash are pushing that one.) It would require either
String know how to interpret every type, or the procs you write
know how to interpret every type. Eventually you're going to want to
add more types, and this method doesn't really allow for inheritance,
and it's not otherwise very extensible.

<snip>
> The problem here is ultimately that your objects have to know what
> they expect and how they are expected to be used *in general*. If
> you've got Item#cost, you can expect that it will be used in
> mathematic operations. You'll probably do such operations yourself.
> So, why not do some sort of conversion on the data to make it into
> what you expect it will be when you're defining your attribute
> accessor?

The question is where it should be done. This is a perfect example.
Money should never (ever!) be done with Floats. People don't like it
when they lose money due to lack of precision.

I've had to write a Currency module which uses integers, but provides
various currency formats and manipulations. If we had Item#cost, we
could not solve the problem with #to_* conversions:

class Item
  attr_accessor proc { |x| x.to_?? }, :cost
end

What do we convert to? What we want is Currency---it doesn't matter
what type. It could be USD, Euros, Yen, or whatever. There is no
general #to_currency, and we can't just convert it to an integer,
because we lose what the integer means (200 USD is not Y200). We
don't really have anything to demand that it responds to; currency is
more or less a number like anything else, it just has a unit attached.

With StrongTyping, it's easy:

class Item
  attr_accessor Currency, :cost
end

Now, _before_ we take "$200.05" and assign it to cost, the ST module
allows us to ask what it wants, and we can, as a third party, know how
to convert input type (a String) to the desired type (Currency).
(In this case, for a general solution, I'd recommend something like
the conversion table I discussed in [ruby-talk:74206].)
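The integer-backed currency idea might be sketched as follows (a hypothetical minimal class, not Ryan's actual Currency module):

```ruby
# Store amounts in the smallest unit (e.g. cents) as integers, so no
# precision is ever lost the way it would be with Floats; keep the
# unit attached so 200 USD can never silently mix with Y200.
class Currency
  attr_reader :cents, :unit

  def initialize(cents, unit)
    @cents = Integer(cents)   # refuse anything that is not exact
    @unit  = unit
  end

  def +(other)
    raise ArgumentError, "unit mismatch" unless other.unit == unit
    Currency.new(cents + other.cents, unit)
  end

  def to_s
    format("%d.%02d %s", cents / 100, cents % 100, unit)
  end
end

price = Currency.new(20_005, :USD)   # "$200.05", held exactly
```

A third party that learns (by querying) that Item#cost wants a Currency can then parse "$200.05" into Currency.new(20_005, :USD) itself, keeping the parsing out of both String and Item.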

> A lot of people coming from strict typing backgrounds forget that
> this is done implicitly by the compiler ... or rather, because the
> type is strictly specified it isn't necessary to do such
> conversions. They're done automatically when the types can be
> converted.

Yes, but they're both standard and extensible. In C++ I can do:

operator int() {
  return (int)this->something;
}

and when I "(int)thing", that routine gets called. OTOH, I don't
think this is really the right solution, as this sort of thing is too
coupled with static typing, and static typing and OOP have no business
being in the same language.

> P.S. It was suggested that:
> attr_accessor proc { |x| x.to_f }, :cost
> is overkill. Thus, the version I attached last night includes:
> attr_accessor [:cost] => :to_f
> I may add other enhancements to that mechanism (as providing
> multiple method symbols in an array and chaining them).
<snip>

Now, with all I've said above, I haven't really addressed
attr-with-proc stuff. Actually, I think it'd be a pretty neat idea to
be able to tag extra code onto attributes without having to do a
full-out def.

I don't think it's the right solution to this problem, though.

--
Ryan Pavlik <rpav@mephle.com>

"Do not question wizards, for they are quick to
turn you into a toad." - 8BT

Ryan Pavlik

11/6/2003 6:40:00 AM


On Thu, 6 Nov 2003 15:05:58 +0900
Simon Kitching <simon@ecnetwork.co.nz> wrote:

<snip>
> I don't think that the concept of strict typechecking deserves quite
> such a roasting :-)

Yeah same, I'm not sure why it's a big deal... it's been pretty
helpful in debugging. If I specify something it doesn't like, I see
exactly where it came from. There are cases where, for instance, I
might assign an attribute, or add something to a list that gets used
later, only to have an error occur in never-never land. No indication
as to who the offending party was.

> For me, when looking at a method like
> def foo(param)
> ...
> end
> the question is "what contract is the param required to adhere to?".
> And maybe "what is the contract of the returned object".
>
> If the code is strictly-typed, like
> void foo(Map map) { ... }
> then I know exactly what contract the param must adhere to: it's
> documented in the javadoc on the Map class. [Isn't that the definition
> of "type"? A contract of behaviour?]

As in my (just) previous message, this is exactly what it is. Even if
you say "I want this thing to respond to #[] and #[]=", you could
label that set... and it basically becomes a class.

> Ok, there are flaws to strict typing. The first is that the contracts
> tend to over-specify. The foo method probably only wants a few of the
> methods from the Map class, but the concrete class of the object I pass
> must implement them *all*. In fact, the Java collections class has an
> ugly hack to resolve this issue: methods are allowed to throw an
> UnsupportedOperationException, which means an object might only partially fulfil
> the required contract. However 90% is probably good enough.

This is where mixins are nice. You can do "microtyping" (is that a
word yet?), and pull in each set of interfaces. This has the added
benefit of attached semantics, so you know that your #[] isn't a call
to a block, but an array dereference, for instance.

Most of the time you don't even need this... it happens naturally with
superclasses.

> And for java's Object-based collections, you lose part of the contract:
> what is the object type *in* the map. Generics (templates for Java) will
> resolve this issue (I hope).

Generics sound too much like templates to me. IMO, templates,
generics, and the like, are all hacks to get around the fact the
language is static.

Personally, I think that if you want something that's not a generic
array, you should just make a subclass that filters its contents.
I realize that this isn't easy in ruby for many of the base classes,
but it's a simple and effective solution.

Instead of making subclasses, you could even just test the filter.
There are other related solutions.
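The "subclass that filters its contents" idea can be sketched in a few lines (TypedArray is a hypothetical illustration):

```ruby
# An Array that only admits elements of one class. Appending
# anything else raises immediately, at the point of the mistake,
# rather than in never-never land later on.
class TypedArray < Array
  def initialize(type)
    @type = type
    super()
  end

  def <<(obj)
    unless obj.is_a?(@type)
      raise TypeError, "expected #{@type}, got #{obj.class}"
    end
    super
  end
end

counts = TypedArray.new(Integer)
counts << 3   # fine; counts << "three" would raise TypeError
```

A real version would also have to guard #push, #concat, #[]= and friends, which is part of why this isn't trivial for Ruby's base classes.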

> In programming languages which are always distributed with
> non-obfuscated source code, the source can be inspected to determine the
> contract. This isn't too bad a solution, provided the library author
> writes clean code and comments it well.
<snip>
> In the end, the problem is simply one of human<->human communication.
> The author of a library needs to tell the user of the library about
> various contracts. Strict typing is one end of the spectrum of this
> communication. It results in well-specified code, but at the cost of
> extra developer labour. Ruby is the other end; it results (generally) in
> completely unspecified code (see "read the source" above) but imposes
> little overhead on the developer.

I generally don't like to have to do things that the computer could
just as easily do a better job of, with information I need to give it
anyway. The extra work on my part of entering the classname here and
there isn't so strenuous and labor-intensive that it cuts into
productivity. Debugging and copying information, though, does.

My theory is that I should only have to tell the computer _once_, in
_one_ place, what I mean, and it should be able to use that
information repeatedly in as broad a manner as possible. Thus, if I
specify my method wants a Date object, it should do everything from
making sure I get one to automatically providing the user with the
appropriate widget.

> Compiler-assisted programming, where the compiler tells the user when
> they got it wrong is just a bonus. I wouldn't miss this as much as the
> lack of *communication* about types.

Basically, I agree. The reason I wrote ST was not just because I was
nervous about getting types wrong, it was because I needed to document
what those types were, in a manner the code could ask itself about
them. As above, the extra checking has improved debugging time
drastically, and that's definitely a bonus.

> I don't think that a true measure of the effectiveness of strict typing
> can be had by saying "I wrote a big application and didn't need strict
> typing". I suggest you try *using* a big library someone else wrote, and
> see if you miss it :-). Then deliver that app to a customer and see if
> they turn up any cases where you made a mistake about the contract of a
> method or parameter.

I concur. It's not _necessary_, in the sense it can't be done
without. I could write a huge application with it, and then remove it
later, and the application would still run.

That, of course, is not the point.

> Obviously I'm biased; my experience is mainly in strictly-typed
> languages. For smallish apps I can clearly see the benefits of Ruby.

I am lax with strict checking with many scripts and modules; usually
this is because it doesn't matter, and I won't need to query things
anyway, but I've often regretted it as well.

> In fact, given a combination of good unit-tests and well-modularised
> code (so I can see all the places an object is manipulated and therefore
> deduce its contract), I could be convinced of Ruby for large projects
> too. But I think I will always wish that the *type* of parameters (and
> return values!) were simply documented via type declarations like Java.

This will work, but I don't have to like it---redundant work just diverts
me from the problem at hand. A thousand programmers having to
re-deduce the contract is a lot of wasted time.

--
Ryan Pavlik <rpav@mephle.com>

"Do not question wizards, for they are quick to
turn you into a toad." - 8BT