comp.lang.ruby

Re: holub and OOP flavors

Joe Cheng

9/4/2003 2:56:00 AM

Hi Sean,

> If x does the fooing, I presume that x is part of a level of abstraction
> already in place. That is, if I can call 'fprintf(stderr, "something")',
> then I consider stderr to be part of an abstraction called "printing, but
> to an error stream". If stderr is part of a group of functions, all of
> which were developed BEFORE my OO program came along, then to keep a
> whole lot of non-OO code out of my program, I would wrap up stderr into
> an object.

When I said "x does the fooing" I didn't mean x, the programming construct,
has method foo. For the purposes of this discussion I'm referring to
concepts that do not yet exist as programming constructs--starting from a
clean slate, as it were.

> Therefore, that "* x.foo(y) is more natural because it's x that does the
> fooing" is, to me, almost exclusively the rule.

Given your assumption, i.e. that x is part of a level of abstraction
already in place, I totally agree. I also agree that if the abstraction
doesn't fit your needs, you write an object adapter.
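
For example (a minimal Ruby sketch; ErrorLog and its log method are invented
for illustration), wrapping a pre-existing, non-OO stream like $stderr behind
the small interface the rest of the program wants:

  # Hypothetical adapter: hide a pre-existing stream behind the
  # interface the rest of the program expects.
  class ErrorLog
    def initialize(stream = $stderr)
      @stream = stream
    end

    def log(message)
      @stream.print("#{message}\n")
    end
  end

  ErrorLog.new.log("something went wrong")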

> We've also already talked a lot about the difference between
> string.print and device.print(string) and experience tells me that the
> most useful, re-usable pattern is device.print(string), which is a
> Real-World-imitating pattern.

Sure, that's a simple example. I can think of situations where it would
make a lot more sense to have printee.printTo(device) when the printee is
more complex than a string.
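
Something like this, say (just a sketch; Invoice and ConsoleDevice are
made-up names), where the printee decides how to lay itself out and the
device only needs #print:

  class ConsoleDevice
    def print(text)
      puts text
    end
  end

  # A printee that is more than a string: it knows how to render itself
  # onto whatever device it is handed.
  class Invoice
    def initialize(lines)
      @lines = lines
    end

    def print_to(device)
      @lines.each { |line| device.print(line) }
    end
  end

  Invoice.new(["2 x widget", "1 x gadget"]).print_to(ConsoleDevice.new)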

To turn it around a little... consider Ruby's blocks and iterators. Surely
they're not a Real-World-imitating pattern? If you have a box of pencils
and you want to sharpen them all, you take them out one by one (or all at
once) and sharpen them. You don't pass a pencil-sharpening module to the
box and tell the box to apply it to each pencil. Yet blocks and iterators
are tremendously useful and convenient.
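
In Ruby terms (PencilBox here is invented for the example), the box is handed
the sharpening behavior as a block and applies it to each pencil itself:

  class PencilBox
    include Enumerable

    def initialize(pencils)
      @pencils = pencils
    end

    # The box applies the caller's block to each pencil itself.
    def each(&block)
      @pencils.each(&block)
    end
  end

  box = PencilBox.new(%w[red blue green])
  box.each { |pencil| puts "sharpening the #{pencil} pencil" }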

> Therefore, the statement "* In the Real World, you don't tell a y to
> fooTo x, you just foo the y with x" is a darn good reason to organize
> objects a certain way.

It is certainly a factor. Just not the most important one, IMO.

> Also, regarding developer habits: I think experience teaches a person a
> whole lot that you can't get out of a book; most of that experience
> comes from "being there" and not "reading about it". So, if I know an
> excellent, excellent coder who shows me some code and can't articulate
> why it's a good way to do something, I don't reject his efforts on that
> basis alone. His experience tells me there's probably much more wisdom
> in the decisions he's made than anyone could reasonably articulate
> without spending a fair amount of time reflecting.
>
> Therefore, "* It just feels wrong, I don't know why" is often a
> perfectly valid reason.

And plenty of immature coders might also have intuitions that are not at all
correct. If a Dog is-an Animal, does that mean a DogList is-an AnimalList?
A lot of programmers would "feel" the answer is yes. The correct answer is
no.
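
A rough Ruby sketch of why (class names invented for illustration): if
DogList is-an AnimalList, it inherits the whole AnimalList interface, and
nothing stops a Cat from arriving through it:

  class Animal; end
  class Dog < Animal
    def bark
      "woof"
    end
  end
  class Cat < Animal; end

  class AnimalList
    def initialize
      @items = []
    end

    def add(animal)
      @items << animal
    end

    def each(&block)
      @items.each(&block)
    end
  end

  # Treating DogList as an AnimalList lets a Cat in through the
  # inherited add(animal) -- and the breakage only shows up later.
  class DogList < AnimalList; end

  dogs = DogList.new
  dogs.add(Dog.new)
  dogs.add(Cat.new)                 # perfectly legal via the AnimalList interface
  dogs.each { |d| puts d.bark }     # NoMethodError: Cat has no #bark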

I'm not saying that you or I should not go with our gut when we're writing
code--none of us goes through an internal discussion like this on every
project; we just do it. I'm just saying that if you're looking for a set of
guiding principles for designing systems of classes, there are more powerful
ones than "not feeling right" (and really, I was specifically targeting the
kind of "not feeling right" that comes from abstractions being incongruous
with the natural world).

> > Instead I think the thought process should sound more like:
> > * x is more likely to change in the future, so I would like to keep its
> > implementation particularly opaque and its public interface particularly
> > compact. Therefore I will choose x.foo(y).
>
> I find that "changing code" is not a criterion for designing a set of
> classes, but often I *will* break code which is decidedly transient into
> its own class so that unique implementations can be provided more
> easily. So, I agree more or less.

I find that likelihood of future change is one of the most important driving
forces in all of my designs. Perhaps it is a reflection of the kind of
projects and companies I've been exposed to, but when I read a spec I
automatically think "What parts of this are most likely to grow or
change--probably right before the deadline?".
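
For instance (a contrived Ruby sketch, with TaxCalculator standing in for
whatever part of the spec is most volatile), the x.foo(y) choice keeps the
churn behind one compact method:

  Order = Struct.new(:total)

  # The rules inside tax_for are the part most likely to change right
  # before the deadline; callers never see anything but this one method.
  class TaxCalculator
    def tax_for(order)
      order.total * 0.08
    end
  end

  puts TaxCalculator.new.tax_for(Order.new(100.0))   # => 8.0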

> > * Many classes are likely to have an "is-a" relationship to y, and I
> > would like future developers to be able to provide their own
> > implementations of fooTo, so I will choose y.fooTo(x).
>
> Organizing classes according to their abstractions (devices, strings,
> images, etc.), and by-task (printing, encrypting, etc.) will already
> show "is-a" relationships.

If you don't have a polymorphic fooTo method, then what difference does it
make whether is-a relationships are in place? (I'm making the assumption
that whatever fooTo does, it can vary enough from y-subclass to y-subclass
that you couldn't simply pull the data out through the y interface.)
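
To put the polymorphic case in code (a rough Ruby sketch; Shape, Circle, and
Square are invented for illustration), each y-subclass brings its own fooTo,
so future developers can extend it without touching the callers:

  class Shape
    def draw_to(device)
      raise NotImplementedError, "each subclass decides how to draw itself"
    end
  end

  class Circle < Shape
    def draw_to(device)
      device.print("O\n")
    end
  end

  class Square < Shape
    def draw_to(device)
      device.print("[]\n")
    end
  end

  # New subclasses can bring their own draw_to without the device or
  # the calling code changing; $stdout already responds to #print.
  [Circle.new, Square.new].each { |shape| shape.draw_to($stdout) }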

> I agree, you do have to design to allow developers to extend. You need
> to pick out your inheritance lines, keep methods using the lowest base
> class possible, etc. This is a finer point than was previously
> mentioned. There are other issues, too. But they don't override the
> basics: natural organization, encapsulate your abstractions.
> Unification of abstractions is another issue. Often you need to import
> code that is utterly, shockingly different from your project's design
> patterns.
>
> But I find that, if I stick to Real-World patterns and look for my
> inheritance lines, that's everything. If my code base is filled with
> classes that have natural relationships and are ordered into inheritance
> lines that minimize change from one level of inheritance to another, I
> find from there I can program just about anything I want, extend it
> later as much as I want, and I'm happy as a clam.

I agree that most of the time, if obvious Real-World analogs are available
it is generally going to work out best to use them. The reason, I believe,
is that the Real-World abstractions are *stable* abstractions--whereas
less "natural" abstractions that happen to fit the spec's 3 or 4 use cases
are less likely to withstand the use cases that will be added with the next
version of the spec, and so you end up having to rewrite the code that
modeled those abstractions.

But what happens when the Real-World analogs are not so clear? Compare
Holub's visual proxy architecture to pseudo-MVC (or PAC or whatever he calls
it). Or how about designing non-blocking IO libraries, or modular RPC
mechanisms, or compilers. Trying to find real-world analogs to these
problems can be counterproductive, but the underlying principles of
complexity management and extension points still apply. So I think there's
value in conditioning yourself to think in terms of the underlying
principles, and letting the fact that your designs often resemble the
natural world simply be a nice side effect--as opposed to making resemblance
to the natural world your goal.

> > The latter two questions may not be as intuitive as the former, but I
> > believe they are much more relevant if the goal of a design is to
> > effectively manage complexity.
>
> Managing complexity is what it's all about. If you can get the
> complexity down far enough, there is no limit to what a project can
> achieve.

Yeah, that was a half-rhetorical "if"... you might not be too worried about
managing complexity if you're just trying to get a throwaway demo done for
Friday so you can spend the weekend with the kids. :)