Luke Graham
4/13/2005 1:06:00 AM
On 4/12/05, flaig@sanctacaris.net <flaig@sanctacaris.net> wrote:
> Am Montag, 11. April 2005 17:02 schrieb ruby-talk-admin@ruby-lang.org:
> > flaig@sanctacaris.net wrote:
> > > Apart from explicitly creating threads, it would be nice if
> > > the Ruby system could be taught to automatically recognize
> > > parallelizable code and optimally distribute it across a
> > > multiprocessor system -- implicitly. That would be a big
> > > advantage for high-level programming in general! I do not know
> > > the state of the art in this, I only remember that the
> > > Atari/Inmos guys failed to do this in Occam, back in the 1980s.
> > > Do you think there is a serious chance to get such a thing working?
> >
> > The only programming environment I'm familiar with where somebody
> > implemented automatic parallel optimization is Fortran (although I'm
> > sure there are others). Fortran's branching and memory models are
> > constrained enough to allow for some clever analysis. Loops where each
> > iteration has no impact on the next can be discovered and converted into
> > short-term fine-grained parallel execution. In that case, the original
> > code has no concept of threading, it just runs faster during the inner
> > loops.
> >
> > None of that would carry over to a thread-aware language with a dynamic
> > type system.
>
> Do you really think so?
> Fortran has a pretty simple enumerative loop which can be optimized for parallelization, provided your compiler is smart enough.
> Higher-level languages, by contrast, contain (or at least may contain) structures such as MAP which tell the compiler/interpreter: "This refers to an entire block of data" and may be extended to: "So distribute the workload as you think fit." This would not even require any analysis but just a fistful of code in the part of the compiler that handles the respective statement.
> Of course, it might break backward compatibility as it does away with the tacit assumption that the iterations are executed in any guaranteed order...
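A rough illustration of the MAP idea above, in Ruby: a hypothetical `pmap` helper (not a standard method) that hands each element to its own thread. Note it exhibits exactly the caveat mentioned -- the iterations run in no guaranteed order, even though the results come back in order.

```ruby
require 'thread'

# Hypothetical parallel map: each element is processed in its own
# thread. Result order is preserved, but the iterations themselves
# may execute in any order or interleaving.
def pmap(enum)
  threads = enum.map { |item| Thread.new(item) { |x| yield x } }
  threads.map { |t| t.value }  # Thread#value joins and returns the block's result
end

squares = pmap([1, 2, 3, 4]) { |n| n * n }
# squares == [1, 4, 9, 16]
```

With Ruby's green threads (as of 1.8) this buys no real parallelism, of course -- it only sketches the interface a compiler or runtime could target on a multiprocessor.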
Parallel programming, both local and distributed, is one of the great
research topics of this decade. Some good Google keywords: Orca,
Amoeba, Erlang. For things more complicated than a do-loop, it still
takes a human to break the problem into message-passing or an
equivalent. There are plenty of extensions to C/Fortran/whatever to
help in these things.
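For what "breaking the problem into message-passing" looks like by hand, here is a minimal sketch using Ruby's standard-library Queue: a worker thread receives work items over one queue and sends results back over another, with a `:stop` token (an arbitrary convention chosen here) to shut it down.

```ruby
require 'thread'

# Minimal message-passing sketch: the main thread and a worker
# communicate only through two thread-safe queues.
requests  = Queue.new
responses = Queue.new

worker = Thread.new do
  while (msg = requests.pop) != :stop   # block until a message arrives
    responses.push(msg * 2)             # do the "work" and send back a result
  end
end

[1, 2, 3].each { |n| requests.push(n) }
results = 3.times.map { responses.pop }
requests.push(:stop)
worker.join
# results == [2, 4, 6]
```

The point of the thread's observation stands: the decomposition itself -- what counts as a message, who owns which data -- is the part no compiler does for you.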
--
spooq