
comp.lang.ruby

Ruby for massively multi-core chips?

Bil Kleb

1/23/2007 7:25:00 AM

How to best evolve Ruby to accommodate 80-core
CPU programming?

http://www.ddj.com/dept/architect...

Or does it already?

Later,
--
Bil Kleb
http://fun3d.lar...

Eric Hodel

1/23/2007 7:59:00 AM


On Jan 22, 2007, at 23:25, Bil Kleb wrote:

> How to best evolve Ruby to accommodate 80-core
> CPU programming?
>
> http://www.ddj.com/dept/architect...
>
> Or does it already?

Keep Koichi employed?

--
Eric Hodel - drbrain@segment7.net - http://blog.se...

I LIT YOUR GEM ON FIRE!


M. Edward (Ed) Borasky

1/23/2007 3:57:00 PM


Eric Hodel wrote:
> On Jan 22, 2007, at 23:25, Bil Kleb wrote:
>
>> How to best evolve Ruby to accommodate 80-core
>> CPU programming?
>>
>> http://www.ddj.com/dept/architect...
>>
>> Or does it already?
>
> Keep Koichi employed?
>
I think it's time I posted my "we've been here before" rant about
concurrency and massively parallel computers on my blog. :) For
starters, do a Google search for the writings of Dr. John Gustafson, who
is now a senior researcher at Sun Microsystems. :)

--
M. Edward (Ed) Borasky, FBG, AB, PTA, PGS, MS, MNLP, NST, ACMC(P)
http://borasky-research.blo...

If God had meant for carrots to be eaten cooked, He would have given rabbits fire.



Daniel Berger

1/23/2007 4:57:00 PM



Bil Kleb wrote:
> How to best evolve Ruby to accommodate 80-core
> CPU programming?
>
> http://www.ddj.com/dept/architect...
>
> Or does it already?

Possible, but not easy, with fork + IPC, I think. Otherwise, no. Neither
does Perl nor Python.

So far, the only language I've seen specifically designed for multiple
CPUs/cores is Fortress, and it's in alpha.
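
A minimal sketch of the fork + IPC route, assuming a Unix Ruby where
Kernel#fork and IO.pipe are available (the squaring here is just a
stand-in for real CPU-bound work):

    # Fan work out to child processes; collect results over pipes.
    jobs = (1..8).to_a

    readers = jobs.map do |n|
      reader, writer = IO.pipe
      fork do                  # child: compute, write result, exit
        reader.close
        writer.write((n * n).to_s)
        writer.close
      end
      writer.close             # parent keeps only the read end
      reader
    end

    results = readers.map { |r| r.read.to_i }
    Process.waitall
    p results  #=> [1, 4, 9, 16, 25, 36, 49, 64]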

Regards,

Dan

Ron M

1/23/2007 6:40:00 PM


Bil Kleb wrote:
> How to best evolve Ruby to accommodate 80-core
> CPU programming?

A version of NArray that parallelizes its
work (a task that could be made easier using
OpenMP or similar) would work especially well
if the CPU-intensive part of your application
is math-heavy.

For more mundane tasks (web serving), an
obvious answer would be to simply fork
off 80 Ruby processes, which would use
the 80 cores efficiently.
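
A minimal prefork sketch along those lines (the worker body below is a
hypothetical stand-in for, say, an accept loop on a shared listening
socket):

    CORES = 80

    def worker_loop(id)
      # stand-in for the real per-core work
      sleep 1
    end

    pids = (0...CORES).map { |i| fork { worker_loop(i) } }
    pids.each { |pid| Process.wait(pid) }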



Eric Hodel

1/23/2007 6:56:00 PM


On Jan 23, 2007, at 07:57, M. Edward (Ed) Borasky wrote:
> Eric Hodel wrote:
>> On Jan 22, 2007, at 23:25, Bil Kleb wrote:
>>> How to best evolve Ruby to accommodate 80-core
>>> CPU programming?
>>>
>>> http://www.ddj.com/dept/architect...
>>>
>>> Or does it already?
>>
>> Keep Koichi employed?
>>
> I think it's time I posted my "we've been here before" rant about
> concurrency and massively parallel computers on my blog. :) For
> starters, do a Google search for the writings of Dr. John
> Gustafson, who is now a senior researcher at Sun Microsystems. :)

See also Koichi's 2005 RubyConf presentation.

--
Eric Hodel - drbrain@segment7.net - http://blog.se...

I LIT YOUR GEM ON FIRE!


gga

1/23/2007 8:29:00 PM



Daniel Berger wrote:
> Bil Kleb wrote:
> > How to best evolve Ruby to accommodate 80-core
> > CPU programming?
> >
> > http://www.ddj.com/dept/architect...
> >
> > Or does it already?
>
> Possible, but not easy, with fork + IPC, I think. Otherwise, no. Neither
> does Perl nor Python.
>
> So far, the only language I've seen specifically designed for multiple
> CPUs/cores is Fortress, and it's in alpha.

Fortress, eh? Never heard of it...

Actually, there are a couple of languages you could use on that machine
that are far, far from beta.

Your best bet for that machine at this point in time is Lua. Lua is
multi-thread ready and pretty stable. If you avoid OO and squint a
little, Lua's syntax even looks like Ruby. It's nowhere near as nice
for OO as Ruby (or Python, for that matter), but it's doable, and a
tiny bit nicer than Perl's OO (though not by much).

And good old, somewhat dusty Tcl has always been thread-friendly.
Tcl's OO is kind of a big mess, as it is not native to the language
and there are two or three competing frameworks for it. Tcl's big
plus, however, is that it has been around the block for a long, long
time.

Tom Pollard

1/24/2007 3:13:00 AM



On Jan 23, 2007, at 10:57 AM, M. Edward (Ed) Borasky wrote:
> I think it's time I posted my "we've been here before" rant about
> concurrency and massively parallel computers on my blog. :) For
> starters, do a Google search for the writings of Dr. John
> Gustafson, who is now a senior researcher at Sun Microsystems. :)

SUN'S GUSTAFSON ON ENVISIONING HPC ROADMAPS FOR THE FUTURE
http://www.taborcommunications.com/hpcwire/hpcwireWWW/05/0114/109060.html

[...]
You may recall that Sun acquired the part of Cray that used to be
Floating Point Systems. When I was at FPS in the 1980s, I managed the
development of a machine called the FPS-164/MAX, where MAX stood for
Matrix Algebra Accelerator. It was a general scientific computer with
special-purpose hardware optimized for matrix multiplication (hence,
dense matrix factoring as well). One of our field analysts, a
well-read guy named Ed Borasky, pointed out to me that our architecture
had precedent in this machine developed a long time ago in Ames,
Iowa. He showed me a collection of original papers reprinted by Brian
Randell, and when I read Atanasoff's monograph I just about fell off
my chair. It was a SIMD architecture, with 30 multiply-add units
operating in parallel. The FPS-164/MAX used 31 multiply-add units,
made with Weitek parts that were about a billion times faster than
vacuum tubes, but the architectural similarity was uncanny. It gave
me a new respect for historical computers, and Atanasoff's work in
particular. And I realized I shouldn't have been such a cynic about
the historical display at Iowa State.
[...]

I can see why you're a fan. ;-)

Tom


M. Edward (Ed) Borasky

1/24/2007 4:22:00 AM


Tom Pollard wrote:
>
> On Jan 23, 2007, at 10:57 AM, M. Edward (Ed) Borasky wrote:
>> I think it's time I posted my "we've been here before" rant about
>> concurrency and massively parallel computers on my blog. :) For
>> starters, do a Google search for the writings of Dr. John Gustafson,
>> who is now a senior researcher at Sun Microsystems. :)
>
> SUN'S GUSTAFSON ON ENVISIONING HPC ROADMAPS FOR THE FUTURE
> http://www.taborcommunications.com/hpcwire/hpcwireWWW/05/0114/109060.html
>
> [...]
> You may recall that Sun acquired the part of Cray that used to be
> Floating Point Systems. When I was at FPS in the 1980s, I managed the
> development of a machine called the FPS-164/MAX, where MAX stood for
> Matrix Algebra Accelerator. It was a general scientific computer with
> special-purpose hardware optimized for matrix multiplication (hence,
> dense matrix factoring as well). One of our field analysts, a
> well-read guy named Ed Borasky, pointed out to me that our
> architecture had precedent in this machine developed a long time ago
> in Ames, Iowa. He showed me a collection of original papers reprinted
> by Brian Randell, and when I read Atanasoff's monograph I just about
> fell off my chair. It was a SIMD architecture, with 30 multiply-add
> units operating in parallel. The FPS-164/MAX used 31 multiply-add
> units, made with Weitek parts that were about a billion times faster
> than vacuum tubes, but the architectural similarity was uncanny. It
> gave me a new respect for historical computers, and Atanasoff's work
> in particular. And I realized I shouldn't have been such a cynic about
> the historical display at Iowa State.
> [...]
>
> I can see why you're a fan. ;-)
>
> Tom
>
>
>
Yeah, John and I worked together at FPS. But what I'm getting at is that
John and I (and others within FPS and elsewhere in the supercomputing
segment) would have endless discussions about the future of
high-performance computing, with some saying it just *had* to be
massively parallel SIMD, others saying it just *had* to be moderately
parallel MIMD, and others saying, "programming parallel vector machines
is just too hard -- the guys over at Intel are doubling the uniprocessor
clock speed every 18 months -- in five years you'll have a Cray on your
desktop".

That was "only" about 20 years ago ... I've got a 1.3 gigaflop Athlon
Tbird that's still more horsepower than I need, but back then if you
wanted 1.3 gigaflops you had to chain together multiple vector machines.
But my real point is that no matter what solution you proposed, "the
programmers weren't ready", "the languages weren't ready", "the
compilers weren't ready", "the architectures weren't ready", "the
components weren't ready", etc. I hear the same whining today about
dual-cores, clusters, scripting languages and today's generation of
programmers. And it's just as bogus now as it was then. Except that
there's 20 years more practical experience and theoretical knowledge
about how to do parallel and concurrent computing. So actually it's
*more* bogus now!

--
M. Edward (Ed) Borasky, FBG, AB, PTA, PGS, MS, MNLP, NST, ACMC(P)
http://borasky-research.blo...

If God had meant for carrots to be eaten cooked, He would have given rabbits fire.


M. Edward (Ed) Borasky

1/24/2007 4:35:00 AM


Ron M wrote:
> Bil Kleb wrote:
>
>> How to best evolve Ruby to accommodate 80-core
>> CPU programming?
>>
>
> A version of NArray that parallelizes its
> work (a task that could be made easier using
> OpenMP or similar) would work especially well
> if the CPU-intensive part of your application
> is math-heavy.
>
> For more mundane tasks (web serving), an
> obvious answer would be to simply fork
> off 80 Ruby processes, which would use
> the 80 cores efficiently.
>
Uh ... be careful ... processes take up space in cache and in RAM. The
only thing that would be sharable is the memory used for code ("text" in
Linux terms). I think what you want is *lightweight* processes a la
Erlang, which Ruby doesn't have yet. It does have *threads*, though.
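
For what it's worth, the thread API is pleasant enough -- just keep in
mind that in the current C implementation these are green threads
multiplexed onto a single native thread, so they buy you concurrency,
not 80-core parallelism:

    require 'thread'

    queue = Queue.new
    workers = (1..4).map do
      Thread.new do
        while (n = queue.pop) != :done
          n * n  # stand-in for CPU-bound work
        end
      end
    end

    (1..100).each { |n| queue << n }
    workers.each { queue << :done }
    workers.each { |t| t.join }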

--
M. Edward (Ed) Borasky, FBG, AB, PTA, PGS, MS, MNLP, NST, ACMC(P)
http://borasky-research.blo...

If God had meant for carrots to be eaten cooked, He would have given rabbits fire.