comp.lang.ruby

Re: Happy Numbers (#93)

Daniel Martin

9/2/2006 1:06:00 PM

Paul Lutus <nospam@nosite.zzz> writes:

> One can't argue with results, but this is some kind of language-specific
> anomaly if one considers the operations (presumably) taking place behind
> the scenes. For arbitrary numbers n,p and the operation n^p,
>
> n^p = e^(log(n)*p)

That's not the way anyone does exponentiation with integers. Instead,
you represent the exponent as a sum of powers of 2 (which you already
have as its binary representation) and then do something like, e.g.,
this to find n ** 10:

n2 = n * n
n4 = n2 * n2
n8 = n4 * n4
n ** 10 = n8 * n2
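
Spelled out as runnable Ruby, with an arbitrary n thrown in just to
check that the decomposition above really gives n ** 10:

n  = 7
n2 = n * n       # n ** 2
n4 = n2 * n2     # n ** 4
n8 = n4 * n4     # n ** 8
puts n8 * n2 == n ** 10    # => true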

Still, n ** 2 does result in a multiplication, but I think that what
we're seeing is that n * n involves two lookups of the value of the
variable "n", whereas n ** 2 involves only one retrieval of n. That
extra lookup operation swamps any of the actual mathematical
operations. This also explains why with a large literal simple
multiplication is faster than doing ** 2.
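
A quick, non-authoritative way to poke at that last claim from a script
(the big literal below is just the Bignum value used elsewhere in this
thread):

require 'benchmark'

Benchmark.bmbm(10) do |x|
  x.report("literal * literal") { 100000.times { 413987654321987654321 * 413987654321987654321 } }
  x.report("literal ** 2")      { 100000.times { 413987654321987654321 ** 2 } }
end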

--
s=%q( Daniel Martin -- martin@snowplow.org
puts "s=%q(#{s})",s.map{|i|i}[1] )
puts "s=%q(#{s})",s.map{|i|i}[1]

8 Answers

Jean-Claude Arbaut

9/2/2006 1:21:00 PM


Daniel Martin wrote:
>
> That's not the way anyone does exponentiation with integers. Instead,
> you represent the exponent as a sum of powers of 2 (which you already
> have as its binary representation) and then do something like, e.g.,
> this to find n ** 10:
>
> n2 = n * n
> n4 = n2 * n2
> n8 = n4 * n4
> n ** 10 = n8 * n2
>
> Still, n ** 2 does result in a multiplication, but I think that what
> we're seeing is that n * n involves two lookups of the value of the
> variable "n", whereas n ** 2 involves only one retrieval of n. That
> extra lookup operation swamps any of the actual mathematical
> operations. This also explains why with a large literal simple
> multiplication is faster than doing ** 2.
>
BUT: a simple n**2 can fail, whereas n*n cannot. Look at the thread
"Bignum limits ?" to see the reason (and I think it's not only Ruby
1.8.5 that is affected).

Just to play with this in irb (the trailing ";0" just keeps irb from
echoing the huge values):

a = 2**262144; 0
b = a**20; 0
c = b**2; 0   # the ** form (the one that can fail, as above)
d = b*b; 0    # the plain multiplication

And just for the hell of it:
d

Paul Lutus

9/2/2006 5:32:00 PM


Daniel Martin wrote:

> Paul Lutus <nospam@nosite.zzz> writes:
>
>> One can't argue with results, but this is some kind of language-specific
>> anomaly if one considers the operations (presumably) taking place behind
>> the scenes. For arbitrary numbers n,p and the operation n^p,
>>
>> n^p = e^(log(n)*p)
>
> That's not the way anyone does exponentiation with integers.

Not small ones, anyway, but because we're considering a language in which
integers can grow without bound, my suggestion doesn't work.

> Instead,
> you represent the exponent as a sum of powers of 2 (which you already
> have as its binary representation) and then do something like, e.g.,
> this to find n ** 10:
>
> n2 = n * n
> n4 = n2 * n2
> n8 = n4 * n4
> n ** 10 = n8 * n2

Here is a general form that works on the same principle:

-------------------------------------

#!/usr/bin/ruby

if ARGV.size < 2
  puts "usage: number, power for n ** p"
  exit 0
end

# binary (square-and-multiply) exponentiation
def pwr(n, p)
  r = 1
  while p != 0
    r *= n if (p & 1 != 0)  # include this power of n when the low bit is set
    p >>= 1
    n *= n                  # square n for the next bit of p
  end
  return r
end

n, p = ARGV[0].to_i, ARGV[1].to_i

puts "#{n} ** #{p} = #{pwr(n, p)}"

-------------------------------------
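
A sample run (assuming, just for illustration, that the script is saved
as pwr.rb):

$ ruby pwr.rb 2 10
2 ** 10 = 1024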

For limited-precision numbers, as the exponent grows, this approach
yields diminishing returns compared to the logarithm method. But if the
numbers can be any size and no arbitrary-precision logarithm is
available, there is no contest: given integer arguments, something like
the above method is likely what Ruby uses for everything but floats.

> Still, n ** 2 does result in a multiplication, but I think that what
> we're seeing is that n * n involves two lookups of the value of the
> variable "n", whereas n ** 2 involves only one retrieval of n. That
> extra lookup operation swamps any of the actual mathematical
> operations. This also explains why with a large literal simple
> multiplication is faster than doing ** 2.

Well, speaking to the principle, for the case of either n ** 2 or n ** n,
only one memory access is required, because a fast register copy could be
made for either approach.

On the other hand, for an interpreted language like Ruby, the value of n
might really have been accessed twice, at relatively low speed, so I think
you are right -- this could easily explain the speed difference.

--
Paul Lutus
http://www.ara...

Paul Lutus

9/2/2006 5:58:00 PM


Paul Lutus wrote:

/ ...

> Well, speaking to the principle, for the case of either n ** 2 or n ** n,

Whoops. s/n ** n/n * n/

--
Paul Lutus
http://www.ara...

Arnaud Bergeron

9/2/2006 6:30:00 PM


I did my own benchmarking and my results are contrary to what was said before.

Artemis:~ anakha$ cat temp.rb
require 'benchmark'

n = 413

Benchmark.bmbm(10) do |x|
  x.report("n * n")  { 100000.times { n * n } }
  x.report("n ** 2") { 100000.times { n ** 2 } }
end
Artemis:~ anakha$ ruby temp.rb
Rehearsal ---------------------------------------------
n * n        0.360000   0.010000   0.370000 (  0.494618)
n ** 2       1.060000   0.020000   1.080000 (  1.425096)
------------------------------------ total: 1.450000sec

                 user     system      total        real
n * n        0.350000   0.000000   0.350000 (  0.508331)
n ** 2       1.060000   0.020000   1.080000 (  1.629833)


It shows that n * n is faster.

However this may be an architecture difference. (for reference I'm on
a mac, a G3)

Paul Lutus

9/2/2006 7:08:00 PM


Arnaud Bergeron wrote:

> I did my own benchmarking and my results are contrary to what was said
> before.

/ ... snip demonstration in which n * n is 3.2 x faster than n ** 2

> It shows that n * n is faster.
>
> However this may be an architecture difference. (for reference I'm on
> a mac, a G3)

Interesting. It seems to be implementation-related after all. On the other
hand, if this and the prior result are averaged, there seems little overall
difference, so for a typical programmer with an arbitrary platform, it
might be a toss-up.

Here are my tests (Linux on an Intel Xeon, clock 3 GHz):

-----------------------------------------

#!/usr/bin/ruby

require 'benchmark'

T=1000000
n = 413

Benchmark.bmbm(10) do |x|
  x.report("n * n")  { T.times { n * n } }
  x.report("n ** 2") { T.times { n ** 2 } }
end

-----------------------------------------

Result:

Rehearsal ---------------------------------------------
n * n        1.033333   0.000000   1.033333 (  0.615591)
n ** 2       2.083333   0.000000   2.083333 (  1.262649)
------------------------------------ total: 3.116667sec

                 user     system      total        real
n * n        1.133333   0.000000   1.133333 (  0.686552)
n ** 2       2.283333   0.000000   2.283333 (  1.377575)

In this example (n = 413), n * n is about twice as fast as n ** 2.

But ... if I choose a larger value for n (41398765432), so that the
computation is carried out using Bignum:

Rehearsal ---------------------------------------------
n * n        1.566667   0.016667   1.583333 (  0.953782)
n ** 2       1.633333   0.000000   1.633333 (  0.992337)
------------------------------------ total: 3.216667sec

                 user     system      total        real
n * n        1.583333   0.000000   1.583333 (  0.954614)
n ** 2       1.650000   0.016667   1.666667 (  1.007131)

A much smaller advantage for n * n when n is a Bignum.

Then, using a much bigger number (413987654321987654321):

Rehearsal ---------------------------------------------
n * n        1.816667   0.000000   1.816667 (  1.095099)
n ** 2       1.866667   0.000000   1.866667 (  1.128506)
------------------------------------ total: 3.683333sec

                 user     system      total        real
n * n        1.850000   0.016667   1.866667 (  1.139751)
n ** 2       1.850000   0.000000   1.850000 (  1.114387)

In this case, n * n loses its advantage by a slight margin.

Overall, a toss-up.

--
Paul Lutus
http://www.ara...

James Gray

9/2/2006 11:50:00 PM


On Sep 2, 2006, at 1:30 PM, Arnaud Bergeron wrote:

> However this may be an architecture difference. (for reference I'm on
> a mac, a G3)

My original test was on a MacBook Pro (an Intel Mac).

James Edward Gray II

James Gray

9/2/2006 11:54:00 PM


On Sep 2, 2006, at 1:30 PM, Arnaud Bergeron wrote:

> I did my own benchmarking and my results are contrary to what was
> said before.

Your test doesn't use Bignums. That could be the difference.

James Edward Gray II

Arnaud Bergeron

9/3/2006 1:09:00 AM


On 9/2/06, James Edward Gray II <james@grayproductions.net> wrote:
> On Sep 2, 2006, at 1:30 PM, Arnaud Bergeron wrote:
>
> > I did my own benchmarking and my results are contrary to what was
> > said before.
>
> Your test doesn't use Bignums. That could be the difference.

I redid it with a Bignum and got about 0.7s for each. So yeah, with
bignums there's not much difference.
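
A minimal sketch of such a re-run (the exact Bignum is an assumption;
any value outside the Fixnum range produces Bignum arithmetic):

require 'benchmark'

n = 413987654321987654321   # a Bignum (assumed value; anything past Fixnum range will do)

Benchmark.bmbm(10) do |x|
  x.report("n * n")  { 100000.times { n * n } }
  x.report("n ** 2") { 100000.times { n ** 2 } }
end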

> James Edward Gray II


--
"What is your function in life?" - Killer