Michael Ulm
6/11/2008 7:15:00 AM
jzakiya wrote:
--big snip--
> Help me out here.
>
> On what basis do you say "that eventually Atkins method must be quicker"?
>
> The tests I've run in Ruby and Python with my different versions show
> they pull away from the SoA as N gets bigger. Why do you think the
> math I do is asymptotically bounded? Did you read that I took the SoA
> generator functions and implemented them with my methodology and beat
> the SoA by over a factor of 2. I'm limited to 1GB on my laptop, so I
> haven't been able to do Ns into the billions (yet) but people with 2-4
> GB of memory should be able to test my routines up to those sizes of
> N.
>
> I'm really hoping that some people will ACTUALLY rigorously test my
> versions against the SoA, which is why I released my findings. But all
> my tests show my methodology, in its various specific implementations,
> is 'better' in many aspects, and not just speed.
>
> My method is shorter and easier to code (in any language), easier to
> understand, extensible to accommodate better generator functions, and
> inherently able to be done in parallel. In fact, my method SCREAMS to
> be done in parallel, which I emphasized repeatedly in my paper.
>
> If you have the capacity, please SHOW ME some benchmarks that prove
> the SoA is better than my various versions beyond some point.
>
> Yeh, the primality tester I showed in my paper just fell out of the
> number theory I used to do the prime generators. I realized then it
> wasn't the best numerical method to test REALLY BIG numbers, but it
> was just so cool to demonstrate the conceptual brevity of reversing
> the process to generate primes to test numbers for being prime. I
> realized early that it was only practical for "normal" numbers, but
> for most people that's sufficient, and it's practical to do because
> it's short and easy to code and understand. I'm still thinking about
> ways to make numerically useful tests for large numbers using this
> number theory, so stay tuned.
>
Let me start off by saying that I really like your implementation of
your sieve, and I consider it quite useful for practical purposes. I'm
not quite sure whether it really differs from the trick used in wheel
factorization, but even so it is a very clear and very quick
implementation.
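For reference, here is the wheel-factorization trick I mean, as a little
Ruby sketch (a hypothetical mod-30 wheel plus naive trial division, not
your code): only the 8 residues coprime to 30 can be prime, so 22 of
every 30 numbers are skipped before any sieving work happens.

```ruby
# Residues mod 30 that are coprime to 30 -- the "spokes" of a 2*3*5 wheel.
WHEEL = [1, 7, 11, 13, 17, 19, 23, 29]

# Generate the wheel's prime candidates up to limit (plus 2, 3, 5 themselves).
def wheel_candidates(limit)
  cands = [2, 3, 5]
  (0..limit / 30).each do |k|
    WHEEL.each do |r|
      n = 30 * k + r
      cands << n if n > 1 && n <= limit
    end
  end
  cands
end

# Filter the candidates by trial division (only to keep the sketch short;
# a real sieve would cross off composites instead).
def wheel_primes(limit)
  wheel_candidates(limit).select do |n|
    (2..Integer.sqrt(n)).none? { |d| n % d == 0 }
  end
end
```

The point of the wheel is that the candidate list is already free of
multiples of 2, 3, and 5 before any further work is done.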
Now for the SoA, there are several ways to measure performance. One
way is to implement each algorithm and measure its execution time.
This is often quite practical, but has some problems: the result
clearly depends a lot on the programming language used, the particular
implementation of the algorithm, and possibly on externalities (like
I/O speed). This makes it somewhat controversial to compare algorithms
this way, although in practice it is often all one needs.
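To make the first approach concrete, Ruby's standard Benchmark module
does this kind of timing; the sieve below is a plain Sieve of
Eratosthenes standing in for whichever implementations are actually
being compared (it is not your code or the SoA):

```ruby
require 'benchmark'

# Plain Sieve of Eratosthenes, used here only as a stand-in.
def simple_sieve(limit)
  flags = Array.new(limit + 1, true)
  flags[0] = flags[1] = false
  (2..Integer.sqrt(limit)).each do |p|
    next unless flags[p]
    (p * p).step(limit, p) { |m| flags[m] = false }
  end
  (2..limit).select { |n| flags[n] }
end

# Wall-clock timing -- the numbers will vary with language, implementation,
# and machine, which is exactly the caveat above.
Benchmark.bm(12) do |x|
  x.report('simple sieve') { simple_sieve(1_000_000) }
end
```

One would add an `x.report` line per competing implementation and read
the wall-clock column, keeping in mind that the result measures the
implementations, not the algorithms in the abstract.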
A way to overcome the shortcomings of benchmarking is to analyze the
algorithm and examine just how much work is needed to perform it. This
can be done in a very fine-grained way, up to counting how many
additions and multiplications an algorithm needs (see D. E. Knuth's
"The Art of Computer Programming" for examples of this). For most
purposes such detailed analysis is overkill, and a simplified approach
is taken instead: one estimates the number of steps the algorithm has
to perform for very large inputs. This estimate is usually given in
big O notation. E.g. an algorithm of order O(n) would need about n
steps for an input of size n; more precisely, there exist constants a
and b such that the algorithm takes between a*n and b*n steps.
(Strictly speaking that two-sided bound is Theta(n); big O by itself
guarantees only the upper bound b*n.)
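A toy illustration of that counting, with a purely made-up helper: the
routine below tallies the basic steps of a linear scan, so the count
grows in direct proportion to n, which is exactly what O(n) captures.

```ruby
# Count the "steps" of a linear scan: one basic operation per element,
# so the total is exactly n -- an O(n) algorithm with constants a = b = 1.
def count_steps(n)
  steps = 0
  (1..n).each { steps += 1 }
  steps
end

count_steps(1_000)  #=> 1000
count_steps(2_000)  #=> 2000, doubling the input doubles the work
```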
The crown of 'fastest' algorithm usually goes to the one with the
smallest asymptotic behaviour, i.e. the one whose function inside the
O grows most slowly. This does not necessarily reflect benchmarking
behaviour in the real world for inputs of realistic sizes. Analysis
gives that your sieve is of order O(n), while the SoA is of order
O(n / log log n) (however, since the SoA is the more complex
algorithm, one would expect its associated constants to be greater
than the constants in your case). So by this measure the SoA is the
fastest known algorithm. This has nothing to do with benchmarking.
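To put the two orders side by side: the asymptotic speedup of
O(n / log log n) over O(n) is the factor log log n, and a quick
calculation (illustrative only) shows how slowly that factor grows --
small enough that larger constants can easily swallow it at realistic
input sizes.

```ruby
include Math

# log log n is the factor by which O(n / log log n) beats O(n).
[10**6, 10**9, 10**12].each do |n|
  printf("n = 10**%-2d  asymptotic speedup = %.2f\n",
         log10(n).round, log(log(n)))
end
# Even at n = 10**12 the factor is only about 3.3.
```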
So, in practical applications, one would just about always use a
simpler sieve instead of the SoA. You have demonstrated in your
benchmarks that for normal input sizes your sieve is faster, but the
SoA has the better asymptotic behaviour.
HTH,
Michael