Bartc
5/2/2011 11:05:00 PM
"David Mathog" <dmathog@gmail.com> wrote in message
news:19e82a37-33d4-4837-8d1e-dd0d1901a435@h36g2000pro.googlegroups.com...
> I just observed that
>
> double var;
> double N; /* an integer */
> var=pow(2.0,N);
> fprintf(stdout,"%.*lf\n",0,var);
>
> Returns what appears to be the right value for everything up to
> N=1023. At 1024 it returns inf.
>
> Example:
> 2^256
> 115792089237316195423570985008687907853269984665640564039457584007913129639936
>
> (The highest value I could find in a quick web search, to verify that
> the long string was correct.)
>
> Verified that the double wasn't somehow miraculously carrying that
> much precision (linking in an arbitrary precision library or something
> along those lines), by repeating the calculation with
>
> var=pow(2.0,N)-100;
>
> As expected, it returned the same value for large N as did the
> original. So no miraculous precision in general.
>
> I assume that fprintf is somehow deriving all of these (correct)
> digits from the exponent when it goes to print the double. Other than
> the integer powers of 2, are there any other "extended precision"
> values that can be found this way? Is this standard behavior, or just
> something the gcc compiler does?
Try printing 2^256 in binary (which will be something like:
10000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000)
Then you can see there is very little precision involved.
Doubles have a limited maximum exponent; the exponent field is typically 11 bits
(a range of about 2048, split between negative and positive exponents), so it is
not surprising that it goes wrong at +1024.
--
Bartc