Michael W. Ryder
5/27/2008 6:24:00 PM
Tim Hunter wrote:
> Michael W. Ryder wrote:
>> I can not see how you can say that 0.1 != 1/10. I tried looking up the
>> standard but the paper I looked at, about implementing the standard in a
>> language, made no mention of converting floating point numbers to
>> rational numbers.
>
> Ruby uses binary numbers, not decimal numbers. In binary, 1/10 cannot be
> represented exactly, just like 1/3 cannot be represented exactly in decimal.
>
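(The quoted point is easy to verify in Ruby itself; printing 0.1 with extra digits exposes the binary approximation that Float actually stores. This snippet is just an illustration, not from the original thread.)

```ruby
# 0.1 has no finite binary expansion, so a double-precision Float
# stores the nearest representable value, which is not exactly 1/10.
puts sprintf("%.20f", 0.1)   # trailing digits are not all zero

# The rounding errors surface in ordinary arithmetic:
puts 0.1 + 0.2 == 0.3        # prints false
```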
But if I told you that 10 dimes do not equal $1 you would freak. I
know that computers do not normally represent fractional numbers
exactly, but part of that is the fault of the programmers and chip
designers opting for "good enough". If all arithmetic on a computer
were done using something like BCD, there would not be this problem.
Agreed, in the deep dark past using something like BCD was noticeably
slower, but with the raw power available today, most of which is
wasted, there is no reason to accept second or third best.
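(For what it's worth, Ruby's standard library already offers decimal arithmetic along these lines. BigDecimal keeps digits in base 10 rather than strictly BCD, but the effect is the same; a sketch, with the Float loop shown only for contrast:)

```ruby
require 'bigdecimal'

# Decimal arithmetic: 0.1 is stored exactly, so ten additions
# come out to exactly 1.
sum = BigDecimal("0")
10.times { sum += BigDecimal("0.1") }
puts sum == 1        # prints true

# The same loop with binary floats drifts:
fsum = 0.0
10.times { fsum += 0.1 }
puts fsum == 1.0     # prints false
```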
Anyway, there is no way for you, me, or a computer program to know if
0.1 means exactly 1/10 or .0999999999999999 or .100000000000001, so one
can only choose one and hope that it is the right choice. Personally,
as someone who works in business, I prefer the 1/10 solution, as that
is the way they expect it in business. They do not want to see 10 * .1
equaling .99999999 or 1.00000001; they have to see 1.
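(The 1/10 treatment is available today via Ruby's Rational class, which stores the value as an exact fraction; on Ruby 1.8 it lives in the standard library behind require 'rational'. A minimal sketch:)

```ruby
# Rational keeps 1/10 as an exact numerator/denominator pair,
# so ten dimes make exactly one dollar, with no drift either way.
dime = Rational(1, 10)
total = dime * 10
puts total == 1      # prints true: not .99999999, not 1.00000001
```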