Robert Klemme
2/21/2009 10:00:00 PM
On 21.02.2009 22:33, Roger Pack wrote:
>> Float computer math is full of those effects - and so are the archives
>> of this list with articles similar to yours.
>
> I hate those subtle rounding errors. Any thoughts if I were to suggest
> Ruby by default use BigDecimal instead of float?
Yes. There are several reasons against it, IMHO:
1. existing code might be broken
2. efficiency (Float is faster than BigDecimal)
3. standards (AFAIK the Float implementation is backed by the IEEE 754
standard, while I am not sure whether there is an equivalent standard
for BigDecimal).
4. limited use: while your particular example will work as you expect,
BigDecimal still cannot represent an arbitrary real number (in the
mathematical sense) but only a finite decimal fraction => rounding
errors for other numbers (1/3, sqrt(2), pi, ...) will still be there
when switching from Float to BigDecimal by default.
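To make point 4 concrete, here is a small sketch (standard bigdecimal
library only): BigDecimal does fix the classic 0.1 + 0.2 surprise, but a
number like 1/3 has no finite decimal expansion, so you have to pick a
precision and accept a rounding error there too.

```ruby
require 'bigdecimal'
require 'bigdecimal/util'

# Float: 0.1 has no exact binary representation
puts 0.1 + 0.2 == 0.3                         # => false

# BigDecimal handles finite decimal fractions exactly
puts "0.1".to_d + "0.2".to_d == "0.3".to_d    # => true

# ...but 1/3 is not a finite decimal, so we must choose a precision
# (here: 20 significant digits) and rounding creeps back in
third = BigDecimal(1).div(BigDecimal(3), 20)
puts third * 3 == 1                           # => false
```

So the class of exactly representable numbers grows, but it never covers
all of them.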
Sorry to disappoint you, but I'd say this is the learning experience
everybody in programming has to undergo: floats are inherently inexact.
More than that, computer representations of mathematical constructs
like numbers are always imperfect, and it is the responsibility of you,
the writer of a program, to deal with this mismatch properly.
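Two common ways to deal with the mismatch, just as illustration (the
helper name and the tolerance value are made up for this example):
compare floats with a tolerance instead of ==, or use Rational when the
values really are ratios.

```ruby
# Hypothetical helper: tolerance-based comparison instead of ==
def approx_equal?(a, b, epsilon = 1e-9)
  (a - b).abs < epsilon
end

puts 0.1 + 0.2 == 0.3               # => false
puts approx_equal?(0.1 + 0.2, 0.3)  # => true

# Or sidestep binary floats entirely with the built-in Rational class
puts Rational(1, 10) + Rational(2, 10) == Rational(3, 10)  # => true
```

Which one is appropriate depends on the application, which is exactly
why a global default is the wrong place to decide this.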
One solution which comes to mind: an extension, or rather a command
line switch, which changes the default behavior. But this is not
without problems either: library code might suddenly start to fail
when Ruby is started with the BigDecimal flag, etc.
Kind regards
robert