In the spirit of advancing to a better computing environment, shouldn’t it
be time for Ruby to default to “real” decimal instead of float?
+1
Then newbies wouldn’t be caught again and again. More experienced
programmers would know what they’re doing and could explicitly choose
floats when speed requires it.
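For illustration, this is the classic surprise that catches newcomers (how the
float gets displayed varies between Ruby versions, but the comparison result
doesn’t):

    require 'bigdecimal'

    0.1 + 0.2           # => 0.30000000000000004  (binary float rounding)
    0.1 + 0.2 == 0.3    # => false
    BigDecimal("0.1") + BigDecimal("0.2") == BigDecimal("0.3")   # => true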
In the spirit of advancing to a better computing environment, shouldn’t it
be time for Ruby to default to “real” decimal instead of float?
I think so.
OTOH, to reduce the potential impact on existing code – including on
performance, where binary floating point, with its more prevalent hardware
support, is likely to outperform decimal in most cases – it might be better to
introduce a simple syntax for decimal floating point literals: e.g. an integer
or floating point expression with a trailing “d” (no intervening whitespace),
like 1.0d, would be treated as a decimal floating point value.
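To make the idea concrete: 1.1d doesn’t exist today, so the following just
shows the explicit spelling such a “d” literal would presumably be shorthand
for, next to the ordinary float literal:

    require 'bigdecimal'

    a = BigDecimal("1.1")   # what a hypothetical 1.1d could denote
    b = BigDecimal("2.2")
    a + b                   # => exactly 3.3, no binary rounding error
    1.1 + 2.2               # => 3.3000000000000003 with float literals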
I like that, though I would prefer to default to BigDecimal and require
an extra 0.0f for floats, but that’s just me.
Of course, in reality switching to BigDecimal by default would break so
much existing code I can’t imagine them ever doing it.
Re: “it causes problems when you eventually mix BD’s with floats” – I
would suggest trying to overcome these problems by keeping the result as a
BigDecimal when the two are mixed. One can check whether a float value
“matches a known decimal” with something like:
    require 'bigdecimal'

    def float_to_big_decimal(f)
      if ("%f" % f).to_f == f
        # the float round-trips through a short decimal string,
        # so treat it as the decimal value the user probably meant
        BigDecimal("%f" % f)
      else
        # otherwise preserve the float's full binary value
        BigDecimal("%.20g" % f)
      end
    end
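Just to sketch what that helper does with the two kinds of floats (assuming
the method above; results written in constructor form for readability):

    float_to_big_decimal(0.1)        # first branch => BigDecimal("0.1")
    float_to_big_decimal(0.1 + 0.2)  # else branch  => BigDecimal("0.30000000000000004441")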
It should work, but it doesn’t seem to:
BigDecimal.new("0.20000000000000007")
=> #<BigDecimal:1eeee78,'0.2000000000 0000006661E0',20(28)>
Is that last BigDecimal value a bug (the 6’s)?
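One guess about those 6’s: they look like the true value of the nearest
double leaking through “%.20g” (as in the else branch above) rather than a
BigDecimal bug, since 0.20000000000000007 isn’t exactly representable as a
binary float:

    "%.20g" % 0.20000000000000007   # => "0.20000000000000006661"
    # A BigDecimal built directly from the string "0.20000000000000007",
    # without going through a float, would keep the 7 exactly.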
That being said, computer science is so entrenched in normal IEEE floats
that I doubt it is realistic for BigDecimal to become the default.
We’re stuck.
So maybe the suggestion of adding the optional postfix, like
1.1d
would be nice, but probably nobody would use it, since it’s not the
default. Sigh.
I can see supporting parallel numeric towers - one precise and one
imprecise, performance-oriented - and letting the programmer decide
which to use. But in that case, I think there should only be explicit
conversions between them.
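A rough sketch of what “explicit conversions only” could look like with
today’s classes, using BigDecimal#to_f and the Float#to_d helper from
bigdecimal/util (which conversion helpers you’d standardize on is an open
choice):

    require 'bigdecimal'
    require 'bigdecimal/util'   # adds Float#to_d, String#to_d, etc.

    precise = BigDecimal("1.1")
    fast    = precise.to_f      # explicit step down into the float tower
    back    = fast.to_d         # explicit step back up into BigDecimal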
Maybe a command line parameter could specify whether you want “floats”
or “bigdecimals” by default. That has potential.
Also of note is that 1.9.2 currently displays floats “in their gory
details” if they don’t match a known decimal value. There’s also this
ticket:
http://redmine.ruby-lang.org/issues/show/2152
to display floats’ “gory details” as well as in readable form.
Currently we only have “gory details” in 1.9.2 and “readability” in
1.9.1, so the ticket suggests we should have both. Go there and +1 it if
you want it.
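For anyone unsure what the two forms look like, roughly (full output as
1.9.2 shows it; 1.9.1 would show the short form by default):

    x = 0.1 + 0.2
    x.to_s        # => "0.30000000000000004"   ("gory details")
    "%.15g" % x   # => "0.3"                   (readable form)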
Cheers.
[rp]