Microrant on Ruby's Math Skills

On Sat, Jan 28, 2012 at 04:32:14AM +0900, Tony A. wrote:

I think the real path forward here would be to propose that BigDecimal is
loaded by default in Ruby 2.0, and to propose some type of literal
representation for them, such as 1.1D that was suggested before.

The thing that stops people from using BigDecimal right now is the
inconvenience of doing BigDecimal("1.1"), and the output of BigDecimal#to_s
and #inspect is difficult to interpret.

Use BigDecimal#to_s('F') to get something that looks like "1.1" (instead
of BigDecimal#to_s with no arguments, which gives you something that
looks like "0.11E1"). Of course, that "to_s('F')" is a pain in the
fourth point of contact to type with any regularity.
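
For anyone following along in irb, the difference looks roughly like this
(output as printed by the 1.9-era bigdecimal; newer versions use a lowercase e):

require 'bigdecimal'

BigDecimal("1.1").to_s       # => "0.11E1"
BigDecimal("1.1").to_s('F')  # => "1.1"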

On Fri, Jan 27, 2012 at 03:05, Josh C. [email protected] wrote:

They are unsuitable for more uses than they are suitable for, and they
contradict the idea that abstractions shouldn’t leak implementation.

I think I understand what you’re getting at here, but it’s potentially
misleading. There’s not an abstraction leak when you keep in mind that the
framework you’re in is that 1.1 is a float, and therefore has a certain
representation within the machine and is subject to manipulations within
some specified system.

If you’re viewing things as: 1.1 is a concrete realisation, within the
machine, of the abstract mathematical notion of the precise rational number
11/10, then yes there is a leak across the mathematics-machine boundary,
because dealing with the float 1.1 is not the same as symbolically dealing
with 11/10 in standard mathematics. Treating the former as the latter will
cause you problems. Much like there are instances where dealing with a Hash
as an abstract map will cause problems when you encounter things like
hash collisions. (Incidentally, that’s why I think it’s probably better
that a Hash is a Hash and not a Map.)

Apologies if this isn’t too relevant to you personally, Josh, I just think
this hasn’t been all that clear so far. (Perhaps I’m off the mark, too!)

On Jan 27, 2012, at 1:39 PM, Chad P. wrote:

If you think of 1.1 as notation for a much more complex floating point
number, which is not the same as 1.1, that doesn’t mean the abstraction
doesn’t exist: it means you’re unravelling it in your head to accommodate
the implementation’s divergence from decimal 1.1. In essence, the fact
it looks like 1.1 (but isn’t) is the abstraction itself.

I think you are right about the leakiness of the floating point internal
representation vs. external representation.

What I do find surprising when this discussion pops up (and it pops up
with frightening regularity on ruby-talk) is that there are so many
programmers who are unaware of the issues surrounding floating point
representation. This is not a Ruby issue but is actually a very common
situation across a huge number of programming languages.
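
For anyone who hasn't hit it yet, a couple of lines in irb make the issue
visible: the literal 1.1 has no exact binary representation, so a standard
IEEE 754 double stores the nearest value it can (the digits below are what a
64-bit double prints):

"%.20f" % 1.1     # => "1.10000000000000008882"
1.1 + 2.2         # => 3.3000000000000003
1.1 + 2.2 == 3.3  # => false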

I found http://rosettacode.org/wiki/Literals/Floating_point to be
an interesting laundry list of floating point literals. Almost every
language defaults to having decimal floating point literals.
A couple variations:

– ISO C99 has hexadecimal floats: 0x1.fp3 = ( (1 + 15/16) * 2^3 )
– PL/I has binary floats: 111.0101e7b = (111.0101 * 2^7) = (7.3125 * 2^7)
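
Ruby has no hex or binary float literal, but both examples are easy to
sanity-check with plain arithmetic (a quick sketch, nothing more):

# C99's 0x1.fp3: hex mantissa 1.f scaled by 2**3
(1 + 15.0/16) * 2**3        # => 15.5
# PL/I's 111.0101e7b: binary mantissa 111.0101 scaled by 2**7
(0b1110101 / 16.0) * 2**7   # => 936.0 (i.e. 7.3125 * 128)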

I’m sure there are still some gotchas regarding the mapping of abstract
hex or decimal floats into the reality of the underlying hardware
representation.

A more common approach (but not universal) is support for fixed point
decimal literals and arithmetic. For example, most SQL implementations
have support for fixed point arithmetic and literals.

http://en.wikipedia.org/wiki/Fixed-point_arithmetic#Implementations

This is where the special comparison method
proposals make sense: if such a method can guarantee that it is accurate
up to a known, “standard” precision, it’s easy to think “Floats are as
they appear up to precision X,” and just move on with your life, because
it works; without them, we only have something like == as currently
implemented for Float, whose primary value (as far as I can see) is to
provide a tool for learning about the implementation of the Float type,
because there’s no simple rule of thumb for “accuracy up to precision X”.
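
A rough sketch of what such a comparison method could look like; Float has no
method like this today, and the name approx_eql? and the default number of
places are invented here purely for illustration:

class Float
  # Hypothetical helper: treat two numbers as equal if they agree
  # after rounding both to `places` decimal places.
  def approx_eql?(other, places = 9)
    round(places) == other.round(places)
  end
end

0.1 + 0.2 == 0.3              # => false
(0.1 + 0.2).approx_eql?(0.3)  # => true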

Why the ill-will towards Float#==? Despite the problems associated with
floating point representation and computation I don’t see how discarding
or modifying the semantics of #== would help the situation.

what we have is the need to implement a
comparison method of our own individual choosing every single time we
want to be able to rely on accuracy of decimal math.

Only if you insist on using floating point values as a substitute for
real decimal values (e.g. BigDecimal or something similar). Even then
you need to be aware of how the results of arithmetic computations
are going to be stored. What ‘value’ do you expect for this expression:

BigDecimal("1.0") / BigDecimal("3.0")

It can’t be an exact representation of the arithmetic result within
the context of BigDecimal. So you can switch to Rational:

Rational(1) / Rational(3)

Fantastic. You’ve now got 1/3 stored internally. What are you going
to do when you want to throw that up on a web page or export it to a
CSV file to be imported into a spreadsheet? Probably convert it to
a decimal floating point value but how exact do you want to get:

"%.60f" % (Rational(1)/Rational(3)).to_f
=> "0.333333333333333314829616256247390992939472198486328125000000"

Hmm. That introduces the decimal/binary problem. How about:

(Rational(1)/Rational(3)).to_d(20).to_s('f') #=> "0.33333333333333333333"
(Rational(1)/Rational(3)).to_d(30).to_s('f') #=>
"0.333333333333333333333333333333"
(Rational(1)/Rational(3)).to_d(70).to_s('f') #=>
"0.3333333333333333333333333333333333333333333333333333333333333333333333"
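
For what it's worth, BigDecimal can also produce the quotient directly at a
chosen precision via #div; the 20 significant digits below are just an
example, not a recommendation:

require 'bigdecimal'

BigDecimal("1.0").div(BigDecimal("3.0"), 20).to_s('F')
# => "0.33333333333333333333"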

Of course what happens when you want to compute something like square root
with rational values? The result can’t be exactly represented as a rational
so you are back to the representation problem:

Rational(2).to_d(20).sqrt(20).to_r
=>
(5656854249492380195206754896838792313/4000000000000000000000000000000000000)

No magic bullets. You still have to think about what format/representation
is appropriate for your use.

Note that a decimal “up to precision X” is also an abstraction, but at
least it is an abstraction that would leak far, far less often, mainly in
cases of things like rounding. I think the only way around that,
given the fact there are limits to how much RAM we have available, would
be to store rational literals (e.g. 2/3 instead of 0.666 . . .) somewhere
to provide a back-up method for rounding numbers.

Sure, you can use Ruby’s Rational class if you want, as shown above. Still
doesn’t get rid of the problems.

Someone tell me if I’m mistaken about some part of that – preferably
without invective.

I don’t think you are mistaken, but I also don’t have a handle on what you
think should happen or what is missing in Ruby. The issues surrounding
numeric computation (representation, overflow, accuracy, precision,
conversion) are inherent in the problem domain. Programmers need to be
aware of them and use appropriate tools as necessary: Fixnum, Bignum,
Float, BigDecimal, Rational, Complex, etc.

Gary W.

On Jan 27, 2012, at 3:52 PM, Chad P. wrote:

Look back at the beginning of what I said. It boils down to this:

One or two alternate comparison methods for Float that have known, clear,
easily reasoned boundaries between where it works and where it does not,
should be pretty easy to include in the Float implementation – and to
make succinct so we don’t have to deal with fugliness in our code at
every turn when dealing with decimal numbers.

Now we disagree. Trying to shoe-horn Floats into behaving like fixed point
numeric values is just the wrong approach. I understand wanting to have
a standard comparison method for Float but I don’t think that is a
useful solution when you really want fixed point math (i.e. wrong tool
for the job but still a useful tool to have for other jobs).

Tony’s suggestions seem like a better path to me:

I think the real path forward here would be to propose that BigDecimal is
loaded by default in Ruby 2.0, and to propose some type of literal
representation for them, such as 1.1D that was suggested before.

The thing that stops people from using BigDecimal right now is the
inconvenience of doing BigDecimal("1.1"), and the output of BigDecimal#to_s
and #inspect is difficult to interpret.

I do think though that the syntax/convenience thing is somewhat of a
distraction.

In any real program (i.e. not some one-off examples via irb), you aren’t
going to be coding explicit constructors for BigDecimal or any other
numeric type. It will be something more like:

sample.temperature = params['temperature']

Where you’ve previously defined the setter to do conversion from strings:

def temperature=(text_value)
  self['temperature'] = BigDecimal(text_value)
end

In Rails you can define a column to be of type :decimal and have the
framework manage all the conversion to/from BigDecimal. I think this is
the more typical way that fixed-point math would be handled in a real
program (i.e. via the framework or library).
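
In a migration that might look something like the following (the table,
column name, precision and scale here are only an example):

class AddTemperatureToSamples < ActiveRecord::Migration
  def change
    # :decimal columns round-trip through BigDecimal in ActiveRecord
    add_column :samples, :temperature, :decimal, :precision => 8, :scale => 2
  end
end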

Gary W.

On Sat, Jan 28, 2012 at 05:26:01AM +0900, Gary W. wrote:

– ISO C99 has hexadecimal floats: 0x1.fp3 = ( (1 + 15/16) * 2^3 )
– PL/I has binary floats: 111.0101e7b = (111.0101 * 2^7) = (7.3125 * 2^7)

I’m sure there are still some gotchas regarding the mapping of abstract
hex or decimal floats into the reality of the underlying hardware
representation.

There’s always the matter of limited room for precision in memory. The
question of what happens when you reach that limit then comes up (usually
resulting in truncation, I think).

A more common approach (but not universal) is support for fixed point
decimal literals and arithmetic. For example, most SQL implementations
have support for fixed point arithmetic and literals.

http://en.wikipedia.org/wiki/Fixed-point_arithmetic#Implementations

I’d be happy with a fixed point implementation that offers very
intuitive – or at least succinct and expressive – syntax.

Why the ill-will towards Float#==? Despite the problems associated with
floating point representation and computation I don’t see how discarding
or modifying the semantics of #== would help the situation.

It’s not ill will toward Float#== (at least for me). It’s disappointment
that, when I want to do something that fits the common case, I have to
implement it myself or use something like BigDecimal (which is far from
succinct in how it is used).

BigDecimal("1.0") / BigDecimal("3.0")

It can’t be an exact representation of the arithmetic result within
the context of BigDecimal. So you can switch to Rational:

Rational(1) / Rational(3)

These things are much more cumbersome to use than something I might
implement myself as a comparison method for Float – so for any cases
where the usage is nontrivial (that is, where the very verbose syntax
needs to be employed over and over and over again), they might actually
not be the best option. The point I was making was that the situation is
far from ideal, or even just mildly inconvenient a lot of the time.

(Rational(1)/Rational(3)).to_d(20).to_s('f') #=> "0.33333333333333333333"
(Rational(1)/Rational(3)).to_d(30).to_s('f') #=>
"0.333333333333333333333333333333"
(Rational(1)/Rational(3)).to_d(70).to_s('f') #=>
"0.3333333333333333333333333333333333333333333333333333333333333333333333"

Can you not see how ugly and cumbersome that is?

Of course what happens when you want to compute something like square root
with rational values? The result can’t be exactly represented as a rational
so you are back to the representation problem:

Rational(2).to_d(20).sqrt(20).to_r
=> (5656854249492380195206754896838792313/4000000000000000000000000000000000000)

No magic bullets. You still have to think about what format/representation
is appropriate for your use.

Thank you for making my point: it would be nice to have a “standard” way
of doing things where, for the majority of cases, the results are
unsurprising without having to consciously account for intermittently
weird results.

Note that a decimal “up to precision X” is also an abstraction, but at
least it is an abstraction that would leak far, far less often, mainly in
cases of things like rounding. I think the only way around that,
given the fact there are limits to how much RAM we have available, would
be to store rational literals (e.g. 2/3 instead of 0.666 . . .) somewhere
to provide a back-up method for rounding numbers.

Sure, you can use Ruby’s Rational class if you want, as shown above. Still
doesn’t get rid of the problems.

Are you even reading what I’m writing? My point is “Let’s see if we can
minimize the problem for the common case, with very clear boundaries on
when things go astray, as an alternative to the way it is currently done
by default – which is to have the abstraction leak past boundaries that
we can only really predict by doing binary math.”

Someone tell me if I’m mistaken about some part of that – preferably
without invective.

I don’t think you are mistaken, but I also don’t have a handle on what you
think should happen or what is missing in Ruby. The issues surrounding numeric
computation (representation, overflow, accuracy, precision, conversion) are
inherent in the problem domain. Programmers need to be aware of them and
use appropriate tools as necessary: Fixnum, Bignum, Float, BigDecimal,
Rational, Complex, etc.

Look back at the beginning of what I said. It boils down to this:

One or two alternate comparison methods for Float that have known, clear,
easily reasoned boundaries between where it works and where it does not,
should be pretty easy to include in the Float implementation – and to
make succinct so we don’t have to deal with fugliness in our code at
every turn when dealing with decimal numbers.

Thomas S. wrote in post #1041913:

So simple…
1.1 - 1.to_f == 0.1 #=> false
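
The reason, of course, is that neither side of that comparison is exactly
what the source text suggests; asking irb for more digits makes it plain
(output from a 64-bit IEEE 754 double):

1.1 - 1.to_f   # => 0.10000000000000009
"%.20f" % 0.1  # => "0.10000000000000000555"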

Does Rails scale?

It’s a silly question.

The person asking the question does not understand the problem that s/he
is attempting to solve.

This thread reminds me of that.

Did the OP provide any context for the problem, other than that Pi, 1/3,
Infinity, and 0.1 are just approximations in the computer world?

For those of you who are now “jaded” by this newfound knowledge, please
consider: the slide rule was accurate enough to land humans on the moon.

Good god! What problem are you trying to solve?

The floating point capability on modern low-end hardware is
exceptionally fantastic.

One of the most important performance metrics is generally considered to
be floating point operations per second.

Finally, keep in mind many of the people involved in designing and
implementing these standards are smarter than you :wink:

On Sat, Jan 28, 2012 at 06:24:35AM +0900, Gary W. wrote:

Now we disagree. Trying to shoe-horn Floats into behaving like fixed point
numeric values is just the wrong approach. I understand wanting to have
a standard comparison method for Float but I don’t think that is a
useful solution when you really want fixed point math (i.e. wrong tool
for the job but still a useful tool to have for other jobs).

That may be the case, but the problem here is that we lack a succinct,
fairly intuitive syntax for fixed point numbers, apparently with no easy
way to provide it. We’re stuck with either making everybody live with
totally unobvious and cumbersome ways to do fixed point math (which means
very few people will use them, even when they should), or rewriting the
parser (apparently) – unless we provide a couple of least-surprise
comparison methods for Float as I suggested.

Pick one.

. . . or let me know if there’s another option. Don’t forget that the
“rewriting the parser” choice is likely to run into stiff opposition.

I do think though that the syntax/convenience thing is somewhat of a
distraction.

Really? I think it’s a huge stumbling block to adoption amongst casual
coders, and an invitation for them to periodically show up on this mailing
list and complain about how math in Ruby is “broken”.

  self['temperature'] = BigDecimal(text_value)
end

. . . and by “you” you mean me, evidently, or you, or one of the other
people who already knows about BigDecimal and the limitations of IEEE
standard floating point implementations. You may not be aware of this, but
I’m pretty sure half (or more) of the people learning Ruby right now either
don’t know about the specifics of the problem and how to get around it or
have long since forgotten having heard about it and need to be reminded.

In rails you can define a column to be of type :decimal and have the framework
manage all the conversion to/from BigDecimal. I think this is the more typical
way that fixed-point math would be handled in a real program (i.e. via the
framework or library).

That assumes . . .

  1. you’re using Rails

  2. you know about this approach and the problem it fixes

On Fri, Jan 27, 2012 at 1:32 PM, Tony A. [email protected] wrote:

I think the real path forward here would be to propose that BigDecimal is
loaded by default in Ruby 2.0, and to propose some type of literal
representation for them, such as 1.1D that was suggested before.

The thing that stops people from using BigDecimal right now is the
inconvenience of doing BigDecimal("1.1"), and the output of BigDecimal#to_s
and #inspect is difficult to interpret.

lol, reminds me of http://twitter.com/#!/tpope/status/160389717244915712

On Thursday, January 26, 2012 7:05:39 PM UTC-5, Garthy D wrote:

Hi all,

Such a large discussion from a simple initial post that was, in all
likelihood, an attempt to troll for as many responses as possible. It
looks like it has been wildly successful.

There was no attempt to troll -- what a boring concept. It was exactly as
stated, a tiny rant, born of a mildly frustrating debugging session that
boiled down to this. Honestly, I half expected exactly 0 replies. My other
half expected maybe a few “Yea, been there.” or “That’s floating point,
don’t you know!”, but that’s about it. I did not in the least expect to be
so delightfully surprised by such an interesting conversation. One that
even led to some improvements to my code -- which is always a good thing.

And yes I am aware of IEEE 754 and have been, oh… since at least '85 or
so, probably first picked up info about it from reading K&R — over 25
years, and I’m still debugging the damn things!!! :wink:

On Sat, Jan 28, 2012 at 10:43 AM, Peter V. [email protected] wrote:

That means to remain truly backwards compatible, we would need something
like:

1.1 #=> Float (machine dependent)
1.1BD #=> BigDecimal ("unlimited" size)
1.1D #=> Decimal (e.g. picking the standardized IEEE 754-2008 Decimal128
format)

I don’t think it’d be very Ruby like to have a separate literal syntax
depending on the size of the number. Some precedent:

1000000000000000.class
=> Fixnum

10000000000000000000000000000000000000000000000000.class
=> Bignum

On Fri, Jan 27, 2012 at 8:51 AM, Matthias Wächter [email protected] wrote:

In theory, there is already a library from Intel
[http://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library/]
one can use to do decimal floating-point math.

wow, thanks for the update. i hope ruby can indeed support that even
if i need to recompile manually.

If I understand correctly, the IEEE 754-2008 decimal formats are fixed
size, so even the largest of them (128 bit, the decimal128 floating-point
format described on Wikipedia) is limited to 34 significant digits.

But the current Ruby BigDecimal implementation can go well beyond that:

$ ruby -v
ruby 1.9.3p0 (2011-10-30 revision 33570) [i686-linux]
002:0> require 'bigdecimal'
=> true
003:0> require 'bigdecimal/util'
=> true
005:0> a =
BigDecimal.new("123456789012345678901234567890123456789012345678901234567890")
=> #<BigDecimal:985aa30,'0.1234567890 1234567890 1234567890 1234567890
1234567890 123456789E60',63(72)>
006:0> b = a + 1
=> #<BigDecimal:9859324,'0.1234567890 1234567890 1234567890 1234567890
1234567890 1234567891E60',63(144)>
007:0> b - a
=> #<BigDecimal:98639a0,'0.1E1',9(81)>
008:0> (b - a).to_s
=> "0.1E1"
009:0> (b - a).to_s('F')
=> "1.0"

That means to remain truly backwards compatible, we would need something
like:

1.1 #=> Float (machine dependent)
1.1BD #=> BigDecimal ("unlimited" size)
1.1D #=> Decimal (e.g. picking the standardized IEEE 754-2008 Decimal128
format)

HTH,

Peter

On Sat, Jan 28, 2012 at 11:24 PM, Tony A. [email protected] wrote:

format)

Tony A.

Yeah, but Fixnum and Bignum don’t have different behaviour. I am pretty
sure they are substitutable for each other in every way; they just handle
different domains for performance reasons (transparent to the user, without
any reasonable exceptions). Float and BigDecimal, however, do have
different behaviour and hence do have the need to specify which you want.
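
To make that difference concrete (requires 'bigdecimal'; the decimal sums
are exact, the binary ones are not):

require 'bigdecimal'

1.1 + 2.2 == 3.3                                            # => false
BigDecimal("1.1") + BigDecimal("2.2") == BigDecimal("3.3")  # => true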

On Jan 29, 2012, at 2:26 AM, Jon L. wrote:

to nearest (and if equidistant, round down),
to nearest (and if equidistant, round up), or
to nearest (and if equidistant, round so that the last digit is even), aka
banker’s rounding?

Only then can you know what to expect.

Agree 100%. I was just trying to illustrate that even if you have some
wonderfully crafted big decimal literal syntax, you still need to address
the context issues you just listed.

1.0D / 3.0D

Might be easier to type but it is only part (perhaps a small part) of
the puzzle.

Gary W.

On Jan 27, 2012, at 3:26 PM, Gary W. wrote:

What ‘value’ do you expect for this expression:

BigDecimal("1.0") / BigDecimal("3.0")

Decimal math operations need to use a default rounding context or require
one to set a rounding context before performing an operation.

Do you round towards +infinity, -infinity,
towards 0, away from 0,
to nearest (and if equidistant, round down),
to nearest (and if equidistant, round up), or
to nearest (and if equidistant, round so that the last digit is even), aka
banker’s rounding?

Only then can you know what to expect.
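
For reference, BigDecimal already exposes exactly those choices; a small
sketch, rounding 0.125 to two places under a few of the modes (the constant
names are the real ones, the example values are arbitrary):

require 'bigdecimal'

x = BigDecimal("0.125")
x.round(2, BigDecimal::ROUND_HALF_UP).to_s('F')    # => "0.13"
x.round(2, BigDecimal::ROUND_HALF_EVEN).to_s('F')  # => "0.12" (banker's rounding)
x.round(2, BigDecimal::ROUND_CEILING).to_s('F')    # => "0.13"
x.round(2, BigDecimal::ROUND_FLOOR).to_s('F')      # => "0.12"

# The default used elsewhere can be set globally:
BigDecimal.mode(BigDecimal::ROUND_MODE, BigDecimal::ROUND_HALF_EVEN)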

On Mon, Jan 30, 2012 at 6:56 AM, Chad P. [email protected] wrote:

rounding context before performing an operation.
wonderfully crafted big decimal literal syntax, you still need to address
the context issues you just listed.

1.0D / 3.0D

Might be easier to type but it is only part (perhaps a small part) of the
puzzle.

The point of offering something other than “all you get is floats and
libraries”, of course, is that a simple set of such rules is a lot easier
to deal with than the floating point mess that is common now.

It seems many people use the “floating point mess” without major
issues. So it cannot be as bad as you make it sound.

It’s a lot
easier to deal with “it truncates decimals past the Nth place” or any
other such simple rule than “Uh . . . it’s probably going to do some
weird shit when you start doing math – weird shit that requires you to
do binary math to predict the outcome. You really just shouldn’t use
it. Use some verbose library syntax instead. You like typing – right?”

Any other efficient floating point math would have to choose a limited
representation, so you would still get rounding issues that surprise those
who are not aware of numerics. That would not be any better than the
situation today, which has the advantage of a long-established standard,
so it is known to many. Making arbitrary-precision math the default would
make many applications that do not need such high precision slower than
necessary.

If you need to do precise math all the time, there are other tools better
suited for that. You’ll even find free / open source ones.

So at the moment I believe the current situation is the best
compromise. If you have suggestions for improvements which do not
make things worse for many when avoiding the unexpected behavior I am
curious to hear them. For a new language the situation is totally
different of course.

Cheers

robert

On Mon, Jan 30, 2012 at 12:22 AM, Robert K. [email protected] wrote:

It seems many people use the “floating point mess” without major
issues. So it cannot be as bad as you make it sound.

Floating points are a great choice for approximating continuous values and
thus working on things which require both high performance and
approximating real-world data sources. This includes things like games,
non-gaming related 3D applications, perceptual media including audio and
video codecs, and everything involved in working with perceptual media on
computers such as non-linear editing, speech synthesis, and speech
recognition.

People don’t often do these things in Ruby. I’d say they’re uncommon use
cases.

Something people do quite often in Ruby: deal with money. Money isn’t
continuous, it’s discrete. A decimal type is a much better choice for
dealing with money than a floating point.

As I said before, I think Ruby could benefit from decimal literals. They
can’t solve all use cases. They can’t be a replacement for rationals, such
as in the 1D/3D example (actually I’m not entirely convinced of that
either, but that’s a sidebar).

I have found many uses for BigDecimal before and have seen Fixnums used
where BigDecimal would probably be more appropriate (i.e. “count cents,
not dollars!”); having (Big)Decimal literals would probably change
people’s minds about that sort of thing.

On Mon, Jan 30, 2012 at 10:03:04AM +0900, Gary W. wrote:

the context issues you just listed.

1.0D / 3.0D

Might be easier to type but it is only part (perhaps a small part) of the
puzzle.

The point of offering something other than “all you get is floats and
libraries”, of course, is that a simple set of such rules is a lot easier
to deal with than the floating point mess that is common now. It’s a lot
easier to deal with “it truncates decimals past the Nth place” or any
other such simple rule than “Uh . . . it’s probably going to do some
weird shit when you start doing math – weird shit that requires you to
do binary math to predict the outcome. You really just shouldn’t use
it. Use some verbose library syntax instead. You like typing – right?”

On Mon, Jan 30, 2012 at 7:32 PM, Tony A. [email protected] wrote:

Floating points are a great choice for approximating continuous values and
thus working on things which require both high performance and
approximating real-world data sources. This includes things like games,
non-gaming related 3D applications, perceptual media including audio and
video codecs, and everything involved in working with perceptual media on
computers such as non-linear editing, speech synthesis, and speech
recognition.

That’s right, lots of use cases. And I bet that in these use cases
the programmer never even wants to test two floats for equality.
Representation problems or not, the probability that two points on a
continuum coincide is vanishingly small. So seeking a way to impose
such a test on the floating point type is inappropriate, as many have
said.

Gavin

On Jan 30, 2012, at 9:32 AM, Tony A. wrote:

non-gaming related 3D applications, perceptual media including audio and
video codecs, and everything involved in working with perceptual media on
computers such as non-linear editing, speech synthesis, and speech
recognition.

Let me add “statistics gathering” to the list…

People don’t often do these things in Ruby. I’d say they’re uncommon use
cases.

… and suddenly, you have a very common use case. I have no client
where it doesn’t happen.

Graphing is also not uncommon.

Something people do quite often in Ruby: deal with money. Money isn’t
continuous, it’s discrete. A decimal type is a much better choice for
dealing with money than a floating point.

Yes, but a dedicated money type that encodes the currency is also a much
better choice. Also, the standard in handling monetary values is not a
decimal representation anyway: you just encode the smallest unit as an
Integer.

As I said before, I think Ruby could benefit from decimal literals. They
can’t solve all use cases. They can’t be a replacement for rationals, such
as in the 1D/3D example (actually I’m not entirely convinced of that
either, but that’s a sidebar)
I have found many uses for BigDecimal before and have seen Fixnums used
where BigDecimal would probably be more appropriate (i.e. “count cents,
not dollars!”); having (Big)Decimal literals would probably change
people’s minds about that sort of thing.

Counting cents is perfectly valid, fits every database (even your fancy
NoSQL database that doesn’t have a decimal type) and is the proper way to
do math involving money. The base value is not the Dollar, it is the
cent. This doesn’t hold true if you venture into the realms of fractional
cents, but that’s really uncommon (and can be solved by using Rational).

If money is the reason against float, it’s the wrong reason. It may be a
tempting error, but it only shows that the developer in question did not
read a single line about handling money in software - which is another
common problem, but not one that you can actually fix by the choice of
literals.
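
A tiny sketch of that integer-cents style, for the record (names and numbers
are made up; the float appears only at the display edge):

# Store and add money as integer cents: exact, no rounding surprises
price_cents    = 1999                    # $19.99
quantity       = 3
subtotal_cents = price_cents * quantity  # => 5997

# Convert to dollars only when formatting for humans
format("$%.2f", subtotal_cents / 100.0)  # => "$59.97"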

On Mon, Jan 30, 2012 at 1:01 AM, Florian G. [email protected] wrote:

involving money. The base value is not the Dollar, it is the
cent.

Having worked on these sorts of systems, I really hate them. Because,
sorry, in the real world it’s dollars, not cents, that people actually
work with and are familiar with, you end up constantly multiplying and
dividing by 100, and you leave yourself open to all sorts of
off-by-two-orders-of-magnitude errors doing those conversions all over
the place.