Thursday, February 19, 2009

Floating Point numbers and User Input

Computers can store decimal numbers in a variety of ways. One of the most widely used formats is IEEE 754 floating point.

Floating point numbers are popular because hardware support (in the form of coprocessors or microcode) makes floating point arithmetic very fast.

The tradeoff when using floating point numbers is that they typically represent a decimal value inexactly. So 0.001 may really be stored as something like 0.0010000000000000000208... As long as the imprecision is beneath your level of concern this isn't a problem.
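Here's a quick Python sketch (just an illustration, not any particular app's code) of that effect: the decimal literal you type is silently replaced by the nearest IEEE 754 double.

from decimal import Decimal

x = 0.001
print(repr(x))           # '0.001' -- repr shows the shortest string that round-trips
print(Decimal(x))        # the value actually stored: 0.001000000000000000020816...
print(0.1 + 0.2 == 0.3)  # False -- the same inexactness in a more familiar guise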

Of course, errors accumulate, particularly during repeated operations like multiplication, so even if this lack of precision doesn't bother you today it might bother you tomorrow when you start doing lots of multiplication.
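To see the accumulation concretely, here's another small Python sketch (illustrative only): multiplying by 1.001 a thousand times drifts away from the exactly computed result, because the stored 1.001 is already slightly off and every multiplication rounds again.

from decimal import Decimal

approx = 1.0
for _ in range(1000):
    approx *= 1.001                # each multiply rounds to the nearest double

exact = Decimal("1.001") ** 1000   # software decimal, 28 significant digits by default

print(approx)
print(exact)
# The leading digits agree, but the float result has drifted in the trailing places.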

The other form, popularized in COBOL, is an exact decimal format. Unfortunately, as far as I know, there is no hardware support for exact decimal arithmetic. So it's done in software and is much slower than floating point arithmetic.
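Python's decimal module is one example of such a software implementation (again, just an illustration, not necessarily what this app uses): arithmetic is exact in decimal terms, and the working precision is configurable.

from decimal import Decimal, getcontext

print(Decimal("0.1") + Decimal("0.2"))   # 0.3, exactly
print(Decimal("1.052") * 3)              # 3.156, exactly

getcontext().prec = 50                   # working precision is configurable
print(Decimal(1) / Decimal(3))           # 0.333... carried to 50 digits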

What's this got to do with input from the user?

I'm mulling over options for dealing with decimal numbers specified by the user (either directly, or indirectly via a file). So far I'm settling on exact decimals for accepting user input and storing it in files. My rationale is that the choice of internal representation isn't something the user should be bothered with; it's an artifact of how this particular app (and most apps) handles decimal arithmetic. Even if floating point really is good enough internally, why frustrate the user by turning the 1.052 they typed into 1.05200003333 when it's shown back to them?
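As a sketch of what I mean (read_decimal is a hypothetical helper, not this app's real code): parse the user's text straight into an exact decimal, and write str() of that value back out, so what they typed is what they see.

from decimal import Decimal, InvalidOperation

def read_decimal(text):
    """Parse a user-supplied decimal string exactly, rejecting anything else."""
    try:
        return Decimal(text.strip())
    except InvalidOperation:
        raise ValueError("not a decimal number: %r" % text)

value = read_decimal("1.052")
print(value)        # 1.052 -- exactly what the user typed
print(str(value))   # str() round-trips cleanly, so this is what gets written to the file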
