
0.1 + 0.2 !== 0.3


dgabrahams

Recommended Posts

Hi All,

 

In this video:

 

http://www.youtube.com/watch?v=lTWGoL1N-Kc

 

Douglas Crockford states that 0.1 + 0.2 !== 0.3. I performed this test:

<script>
var test = 0.1 + 0.2;
document.write("test = " + test);
</script>

And the answer on the screen was: 0.30000000000000004. This of course proves his point. However:

<script>
var test = 0.1 + 0.3;
document.write("test = " + test);
</script>

Gives a result of: 0.4, which is correct. Why is this so? I know it has something to do with the implementation of the floating point standard, but can anyone provide any further information? Why does it work for 0.1 + 0.3?


It is amazing to me that Javascript gets by with a single number type - IEEE 754 double-precision floating point - for everything, without a separate integer type. What you are seeing is ordinary: decimal fractions like 0.1 usually can't be stored exactly in binary, so sometimes the rounding errors show up in a sum and sometimes they cancel out. There is no special fine-tuning going on; 0.1 + 0.3 just happens to round to the double closest to 0.4.


The explanation is here in detail: http://en.wikipedia.org/wiki/Floating_point#Representable_numbers.2C_conversion_and_rounding

 

You can't represent 0.1 as a finite sum of powers of two (2^N where N is a positive or negative whole number), so the computer stores the closest approximation it can. Often the rounding error only becomes noticeable after adding another number whose rounding error lies in the same direction.
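To make the effect visible, here is a small sketch (mine, not from the video or the Wikipedia page) that prints the values actually stored:

```javascript
// Doubles can only store finite sums of powers of two, so 0.1, 0.2 and
// 0.3 are all stored as the nearest representable approximations.
console.log((0.1).toPrecision(21)); // slightly above 0.1
console.log((0.2).toPrecision(21)); // slightly above 0.2
console.log((0.3).toPrecision(21)); // slightly below 0.3

// 0.1 and 0.2 both err upward, so their sum rounds to the double just
// above 0.3; in 0.1 + 0.3 the errors point in opposite directions and
// the sum rounds to the double that displays as 0.4.
console.log(0.1 + 0.2 === 0.3); // false
console.log(0.1 + 0.3 === 0.4); // true
```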


The explanation is here in detail: http://en.wikipedia.org/wiki/Floating_point#Representable_numbers.2C_conversion_and_rounding

 

You can't represent 0.1 as a finite sum of powers of two (2^N where N is a positive or negative whole number), so the computer stores the closest approximation it can. Often the rounding error only becomes noticeable after adding another number whose rounding error lies in the same direction.

 

Thanks, I had a feeling it was something like this - I have done some work with binary. It feels like quite a flaw in the language: if the maths doesn't work exactly, that could cause trouble (or maybe it works too well, creating approximations so accurate they are difficult to work with?).

Edited by supasoaker

Generally, when you are dealing with computed values, you never assume perfect accuracy will be achieved unless you are working with integers, bits, or a special situation such as an arbitrary-precision type (such as Java's BigDecimal).


Generally, when you are dealing with computed values, you never assume perfect accuracy will be achieved unless you are working with integers, bits, or a special situation such as an arbitrary-precision type (such as Java's BigDecimal).

 

Thanks - didn't know about BigDecimal, I don't want perfect accuracy, I just want 0.1 + 0.2 to equal 0.3!!! :P


Well, the old computer joke is that 2 + 2 = 3.999999.

 

The primary programming situation where this is unforgivable is financial accounting, where you want everything to be precise and add up exactly. You can't allow the total for the month to be a penny off.

Edited by davej

Well, the old computer joke is that 2 + 2 = 3.999999.

 

The primary programming situation where this is unforgivable is financial accounting, where you want everything to be precise and add up exactly. You can't allow the total for the month to be a penny off.

Actually, that's not a computer joke - it's a joke about engineers. Back before calculators were easily accessible, engineers used something called a slide rule. It handled multiplication, division and similar operations, and if you worked out 2 × 2 on one you could read off 3.9928.

 

Computers are actually very accurate; the problem comes when you don't specify how many decimal places you want. For money you rarely need more than three digits after the point, so one approach is to round to the third decimal place after each operation.

 

In Javascript, a way to round to the third decimal place is the following:

 

var x = 0.1 + 0.2;
x = Math.round(x * 1000);
x = x / 1000; // divide rather than multiply by 0.001, which is itself inexact

Some languages let you round to a chosen digit directly, but JavaScript's Math.round only rounds to the nearest whole number, so you have to scale up and back down like this. (toFixed(n) also exists, but it returns a string rather than a number.)
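The same idea generalises to a small helper (a sketch; the name roundTo is my own, not a built-in):

```javascript
// Round x to n decimal places. Dividing at the end is deliberate: the
// division rounds to the closest possible double, whereas multiplying
// by a literal like 0.001 (itself inexact) can reintroduce error.
function roundTo(x, n) {
  var factor = Math.pow(10, n);
  return Math.round(x * factor) / factor;
}

console.log(roundTo(0.1 + 0.2, 3)); // 0.3
```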

 

Hardly any economic, scientific or engineering application needs 20 decimal places, so in practice these errors rarely matter.
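As an aside on the financial case mentioned earlier: a common approach (not from this thread) is to avoid fractional dollars entirely and keep money as a whole number of cents, which doubles represent exactly:

```javascript
// Sketch: store money as integer cents so addition is exact, and only
// convert to dollars when formatting for display. Whole numbers are
// exact in a double up to Number.MAX_SAFE_INTEGER (2^53 - 1).
var priceA = 10; // $0.10
var priceB = 20; // $0.20
var totalCents = priceA + priceB; // exactly 30

console.log((totalCents / 100).toFixed(2)); // "0.30"
```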



Foxy Mod - Thanks for the rounding idea, that should help solve this as far as implementation is concerned.

 

More Human Than Human - thanks for the link!

 

Believe it or not, I ran into this issue in MS Excel years ago - no matter how accurate I made some numbers, they never added up to a round figure, and I was tearing my hair out. This was of course before I knew how the numbers were actually represented and about the shortcomings of the floating point standard...

Edited by supasoaker
