Can you believe that 0.1 + 0.2 is not equal to 0.3?!
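Don't take my word for it; a quick check in a Python REPL (Python is what I'll use throughout this article) shows it:
>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1 + 0.2 == 0.3
False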
This weird behavior seems to happen in most programming languages. In this article, I'm going to investigate it. First, let's review single-precision and double-precision floating-point numbers, the two common formats for storing floating-point numbers (both in binary).
- Single Precision (binary32): The IEEE single-precision floating-point standard representation requires 32 bits, which may be numbered from 0 to 31, left to right. The first bit is the sign bit S, the next 8 bits are the exponent bits E, and the final 23 bits are the fraction F. The reduced width limits the precision that can be achieved.
- Double Precision (binary64): The IEEE double-precision floating-point standard representation requires 64 bits, which may be numbered from 0 to 63, left to right. The first bit is the sign bit S, the next 11 bits are the exponent bits E, and the final 52 bits are the fraction F (an implicit leading bit brings the significand to 53 bits of precision). Double precision is used where high arithmetic precision is required and numbers like -2/19 have to be represented; a short sketch after this list shows the bit layout in practice.
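To make the 1 + 11 + 52 layout of binary64 concrete, here is a minimal sketch using only Python's standard struct module (the helper name double_bits is my own, just for illustration):

import struct

def double_bits(x):
    # Reinterpret the 8 bytes of an IEEE-754 double as a 64-bit unsigned integer
    (n,) = struct.unpack('>Q', struct.pack('>d', x))
    bits = format(n, '064b')
    # Split into sign (1 bit), exponent (11 bits), fraction (52 bits)
    return bits[0], bits[1:12], bits[12:]

sign, exponent, fraction = double_bits(0.1)
print(sign, exponent, fraction)

For 0.1 the sign bit is 0, and the exponent and fraction fields hold the rounded binary expansion we'll dig into below.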
Recently I saw 0.1 + 0.2 != 0.3 in a programming quiz and I got curious about what's going on! Let's review what's happening. There are two bases for representing fractional numbers: base 10 and base 2. For example, 0.125 equals 1/10 + 2/100 + 5/1000 in base 10 (a decimal fraction), and 0.001 equals 0/2 + 0/4 + 1/8 in base 2 (a binary fraction). The problem arises because most decimal fractions cannot be represented exactly as binary fractions, so an approximation is stored instead.
On most machines today, floats are approximated using a binary fraction, with the numerator using the first 53 bits starting with the most significant bit, and with the denominator as a power of two.
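You can see this numerator-over-power-of-two structure directly with the built-in float.hex, which prints the significand in hexadecimal alongside a power-of-two exponent:
>>> (0.1).hex()
'0x1.999999999999ap-4'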
In the case of 1/10, the binary fraction is 3602879701896397 / (2 ** 55)
which is close to, but not exactly equal to, the true value of 1/10. When you enter 0.1 in the Python interpreter, the output is 0.1. But if you enter
print('{:.30f}'.format(0.1))
OR
print(format(0.1, '.30f'))
the output looks like:
0.100000000000000005551115123126
Actually it may look a little bit different on different computers, but this is how Python sees 0.1! When you simply print 0.1, Python shows the shortest decimal string that rounds back to the stored value, hiding the rest of the digits. The same goes for 0.2:
0.200000000000000011102230246252
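You can also ask Python for the exact fraction it stores, using float.as_integer_ratio; notice that the denominator for 0.1 is exactly 2**55, matching the fraction above:
>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)
>>> 36028797018963968 == 2 ** 55
True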
When you compute 0.1 + 0.2, the representation errors of the two numbers accumulate and the result becomes:
0.30000000000000004
which is greater than 0.3, so it's not equal to 0.3! When you print out the fraction 1/10, it shows 0.1. Just remember: even though the printed result looks like the exact value of 1/10, the actual stored value is the nearest representable binary fraction.
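If you do need to compare results like these in your own code, one common approach is an approximate comparison with math.isclose (available since Python 3.5), or exact arithmetic with the decimal module:
>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)
True
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
True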
Note that this is in the very nature of binary floating-point: this is not a bug in Python (or Perl, C, C++, Java, Fortran, and many others), and it is not a bug in your code either. You’ll see the same kind of thing in all languages that support your hardware’s floating-point arithmetic.
But you may ask, how was 3602879701896397 / (2 ** 55) calculated for 1/10?! As the Python tutorial says:
IEEE-754 double precision (used on almost all machines for floating-point arithmetic) contains 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N, where J is an integer containing exactly 53 bits. For 1/10 we have
1 / 10 ~= J / (2**N)
We can rewrite this as
J ~= 2**N / 10
We know that J has exactly 53 bits (so J is greater than or equal to 2**52 but smaller than 2**53), so according to
2**52 <= 2**N // 10 < 2**53
the best value for N is 56. Why? If you multiply all three parts of this inequality by 10 (noting that 10 is ~> 2**3, where the notation ~> means "a little more than"), you will find that it looks like
10*(2**52) <= 2**N < 10*(2**53)
THEN
(~> 2**3)*(2**52) <= 2**N < (~> 2**3)*(2**53)
THEN
(~> 2**55) <= 2**N < (~> 2**56)
and the only power of two between ~> 2**55 and ~> 2**56 is 2**56, so N is 56; it's the only value for N that leaves J with exactly 53 bits.
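If the ~> argument feels hand-wavy, a quick brute-force check over a small range of candidate exponents (the range 50 to 60 here is an arbitrary choice of mine) confirms that 56 is the only N that works:
>>> [N for N in range(50, 60) if 2**52 <= 2**N // 10 < 2**53]
[56]
With N settled, we compute the quotient and remainder of dividing 2**56 by 10: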
2**56 // 10 = 7205759403792793 (quotient)
and
2**56 % 10 = 6 (remainder)
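A single divmod call returns both values at once:
>>> divmod(2**56, 10)
(7205759403792793, 6)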
Because the remainder is greater than 5 (half of 10), the best approximation is obtained by rounding up, so J becomes 7205759403792794. The closest double to 1/10 is therefore
7205759403792794 / (2**56) = 3602879701896397 / (2**55)
Since we rounded up, this is actually a little bit larger than 1/10; if we had not rounded up, the quotient would have been a little bit smaller than 1/10. But in no case can it be exactly 1/10! Had we kept the un-rounded quotient J = 2**56 // 10, then (since the remainder is nonzero)
J < (2**56) / 10 < J + 1
THEN
J / (2**56) < 1/10
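You can confirm in the REPL that the rounded-up fraction is exactly the double Python stores for 0.1 (both divisions below are exact, because the numerators fit in 53 bits):
>>> 7205759403792794 / 2**56 == 0.1
True
>>> 3602879701896397 / 2**55 == 0.1
True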
To see the exact decimal value of this stored approximation of 1/10, you can multiply the fraction by 10**55, which shows the value out to 55 decimal digits:
(3602879701896397 * (10**55)) // (2**55)
and the output looks like
1000000000000000055511151231257827021181583404541015625
This is the exact value the computer stores instead of 0.1 :)
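As a cross-check, constructing a Decimal directly from the float 0.1 shows the same 55 digits:
>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')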
Thanks for reading. I hope you enjoyed it. Always be a LEARNER ;) Join us in our Telegram channel.