How does the JavaScript runtime convert BINARY (double-precision floating-point format) back to DECIMAL...
























This question already has an answer here:




  • How does JavaScript determine the number of digits to produce when formatting floating-point values?

    1 answer




Given a decimal number 0.2



Example:



var theNumber = 0.2;


I ASSUME it would be stored in memory (based on the IEEE 754 double-precision 64-bit floating-point format) as



0-01111111100-1001100110011001100110011001100110011001100110011001


That binary number is actually rounded to fit into 64 bits.



If we take that value and convert it back to decimal, we will have



0.19999999999999998

(0.1999999999999999833466546306226518936455249786376953125)


Not exactly 0.2



My question is: when we ask for the decimal value of theNumber (e.g. alert(theNumber)), how does the JavaScript runtime know that theNumber was originally 0.2?
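One way to see more of what is actually stored is to print extra digits with toFixed (a small sketch; the comments below do the same with toFixed(54)):

var theNumber = 0.2;
alert(theNumber);                   // displays "0.2"
console.log(theNumber.toFixed(20)); // "0.20000000000000001110" - more digits of the stored value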










javascript ieee-754














edited Nov 7 at 13:05









Uwe Keim











asked Nov 7 at 11:08









vothaison





marked as duplicate by Eric Postpischil, chŝdk, Community Nov 8 at 8:00


This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
















  • Interesting question. Never actually thought of that, but it seems that JS still holds the original value, too. Moreover, if you do theNumber + 0 you still get 0.2 as the result, so the + 0 is apparently a no-op. However, theNumber + 1 - 1 is now incorrect, because it does use the underlying value for mathematical operations.
    – vlaz
    Nov 7 at 11:27






  • The value you get for 0.2 for me in Chrome is 001100110011001100110011001100110011001100110011001101. If you do theNumber.toFixed(54) you will get 0.200000000000000011102230246251565404236316680908203125. Doing the +1 -1 as @vlaz suggests, you will then get 0.199999999999999955591079014993738383054733276367187500. So to me it looks like the default rendering for a number has some standard truncating; to how many decimals, I've not been able to find - it's maybe somewhere in the specs.
    – Keith
    Nov 7 at 11:32






  • Where did you get 0.1999999999999999833466546306226518936455249786376953125 for the result of 0.2? The correct value is 0.200000000000000011102230246251565404236316680908203125.
    – Eric Postpischil
    Nov 7 at 12:49










  • Thanks, guys. Looks like I gotta read more specs.
    – vothaison
    Nov 7 at 17:40










  • @eric, I thought 0.2 actually went into memory. 😥
    – vothaison
    Nov 7 at 17:41

















3 Answers

















accepted










JavaScript’s default conversion of a Number to a string produces just enough decimal digits to uniquely distinguish the Number. (This arises out of step 5 in clause 7.1.12.1 of the ECMAScript 2018 Language Specification, which I explain a little here.)



Let’s consider the conversion of a decimal numeral to a Number first. When a numeral is converted to a Number, its exact mathematical value is rounded to the nearest value representable in a Number. So, when 0.2 in source code is converted to a Number, the result is 0.200000000000000011102230246251565404236316680908203125.



When converting a Number to decimal, how many digits do we need to produce to uniquely distinguish the Number? In the case of 0.200000000000000011102230246251565404236316680908203125, if we produce “0.2”, we have a decimal numeral that, when converted back to a Number, yields 0.200000000000000011102230246251565404236316680908203125 again. Thus, “0.2” uniquely distinguishes 0.200000000000000011102230246251565404236316680908203125 from other Number values, so it is all we need.



In other words, JavaScript’s rule of producing just enough digits to distinguish the Number means that any short decimal numeral when converted to Number and back to string will produce the same decimal numeral (except with insignificant zeros removed, so “0.2000” will become “0.2” or “045” will become “45”). (Once the decimal numeral becomes long enough to conflict with the Number value, it may no longer survive a round-trip conversion. For example, “0.20000000000000003” will become the Number 0.2000000000000000388578058618804789148271083831787109375 and then the string “0.20000000000000004”.)



If, as a result of arithmetic, we had a number close to 0.200000000000000011102230246251565404236316680908203125 but different, such as 0.2000000000000000388578058618804789148271083831787109375, then JavaScript will print more digits, “0.20000000000000004” in this case, because it needs more digits to distinguish it from the “0.2” case.
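A short demonstration of this rule, using the values from this answer (a sketch to run in any JavaScript console):

const a = 0.2;                    // stored as 0.200000000000000011102230246251565…
console.log(String(a));           // "0.2" - the shortest numeral that round-trips
console.log(Number("0.2") === a); // true: "0.2" converts back to exactly this Number

// A nearby but different Number needs more digits to be distinguished:
const b = Number("0.20000000000000003");
console.log(String(b));           // "0.20000000000000004"
console.log(b === a);             // false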






edited Nov 7 at 13:01
answered Nov 7 at 12:47
Eric Postpischil























  • Thanks, Eric. Now Ima read your answer a few more times. 🤣
    – vothaison
    Nov 7 at 17:43































In fact, 0.2 is represented by a different bit sequence than the one you posted.
Whenever your result matches the correct bit sequence, the console will output 0.2. But if your calculation results in a different sequence, the console will output something like your 0.19999999999999998.

A similar situation arises with the most common example, 0.1 + 0.2, which gives the output 0.30000000000000004 because the bit sequence of that result differs from the one in 0.3's representation.






console.log(0.2)          // 0.2
console.log(0.05 + 0.15)  // 0.2
console.log(0.02 + 0.18)  // 0.19999999999999998

console.log(0.3)          // 0.3
console.log(0.1 + 0.2)    // 0.30000000000000004
console.log(0.05 + 0.25)  // 0.3





From the ECMAScript Language Specification:




11.8.3.1 Static Semantics: MV
A numeric literal stands for a value of the Number type. This value is determined in two steps: first, a mathematical value (MV) is derived from the literal; second, this mathematical value is rounded [...(and here whole procedure is described)]




You may also be interested in the following section:




6.1.6 Number type
[...]
In this specification, the phrase “the Number value for x” where x represents an exact real mathematical quantity [...] means a Number value chosen in the following manner.
[...(whole procedure is described)]
(This procedure corresponds exactly to the behaviour of the IEEE 754-2008 “round to nearest, ties to even” mode.)
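To see the rounding that these clauses describe, one can print the two candidate Numbers on either side of 0.2 with extra precision (a small sketch using toPrecision):

console.log((0.2).toPrecision(25));
// 0.2000000000000000111022302
console.log((0.19999999999999998).toPrecision(25));
// 0.1999999999999999833466546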







edited Nov 8 at 8:07
answered Nov 7 at 12:08
barbsan























  • Thanks, Barb. I thought 0.2 was the one that went into memory. So, first it is "transformed" into its true number form. I'm gonna read more specs.
    – vothaison
    Nov 7 at 17:47










  • Wait. So the binary is rounded up? Not truncated?
    – vothaison
    Nov 7 at 18:03






  • 0.2 is only a human-readable representation; in fact, all numbers are kept in binary form. The binary is neither rounded nor truncated - its decimal representation is rounded.
    – barbsan
    Nov 8 at 7:44






  • @vothaison I've added some references.
    – barbsan
    Nov 8 at 8:08










  • I have seen those pages, but didn't understand them until after reading the answers/comments from you guys. Thanks. :D
    – vothaison
    Nov 8 at 8:28































So, my ASSUMPTION is wrong.



I have written a small program to do the experiment.



The binary value that goes to memory is not



0-01111111100-1001100110011001100110011001100110011001100110011001


The mantissa part is not 1001100110011001100110011001100110011001100110011001



I got that value because I truncated it, instead of rounding. :((



1001100110011001100110011001100110011001100110011001...[1001] needs to be rounded to 52 bits. Bit 53 of the series is a 1, so the series is rounded up and becomes: 1001100110011001100110011001100110011001100110011010



The correct binary value should be:



0-01111111100-1001100110011001100110011001100110011001100110011010


The full decimal of that value is:



0.200 000 000 000 000 011 102 230 246 251 565 404 236 316 680 908 203 125


not



0.199 999 999 999 999 983 346 654 630 622 651 893 645 524 978 637 695 312 5


And as Eric's answer explains, all decimal numbers that are converted to the binary



0-01111111100-1001100110011001100110011001100110011001100110011010


will be "seen" as 0.2 (unless we use toFixed() to print more digits); all those decimal numbers SHARE the same binary signature (i really don't know how to describe it).






answered Nov 8 at 7:53
vothaison