Numpy losing precision when converting long int to float












It seems numpy loses precision when numpy.int64 values are converted to float types.



My numpy version is 1.15.4, which I understood to have fixed this kind of error.



Here is an example:



>>> value = 734625324872288246
>>> value_plus_1 = 734625324872288246 + 1
>>> items = [value, value_plus_1]
>>> value.bit_length()
60
>>> value_plus_1.bit_length()
60
>>> import numpy as np
>>> a = np.array(items, dtype=np.float128)  # wider than float64, should comfortably hold a 60-bit value
>>> a
array([7.34625325e+17, 7.34625325e+17], dtype=float128)
>>> a.astype(np.int64)  # cast back to integer
array([734625324872288256, 734625324872288256])
>>> np.__version__
'1.15.4'


As you can see, both values in the array are now equal, which shows a loss of precision that I assume happens when casting to float.
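
For what it's worth, the rounding already seems to happen at the int-to-float step itself, before any .astype call. Here is a minimal check (assuming plain Python float rounds the same way as NumPy's float64):

>>> int(float(value))
734625324872288256
>>> int(float(value_plus_1))
734625324872288256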



My question is: is there something I am doing wrong when creating the numpy arrays, and can it be corrected so that precision is not lost?










python numpy

asked Nov 14 '18 at 13:24 by Nathan McCoy

2 Answers

(The question is almost certainly a duplicate, but my search-fu is weak today.)

There are only finitely many numbers that can be represented with 64-bit floating point numbers. The spacing between the numbers that are exactly representable depends on the magnitude of the numbers; you can find the spacing with the function numpy.spacing(x) for a floating point number x. In your case, the spacing of the floating point numbers around 734625324872288246 is 128:

In [33]: x = float(734625324872288246)

In [34]: x
Out[34]: 7.346253248722883e+17

In [35]: np.spacing(x)
Out[35]: 128.0

The integer value 734625324872288246 is not representable exactly as floating point. You can see that by casting the float back to integer; you don't get the same value:

In [36]: int(x)
Out[36]: 734625324872288256

You can represent 734625324872288256 exactly as a floating point number, but the next lower representable integer is 734625324872288256 - 128 = 734625324872288128.
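
If you want to check the neighbouring representable values directly, here is a minimal sketch using numpy.nextafter, which returns the next representable float in the direction of a given target:

In [37]: int(np.nextafter(x, 0.0))      # next representable float below x
Out[37]: 734625324872288128

In [38]: int(np.nextafter(x, np.inf))   # next representable float above x
Out[38]: 734625324872288384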



And here are the obligatory links for questions about floating point:

• What Every Computer Scientist Should Know About Floating-Point Arithmetic
• What Every Programmer Should Know About Floating-Point Arithmetic






answered Nov 14 '18 at 13:45 by Warren Weckesser (edited Nov 14 '18 at 13:55)

The numpy documentation (https://docs.scipy.org/doc/numpy-1.15.0/user/basics.types.html) states that float64 uses only 52 bits for the mantissa and 11 bits for the exponent. That is not enough precision to store your 60-bit numbers exactly.
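
As a minimal sketch of that limit (assuming standard IEEE 754 behaviour for float64): with 52 stored mantissa bits, every integer up to 2**53 is exact, but neighbouring integers above that can collapse to the same float:

>>> import numpy as np
>>> np.float64(2**53 - 1) == np.float64(2**53 - 2)   # below 2**53, distinct integers stay distinct
False
>>> np.float64(2**53) == np.float64(2**53 + 1)       # above it, neighbouring integers collapse
True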






answered Nov 14 '18 at 13:44 by DatHydroGuy