From everything I’ve been able to find online, JavaScript allegedly uses IEEE 754 doubles for its numbers, but I have found numbers that work in C doubles but not in JavaScript. For example,

```c
#include <stdio.h>

int main()
{
    double x = 131621703842267136.;
    printf("%lf\n", x);
}
```

prints `131621703842267136.000000`

NOTE: In an earlier version of the question I copied the wrong number for C. But in JavaScript,

```js
console.log(131621703842267136)
```

outputs `131621703842267140`.

From everything I’ve read online, both C doubles and JavaScript numbers are 64-bit floating point, so I am very confused why they would output different results. Any ideas?


## Answer

JavaScript’s default conversion of a `Number` to a string produces just enough decimal digits to uniquely distinguish the `Number`. (This arises out of step 5 in clause 7.1.12.1 of the ECMAScript 2018 Language Specification, which I explain a little here.) Formatting via `console.log` is not covered by the ECMAScript specification, but likely the `Number` is converted to a string using the same rules as for `NumberToString`.

Since stopping at the ten’s digit, producing 131621703842267140, is enough to distinguish the floating-point number from its two neighboring representable values, 131621703842267120 and 131621703842267152, JavaScript stops there.
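To see the “just enough digits” property from the C side, you can parse JavaScript’s shorter output back with `strtod`; on an implementation with correctly rounded decimal conversions (for example, one conforming to IEC 60559 / Annex F) it should recover exactly the same `double`. A minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double original = 131621703842267136.0;

    /* JavaScript's output, fed back through strtod. The neighboring
       representable values are 16 apart at this magnitude, and
       131621703842267140 is closest to 131621703842267136. */
    double reparsed = strtod("131621703842267140", NULL);

    printf("%d\n", original == reparsed);   /* expected: 1 */
    return 0;
}
```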

You can request more digits with `toPrecision`; the following produces “131621703842267136.000”:

```js
var x = 131621703842267136;
console.log(x.toPrecision(21));
```
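
For comparison, a rough C analogue of `toPrecision(21)` is to ask `printf` for 21 significant digits (a sketch; the `#` flag keeps the trailing zeros that `%g` would otherwise strip):

```c
#include <stdio.h>

int main(void)
{
    double x = 131621703842267136.0;

    /* 21 significant digits; with the # flag, %g keeps trailing zeros,
       so this should likewise print 131621703842267136.000. */
    printf("%#.21g\n", x);
    return 0;
}
```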

(Note that 131621703842267136 is exactly representable in IEEE-754 basic 64-bit binary format, which JavaScript uses for `Number` and many C implementations use for `double`. So there are no rounding errors in this question due to the floating-point format. All changes result from conversions between decimal and floating-point.)
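If you want to convince yourself of that exact representability, one way (a sketch of my own, not from the question) is to round-trip the integer through `double` and to look at the stored value in hexadecimal floating form:

```c
#include <stdio.h>

int main(void)
{
    unsigned long long n = 131621703842267136ULL;

    /* Exact conversion: 131621703842267136 = 8226356490141696 * 2^4,
       and 8226356490141696 < 2^53, so it fits in the 53-bit significand
       of binary64. */
    double x = (double)n;

    printf("%d\n", (unsigned long long)x == n);   /* expected: 1 */
    printf("%a\n", x);   /* the exact binary value, in hexadecimal form */
    return 0;
}
```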

Prior to an edit at 2019-05-17 16:27:53 UTC, the question stated that a C program was showing “131621703737409536.000000” for 131621703842267136. That would not have been conforming to the C standard. The C standard is lax about its floating-point formatting requirements, but producing “131621703737409536.000000” for 131621703842267136 violates them. This is governed by this sentence in C 2018 (and 2011) 7.21.6.1 13:

> Otherwise, the source value is bounded by two adjacent decimal strings *L* < *U*, both having `DECIMAL_DIG` significant digits; the value of the resultant decimal string *D* should satisfy *L* ≤ *D* ≤ *U*, with the extra stipulation that the error should have a correct sign for the current rounding direction.

`DECIMAL_DIG` must be at least ten, by 5.2.4.2.2 12. The number 131621703**8**42267136 (bold marks the tenth digit) is bounded by the two adjacent strings with ten significant digits, “131621703**8**00000000” and “131621703**9**00000000”. The string “131621703**7**37409536.000000” is not between these.

This also cannot be a result of the C implementation using a different floating-point format for `double`, as 5.2.4.2.2 requires the format be sufficient to convert at least ten decimal digits to `double` and back to decimal without change to the value.
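
As a practical footnote (a sketch of mine, not part of the original answer): `<float.h>` exposes these limits, and asking `printf` for that many significant digits produces a string that converts back to exactly the same `double`:

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    double x = 131621703842267136.0;

    /* DECIMAL_DIG covers the widest supported floating type;
       DBL_DECIMAL_DIG (C11 and later) is the figure for double,
       which is 17 for IEEE-754 binary64. */
    printf("%.*g\n", DECIMAL_DIG, x);
    printf("%.*g\n", DBL_DECIMAL_DIG, x);
    return 0;
}
```

Either line, fed back through `strtod`, should yield the original value again.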