- Mastering TypeScript 3
- Nathan Rozentals
- 860 characters
- 2021-07-02 12:42:49
Bigint
Proposals to the ECMAScript standard have recently included support for handling really large numbers. Most programming languages natively support 64 bit integers, but JavaScript has lagged behind in this respect, and can only represent integers exactly up to 53 bits of precision. The reason for this particular constraint boils down to the internal representation that JavaScript uses to store numbers in memory: every number is a 64 bit IEEE 754 double-precision floating-point value, which must encode both the sign of a number, either positive or negative, and its significand, the bits that determine how many significant digits can be stored exactly. In JavaScript, the largest integer that can be safely represented with this 53 bit precision is 9,007,199,254,740,991, which, in layman's terms, is:
Nine quadrillion, seven trillion, one hundred ninety-nine billion, two hundred fifty-four million, seven hundred forty thousand, nine hundred ninety-one. This is a really, really, really large number.
While we may not need to work with nine quadrillion distinct values in our code, numbers of this size do come up in certain circumstances. We only need to look at modern cryptography routines to find examples. If your application is working with a bank, for instance, the bank may generate some sort of numeric token, encrypted with an advanced cryptography routine, in order to represent the unique transaction ID for a particular payment. The larger this number is, the more difficult it is to decrypt, and the more secure the system. Receiving a 64 bit number as a unique ID is therefore quite feasible.
Let's take a look at the limits of the current number type in JavaScript, with the following code:
console.log(`Number.MAX_SAFE_INTEGER : ${Number.MAX_SAFE_INTEGER}`);
let highest53bitNumber = 9_007_199_254_740_991;
for (let i = 0; i < 10; i++) {
    console.log(`${i} : ${highest53bitNumber + i}`);
}
Here, we start by logging the value of the constant named Number.MAX_SAFE_INTEGER to the console. This constant will return a number that is the maximum value of an integer that JavaScript supports. We then define a variable named highest53bitNumber and set it to this maximum value. The code then executes a simple for loop that adds the numbers 0 through 9 to this number, and logs the results to the console. The output of this code is as follows:
Here, we can see some pretty strange results. Adding the value 2 to the variable highest53bitNumber, or adding the value 1, surprisingly produces the same result. Adding the value 4 or 5 or 6 also produces the same result. What we are seeing here are the results of attempting to perform simple arithmetic on numbers that are beyond the Number.MAX_SAFE_INTEGER limit supported in JavaScript. As JavaScript cannot represent integers beyond this limit exactly, each sum is silently rounded to the nearest value that can be represented, which is why different additions collapse into the same result.
The latest versions of the ECMAScript standard have implemented a new basic type named bigint in order to handle these sorts of really large numbers. Let's take a look at the same loop we discussed in our previous code snippet using bigint as follows:
console.log(`using bigint :`);
let bigIntNumber: bigint = 9_007_199_254_740_991n;
for (let i = 0; i < 10; i++) {
    console.log(`${i} : ${bigIntNumber + BigInt(i)}`);
}
Here, we have defined a variable named bigIntNumber, and specified that it is of the bigint type. The bigint type is an addition to the basic types of string, number, and boolean, and is treated in the same way. This means that we cannot assign a number type to a bigint type, in the same way that we cannot assign a string type to a number type. Note the definition of the value for this bigint. We have appended the letter n to the numeric value in order for the compiler to recognize that we are defining a bigint value.
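As a quick illustration of this type separation (a minimal sketch; the variable names here are our own):

```typescript
let big: bigint = 100n;
let num: number = 100;

// big = num;            // compile error: Type 'number' is not assignable to type 'bigint'
// let bad = big + num;  // compile error: operands must both be bigint or both be number

big = BigInt(num);       // an explicit conversion is required
let sum = big + 100n;    // arithmetic between two bigints is fine
console.log(sum);        // 200n
```

The commented-out lines show the errors the compiler raises if we try to mix the two types implicitly.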
Our loop is similar to the previous code snippet in that it is looping through the values 0 to 9, adding each value to our bigIntNumber variable, and then logging the results to the console. Note, however, that we need to create a bigint value from the variable i, which is actually of type number. This is accomplished by calling the BigInt static function, and passing our number type in as an argument. In other words, BigInt(i) is converting the variable i, which is of type number, to a type of bigint. Running this code now produces the following results:
Here, we can see that using the new native bigint type, we are able to perform exact arithmetic calculations on integers beyond the 53 bit precision limit of the number type.