I bet many of you right now are thinking 'WTF is a negative zero'? Well sit around the campfire and let me tell you a story older than the internet...
Be warned, this post is about half math and computer science, plus some history around the floating point standard (otherwise known as IEEE 754), but I promise it involves JavaScript, right at the end, down there ⬇. The FP wizards can skip this part. For the rest...
A brief history of floating point numbers.
Long ago, people decided they wanted to do math with computers. Arithmetic with simple integers is nice and easy:
18+3=21
Add 8 and 3, carry the 1.
While humans usually work in decimal (unless you're adding time, in which case you're counting in sexagesimal), computers work in binary. But what you were taught in elementary school still works in binary as well as in decimal. It works exactly like that at the hardware/transistor level.
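You can check that claim right from the console (0b is JavaScript's binary literal prefix):

const sum = 0b10010 + 0b11 // 18 + 3, written in binary
sum // 21
sum.toString(2) // "10101", which is 21 in binary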
What complicates things is when you expand your mathematics to the field of real numbers. This is where we have to make an important distinction. On one hand, we have a number: a mathematical object you can use to count, quantify and label things with. On the other hand, you have a number's representation. On a piece of paper that might be a squiggle that looks like the shape below:
5
and is used to represent the fifth integer after zero. In computers we are looking for a way to represent, or encode numbers in binary.
Integers use a really neat scheme called two's complement. It is standard in all computers today because it has a bunch of useful properties, but it's not the only scheme possible. Some old computers used something called binary-coded decimal. But integers can only get you so far. At some point you will want to start doing math with real numbers.
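You can peek at two's complement from a JavaScript console, since the bitwise operators work on 32-bit integers. A minimal sketch:

const bits = -5 >>> 0 // reinterpret the 32 bits of -5 as unsigned
bits.toString(2) // "11111111111111111111111111111011"
bits | 0 // -5 again, the very same bits read as signed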
A few years before the internet (told ya), some very smart people came together and figured out a way for computers to do math with real numbers. This is known as the IEEE 754 floating point standard. A large part of the standard is the scheme for encoding real numbers in binary using a radix point that "floats". If you are familiar with scientific notation, floating point is basically that, but for computers. In the standard, a real number is composed of three components:
- A sign bit
- An exponent (8 bits for a 32-bit FP number). This defines the range of your number (how big or small a number you can represent).
- A significand (the remaining 23 bits). This defines your precision (beyond a certain number of digits the standard does not guarantee results to be mathematically accurate).
For the curious, the exact formula for turning an FP32 into the mathematical number can be found here. If you want to play with an interactive visual representation of the bit pattern, go here.
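If you'd rather poke at the bits yourself, here is a minimal sketch that carves a 32-bit float into those three components. DataView lets us write four bytes as a float and read the same bytes back as an integer:

const view = new DataView(new ArrayBuffer(4))
view.setFloat32(0, -0.15625) // -0.15625 is -1.01 (binary) times 2 to the -3
const bits = view.getUint32(0)
console.log(bits >>> 31) // 1 - the sign bit (negative)
console.log((bits >>> 23) & 0xff) // 124 - the biased exponent (127 - 3)
console.log(bits & 0x7fffff) // 2097152 - the 23 significand bits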
Back to JavaScript land
Finally we get to familiar territory. While doing math in JavaScript, you will be using the Number type, which is nominally a 64-bit floating point number (MDN). (Except it behaves as a 32-bit integer in bitwise operations, and how the JavaScript engine treats it internally is a can of worms I am not qualified to open.)
And every once in a while you might do an operation that yields a weird value: a negative zero.
See, the encoding scheme for floating point isn't exactly the most efficient in terms of using all the information its bits could carry (unlike two's complement). It can produce certain weird states which don't make sense as a valid number. One of these is a negative zero. Another is NaN (which works by its own very special rules).
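You can even see that leftover state directly: the two zeros differ by exactly one bit, and perfectly ordinary arithmetic will hand you the negative one. A quick sketch, reinterpreting a number's bytes with typed arrays:

const f = new Float64Array([0, -0])
const u = new BigUint64Array(f.buffer) // same bytes, read as integers
console.log(u[0].toString(2)) // "0"
console.log(u[1].toString(2)) // "1" followed by 63 zeros - only the sign bit differs
console.log(-1 * 0) // -0
console.log(Math.round(-0.4)) // -0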
What exactly do you use a negative zero for?
Mathematically, absolutely nothing. It is an artifact of the underlying representation, like the loss of precision that comes from computers not having unlimited memory.
But it is a separate state, and you can assign meaning to it.
Some context, while not particularly relevant: I was implementing keyboard shortcuts for quickly performing an action on a list of things on a page. Pressing a number over an element of the list would target that element and the ones after it; pressing the minus sign before the number would target all the preceding elements instead. Since doing something 0 times is nonsensical, I assigned the 0 key as a shortcut for 10.
This means that I now need to find a way to distinguish a negative zero from a positive zero.
Attempt 1
What's your first thought? Strict equality? Yeah, mine was too.
And you will fail too.
I would not fault you. === is strict equality: it compares values directly, with no type coercion, which sounds like exactly what we need. In contrast, == coerces its operands to the same type before comparing them.
Try this in your console:
0 === -0
It yields true. Both operators follow the IEEE 754 comparison rules, which say that every zero equals every other zero, sign be damned.
Attempt 2
But we're knowledgeable coders and we know that special states in floating point math require special handling. For example, NaN === NaN is false (no really, try it). You have to use isNaN() instead. It's a global method, though MDN recommends Number.isNaN() as the more robust option.
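Here is that in the console, including the coercion quirk that makes the global isNaN() the less trustworthy of the two:

NaN === NaN // false
Number.isNaN(NaN) // true
isNaN('hello') // true - 'hello' coerces to NaN first
Number.isNaN('hello') // false - not actually the value NaN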
We go looking and find Math.sign(). This looks perfect! It even returns a number, which we can do arithmetic with. We might go use it immediately, or we might read the docs. Either way, we frustratingly discover that Math.sign(-0) returns... -0. How unhelpful.
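For reference, the full behavior:

Math.sign(5) // 1
Math.sign(-5) // -1
Math.sign(0) // 0
Math.sign(-0) // -0, the very value we were trying to get rid of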
Attempt 3
You stare at your monitor in wild-eyed confusion. Surely there is a reason for negative zero to exist. It must matter somehow. But Math.sign() makes a special exception for 0, and multiplying anything by 0 just yields another zero.
This is when you start giving in to the dark side. This is when you start considering string operations. (Number(-0).toString() doesn't work, but Number(-0).toLocaleString() does.)
Ugh. *shudders*
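For the record, the dark side looks like this (the toLocaleString output depends on the engine and locale, so I wouldn't build on it):

console.log(Number(-0).toString()) // "0" - the sign is lost
console.log(Number(-0).toLocaleString()) // "-0" in some engines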
Have faith, young padawan!
The key realization is that this is an idiosyncrasy of floating point, so it has to matter within the floating point standard itself. There must be some operation whose result depends on which zero you have. And there is one!
Dividing by zero.
The result of division by a floating point zero is Infinity (one of those special values). A signed Infinity. So
1 / -0
results in -Infinity. (Though 0 / 0 is NaN.)
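All three special cases side by side:

1 / 0 // Infinity
1 / -0 // -Infinity - the sign of the zero survives
0 / 0 // NaN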
At this point mathsy people might start remembering things. If you've done math beyond high school - say, calculus - you might remember that when taking limits, it sometimes matters from which side you approach your limit: 1/x goes to positive infinity as x approaches 0 from the right, and to negative infinity from the left. So this is not so far-fetched math-wise, and division by zero is not always indeterminate.
Solution
And with this we finally have our solution. Math.sign() might make a special exception for 0, but it makes none for Infinity.
let actionCount = Number(keysPressed)
// === can't tell the zeros apart, so this branch catches both 0 and -0
if (actionCount === 0) {
  // 1 / 0 is Infinity and 1 / -0 is -Infinity; Math.sign() keeps that sign
  actionCount = 10 * Math.sign(1 / actionCount)
}
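Walking it through with both inputs (assuming keysPressed holds the keys the user typed, as in my shortcut scheme):

// keysPressed = "0": Number("0") is 0, 1 / 0 is Infinity, so actionCount becomes 10
// keysPressed = "-0": Number("-0") is -0, 1 / -0 is -Infinity, so actionCount becomes -10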
And so my first blog post ever is done. (This is a blog platform, right?) Not sure how many people will read this, but if you do, please tell me how readable you found my wall of text.