The edge of infinity: how a computer handles infinite floating point numbers

Programming · Jun 2021

I recently went down the rabbit hole of why the computer, the supreme calculator, can be inaccurate with floating point numbers. Why summing 0.1 and 0.2 isn't quite 0.3.
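To see it for yourself, here is a quick sketch in Python (any language that uses standard 64-bit floats behaves the same way):

```python
# Add 0.1 and 0.2 as ordinary floats and print the full value that was
# actually computed; the tiny error at the end is what this post is about.
total = 0.1 + 0.2
print(total)         # 0.30000000000000004
print(total == 0.3)  # False
```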

I’m concocting this post to clarify, for myself, why that is. The reasons are interesting and the mechanism behind them is fascinating.

Everything is either a 0 or a 1

Computers operate on binary. This means that everything in a computer is stored in memory in the form of 1s and 0s — a text message, a sum in a calculator, an image file on my desktop, a website on the browser.

A number like 1023 or 3.14159265359 – a base 10 number, made with digits that match the fingers on my hands – is already an abstraction. That means that the simple number 1023 is stored as literal electricity (1) and the lack thereof (0).
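As a small illustration, here is what that abstraction looks like from Python: the decimal number 1023 turns out to be a run of ten 1s once it is written in binary.

```python
# The base 10 number 1023 expressed in the 1s and 0s the hardware stores.
print(bin(1023))             # 0b1111111111  (ten 1s, since 1023 = 2**10 - 1)
print(format(1023, "016b"))  # 0000001111111111, the same value padded to a 16-bit width
```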

Limited Infinity

When I break a unit down into three equal parts – 1/3 or 0.3333 – the truth is that this isn't just a zero and some threes, but 0.33333... repeating to infinity. The same thing happens to 0.1 once it is written in binary: what looks like a tidy decimal becomes 0.000110011... repeating forever. A computer, on the other hand, has finite memory and cannot store an infinite number. At some point, it literally runs out of space and cuts infinity short.
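A small Python sketch that asks for the exact values actually being stored, showing where infinity got cut short for both 1/3 and 0.1:

```python
from fractions import Fraction

# Ask Python for more digits than it normally shows.
print(f"{1/3:.20f}")  # 0.33333333333333331483  (not quite a third)
print(f"{0.1:.20f}")  # 0.10000000000000000555  (not quite a tenth)

# The exact fraction that 0.1 is actually stored as:
print(Fraction(0.1))  # 3602879701896397/36028797018963968
```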

The Mechanism

The standard that defines the binary encoding of a floating point number, the IEEE 754 specification, is nothing short of genius in its simplicity. What it does is round the number to a near-enough value that fits the allotted space. Consider the space between 1 and 2: there are infinitely many numbers in between.

What IEEE 754 stores is an idea of where the number is located, somewhere in that chasm between two numbers.

It does so in three parts. First, it stores whether the number is positive or negative: the sign. Then, it stores which power-of-two range the number falls into (between 1 and 2, between 2 and 4, and so on): the exponent. Lastly, it stores where between those two boundaries the number sits: the mantissa.
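Here is a rough sketch of pulling those three fields out of a number in Python. Python floats are 64-bit doubles, so the widths here are 1 sign bit, 11 exponent bits and 52 mantissa bits, rather than the 16-bit layout mentioned below.

```python
import struct

def ieee754_fields(x: float) -> tuple[int, int, int]:
    """Split a 64-bit IEEE 754 double into its sign, exponent and mantissa bits."""
    bits = int.from_bytes(struct.pack(">d", x), "big")  # the raw 64-bit pattern
    sign     = bits >> 63                # 1 bit: positive (0) or negative (1)
    exponent = (bits >> 52) & 0x7FF      # 11 bits: which power-of-two range the number sits in
    mantissa = bits & ((1 << 52) - 1)    # 52 bits: where inside that range it sits
    return sign, exponent, mantissa

sign, exponent, mantissa = ieee754_fields(0.1)
# sign=0 (positive), exponent-1023=-4 (0.1 sits between 2**-4 and 2**-3),
# mantissa=2702159776422298 (its position inside that range)
print(sign, exponent - 1023, mantissa)
```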

Finally, the level of precision depends on the amount of space available for it. A half-precision floating point number, for example, takes 16 bits, while the widest formats go up to 256 bits. The bigger the space, the more granular the number’s location can be.
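A rough sketch of that trade-off, round-tripping the same number through the 16-, 32- and 64-bit formats using Python's struct module:

```python
import struct

def round_trip(x: float, fmt: str) -> float:
    """Store x at a given width, then read it back, to see how much precision survives."""
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

pi = 3.14159265359
print(round_trip(pi, "e"))  # 3.140625           (half precision, 16 bits)
print(round_trip(pi, "f"))  # 3.1415927410125732 (single precision, 32 bits)
print(round_trip(pi, "d"))  # 3.14159265359      (double precision, 64 bits, what Python floats use)
```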

That is how a computer edges on infinity, and also how it does so with an endearing level of imprecision.
