I recently went down the rabbit hole of why the computer, the supreme calculator, can be inaccurate with floating-point numbers; why summing 0.1 and 0.2 isn't quite 0.3.
I’m concocting this post to clarify, for myself, why that is. The reasons are interesting and the mechanism behind it fascinating.
Everything is either a 0 or a 1
Computers operate on binary. This means that everything in a computer is stored in memory in the form of 1s and 0s — a text message, a sum in a calculator, an image file on my desktop, a website on the browser.
A number like 1023 or 3.14159265359 – a base 10 number made with the same ten digits as the fingers on my hands – is already an abstraction. That means that the simple number 1023 is stored as literal electricity (1) and the lack thereof (0).
When I break a unit down into three equal parts – 1/3 or 0.3333 – the truth is that this isn't just a zero and some threes, but 0.33333... repeating to infinity. A computer, on the other hand, has finite memory and cannot store an infinite number. At some point, it literally runs out of space and cuts infinity short.
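This is exactly what happens with 0.1 and 0.2, neither of which has an exact binary representation. A quick sketch in JavaScript:

```javascript
// 0.1 and 0.2 are stored as slight approximations,
// and their sum compounds the error.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

// A common workaround: compare within a tiny tolerance.
console.log(Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON); // true
```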
The standard that defines the binary encoding of floating point numbers – the IEEE 754 specification – is nothing short of genius in its simplicity. What it does is round the number to a near-enough approximation that fits the allotted space. Consider the space between 1 and 2: it contains infinitely many numbers.
What IEEE 754 does is store an idea of where the number is located, somewhere in the chasm between two whole numbers.
It does so in three parts. First, it determines whether the number is positive or negative — the sign. Then, it stores the integer range the number falls in — the exponent. Lastly, it stores where between those two integers the number sits — the mantissa.
Finally, the level of precision depends on the amount of space available. The example above is what is called a half-precision floating point number; it takes 16 bits. But this space can go up to 256 bits. The bigger the space, the more granular the number's location can be.
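JavaScript numbers are 64-bit (double precision) IEEE 754 values, but Math.fround rounds a number to the nearest 32-bit (single precision) value, which makes the effect of less space visible:

```javascript
// With fewer bits, the nearest representable value to 0.1
// is noticeably further away.
console.log(0.1);              // 0.1 (at double precision)
console.log(Math.fround(0.1)); // 0.10000000149011612 (at single precision)
```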
That is how a computer edges on infinity, and also how it does so with an endearing level of imprecision.
As I go about my day, I'll often have moments when I’d like something to be a particular way. Sometimes, I note these down, but often I’ll simply leave them in the back of my mind to see if they vanish or keep coming back. I’m becoming a programmer, in part, due to these inconspicuous bursts. I find joy in having an idea and concretizing it, and programming is a venture into extending my ability to do so from the realm of design, into that of engineering.
While knowing that certain seasons lend themselves to certain modes of action, and recognizing that I am in one of learning and apprenticeship, I'm still compelled to honor these peculiar forms of consciousness as they emerge. That's where a blog proves itself useful: it allows me to concretize them to the extent that I currently can – through design and communication – and have them find their way into the world.
In a sense, I’m trying to leave the paradigm of ownership over ideas behind and rather enter one of facilitating them into the world and the minds of others. That seems like the right thing to do. So, here is the first of what I think will be many more.
Underline is a tool I often wish I had on my phone when reading a physical book. Yes, I do read on Kindle, and have become rather a sparse consumer of physical books out of care for the planet. But, every once in a while, I’ll treat myself to a flesh-and-bone book to go through the joy of underlining great thoughts, encircling interesting ideas, and annotating my own takes on the margins.
Often, I find myself wanting a single-purpose piece of software that uses the might of text recognition, but only on underlined text. A simple app to bring the incredible power of digital marginalia into the magical world of a physical book. As a result, I put together the pitch deck below as a weekend project to articulate this idea in broad strokes. Not as a finished product, but as a starting point. Comments and ideas are welcome and, if appropriate, will be added here with proper attribution.
We’re all desperate to be recognized for the things we have to offer. Everyone around you is looking for the invitation you are making to them. Quite often, we’re existentially disappointed because there is no invitation. The greatest invitation is for you to say to them that they have gifts that you do not have; and therefore you need their help. That is the most powerful leadership invitation you can make. — David Whyte in the last 5 minutes of a 2 hour long conversation with Sam Harris on the Making Sense podcast.
The 7-year-old child in me recognizes that opening line; the need for recognition in what we have to offer. The primordial desire to be seen.
How easy to go about my day being the center of my world. The recognition I innately crave, and the feelings that come with it, transferred to the coworker, the boss, the partner, who at times fall short of recognizing what I'm wanting to offer.
How beautifully paradoxical, then, that good leadership is the ability to recognize the humanity in the other to the same extent that I recognize my own. To step out of my own need for recognition and allow others to come forth with their own gifts. To gift them, in return, the recognition they, too, seek.
Mr. Whyte’s words echo in my mind. Every moment, an opportunity to invite as much as I want to be invited; to recognize as much as I want to be recognized, to lead even though at times it's easier to let myself be led.
A hidden secret of the unary plus and minus operators is that they can be used for type coercion from a given type into a number. This is, in essence, equivalent to using Number().
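A few examples of that coercion at work:

```javascript
// Unary + and - coerce their operand to a number,
// with the same behavior as Number().
console.log(+"42");        // 42
console.log(-"3.14");      // -3.14
console.log(+"");          // 0   (same quirk as Number(""))
console.log(+"12px");      // NaN (trailing text makes coercion fail)
console.log(Number("42")); // 42
```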
Having said this, the more robust way of doing this is with the parseInt() and parseFloat() functions.
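They are more forgiving than Number(): they read as many valid digits as they can and ignore whatever follows.

```javascript
// parseInt and parseFloat stop at the first invalid character
// instead of failing outright.
console.log(parseInt("42px", 10));  // 42
console.log(parseFloat("3.14abc")); // 3.14
console.log(Number("42px"));        // NaN, by contrast
```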
Another curiosity is that the parseInt() method takes in a second argument called radix that, broadly speaking, specifies the base of the number in the string.
Useful, for example, when converting binary numbers to base-10 numbers.
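For instance:

```javascript
// The radix tells parseInt which base the string is written in.
console.log(parseInt("1010", 2)); // 10  (binary to base 10)
console.log(parseInt("ff", 16));  // 255 (hexadecimal to base 10)
console.log(parseInt("10", 10));  // 10  (plain base 10)
```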
Societies emerge from the cooperation between people. This cooperation organizes people into communities, companies, governments, multinational bodies, and one day even multiplanetary organizations.
Money has been one of the driving forces behind this cooperation; a tool that allows us to exchange value with each other. But that is only one facet of our sophisticated reality.
Cooperation is information.
The terms of any cooperation have to be agreed upon, even if implicitly. As a result, there’s an entire world, invisible on a daily basis, of contracts, agreements, term sheets, lawyers, notaries, even governmental bodies, that together serve the function of enabling and preserving cooperation.
From two friends agreeing to collaborate on a side-project, to the contract of marriage, all the way to the opening of a bank account, the purchasing of a house, the wiring of money for the payment of a service, the fine print of an insurance policy, or even the law that governments uphold.
The challenges of digitalization.
We managed this record-keeping relatively well until the computer came along, then the internet, and finally the inevitable digitalization of our world. We've spent the last 30 years or so truly coming online.
The prospect of a world in which all forms of content and communication are in digital form on easily modifiable media raises the issue of how to certify when something was created and what its contents are.
The above paragraph is a paraphrase from a paper published in 1991, in the Journal of Cryptology, by Haber and Stornetta, on how to time-stamp a digital document. The ideas in this paper, some argue, were foundational to the beginning of the thought process that would later lead to the blockchain.
Issues of validity and truth were already emerging as early as 1991, as we recognized that the digitalization of the world came with a whole new set of challenges. Agreements are modifiable, documents are hackable, terms are forgeable, in ways that can easily be hidden from the naked eye. These are issues we've come to know quite well in the collective imagination with the emergence of deep-fakes, corruptible elections, and the challenges of the 24/7, social-media-enabled news cycle.
How do we keep record of the truth?
In the physical world, in the old world, we might have written the truth down in numbered books, with no pages left blank, signed, stamped, and stored safely. That alone, digitalization aside, was prone to error and forgery. Now scale that kind of book-keeping to a global scale. That, coupled with computers and bits, mutable in nature, seems to have inevitably led to the emergence of the blockchain.
I say inevitably, but only in retrospect.
Money is just the beginning.
“The blockchain is a digital, decentralized, distributed ledger. Most explanations of the importance of the blockchain start with money […] But money is only the first use case [...] and it’s unlikely to be the most important.”
That is the opening line of The Blockchain Economy: A beginner’s guide to institutional cryptoeconomics, a medium piece by Chris Berg, Sinclair Davidson and Jason Potts. I mention it here because it was this piece that gave me the mental model to contemplate the bigger societal landscape from where the blockchain emerges.
Seeing the blockchain as an idea.
When reading and learning about the blockchain, it’s easy to come across a certain understanding of it as a “new technology” in the way that Apple’s new M1 chip or Artificial Intelligence are new technologies.
But central to understanding the blockchain is seeing it more as an idea than a technology. One of those apparently simple and obvious ideas that come along once in a while. Obvious in the way that the wheel is obvious – which is to say, not obvious at all; it took us until around 4000 BC to come up with it. Yet, when it came about, it fundamentally altered the course of culture for the better.
Today, societies are made of citizenship, voting, laws, ownership, property rights, contracts, legalities, who can do what and when … and central to all this are ledgers which, at their most fundamental level, map these economic and social relationships.
The genius of the blockchain is that, on one hand, it’s just a ledger. But on the other hand, it’s a radically new idea for how to do just that. In its simplest form, it’s made up of two parts.
An interdependent chain of information.
The first is the idea of storing information in such a way that each record carries with it a fingerprint. This fingerprint is an abstract representation of both the current record and the previous one. In real terms, when a block of information is created, a large number is generated based on the interweaving of the data inside the current block with some of the data from the previous block.
This results in the chaining of information as it's stored, in such a way that it becomes very hard to manipulate or compromise. Simply put, if I change one block, then I have to change the blocks around it because the fingerprints have to match. And if I do that, I then have to alter the blocks around those blocks, ad infinitum.
Stored everywhere and nowhere.
The second idea is that no one computer, authority, institution or government is responsible for the bookkeeping. The blockchain is stored over and over again in multiple computers, owned by multiple people, across multiple countries.
Every time new records are added, computers from around the world compete with each other to update the blockchain, and get rewarded based on the validity of their update. That is then cross-checked by other computers in the network and only then, once all is squared away, does the blockchain get updated across the remaining computers.
The probability of the same computer, person, agent, or organization, updating the blockchain twice in a row is, as it currently stands, very low.
Together, these ideas form a global, decentralized, and hyper-secure way of storing information. When seen through this lens, this might be an invention akin to ideas like democracy and capitalism. Ideas that are structuring to the fabric of our world.
This is a system with the potential to enable cooperation at the planetary scale; regardless of any one person, organization, institution, or country; by being a transparent and secure account of what was said, what was agreed upon, what was done, what was traded, what was sold, …
The emergence of a New World Order.
Berg, Davidson, and Potts are not exaggerating when they say that the blockchain competes with firms and governments as a way to coordinate economic activity – read: global cooperation. It is no wonder, then, that the blockchain emerged in the aftermath of the financial crisis of 2008, a time when it became apparent that the old system could be manipulated for the benefit of a few, at the expense of the many.
Comparable to the inventions of the wheel, mechanical time, and the printing press, the blockchain might be about to open up entirely new categories of economic organization that had until now been not only impossible, but unimaginable.
I’m left nothing other than awe-struck, inspired, and energized.
Standing on each other's shoulders.
Personally, I found it useful to take a step back from all the buzzwords and the threads on social media and see the blockchain through a broader and more agnostic lens. The vision introduced here is not mine. I'm articulating it in lay terms as a means to clarify things for myself. But head over to Medium and read Berg, Davidson and Potts' piece.
This post is the first of a series of posts I’ll be working on as I methodically explore this incredibly exciting idea and its possibilities. I’m here to learn, not to be an expert; so if any part of it could be made better, by all means, do reach out and share your perspective.
Say I have a hypothetical program that's supposed to take in two numbers, calculate their sum, and output the result to the console. Somewhere along the way there's a bug in the program, as the output comes out 1000 higher than expected.
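A minimal sketch of what such a program could look like (the function and the off-by-1000 bug are made up for illustration):

```javascript
// A hypothetical buggy helper: it accidentally adds an extra 1000,
// so 1 + 2 prints 1003 instead of 3.
function add(a, b) {
  return a + b + 1000; // the bug hides here
}

console.log(add(1, 2)); // 1003
```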
A harmless example of a mistake nested inside a method that doesn't crash the program and yet renders it useless. Node has a built-in debugger that allows me to step through the program, line by line, in and out of methods, directly from the console. Below is a visualization of the debugger in use.
The debugger can be invoked from the command line by adding the inspect flag:
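For a hypothetical file called app.js, that looks like:

```shell
node inspect app.js
```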
Once inside, I control the running of the program through a series of simple navigational commands.
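The core commands, as documented in Node's debugger reference, include:

```
cont, c   continue execution
next, n   step to the next line
step, s   step into a function call
out, o    step out of the current function
pause     pause running code
```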
Two more features are worth mentioning. The first is that I can insert a breakpoint anywhere in the program by using the debugger keyword. This will make Node's inspector run through the program up until the debugger statement. This keyword is often also recognized by browsers in front-end code.
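For instance (the function here is made up; when run without an inspector attached, the debugger statement is simply a no-op):

```javascript
function total(prices) {
  debugger; // node inspect (and most browsers) will pause here
  return prices.reduce((sum, p) => sum + p, 0);
}

console.log(total([1, 2, 3])); // 6
```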
The second is that I can keep watch over certain variables as the program runs by adding them to the watchers list. Adding, as well as checking, these variables can be done using the two keywords below.
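From Node's debugger reference:

```
watch('expression')   add the expression to the watch list
watchers              list all watchers and their values
```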
Probably a lot more could be said about debugging, namely that VS Code has a really neat and far more visual debugger. But I just wanted to leave here some rough notes for myself on the basics of debugging directly from the console.