The three most basic building blocks atop the electrical signal are the logic gates. At its simplest, a logic gate is a device that mediates the flow of current from the source to the ground. In the diagram below, the gate is the device positioned where the letter (a) is:
A gate is made of transistors that take in an input and produce an output. If the lightbulb is on, that means we have a positive output, a truth, a one. If the lightbulb is off, we have a negative, false, or zero output.
Take the simplest gate, the buffer gate. In order for the lightbulb to turn on, the input needs to tell the transistor to conduct the current to the ground. If the input is (1), the current will flow and the output will also be (1), that is, the lightbulb will be on.
The next simplest gate is the NOT gate. The NOT gate inverts whatever input it is given. If the input is (1), it allows the current to flow. Given that electricity always chooses the path of least resistance, it takes the shortest path down through the transistor. As a result, it will not go through the light bulb and the output will be (0). On the other hand, if the input is (0), the current is forced to take the longer path and therefore turns on the light bulb.
Moving on to the AND gate. Like the name indicates, the AND gate needs both the first AND the second input to be (1). If any of its inputs is (0), the current will simply not flow through, producing a (0) output.
The OR gate only needs one of its inputs to be (1) in order for the current to flow. As long as one of the inputs is (1) the current flows through and the output is (1).
These are the simplest operations in Boolean logic: AND, OR, and NOT. They are commonly represented using the following graphical symbols and equivalent truth tables.
Finally, it's worth noting that even though AND and OR are drawn with two inputs, they can actually have more than two. Even so, their logic remains the same.
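As a sketch, the three operations can be mimicked in JavaScript, with 1 and 0 standing in for the presence or absence of the signal. The function names are my own, and AND and OR are written to accept any number of inputs, mirroring the note above:

```javascript
// Toy versions of the three basic gates (illustrative, not hardware)
const NOT = (a) => (a === 1 ? 0 : 1);
const AND = (...inputs) => (inputs.every((i) => i === 1) ? 1 : 0);
const OR = (...inputs) => (inputs.some((i) => i === 1) ? 1 : 0);

// Print the truth tables for the two-input case
for (const a of [0, 1]) {
  for (const b of [0, 1]) {
    console.log(`a=${a} b=${b} | AND=${AND(a, b)} OR=${OR(a, b)}`);
  }
}
```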
All data in a computer is stored through a binary electrical system – binary as in bi, two. The bit, the computer’s unit of data, is expressed through an electrical signal or the lack thereof. This signal is managed by a transistor, a tiny switch that can be activated by the electrical signals it receives. If the transistor is activated, it conducts electricity. This creates an electrical signature in the computer's memory equivalent to a 1 or a truth. Otherwise, the lack of signal is equivalent to a 0 or a false.
The basis of this binary system, as we have it today, was first introduced by Leibniz in the late 17th century, as part of an attempt to develop a system to convert verbal logic into the smallest form of pure mathematics. It is said Leibniz was actually influenced by the I Ching 🤯 and was attempting to combine his philosophical and religious beliefs with the field of mathematics. Together with George Boole's work in logic and Claude Shannon's MIT thesis relating Boolean logic to circuits, this was the basis for the simple and yet incredibly ingenious system behind today's digital computer.
There have been ternary and even quinary electrical systems developed in the field of computing. But the more complex the system, the harder it is to tell the difference between the different voltage levels, especially when the computer is low on battery or its electrical system is interfered with by another device (e.g. a microwave). So the world settled on binary, the simplest and most effective system. The voltage is either there or not.
That's how we get zeros and ones: electricity.
"Give me a sheet of paper and something to write with, and I'll turn the world upside down." — Nietzsche

Programming · Jul 2021
I’ve had a somewhat liberating epiphany recently: The methods built into a programming language can also be written using simple procedures like if-else statements and loops. Built-in methods exist to bundle complicated procedures into one simple function — this makes programming easier. But they are also simply solutions to common problems, so a programmer doesn’t have to program them over and over again.
In design, there are thousands of nuts and bolts to every tool. Sketch and Figma are full of smart details meant to make a designer’s life easier. But I also know, by virtue of my experience, that all I need is a blank canvas, the rectangle tool, type and color. Not to be overly simplistic, but even that could be reduced to a sheet of paper and a pen.
Tools are helpful, but the work happens in thinking about and experimenting on a problem enough that eventually a solution starts to emerge — despite the tool. My crazy insight is that programming seems to be the same.
Splice is a robust method. With one single line of code, I can shorten an array, remove items at specific index positions, or even insert multiple new items at a location. It works in place, and therefore mutates the array itself.
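For example (the array and values here are just an illustration):

```javascript
const items = ["a", "b", "c", "d", "e"];

// Remove 2 items starting at index 1 ("b" and "c");
// splice returns the removed items and mutates the array
const removed = items.splice(1, 2);
console.log(removed); // ["b", "c"]
console.log(items);   // ["a", "d", "e"]

// Insert new items at index 1 without removing anything
items.splice(1, 0, "x", "y");
console.log(items);   // ["a", "x", "y", "d", "e"]

// Shorten the array: remove everything from index 3 onward
items.splice(3);
console.log(items);   // ["a", "x", "y"]
```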
In my own version of splice, I built a couple of smaller methods that perform all the major procedures: shortening an array, deleting an item (or items) at a particular location, and inserting as many elements as are passed to the function sequentially into the array.
A method to shorten the array
Methods to delete an item(s)
A method to insert an item(s)
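Roughly, such helpers could look something like this (a sketch; shorten, deleteAt, and insertAt are illustrative names, and this is one possible hand-rolled approach):

```javascript
// Truncate the array in place to a new length
function shorten(arr, newLength) {
  arr.length = Math.min(arr.length, newLength);
  return arr;
}

// Delete `count` items starting at index `start`:
// shift later elements left, then trim the tail
function deleteAt(arr, start, count) {
  for (let i = start; i < arr.length - count; i++) {
    arr[i] = arr[i + count];
  }
  arr.length = arr.length - count;
  return arr;
}

// Insert elements at index `start`:
// grow the array, shift elements right to open a gap, then fill it
function insertAt(arr, start, ...elements) {
  const n = elements.length;
  arr.length = arr.length + n;
  for (let i = arr.length - 1; i >= start + n; i--) {
    arr[i] = arr[i - n];
  }
  for (let i = 0; i < n; i++) {
    arr[start + i] = elements[i];
  }
  return arr;
}
```

Like splice itself, all three work in place on the array they are given.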
All in all, lots to learn – but that was fun.
I have often resorted to Google for a quick random-number-generating function, without investing a moment to understand its simple mechanics and saving myself future searches. Being a little dyslexic, I'd get all confused with the (max - min + 1) + min portion of the function. Well, today is the day I untangle this mess of mins and maxes.
Math.random() returns a decimal between 0 (inclusive) and 1 (exclusive). If I were looking for a number between 0 and 9, I could simply shift the decimal point by 1 place by multiplying the result by 10 and rounding down.
To make 10 itself a possible result, I could increase the multiplier by 1, that is, multiply by 10 + 1. This would increase the range of possible random numbers from 0–9 to 0–10.
What this means is that I'm multiplying the result of the random function by the range of possible numbers I'm looking for, and adding one to the range so as to make the top number inclusive.
To get a random number between 0 and 75, I can:
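In code, using Math.floor to round the decimal down:

```javascript
// Math.random() gives a decimal from 0 (inclusive) up to 1 (exclusive).
// Multiply by the range (75 + 1 makes 75 itself possible), then round down.
const n = Math.floor(Math.random() * (75 + 1));
console.log(n); // an integer between 0 and 75, inclusive
```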
What if I want a number between a minimum and a maximum, say between 25 and 150? There are two parts to the process. First, I need to determine the range of numbers I want my number to fall within — that is, the range of numbers between 25 and 150. That can be achieved by subtracting 25 from 150. I'm therefore looking for one random number out of 125 possible numbers.
Then, I want my possible random number to be in between 25 and 150. To get one of 125 random numbers that start at least at 25, all I have to do is add 25 to my random number. 🤯
In essence, this is a random number multiplied by a range of numbers and bumped up by the starting point number.
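Put into a function, the whole formula reads:

```javascript
// range = (max - min + 1), so max is inclusive,
// then bump the result up by the starting point (min)
function randomBetween(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

console.log(randomBetween(25, 150)); // an integer between 25 and 150, inclusive
```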
The good news is that the sort function does take a callback function with two arguments representing the two items being compared. In an array of numbers, if the difference between the two arguments is a positive number, the first is bigger than the second. If the difference is a negative number, the second is bigger than the first.
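For a plain numeric sort, that comparator is just a subtraction:

```javascript
const numbers = [40, 1, 5, 200];

// Without a callback, sort compares strings: [1, 200, 40, 5].
// With a numeric comparator: positive means "a after b", negative "a before b".
numbers.sort((a, b) => a - b);
console.log(numbers); // [1, 5, 40, 200]
```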
This approach can also be harnessed for more unique examples. Below, for example, I have an array of human needs that I want to sort, and in the callback I provide a correct order template that is then used for the sorting:
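A sketch of that idea with made-up values (the needs and the template order here are illustrative):

```javascript
// The template array defines the correct order
const order = ["physiological", "safety", "belonging", "esteem"];
const needs = ["esteem", "physiological", "belonging", "safety"];

// Compare items by their position in the template
needs.sort((a, b) => order.indexOf(a) - order.indexOf(b));
console.log(needs); // ["physiological", "safety", "belonging", "esteem"]
```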
Part of learning to program is progressively developing the sensibility to stop writing the same things over and over again, and knowing when to abstract portions of code that run multiple times into their own dedicated container; a container I can reference multiple times in the future.
Good programming lies in a person’s ability to identify and work with ever more sophisticated versions of this idea of abstraction, optimization, and simplification, while also keeping in mind the program’s efficiency (how many steps are taken) and how much space it requires (space meaning literal memory). The careful balance of these forces is the life-long learning experience of programming.
One of the tools used to achieve this is memoization. In short, memoization is the idea of storing the result of a procedure — a piece of code — so that if that procedure would otherwise run again and again to yield the same result while the program runs, I can instead store that result somewhere and access it as many times as needed without having to run the code again.
Calculating a Fibonacci number — a number that results from the sum of the previous two numbers in the sequence — is a good example to demonstrate the utility of memoization. In essence, the Fibonacci sequence:
The process of traversing through this sequence is most efficiently done using a recursive function.
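The recursive version, assuming the usual convention that fib(0) = 0 and fib(1) = 1:

```javascript
// Plain recursive Fibonacci: each number is the sum of the previous two
function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(5)); // 5
```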
But this recursive solution, even though simple in design, creates multiple function calls on the same number to yield the same result. Take fib(5):
In a simple fib(5), there are multiple calls on 3, 2, and 1. Now imagine calling fib(546731). This is where memoization comes in. To reiterate, memoization is the idea of storing the result of a function call for later use. This can be done with a simple key/value dictionary where, every time I call fib on a number for the first time, I’ll store the number and its result in the dictionary for later.
That means that each number will now only be computed once. In all future calls after the first one, the fib function will use the value already stored instead of running itself again.
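A memoized fib, using a plain object as the key/value dictionary (one possible sketch):

```javascript
// Same recursion, but with a dictionary of already-computed results
function fib(n, memo = {}) {
  if (n in memo) return memo[n]; // reuse a stored result
  if (n < 2) return n;
  memo[n] = fib(n - 1, memo) + fib(n - 2, memo); // store it the first time
  return memo[n];
}

console.log(fib(50)); // 12586269025
```

Without the memo, fib(50) would make billions of redundant calls; with it, each number from 0 to 50 is computed exactly once.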
In a nutshell, that’s it. The idea of memoization isn’t exclusive to Fibonacci. It rather comes down to grasping it as an approach to be used in problems where the same piece of code is being run again and again to yield the same result.