Take for instance, a table—what is a table?
It's where you share a conversation, work on a project, eat a meal, write a story, keep the books you're reading, leave your keys when you come home, stack the letters you haven't yet opened.
But a table isn't what we do on it.
A table is a multi-layered object made of wood; that wood, an intricate pattern of fibers; those fibers, an intricate structure of molecules, atoms, and, eventually, pure energy.
Your table is all these things.
Your computer is just like the table.
The folders and files on your desktop are like the binders and papers on your desk, the books on your Kindle are like the books on your table, that word document you have open is the digital version of your notebook.
The objects on your computer seem so real that the computer itself has become invisible. Instead, you see text, images, favorites, todos, emails, work assignments, websites, and the people you interact with.
Learning to code is to strip away the objects and see the computer for what it really is.
Like the table, the computer is made of many layers. An image isn't quite like an analog photo, a movie isn't quite like the movies of the old days.
These digital objects are collections of numbers that your computer turns into the visual experience you're familiar with on screen. From the words in a word document, to a movie on YouTube, to a conversation on FaceTime, everything is numbers; everything in your computer is data.
Take a movie, for example.
A movie is made of moving images, and an image is made of squares of color, but what makes up a color—say, orange?
Yellow and red (obviously), but a computer doesn't process color the way we do. On a screen, every color is a mix of red, green, and blue, each expressed as a numerical value between 0 and 255. What we perceive as orange might be 243 red, 83 green, 45 blue.
Therefore, what for us is a movie is, for a computer, billions of numbers.
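To make this concrete, here is a small Python sketch of the idea: an image as a grid of numbers, and a movie as a stack of such grids (the pixel values are illustrative, not from any real photo):

```python
# A tiny 2x2 "image": each pixel is (red, green, blue), each channel 0-255.
orange = (243, 83, 45)
white = (255, 255, 255)

image = [
    [orange, white],
    [white, orange],
]

# A movie is just many such grids shown in sequence: at 24 frames per
# second, a two-hour film is 24 * 60 * 60 * 2 of them.
frames_in_two_hour_film = 24 * 60 * 60 * 2
print(frames_in_two_hour_film)  # 172800
```

Multiply those 172,800 frames by the pixels in each frame and the three numbers per pixel, and you arrive at the billions of numbers behind a single film.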
Take another example, your desktop.
In the same way that numbers represent a movie, your desktop stands for a series of internal processes and programs. Your cursor and your folder (seemingly separate objects) aren't in actuality separate at all.
It doesn't end there.
Code might seem like the end of the line, but it too is a human-friendly representation of lower-level machine instructions, all of which can eventually be reduced to ones and zeros, the basic expression of an electrical signal.
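Even a single word illustrates this reduction. A Python sketch:

```python
# Even text is numbers underneath: each character has a numeric code,
# and each number is ultimately stored as ones and zeros.
word = "table"
codes = [ord(ch) for ch in word]          # characters as numbers
bits = [format(n, "08b") for n in codes]  # numbers as binary

print(codes)  # [116, 97, 98, 108, 101]
print(bits)   # ['01110100', '01100001', '01100010', '01101100', '01100101']
```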
Nothing is quite what it seems.
I'm a designer of technology.
A while back, I took it upon myself to learn programming from first principles, with the goal of becoming a designer-software-engineer who could build his own products and start-ups.
But I've recently realized that, on this adventure of learning to code, the product I'm building is my blog. In turning myself into a software engineer, I'm falling in love with explaining technology. In retrospect, this is no surprise; it's what I've always done: make sense of complex things and explain them.
It's taken me a while to admit it, but I love explaining technology, especially to non-technical people. I'm often caught in conversation, explaining the magic of procedural abstraction, how functions are a great mental model outside of programming, and how computers are more than what they seem.
I have learned and re-learned the principles of programming from different schools of thought. I keep coming back to the finish line only to begin again, ever more fascinated.
Maybe I'll still become a software or artificial intelligence engineer. Maybe I'll still end up at Stanford or MIT as an adult student. But right now, I'm compelled to articulate the magic I see in programming for the friends and relatives I'm often in conversation with: the adult who believes he's just not a math person, or the grandma who grew up fascinated by computers. I believe there is a way to capture the magic of code and reframe it as a new way of thinking.
I've awakened to the idea that my blog may be my start-up. I've jotted down three guiding principles for this possibility:
1. Write to the beginner and the non-technical person.
2. Explain things as clearly as possible, through crystal clear language and visual design.
3. Never regurgitate facts, but understand them deeply enough to articulate them from a place of experience.
I'm not the obvious choice for the task: I was tortured into getting good grades in math, I dropped out of fine-arts school, I have a BA in fashion design, and most of my skill set is self-taught. Yet I'm compelled to tell this story.
I'm writing this post, not as a proclamation, but as a time capsule, an admission of my own interests, no matter how big or small, to revisit in a distant future.
Logic design is the discipline of designing computer chips. These chips sit at the foundation of what a computer is and does. The mechanisms inside them manage electrical signals.
A gate is simply an elementary chip, and a complex chip is built from simpler chips, or gates. In its modularity, it's all akin to LEGO.
A chip has two parts: its interface and its architecture.
The interface is made of inputs, an output, and a broad idea of its functionality. The architecture, in turn, is made of smaller building blocks that make the functionality possible. The interface is the what, the architecture the how.
A chip can only have one interface; but it can have many possible architectures, some more efficient than others. This internal architecture of a chip is what the discipline of logic design concerns itself with.
A reminder: we're still very much in the world of binary. Every input and output will always be a zero or a one.
Now, some practical examples.
AND(a, b, c)
Let's take a three-input AND gate.
This means, as expressed in the truth table, that this gate only outputs 1 when all three inputs are 1. This, in essence, is a slightly more complex version of the elementary AND gate.
The challenge now is: how can we build this three-input AND gate using only elementary logic gates (our building blocks)? We can do this with two AND gates. Let's deconstruct the function into an expression of this architecture.
This, in turn, translates into the diagram. The outer AND is the gate on the right, and the inner AND serves as input to it. The inputs a, b, and c are completely interchangeable.
As specified, given that our entire architecture is made of AND gates, only when all inputs are 1 will the output be 1.
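The deconstruction above can be sketched with plain functions standing in for gates (a Python sketch, not real chip logic; the function names are mine):

```python
from itertools import product

# An elementary two-input AND gate.
def AND(a, b):
    return a & b

# The three-input AND gate, built from two elementary AND gates:
# AND3(a, b, c) = AND(AND(a, b), c)
def AND3(a, b, c):
    return AND(AND(a, b), c)

# The truth table: the output is 1 only when all three inputs are 1.
for a, b, c in product([0, 1], repeat=3):
    print(a, b, c, "->", AND3(a, b, c))
```

Of the eight possible input combinations, only 1 1 1 produces a 1.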
This is a composite gate; a gate made of other gates. Once built, composite gates can also become the elementary gates for more complex chips.
Another example: the Xor gate, or exclusive OR gate. This gate outputs 1 when exactly one of its inputs is 1. The Xor gate can be expressed using the following truth table.
The Xor gate can be expressed using the canonical representation: (a AND NOT b) OR (b AND NOT a), one term for each of the two rows in the truth table above that have 1 as the output.
This can be mapped onto a full boolean expression.
And into the following diagram. Each input feeds two different AND gates, once in its normal state and once negated (NOT); the outputs of those AND gates are then fed into the final OR gate.
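That architecture can be sketched the same way, as a composition of elementary gates (a Python sketch; the function names are mine):

```python
# Elementary gates as plain functions.
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

# Xor as a composite gate: (a AND NOT b) OR (b AND NOT a)
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(b, NOT(a)))

# Truth table: output is 1 only when exactly one input is 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```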
Xor's architecture is composite and its interface simple. Two inputs, one output, and a generic icon to express it.
This is the essence of logic design. To design an architecture of elementary gates that takes a given input and matches it to the desired output.
It's pure logic.
I just realized why the AND and OR gates map to multiplication and addition in boolean algebra. We’re essentially inputting a zero or a one into a mathematical expression.
In the case of AND, multiplication makes it so that as long as there’s a 0 in the input we’ll get a 0 in the output. This is because anything multiplied by 0 is, well, zero.
In the case of OR, addition makes it so that as long as there’s a 1 in the input, we’ll end up with a 1 in the output. This is because we have at least one 1 in the input to sum onto the result.
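A quick sketch of this arithmetic view in Python (capping OR's sum at 1 with `min` is my own way of keeping the result a single bit):

```python
# AND behaves like multiplication, OR like (capped) addition,
# once inputs are restricted to 0 and 1.
def AND(a, b):
    return a * b          # any 0 in the input forces a 0 output

def OR(a, b):
    return min(1, a + b)  # any 1 in the input forces a 1 output

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```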
Ah, math is fun!
Functional programming is an idea, a way of approaching programming, that borrows from mathematics and its idea of what a function is.
In computer science, a function can be defined as a bundle of code that does something—it mutates a data collection, it updates a database, it logs things onto the console, etc. If we want, we can even make it do many of these things at once. A function, in computer science, is a set of procedures that get given a name and can be passed around and invoked when needed.
In mathematics, despite sharing the same name, a function has a stricter definition. A function is a mapping between an input and an output. It does one thing, and one thing only, and no matter what you give it, it always produces the same result. In addition to this mapping, the function will never mutate the input. It produces the output based on what we pass it.
What functional programming is (at a high level) is the use of these ideas in computer programming. It is a way of thinking and approaching a problem. In functional programming, we reduce a problem to small, single-purpose functions that we can assemble like LEGO blocks.
This can be boiled down to three core principles: 1) A function will always only look at the input; 2) A function will always produce an output; 3) All data structures are immutable.
The beauty here is that given, say, a collection of numbers, we can run it through a very complex set of functions and still be sure that our data remains exactly the same in the end.
The function only mutates values inside its scope, but anything coming from the outside remains the same.
In functional programming, there's an emphasis on clarity, both syntactical and of purpose. Each block has one purpose and nothing else. Below, in Swift, I created a function that multiplies all numbers in an Array by 10. This function is created in generic form and added as an extension to Array.
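The original Swift snippet isn't reproduced in this excerpt; as a sketch of the same idea, an equivalent pure function in Python might look like this (the function name is mine):

```python
def multiplied_by_ten(numbers):
    """Return a NEW list with every number multiplied by 10.

    The input list is never mutated: same input, same output, no
    side effects -- a pure function.
    """
    return [n * 10 for n in numbers]

data = [1, 2, 3]
result = multiplied_by_ten(data)
print(result)  # [10, 20, 30]
print(data)    # [1, 2, 3] -- the original is untouched
```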
From the outside, the function is named as descriptively as possible so that anyone else interacting with it can see the what without having to deal with the how.
We don’t need to understand the function in order to use it. We call it and, no matter how complex its procedures, it should always produce the same output.
The benefit is that each function can be made and tested in isolation, since it does just one thing. And over time, the function can be optimized and made a lot better without ever impacting the code where it is called.
But in a world of pure functions, there's still a need to bridge into the real, messier world of side effects. These are anything from logging to the console, to writing to a file, to updating a database, or any other external process. The key is to separate all side effects from the pure logic of a program and isolate them.
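One way to picture this separation, as a Python sketch (function names are mine): a pure "core" computes, and a thin impure "shell" is the only place that touches the outside world.

```python
# Pure core: same input, same output, nothing mutated.
def summarize(numbers):
    return {"count": len(numbers), "total": sum(numbers)}

# Impure shell: the one place where printing (a side effect) happens.
def report(numbers):
    summary = summarize(numbers)
    print(f"{summary['count']} numbers, totalling {summary['total']}")

report([1, 2, 3])  # prints "3 numbers, totalling 6"
```

All of the logic lives in `summarize`, which can be tested in isolation; `report` does nothing but hand the result to the console.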
Lastly, with functional programming, there is an incessant creation of copies of the same data, since functions do not modify their input. This problem is solved by persistent data structures.
It's when I'm preoccupied with who I think I am that I run out of moves. It's like mistaking a tree for the forest. I am the forest, not a tree. Jung says, "since the growth of personality comes out of the unconscious, which is by definition unlimited, then personality cannot be limited either."
I finally grab a pen and just make a drawing because there is nothing else to do. I make a drawing, not to advance my practice, nor to define myself, nor to grow, but to just make.
A reminder that I is a lot bigger than Frank.