Functional programming is a way of approaching programming that borrows from mathematics and its idea of what a function is.
In computer science, a function can be defined as a bundle of code that does something—it mutates a data collection, it updates a database, it logs things onto the console, etc. If we want, we can even make it do many of these things at once. A function, in computer science, is a set of procedures that get given a name and can be passed around and invoked when needed.
In mathematics, despite sharing the same name, a function has a stricter definition. A function is a mapping between an input and an output. It does one thing, and one thing only, and no matter what you give it, it always produces the same result. In addition to this mapping, the function will never mutate the input. It produces the output based on what we pass it.
At a high level, functional programming is the use of these ideas in computer programming. It is a way of thinking about and approaching a problem. In functional programming, we reduce a problem to small, single-purpose functions that we can assemble like LEGO blocks.
This can be boiled down to three core principles: 1) A function will always only look at the input; 2) A function will always produce an output; 3) All data structures are immutable.
The beauty here is that given, say, a collection of numbers, we can run it through a very complex set of functions and still be sure that our data remains exactly the same in the end.
The function only mutates values inside its scope, but anything coming from the outside remains the same.
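As a minimal sketch of that contract in JavaScript (the names here are my own, for illustration):

```javascript
// A pure function: it reads its input and returns a new array,
// leaving the original untouched
const double = (numbers) => numbers.map((n) => n * 2);

const input = [1, 2, 3];
const output = double(input);
// output is [2, 4, 6]; input is still [1, 2, 3]
```

No matter how many times `double` is called, and no matter what else the program does, the same input array always yields the same output, and the input is never modified.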
In functional programming, there’s an emphasis on clarity, both syntactical and of purpose. Each block has one purpose and nothing else. Below, in Swift, I created a function that multiplies all numbers in an Array by 10. This function is created in generic form and added as an extension to Array.
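The original Swift snippet isn’t reproduced here; as a sketch, the same idea translated to JavaScript (the descriptive name is my own stand-in, not the author’s code):

```javascript
// Not the original Swift extension; a JavaScript sketch of the same
// idea: return a new array with every number multiplied by 10
const multipliedByTen = (numbers) => numbers.map((n) => n * 10);

multipliedByTen([1, 2, 3]); // [10, 20, 30]
```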
From the outside, the function is named as descriptively as possible so that anyone else interacting with it can see the what without having to deal with the how.
We don’t need to understand the function in order to use it. We call it and, no matter how complex its procedures, it should always produce the same output.
The benefit is that each function can be made and tested in isolation since it does just one thing. And over time, the function can be optimized and made a lot better without it ever impacting the code where it is called.
But, in a world of pure functions, there's still a need to bridge into the real and more messy world of side-effects. These are anything from logging to the console, writing to a file, updating a database, or any external process. The key here is to separate all side-effects from the pure logic of a program and isolate them.
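A tiny sketch of that separation in JavaScript (the names are hypothetical): the pure part computes a value, and the impure shell performs the side-effect.

```javascript
// Pure core: builds the message; same input always gives the same output
const formatGreeting = (name) => `Hello, ${name}!`;

// Impure shell: the side-effect (writing to the console) is isolated here
const greet = (name) => console.log(formatGreeting(name));

greet("Ada"); // prints "Hello, Ada!"
```

The pure `formatGreeting` can be tested in isolation; only the thin `greet` wrapper touches the outside world.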
Lastly, with functional programming, there is an incessant creation of copies of the same data, given that functions do not modify their input. This problem is solved by persistent data structures, which reuse the unchanged parts of a structure between versions instead of copying everything.
“Give me a sheet of paper and something to write with, and I’ll turn the world upside down.” — Nietzsche

Programming, Jul 2021
I’ve had a somewhat liberating epiphany recently: The methods built into a programming language can also be written using simple procedures like if-else statements and loops. Built-in methods exist to bundle complicated procedures into one simple function — this makes programming easier. But they are also simply solutions to common problems, so a programmer doesn’t have to program them over and over again.
In design, there are thousands of nuts and bolts to every tool. Sketch and Figma are full of smart details meant to make a designer’s life easier. But I also know, by virtue of my experience, that all I need is a blank canvas, the rectangle tool, type and color. Not to be overly simplistic, but even that could be reduced to a sheet of paper and a pen.
Tools are helpful, but the work happens in thinking about and experimenting on a problem enough that eventually a solution starts to emerge — despite the tool. My crazy insight is that programming seems to be the same.
Splice (JavaScript’s Array.prototype.splice) is a robust method. With one single line of code, I can shorten an array, remove items at specific index positions, or even insert multiple new items at a location. It works in place, that is, on the array itself.
In my own version of splice, I built a couple of smaller methods that perform all the major procedures: shortening an array, deleting one or more items at a particular location, and inserting any number of elements passed to the function sequentially into the array.
A method to shorten the array
Methods to delete an item(s)
A method to insert an item(s)
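The original snippets aren’t shown, so these are hypothetical re-implementations of the helpers described above (names and signatures are my own). None of them use splice itself, and all work in place, like splice does:

```javascript
// Shorten an array to a given length, in place
function shorten(arr, length) {
  arr.length = Math.max(0, length);
  return arr;
}

// Delete `count` items starting at `index`, shifting the rest left
function deleteAt(arr, index, count = 1) {
  if (index >= arr.length) return arr; // nothing to delete past the end
  for (let i = index; i + count < arr.length; i++) {
    arr[i] = arr[i + count];
  }
  arr.length = Math.max(index, arr.length - count);
  return arr;
}

// Insert any number of items at `index`, shifting the rest right
function insertAt(arr, index, ...items) {
  for (let i = arr.length - 1; i >= index; i--) {
    arr[i + items.length] = arr[i]; // make room
  }
  for (let i = 0; i < items.length; i++) {
    arr[index + i] = items[i];
  }
  return arr;
}

deleteAt([1, 2, 3, 4, 5], 1, 2); // [1, 4, 5]
insertAt([1, 4, 5], 1, 2, 3);    // [1, 2, 3, 4, 5]
shorten([1, 2, 3, 4, 5], 2);     // [1, 2]
```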
All in all, lots to learn – but that was fun.
I have often resorted to Google to get a quick random number generating function, without investing a moment to understand its simple mechanics and saving myself future searches. Being a little dyslexic, I'd get all confused with the max - min + 1 + min portion of the function. Well, today is the day I untangle this mess of mins and maxes.
Math.random() produces a decimal between 0 (inclusive) and 1 (exclusive). If I were looking for a number between 0 and 9, I could simply shift the decimal point one place by multiplying the result by 10, then flooring it.
To make the 10 inclusive, I could increase the multiplier by 1, multiplying by 10 + 1 instead. This would widen the range of possible random numbers from 0-9 to 0-10.
What this means is that I'm multiplying the result of the random function by the count of possible numbers I'm looking for, adding one to make the upper bound inclusive.
To get a random number between 0 and 75, I can:
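For instance, in JavaScript (Math.random() returns a decimal from 0 up to, but not including, 1):

```javascript
// 75 steps plus 1 makes 76 possible outcomes: the integers 0 through 75
const random = Math.floor(Math.random() * (75 + 1));
```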
What if I want a number between a minimum and a maximum, say between 25 and 150? There are two parts to the process. First, I need to determine the range my number should fall within, that is, the span between 25 and 150. That can be found by subtracting 25 from 150. I'm therefore looking for one random number out of 125 possible numbers.
Then, I want my possible random number to fall between 25 and 150. To get one of those 125 numbers starting at 25, all I have to do is add 25 to my random number. 🤯
In essence, this is a random number multiplied by a range of numbers and bumped up by the starting point number.
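Put together, a sketch of the full helper in JavaScript:

```javascript
// A random integer between min and max, both ends inclusive
function randomBetween(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

randomBetween(25, 150); // some integer from 25 to 150
```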
The good news is that the sort function does take a callback function with two arguments representing the two items being compared. In an array of numbers, if the difference between the two arguments is a positive number, the first is bigger than the second; if the difference is negative, the second is bigger than the first.
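A quick sketch of that comparator in JavaScript (without it, sort compares items as strings, which would place 200 before 40):

```javascript
const numbers = [40, 1, 5, 200];
numbers.sort((a, b) => a - b); // a negative result sorts a before b
// numbers is now [1, 5, 40, 200]
```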
This approach can also be harnessed for more unique examples. Below, for example, I have an array of human needs that I want to sort, and in the callback I provide a correct order template that is then used for the sorting:
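The post's actual needs array and template aren't shown, so the values below are stand-ins (loosely Maslow's hierarchy), but they illustrate the technique:

```javascript
// The template that defines the correct order (assumed example values)
const order = ["physiological", "safety", "belonging", "esteem", "self-actualization"];
const needs = ["esteem", "physiological", "belonging"];

// Each item's position in the template decides its position in the sort
needs.sort((a, b) => order.indexOf(a) - order.indexOf(b));
// needs is now ["physiological", "belonging", "esteem"]
```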
Part of learning to program is progressively developing the sensibility to stop writing the same things over and over again, and to know when to abstract portions of code that run multiple times into their own dedicated container; a container I can reference multiple times in the future.
Good programming lies in a person’s ability to identify and work with ever more sophisticated versions of this idea of abstraction, optimization, and simplification, while also keeping in mind the program’s efficiency (how many steps are taken) and how much space it requires (space meaning literal memory). The careful balance of these forces is the life-long learning experience of programming.
One of the tools used to achieve this is memoization. In short, memoization is the idea of storing the result of a procedure — a piece of code — so that if, while the program runs, that procedure is called again to yield the same result, I can access the stored result as many times as needed without having to run the code again.
Calculating a Fibonacci number — a number that results from the sum of its previous two counterparts — is a good example to demonstrate the utility of memoization. In essence, the Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, and so on.
The process of traversing this sequence is most naturally expressed with a recursive function.
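In JavaScript, that recursive function is only a few lines:

```javascript
// fib(n) returns the nth Fibonacci number, counting from fib(0) = 0
function fib(n) {
  if (n < 2) return n; // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2);
}

fib(10); // 55
```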
But this recursive solution, even though simple in design, creates multiple function calls on the same number to yield the same result. Take fib(5): it calls fib(4) and fib(3), fib(4) in turn calls fib(3) and fib(2), and so on down the tree.
In a simple fib(5), there are multiple calls on 3, 2, and 1. Now imagine calling fib(546731). This is where memoization comes in. To reiterate, memoization is the idea of storing the result of a function call for later use. This can be done with a simple key/value dictionary where, every time I call fib on a number for the first time, I’ll store the number and its result in the dictionary for later.
That means that each number will now only be computed once. In all future calls after the first one, the fib function will use the value already stored without running itself again.
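A sketch of that memoized version in JavaScript, using a plain object as the dictionary:

```javascript
// Same function, with an object caching each result keyed by its input
const memo = {};
function fib(n) {
  if (n in memo) return memo[n]; // already computed: reuse it
  const result = n < 2 ? n : fib(n - 1) + fib(n - 2);
  memo[n] = result; // store for every future call
  return result;
}

fib(50); // 12586269025
```

Each of the 51 values from fib(0) to fib(50) is computed exactly once; the naive version above would make billions of redundant calls for the same input.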
In a nutshell, that’s it. The idea of memoization isn’t exclusive to Fibonacci. It rather comes down to grasping it as an approach to be used in problems where the same piece of code is being run again and again to yield the same result.
I recently went down the rabbit hole of why the computer, the supreme calculator, can have floating point number inaccuracy. Why summing 0.1 and 0.2 isn't quite 0.3.
I’m concocting this post to clarify, for myself, why that is. The reasons are interesting and the mechanism behind it fascinating.
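The effect is easy to reproduce in any JavaScript console:

```javascript
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
```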
Everything is either a 0 or a 1
Computers operate on binary. This means that everything in a computer is stored in memory in the form of 1s and 0s — a text message, a sum in a calculator, an image file on my desktop, a website on the browser.
A number like 1023 or 3.14159265359 – a base 10 number made with digits matching the fingers on my hands – is already an abstraction. That means that the simple number 1023 is stored as literal electricity (1) and the lack thereof (0).
When I break a unit down into three equal parts - 1/3 or 0.3333 - the truth is that this isn’t just a zero and some threes, but 0.33333... to infinity. Binary has the same blind spots in different places: one tenth (0.1) has no finite base-2 representation, just as one third has none in base 10. A computer, on the other hand, has finite memory and cannot store an infinite number. At some point, it literally runs out of space and cuts infinity short.
The standard that governs the binary encoding of a floating point number — the IEEE 754 specification — is nothing short of genius in its simplicity. What it does is reduce the number to a near-enough finite value that fits the allotted space. Consider the space between 1 and 2: it holds infinitely many values (1.1, 1.01, 1.001, and so on without end).
What IEEE does is it stores an idea of where the number is located, somewhere in the chasm between two numbers.
It does so in three levels. First, it determines whether the number is positive or negative — the sign. Then, it stores the integer range where the number is expected to be — the exponent. Lastly, it stores where in between those two numbers the number is — the mantissa.
Finally, the level of precision depends on the amount of space available. The example above is what is called a half precision floating point number; it takes 16 bits. But the format can go up to 256 bits. The bigger the space, the more granular the number’s location can be.
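As a sketch of those three fields in practice, JavaScript can expose the bit layout of a 64-bit (double precision) float with a DataView:

```javascript
// Store 0.1 as a 64-bit IEEE 754 double, then slice out its three fields
const buffer = new ArrayBuffer(8);
const view = new DataView(buffer);
view.setFloat64(0, 0.1);

const bits = view.getBigUint64(0);
const sign = bits >> 63n;                 // 1 bit: 0 means positive
const exponent = (bits >> 52n) & 0x7ffn;  // 11 bits, biased by 1023
const mantissa = bits & 0xfffffffffffffn; // 52 bits locating the value

// 0.1 is stored as roughly 1.6 × 2^-4, so the biased exponent field
// holds 1023 - 4 = 1019, and the mantissa pins down the "1.6..." part
console.log(sign, exponent); // 0n 1019n
```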
That is how a computer edges on infinity, and also how it does so with an endearing level of imprecision.