Understanding the Fundamentals of “Big O” in Programming

Dell Wilson Jr
The Startup
4 min read · Jul 23, 2020


In my past few months as a software development student, I have often come across development concepts that pull my attention. Most recently, I stumbled across one of the craziest stories. This story starts a little over ten years ago, when a company stuck with slow Internet came up with the bright idea of holding a race: transferring a mass amount of data (via the most time-consuming Internet on earth) versus a carrier pigeon, with their office about 50 miles away as the target. Of course, like many of you, I thought two things: A) they have way too much time on their hands, and B) there is no way a pigeon is beating the Internet.

Like all good stories, the underdog won out, and the pigeon reigned victorious. My instant reaction was, “This is ridiculous. I’ve never heard of anything like this. No way this happened!” But then I thought about the time I tried to send someone a 30-minute video, and how complicated it was to upload and transfer.

The whole story of the pigeon and the data transfer led me to a concept called algorithmic efficiency, aka “Big O.”

Thinking back on that pigeon now, it’s easy to see how true to type it is. No matter what the pigeons were carrying, their travel time (unless we factor in inclement weather) will always be the same. The Internet is much different: the amount of time it takes to send something can vary depending on how much data is sent.

So how does this apply? Well, Big O describes how an algorithm’s runtime scales with respect to its input size.
The pigeon’s transfer time would be described as O(1), while the data-dependent Internet transfer time would be described as O(N).

I know, I know: “Well, what is that?”, “What does that mean?”, and a plethora of other questions come to mind, but let’s not overthink this.

Here are some common examples of Big O and its standard orders of growth.

O(N)

One thing to note is that Big O always assumes the upper bound, i.e., the worst case in which an algorithm performs its maximum number of iterations.

O(N) describes an algorithm whose runtime grows linearly, in direct proportion to the size of the data set. A classic example is a search that could return true after reading the very first element (best case), but might have to read every item before returning false (worst case).
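As a rough sketch (the function and variable names here are my own, not from the story), a linear search like this is O(N): in the worst case it touches every element once.

```python
def contains(items, target):
    # Worst case: we scan every element before giving up -> O(N)
    for item in items:
        if item == target:
            return True  # best case: found on the first read
    return False

# Found early vs. not found at all
print(contains([3, 7, 1, 9], 9))   # True
print(contains([3, 7, 1, 9], 5))   # False
```

Even though the loop may exit early, Big O labels this O(N) because it assumes the worst case.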

O(N²)

O(N²) represents an algorithm whose runtime is directly proportional to the square of the input size, typically because of nested iterations over the same input. Adding more nested loops only increases the complexity: three or four levels of nesting would make it O(N³) or O(N⁴), and so on.

O(1)

O(1) is as simple as it gets. O(1) represents an algorithm that always takes the same amount of time regardless of input size.

O(2ᴺ)

O(2ᴺ) represents an algorithm whose runtime doubles with every additional element in the input. The function recursively calls itself twice for each input number until it is less than or equal to one. An example of this is computing the Fibonacci sequence, in which each number is the sum of the two preceding numbers.
(1, 1, 2, 3, 5, 8, 13, 21, etc.)
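The naive recursive version of Fibonacci looks like this (a standard textbook sketch, not code from the article); each call spawns two more calls, which is what makes the runtime grow roughly like 2ᴺ.

```python
def fib(n):
    # Each call branches into two recursive calls -> roughly O(2^N)
    if n <= 1:
        return n  # base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Try fib(35) and you will feel the exponential growth; this is why O(2ᴺ) algorithms become impractical very quickly.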

Lastly, we have O(log n), which I won’t get into here, but I suggest diving into it a little deeper (I will post a link below). If you are at all into algorithms, or into analyzing their overall scalability and efficiency, then I suggest learning Big O!
