Big O Notation and the Nonsense Therein.

William Dale
5 min read · Apr 19, 2018


In my relatively short time as a programmer, I have noticed that two of the biggest considerations when writing code are memory usage and speed. I started thinking about memory when I was learning about arrays. I was constantly assigning and reassigning the same array information to different variables in my program. Of course there were much better ways to complete the task at hand, but that aside, I was focused on the unnecessary amount of memory each array assignment was taking up. On a grand scale, the amount of memory each array assignment takes up is minuscule, but conceptually I knew there was a way to use less memory. “Down the line, when I become a brilliant programmer,” I thought, “how important will memory usage be?”

Along with memory, speed is vital to good code. If your program takes too long to run, no matter how pretty and effective it is, it loses a lot of its value. Who would want to use a program that takes minutes to complete? Even seconds are torturous for some programs. I considered how long a minute really is and wondered whether I was being a bit over the top about how annoying slow programs can be. A minute? It seems like such a small increment of time. Have you ever sat down and waited to see how long a minute actually is? Anyone who has ever done a plank knows that a minute can last FOREVER! Who wants to wait that long for the score of the Giants game?

Digging into this quandary, I came across Big O notation. Big O notation, as far as I can tell, is a way to compare the efficiency of algorithms. These algorithms are compared mainly in terms of their input size, N. When we compare algorithms this way, we do so independent of machine capability. Since the performance of a machine is variable and can be enhanced or degraded, we want to separate it out of our comparison so that we have as few varying factors as possible (as with scientific experiments, we have our control group and change only one variable at a time).

[Graph: Big-O complexity chart from bigocheatsheet.com, plotting operations against input size N for each common complexity class.]

N in the graph above, as we know, is the input size for the algorithm being compared, and the chart does a pretty good job of visually representing the relative efficiency of each Big O function.

O(1):

Big O of one means that no matter how big your input is, the running time remains the same.

arr = ["a", "b", "c", "d", "e"]
arr[3] #=> "d"

No matter how big your array is, accessing a particular index will always take the same amount of time. If you compare Big O notation to a race, O(1) is a racer who can teleport. Whether the teleportation takes a second or 100 years, they will reach any location in the race in the same amount of time.
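Array indexing is not the only constant-time operation. Looking up a key in a Ruby hash is also O(1) on average, no matter how many pairs the hash holds (the hash below is just an example of my own):

scores = { "giants" => 21, "cowboys" => 17 }
scores["giants"] #=> 21, found in the same amount of time whether
                 #   the hash has two entries or two million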

O(n):

Big O of n is a linear relationship: as the input size increases, the time to run the algorithm increases at the same rate with respect to the input.

def big_o_of_n(n)
  # One pass over the input: each element is visited exactly once
  (0..n).each do |el|
    puts el
  end
end

As you put in larger and larger numbers for n, the running time grows linearly. Sticking with our race analogy, this is a racer who runs the race at the same pace the entire time. As the distance (input) grows, the time it takes the racer grows proportionally.

O(n²)

Big O of n squared is a quadratic relationship: as we increase the input size, the run time grows with the square of the input, so each additional element costs more than the one before it. More generally, n raised to any fixed power k gives a polynomial run time, O(n^k); n squared is just the most common case.

def nonsense(n)
  # Nested loops: for every one of the n+1 outer values,
  # we walk through all n+1 inner values
  (0..n).each do |el1|
    (0..n).each do |el2|
      puts el1 * el2
    end
  end
end

nonsense(100) #=> a lot of numbers. roughly 100^2 of them

With the race analogy, imagine there is a train at the beginning of an open race track, next to an infinitely large pile of track. The train can move by laying its own track in front of itself as it goes, but it can only carry one piece at a time. This means the train must return to the pile after every piece it lays. The further the train gets from the pile, the further it has to travel back for the next piece and then travel forward again to lay it.

O(2^n)

Big O of 2 to the n is pretty bad. It seems to be an “avoid at all costs” sort of categorization for algorithms. It essentially means that the run time doubles with each additional element of input.

def fibonacci(n)
  if n == 1
    1
  elsif n == 2
    1
  else
    # Each call spawns two more recursive calls
    fibonacci(n - 1) + fibonacci(n - 2)
  end
end

Generally, if a recursive algorithm makes two calls to itself for each input, as fibonacci above does for n-1 and n-2, it will be on the order of O(2^n). Back to racing, this is like a racer who is treating the race as training, so for every step they take, they go back to the beginning and repeat all of the steps they have already done. (A quick experiment follows the list below.)

1) 1 step up, 1 step back
2) 2 steps up, 2 steps back, 1 step up, 1 step back
3) 3 steps up, 3 steps back, 2 steps up, 2 steps back, 1 step up, 1 step back
4) 4 steps up, 4 steps back, 3 steps up, 3 steps back, 2 steps up, 2 steps back, 1 step up, 1 step back
5) ...you get it
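To see the blow-up for yourself, here is a small sketch (the counter and method name are mine, purely for illustration) that counts how many calls the naive fibonacci above actually makes:

# A rough sketch: count the calls made by naive recursive fibonacci.
$calls = 0

def counted_fibonacci(n)
  $calls += 1
  if n == 1 || n == 2
    1
  else
    counted_fibonacci(n - 1) + counted_fibonacci(n - 2)
  end
end

[10, 15, 20].each do |n|
  $calls = 0
  counted_fibonacci(n)
  puts "fibonacci(#{n}) made #{$calls} calls"
end

Each time n grows by five here, the call count multiplies by roughly eleven. Exponential growth in action.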

O(log n)

Big O of log n is a great place for an algorithm to be. Mathematically, log base 2 of n is the inverse of 2^n. What this function says is that as you double the input size, the number of steps needed only increases by 1.

16/2 = 8, 8/2 = 4, 4/2 = 2, 2/2 = 1 (4 steps)
32/2 = 16, 16/2 = 8, 8/2 = 4, 4/2 = 2, 2/2 = 1 (5 steps)
256/2 = 128, 128/2 = 64, 64/2 = 32, 32/2 = 16, 16/2 = 8, 8/2 = 4, 4/2 = 2, 2/2 = 1 (8 steps)

In a race, this is like having a spaceship that increases its acceleration the further it goes. O(log n) is a very efficient class of algorithms and one of my favorites.
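The classic O(log n) example is binary search over a sorted array: each comparison throws away half of what is left, so doubling the array adds only one extra step. Here is a minimal sketch (the method name and test array are mine, just for illustration):

# Binary search: each comparison halves the remaining search space
def binary_search(sorted_arr, target)
  low = 0
  high = sorted_arr.length - 1
  while low <= high
    mid = (low + high) / 2
    return mid if sorted_arr[mid] == target
    if sorted_arr[mid] < target
      low = mid + 1   # target can only be in the upper half
    else
      high = mid - 1  # target can only be in the lower half
    end
  end
  nil # target not found
end

binary_search([1, 3, 5, 7, 9, 11], 7) #=> 3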

There is a lot more to be said about Big O notation; this barely scratches the surface. Having analogies and examples makes it a bit easier to visualize, but there is more to analyzing algorithms with Big O. Hopefully I can come back and add more, or write another post about it. Who knows? Let's hope it's O(1) rather than O(2^n) until I'm back.
