Time & Space Complexity

Isa Weaver
Apr 22, 2017


Big-O Cheat Sheet

“We want you to come up with the first solution that enters your mind. Then optimize it in regards to time and space complexity,” a Facebook recruiter told me while preparing me for interviews. Unfortunately, I never formally studied computer science, so I was completely lost on the terms time and space complexity.

Time complexity describes how an algorithm’s running time grows in relation to the size of its input. Space complexity is the measurement of storage an algorithm will need. Commonly, people use Big O notation to analyze the time and space complexity of an algorithm. However, there are other notations, such as Little o, Big Omega, and Big Theta. Big O describes an upper bound on the growth of the running time (which is why it is typically used for the worst case), while Big Omega describes a lower bound. Big Theta is a tight bound: it applies when the Big O and Big Omega bounds match. Lastly, Little o is a strict upper bound, meaning it has a stronger bound condition than Big O. In interviews I could just use Big O notation and be fine.
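For reference, these notations have standard formal definitions (a textbook formulation, not part of the original post):

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n)) \\
f(n) = o(g(n))      \iff \forall\, c > 0,\ \exists\, n_0 : f(n) < c\,g(n) \ \text{for all } n \ge n_0
```

The only difference between Big O and Little o is that Big O needs the inequality to hold for *some* constant c, while Little o needs it for *every* constant c.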

Calculating the Complexity of an Algorithm

var example = 1;

No matter how large N is, this statement takes the same amount of time. Constant: O(1)

for ( var i = 0; i < N; i++ ) {
    // constant-time work here
}

When N doubles, so does the run time of the algorithm, so the running time is proportional to N. Linear: O(n). Space complexity is O(1), because we aren’t allocating any additional storage as N grows.

for ( var i = 0; i < N; i++ ) {
    for ( var j = 0; j < N; j++ ) {
        // constant-time work here
    }
}

Running time of the loops is proportional to the square of N. When N doubles, the running time increases by a factor of four. Quadratic: O(n²)
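You can see the quadratic growth by counting iterations directly (a small sketch; the helper name `countQuadratic` is mine, not from the original examples):

```javascript
// Count how many times the inner body of the nested loops runs.
function countQuadratic(N) {
  var count = 0;
  for (var i = 0; i < N; i++) {
    for (var j = 0; j < N; j++) {
      count++; // constant-time work
    }
  }
  return count;
}

countQuadratic(100); // → 10000
countQuadratic(200); // → 40000: doubling N quadruples the work
```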

// Binary search in a binary search tree
public Node search(Node root, int key)
{
    if (root == null || root.key == key)
        return root;
    if (root.key > key)
        return search(root.left, key);
    return search(root.right, key);
}

The running time of the algorithm is proportional to the number of times N can be divided by 2, because the algorithm discards half of the remaining tree with each step (assuming the tree is balanced). Logarithmic: O(log(n)). Space complexity is O(log(n)), because each recursive call adds a frame to the call stack, and the recursion goes at most as deep as the height of the tree. (If the tree is unbalanced, both bounds degrade to O(n).)
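The same idea works on a sorted array, and an iterative version (a sketch I’m adding for contrast, not from the original post) keeps the O(log(n)) running time while cutting the space to O(1), since there is no recursion stack:

```javascript
// Iterative binary search over a sorted array.
// Returns the index of key, or -1 if it is not present.
function binarySearch(sorted, key) {
  var lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    var mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === key) return mid; // found it
    if (sorted[mid] < key) lo = mid + 1; // discard the left half
    else hi = mid - 1;                   // discard the right half
  }
  return -1; // not found
}

binarySearch([1, 3, 5, 7, 9], 7); // → 3
```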

void quicksort ( int list[], int left, int right )
{
    if ( left >= right )
        return; /* base case: one element or fewer */
    int pivot = partition ( list, left, right );
    quicksort ( list, left, pivot - 1 );
    quicksort ( list, pivot + 1, right );
}

The algorithm splits the input in half roughly log(n) times, and each level of splitting does a linear amount of partitioning work, so the algorithm is a combination of linear and logarithmic: O(n log(n)) on average. Space complexity is O(log(n)) for the recursion stack, again on average.
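The C snippet above assumes a partition helper that isn’t shown. Here is a runnable sketch of the whole thing in JavaScript, for consistency with the earlier examples, using a Lomuto-style partition (one common choice among several):

```javascript
// Partition list[left..right] around the last element as pivot.
// Returns the pivot's final index.
function partition(list, left, right) {
  var pivot = list[right];
  var i = left; // next slot for an element smaller than the pivot
  for (var j = left; j < right; j++) {
    if (list[j] < pivot) {
      var tmp = list[i]; list[i] = list[j]; list[j] = tmp;
      i++;
    }
  }
  var tmp2 = list[i]; list[i] = list[right]; list[right] = tmp2;
  return i;
}

function quicksort(list, left, right) {
  if (left < right) { // base case: one element or fewer is already sorted
    var p = partition(list, left, right);
    quicksort(list, left, p - 1);
    quicksort(list, p + 1, right);
  }
  return list;
}

quicksort([5, 2, 9, 1, 7], 0, 4); // → [1, 2, 5, 7, 9]
```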

As you write more code, you will need to judge whether you want your algorithms to run faster at the cost of using more space, or the reverse.
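A classic example of that trade-off (my illustration, not from the original post) is memoizing a recursive function: caching Fibonacci results costs O(n) extra space but drops the running time from exponential to O(n).

```javascript
// Fibonacci with memoization: trade O(n) cache space for O(n) time.
// Without the cache, the same recursion takes exponential time.
function fibMemo(n, cache) {
  cache = cache || {};
  if (n < 2) return n;
  if (cache[n] !== undefined) return cache[n]; // reuse a stored result
  cache[n] = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  return cache[n];
}

fibMemo(40); // → 102334155, instantly; the uncached recursion makes ~10⁹ calls
```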
