# Programming and Time Complexity

## Why does it matter and why do I need to know about it?

Let’s start off with a definition so we can begin wrapping our heads around the concept.

“In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input. The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms.”

Ok, great, we have a definition. But what does that truly mean, and why do we care? As the definition says, we want to quantify how long our algorithm takes to run as its input grows. The reason is that in programming, the size of your input really matters. If you could improve a quadratic function to a linear one, you could be returning information to your users orders of magnitude faster on large inputs.

#### Ok, I kind of get why it’s important. How do I find it?

It’s one thing to understand why something is important or why we need it. Utilizing and implementing it is a whole other story. Let’s go step by step.

The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:

```c
eyasu;
```

Is constant. The running time of eyasu will not change in relation to N.

```c
for ( i = 0; i < N; i++ )
    eyasu;
```

Is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.

```c
for ( i = 0; i < N; i++ ) {
    for ( j = 0; j < N; j++ )
        eyasu;
}
```

Is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases by a factor of four.
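To see that doubling N quadruples the work, we can count the basic operations directly. This is just a sketch: `count_quadratic_ops` is a hypothetical helper of my own, not part of the article’s code, and counting iterations stands in for measuring real running time.

```c
/* Count the basic operations performed by the doubly nested loop
 * for a given N. Each inner iteration stands in for one "eyasu". */
long count_quadratic_ops(int N) {
    long ops = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            ops++;  /* one unit of work per inner iteration */
    return ops;
}
```

For example, going from N = 100 to N = 200 takes the count from 10,000 to 40,000 operations, four times as many even though N only doubled.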

```c
while ( low <= high ) {
    mid = ( low + high ) / 2;
    if ( target < list[mid] )
        high = mid - 1;
    else if ( target > list[mid] )
        low = mid + 1;
    else
        break;
}
```

Is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.
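The loop above is only a fragment, so here is one way it might be wrapped into a complete, runnable function. The name `binary_search` and the array/length parameters are my assumptions; the core loop is the same halving logic, and it assumes the array is already sorted.

```c
/* Binary search over a sorted array: returns the index of target,
 * or -1 if it is not present. Each iteration halves the remaining
 * search range, so the loop runs O(log N) times. */
int binary_search(const int list[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids overflow of low + high */
        if (target < list[mid])
            high = mid - 1;
        else if (target > list[mid])
            low = mid + 1;
        else
            return mid;
    }
    return -1;
}
```

Searching a million-element sorted array this way takes at most about 20 comparisons, since 2^20 is already over a million.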

```c
void quicksort ( int list[], int left, int right )
{
    if ( left >= right )  /* base case: 0 or 1 elements, nothing to sort */
        return;

    int pivot = partition ( list, left, right );
    quicksort ( list, left, pivot - 1 );
    quicksort ( list, pivot + 1, right );
}
```

Is N * log ( N ) on average. The recursion goes about log ( N ) levels deep, and each level does work proportional to N across all of its calls, thus the algorithm is a combination of linear and logarithmic. (With consistently bad pivot choices, quicksort degrades to quadratic in the worst case.)
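The quicksort snippet above calls a `partition` function that is never defined. A complete, runnable version might look like the sketch below, assuming the Lomuto partition scheme (last element as pivot) — one common choice, not necessarily what the original author had in mind.

```c
static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition (an assumed implementation): elements <= the pivot
 * end up to its left; returns the pivot's final index. */
static int partition(int list[], int left, int right) {
    int pivot = list[right];  /* last element as pivot */
    int i = left - 1;
    for (int j = left; j < right; j++)
        if (list[j] <= pivot)
            swap(&list[++i], &list[j]);
    swap(&list[i + 1], &list[right]);
    return i + 1;
}

void quicksort(int list[], int left, int right) {
    if (left >= right)  /* base case: 0 or 1 elements, nothing to sort */
        return;
    int pivot = partition(list, left, right);
    quicksort(list, left, pivot - 1);
    quicksort(list, pivot + 1, right);
}
```

Note the base-case check before recursing; without it the recursion would never terminate.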

Generally, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half each step is logarithmic. There are other Big O measures, which the cheat sheet I’ve linked to above goes into. When considering Big O notation, it is worth mentioning that people usually look at the best, average, and worst case scenarios; when you are asked for the time complexity, though, you are typically being asked for the worst case. There are also other ways of expressing time complexity, but Big O is what I’ve run into the most so far. I hope that made time complexity a little bit easier to understand.

Like what you read? Give Eyasu Felleke a round of applause.
