# Time Complexity Analysis of Dynamic Data Structure


Whenever we need to analyze the time complexity of a particular operation, most of the time we go with worst-case analysis.

## Worst-case time analysis

We find the case in which the operation executes the maximum number of steps, and then calculate the cost for n such operations.

For example, when the element we are searching for is the last one in the array (or not present at all), linear search checks all n elements; this is the worst case. Based on it, the time complexity of linear search is defined as O(n).
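To make the worst case concrete, here is a minimal linear search sketch (the class and method names are illustrative):

```java
public class LinearSearch {
    // Returns the index of target in array, or -1 if it is absent.
    // Worst case: target is the last element or missing, so all n
    // elements are checked, which is why linear search is O(n).
    static int linearSearch(int[] array, int target) {
        for (int i = 0; i < array.length; i++) {
            if (array[i] == target) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1, 5};
        System.out.println(linearSearch(data, 5)); // last element: n comparisons
        System.out.println(linearSearch(data, 8)); // absent: also n comparisons
    }
}
```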

Now let us apply worst-case analysis to a dynamic data structure operation. For example, consider the ith insertion operation in a dynamic array.

• We insert the element in a single operation when the capacity of the array is greater than i. Time complexity → O(1)
• We insert the element with a total of i operations when the array is already full (its size equals its capacity), since at that point we need to grow the array and copy the existing elements. Time complexity → O(i)

Based on this worst-case analysis, the time complexity of the dynamic array's insertion operation would be O(n), but this is too pessimistic, because most of the time we insert an element with O(1) cost.

We need a better approach, and that is amortized analysis.

## Amortized analysis

Amortized analysis averages the cost of a sequence of operations over that sequence. Here we are not averaging over all possible inputs as in average-case analysis, nor over random inputs as in probabilistic analysis. We simply average the costs of one sequence of operations.

Let's look at how amortized analysis can be done for the dynamic array insertion operation.

Here is the insertion operation of a dynamic array in Java:

```java
public class DynamicArray {
    private int[] array;
    private int size;

    public DynamicArray() {
        this.array = new int[0];
        this.size = 0;
    }

    public void insert(int num) {
        if (this.array.length == 0) {
            this.array = new int[1];
        }
        // Array is full: double the capacity and copy every element over.
        if (this.size == this.array.length) {
            int[] temp = new int[2 * this.array.length];
            for (int i = 0; i < this.array.length; i++) {
                temp[i] = this.array[i];
            }
            this.array = temp;
        }
        this.array[this.size] = num;
        this.size = this.size + 1;
    }

    // length method, delete method, search method ...
}
```
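To see the occasional expensive copy in action, here is a sketch that instruments a simplified version of the same insert logic with a copy counter (the `InstrumentedArray` name and `copyCount` field are illustrative additions, not part of the original class):

```java
public class InstrumentedArray {
    private int[] array = new int[0];
    private int size = 0;
    public long copyCount = 0; // elements copied during resizes (for illustration)

    public void insert(int num) {
        if (this.array.length == 0) {
            this.array = new int[1];
        }
        if (this.size == this.array.length) {
            int[] temp = new int[2 * this.array.length];
            for (int i = 0; i < this.array.length; i++) {
                temp[i] = this.array[i];
                this.copyCount++; // each copied element costs one operation
            }
            this.array = temp;
        }
        this.array[this.size] = num;
        this.size = this.size + 1;
    }

    public static void main(String[] args) {
        InstrumentedArray a = new InstrumentedArray();
        for (int i = 1; i <= 16; i++) {
            a.insert(i);
        }
        // Resizes happened when the size was 1, 2, 4, and 8,
        // so 1 + 2 + 4 + 8 = 15 elements were copied in total.
        System.out.println(a.copyCount);
    }
}
```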

If we call the cost of the ith insertion Ci, it can be defined as follows:

```
Ci =  i   if i is an exact power of 2
      1   otherwise
```

When the array is full, which happens at the insertions whose index is an exact power of 2 (the 1st, 2nd, 4th, 8th, ... insertions), we need to copy all (i - 1) existing elements into a new double-sized array. This is expensive: the ith insertion costs i (the i - 1 elements are copied, and then the ith element is added to the array).

When the array is not full, we only need to place the element in the next free slot, which costs a single operation.

Our sequence of costs per operation then looks like this:

`1 + 2 + 1 + 4 + 1 + 1 + 1 + 8 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 16 + 1 + 1 + .............`
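The cost model and the 3n bound can be checked with a short sketch (the `cost` helper is an illustrative name for the Ci formula above):

```java
public class AmortizedCost {
    // Cost of the i-th insertion under the model above:
    // i when i is an exact power of 2 (resize + copy), 1 otherwise.
    static long cost(long i) {
        boolean powerOfTwo = (i & (i - 1)) == 0; // true for 1, 2, 4, 8, ...
        return powerOfTwo ? i : 1;
    }

    public static void main(String[] args) {
        long n = 1_000_000;
        long total = 0;
        for (long i = 1; i <= n; i++) {
            total += cost(i);
        }
        // The total stays under 3n, so the average per insertion is below 3.
        System.out.println(total + " <= " + 3 * n);
        System.out.println((double) total / n);
    }
}
```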

So we need to average this sequence of operation costs.

For n insertions, the total cost is at most n (every insertion pays at least 1) plus the doubling costs 1 + 2 + 4 + ... up to n, which sum to less than 2n. So n insertions take roughly 3n time, and when we average it:

```
average time = total time for n operations / n = 3n / n = 3
```

Simply put, n insertions cost O(n) in total, so the amortized time complexity of a single insertion is O(1). This is not what plain worst-case analysis suggested. The key point is that when we analyze operations on dynamic data structures, it is better to use amortized analysis, because the high-cost operations (adjusting the structure) happen only occasionally, while most operations are cheap.

Amortized analysis is the better approach for dynamic data structures.

Reference

Cormen, Leiserson, Rivest, Stein, Introduction to Algorithms, 3rd Edition (2009). ISBN: 978-81-203-4007-7