As per the formal definition, “an algorithm is a finite sequence of instructions to solve some problem”.
To understand the above definition, we should first understand the meaning of the word “problem” used above. A problem is anything you are trying to solve computationally. It can be as simple as:
- Finding the maximum of two numbers
- Arranging elements in ascending or descending order.
- Finding the shortest path in a map or a graph.
- Moving a drone from one point to another.
- Suggesting the correct spelling for an incorrect word.
- and so on.
Now, let’s pick one problem and try to understand the definition of an algorithm.
Problem: Find the maximum of three given numbers
If we have to provide an algorithm for the above problem, we need a way to represent it. One common way is “pseudocode”: an informal description of an algorithm that lists each step in simple English instead of programming-language-specific statements. The following pseudocode solves our problem.
1. read n1, n2, n3
2. compare n1 and n2
3. store the maximum from the last step as M
4. compare M and n3
5. store the maximum from the last step as M
6. print M
As you can see, the above pseudocode has only 6 instructions, by following which a computer can solve the problem. It also matches the definition of an algorithm: it is a finite sequence of instructions, and it solves a problem.
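The pseudocode above translates almost line for line into a real program. Here is a minimal sketch in Python (the function name `max_of_three` and the sample inputs are my own choices for illustration):

```python
def max_of_three(n1, n2, n3):
    # Steps 2-3: compare n1 and n2, store the maximum as M
    m = n1 if n1 > n2 else n2
    # Steps 4-5: compare M and n3, store the maximum as M
    if n3 > m:
        m = n3
    # Step 6: print M (here we return it so the caller decides)
    return m

print(max_of_three(4, 9, 7))  # prints 9
```

Returning the result instead of printing it inside the function keeps the algorithm reusable; the final `print` plays the role of step 6.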
Can a problem be solved by multiple different algorithms?
Yes, a problem can be solved by more than one algorithm. For example, arranging a given set of numbers in ascending order can be done by any of several sorting algorithms, such as Bubble Sort, Quick Sort, or Heap Sort. However, selecting the most optimal algorithm is tricky, as it depends on many factors.
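To make this concrete, here is a minimal Bubble Sort sketch, one of the several algorithms that solve the same sorting problem (Python’s built-in `sorted()` is another, entirely different one):

```python
def bubble_sort(nums):
    a = list(nums)  # work on a copy, leave the input untouched
    # After each outer pass, the largest remaining element has
    # "bubbled up" to position `end`, so the inner loop shrinks.
    for end in range(len(a) - 1, 0, -1):
        for i in range(end):
            if a[i] > a[i + 1]:  # swap adjacent out-of-order pair
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

data = [5, 2, 9, 1]
print(bubble_sort(data))                   # prints [1, 2, 5, 9]
print(bubble_sort(data) == sorted(data))   # prints True
```

Both calls produce the same ordering, which is exactly the point: one problem, multiple algorithms.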
Now, consider a scenario where we have a “Problem X” and three known algorithms, say Algorithm 1, Algorithm 2, and Algorithm 3, to solve it. One approach to finding the best algorithm is to measure the execution time and the memory used by each algorithm for the same input on the same system. Let’s assume we got the following data for each algorithm:
- Algorithm 1: execution time = 5.0 seconds, memory = 1024 MB
- Algorithm 2: execution time = 6.0 seconds, memory = 756 MB
- Algorithm 3: execution time = 7.0 seconds, memory = 128 MB
Looking only at execution time, we might say that Algorithm 1 is better, but if we have to solve the same problem on a device with limited memory, Algorithm 3 becomes preferable. For a device with a moderate amount of memory, Algorithm 2 may be the best trade-off.
In the above scenario, we were able to select the right algorithm using data such as execution time and memory used, while assuming the same input size and the same system for every comparison. In reality, this method is not practical, as it is difficult to gather this information for every combination of input size and system. That leaves us with the following question.
If we have multiple algorithms for the same problem, how do we compare them and select the best one?
In computer science, asymptotic notations are used to compare algorithms. With asymptotic notations, we compare how the execution time or the memory requirement grows with respect to the input size, ignoring machine-level details and the actual execution time of the algorithm.
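A rough way to see this growth without any clocks is to count the comparisons an algorithm performs as the input size n increases. The sketch below (my own illustration; comparison counts stand in for “work done”) contrasts two ways of finding the maximum: a linear scan, which does about n comparisons, and a naive pairwise check, which does on the order of n² comparisons:

```python
def max_by_scan(nums):
    """Linear scan: roughly n - 1 comparisons for n numbers."""
    m = nums[0]
    comparisons = 0
    for x in nums[1:]:
        comparisons += 1
        if x > m:
            m = x
    return m, comparisons

def max_by_pairwise(nums):
    """Check each candidate against every other: roughly n^2 / 2 comparisons."""
    comparisons = 0
    for i, candidate in enumerate(nums):
        is_max = True
        for j, other in enumerate(nums):
            if i == j:
                continue
            comparisons += 1
            if other > candidate:
                is_max = False
                break
        if is_max:
            return candidate, comparisons

for n in (10, 100, 1000):
    nums = list(range(n))
    _, c1 = max_by_scan(nums)
    _, c2 = max_by_pairwise(nums)
    print(f"n={n}: scan={c1} comparisons, pairwise={c2} comparisons")
```

As n grows by a factor of 10, the scan’s count grows by about 10x while the pairwise count grows by about 100x. Asymptotic notations capture exactly this kind of growth-rate difference, independent of the machine the code runs on.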
In my next article, I will explain asymptotic notations in detail.