Discrete Mathematics and its application — Part 2A

Komolafe Tolulope
5 min read · Apr 3, 2016


This is part 2 of the series in which I document my journey learning Data Structures and Algorithms, which I began in early March 2016. So far, the journey has been more exciting than I expected. I had thought it would be boring, based on my past experiences trying to learn algorithms: I had tried several times, and the different concepts failed to hold my attention. And now? Yay!!! I finally understand asymptotic notation (my favourite part so far). As promised, let me share some of what I have learnt with you.

What are Algorithms?

An algorithm is a set of steps that a computer takes to accomplish a task. You can also think of an algorithm as a road map for accomplishing a given, well-defined task.

Algorithms are central to any computation or computer program. And since a computer program is a set of logical rules put together in a structured way, we can say: Data Structures + Algorithms = Computer Programs.
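To make the idea of "a set of well-defined steps" concrete, here is a minimal sketch in Python: an algorithm for finding the largest number in a list. (The function name `find_max` is mine, chosen for illustration.)

```python
def find_max(numbers):
    """Return the largest value in a non-empty list.

    The algorithm, as steps: assume the first item is the
    largest, then walk through the rest, updating whenever
    a bigger item is found.
    """
    largest = numbers[0]
    for n in numbers[1:]:
        if n > largest:
            largest = n
    return largest

print(find_max([3, 41, 7, 26]))  # → 41
```

Every algorithm, however complex, has this same shape: a finite sequence of unambiguous steps that takes some input and produces a result.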

You might ask, “why do I need to learn how to write or use algorithms?”. Well, knowledge of algorithms will help you identify potential solutions to a problem based on existing solutions to similar problems, and help you design an efficient solution for your problem (in terms of time and performance).

The best favor developers can do for themselves is to understand the existing algorithms in computer science today. Apart from expanding your toolkit, it means you are not stuck reinventing the wheel whenever you encounter a problem. I am all for writing and developing new algorithms, but it makes sense to choose from the vast array of existing ones where applicable.

The problem with reinventing the wheel every time you want to solve a problem is that 1. it will take you longer (wasted time), and 2. you will almost certainly get it wrong several times (wasted mental effort).

Some real-life applications of algorithms are:

  • Compression algorithms, which are used by video calling and streaming apps.
  • Route-finding algorithms, which Google Maps uses to suggest routes from one point to another.
  • The minimax algorithm, which you can use if you want to develop a game that lets a user play against the computer.
  • Searching and sorting algorithms, which Google’s search engine uses.
  • Frequency algorithms, which Gmail and Slack use to find the people you message most often and suggest them as soon as you start typing.
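As a rough sketch of the last item, here is what a frequency-based suggestion algorithm could look like in Python. This is my own illustrative version (the function name and the toy message log are assumptions, not how Gmail or Slack actually implement it):

```python
from collections import Counter

def suggest_contacts(message_log, prefix):
    """Suggest contacts matching `prefix`, most-messaged first.

    `message_log` is a list of recipient names, one entry per
    message sent. Counter tallies how often each name appears,
    and most_common() yields names in descending frequency.
    """
    counts = Counter(message_log)
    return [name for name, _ in counts.most_common()
            if name.lower().startswith(prefix.lower())]

log = ["ada", "bola", "ada", "ada", "bola", "bisi"]
print(suggest_contacts(log, "b"))  # → ['bola', 'bisi']
```

The moment you start typing "b", the most frequent matching contact is suggested first.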

What makes a good algorithm?

  1. It solves a problem correctly.
  2. It solves it efficiently.

To choose the best algorithm to solve a particular problem, you have to be able to determine the efficiency of various algorithms. One of the most common ways of determining the efficiency of Algorithms is a technique called Asymptotic Analysis.

What is Asymptotic Analysis?

Asymptotic Analysis is a study of algorithms based on their input size and the rate of growth of the running time (how fast the running time grows with the size of the input).

We perform this analysis because algorithms perform differently on different inputs. Given two algorithms, A and B, it is possible that algorithm A performs better than B on one set of inputs, while B performs better on another.

A lot of the theory is about how time and space requirements grow as the problem size increases, and it matters because naive choices are often very fast on small datasets but do not perform efficiently on large ones.

Asymptotic notations are mathematical tools used to represent the time complexity of algorithms in asymptotic analysis.

The following notations are commonly used to represent the time complexity of algorithms:

  1. Big-Θ (Theta) notation: this symbol is used when we have nailed the running time to within constant factors above and below, i.e. when the running time falls within a range bounded by constant multiples of the same function. Whenever we use big-Θ notation, we are saying that we have an asymptotically tight bound on the running time. For example, when we say that a particular running time is Θ(n), we are saying that once n gets large enough, the running time is at least k1⋅n and at most k2⋅n for some constants k1 and k2.
  2. Big-O notation: this refers to the upper bound of an algorithm. For example, consider insertion sort. It takes Θ(n) in the best case and Θ(n^2) in the worst case, so we can safely say the running time of insertion sort cannot be worse than n^2 but can be better; we write this as O(n^2). Consider a less complex example: suppose you work 7 hours every day on average, and you walk up to a friend and say "I work no more than 12 hours every day". Your statement is correct, because 12 hours is an upper bound on 7 hours.
  3. Big-Ω (Omega) notation: this is the opposite of Big-O notation. It is used to define the lower bound of an algorithm. If we want to say that an algorithm takes at least a certain amount of time (its best-case time complexity), we use the Omega notation. Omega is the least used of the three notations, because most of the time we do not care about the best-case time complexity of an algorithm; we are mainly concerned with its worst case.
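The insertion sort example above can be checked empirically. Here is a minimal Python sketch (the step-counting instrumentation is my own addition) that counts comparisons on an already-sorted input versus a reverse-sorted one:

```python
def insertion_sort_steps(items):
    """Insertion-sort a copy of `items`, counting comparisons."""
    a = list(items)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # shift larger elements right until key's slot is found
        while j >= 0:
            steps += 1  # one comparison
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, steps

n = 100
_, best = insertion_sort_steps(list(range(n)))          # already sorted
_, worst = insertion_sort_steps(list(range(n, 0, -1)))  # reverse sorted
print(best, worst)  # → 99 4950
```

With n = 100 elements, the sorted input needs n − 1 = 99 comparisons (the Ω(n) best case), while the reversed input needs n(n − 1)/2 = 4950 (the O(n^2) worst case): the lower and upper bounds described above, made visible.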

To stop this write-up from becoming long and tedious, I will end it here. In the next part of this series, I will talk about how we compare functions using asymptotic notations and how we determine the time complexity of some algorithms.

You can also check out this link to learn more about asymptotic notations:

https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation

I hope you learned something new from this article. If you have any questions, you can send me a mail at betkomuniq@gmail.com or send me a DM on twitter @betkom.
