An Intuitive Introduction to Algorithms

Evan Wireman
Apr 22 · 6 min read
Image by Mohamed Hassan from Pixabay

What is an algorithm?

As a programmer, it is important to have a wide arsenal of tools at one’s disposal. This may include programming languages, text editors, packages, etc. Perhaps most importantly, it is crucial to have a firm understanding of the various types of algorithms that exist. These will become one’s most important set of tools in tech interviews and in the workplace.

For those who do not know, an algorithm is not a challenging concept to understand. Algorithms are simply well-defined, step-by-step solutions to problems. For example, there is an exercise in many computer science classes where students are asked to explain how to make peanut butter and jelly sandwiches. Shockingly, the process of doing so is an algorithm. An MIT summer camp handout provides insight as to what this algorithm might look like:
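The handout itself isn’t reproduced here, but to give a flavor of how explicit such instructions must be, here is a hypothetical sketch of the first few steps in Python. The wording of each step is my own, not taken from the actual MIT handout:

```python
# Hypothetical first steps of a peanut-butter-and-jelly "algorithm".
# The step wording is illustrative, not quoted from the MIT handout.
def pbj_steps():
    return [
        "Pick up the jar of peanut butter with your dominant hand",
        "Twist the lid counterclockwise until it separates from the jar",
        "Pick up the knife by its handle",
        "Insert the flat end of the knife into the peanut butter",
        "Drag the knife across one slice of bread to spread the peanut butter",
        # ...and so on, one unambiguous step at a time
    ]

for number, step in enumerate(pbj_steps(), start=1):
    print(f"Step {number}: {step}")
```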

As you can see, each step is incredibly explicit, leaving no room for error. This is because algorithms are typically interpreted by computers, which are objectively dumb. For instance, say you give a computer a list of 7 things and ask for the 8th item. An intelligent human would merely reply, “There are only seven things on the list, thus I cannot give you the eighth thing.” A computer, however, would likely break.
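A quick Python sketch of exactly this failure mode: asking for the 8th item of a 7-item list doesn’t get a polite refusal, it raises an error unless we explicitly handle it.

```python
items = ["a", "b", "c", "d", "e", "f", "g"]  # a list of 7 things

# A human would say "there is no 8th item"; Python raises IndexError,
# since valid indices only run from 0 through 6.
try:
    eighth = items[7]
except IndexError:
    eighth = None  # without this handler, the program simply crashes

print(eighth)
```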

On the plus side, computers are fast, which allows them to run really complicated algorithms more efficiently than humans ever could. For instance, a computer can sort a list of 100 things in a fraction of a second, provided it received explicit instructions detailing how to do so.

What types of algorithms exist?

Computers are asked to do a wide variety of tasks, and are expected to do them quickly: sorting a contact list by each contact’s last name, searching for a specific video on YouTube, and so on. Since before the first computer was ever invented, scholars have been researching algorithms: designing novel, efficient ways to solve problems. I’ll go over two of the more well-known algorithmic paradigms:

  • The brute force approach

Imagine you are looking for the word “enigma” in the dictionary. An intelligent person would simply flip to the section containing words starting with the letter e, and go from there. Pretty efficient, right? Unfortunately, computers do not understand English too well. Thus, one may propose using the brute force approach to solve this problem.

The brute force approach would entail looking at every word on every page, until the word enigma came up. This is quite an inefficient task. In fact, we can say that, if there are n words in the dictionary before the word enigma, we would have to perform n operations (an operation, in this case, would be checking to see if we reached the word enigma) before finding the correct word.
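A minimal sketch of this brute force search in Python, counting comparisons so we can see the n operations directly (the tiny six-word dictionary is just for illustration):

```python
def sequential_search(words, target):
    """Check every word in order, counting each comparison as one operation."""
    operations = 0
    for word in words:
        operations += 1
        if word == target:
            return True, operations
    return False, operations  # looked at all n words and never found it

dictionary = ["apple", "banana", "cat", "dog", "enigma", "fish"]
print(sequential_search(dictionary, "enigma"))  # found after 5 comparisons
print(sequential_search(dictionary, "zzzzzz"))  # all 6 words checked, not found
```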

Let’s imagine the worst case scenario. Say you wanted to search for the word “zzzzzz.” Obviously, this word does not exist, but what if you wanted to double check? Using the brute force approach, this would require you to flip through every page, looking at every word, until you reach the last word in the dictionary (Zyzzyva) in order to determine that “zzzzzz” is not a word. Similar to the enigma example, this would require n operations if we assume there are n words in the dictionary. There must be a more efficient way to solve this problem.

  • Divide and conquer

Now let’s suppose that the dictionary you are looking at is ordered, as most are. This means that you could split the dictionary into pieces and determine if the words in each piece occur before or after “zzzzzz” alphabetically.

The workflow would look like this:

Eventually, you will split the dictionary so many times that you will be left with either one or two words. In either case, the process of determining if any of the remaining words is “zzzzzz” is trivially easy. Unlike with the brute force approach, you were able to reach this point while skipping a large portion of the dictionary.

The approach of dividing a workload into smaller and smaller pieces falls under the name divide and conquer. This algorithmic paradigm utilizes what is referred to as recursion, where a problem is defined in terms of a simpler version of itself. In this case, the problem is searching for “zzzzzz” within the whole dictionary. The “simpler version” would be searching for “zzzzzz” within a smaller section of the dictionary.
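Here is a sketch of that recursive divide-and-conquer search in Python, commonly known as binary search (the function names and the toy dictionary are my own):

```python
def binary_search(words, target, low=0, high=None):
    """Search a sorted list by recursively halving the range under consideration."""
    if high is None:
        high = len(words) - 1
    if low > high:              # nothing left to split: the word isn't there
        return False
    mid = (low + high) // 2
    if words[mid] == target:
        return True
    if words[mid] < target:     # target sorts after the midpoint word:
        return binary_search(words, target, mid + 1, high)  # search right half
    return binary_search(words, target, low, mid - 1)       # search left half

dictionary = ["apple", "banana", "cat", "dog", "enigma", "fish"]
print(binary_search(dictionary, "enigma"))  # True
print(binary_search(dictionary, "zzzzzz"))  # False
```

Note that this only works because the dictionary is sorted; on an unsorted list, discarding half the words at each step would be unjustified.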

Just how much more efficient is this solution? In order to determine this, we need to calculate how many operations we performed. For simplicity, we will say that steps 1 through 4 in the workflow above count as one operation. We can do this because the number of trivial operations performed at each step does not contribute much towards inefficiency: it’s the number of steps taken that makes code run slow.

So at each step, we perform a single operation. How many steps are we taking? Well, at each step we either find “zzzzzz” or divide the size of the dictionary in half. Thus, if the full dictionary contains n words, the number of words still under consideration shrinks at each step: n, then n/2, then n/4, then n/8, and so on, until only one word remains.

With a little math manipulation (halving n repeatedly until one word remains takes k halvings, where n/2^k = 1, i.e. k = log n), we realize that as opposed to requiring n operations in the worst case, we now only require log n. So, if the dictionary contained 1,000,000 words, the brute force approach would require 1,000,000 operations in the worst case, where divide and conquer would only require about 20. Not bad!
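A quick sanity check of that arithmetic, using the standard library’s math.log2:

```python
import math

n = 1_000_000  # words in our hypothetical dictionary

# Brute force worst case: check every one of the n words.
brute_force_ops = n

# Divide and conquer worst case: halve the dictionary until one word
# remains, which takes about log2(n) halvings.
divide_and_conquer_ops = math.ceil(math.log2(n))

print(brute_force_ops, divide_and_conquer_ops)  # 1000000 20
```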

Note that the log, in this case, would be log base 2. However, similar to how the amount of operations performed at each step can be disregarded, so can this. For an explanation of why, feel free to check out this link: https://cs.stackexchange.com/questions/78083/why-is-the-log-in-the-big-o-of-binary-search-not-base-2

Why are we only concerned with the worst case?

After talking so much about worst case scenarios, computer scientists probably sound like a group of chronic pessimists. However, this could not be further from the truth.

We worry about the worst case scenario because we hope to minimize its negative effects, or the probability of experiencing it at all. These negative effects can include anything from excessive memory consumption to a long run-time to a program that stalls ad infinitum.

There is a standard notation tied to analyzing worst-case runtimes. It is called Big O notation. In reading this article, you have already encountered the Big O efficiency of two major searching algorithms:

  • The technical name of the brute force searching strategy I outlined is Sequential Search. Recall that, in the worst case, this algorithm will require n steps/operations. Thus, the Big O efficiency of this algorithm is written as O(n).
  • The technical name of the divide-and-conquer strategy is Binary Search. Since it required log n operations in its worst case, its Big O efficiency is O(log n).

Conclusion

And there you have it! You now know two terribly inefficient methods of searching through the dictionary. You also now know how to measure and describe their efficiency. I sincerely hope, however, that you decide to just Google the definitions of words.

Thank you for taking the time to read this. Please let me know if you found this helpful, or if you have any suggestions of computer science topics I should cover in the future. Happy hacking!

CodeX

Everything connected with Tech & Code
