Fastest Crash Course on Algorithms and Data Structures

My head is about to blow up.

Last weekend, I went through most of the basic algorithms and data structures used in everyday programming.

Why did you do that?

Good question! After all, your productivity at work might not be affected by this. However, I believe there are times when it’s critical to be aware of the algorithms you (or an awesome library you rely on) are utilizing under the hood.

This is even more so if your work is related to web application development, where nobody can predict what kind of load it will face in the coming year.

In addition, I’m starting to believe that if you know algorithms and data structures well, you’re more likely to write bug-free code. Oftentimes, bugs come in the form of “untested corner cases”. Experience implementing popular data structures and utilizing them in algorithms develops your sense for testing corner cases.

How did you do that?

I’d say it’s the classic approach: pick a book, and stick to it. This is the book I used.

I recommend this book for non-native English speakers who haven’t taken formal CS courses and want to recap the fundamentals of this field.

I found this book easier to read than some of the alternatives. The book is not trying to be the “ultimate guide” to this field; it provides just the right amount of material to meet your needs (that is, to become aware of the basic algorithms and data structures you’re unconsciously using every day).

OK, tell me what you’ve learned so far

Appreciation towards O(log N) algorithms

The book starts by explaining the run-time and space complexity of various algorithms. I now have more appreciation for O(log N) functions. You will need them when processing massive amounts of data (besides parallelizing the work).
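The classic example of an O(log N) algorithm is binary search: each comparison halves the remaining search space. The book covers this conceptually; here is a minimal sketch of my own in Python, not an implementation from the book:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the search range, so this runs in O(log N).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1  # target must be in the upper half
        else:
            hi = mid - 1  # target must be in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # → 4
```

Searching a million sorted items this way takes about 20 comparisons instead of up to a million, which is where the appreciation comes from.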

References/pointers are critical in data structures

Then the book introduces you to the Linked List, which is one of the simpler data structures. Even though most of the time you’ll use more sophisticated data structures, the Linked List familiarizes you with the concept of pointers/references as a means to traverse a data structure.
You’ll also come to feel that retrieving something from somewhere always comes at a cost.
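To make that concrete, here is a minimal singly linked list sketch in Python (my own illustration, not code from the book). Each node holds a value plus a reference to the next node, and reaching any item means following those references one by one:

```python
class Node:
    """One node in a singly linked list: a value plus a reference to the next node."""

    def __init__(self, value, next=None):
        self.value = value
        self.next = next


def traverse(head):
    """Collect all values by following `next` references; reaching the Nth item is O(N)."""
    values = []
    node = head
    while node is not None:
        values.append(node.value)
        node = node.next
    return values


head = Node("a", Node("b", Node("c")))
print(traverse(head))  # → ['a', 'b', 'c']
```

Notice there is no way to jump straight to the third item: you pay the traversal cost every time.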

How good it feels to do something in O(1)

Next up is Arrays. After going through the Linked List, the Array feels so good because it gives you access to ANY element by referencing its index.
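The contrast with the Linked List is exactly this (a trivial Python sketch of my own, using a Python list to stand in for an array):

```python
fruits = ["apple", "banana", "cherry", "durian"]

# Indexing jumps straight to the element in O(1) — no traversal needed,
# no matter how long the array is.
print(fruits[2])  # → cherry
```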
The following chapter talks about Stacks and Queues. They make it easy to reason about adding and removing items, and they make you think about preserving the order of the items.
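The two structures differ only in which end you remove from, which is what determines the order items come back out. A minimal sketch of my own in Python:

```python
from collections import deque

# Stack: last in, first out (LIFO) — remove from the same end you add to.
stack = []
stack.append("first")
stack.append("second")
print(stack.pop())      # → second  (the most recently added item comes out first)

# Queue: first in, first out (FIFO) — remove from the opposite end.
queue = deque()
queue.append("first")
queue.append("second")
print(queue.popleft())  # → first   (items come out in the order they went in)
```

`deque` is used for the queue because popping from the front of a plain Python list is O(N), while `popleft` on a `deque` is O(1).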

Sorting literally saves your life

Then, finally, the book introduces Sorting; by the time you start reading the Sorting chapter, you’re worried: “How can I possibly retrieve some data without traversing every one of the items? Everything seems to take at least O(N) time! omg I’m doomed…”

Then Sorting comes in like a superhero, especially the kind of sorting that gets the job done in a divide-and-conquer way.
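Merge sort is one such divide-and-conquer sort: split the list in half, sort each half recursively, then merge the two sorted halves. Here is a minimal sketch of my own in Python (the book may use a different divide-and-conquer sort, such as quicksort):

```python
def merge_sort(items):
    """Sort a list in O(N log N) by divide and conquer."""
    if len(items) <= 1:
        return items  # a list of 0 or 1 items is already sorted

    # Divide: split the list in half and sort each half recursively.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Conquer: merge the two sorted halves, always taking the smaller head.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


print(merge_sort([5, 2, 8, 1, 9, 3]))  # → [1, 2, 3, 5, 8, 9]
```

Once the data is sorted, O(log N) retrieval with binary search becomes possible, which is why Sorting feels like the rescue at this point in the book.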

Phew! This is a lot of stuff.

Yeah, I know. Yet I’ve only covered about a third of the content I’ve digested. Things I haven’t covered include:

a. Why a Hash Table can keep data retrieval fast even after many insertions
b. How to keep Recursion from causing a Stack Overflow
c. What relational databases (RDBs) do to keep records sorted (Balanced-tree indexes)

I’ll try to write a sequel to this, but I’m apparently bad at keeping this kind of promise. If you find this interesting, please leave feedback :) That cheers me up a lot.

Thanks always!