I am fascinated by, and curious about, the application of technology in our everyday lives. This is why I was originally interested in programming. On a daily basis, we interact with apps that integrate various types of algorithms into their core and peripheral services, and it is amazing to see the outcomes.
Even more amazing is uncovering the logic (or magic!) powering these apps. This is why I decided to start reading “Grokking Algorithms.” It was such an accessible read! There are so many topics I could summarize in detail. All in all, ten algorithms are discussed, as well as topics such as binary search trees, dynamic and linear programming, MapReduce, etc. I thought I’d provide a topical overview of the interesting algorithms underlying current and relevant technologies.
The Fourier Transform
First of all, what a cool name! And it has a lot of use cases, too. Its power is often explained with a smoothie analogy: given a smoothie, the Fourier transform tells you the ingredients that went into it (a fitting analogy, since its output looks very different from its input). An app like Shazam, which guesses what song is currently playing, leverages the Fourier transform’s ability to separate a song into its individual frequencies. Once a song is separated into frequencies, the bass can be boosted and the treble hidden, and you can then match songs based on their bass! The transform is also used for compressing music by breaking an audio file down into its component notes: it can discern which notes contribute to the overall song and discard any that aren’t significant.
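The frequency-separating idea can be sketched with a naive discrete Fourier transform. This is nothing like Shazam’s production pipeline (which relies on fast FFTs and audio fingerprinting), and the toy two-tone “song” below is invented for illustration, but the core math is the same:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform: magnitude of each frequency bin."""
    n = len(samples)
    return [
        abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))) / n
        for k in range(n)
    ]

# A toy "song" mixing two pure tones: 3 and 10 cycles per window.
n = 64
signal = [
    math.sin(2 * math.pi * 3 * t / n) + 0.5 * math.sin(2 * math.pi * 10 * t / n)
    for t in range(n)
]

spectrum = dft_magnitudes(signal)
# The two strongest bins (ignoring mirror frequencies) recover the tones.
loudest = sorted(sorted(range(n // 2), key=lambda k: spectrum[k])[-2:])
print(loudest)  # → [3, 10]
```

The time-domain samples look like a jumbled wave, yet the spectrum cleanly exposes the two ingredient tones, which is exactly the smoothie-to-ingredients trick.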
Secure Hash Algorithms

This is definitely one family of algorithms with far-reaching influence (I even used one for my Java server). A secure hash algorithm (SHA) generates a fixed-length (and long) string for any input string, and can be used to differentiate between files. Every git commit has a unique SHA, which you can use to jump back to a very specific point in time. The concept of hashing is also important for password storage. When building apps with user authentication, passwords are *never* stored in a database as-is, in case of a security breach. Instead, passwords are *hashed* and then stored; every time you enter your password, it is hashed and compared against the stored value.
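The store-the-hash flow can be sketched in a few lines with Python’s standard hashlib. This is illustration only: a real system would use a salted, deliberately slow scheme such as bcrypt, scrypt, or PBKDF2 rather than a bare SHA-256.

```python
import hashlib

def hash_password(password: str) -> str:
    # Illustration only: production systems use salted, deliberately slow
    # schemes (bcrypt, scrypt, PBKDF2), not a bare SHA-256.
    return hashlib.sha256(password.encode()).hexdigest()

stored = hash_password("hunter2")  # what the database keeps at signup

# At login, hash the attempt and compare it with the stored digest.
print(hash_password("hunter2") == stored)  # → True
print(hash_password("hunter3") == stored)  # → False
```

The database never sees the plaintext password, only the 64-character hex digest, so a leaked table alone does not reveal what users typed.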
The beauty of SHA is that it is one-way: a hacker who obtains the hashed string cannot work backward to the *actual* password. I was also introduced to the term locality insensitive, meaning that changing even one character of the input transforms the hash dramatically, rather than producing a slightly tweaked version. This helps safeguard sensitive information, like passwords, in the event that the data falls into the wrong hands.
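You can see this locality insensitivity (often called the avalanche effect) directly: change one character of the input and the digests agree only at roughly chance level.

```python
import hashlib

a = hashlib.sha256(b"dog").hexdigest()
b = hashlib.sha256(b"dot").hexdigest()  # one character changed

# Compare the two 64-character hex digests position by position.
matches = sum(x == y for x, y in zip(a, b))
print(f"{matches} of 64 hex digits match")
```

With 16 possible hex digits per position, about 4 of the 64 positions match by pure chance; nothing close to 63-of-64, which is what a "nearby" hash would look like.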
K-Nearest Neighbors

When trying to categorize a user, the k-nearest neighbors (KNN) algorithm helps determine a classification. If most of a user’s neighbors are more into horror movies than romance movies, then the user can be classified as a horror moviegoer. That’s right: we’re talking about Netflix’s recommendations. We can think of it as plotting users by similarity. Find the k closest neighbors of Person X, and if they like a certain movie, it’s likely Person X is a fan as well. So if Person X is adjacent to Person Y, their respective favorite movies can be recommended to each other!
Of course, there is more complexity to this, since we have to discern the similarities between users. Users can be plotted on our graph by assigning them coordinates (e.g. how much they like certain genres). Then their ratings of individual movies help determine whether similar people would like those movies as well. So really, the more you rate, the more information there is to compare you with others and thereby determine your recommendations.
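A minimal KNN classifier over a handful of users might look like the sketch below. The names, the (horror, romance) taste coordinates, and the choice of k = 3 are all invented for illustration; real recommenders use far more dimensions and data.

```python
import math

# Hypothetical users plotted by taste: (horror, romance) scores on a 0-5 scale.
users = {
    "Ana":  (5, 1),
    "Ben":  (4, 2),
    "Cleo": (1, 5),
    "Dina": (2, 4),
    "Elio": (5, 0),
}
likes_horror = {"Ana": True, "Ben": True, "Cleo": False, "Dina": False, "Elio": True}

def classify(point, k=3):
    """Label a new user by majority vote among the k nearest plotted users."""
    nearest = sorted(users, key=lambda name: math.dist(point, users[name]))[:k]
    votes = sum(likes_horror[name] for name in nearest)
    return "horror fan" if votes > k / 2 else "romance fan"

print(classify((4, 1)))  # → horror fan
```

A new user at (4, 1) sits closest to Ana, Ben, and Elio, all horror fans, so the vote classifies them as one too.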
Optical Character Recognition
Machine learning always struck me as beyond me. In its simplest definition, machine learning is about making your computer smarter. More and more, we’re seeing how Google can filter spam emails or how Facebook can recognize faces to automatically tag people in photos. (Carina Zona also has an amazing talk about how detrimental and discriminatory such algorithms can be).
For instance, let’s say we want numbers to be recognized in images. OCR starts by going through many images of numbers and extracting their features; this stage is called training. The algorithm measures lines, points, and curves and associates them with a certain value. This makes sense: machine-learning algorithms must have a training step before being unleashed on the world in whatever app or technology. Spam filters use another algorithm, the Naive Bayes classifier, which is likewise trained: it breaks sentences into words and, for each word, determines the probability that it shows up in a spam email (e.g. “million” may show up in spam in the form of a financial scam). Pretty nifty.
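The word-probability idea behind such a filter can be sketched as follows. The four tiny “emails” below are an invented training corpus, not real data, and a full Naive Bayes classifier would go on to multiply these per-word probabilities together; this only shows the training-count step.

```python
# Invented training corpus: two spam messages, two legitimate ("ham") ones.
spam_docs = ["win a million dollars", "million dollar prize win now"]
ham_docs = ["lunch at noon", "see you at the meeting"]

def word_prob(word, docs):
    """Fraction of documents in which the word appears at least once."""
    return sum(word in doc.split() for doc in docs) / len(docs)

for word in ["million", "meeting"]:
    print(word,
          "spam:", word_prob(word, spam_docs),
          "ham:", word_prob(word, ham_docs))
```

“million” appears in every spam message and in no legitimate one, so a new email containing it gets pushed toward the spam class, while “meeting” pulls the other way.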
While this is only a primer on these technologies and their algorithms, reading this book has given me a rudimentary understanding of how they work. Breaking down complex tasks like these is exactly what it means to be a programmer.