According to Wikipedia, apophenia is “the tendency to mistakenly perceive connections and meaning between unrelated things”. The term is also used for “the human propensity to seek patterns in random information”. Whether it’s a scientist doing research in a lab or a conspiracy theorist warning us that “it’s all connected”, people seem to need to feel they understand what’s going on, even in the face of clearly random information.
Deep Neural Networks are usually treated like “black boxes” due to their inscrutability compared to more transparent models, like XGBoost or Explainable Boosting Machines.
However, there is a way to interpret what each individual filter is doing in a Convolutional Neural Network, and which kinds of images it is learning to detect. …
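Before getting into interpreting learned filters, it helps to see the raw operation a single filter performs. The sketch below is my own minimal NumPy illustration (not any framework’s API): a hand-picked vertical-edge filter slides over a tiny image and responds strongly exactly where the pattern it encodes appears. A learned CNN filter works the same way, except the network learns the numbers in the kernel.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2D cross-correlation: the operation a CNN filter performs."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny toy image: dark on the left, bright on the right.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A vertical-edge filter: responds where brightness jumps left-to-right.
edge_filter = np.array([[-1.0, 1.0]])

# The response map peaks only at the column where the edge sits.
response = convolve2d(image, edge_filter)
```

Interpreting a trained filter is this idea in reverse: find (or generate) the image patch that makes a given filter’s response as large as possible.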
Markov chains have been around for a while now, and they are here to stay. From predictive keyboards to applications in trading and biology, they’ve proven to be versatile tools.
Here are some industry applications of Markov Chains:
So far, we can tell these models are useful, but what exactly are Markov Chains?
A Markov Chain is a stochastic process that models a finite set of states, with fixed conditional probabilities of jumping from a given state to another. …
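As a minimal sketch of that definition, here is a two-state “weather” chain in plain Python. The states and probabilities are invented for illustration; the key property is that the next state depends only on the current one.

```python
import random

# Transition probabilities: P(next state | current state).
# These numbers are made up for the example.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state, rng):
    """Sample the next state given only the current one (the Markov property)."""
    r = rng.random()
    cumulative = 0.0
    for next_state, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return next_state
    return next_state  # floating-point safety net

def simulate(start, n_steps, seed=0):
    """Run the chain for n_steps, returning the full trajectory."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(n_steps):
        chain.append(step(chain[-1], rng))
    return chain
```

Run it long enough and the fraction of time spent in each state settles toward the chain’s stationary distribution (here, sunny about two thirds of the time), regardless of where you start.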
Hadoop’s MapReduce is not just a framework; it’s also a problem-solving philosophy.
Borrowing from functional programming, the MapReduce team realized that a lot of different problems could be divided into two common operations: map and reduce.
Both mapping and reducing steps can be done in parallel.
This meant that, as long as you could frame your problem in that specific way, it could easily be solved in parallel, usually resulting in a big performance boost.
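To make the two operations concrete, here is the classic word-count example reduced to a single-process Python sketch. Hadoop would distribute the map and reduce calls across many machines; the structure of the computation is the same.

```python
from collections import defaultdict
from functools import reduce

def mapper(line):
    """Map: emit one (key, value) pair per word in the line."""
    return [(word, 1) for word in line.split()]

def reducer(key, values):
    """Reduce: combine every value seen for a single key."""
    return key, reduce(lambda a, b: a + b, values)

def map_reduce(lines):
    # Map phase: independent per line, so it parallelizes trivially.
    mapped = [pair for line in lines for pair in mapper(line)]
    # Shuffle phase: group all values emitted under the same key.
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    # Reduce phase: independent per key, also parallelizable.
    return dict(reducer(k, v) for k, v in groups.items())

counts = map_reduce(["the quick fox", "the lazy dog", "the fox"])
```

Because no mapper depends on another mapper’s output, and no reducer depends on another key’s group, both phases can be fanned out across as many workers as you have.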
That all sounds good, and running things in parallel is usually a good thing, especially when working at scale. …
Why do Neural Networks Need an Activation Function? Whenever you see a Neural Network’s architecture for the first time, one of the first things you’ll notice is that they have a lot of interconnected layers.
Each layer in a Neural Network has an activation function, but why are they necessary? And why are they so important?
To answer the question of what Activation Functions are, let’s first take a step back and answer a bigger one: What is a Neural Network?
A Neural Network is a Machine Learning model that, given certain input and output vectors, will try to “fit” the outputs to the inputs. …
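Before going further, there is a quick way to see why those fits need activation functions at all: without a non-linearity between them, stacking fully connected layers buys you nothing, because a composition of linear maps is itself a single linear map. A NumPy check with random weights (the sizes are arbitrary, chosen just for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))  # first layer's weights
W2 = rng.standard_normal((2, 4))  # second layer's weights
x = rng.standard_normal(3)        # an arbitrary input vector

# Two stacked linear layers with no activation in between...
two_layers = W2 @ (W1 @ x)

# ...are exactly equivalent to one linear layer with weights W2 @ W1.
one_layer = (W2 @ W1) @ x

# Insert a ReLU between the layers and the collapse no longer holds:
relu = lambda z: np.maximum(z, 0)
nonlinear = W2 @ relu(W1 @ x)
```

That collapse is the whole point: the activation function is what lets extra layers add expressive power instead of just re-parameterizing a single linear model.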
LSTM Neural Networks have seen a lot of use in recent years, both for text and music generation and for Time Series Forecasting.
Today, I’ll teach you how to train an LSTM Neural Network for text generation, so that it can write in H. P. Lovecraft’s style.
In order to train this LSTM, we’ll be using TensorFlow’s Keras API for Python.
I’ll show you my Python examples and results as usual, but first, let’s do some explaining.
The most vanilla, run-of-the-mill Neural Network, called a Multi-Layer-Perceptron, is just a composition of fully connected layers. …
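That composition is easy to write out by hand. Here is a bare-bones forward pass in NumPy (layer sizes and the tanh activation are my own arbitrary choices, not from the Keras model we’ll build later): each fully connected layer is an affine map followed by an activation, and the MLP is just those layers applied in sequence.

```python
import numpy as np

def dense(inputs, weights, bias):
    """One fully connected layer: affine map plus a tanh activation."""
    return np.tanh(inputs @ weights + bias)

def mlp(x, layers):
    """A Multi-Layer Perceptron: fully connected layers composed in order."""
    for weights, bias in layers:
        x = dense(x, weights, bias)
    return x

rng = np.random.default_rng(42)

# Three fully connected layers: 8 -> 16 -> 16 -> 1 (sizes are arbitrary).
sizes = [8, 16, 16, 1]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

batch = rng.standard_normal((5, 8))  # 5 input vectors of dimension 8
out = mlp(batch, layers)             # one output per input vector
```

An LSTM replaces these stateless layers with recurrent cells that carry state from one timestep to the next, which is what makes them suited to sequences like text.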
Probability Distributions are like 3D glasses. They allow a skilled Data Scientist to recognize patterns in otherwise completely random variables.
In a way, most of the other Data Science or Machine Learning skills are based on certain assumptions about the probability distributions of your data.
This makes probability knowledge part of the foundation on which you can build your toolkit as a statistician, and one of the first steps to take if you are figuring out how to become a Data Scientist.
Without further ado, let us cut to the chase.
In Probability and Statistics, a random variable is a quantity that takes random values, like “the height of the next person I see” or “the number of the cook’s hairs in my next ramen bowl”. …
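To make that concrete, think of a random variable as a function you can sample: each individual value is unpredictable, but the long-run behavior of many samples is very regular. A small sketch (the mean and spread for “height” are numbers I made up for the example):

```python
import random

rng = random.Random(7)

def next_person_height_cm():
    """A random variable: each call returns one realization.
    Modeled here as a normal with made-up mean 170 cm and sd 10 cm."""
    return rng.gauss(170, 10)

# A single sample is unpredictable...
one_height = next_person_height_cm()

# ...but the average of many samples hugs the distribution's mean.
samples = [next_person_height_cm() for _ in range(10_000)]
mean = sum(samples) / len(samples)
```

That gap between one unpredictable draw and a very predictable aggregate is exactly what probability distributions let you reason about.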
Convolutional Neural Networks are a part of what made Deep Learning reach the headlines so often in the last decade. Today we’ll train an image classifier to tell us whether an image contains a dog or a cat, using TensorFlow’s eager API.
Artificial Neural Networks have disrupted several industries lately, due to their unprecedented capabilities in many areas. However, different Deep Learning architectures excel at different tasks:
Today we’ll focus on the first item of the list, though each of those deserves an article of its own. …
Applying filters to images is not a new concept to anyone. We take a picture, make a few changes to it, and now it looks cooler. But where does Artificial Intelligence come in? Let’s try out a fun use for Unsupervised Machine Learning with K Means Clustering in Python.
I’ve written before about K Means Clustering, so I will assume you’re familiar with the algorithm this time. If you’re not, this is the in-depth introduction I wrote.
And I also tried my hand at image compression (well, reconstruction) with autoencoders, to varying degrees of success.
However, this time my goal is not to reconstruct the best possible image, but just to see the effects of recreating a picture with as few colors as possible. …
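The core trick is simple: treat every pixel as a point in RGB space, run K Means on those points, and then repaint each pixel with its cluster’s centroid color. Below is a bare-bones sketch with a hand-rolled K Means and a synthetic stand-in for an image (a real version would load pixels with a library like Pillow; all names here are my own):

```python
import numpy as np

def kmeans(points, k, n_iters=20, seed=0):
    """Bare-bones Lloyd's algorithm: assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iters):
        # Distance from every point to every centroid, shape (n, k).
        distances = np.linalg.norm(
            points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Stand-in for an image: each row is one pixel's (R, G, B) values,
# drawn from two made-up color blobs (reddish and bluish).
rng = np.random.default_rng(1)
reds = rng.normal((200, 30, 30), 10, size=(50, 3))
blues = rng.normal((30, 30, 200), 10, size=(50, 3))
pixels = np.vstack([reds, blues])

# Recreate the "image" with only k colors: every pixel becomes its centroid.
centroids, labels = kmeans(pixels, k=2)
quantized = centroids[labels]
```

With k colors, the repainted image contains at most k distinct RGB values, which is the whole effect we’re after.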
There is a Japanese word, tsundoku (積ん読), which means buying and keeping a growing collection of books, even though you don’t really read them all.
I think we Developers and Data Scientists are particularly prone to falling into this trap. Personally, I even hoard bookmarks: my phone’s Chrome browser has so many open tabs, the counter was replaced with a “:D” emoji.
In that zeal for reading and learning most of us experience, we usually end up lost, unsure which book to pick up next. …
Deep Learning has revolutionized the Machine Learning scene in recent years. Can we apply it to image compression? How well can a Deep Learning algorithm reconstruct pictures of kittens? What’s an autoencoder?
Today we’ll find the answers to all of those questions.
I’ve talked about Unsupervised Learning before: applying Machine Learning to discover patterns in unlabelled data.
In the case of Image Compression, it makes a lot of sense to assume most images are not completely random.
To put it more precisely, it is safe to assume most images are not completely made of noise (like the static when you turn on an old TV), but rather follow some underlying structure. …
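That structure is exactly what a compressor exploits, and a linear stand-in makes the point before any Deep Learning enters the picture: PCA behaves like the simplest possible autoencoder (encode = project onto a few principal components, decode = project back). Structured data survives the round trip through a narrow bottleneck; pure noise does not. A NumPy sketch with synthetic data (all names and dimensions are my own choices for the demo):

```python
import numpy as np

def pca_round_trip(data, n_components):
    """'Compress' data down to n_components dimensions and reconstruct it:
    in effect, a linear autoencoder with tied encoder/decoder weights."""
    mean = data.mean(axis=0)
    centered = data - mean
    # Principal directions come from the SVD of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    encoded = centered @ components.T      # the bottleneck representation
    decoded = encoded @ components + mean  # back to the original space
    return decoded

rng = np.random.default_rng(0)

# "Structured" data: 200 samples that really live on a 2-D subspace of
# R^50 plus a little noise -- a crude stand-in for natural images.
latent = rng.standard_normal((200, 2))
basis = rng.standard_normal((2, 50))
structured = latent @ basis + 0.01 * rng.standard_normal((200, 50))

# Pure noise: nothing for a 2-D bottleneck to exploit.
noise = rng.standard_normal((200, 50))

structured_error = np.mean((structured - pca_round_trip(structured, 2)) ** 2)
noise_error = np.mean((noise - pca_round_trip(noise, 2)) ** 2)
```

A neural autoencoder plays the same game with non-linear encode/decode maps, which is what lets it capture the kind of structure real photographs have.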