Top 5 Insights After I Spent 100 Days Learning About Artificial Intelligence

Jamie Beach · Published in The Startup · 7 min read · May 14, 2019
A drawing I did based on the first chapter of Max Tegmark’s Life 3.0, “The Tale of the Omegas”

At the end of January 2019, it suddenly dawned on me that my understanding of artificial intelligence was insufficient. AI increasingly impacts our everyday lives: it defends our inboxes from spam, powers weather updates from Alexa, and enables Amazon to recommend a purchase or Netflix to suggest a movie. Every time we open Twitter or Facebook, it’s human versus an AI that knows us better than we know ourselves. Yet here I was, a professed technologist, with so little awareness of what AI actually is.

It was while listening to an interview with Wired magazine founder Kevin Kelly on a podcast called Future Thinkers that a light bulb went off. Kevin was discussing AI, and he made the point that we are still at the beginning: anyone who spends even a little time learning AI and machine learning beyond the surface level will find themselves among only a small percentage of people. I got home from work that day and began a 100-day deep dive.

I catalogued everything here, in this Trello board. Although time was hard to find, I managed almost 200 hours of effort in 100 days. I read 9 books, completed 2 Coursera courses (and began a third), listened to many podcasts and did as many tutorials as I could manage.

Here are 5 insights that I learned in that time:

1. AI is Old but New

The term Artificial Intelligence did not come from some science fiction novel. It came from a summer workshop at Dartmouth College in 1956 that brought a number of smart people together to make machines think. It was an intentional gathering to spawn the field, and while the attendees didn’t leave the workshop with thinking machines, they came away with ideas and techniques that remain fundamental to AI today.

Following the workshop, interest grew in different sub-domains of artificial intelligence. Neural networks seemed very promising, but at the time there were gaps, and most research eventually discarded the concept. This period, which lasted for decades, is referred to as the “AI Winter”. In recent years, however, exponential growth in processing power and available data, combined with new advances in deep learning, has drastically increased the effectiveness of machine learning. So much so that AI has been declared the “new electricity” by experts like Andrew Ng.

2. AI == Machine Learning != Terminator

“AI is done in PowerPoint and machine learning in Python.”

The Terminator. The epitome of pop culture reference to superintelligence.

Artificial General Intelligence (or AGI) is a hypothetical machine that thinks like humans do. It is the Terminator, or HAL, or the robot from Ex Machina, or the voice of Her. Superintelligence, in turn, means machines that think beyond the ability of humans (read Nick Bostrom’s Superintelligence if you want to be a little bit scared of that). At this time, there is no such thing. Thus far, AGI is fantastical, futuristic and a little bit out of reach. That doesn’t mean nobody is working on it, nor that brilliant minds like Max Tegmark or Ray Kurzweil don’t talk extensively about it and expect it (they do, and soon). But the current practical form of AI is almost entirely a sub-domain called machine learning.

Machine learning fundamentally looks like this:

Step 1: Take a problem and turn it into a prediction problem. In other words, given input parameters (features), predict the result. Predict how much a house costs, or, given a location and a camera image, predict whether to turn right or left.

Step 2: Decide on or define the algorithm or system. There are many, from linear regression to neural networks, deep learning, support vector machines, recurrent neural networks, convolutional neural networks, generative adversarial networks… the list goes on. Each algorithm is used for a particular kind of prediction problem. To predict the cost of a house, a linear regression model would suffice. Predicting an entire screenplay would use a recurrent neural network (RNN). Predicting images of faces of people that don’t exist uses a generative adversarial network (GAN).

Step 3: Get lots of training data. The more the better (usually). For house prices, get thousands of rows of data containing the features and actual prices that those houses sold for (labels). For character recognition, get lots of pictures of characters and label them accordingly.

Step 4: Train the model. Feed the training data. Calculate the error. Adjust and repeat until the error is minimized. Gradient descent and backpropagation are important concepts here.

Assuming a minimized error is found, the model is ready: feed it new features and it will predict the results. Often very accurately. Often even more accurately than a human would.
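The four steps above can be sketched in a few dozen lines of plain Python. This is only an illustration, not a production pipeline: the house-price data below is made up, and the model is the simplest possible one (a single-feature linear regression trained by gradient descent).

```python
# Step 1 & 3: turn the problem into a prediction problem and gather
# training data — (house size in sqft, sale price) pairs.
# These numbers are invented for illustration.
sizes  = [1000, 1500, 2000, 2500, 3000]
prices = [200_000, 290_000, 410_000, 500_000, 590_000]

# Step 2: choose the model — a straight line, price = w * size + b.
w, b = 0.0, 0.0

# Step 4: train — compute the error, adjust w and b to reduce it, repeat.
# The adjustments follow the gradient of the mean squared error.
lr = 1e-7  # learning rate, kept small because the feature values are large
for _ in range(20_000):
    n = len(sizes)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(sizes, prices)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(sizes, prices)) / n
    w -= lr * grad_w
    b -= lr * grad_b

# The trained model: feed it a new feature, get a predicted price.
def predict(size):
    return w * size + b
```

After training, `predict(2000)` lands near the prices of the 2000 sqft examples it was trained on. Real frameworks automate the gradient computation (backpropagation) and scale this same loop up to models with millions of parameters.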

3. There is no Magic — Just Math

Screen capture from Andrew Ng’s Machine Learning course. I didn’t steal this; I just found it on Google.

Before starting the 100 days, I knew there was math involved in machine learning. I just had no idea how much. Knowing calculus and matrix algebra is incredibly beneficial for anyone jumping in, but fortunately you don’t need to be a math major to get it, and frameworks keep emerging that democratize machine learning further.

Important frameworks that add a layer of abstraction between the programmer and the math and algorithms include Google’s TensorFlow, Microsoft’s ML.NET and PyTorch. There are even additional layers of abstraction, such as Keras, which sits on top of TensorFlow.

And there are efforts to make machine learning even more accessible by offering machine learning models as a service or by creating programs that automate the process, such as AutoML and Auto-Keras.

4. Bias is a Big Problem

“The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased.” — John Giannandrea

One of my biggest takeaways from this whole exercise was the danger of bias in machine learning models. Amy Webb’s great book, The Big Nine, focuses on this in many of its chapters. Comprehensive and deliberate diversity in training data is incredibly important, but it is something that is often lacking.

“The Founding Fathers of AI” — diversity in AI since 1956

Amy uses the ImageNet corpus as an example of inherent bias: of its more than 14 million labeled images, over half were created in the US. And ImageNet is certainly not alone in containing bias.

What happens when a data set contains mostly images of women for “nurse” or men for “CEO”? What happens when skin cancer image data uses only light-skinned samples? There are real consequences when these models begin making their way into our everyday lives. And as the democratization of ML continues and we settle on pre-made models without knowing the underlying data used for training, the bias persists and potentially amplifies societal biases at large.

Researchers are well aware of the problem, and all of the Big Nine companies (G-MAFIA + BAT) have mantras and guiding principles intended to project the need to reduce bias into their engineering cultures. But the bias is not intentional. Nobody deliberately injects bias into models. Even with the best intentions, bias is inevitable.

It is therefore so important that we all understand how machine learning works and how it impacts us: how it powers the Twitter and Facebook feeds that churn our neurons and cultivate our perceptions of the world.

5. There’s So Much Opportunity

Forecast of Global AI-Derived Business Value (Billions of US Dollars); Source: Gartner (April 2018)

Kevin Kelly was right. We are still in the early days of artificial intelligence and machine learning. Yes, many applications already permeate our lives, but there remain so many opportunities in the space.

Machine learning can, has, and will completely change everything. During the last 100 days, one of the many books I read was Manna by Marshall Brain. It’s fiction, but it describes a nearly utopian society where machines and automation have taken over all the work and humans are free to live however they want. No AGI is needed, just machine learning at scale. How far away is it, really?

I foresee Instagram celebrities and YouTube vloggers that aren’t even real, yet have tens of millions of followers, their content completely generated with GANs and RNNs. A new paradigm of entertainment powered by machine learning, where everything from movie scripts to lifelike, ultra-realistic 3D models is created by machine learning models. Forget ever interviewing for a job: why bother, when your personal data records can be matched instantaneously against company profiles across all current job openings using ML? Hyper-personalization for everything from cancer treatments to restaurant dinners to real-time generated music is within reach. Self-driving taxis, RNN-based copywriting services, automated service agreements, automated court rulings, personalized life betterment strategies, drone deliveries, AI-based investing: the list is endless, it is all tangible, and nearly all of it is currently up for grabs.

AI and machine learning will also likely impact humanity at the level of our whole civilization, helping mitigate existential risks such as climate change, war, asteroid impacts and disease.

The world is imminently going to be different. We may notice. We may not. AI will power the change, and it has already started to creep up on us.

And as Kevin Kelly said,

the future happens slowly and then all at once.
