Artificial Intelligence: What Even Is It?

This week we’re talking about artificial intelligence, or AI. AI is everywhere — President Obama is talking about it, it’s helping you get the best search results from Google, customizing Netflix and Facebook for you, even diagnosing skin conditions. So what is it, how does it work, and at what point will it become sentient and learn the cruel sting of a lover’s betrayal? Let’s dive in!

The Intro

AI is a difficult-to-define concept, like art or what type of cuisine Jack in the Box serves (seriously, how can one place do burgers and tacos so well?). The definition of what is or is not AI is subjective, evolving, and can even be controversial. For simplicity, let’s just say that artificial intelligence is the display of complex reasoning, response, or reactions by machines similar to that of humans. It is in fact much more than that, but it’s a good starting point.

An illustrative example is the Turing Test, which says that if a human cannot tell the difference between a computer and another human during a text-only chat, the computer passes the test and is “intelligent” (or perhaps the human being tested is… less than intelligent, but I digress). The test’s inventor, Alan Turing, was a hero of World War II, persecuted for his sexuality, and the “father” of computer science and AI. His biopic, aptly named The Imitation Game, stars Butterbread Snapchat… Benedict Cumberbund… Benedict Cumberbatch! That one. It’s worth the watch.

AI is inherently interdisciplinary, drawing heavily on math, statistics, computer science, and logic. Depending on the application, it can also involve fields like optics, linguistics, and neuroscience. Further complicating the definition of AI is the AI effect, which is the practice of demoting technology we once called AI to the generic category of “software” once it’s popular or well understood.

With the difficulty in defining AI, we’re going to take a more ostensive approach and talk through the hottest buzzwords and concepts like superintelligence, neural networks, machine learning, deep learning, computer vision, and natural language processing (NLP). We’ll also talk about some of the applications, laying the groundwork for future pieces on the impact of AI and what it means for our future. And now, the meat!

The Meat

Superintelligence and Strong AI: Do I have to worry about computers taking over the world?

When luminaries like Elon Musk, Bill Gates, and Stephen Hawking start warning us about the rise of computers, they often use terms like “superintelligence” or “strong” AI. These two terms describe computer intelligence that goes beyond human capability: strong AI refers to computers that are at least as smart and capable as humans at a wide variety of tasks (versus just one task), and superintelligence refers to capabilities well beyond those of humans, again across a wide variety of tasks. Here’s a quick equation/visual:

Weak AI << Humans == Strong AI << Superintelligence

Keep reading and you too can be superintelligent. So should we be worried about superintelligence and computers enslaving us and taking over the world? As Van Wilder said, “Worrying is like a rocking chair. It gives you something to do, but it doesn’t get you anywhere.” So no, don’t worry. But should we be doing anything about it? Honestly, probably not. Despite some of the smartest, most accomplished, and/or wealthiest people in the world talking about it, when put in context, the theoretical danger is still far from reality. According to Andrew Ng, one of the top researchers and developers in AI, worrying about superintelligence is like worrying about overpopulation on Mars. Eventually it may be a problem, but there are other concerns to deal with first. The bigger concern at this moment is AI taking your job, or at least 50% or more of it.

Defining and Applying the Buzz: Machine Learning, Neural Networks, Deep Learning, and NLP

Machine learning describes a set of techniques for programming computers to learn to accomplish a task better. It’s not just teaching computers to solve a problem, it’s teaching them how to solve a problem better over time and with more data. This set of techniques is called “learning” because we expect that with more experience in solving a problem, the computer will improve. When you hear the phrases deep learning, supervised learning, unsupervised learning, reinforcement learning, etc., these are all techniques in the field of machine learning. If you want to learn what those are in more detail, set aside 30 minutes to check out the second half of Andrew Ng’s intro lecture. A very common example of applied machine learning is the spam filter (a mix of rules and experience that teaches a computer which emails to mark as spam, getting better over time with more feedback). Ever notice the huge reduction in emails related to male enhancement? Spam filters aren’t perfect, but they continue to improve over time.
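To make the “learning from feedback” idea concrete, here’s a toy sketch in Python. This is not how real spam filters work (they use far more sophisticated statistics and many more signals); the `SpamFilter` class and its word-voting score are made up purely for illustration. The point is just that every labeled example updates the filter’s knowledge, so its judgments improve as it sees more data:

```python
from collections import Counter

class SpamFilter:
    """A toy spam filter that 'learns' from feedback: each labeled
    email updates word counts, so scoring improves with more data."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def train(self, email, is_spam):
        # Feedback step: file every word under spam or not-spam (ham).
        words = email.lower().split()
        (self.spam_words if is_spam else self.ham_words).update(words)

    def is_spam(self, email):
        # Each word votes based on which pile it has appeared in more often.
        score = 0
        for word in email.lower().split():
            score += self.spam_words[word] - self.ham_words[word]
        return score > 0

f = SpamFilter()
f.train("buy cheap pills now", True)
f.train("meeting notes attached", False)
print(f.is_spam("cheap pills available now"))  # True
```

Feed it a thousand more labeled emails and the word counts (the filter’s “experience”) get richer, which is the whole trick behind machine learning.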

Neural networks, also called artificial neural networks (or “neural nets” if you’re hip or living in Silicon Valley (I say “or” cuz you’re almost certainly not both)), are an approach, or class of algorithms, used in machine learning that is loosely modeled on how our brains work. Neural networks are networks of nodes, or neurons, each responsible for receiving data, performing a small computation, and then passing the result to the next neuron. This is best understood by analogy: Imagine you have to translate a book from English to Italian. You have three translators who know English and Spanish, three translators who know Spanish and French, and three translators who know French and Italian. You’re also given a long passage that has already been perfectly translated from English to Italian to help train/test which translators are most reliable and which translators work best with which other translators.

Here’s a picture:

[Image: the translator analogy drawn as a layered network — English-speaking translators as the input layer, Spanish/French translators as hidden layers, Italian-speaking translators as the output layer]

These next two paragraphs are more technical details/definitions of neural nets; feel free to skip to Deep Learning if that’s not of interest. If you’re still here, in the above analogy, each set of translators is considered a “layer,” and each individual translator is a neuron or node. The long passage that’s already translated is the “training data.” The English-speaking translators are the input layer, and the Spanish/French translators are a “hidden” layer, as we don’t know either language and aren’t checking it. The Italian-speaking translators are the output layer. Training and testing the translators is loosely representative of something called “backpropagation,” and giving some translators more credit than others is called “weighting.”

Looking at this image, we have a system where the translation process only moves from English to Italian. This is how a “feed-forward” neural network works. Imagine a process where the French/Italian translators spin cycles back and forth with the Spanish/French translators to improve their accuracy. This is how a “recurrent” neural network works. There are no limits on the number of layers and nodes a network can have, and in some cases more layers are vital to performance. In others, more layers can reduce performance, much like the game of telephone, AKA whisper down the alley.
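The translator analogy can be sketched in a few lines of Python. This is a deliberately bare-bones, untrained feed-forward network: the weights are random (real networks tune them with backpropagation, which is omitted here), and all the names (`layer`, `w_hidden1`, etc.) are made up for this illustration. Each “neuron” does exactly what the article describes: receive numbers, do a small computation, pass the result to the next layer:

```python
import math
import random

random.seed(0)  # make the random weights repeatable

def layer(inputs, weights):
    """One layer: each neuron takes a weighted sum of its inputs and
    squashes it with a sigmoid, then passes the result forward."""
    outputs = []
    for neuron_weights in weights:
        total = sum(w * x for w, x in zip(neuron_weights, inputs))
        outputs.append(1 / (1 + math.exp(-total)))  # sigmoid activation
    return outputs

# Three layers of three "translators" each, wired feed-forward:
# input (English) -> hidden (Spanish, French) -> output (Italian).
w_hidden1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_hidden2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_output = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]

x = [0.5, 0.1, 0.9]       # a made-up numeric "sentence" as input
h1 = layer(x, w_hidden1)  # first hidden layer's output
h2 = layer(h1, w_hidden2) # second hidden layer's output
y = layer(h2, w_output)   # output layer: three numbers between 0 and 1
print(y)
```

Training would adjust those weight lists until the output matches the known-good translation — that’s the “giving some translators more credit than others” part.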

Deep learning is a specific type of machine learning that uses neural networks with multiple hidden layers (thus the “deep” part) and is almost always used for pattern recognition. It’s very commonly used in “computer vision,” which is how computers can understand images and video, a capability of supreme importance for technologies like self-driving cars, robots, automated surveillance, and visual search.

Natural language processing, or NLP, refers to the field of study focused on how computers can process human language. This field includes complex problems such as speech recognition (vital for tasks like automated transcription and voice control, used by products like Siri, Google Now, and Amazon Alexa). NLP is also key to automated translation and complex search. So much of the information created by humans is in the form of the spoken or written word, making the ability of computers to process words vital to technological progress.

We’ll wrap our broad summary of AI here. If this seemed very technical, that’s because it was. However, if you made it through all of the above, you should have enough of a foundation to learn about the ways AI will impact our world and sound super smart at Thanksgiving. We’ll get to those topics very soon, after Part 2 of taxes of course.

P.S. Special thanks to the J-Team for their help with this piece — Javan Behler, Jennifer Gonzalez, and John Casey.




Delivering bite-sized summaries of currently relevant topics in fields such as science, economics, entrepreneurship, technology, and politics. Written with the goal of facilitating our readers having Not So Small Talk.


Neil Devani

Investing in and supporting change.
