All About Artificial Intelligence

Your one-stop shop to learn about AI

David Piepgrass
Big Picture
7 min read · Feb 4, 2017


Some people believe in a concept called “The Singularity”.

The singularity is the idea that we will soon create an intelligence that is smarter than us, that this intelligence will create an even smarter intelligence in a short time, and that the cycle then repeats at ever-shorter intervals. That would be an "intelligence explosion" whose outcome is impossible to predict, except that it would certainly dethrone humanity as the rulers of Earth and perhaps quickly lead to our extinction.

Is this a likely scenario? As Ramez Naam explains, no. No it is not.

But AI probably does play a large, and increasing, role in your life.

Real World AI

Artificial Intelligence is a broad field that encompasses all machine behaviors that seem "smart" to humans. Video games, for example, have had "AIs" for decades, which control the decision-making of opponents (and maybe allies) in the game. Sometimes these AIs seem "dumb" by human standards, so we use the oxymoron "dumb AI" or "artificial stupidity". Often, these AIs are even dumb on purpose.

These traditional AIs are mostly just mathematical “algorithms” pre-programmed by their human creators. Algorithms are specific strategies (often those proven to be “optimal” by mathematicians in some circumstances) to accomplish specific tasks. Closely related are “heuristics”: strategies for accomplishing a goal that are not guaranteed to be optimal or perfect, but usually work well enough in practice. For example, when you ask your GPS unit or Google Maps software to find the shortest route, it may use a combination of

  1. algorithms (notably the bidirectional Dijkstra algorithm) and
  2. heuristics (like “to search faster, ignore all minor roads that are far from the starting point and the destination”)

to find a good route quickly.
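
To make the distinction concrete, here is a minimal Python sketch of the "algorithm" half: Dijkstra's shortest-path search on a toy road network. The road names and distances are invented for illustration; real mapping software runs a heavily optimized variant (such as the bidirectional version mentioned above) over millions of road segments.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path by total edge weight (Dijkstra's algorithm).

    graph: dict mapping node -> list of (neighbor, distance_km) pairs.
    Returns (total_distance, path), or (inf, []) if goal is unreachable.
    """
    frontier = [(0.0, start, [start])]  # (distance so far, node, path taken)
    visited = set()
    while frontier:
        dist, node, path = heapq.heappop(frontier)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_km in graph[node]:
            if neighbor not in visited:
                heapq.heappush(frontier, (dist + edge_km, neighbor, path + [neighbor]))
    return float('inf'), []

# Toy road network; distances in km (made up for illustration).
roads = {
    'Home':     [('Highway1', 2.0), ('SideSt', 1.0)],
    'SideSt':   [('Home', 1.0), ('Office', 9.0)],
    'Highway1': [('Home', 2.0), ('Office', 5.0)],
    'Office':   [('Highway1', 5.0), ('SideSt', 9.0)],
}

print(dijkstra(roads, 'Home', 'Office'))  # -> (7.0, ['Home', 'Highway1', 'Office'])
```

A heuristic would be layered on top of this: for example, the A* variant uses straight-line distance to the goal so the search explores promising roads first instead of expanding in all directions.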

Machine Learning

Almost all of the new AIs that are transforming our world today are based on a different kind of AI called “machine learning”. Since it is based on learning, ML more closely resembles human intelligence than older forms of AI do.

Machine learning already permeates our world. For example, whether this article gets lots of recommendations or almost none, machine learning will have played a role. I don’t have personal knowledge of Medium’s recommendation engine, but if this article was recommended by Medium’s daily newsletter, a machine learning algorithm probably picked it for you based on a combination of

  • the interests you have indicated (either explicitly because you followed me or the tags on this article, or implicitly through things you have “hearted” in the past)
  • the machine’s perception of this article’s value, based, perhaps, on the number of “hearts” it has and the observed likelihood that any given reader will recommend it
  • the machine’s perception of my reputation based on its perception of my past work (a perception that, sadly, is completely invisible to me)

Even if this article was recommended by a friend, it may have been a machine-learning algorithm that recommended it to him. Or her.

Machine learning uses algorithms and heuristics too, except that these algorithms do not directly tell the computer what decisions to make. Instead, a machine learning algorithm's job is to help the computer learn things (which it will use to make decisions later) from a set of training data. As you read articles on Medium and choose whether or not to click the heart, you're providing data that "trains" Medium on two things at once: (1) what you like, and (2) whether the article is good.

Perhaps the most common kind of machine learning is the kind that makes predictions: predictions about what articles you would like to read most, about what pages you might want to see when you search Google or Bing, about what diseases you might have based on a list of symptoms, about whether a stock’s price will go up or down, about what ads you are most likely to click on. We use machine learning to help each other and, occasionally, to exploit each other.
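
None of this requires exotic code. As a rough illustration (the features and training data below are invented, and Medium's real system is surely far more sophisticated), a tiny "will this reader heart the article?" predictor might look like this with scikit-learn:

```python
from sklearn.linear_model import LogisticRegression

# Invented training data: each row describes one (reader, article) pair.
# Features: [reader follows author?, tag overlap 0-1, article's heart rate 0-1]
X_train = [
    [1, 0.9, 0.30],
    [1, 0.2, 0.05],
    [0, 0.8, 0.20],
    [0, 0.1, 0.02],
    [1, 0.7, 0.25],
    [0, 0.3, 0.04],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = the reader clicked "heart"

model = LogisticRegression()
model.fit(X_train, y_train)

# Predict how likely a new reader is to "heart" a new article.
new_pair = [[0, 0.85, 0.22]]
print(model.predict_proba(new_pair)[0][1])  # estimated probability of a heart
```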

Categories of Machine Learning

Machine learning can be divided into two categories:

  1. Supervised learning: the computer is given a set of correctly labeled data to learn from. For example, it might be given accurate lists of symptoms and, for each list, a diagnosis verified by a doctor. From this, it is trained to predict diseases based on symptoms. In supervised learning, the machine does not learn from unlabeled data.
  2. Unsupervised learning: the computer is given a large set of data, but the data is not directly labeled. For example, Google scans the entire World Wide Web, but this data is not labeled with which pages are "best", or which pages are "relevant" for a given query. Unsupervised learning problems tend to be more difficult than supervised ones.
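
To see the difference in miniature, here is a sketch of both kinds using scikit-learn; the symptoms, diagnoses, and cluster counts are all made up for illustration:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# --- Supervised: symptom lists labeled with doctor-verified diagnoses ---
# Invented features: [fever?, cough?, rash?]
symptoms  = [[1, 1, 0], [1, 0, 0], [0, 0, 1], [1, 1, 0], [0, 0, 1]]
diagnoses = ['flu', 'cold', 'measles', 'flu', 'measles']

classifier = DecisionTreeClassifier().fit(symptoms, diagnoses)
print(classifier.predict([[1, 1, 0]]))  # -> ['flu']

# --- Unsupervised: the same data with NO labels ---
# KMeans just groups similar symptom lists together; it is up to us
# (or a later step) to figure out what each group "means".
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(symptoms)
print(clusters)  # e.g. [0, 0, 1, 0, 1] -- group ids, not diagnoses
```

Notice that the unsupervised learner returns group numbers, not diagnoses: with no labels, it can only discover structure in the data, not name it.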

Another way of classifying machine learning is by field of application:

  • Computer vision (e.g. object identification; Xbox Kinect; lip reading; self-driving cars)
  • Image processing (e.g. image enhancement; artistic style transfer; inpainting; and, coolest of all, image prediction and generation)
  • Audio recognition and generation (e.g. speech recognition, music generation)
  • Natural language processing, i.e. human language analysis (e.g. parsing sentences, machine translation, spam detection, or deriving "meaning" from sentences)
  • Accelerated physics simulations (e.g. smoke simulation, protein folding)
  • Games. A computer beat the world chess champion in 1997, but computers didn't beat world champions at Go until 2016, or at poker until 2017. And then there's Watson's 2011 victory at Jeopardy, a fundamentally different (and in most ways harder) game.
  • Artificial general intelligence (AGI): the field in which people try to figure out how human intelligence works and mimic that. We are far from creating human-like intelligence, though, and there are fewer research dollars available for this than for more practical applications. Here is a page that classifies AIs mainly based on their use of memory, noting that an AGI would rely heavily on its memory of the past and use memory differently than other kinds of AI. Related: software that makes AI software.

As you can see, AIs can already greatly surpass human ability at many specific tasks. Even AI experts are impressed.

We may also divide machine learning into approaches that use neural networks and those that do not (e.g. linear regression and logistic regression, which basically fit curves or surfaces to describe datasets). Usually, the most amazing things computers can do rely on neural networks, especially the relatively complex neural networks we call "deep learning".
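
The classic illustration of the difference is the XOR problem, where no single straight line can separate the two classes. Here's a hedged sketch; exact results vary with random initialization, as the comments note:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# XOR: the classic dataset that no single straight line can separate.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# Logistic regression fits one line/plane, so on XOR it can't do
# better than chance (expect a score around 0.5).
linear = LogisticRegression().fit(X, y)
print('logistic regression:', linear.score(X, y))

# A small neural network can bend its decision boundary, so with a
# hidden layer (and enough training) it can typically fit XOR exactly.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=10000, random_state=0)
net.fit(X, y)
print('neural network:     ', net.score(X, y))
```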

Finally, we can classify machine-learning systems by how the AI is trained. Cutting-edge research uses clever ways of training, such as "evolutionary algorithms" or "generative adversarial networks".
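
To give a flavor of the evolutionary idea, here is a deliberately tiny sketch, far simpler than anything in real research: keep a population of candidate solutions, let the fittest survive, and mutate them to produce the next generation.

```python
import random

def fitness(x):
    """Toy objective: how close is x to the (pretend-unknown) target 42?"""
    return -abs(x - 42)

# Start with a random population of candidate numbers.
population = [random.uniform(-100, 100) for _ in range(20)]

for generation in range(100):
    # Selection: keep the fittest half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Mutation: each survivor spawns a slightly perturbed child.
    children = [x + random.gauss(0, 1.0) for x in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(round(best, 2))  # typically very close to 42
```

Generative adversarial networks take a different tack: two neural networks are trained against each other, one generating fakes while the other learns to spot them.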

Neural Networks and Deep Learning

Despite their importance, neural networks can be difficult to explain and understand: how they work, what exactly they do, and how to make one. In fact, after creating a neural network that works well, the engineers who created it often have difficulty identifying what role the individual "neurons" in the network play. To learn more about neural networks, I suggest this page:

Or see here for a more technical introduction:

Often, neural networks can perform better when given more data and examples to learn from. No wonder, then, that companies like Google and Facebook, with their planet-scale databases, are making enormous investments in AI:

The main reason this field is exploding now (and not earlier) is that the most cutting-edge machine learning requires the massive parallel processing power of many-core CPUs and GPUs that have come on the market in the last few years. Some large companies are even designing their own custom chips specifically for training neural networks.

More Links

Here are more articles about AI. If you liked this overview, don’t forget to ♥ recommend it!
