Overview of Artificial Intelligence Buzz

Luis Bermudez
Published in machinevision
Nov 14, 2017

If you’re in tech, you’ve been hearing a lot of buzz around Artificial Intelligence, Machine Learning, and even Deep Learning. What’s the right word to use, and when? Do they all mean the same thing? People certainly use them interchangeably all the time.

Artificial Intelligence, Machine Learning, and Deep Learning are each a subset of the previous field. Artificial Intelligence is the overarching category for Machine Learning. And Machine Learning is the overarching category for Deep Learning.

Deep Learning is a subset of Machine Learning, and Machine Learning is a subset of Artificial Intelligence.

The real hype in recent times should all be credited to Deep Learning. This particular field of Artificial Intelligence and Machine Learning is the one that has been solving a ton of interesting problems in recent years — from automated grocery store purchases to autonomous cars.

Artificial Intelligence

So why have we been hearing so much about Artificial Intelligence? Some might credit Elon Musk and Sam Altman. Elon Musk in particular has been speaking more and more about ethics in Artificial Intelligence, and I suspect he reaches for the term “Artificial Intelligence” because it’s familiar jargon from science fiction.

Artificial Intelligence is a wide field encompassing several sub-fields, techniques, and algorithms. The field of artificial intelligence is based on the goal of making a machine as smart as a human. That is literally the initial overarching goal. Back in 1956, researchers came together at Dartmouth with the explicit goal of programming computers to behave like humans. This was the modern birth of Artificial Intelligence as we know it today.

AI Goals

To further explain the goals of Artificial Intelligence, researchers broke that primary goal down into six main goals.

1) Logical Reasoning. Enable computers to do the types of sophisticated mental tasks that humans are capable of doing. Examples of solving these Logical Reasoning problems include playing chess and solving algebra word problems.

2) Knowledge representation. Enable computers to describe objects, people, and languages. Examples of this include Object Oriented Programming Languages, such as Smalltalk.

3) Planning and navigation. Enable a computer to get from point A to point B. For example, Shakey, the first general-purpose mobile robot, was developed in the late 1960s.

4) Natural Language Processing. Enable computers to understand and process language. One of the first projects in this area attempted to translate between English and Russian.

5) Perception. Enable computers to interact with the world through sight, hearing, touch, and smell.

6) Emergent Intelligence. That is, Intelligence that is not explicitly programmed, but emerges from the rest of the explicit AI features. The vision for this goal was to have machines exhibit emotional intelligence, moral reasoning, and more.

AI Fields

Even with these main goals in place, they don’t categorize the specific Artificial Intelligence algorithms and techniques. Here are six of the major fields and techniques within Artificial Intelligence:

1) Machine Learning is the field of artificial intelligence that gives computers the ability to learn without being explicitly programmed.

2) Search and Optimization. Algorithms such as Gradient Descent iteratively search for a local minimum (or a local maximum, in the case of gradient ascent).

3) Constraint Satisfaction is the process of finding a solution to a set of constraints that impose conditions that the variables must satisfy.

4) Logical Reasoning. An example of logical reasoning in artificial intelligence is an expert computer system that emulates the decision-making ability of a human expert.

5) Probabilistic Reasoning combines the capacity of probability theory to handle uncertainty with the capacity of deductive logic to exploit the structure of formal argument. The result is a richer and more expressive formalism with a broad range of possible application areas.

6) Control Theory is a formal approach to finding controllers with provable properties. It usually involves a system of differential equations describing a physical system such as a robot or an aircraft.
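To make the Search and Optimization idea above concrete, here is a minimal sketch of gradient descent in Python. The function being minimized, the learning rate, and the step count are all illustrative choices for this example, not a prescribed recipe.

```python
# A minimal sketch of gradient descent minimizing f(x) = (x - 3)^2.
def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    """Repeatedly step against the gradient to approach a local minimum."""
    x = x0
    for _ in range(steps):
        x -= learning_rate * grad(x)
    return x

# f(x) = (x - 3)^2 has its minimum at x = 3; its gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(minimum)  # converges near 3.0
```

Each step moves opposite the gradient, so the iterate slides downhill until the gradient is nearly zero.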

Machine Learning

Machine Learning is a subset of Artificial Intelligence. So what is Machine Learning anyway? Where Artificial Intelligence aims to make computers smart by any means, Machine Learning takes the stance that we should give data to the computer and let it learn on its own. The idea that computers might be able to learn for themselves goes back to Arthur Samuel, who coined the term “machine learning” in 1959.
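As a toy illustration of learning from data rather than from explicit rules, the sketch below fits a line to example points with ordinary least squares. The data points and the function name are made up for this example.

```python
# A toy illustration of "learning from data": fit y = a*x + b to example
# points by ordinary least squares, instead of hard-coding the rule.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope and intercept from the least-squares closed form.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# The data encode the rule y = 2x + 1; the program recovers it from examples.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # a is close to 2.0, b is close to 1.0
```

No one told the program the rule; it recovered the slope and intercept from the examples alone, which is the essence of the Machine Learning stance.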

And what makes Machine Learning so important? One major breakthrough led to the emergence of Machine Learning as the driving force behind Artificial Intelligence — the invention of the internet. The internet came with a huge amount of digital information being generated, stored, and made available for analysis. This is when you start hearing about Big Data. And Machine Learning algorithms have been the most effective at leveraging all of this Big Data.

Neural Networks

If we’re talking about Machine Learning, then it’s worth mentioning a popular component of Machine Learning Algorithms: Neural Networks.

Neural Networks are a key piece of some of the most successful machine learning algorithms. The development of neural networks has been key to teaching computers to think and understand the world the way humans do. Essentially, a neural network loosely emulates the human brain. Brain cells, or neurons, are connected via synapses. This is abstracted as a graph of nodes (neurons) connected by weighted edges (synapses). For more information on neural networks, feel free to read our Overview of Neural Networks.

This neural network has one layer, three inputs, and one output. Any neural network can have any number of layers, inputs, or outputs.
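A network like the one described, with three inputs feeding a single output neuron, can be sketched in a few lines of Python. The particular weights, bias, and sigmoid activation here are illustrative assumptions, not values from the article.

```python
import math

def sigmoid(z):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum over the incoming edges (the "synapses"),
    # followed by a nonlinear activation.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Three inputs, one output neuron, as in the figure.
output = neuron([1.0, 0.5, -1.0], weights=[0.4, 0.3, 0.2], bias=0.1)
```

The weighted edges play the role of synapses: each weight scales how strongly one input influences the neuron before the activation decides its output.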

Deep Learning

Machine learning algorithms have been the driving force behind Artificial Intelligence. And the most effective of all the machine learning algorithms has been Deep Learning.

Deep Learning involves several layers of computation. In this case, “deep” refers to a large number of layers: a deep network might have 20 layers or 1,000, but generally more than two or three. Deep Learning has gained momentum recently, not just because of the vast amount of data provided by the internet, but also because of the rise in computational power over the last decade. Specifically, GPUs have increased computational power by enabling parallel computations. As you might have guessed, Deep Learning is highly parallelizable.

This neural network has two layers, three inputs, and one output. Any neural network can have any number of layers, inputs, or outputs. The layers between the input neurons and the final layer of output neurons are hidden layers of a deep neural network.
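A forward pass through such a network (three inputs, one hidden layer, one output) might be sketched as follows. The layer sizes, weights, and sigmoid activation are illustrative assumptions; a real deep network would simply stack many more such layers.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each row of weights connects all inputs to one neuron in this layer.
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def forward(inputs, layers):
    # Feed the activations through each layer in turn;
    # every layer between input and output is a "hidden" layer.
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs

# Three inputs -> a hidden layer of two neurons -> one output neuron.
net = [
    ([[0.2, -0.1, 0.4], [0.5, 0.3, -0.2]], [0.0, 0.1]),  # hidden layer
    ([[0.6, -0.4]], [0.05]),                             # output layer
]
output = forward([1.0, 0.5, -1.0], net)
```

Making the network “deeper” is just a matter of appending more (weights, biases) pairs to the list, which is why depth scales so naturally in this formulation.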

The best showcase of Deep Learning is the Deep Neural Network (DNN). A deep neural network is simply a neural network with more than two or three layers. However, deep neural networks are not the only type of Deep Learning algorithm, just the most popular. Another Deep Learning algorithm is the Deep Belief Network (DBN). A Deep Belief Network has undirected connections between some of its layers, so the topologies of the DNN and DBN differ by definition. The paired undirected layers in a DBN are called Restricted Boltzmann Machines.

Conclusion

So one way to think about all three of these ideas is that Machine Learning is the cutting edge of Artificial Intelligence. And Deep Learning is the cutting edge of the cutting edge.
