AI vs Machine Learning vs Deep Learning
If you are a master's student or an engineering graduate, you have certainly heard about machine learning and deep learning. Yet many students cannot distinguish between machine learning, deep learning, and artificial intelligence. In this article, we briefly explain the difference between them.
You can fork and run this notebook on GitHub:
We first define these three concepts.
Artificial intelligence means trying to build machines that work like human beings: machines that can learn, make mistakes, compensate for those mistakes, and gain experience that leads to further learning.
Artificial intelligence is both a science and an approach to developing technology, applied in fields such as image processing, signal processing, natural language processing, and databases.
Generally speaking, machine learning is a subset of artificial intelligence. Algorithms that can learn from data are called learning algorithms, and the collection of these algorithms is machine learning.
Deep learning is a subset of machine learning that solves these problems by using neural networks.
AI vs Machine Learning vs Deep Learning
AI and machine learning are often used interchangeably, especially in the realm of big data. But these aren’t the same thing, and it is important to understand how these can be applied differently.
Artificial intelligence is the broader concept: it addresses the use of computers to mimic the cognitive functions of humans. When machines carry out tasks based on algorithms in an "intelligent" manner, that is AI. Machine learning is a subset of AI and focuses on the ability of machines to receive a set of data and learn for themselves, changing their algorithms as they learn more about the information they are processing.
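The idea of a machine "learning for itself" can be made concrete with a toy sketch (not from the article): a program is given samples drawn from y = 2x and, without being told the rule, adjusts its single parameter by gradient descent until it has learned the slope on its own.

```python
# Toy illustration of learning from data: the model starts knowing nothing
# (w = 0) and updates its parameter a little on every example it sees.

def learn_slope(data, lr=0.01, epochs=200):
    w = 0.0  # initial guess for the slope
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y   # how wrong the current model is on this sample
            w -= lr * error * x # nudge the parameter to reduce that error
    return w

data = [(1, 2), (2, 4), (3, 6)]  # samples drawn from y = 2x
w = learn_slope(data)
print(round(w, 3))               # close to 2.0
```

Note how the "algorithm" the machine ends up with (the value of `w`) comes from the data, not from the programmer.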
Training computers to think like humans is achieved partly through the use of neural networks. Neural networks are a series of algorithms modeled after the human brain. Just as the brain can recognize patterns and help us categorize and classify information, neural networks do the same for computers. The brain is constantly trying to make sense of the information it is processing, and to do this, it labels and assigns items to categories. When we encounter something new, we try to compare it to a known item to help us understand and make sense of it. Neural networks do the same for computers.
Benefits of neural networks:
- Extract meaning from complicated data
- Detect trends and identify patterns too complex for humans to notice
- Learn by example
- Speed advantages
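"Learn by example" can be shown with the smallest possible network, a single neuron trained with the classic perceptron rule (a standard technique, not something specific to this article): it is shown labeled examples of the logical AND function and adjusts its weights until its answers match the labels.

```python
# A single-neuron "network" learning AND purely from labeled examples.

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out       # 0 when the neuron is right
            w[0] += lr * err * x1    # shift weights toward the correct answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

The neuron was never told what AND means; the rule emerged from the examples, which is the pattern-recognition behavior described above in miniature.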
Deep learning goes yet another level deeper and can be considered a subset of machine learning. The concept of deep learning is sometimes just referred to as "deep neural networks," referring to the many layers involved. A neural network may have only a single hidden layer, while a deep neural network has two or more. The layers can be seen as a nested hierarchy of related concepts or decision trees: the answer to one question leads to a set of deeper related questions.
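The "layers" described above are just repeated transformations. The sketch below (the weights are arbitrary illustrations, not trained values) stacks two fully connected layers, so the second layer operates on the first layer's outputs rather than on the raw inputs, which is what gives a deep network its hierarchy.

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                       # raw input features
h1 = dense(x, [[0.4, 0.1], [-0.2, 0.3]], [0.0, 0.1])  # first hidden layer
h2 = dense(h1, [[0.7, -0.5]], [0.2])                  # second layer: the "deep" part
print(h2)
```

Adding depth is just calling `dense` again on the previous layer's output; each call answers "questions" about increasingly abstract combinations of the inputs.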
Deep learning networks need to see large quantities of examples in order to be trained. Instead of being programmed with the criteria that define items, the systems learn those criteria from exposure to millions of data points. An early example of this is Google Brain learning to recognize cats after being shown over ten million images.
Data Is at the Heart of the Matter
Whether you are using an algorithm, artificial intelligence, or machine learning, one thing is certain: if the data being used is flawed, then the insights and information extracted will be flawed. What is data cleansing?
"The process of detecting and correcting (or removing) corrupt or inaccurate records from a record set, table, or database and refers to identifying incomplete, incorrect or irrelevant parts of the data and then replacing, modifying or deleting the dirty or coarse data."
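The steps in that definition can be sketched on a toy record set (the records and rules here are made up for illustration): detect incomplete or inaccurate rows, remove them, and correct inconsistent formatting in the rest.

```python
records = [
    {"name": "Ada",    "age": 36},
    {"name": "  bob ", "age": 41},  # inconsistent formatting
    {"name": "Carol",  "age": -5},  # inaccurate value
    {"name": None,     "age": 29},  # incomplete record
]

def cleanse(rows):
    clean = []
    for row in rows:
        if row["name"] is None:           # delete incomplete records
            continue
        if not (0 <= row["age"] <= 130):  # delete inaccurate records
            continue
        # correct the dirty data we can fix: normalize name formatting
        clean.append({**row, "name": row["name"].strip().title()})
    return clean

print(cleanse(records))  # [{'name': 'Ada', 'age': 36}, {'name': 'Bob', 'age': 41}]
```

Even this tiny example shows why cleansing is tedious: every new kind of dirt needs its own detection rule.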
And according to the CrowdFlower Data Science report, data scientists spend the majority of their time cleansing data — and surprisingly this is also their least favorite part of their job. Despite this, it is also the most important part, as the output can’t be trusted if the data hasn’t been cleaned.
For AI and machine learning to continue to advance, the data driving the algorithms and decisions needs to be high-quality. If the data can't be trusted, how can the insights drawn from it be trusted?