Artificial Intelligence: Can machines think?

AceYourGrace
Aug 26

During the Second World War, Alan Turing designed an electromechanical machine, the Bombe, that successfully cracked Germany’s Enigma-encrypted communications. In a paper that he wrote in 1950, “Computing Machinery and Intelligence”, Turing asked a simple question: “Can machines think?”

When you search for something on Google or YouTube, have you ever wondered how the results are so accurate? When you scroll through your news feed on social media, how do you come across content based on your areas of interest? How do some emails end up in your spam folder? Or when you post a photo with your friends on Facebook, how does the system recognize and tag those people by itself? These questions could go on and on and fill up this entire blog, so I’m gonna stop right here and focus on the point I’m trying to make. Today, machines are widely used in almost every sector, and the efficiency with which they deliver results often leaves us bewildered and curious, building up to the same question:

“Can machines actually think?”

To answer this, we first need to understand what Artificial Intelligence is. In 1950, Alan Turing proposed the Turing test: a method of inquiry in artificial intelligence (AI) that tests a computer’s ability to exhibit intelligent behavior indistinguishable from that of a human being. It was the first serious proposal in this field, but the term “Artificial Intelligence” itself was coined by John McCarthy in 1955, in the proposal for the Dartmouth workshop. McCarthy’s description:

“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

In other words, AI is a technique for making machines perform tasks such as decision making, detecting objects, making predictions, and solving various other complex problems; tasks that are usually handled by human beings with their natural intelligence. It is a process of training machines to work and behave like humans; a process of training them to actually think!

AI is a broad branch of Computer Science that has been growing exponentially over the decades. From face detection and speech recognition to news feed personalization, from sentiment analysis to predictive search, from customer support chatbots to forecasting demand trends for different companies, AI is being successfully implemented in almost every domain of the market. Sectors like Finance, Healthcare, Retail & E-commerce, Agriculture, Manufacturing, and Real Estate are the emerging leaders that have adopted AI technologies at their core. Applications of AI yield better and more economical results, saving time and effort and greatly improving the human experience as a whole. AI has revolutionized how we approach and solve problems, and since it is such a vast domain, it’s equally important that we understand its basic concepts.

The backbone of Artificial Intelligence is Machine Learning. While AI is the broad science of getting machines to mimic human abilities, machine learning is a specific subset of AI that gets machines to make decisions by feeding them data.

General Process:

We input some data into our machine learning model and get some output. We then take that output, together with our target data, and feed both into an error/loss function (a function that tells us how bad our model’s predictions are). Next, we apply an optimization method that changes the parameters of our model so that it fits the data better. Our model gets slightly better. Then we feed in more data and repeat the process.
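To make this loop concrete, here’s a minimal sketch in Python (the data, model, and learning rate are all made up for illustration), assuming a one-parameter linear model, a mean-squared-error loss, and plain gradient descent as the optimization method:

```python
import numpy as np

# Toy data: inputs x and targets y that roughly follow y = 3x.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(0, 0.1, size=100)

w = 0.0    # the single parameter of our model: prediction = w * x
lr = 0.1   # learning rate of the optimizer

for step in range(200):
    pred = w * x                        # 1. feed data through the model
    loss = np.mean((pred - y) ** 2)     # 2. loss: how bad the predictions are
    grad = np.mean(2 * (pred - y) * x)  # 3. how the loss changes with w
    w -= lr * grad                      # 4. optimizer step: model gets slightly better

print(f"learned w = {w:.3f}, loss = {loss:.5f}")  # w ends up close to 3
```

Every iteration is exactly the loop described above: predict, measure the error, nudge the parameter, and repeat.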

Now, understanding this might be a bit tricky at first glance, so let me put it in simple terms. Machine learning is the method of training machines to interpret some data, process and analyze it, and finally, make decisions to solve real-world problems. It basically has two components:

1. Using algorithms to find meaning in raw data.

2. Using learning algorithms to find relationships within that knowledge and thus improve the learning process.

Its overall goal is to improve the machine’s performance as it sees more data. Stock market prediction, product recommendations, and Google Translate are some applications of Machine Learning.
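To give a flavor of one of these applications, here’s a bare-bones sketch of a product recommender built on cosine similarity; real recommendation systems are far more elaborate, and every name and number below is invented for illustration:

```python
import numpy as np

# Hypothetical user-product rating matrix (rows: users, columns: products).
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],   # user 0
    [4.0, 5.0, 1.0, 0.0],   # user 1
    [0.0, 1.0, 5.0, 4.0],   # user 2
])
products = ["laptop", "mouse", "novel", "bookmark"]

def recommend_for(user, k=1):
    """Suggest unrated products that the most similar other user rated highly."""
    # Cosine similarity between this user's ratings and everyone else's.
    sims = ratings @ ratings[user] / (
        np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user]) + 1e-9
    )
    sims[user] = -1.0                  # never compare the user with themselves
    neighbor = int(np.argmax(sims))    # the most similar other user
    unseen = ratings[user] == 0        # products this user hasn't rated yet
    scores = np.where(unseen, ratings[neighbor], -1.0)
    return [products[i] for i in np.argsort(scores)[::-1][:k]]

print(recommend_for(0))  # user 0 resembles user 1, so we suggest what user 1 liked
```

The “meaning” the algorithm finds here is simply that users with similar tastes tend to rate products similarly.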

Let’s talk a bit about how Google Translate has evolved. Its interface hasn’t changed much since its first launch back in 2006, but it has definitely become better and faster, hasn’t it? The precision with which it translates has improved a lot. Just like we humans get better as we learn and practice, it’s the same with Google Translate.

Fun Fact:

Have you ever thought about how much data Google has? Well, it turns out that Google holds an estimated 10–15 exabytes of data! (1 exabyte equals 1,000,000 TB, which is about 1,000,000,000 GB.) So, let’s take a computer that can store up to 500 GB of data. Google’s 15 exabytes would fill around 30 million such computers! Fascinating, right?
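The back-of-the-envelope arithmetic is easy to check, using decimal units (1 EB = 10⁹ GB):

```python
EB_IN_GB = 1_000_000_000           # 1 exabyte = 1,000,000 TB = 1,000,000,000 GB
google_data_gb = 15 * EB_IN_GB     # the rough upper estimate of Google's data
pc_capacity_gb = 500               # one typical 500 GB computer

print(f"{google_data_gb // pc_capacity_gb:,} computers")  # 30,000,000
```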

If we dive further into this topic, we have something called a learning algorithm that empowers a machine to learn and become intelligent. For instance, when a computer detects a shape in an image, each of its features provides the computer with some information, like area, perimeter, skeleton, and other details. It can then process that input, compare it with what it already has stored in its memory, and give out some output. In a nutshell, the machine uses algorithms to find some meaning in data and uses neural networks to improve its learning and understanding process.
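As a toy illustration of two of those features, here’s a minimal NumPy sketch that computes the area and perimeter of a shape in a tiny binary image (real vision systems extract far richer features, and skeletons need morphology libraries, but the idea is the same):

```python
import numpy as np

# A tiny binary image: 1 = shape pixel, 0 = background.
img = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])

area = int(img.sum())  # area = number of shape pixels

# Perimeter = shape pixels that touch at least one background pixel
# (checking the 4 up/down/left/right neighbors via a padded copy).
padded = np.pad(img, 1)
neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:])
perimeter = int(((img == 1) & (neighbors < 4)).sum())

print(f"area = {area}, perimeter = {perimeter}")  # area = 9, perimeter = 8
```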

Some other fun facts:

1. On May 11, 1997, world chess champion Garry Kasparov was defeated in a six-game match under standard time controls by a computer called “Deep Blue”.

2. In 2011, “Watson”, IBM’s question-answering machine, won the quiz show “Jeopardy!” by defeating two of its champions, Brad Rutter and Ken Jennings.

3. Google’s “AlphaGo” defeated South Korean Go champion Lee Sedol in 2016. This computer program used reinforcement learning as well as neural networks.

AI is undoubtedly one of the biggest scientific breakthroughs of the 21st century, and it has the potential to change humanity forever. And although it has fascinated the world with its myriad concepts and applications, some people see it as a threat to the very existence of human civilization. I think this is debatable, so I’d love to hear your opinions in the comments.
