Unraveling Artificial Intelligence for Non-Geeks — A Non-Technical Insight

Artificial Intelligence has been spreading its wings since the 1950s but has increasingly hogged the limelight in recent times. Leaders of the world’s most influential technology firms, including Amazon, Facebook, Microsoft, and Google, are emphasizing their enthusiasm for Artificial Intelligence (AI) and its applicability. AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage. It is poised to have a huge impact on automating business processes, from streamlining operations to anticipating barriers to growth. But what is AI, and why is it important? Interest in AI, machine learning (ML), and deep learning (DL) is growing rapidly, among specialists and the general public alike.

What is AI?

The term ‘Artificial Intelligence’ (AI) was coined in 1956 by Dartmouth Assistant Professor John McCarthy. It is a general term for machines that exhibit behavior that appears intelligent. In the words of Professor McCarthy, it is “the science and engineering of making intelligent machines, especially intelligent computer programs.”

The present-day definition of artificial intelligence (or AI) is “the study and design of intelligent agents,” where an intelligent agent is a system or machine that perceives its environment and takes actions that maximize its chances of success.

How Artificial Intelligence Works

Artificial Intelligence is intelligence exhibited by machines rather than by humans or other animals: the ability of a machine to perform cognitive functions that we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, and problem solving. AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing software, machines, or bots to learn automatically from patterns or features in the data. AI is a broad field of study that includes many technologies, as well as the following major sub-fields:

Deep Learning, Machine Learning, and Artificial Intelligence are like a set of Russian dolls nested within each other, beginning with the smallest and working out. DL is a subset of ML, and ML is a subset of AI, which is the umbrella term for these subfields. In other words, all ML is AI, but not all AI is ML.

Machine learning (ML): In 1959, Arthur Samuel, one of the pioneers of machine learning, defined machine learning as a “field of study that gives computers the ability to learn without being explicitly programmed.”

Machine-learning programs adjust themselves in response to the data they’re exposed to. ML lets us tackle problems that are too complex for humans to solve with hand-written rules. It uses methods from neural networks, statistics, operations research, and physics to find hidden insights in data without being explicitly programmed for where to look or what to conclude.

Machine Learning — Real life Application

Let’s look at an example of using ML to identify who enters your house. Knowing who is at the door is the outcome we want, and that gives us clarity about what we want to use AI for. Next, we check whether we have data that correlates with what we want to predict — in this case, images of the people entering the house. To gather that data, we install a camera at the door that photographs anyone on the stoop. We want to predict the identity of the person at the door; the data that correlate with those predictions are the images taken by the camera. Finally, given the outputs (names) and the inputs (pixels in the images), machine-learning algorithms learn the rules that correlate the pixels representing a certain face with a certain name. The trained model can then say: “Those pixels look like Ana.”
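The steps above can be sketched in a few lines of code. The “images,” pixel values, and names below are invented for illustration, and a real system would use far larger images and learned features rather than raw pixel distances — but the principle is the same: map inputs (pixels) to outputs (names) by learning from labeled examples. This sketch uses a simple nearest-neighbor rule:

```python
import math

# Hypothetical training data: tiny grayscale "images" flattened into
# lists of pixel values, each labeled with the person's name.
training_images = [
    ([0.9, 0.8, 0.1, 0.2], "Ana"),
    ([0.8, 0.9, 0.2, 0.1], "Ana"),
    ([0.1, 0.2, 0.9, 0.8], "Ben"),
    ([0.2, 0.1, 0.8, 0.9], "Ben"),
]

def identify(pixels):
    """Return the name of the labeled image closest to `pixels`."""
    def distance(a, b):
        # Euclidean distance between two pixel vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training_images, key=lambda item: distance(pixels, item[0]))[1]

# A new photo from the door camera, similar to Ana's training images:
print(identify([0.85, 0.85, 0.15, 0.15]))  # -> Ana
```

Given a new photo, the model labels it with the name attached to the most similar training image — the code-level version of “Those pixels look like Ana.”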

Deep learning: Deep learning is a type of machine learning that can process a wider range of data sources, requires less data preprocessing by humans, and can often produce more accurate results than traditional machine-learning approaches.

In deep learning, interconnected layers of software-based calculators known as “neurons” form a neural network.

The network can ingest vast amounts of input data and process them through multiple layers that learn increasingly complex features of the data at each layer.

The network can then make determinations about the data, learn whether those determinations are correct, and use what it has learned to make determinations about new data. For example, once it learns what an object looks like, it can recognize the object in a new image. DL uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.

Neural Network: A neural network is a type of machine learning made up of interconnected units (like neurons) that process information by responding to external inputs, relaying information between the units. The process requires multiple passes over the data to find connections and derive meaning from undefined data.

A neural network is created when neurons are connected to one another; the output of one neuron becomes an input for another.

Neural networks are organized into multiple layers of neurons. The ‘input layer’ receives the information the network will process; the ‘output layer’ provides the results. Between the input and output layers are ‘hidden layers,’ where most of the activity occurs. Typically, the output of each neuron in one layer serves as one of the inputs for each neuron in the next layer.
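This layered structure can be sketched as a few lines of code. The weights and inputs below are invented for illustration — a real network learns its weights from data — but the flow is exactly as described: each neuron weighs its inputs, and the hidden layer’s outputs feed the output layer.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: a weighted sum of inputs squashed through a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def forward(inputs):
    # Hidden layer: two neurons, each seeing every input.
    h1 = neuron(inputs, [0.5, -0.6], 0.1)
    h2 = neuron(inputs, [-0.3, 0.8], 0.0)
    # Output layer: one neuron fed by the hidden layer's outputs.
    return neuron([h1, h2], [1.2, -0.7], 0.05)

print(forward([0.9, 0.4]))  # a probability-like value between 0 and 1
```

Stacking more hidden layers between the input and output is what makes a network “deep.”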

‘Deep’ is a technical term: it refers to the number of layers in a neural network. Multiple hidden layers allow deep neural networks to learn features of the data in a so-called feature hierarchy. Deep artificial neural networks are a set of algorithms that have set new records in accuracy for many important problems, such as image recognition, sound recognition, and recommender systems. For example, deep learning is part of DeepMind’s well-known AlphaGo algorithm, which beat former world champion Lee Sedol at Go in early 2016 and then-world champion Ke Jie in early 2017.

Neural Network — Example of Application

Consider an image-recognition algorithm trained to recognize human faces in pictures. When data are fed into the neural network, the first layers identify patterns of low-level features such as edges. As the image passes through the network, progressively higher-level features are extracted — from edges to facial features such as the nose and eyes, and from those features to whole faces. At its output layer, based on its training, the neural network delivers a probability that the picture is of the specified type.

Natural language processing (NLP) is the ability of computers to analyze, understand and generate human language. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using everyday language to perform tasks.

Natural language refers to language that is spoken and written by people, and natural language processing (NLP) attempts to extract information from the spoken and written word using algorithms.
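As a toy illustration of extracting information from written language, the sketch below pulls the most frequent content words out of a sentence. The hand-picked stop-word list is an assumption made for this example; real NLP systems rely on statistical models and learned representations rather than simple word counts.

```python
import re
from collections import Counter

# A small, hand-picked list of words to ignore (illustrative only).
STOP_WORDS = {"the", "a", "an", "is", "and", "of", "to", "it"}

def keywords(text, n=3):
    """Return the n most frequent non-stop-words in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(n)]

print(keywords("The camera sees the door and the camera records the door."))
# -> ['camera', 'door', 'sees']
```

Even this crude word counting hints at what the sentence is about — a first step toward the richer analysis NLP performs.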

Why is AI so important?

AI is important because it tackles deeply complex problems, and the solutions to those problems can be applied to sectors important to human good — ranging from health, education and commerce to transport, utilities and entertainment. Artificial intelligence is not here to replace us. It augments our abilities and makes us better at what we do. Because AI algorithms learn differently than humans, they look at things differently.

AI has several benefits that make it so essential for organizations and human well-being. The human–AI partnership offers many opportunities.

Automates repetitive tasks: AI performs frequent, high-volume, computerized tasks reliably and without fatigue. For this type of automation, human intervention is still essential to set up the system and ask the right questions. Robotic process automation (RPA) and intelligent automation are typically used for such tasks.

Makes existing products and services intelligent: The products and services we use will be improved with AI capabilities, much as Siri was added as a feature to a new generation of Apple products. Automation, conversational platforms such as chatbots, and smart machines can be combined with large amounts of data to improve many technologies at home and in the workplace, from security intelligence to investment analysis.

Progressive learning algorithms: The algorithm becomes a classifier or a predictor. An algorithm can teach itself to play chess or recommend products and services, and models adapt as new data are fed in. Back propagation is an AI technique that allows the model to adjust, through training and added data, when the first answer is not quite right.
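The idea of adjusting a model when its first answer is not quite right can be sketched with a single weight and gradient descent. The data and learning rate below are invented for illustration; real back propagation applies the same nudge-toward-lower-error rule across many layers and millions of weights.

```python
def train(samples, weight=0.0, learning_rate=0.1, epochs=50):
    """Repeatedly nudge `weight` to reduce the squared prediction error."""
    for _ in range(epochs):
        for x, target in samples:
            prediction = weight * x
            error = prediction - target
            # Gradient of the squared error with respect to the weight
            # is 2 * error * x; step against the gradient.
            weight -= learning_rate * 2 * error * x
    return weight

# Data generated by the rule y = 3x; the model should recover weight ≈ 3.
samples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
print(train(samples))  # close to 3.0
```

Each wrong prediction produces an error signal, and the weight is adjusted in the direction that shrinks that error — the essence of how models “adapt with new data feed.”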

Power to analyze deeper data: Building a fraud detection system with several hidden layers was almost impossible a few years ago. All that has changed with incredible computing power and big data. The more data we feed to models, the more accurate they become.

Accuracy: The more data we feed to models, the more accurate they become. In the medical field, AI techniques from deep learning, image classification and object recognition can now be used to find cancer on MRIs with the same accuracy as highly trained radiologists.

Data becomes intellectual property: With self-learning algorithms, the data itself can become intellectual property. The answers are in the data; you have to apply AI to get them out. With the ever-growing importance of Big Data, it can create a competitive advantage.

What’s Next for AI?

The effectiveness of AI has been transformed in recent years by the development of new algorithms, the greater availability of Big Data, better machines on which to train models, and cloud-based services that catalyze adoption among developers. The benefits of AI and ML will be numerous and significant, from autonomous vehicles to new methods of human-computer interaction to more capable and efficient day-to-day business processes and consumer services.

AI will continue to bring analytics to industries and domains where it’s currently underutilized. The use of AI will break down economic barriers, including language and translation barriers.

It will progressively augment existing abilities and make us better at what we do, give us better vision, better understanding, better memory and much more.

In summary, the goal of AI is to provide software that can reason on input and explain on output. AI will provide human-like interactions with software and offer decision support for specific tasks, but it’s not a replacement for humans — and won’t be anytime soon.
