AI Unveiled - Part 1: What is AI and what isn’t?

Iliada Eleftheriou
6 min read · Jan 31, 2024


In this series, we will demystify Artificial Intelligence (AI), distinguish between misconceptions and realities, foster a critical approach to this technology and investigate potential applications and ethical considerations.

Part 1 of the series provides a brief introduction to the umbrella term of AI and delves into the differences between AI algorithms and conventional programming.

Imagine this

Imagine waking up in a world not too far from our present. As you glance at your smart mirror, it not only reflects your image but also updates you on the day’s weather and your schedule, suggests a personalised breakfast based on your health data, and even offers clothing options based on your planned events for the day. The autonomous car outside your door is ready to take you to work, effortlessly navigating through traffic and predicting alternate routes in real time to ensure you arrive on time. Meanwhile, your AI-powered virtual assistant has already sorted through your emails, responding to routine messages and flagging the important ones for your attention.

At work, your team is collaborating on a project with the help of an AI-driven project manager. It not only tracks progress and deadlines but also suggests innovative solutions based on historical data and emerging trends. During a meeting, you present a realistic AI-generated presentation, powered by automated scenarios (like this one), images, and videos, where the content is dynamically tailored to your audience’s varied preferences, technical backgrounds, and engagement levels.

Artificial Intelligence has the potential to integrate into our daily lives, enhancing efficiency, decision-making, and personalisation.

But what exactly is Artificial Intelligence, and how does it differ from other technological concepts?

What Is Artificial Intelligence (AI)?

We often hear about AI innovations in the areas of health, manufacturing, business, law, and many more. But what is AI and, most importantly, what isn’t?

Artificial Intelligence is an umbrella term that refers to the development of computer systems that can perform tasks typically requiring human intelligence. These tasks include learning, reasoning, problem-solving, perception, and language understanding.

AI is not a single technology but a broad field of science encompassing various subfields and approaches.

Examples of AI sub-fields:

  • Machine Learning (ML) is a subset of AI that involves training algorithms on large pre-existing datasets to find patterns and make predictions or decisions about future events (a short illustrative sketch follows this list).
  • Natural Language Processing (NLP) focuses on the interaction between computers and human languages, enabling machines to understand, interpret, and generate human-like text.
  • Computer Vision (CV) involves training algorithms to interpret and understand the visual world (including static images, videos, and real-time camera feeds), enabling tasks such as image recognition and object detection.
  • Robotics: AI-driven robots can perform tasks in the physical world, ranging from simple actions to complex movements.
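
To make the Machine Learning (and Natural Language Processing) bullets a little more concrete, the sketch below shows the typical workflow: an algorithm is trained on a small, pre-existing labelled dataset, finds patterns in it, and then makes a prediction about new, unseen input. The code is purely illustrative; the scikit-learn library and the handful of toy sentences are stand-ins chosen for brevity, not tools or data discussed in this article.

    # A minimal, illustrative machine-learning sketch: train on labelled examples,
    # then predict on new text. (scikit-learn and the toy sentences are illustrative
    # stand-ins, not tools or data mentioned in this article.)
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # A tiny, pre-existing labelled dataset: sentences paired with a sentiment label.
    sentences = [
        "I loved this film", "What a wonderful day", "Great service and friendly staff",
        "I hated the ending", "What a terrible day", "Slow service and rude staff",
    ]
    labels = ["positive", "positive", "positive", "negative", "negative", "negative"]

    # Training: the model finds word patterns associated with each label.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(sentences, labels)

    # Prediction: the learned patterns are applied to new, unseen text.
    print(model.predict(["The staff were wonderful"]))  # most likely: ['positive']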

In this video, we give a tour of the various robots of every size and format that are being studied and/or developed at the University of Manchester Schools of Engineering, Computer Science and Mechanical, Aerospace and Civil Engineering (more info here). This is a 360° video: explore the lab, and don’t miss the additional information and videos on the left wall.

The Cognitive Robotics Lab at The University of Manchester hosts a variety of robots focusing on the integration of the latest machine learning and artificial intelligence methods for the training of robots’ cognitive, social and linguistic skills. Amazing!

Three levels of AI

As research and development in the area of AI progresses, we move from simple to more complex algorithms, from lower to higher forms of intelligence. Human-like characteristics, such as emotions and thought processes, could potentially emerge and be replicated by machines and algorithms.

Currently, we categorise AI into three levels:

  1. Artificial Narrow Intelligence, also known as weak AI, is the currently predominant form of AI as we know it. In narrow AI, a learning algorithm is designed to perform a single task, such as image recognition, language translation, or playing chess. Any knowledge gained from performing that task will not automatically be applied to other areas. Examples of narrow AI include virtual assistants like Siri or Alexa, recommendation algorithms on streaming platforms, and facial recognition systems.
  2. General AI represents a higher level of artificial intelligence that can understand, learn, and apply knowledge across a wide range of tasks and domains, showcasing an ability to generalise knowledge. General AI is not confined to specific tasks and can adapt to new, unfamiliar situations. It demonstrates a level of cognitive flexibility and autonomy resembling human-like intelligence.
  3. Super AI represents a theoretical level of AI where machines surpass human intelligence across all aspects. It implies a machine possessing cognitive abilities, emotional intelligence, creativity, and adaptability exceeding the intellectual capabilities of the brightest human minds. Super AI is characterised by autonomous decision-making and self-improvement, in which algorithms can independently make decisions, innovate, and solve complex problems without any human intervention.

As of now, we have achieved narrow AI but progress is ongoing. General AI and Super AI remain theoretical and are subjects of research and often speculation within the AI community. Some organisations use the two terms, general and super AI, interchangeably.

What AI isn’t

Despite the exciting progress in AI, we are still far from creating machines with general intelligence that surpasses human capabilities.

AI is not magic. AI tools operate based on data and algorithms. They are powerful tools but not infallible or all-knowing entities. They are not capable of independent reasoning and rely on training algorithms to identify patterns in the data to make decisions.

While AI can simulate certain human-like behaviours, it lacks consciousness, self-awareness, and emotions.

Is it really AI? Distinguishing between AI and traditional programming

In traditional programming, a developer meticulously writes explicit instructions that direct the computer on how to execute a task. For example, if tasked with recognising and distinguishing between pictures of cats and dogs, programmers must define explicit rules to analyse pixel values, colour distribution, and geometric shapes within the images: “If the image has pointy ears and a long snout, classify it as a dog; if it has rounded ears and a shorter snout, classify it as a cat.” These rules are manually created by programmers based on our understanding of the visual characteristics of dogs and cats.
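
A minimal, hypothetical sketch of this rule-based approach is shown below; the features, thresholds, and Python code are illustrative inventions rather than rules from any real system.

    # Traditional programming: a programmer hand-writes the classification rules.
    # (Illustrative only; the features and thresholds are hypothetical.)
    def classify_animal(ear_shape: str, snout_length_cm: float) -> str:
        # Rules written manually from our own understanding of cats and dogs.
        if ear_shape == "pointy" and snout_length_cm > 7:
            return "dog"
        if ear_shape == "rounded" and snout_length_cm <= 7:
            return "cat"
        # Anything the hand-written rules did not anticipate falls through.
        return "unknown"

    print(classify_animal("pointy", 10.0))  # dog
    print(classify_animal("rounded", 4.0))  # cat
    print(classify_animal("floppy", 9.0))   # unknown: the rules break on unexpected input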

In the case of an AI algorithm, a machine learning model is trained on a large dataset of labelled images of dogs and cats (the model is given lots of pictures of cats and dogs, together with explicit labels stating whether each picture shows a cat or a dog). During training, the model finds patterns in the data and learns to automatically extract relevant features from the images, without explicit rules provided by the programmers. These patterns enable the model to generalise its understanding to classify new, unseen images and predict whether a picture shows a cat or a dog.
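
For contrast, the sketch below shows the machine-learning approach in equally minimal form. It is purely illustrative: random arrays stand in for real cat and dog photographs, and scikit-learn’s LogisticRegression is just one possible choice of model, not the one any specific system uses.

    # Machine learning: the model works out the classification rules itself
    # from labelled examples. (Illustrative only; random arrays stand in for photos.)
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Placeholder "images": 200 examples, each flattened to 64x64 grey-scale pixels.
    images = rng.random((200, 64 * 64))
    labels = rng.integers(0, 2, size=200)  # explicit labels: 0 = cat, 1 = dog

    # Training: the model extracts its own patterns from the pixels and the labels;
    # no hand-written rules about ears or snouts are supplied.
    model = LogisticRegression(max_iter=1000)
    model.fit(images, labels)

    # Prediction on a new, unseen "image".
    new_image = rng.random((1, 64 * 64))
    print("dog" if model.predict(new_image)[0] == 1 else "cat")

With real photographs in place of the random arrays, the same fit-then-predict pattern applies; the work shifts from writing rules by hand to collecting and labelling representative data.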

The AI algorithm can adapt to a wide range of scenarios and variations, such as changes in lighting, poses, and pictures of different breeds. It can also generalise well to new images not seen during training, even when the dogs wear cute sunglasses.

Photo by alan King on Unsplash

AI: a replacement or a colleague?

AI should not be viewed as an omnipotent entity that can replace human intuition and creativity. Instead, it serves as a tool designed to augment human capabilities, automating specific tasks and offering valuable insights.

What do you think?

In the following parts of this series, we will explore ethical considerations, data bias, importance of transparency and AI regulations, and generative AI tools.

Author

Dr Iliada Eleftheriou is a Senior Lecturer at the University of Manchester. She has a background in Computer Science (PhD) and specialises in mapping complex data landscapes in healthcare settings to identify and address socio-technical challenges stemming from disparate information systems and data formats. Dr Eleftheriou is a member of the Greater Manchester Combined Authority Information Board and co-leads the health data systems community network of Cancer Research UK.

She is the Deputy Director of the Clinical Data Science programme and leads modules on Health Informatics and Information Engineering, including the interdisciplinary ‘AI: Robot Overlord, Replacement or Colleague?’, which investigates the impact of AI on our future lives and workforce.
