AIGP — Domain I: Understanding the Foundations of Artificial Intelligence — Part B

Jayashree Shetty
8 min read · Aug 24, 2024

--

In this second part of Domain I, we will discuss the OECD Framework for classifying AI systems and examine the differences among types of AI systems.

Disclaimer — This blog features my study notes, shared with the intention of helping others who are exploring similar subjects. The information presented is drawn from a range of resources and personal experience. While I strive to credit sources and ensure accuracy, not all references may be explicitly cited. If you notice any errors or missing attributions, please let me know. This content is meant as a helpful resource but isn’t a substitute for professional advice.

OECD Framework for the classification of AI systems

The OECD Framework for classifying AI systems offers guidelines for evaluating AI technologies and creating policies that ensure the trustworthy use of AI [see Ref #2 below]. The framework helps in assessing an AI system’s characteristics and its potential impacts.

OECD stands for the Organisation for Economic Co-operation and Development, founded in 1961 and today comprising 38 member countries. Its mission is to boost economic growth, improve people’s lives and facilitate international cooperation through effective policies.

OECD — Purpose of OECD framework in more detail

Promote common understanding of AI — By highlighting the important features of AI systems, the framework lets governments and others create better policies and find ways to measure things like the impact on people’s well-being.

Inform registries or inventories — Listing and describing different AI systems and their key features in databases of algorithms and automated decision-making tools.

Support sector-specific frameworks — Detailing guidelines for different industries like healthcare, finance, manufacturing and so on.

Support in risk assessment — Creating a standard risk-assessment framework for reporting AI issues, to ensure global consistency.

Support in risk management — Managing risks by guiding how to mitigate problems, ensure compliance, and oversee AI systems throughout their entire life cycle.

OECD — Dimensions for Classifying AI Systems and Applications

People & Planet — Identifies the individuals and groups that interact with, or are affected by, AI systems.

Economic Context — Describes the economic and sectoral context of an AI application, focusing on the organization and functional area for which the AI system is designed. Characteristics include the sectors/industries in which the AI system is deployed (e.g. healthcare, manufacturing...), business purpose, critical/non-critical nature, deployment, impact & scale, and technological maturity.

Data & Input — Describes the data and expert input that an AI model uses to understand and represent its environment. Characteristics include how data is collected, the structure & format of the data, the data used to train an AI system during development (training), and the data it uses during actual use (production).

AI Model — Describes how the model is built and how it is used.

Task & Output — Refers to the tasks performed by the AI system (forecasting, recognition..), its outputs and evaluation methods.

Image by Author — Mind map for OECD dimensions

OECD — Example of AI-driven health monitoring gadget

People & Planet: Non-expert end users using the wearable/gadget to monitor and improve their own health. The environmental impact is indirect: the wearable can lower the carbon footprint by reducing the need for frequent medical visits, while also contributing to improved overall public health.

Economic Context: Used in the healthcare sector for providing continuous health insights & thereby lowering healthcare costs.

Data & Input: Collects data from sensors measuring heart rate, activity levels, and sleep patterns.

AI Model: Can use a supervised learning model to analyze health data and detect patterns.

Task & Output: Monitors vital signs and activity, sending alerts or suggestions to improve user’s health, such as exercise reminders.
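The five dimensions above can be captured as a simple structured record. This is purely an illustrative sketch of my own (the class and field names are not part of the OECD framework itself), showing how the wearable example might be documented in an AI inventory:

```python
# Hypothetical record capturing the wearable example along the five
# OECD classification dimensions (class and field names are illustrative).
from dataclasses import dataclass

@dataclass
class OECDClassification:
    people_and_planet: str
    economic_context: str
    data_and_input: str
    ai_model: str
    task_and_output: str

wearable = OECDClassification(
    people_and_planet="Non-expert end users monitoring their own health",
    economic_context="Healthcare sector; continuous insights, lower costs",
    data_and_input="Sensor data: heart rate, activity, sleep patterns",
    ai_model="Supervised learning model detecting patterns in health data",
    task_and_output="Monitors vital signs; sends alerts and suggestions",
)

print(wearable.economic_context)
```

A record like this could feed directly into the registries and inventories the framework is meant to inform.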

Understanding the differences among types of AI Systems

Image Courtesy — AI in a Nutshell: A Practical Guide to Key Terminology by Tobias Zwingmann

Strong/Broad & Weak/Narrow AI

AI can be grouped into three high-level categories based on their capabilities and functionality:

Image by Author — Depicting differences between ANI, AGI, ASI

Broad artificial intelligence — More advanced in scope than artificial narrow intelligence (ANI) and capable of performing a broader set of tasks, but it still lacks the full human-like capabilities of artificial general intelligence (AGI).

Basics of Machine Learning (ML) and its training methods

Machine learning (ML) typically falls under ANI. It involves training algorithms to learn from large volumes of data and perform specific tasks. Machine learning technologies can be broadly classified according to the type of training model they use — supervised, unsupervised & reinforcement learning. Below, you will also find a discussion of semi-supervised learning.

Image by Author depicting machine learning models
Image by Author depicting different categorization of ML

Semi-supervised learning — Combines aspects of supervised and unsupervised learning for improved accuracy. It utilizes a small amount of labeled data and a large amount of unlabeled data, which enhances performance, is cost-efficient, and is ideal when acquiring a large labeled dataset is difficult.
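One common semi-supervised technique is self-training: a model fitted on the few labeled points repeatedly pseudo-labels the unlabeled points it is most confident about. The toy sketch below uses a nearest-centroid classifier on 1-D data purely for illustration; real systems use far richer models:

```python
# Minimal self-training sketch (one common semi-supervised approach):
# a tiny nearest-centroid classifier is fitted on a few labeled points,
# then iteratively pseudo-labels the unlabeled point it is most sure about.

def centroid(points):
    return sum(points) / len(points)

def self_train(labeled, unlabeled, rounds=3):
    """labeled: list of (x, y) with y in {0, 1}; unlabeled: list of x."""
    labeled = list(labeled)
    unlabeled = list(unlabeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        c0 = centroid([x for x, y in labeled if y == 0])
        c1 = centroid([x for x, y in labeled if y == 1])
        # Pick the single most confident point (largest distance margin).
        best = max(unlabeled, key=lambda x: abs(abs(x - c0) - abs(x - c1)))
        pseudo = 0 if abs(best - c0) < abs(best - c1) else 1
        labeled.append((best, pseudo))   # treat the guess as a new label
        unlabeled.remove(best)
    return labeled

seed = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
grown = self_train(seed, [1.5, 8.5, 5.2])
print(grown)
```

Starting from four labeled points, the loop absorbs all three unlabeled points, which is exactly the appeal: a small labeled set is stretched across a larger unlabeled one.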

Some more AI terminologies

Deep Learning — A subset of machine learning that can be applied to supervised, unsupervised and reinforcement learning tasks depending on the specific application and type of data. It focuses on neural networks with many layers, inspired by the structure of the human brain.

Deep learning can be used in — natural language processing to understand text, image recognition & processing, speech recognition and so on.

Generative AI — Branch of deep learning that creates new content by learning patterns from existing data. It can generate text, images, music and so on. Some notable examples include GPT-4 for text , DALL-E for image creation, deepfake technology that creates realistic but fake media and so on.

Large Language Models (LLMs) are a type of generative AI that uses deep learning techniques to generate human-like text from input prompts. LLMs are primarily text-based and excel at tasks such as text generation, language translation and text summarization. Quoting a few examples — GPT-4, Ernie, BERT.

LLMs can further be categorized into generative LLMs and discriminative LLMs.

Generative LLMs — Generate new text based on input prompts, e.g. GPT-4.

Discriminative LLMs — Classify or understand text by analyzing word relationships to make predictions or provide context, e.g. BERT.
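The contrast between the two styles can be shown with a deliberately tiny toy (nothing like a real LLM): a "generative" bigram model that samples the next word, versus a "discriminative" classifier that scores text against known word evidence.

```python
import random

# Toy contrast between the two model styles (not a real LLM):
# the generative side samples the next word from bigram counts,
# the discriminative side classifies text from word evidence.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Generative: learn bigram transitions, then continue a prompt.
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(word, n=4, seed=0):
    random.seed(seed)              # fixed seed for repeatability
    out = [word]
    for _ in range(n):
        word = random.choice(bigrams.get(word, corpus))
        out.append(word)
    return " ".join(out)

# Discriminative: score a sentence against class keywords.
def classify(text):
    animal_words = {"cat", "fish", "dog"}
    hits = sum(w in animal_words for w in text.split())
    return "animal" if hits > 0 else "other"

print(generate("the"))
print(classify("the cat sat"))   # -> animal
```

The generative function produces new sequences; the discriminative function only assigns a label to existing text — the same division of labor as GPT-4 versus BERT, at a vastly smaller scale.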

Multi-Modal Models — As the name suggests, multi-modal AI systems use multiple types of data, such as text, images and audio, together to gain a better understanding or generate richer results. E.g. DALL-E, Gemini.

Transformer Models — Transformer models are AI systems that use a mechanism called attention to analyze all parts of the input data at once, letting them understand and generate text or other sequences efficiently.

Common use cases for transformers include Natural Language Processing, Machine Translation, Protein and DNA Sequencing etc.
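The core transformer operation, scaled dot-product attention, can be written in a few lines of plain Python. This is a bare-bones sketch of the math only; real transformers add learned projection matrices, multiple heads and many stacked layers:

```python
import math

# Minimal scaled dot-product attention: every query position looks at
# all key positions at once and takes a weighted average of the values.

def softmax(xs):
    m = max(xs)                         # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Each argument is a list of equal-length vectors (lists of floats)."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(seq, seq, seq)   # self-attention: Q = K = V
print(out)
```

Because every position attends to every other position in one pass, the computation parallelizes well — one reason transformers scaled so successfully.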

Natural Language Processing (NLP) — A field of AI focused on enabling computers to understand, interpret and generate human language in a meaningful way. NLP systems can summarize large documents, generate relevant text, or even help with language translation. E.g. chatbots like ChatGPT, translators like Google Translate, speech-recognition assistants like Siri, and so on.

Basically, NLP follows three steps:

- Parses the text by breaking down the given sentence and analyzing the words to understand their meaning
- Understands the context of the words to determine the intent
- Produces human-like text or speech in response to queries
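The three steps above can be sketched as a toy pipeline. The keyword rules and canned replies below are entirely made up for illustration; real NLP systems use statistical models at every stage:

```python
# Toy illustration of the parse -> intent -> respond pipeline.
# The rules and replies are invented for demonstration only.

def parse(sentence):
    # Step 1: break the sentence into lowercase word tokens.
    return sentence.lower().strip("?!. ").split()

def detect_intent(tokens):
    # Step 2: use the words and their context to guess what the user wants.
    if "translate" in tokens:
        return "translation"
    if "summarize" in tokens or "summary" in tokens:
        return "summarization"
    if tokens and tokens[0] in {"what", "who", "when", "where", "how"}:
        return "question"
    return "chat"

def respond(intent):
    # Step 3: produce a human-readable reply for the detected intent.
    replies = {
        "translation": "Sure, which language should I translate into?",
        "summarization": "Please paste the document to summarize.",
        "question": "Let me look that up for you.",
        "chat": "Tell me more!",
    }
    return replies[intent]

tokens = parse("What is fuzzy logic?")
intent = detect_intent(tokens)
print(intent, "->", respond(intent))   # question -> Let me look that up for you.
```

Even this crude version shows why intent detection sits between parsing and generation: the reply depends on what the user meant, not just on the raw words.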

Foundation Models

Foundation models are large, pre-trained machine learning models used as a base for many different tasks and applications. They are trained on extensive datasets and can be fine-tuned for specific purposes. Instead of building AI from scratch, data scientists use a foundation model as a starting point to create new machine learning applications more quickly and affordably.

These can be used in language translation, text summarization, text generation, code generation, content generation and so on.

Examples of foundation models — BERT, GPT, Claude and so on.

Robotics and Robotic Processing Automation (RPA)

Image Courtesy : Ideogram | Prompt —robotics in AI

Robotics and Robotic Process Automation (RPA) are distinct fields but share some similarities.

Robotics combines engineering and computer science to create machines that can perform tasks without human help; it typically uses physical robots. RPA, by contrast, refers to the use of software robots (bots) to automate repetitive, rule-based tasks in business processes.
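An RPA bot is, at heart, a script that applies fixed business rules to a repetitive task. The sketch below invents a tiny invoice-routing example (the thresholds and CSV layout are illustrative, not from any real RPA product):

```python
import csv, io

# Sketch of what an RPA-style software bot does: apply fixed rules to a
# repetitive back-office task (here, validating and routing invoice rows).

RAW = """invoice_id,amount,status
INV-001,250.00,new
INV-002,-40.00,new
INV-003,9800.00,new
"""

def process_invoices(raw_csv):
    routed = {"approve": [], "review": [], "reject": []}
    for row in csv.DictReader(io.StringIO(raw_csv)):
        amount = float(row["amount"])
        # Rule-based routing, like a human clerk following a checklist.
        if amount <= 0:
            routed["reject"].append(row["invoice_id"])
        elif amount > 5000:
            routed["review"].append(row["invoice_id"])
        else:
            routed["approve"].append(row["invoice_id"])
    return routed

print(process_invoices(RAW))
# {'approve': ['INV-001'], 'review': ['INV-003'], 'reject': ['INV-002']}
```

Note there is no learning here at all — which is precisely the distinction from AI, and why the next paragraph talks about AI *enhancing* such processes.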

AI can enhance robotic processes by significantly increasing their efficiency. AI is driving the next phase of industry and manufacturing progress through enhanced inter-connectivity and smart automation, known as the Fourth Industrial Revolution or Industry 4.0.

Another emerging area is machine perception, which refers to a system’s ability to interpret and understand sensory data from the environment, similar to how humans perceive the world. For example, a system that can touch, smell and taste produce could enhance overall food production, preparation and storage.

Expert Systems

An expert system is a form of AI designed to mimic the decision-making abilities of a human expert. It is used to support humans in decision-making, not to replace them.

Three main components:

Knowledge Base — Contains an organized collection of domain-specific information, including facts and rules from human experts.

Inference Engine — This core component applies logical rules to the knowledge base to derive new insights and make decisions/solve problems.

User Interface — Allows interaction between the end user and the expert system: the user provides inputs and obtains an output as a resolution.

Where is it used — In medical diagnosis, to assist doctors in diagnosing diseases based on symptoms and medical data, and in customer help desks, to provide automated support/troubleshooting for common issues.

Image by Author — Mind map for Expert Systems
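The three components can be shown working together in a few lines. The rules and symptoms below are invented solely for illustration (this is not medical advice): a knowledge base of if-then rules, a forward-chaining inference engine, and a minimal text interface.

```python
# Minimal expert system: knowledge base + inference engine + interface.
# All rules and facts are invented for illustration only.

KNOWLEDGE_BASE = [
    ({"fever", "cough"}, "possible flu"),
    ({"possible flu", "short of breath"}, "see a doctor urgently"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
]

def infer(facts):
    """Forward chaining: keep firing rules until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in KNOWLEDGE_BASE:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # derived fact may enable more rules
                changed = True
    return facts

# "User interface": symptoms in, derived conclusions out.
symptoms = {"fever", "cough", "short of breath"}
print(sorted(infer(symptoms) - symptoms))
# ['possible flu', 'see a doctor urgently']
```

Notice how the second rule only fires because the first one derived "possible flu" — chaining intermediate conclusions is what makes the inference engine more than a lookup table.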

Fuzzy Logic

A way of reasoning that deals with uncertainty by allowing values between completely true and completely false, such as low, medium or high, or, as another example, warm, hot or very hot.

Fuzzy logic systems use fuzzy logic principles to handle and process uncertain information.

It follows 4 steps:

Fuzzification — Input data is converted into fuzzy sets.

Rule evaluation — Inputs are matched against the fuzzy rules.

Aggregation — The rule outputs are combined.

Defuzzification — Fuzzy outputs are converted back into specific (crisp) values.

Where it is used — Anti-lock braking system, modulating temperature in air-conditioners, automating wash cycles in washing machines.
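The four steps can be sketched as a toy fan-speed controller. The membership functions and rule values below are made up for illustration; real fuzzy controllers tune these carefully:

```python
# Toy fuzzy controller: temperature in, fan speed (percent) out.
# Membership functions and rule values are invented for illustration.

def fuzzify(temp_c):
    """Step 1: map a crisp temperature to membership degrees in [0, 1]."""
    cool = max(0.0, min(1.0, (25 - temp_c) / 10))
    warm = max(0.0, 1 - abs(temp_c - 25) / 10)
    hot  = max(0.0, min(1.0, (temp_c - 25) / 10))
    return {"cool": cool, "warm": warm, "hot": hot}

def control(temp_c):
    m = fuzzify(temp_c)
    # Step 2 (rule evaluation): each fuzzy set drives a fan speed.
    rules = {"cool": 0, "warm": 50, "hot": 100}   # percent
    # Steps 3-4 (aggregation + defuzzification): weighted average.
    total = sum(m.values())
    return sum(m[s] * rules[s] for s in rules) / total

print(round(control(25)))   # 50 -- fully "warm", so medium speed
print(round(control(33)))   # 90 -- mostly "hot", so near full speed
```

This is why fuzzy logic suits appliances like air-conditioners: the output changes smoothly with temperature instead of jumping between on and off.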

References & some additional reading

1. OECD members and partners — https://www.oecd.org/en/about/members-partners.html

2. OECD Framework — https://www.oecd.org/en/publications/2022/02/oecd-framework-for-the-classification-of-ai-systems_336a8b57.html

3. IAPP AIGP training — https://iapp.org/train/privacy-training/OCT-AIGP/

4. Reinforcement learning — https://www.turing.com/kb/reinforcement-learning-algorithms-types-examples

5. Fourth industrial revolution — https://www.iberdrola.com/innovation/fourth-industrial-revolution

6. Foundation models — https://aws.amazon.com/what-is/foundation-models/

7. Good read on AI terminologies by Tobias Zwingmann — https://blog.tobiaszwingmann.com/p/demystifying-ai-practical-guide-key-terminology?ref=gptechblog.com

8. Generative models — https://www.kolena.com/guides/generative-models-types-concepts-and-popular-applications/

Explore my other articles on AIGP on Medium, and keep an eye out for new ones I’ll be publishing

Preparing for IAPP’s AIGP [Artificial Intelligence Governance Professional] Certification

AIGP — Domain I: Understanding the Foundations of Artificial Intelligence — Part A

AIGP — Domain I: Understanding the Foundations of Artificial Intelligence — Final Part


Jayashree Shetty

I am a Privacy Specialist working on data privacy domains. I am passionate about protecting data and ensuring compliance with global regulations and standards.