An AI Dictionary for Designers
A little help to orient yourself in a somewhat new world
During my own journey toward understanding AI, I stumbled over so many terms and expressions that meant nothing to me that I started my own little dictionary. And if I struggle with this stuff, maybe someone else does too…
A
Agent: An autonomous entity in AI that perceives its environment, makes decisions, and takes actions to achieve specific goals. Agents range from simple software programs that perform particular tasks to complex robots capable of interacting with the physical world.
AI washing: A marketing tactic where companies exaggerate or misrepresent the extent and capabilities of their AI technologies. Like “greenwashing,” AI washing involves branding products as AI-powered without substantial use of AI. This can mislead customers and erode trust in AI applications. Designers should communicate AI features transparently.
Algorithm: A step-by-step procedure or formula for solving a problem. Algorithms are the foundational components of AI systems, determining how they learn, process information, and perform tasks.
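If it helps to see how mechanical this really is, here is a classic example (my own illustration, not specific to AI): binary search, a step-by-step procedure for finding a value in a sorted list.

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search range until the target is found."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid               # found: return its position
        elif sorted_items[mid] < target:
            low = mid + 1            # target must be in the upper half
        else:
            high = mid - 1           # target must be in the lower half
    return -1                        # not present

print(binary_search([2, 5, 8, 13, 21], 13))  # -> 3
```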
Artificial Intelligence (AI): A branch of computer science focused on creating systems capable of performing tasks that typically require human intelligence, such as learning, reasoning, and problem-solving. AI encompasses various technologies, including machine learning, natural language processing, and computer vision.
Artificial General Intelligence (AGI): A level of artificial intelligence where machines can understand, learn, and apply knowledge across a wide range of tasks at a level comparable to human intelligence. Unlike narrow AI, which is designed for specific tasks, AGI aims to perform any intellectual task that a human can do.
Artificial Super Intelligence (Super AI): A level of artificial intelligence that surpasses human intelligence and capability, performing tasks and solving problems beyond the reach of the human mind. Super AI remains a theoretical concept and is the subject of significant speculation and ethical consideration.
Augmented Intelligence: The use of AI technologies to enhance human capabilities and decision-making, rather than replacing human intelligence. This approach emphasizes collaboration between humans and AI, aiming to improve productivity and outcomes.
Automation Paradox: The phenomenon where introducing automation for certain tasks may initially increase, rather than decrease, the complexity and responsibility of human work. This can occur because automation often requires human oversight and intervention, especially when errors or exceptions arise.
B
Bias: Systematic and unfair discrimination in AI systems caused by incorrect assumptions in the machine learning process, often due to unrepresentative training data. Bias in AI can lead to unfair treatment of individuals or groups, making it a critical ethical issue to address.
Black Box: A term used to describe AI systems whose internal workings are not visible or understandable to users, making it difficult to interpret how decisions are made. This lack of transparency can hinder trust and accountability in AI applications.
C
Computer Vision: A field of AI that enables computers to interpret and make decisions based on visual data from the world, such as images and videos. Applications include facial recognition, object detection, and image classification.
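As a small sketch of what this looks like in practice, here is face detection with OpenCV's bundled Haar-cascade detector (the package and the image file name are my assumptions, not anything specific from this entry):

```python
import cv2  # pip install opencv-python

# Load OpenCV's bundled face detector, a classic computer-vision model.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, width, height) rectangle per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")
```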
D
Data Science: An interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data. Data science combines principles from statistics, computer science, and domain expertise to analyze and interpret complex data sets.
Deep Learning: A type of machine learning that uses neural networks with many layers (deep neural networks) to learn increasingly abstract patterns from data, often used for image and speech recognition. Deep learning models can automatically learn features from raw data, making them highly effective for complex tasks.
E
Ethics by Design: The practice of embedding ethical considerations into the design and development of AI systems from the outset, ensuring that they are fair, transparent, and accountable. This proactive approach helps prevent ethical issues from arising in AI applications.
Explainability: The extent to which the internal mechanisms of an AI system can be explained in human terms. High explainability is essential for trust and accountability in AI systems, allowing users to understand how decisions are made and identify potential biases or errors.
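One lightweight way to probe a model's reasoning is permutation importance: shuffle one input feature at a time and see how much the model's score drops. A minimal sketch with scikit-learn (the dataset and model are just illustrations):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn; a large score drop means the model relies on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, importance in top:
    print(f"{name}: {importance:.3f}")
```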
F
Foundation Models: Large AI models trained on vast amounts of data, capable of performing a wide range of tasks. These models, such as GPT-4, serve as the base for specialized applications. Their complexity and scale often make their decision-making processes opaque, which makes "black box" concerns especially pressing here.
G
Generative AI: A type of AI that can create new content, such as text, images, or music, typically by learning patterns from existing data, an ability made practical by the development of foundation models. Examples include AI-generated art, music composition, and text generation using models like GPT-4o or Gemini 1.5.
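A tiny, hedged taste of generative AI in code, using the Hugging Face transformers library with the small open GPT-2 model (the large commercial models named above are accessed through vendor APIs instead):

```python
from transformers import pipeline  # pip install transformers

# GPT-2 is a small, freely downloadable generative model; today's foundation
# models work on the same principle at a vastly larger scale.
generator = pipeline("text-generation", model="gpt2")

result = generator("Design is", max_new_tokens=20)
print(result[0]["generated_text"])
```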
H
Human-in-the-Loop (HITL): An approach to AI development where humans are involved in the training, tuning, and testing of AI models to improve accuracy and ensure ethical considerations. HITL systems combine the strengths of human judgment and machine efficiency.
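One common HITL pattern is confidence-based routing: the model handles clear cases and defers uncertain ones to a person. A minimal sketch (the toy data and the 0.9 threshold are my own assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A toy "moderation" model trained on made-up, clearly separated examples.
X_train = np.array([[0.1], [0.2], [0.8], [0.9]])
y_train = np.array([0, 0, 1, 1])  # 0 = fine, 1 = needs attention
model = LogisticRegression(C=100).fit(X_train, y_train)

THRESHOLD = 0.9  # illustrative; tuned per product and risk tolerance

def decide(x):
    """Automate confident cases; route uncertain ones to a person."""
    probs = model.predict_proba([x])[0]
    label, confidence = int(np.argmax(probs)), float(np.max(probs))
    if confidence >= THRESHOLD:
        return f"auto-decided: {label}"
    return "sent to a human reviewer"  # whose answer can become training data

print(decide([0.05]))  # near the training extremes: handled automatically
print(decide([0.5]))   # ambiguous middle ground: escalated to a human
```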
I
Intent-Based Outcome Definition: A design approach where the interaction is based on understanding and achieving the user's intended outcome rather than following a predefined set of instructions. It is considered the third interface paradigm in the history of computing, following batch processing and command-based interaction.
M
Machine Learning (ML): A subset of AI that involves training algorithms on data to enable them to make predictions or decisions without being explicitly programmed for each task. Machine learning encompasses various techniques, including supervised learning, unsupervised learning, and reinforcement learning.
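The defining shift is worth seeing once in code: instead of hand-writing a rule, you show the algorithm labeled examples and it infers the rule itself. A minimal sketch with scikit-learn (the flat-size data is invented for illustration):

```python
from sklearn.tree import DecisionTreeClassifier

# Examples instead of rules: square meters and rooms -> "small"/"large".
X = [[30, 1], [45, 2], [60, 2], [90, 3], [120, 4], [150, 5]]
y = ["small", "small", "small", "large", "large", "large"]

model = DecisionTreeClassifier().fit(X, y)  # the rule is learned, not coded
print(model.predict([[75, 3]]))             # prediction for an unseen flat
```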
Model: A mathematical representation of a real-world process created by training an algorithm on data. In AI, models are used to make predictions or decisions based on new data. Models can range from simple linear regressions to complex deep-learning networks.
N
Natural Language Processing (NLP): A field of AI focused on the interaction between computers and humans through natural language, enabling machines to understand, interpret, and respond to human language. Applications include chatbots, language translation, and sentiment analysis.
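A short sketch of one of those applications, sentiment analysis, with the Hugging Face transformers pipeline (which downloads a small default English sentiment model; the example sentences are mine):

```python
from transformers import pipeline  # pip install transformers

sentiment = pipeline("sentiment-analysis")  # POSITIVE / NEGATIVE classifier

for text in ["I love this interface.", "The onboarding flow confused me."]:
    print(text, "->", sentiment(text)[0])  # e.g. {'label': ..., 'score': ...}
```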
Neural Network: A series of algorithms that attempt to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. Neural networks are the foundation of deep learning and are used in tasks like image and speech recognition.
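At its core, a neural network is layers of weighted sums passed through simple nonlinear functions. A bare-bones forward pass in NumPy (the weights are random here; training would adjust them):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)  # a common nonlinearity ("activation function")

# A tiny network: 3 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

x = np.array([0.5, -1.2, 3.0])  # one input example
hidden = relu(x @ W1 + b1)      # layer 1: weighted sum + nonlinearity
output = hidden @ W2 + b2       # layer 2: weighted sum
print(output)                   # untrained, so the value is arbitrary
```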
O
Overfitting: A modeling error that occurs when an AI model learns the details and noise in the training data to the extent that it negatively impacts its performance on new data. Overfitting results in models that perform well on training data but poorly on unseen data.
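A hedged sketch of what overfitting looks like in numbers, using scikit-learn: an overly flexible polynomial model typically scores near-perfectly on the noisy points it was trained on but much worse on held-out points (the data and degrees are illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 30)  # noisy signal
X_train, y_train, X_test, y_test = X[::2], y[::2], X[1::2], y[1::2]

for degree in (3, 15):  # a modest model vs. an excessively flexible one
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(degree,
          round(model.score(X_train, y_train), 2),  # score on seen data
          round(model.score(X_test, y_test), 2))    # score on unseen data
```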
R
Reinforcement Learning: A type of machine learning where an agent learns to make decisions by performing actions and receiving rewards or penalties. This approach is often used in applications like robotics, game-playing, and autonomous systems.
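A minimal, hedged sketch of the trial-and-error idea: an epsilon-greedy agent learning which of three simulated "slot machines" pays out best (the payout probabilities are invented):

```python
import random

random.seed(42)
true_payout = [0.2, 0.5, 0.8]     # hidden reward probability per action
value_estimate = [0.0, 0.0, 0.0]  # the agent's learned estimates
pulls = [0, 0, 0]
EPSILON = 0.1                     # how often to explore at random

for step in range(2000):
    if random.random() < EPSILON:
        action = random.randrange(3)                        # explore
    else:
        action = value_estimate.index(max(value_estimate))  # exploit
    reward = 1 if random.random() < true_payout[action] else 0
    pulls[action] += 1
    # Keep a running average of the rewards seen for this action.
    value_estimate[action] += (reward - value_estimate[action]) / pulls[action]

print(value_estimate)  # should drift toward the true payout rates
```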
S
Supervised Learning: A type of machine learning where the model is trained on labeled data, meaning the input data is paired with the correct output. This approach is commonly used in tasks like classification and regression.
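In code, supervised learning means pairing inputs with known answers, fitting a model, and checking its accuracy on examples it never saw. A small sketch with scikit-learn's bundled iris flower dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # measurements paired with correct species
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier().fit(X_train, y_train)  # learn from labeled pairs
print(f"accuracy on unseen flowers: {model.score(X_test, y_test):.2f}")
```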
T
Training Data: The dataset used to teach an AI model to make predictions or perform a task. High-quality, relevant data is crucial for effective AI performance, as it directly influences the model’s accuracy and reliability.
Transfer Learning: A machine learning technique where a pre-trained model is adapted to perform a new, but related, task, saving time and resources. Transfer learning is particularly useful when there is limited data available for the new task.
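A hedged PyTorch sketch of the standard recipe: take an image model pre-trained on ImageNet, freeze its learned features, and train only a new final layer for your own task (the 3-class problem is hypothetical):

```python
import torch.nn as nn
from torchvision import models  # pip install torch torchvision

# Start from a network that has already learned general visual features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False  # freeze the pre-trained layers

# Replace the final layer to predict our 3 hypothetical classes;
# only this new layer gets trained on the small new dataset.
model.fc = nn.Linear(model.fc.in_features, 3)
```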
U
Underfitting: A modeling error that occurs when an AI model is too simple to capture the underlying structure of the data, resulting in poor performance. Underfitting leads to models that fail to generalize well from the training data to new data.
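The mirror image of the overfitting sketch above: a straight line fitted to clearly curved data scores poorly even on its own training set (the parabola data is illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.linspace(-3, 3, 50).reshape(-1, 1)
y = X.ravel() ** 2                    # a curve (parabola)

model = LinearRegression().fit(X, y)  # a line cannot bend to follow it
print(f"R^2 on its own training data: {model.score(X, y):.2f}")  # near 0
```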