A Designer’s Introduction to AI

A brief exploration of a (kinda) new design material: Part 1

Felix Kalkuhl
Bootcamp
13 min read · Jul 22, 2024


Just as we need to understand the characteristics of every material and tool we use, we need to start discussing and understanding artificial intelligence (AI) in depth. We need to understand what it is, what to use it for, and how to use it. And while AI seems abstract and complex, it is not merely another tool in the design process, but a dynamic, evolving medium that interacts with users in ways traditional materials cannot and opens up possibilities in product design that were not feasible before.

What impact AI will ultimately have is hard to say, with predictions ranging from the "driver of the fourth industrial revolution" to the evolution of (parts of) humanity into the "homo deus" (Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow). And while there is a lot of dust in the air at the moment, we can be sure that the impact will be significant.

It’s easy at the moment to get overwhelmed by all the hype, the yea-sayers, and how unreal this technology feels. Therefore we need to demystify AI and its capabilities and limitations, and discuss how it can be molded to improve and personalize interactions (and society).

Disclaimer: I’m not an AI expert by trade, just a fellow designer trying to keep up with the world we live in.

First things first

If we talk about AI, what are we talking about? What is intelligence anyway?

Intelligence is defined by Merriam-Webster’s Collegiate Dictionary (Webster’s) as “1) the ability to learn or understand or to deal with new or trying situations, also: the skilled use of reason; 2) the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria”. Recently, this definition has expanded to include various types such as emotional, social, and multiple intelligences, indicating a broader spectrum that entails mental abilities necessary for adapting to, shaping, and selecting environmental contexts. Additionally, intelligence is increasingly seen as a capability to foresee change, incorporating foresight and insight to identify and respond to both opportunities and threats.

Webster’s defines AI as “the capability of computer systems or algorithms to imitate intelligent human behavior”. It’s an umbrella term for a wide array of technological capabilities, from understanding human speech to recognizing patterns and calculating statistical probabilities (and thus making decisions). But despite its vast potential, AI is not a panacea; there are clear boundaries to what this technology can and cannot do.

A Short History of AI

The conceptual roots of artificial intelligence can be traced back to the 1930s, when Alan Turing proposed the idea of a ‘universal machine’ that could perform any computation that could be represented as an algorithm. His 1950 paper, “Computing Machinery and Intelligence”, introduced the Turing Test as a criterion of intelligence, positing that a machine could be considered intelligent if it could fool a human into believing it was human.

AI as a formal field was born at a workshop at Dartmouth College in 1956, where John McCarthy, Marvin Minsky, Allen Newell, and Herbert A. Simon were instrumental figures. They were optimistic about the future of machine intelligence, a sentiment that drove much of the early research. This era saw the first programs that attempted to mimic human problem-solving and reasoning, including the “Logic Theorist” and the “General Problem Solver”.

Despite the early enthusiasm, the complexity of true intelligence proved greater than the initial projections. Funding and interest in AI research waned in the late 1970s and again in the late 1980s, leading to periods known as the “AI winters”. It was only in the 1990s that AI experienced a resurgence, thanks to improved algorithms, more powerful computers, and a shift to more practical applications. A pivotal moment in this revival was IBM’s Deep Blue defeating world chess champion Garry Kasparov in 1997. The development of the Internet also provided vast new datasets for training and refining AI algorithms.

The 21st century ushered in a new era for AI, marked by advancements in machine learning and deep learning. Projects like IBM’s Watson and Google’s AlphaGo demonstrated that AI could outperform humans in even more complex tasks such as playing Jeopardy! and the strategy game Go. Advances in neural networks and increased computational power have enabled AI to excel in areas like search engines and advertising.

Today, AI aids in a range of activities, from simple tasks like filtering spam emails to more complex ones such as autonomous driving and weather prediction. The development of foundation models, which enable what we understand today as generative AI (genAI), like OpenAI’s GPT (Generative Pre-trained Transformer) and Google’s Gemini, has dramatically expanded AI’s capabilities and feasibility, arguably leading AI to its “iPhone Moment” in 2022.

Perception

When we discuss AI, there are a lot of mixed feelings. Some are filled with optimism, captivated by the vast potential AI holds to transform industries and improve lives. Others are concerned about the risks associated with AI, particularly the dangers it poses if misused or controlled by those with malicious intent, and the impact it’s going to have on the job market.

Fear

Popular fiction likes to explore terrifying possibilities that AI could unleash. Think of “The Matrix” (The Wachowskis), where humanity is used by AI as a source of energy; “Dune” (Frank Herbert), with its narrative of the Butlerian Jihad to free humanity from the threat of enslavement by thinking machines; and “QualityLand” (Marc-Uwe Kling), which satirizes the consequences of an overly automated society.

In the real world, we face actual fears tied to tangible issues such as bias in AI systems, as seen with COMPAS, an algorithm used in some US courts to assess recidivism risk. While the numbers are cited to justify its use, the system has been criticized for its lack of transparency and racial bias, highlighting the moral dilemmas that arise when AI is wrong, especially with false positives. The essence of humanity and ethics in AI usage becomes a significant concern.

Further, things like deepfakes challenge trust and authenticity, prompting us to question whether we can truly trust what we see. And, as designers, we struggle with the emergence of ‘creative AIs’ like Runway ML and DALL-E, which could fundamentally challenge the traditional understanding of what a designer does.

Hope

In contrast, fiction has also explored the huge potential of AI, with portrayals highlighting its ability not only to assist but also to enrich human lives: in “Star Trek” (Gene Roddenberry), the ship’s computer offers real-time data analysis and decision support; J.A.R.V.I.S. in “Iron Man” (Mark Fergus and Hawk Ostby) manages complex environments and security systems; and Samantha in “Her” (Spike Jonze) shifts the focus to the emotional spectrum.

In reality, the benefits of AI already permeate various sectors, often enhancing our daily lives in ways we might not even realize. We talked earlier about things like spam filters and autonomous driving, but AI also powers the personalized recommendations we receive on streaming services like Netflix and Spotify, assists in navigating traffic through apps like Google Maps, and enables voice assistants like Siri and Google Assistant to understand and respond to our requests. And even beyond the wow effect of ChatGPT, more obviously “AI-like” use cases are not a thing of the future either: AlphaFold’s breakthroughs in protein folding accelerate drug discovery and deepen our understanding of diseases, and tools like Microsoft’s Copilot are transforming how we work by providing coding assistance that augments human productivity and lets us waste far less time on Stack Overflow.

Looking forward, the potential of AI to transform is immense. Imagine an AI that serves as a personal tutor for each student, or AI systems in hospitals that support diagnoses and thus give doctors more time to focus on patient care; such scenarios are not just feasible but likely within the next decade.

As with most technologies (dynamite, and Alfred Nobel’s resulting motivation for founding the Nobel Peace Prize, being probably the most famous example), fear, hope, and everything in between are all true at once. Understanding the full range of aspects is crucial. Only if we are aware of them can we embrace the positive possibilities AI offers while diligently safeguarding against its potential harms and maximizing its benefits responsibly.

Behind the Curtain

“Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke

What most of us probably have in mind when we think of AI is described by the term artificial general intelligence (AGI, also known as strong AI): a type of AI that can understand, learn, and apply intelligence across a broad range of tasks, mirroring the cognitive abilities of a human being. And despite decades of research striving toward this goal, arguably every form of AI we have today is specialized AI (also known as narrow AI or weak AI): systems designed and trained to perform specific tasks or solve particular problems.

Even genAI, which represents a significant advancement, can be understood as specialized AI because it can’t perform outside of its defined type of tasks and operates within a predefined environment (e.g. text-based chat). It must be noted that there is some discussion about whether this classification really fits or whether we need a new category for these systems. What sets genAI apart is the specific training of foundation models. These models are trained on vast and diverse datasets, enabling them to generate a wide range of outputs and making them more versatile than traditional AI models. This evolution in training methods has significantly expanded their use cases, allowing for more diverse and sophisticated applications.

Fundamentally, AI operates on statistical analysis and data manipulation — it’s more akin to advanced pattern recognition and predictive analytics than to genuine human intelligence. And it’s probably our biggest mistake to humanize it. Describing AI as merely a tool for statistics might seem simplistic, but it’s a reminder that behind the apparent complexity, there are no thoughts, feelings, empathy, intuition, or opinions — just calculations.
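To make that concrete, here is a minimal sketch in plain Python (with made-up example data, no real model or library) of the kind of calculation that sits at the core of a language model: counting which word tends to follow which, then predicting the most probable continuation. Real foundation models do this with neural networks over billions of parameters, but the principle is the same: statistics over patterns in data, not understanding.

```python
from collections import Counter, defaultdict

# Toy "training data"; in reality this would be terabytes of text.
corpus = "the user opens the app the user taps the button the user closes the app".split()

# Count which word follows which one (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word: a calculation, not a thought."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # -> "user" (the most frequent continuation in the toy data)
print(predict_next("user"))  # -> "opens" (ties are broken by first occurrence)
```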

The term ‘AI’ might even be somewhat misleading, as it suggests capabilities that the technology does not yet possess. Arguably ‘advanced computer statistics’ would be closer to the actual technology. The conversation about AI’s nomenclature has been ongoing, with figures like Meredith Whittaker of the AI Now Institute suggesting that how we name and think about AI influences public perception and policy decisions.

On the other hand, Mustafa Suleyman (co-founder of DeepMind) offers a perspective on AI that emphasizes the need to treat AI like a new species, suggesting a deeper level of interaction and understanding beyond viewing it as just a collection of algorithms or mathematical models. He argues that perceiving AI solely as computations is similar to considering humans as nothing more than carbon and water, emphasizing the complexity and potential of AI systems. This approach advocates for a comprehensive understanding of AI’s capabilities, limitations, and implications, and for prioritizing safety as we integrate AI into various aspects of society.

AI Innovation Gap

Foundation models in particular have enabled unprecedented capabilities, yet the teams tasked with innovating around them struggle. Data scientists may propose innovations that fail to resonate with users, while we designers struggle to understand the technology and envision practical AI applications. This disconnect in effective ideation often represents a significant stumbling block.

There might be a couple of reasons why we have such a hard time warming up to AI: First, the complexity and seemingly rapid evolution of (especially generative) AI can seem overwhelming. Its vast capabilities and the perception that mastering it requires highly specialized knowledge can be daunting for many designers. Additionally, there is a significant amount of fear and misunderstanding about AI within the design community. Many of us are concerned about how genAI might impact our creativity and job security, and further fear that it could replace the human touch that is so integral to what is considered good design.

The hype surrounding genAI probably also plays a role in this hesitancy. Often, the discussion about AI in popular media and tech circles focuses on sensational applications or futuristic scenarios that appear irrelevant or impractical for day-to-day design tasks. This type of “surface-level” hype can diminish the perceived practical utility of AI, making it difficult for us to take its potential seriously.

The Dawn of a New Era

Design is not, and never was, about pushing pixels. It is about identifying and solving problems (yes, just as engineering is, which arguably has its roots in design). And if you are now thinking ‘that’s design thinking’, also yes; the term was actually coined in the 1980s to describe the toolset of a designer. This fundamental aspect of design is more relevant today than it has ever been, particularly with the emergence of new technologies like genAI.

As stated before, genAI is not just another tool to be used; it represents a new material, a substance from which entirely new forms of interaction and functionality can be crafted. Jakob Nielsen has even heralded AI as introducing a new paradigm in human-computer interaction, marking the rise of intent-based outcome specification as the first new interaction paradigm in more than 60 years, following batch processing and command-based interaction.
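To illustrate the shift, here is a rough sketch (not based on any specific product; ask_model() is a hypothetical placeholder for whatever text-generation API one might call): in a command-based interaction, every step of the “how” is spelled out, while an intent-based outcome specification states the desired result and leaves the “how” to the system.

```python
# Command-based interaction: every step is specified explicitly.
photos = [
    {"name": "hero.png", "width": 1200},
    {"name": "icon.png", "width": 64},
    {"name": "banner.png", "width": 1600},
]
wide = [p for p in photos if p["width"] >= 1000]    # step 1: filter
wide.sort(key=lambda p: p["width"], reverse=True)   # step 2: sort
print([p["name"] for p in wide])                    # step 3: present

# Intent-based outcome specification: the user states the outcome they want,
# and the system decides how to get there. ask_model() is a stand-in for a
# generative model call, not a real library function.
def ask_model(intent: str) -> str:
    return f"[a model would plan and execute: {intent!r}]"

print(ask_model("show me my widest images, biggest first"))
```

The point is not the code itself but where the burden sits: in the first half every step must be specified; in the second it shifts to the system, which has to interpret the intent.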

Looking back, the first industrial revolution was powered by steam, which enabled mechanical production; the second by electricity, which made mass production possible; and the third by computing power, which introduced automated production. Now there is a prevalent discussion that we are in the fourth industrial revolution, arguably driven by AI. This revolution is characterized by the seamless integration of technologies that blur the lines between the physical, digital, and biological realms. AI plays a crucial role by enabling the automation of cognitive tasks, enhancing decision-making processes, and driving innovation in various fields. It allows for the effective analysis of vast amounts of data, leading to more efficient production methods, personalized services, and advanced problem-solving capabilities.

About the Argument that it’s Developing Too Fast

The seemingly rapid development can feel overwhelming, so it’s important to recognize what actually changes and that a lot remains constant.

The transition from command line interfaces to graphical user interfaces (GUI) traces back to pioneers like Douglas Engelbart and Xerox PARC. Engelbart’s innovations laid the groundwork, leading to Xerox’s development of the first GUI, which was later popularized globally by Apple with the introduction of the Macintosh in 1984. This device was the first commercially successful product to feature a GUI, changing the way people interact with computers worldwide.

Interestingly, despite the seismic shifts in technology, we continue to find relevance in classics like “The Design of Everyday Things”, written by Don Norman in 1988, which remains a cornerstone of design education. So even some of the earliest principles still hold a certain relevance.

What remains stable with AI is that we are working with algorithms that process data; what has mostly improved over the last couple of decades is the algorithms themselves, the data available, the way models are fed with that data, and the computing power behind them. Understanding this gives us a framework for what will most likely stay constant and what will develop in the near future, and thus where principles will remain valid and where they won’t. My hypothesis is that the biggest evolution we’ll see is actually in our understanding of AI and what to do with it.

There is likely another historical parallel: the transition away from text itself. Command line interfaces, while powerful, pose many usability challenges. Users need to know exactly what commands to use and how to use them, which limits accessibility and ease of use. The GUI revolutionized user interaction by replacing text commands with visual icons and menus. The current reliance on text-based genAI interfaces suffers from some of the same problems and is likely just a stepping stone; as the technology matures, AI will evolve beyond text, becoming more intuitive and accessible in ways we can only begin to imagine. This evolution could mirror the shift from the command line to the GUI for action-based interactions, fundamentally altering how we interact through intent-based outcome specifications.

From a modern perspective, almost ironically, the initial phases of GUI development were not heavily influenced by designers. It took us years to claim influence in the domain of interface design. In a nutshell: Sketch, which became the first widely used (though not the first overall) specialized UI design tool, didn’t emerge until 2010, nearly three decades after the Macintosh’s debut. Before Sketch, tools like Photoshop and Fireworks were repurposed for UI design, despite not being designed for that use at all and thus lacking even the smallest specific features we couldn’t live without today, such as smart guides and automatic measurements.

Alphabet Inc. CEO Sundar Pichai has stated that “AI is one of the most profound things we’re working on as humanity. It’s more profound than fire or electricity”. To be fair, there might be a little marketing influence in that, but even if AI delivers just a fraction of its potential, it will probably usher in the most productive era in history.

Epilogue

Imagine a world where genAI seamlessly integrates into our daily lives, enhancing our capabilities and enriching our experiences in ways we once thought impossible. A world where AI-driven technologies assist us with personalized elder care, offering real-time health monitoring and tailored assistance, vastly improving the quality of life for our aging population. Childcare and education become more supportive and adaptive, with AI providing individualized learning and development plans that nurture each child’s potential. Our public services become smarter and more responsive, with advanced AI optimizing everything from emergency response times to community resource management, enhancing safety and well-being for all. A world where AI empowers creative expression, enabling designers, artists, writers, and musicians to collaborate with intelligent systems that amplify their creativity; and where routine tasks are automated, freeing up our time for more meaningful and fulfilling pursuits.

The landscape of AI is set to evolve dramatically, driven by emerging technologies, ongoing ethical debates, and the development of regulatory frameworks, but especially by our understanding of what we are working with. The impact it will have on us is still unfolding, and we are only starting to comprehend the potential and challenges that lie ahead. As designers, our role as problem solvers has never been so critical. We identify and explore problems, delve deeply into them, and develop innovative solutions. In the rapidly changing world of tomorrow, our skills and insights are more crucial than ever, especially when we focus on human problems again instead of discussing the pure technology.

In this discussion, I merely scratch the surface. The journey ahead of us is filled with questions that we are only beginning to understand:

  1. What can we use genAI for if we go beyond the low-hanging fruit? What do we want to use genAI for?
  2. Which implications come from a shift towards intent-based outcome specifications? What is their actual value? Where do they make sense?
  3. What is the equivalent, for intent-based outcome specifications, of the switch from the command line to the graphical user interface for command-based interactions? Where will we go after prompt-based genAI interfaces?
  4. What training and education are necessary to empower designers to work with AI as a material?
  5. Can we prevent the erosion of human skills due to over-reliance on AI in our products?
  6. How do we balance user control and AI autonomy in intent-based systems?

But these are just a few of the questions where I feel the most pressure to find a response. The future of AI is vast and uncharted, with immense potential to transform our world.

So let’s design a better one.

This article is part of a three-part series in which I look at what AI is, what it can be used for, and how it can be used as a design material:

  1. A Designer’s Introduction to AI
  2. The Ethics of Designing a (Generative) AI Product
  3. A Discussion on How to Design a Generative AI Product
