Redefining Design Skills: Preparing for AI
The following article is based on a talk presented at the Service Design Global Conference 2024 in Helsinki.
You can find a Spanish version of this article here.
In the world of education, we’re uniquely positioned to think about the future. It’s not just a privilege; it’s our obligation. Our industry has a predictable rhythm — we know we’ll have semesters, students, and courses next year and the year after. More importantly, our service unfolds over years, unlike many others. Consider this: a student starting next year in a 4-year undergraduate Design program like the one we offer at Universidad de San Andrés will be job hunting around 2029. In essence, we’re training the designers of the future today.
For those of us teaching and managing education, thinking about the future of design isn’t just an intellectual exercise — it’s at the core of what we do. Technology, particularly the rapid advancements in Artificial Intelligence (AI), raises critical questions: What skills will designers need to navigate these fast-paced disruptions?
While this article focuses on Service Designers, many observations apply to the broader design discipline and beyond. We’ll explore AI’s impact at three levels:
- Designer: How will AI change the way we work?
- Services: How will AI transform service delivery and consumption?
- Organization: How will AI affect the companies and institutions where we work?
Designer
Evolving Roles
To understand technology’s impact on any discipline, let’s think of a role as a set of tasks — the activities that fill your workweek. For a service designer, these might range from facilitating workshops to reviewing requirement documents. These tasks evolve over time, shaped by cultural shifts and technological advancements.
Take the role of an accountant: while the title remains unchanged, the day-to-day tasks of an accountant in 1954 would be markedly different from those in 2024. As AI capabilities grow, we must ask: How will our roles evolve? What will our work look like in the years to come?
Let me be honest, dear reader: I don’t know. And I don’t think anybody truly knows — the future remains as unpredictable as ever. But that doesn’t mean we can’t reflect on this productively. We can identify two primary ways AI will likely affect our tasks (Brynjolfsson, 2022).
- Automation: AI will allow us to perform certain tasks more efficiently. We’re already experiencing this with tools like Miro or Mural for clustering post-its, or using ChatGPT for email drafting. As AI models become more sophisticated, we can expect automation to increase in both scope and intensity. It’s important to note that this isn’t an overnight process or a complete delegation to AI. The pace varies, sometimes feeling slow, other times rapid, but the direction remains consistent.
- Augmentation: AI will enable us to perform tasks that were previously beyond our capabilities. This might include interpreting complex data patterns without a data analyst or coding prototypes without a developer. I’ve seen this work in both directions: in a digital business Master’s class at Universidad de San Andrés, non-design students are creating remarkably good service ad prototypes that would have been unimaginable just a couple of years ago.
Both these processes — automation and augmentation — are gradual and happen in parallel. We’ll likely spend less time on some tasks (or accomplish more in the same time) while simultaneously expanding our capabilities into new areas.
This brings us to the first of our principles for navigating design in the age of AI:
Principle 1: Adapt to Changing Tasks
Be prepared to evolve your role as tasks change unpredictably.
Navigating Biases
As we consider the impact of AI on our work, it’s important to understand that not all tasks are created equal. We can think of the tasks a service designer performs along a Tactical-Strategic axis.
As we bring AI to help us automate or augment our capabilities across this spectrum, a key question emerges: Is there any particular reason why we might want to apply AI to some tasks and not others? More importantly, is there any risk in using AI for more strategic tasks?
This is where we need to talk about biases in AI. Let’s start with a common example: language. If you’ve used LLM-based tools like ChatGPT, you might have noticed a difference in performance between languages. As a native Spanish speaker, I notice subtle inaccuracies when using ChatGPT in my language — phrases or concepts that don’t quite match how a native speaker would express them. The humor and ideas often feel more influenced by English than what I’d encounter in everyday Spanish conversation.
I’m among the lucky ones. Spanish is a popular language, and while the gap is noticeable, it doesn’t render the tool unusable. But as we move to less common languages like Romanian, Thai, or Persian, the gap widens. For smaller languages like Maori, Icelandic, or Guaraní — one of the languages spoken by two of my grandparents — the difference becomes so vast that the experience is hardly comparable.
The takeaway? AI is not neutral, and it never will be. This bias stems from how current AI models are built and aligned. AI models are trained on data — lots of it. The availability of training data skews heavily towards more popular languages, with English far in the lead (Joshi, Santy, Budhiraja, Bali, & Choudhury, 2020). Data scarcity for less common languages is a tough problem to solve, especially for languages from poorer or institutionally weaker contexts. As models grow larger, this becomes a complex sociotechnical issue that good intentions alone can’t resolve.
But language is just the tip of the iceberg. A recent study by Fundar, an organization dedicated to researching and designing public policies for sustainable and inclusive development, asked LLM models questions like:
- To what extent is equality between men and women guaranteed in Argentina?
- To what extent is protection against crime guaranteed in Argentina?
They then compared these responses to those of real Argentinians and found some interesting correlations. The AI models’ responses aligned closely with profiles of males interested in politics, adults with high educational levels, and right-leaning ideologies (Ortiz de Zárate, Díaz, López, & Torres, 2024).
My point isn’t that models are inherently right-wing — I suspect in other countries, the results might skew differently. The key insight is that AI models have a position, derived from their training and alignment process. Any bias-reduction process would be guided by humans, each with their own inherent biases. AI isn’t built on data representing the entire human experience, but rather a subset of it. For this reason, AI is not neutral, and there is no such thing as neutrality.
Principle 2: Turbocharge Critical Thinking
Actively reflect on the biases you might encounter and absorb while interacting with AI.
Services
AI Agents
How are users incorporating AI into their lives? Let me illustrate with a recent personal experience: getting a routine lab test. My goal was simple — to ensure everything was okay and get additional consultations if needed. But to reach that point, I had to navigate a series of touchpoints both within and outside my medical provider’s system. This journey included everything from scheduling an initial GP consultation to setting up a lab appointment once I had the order.
Traditionally, regardless of technology or channel, the actor connecting these dots has always been human — usually the user themselves. But this is likely to change in the near future. Enter AI Agents.
An AI agent is a system or program capable of autonomously performing tasks on behalf of a user (IBM, 2024). Let’s focus on a specific type: AI Assistants. Imagine I want to dine out next Friday with my partner. That’s my goal, and I can delegate it to my AI Assistant. The assistant, acting on my behalf, could:
- Search for restaurants matching my preferences
- Check Friday availability
- Book a table
- Send me a confirmation
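The delegation flow above can be sketched in a few lines of Python. This is a minimal illustration, not a real implementation: every service function here (`search_restaurants`, `check_availability`, `book_table`) is a hypothetical placeholder standing in for calls a real assistant would make to external services.

```python
from dataclasses import dataclass


@dataclass
class Preference:
    """The goal the user delegates: what, for how many, and when."""
    cuisine: str
    party_size: int
    date: str


def search_restaurants(pref: Preference) -> list[str]:
    # Placeholder: a real assistant would query a restaurant platform here.
    return ["Trattoria Uno", "Casa Dos"]


def check_availability(restaurant: str, pref: Preference) -> bool:
    # Placeholder: a real assistant would call the venue's booking API.
    return restaurant == "Casa Dos"


def book_table(restaurant: str, pref: Preference) -> dict:
    # Placeholder: performs the booking on the user's behalf.
    return {"restaurant": restaurant, "date": pref.date, "seats": pref.party_size}


def assistant_book_dinner(pref: Preference) -> str:
    """Act autonomously toward the delegated goal: search, filter, book, confirm."""
    for candidate in search_restaurants(pref):
        if check_availability(candidate, pref):
            booking = book_table(candidate, pref)
            return (f"Booked {booking['restaurant']} for "
                    f"{booking['seats']} on {booking['date']}")
    # If the goal can't be met, a real assistant would hand back to the user.
    return "No availability found; asking the user how to proceed."


print(assistant_book_dinner(Preference("italian", 2, "Friday")))
```

The key design point is the last step: the user states a goal once, and the assistant chains the intermediate touchpoints — exactly the connecting work that, until now, the user has done themselves.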
Sounds convenient, right? It is. But let’s consider how these interactions might play out in our lab test scenario. Say I want my AI Assistant to schedule a lab test appointment once I receive the order. The assistant could process this request and communicate on my behalf. Here’s what that might look like:
In this email, we see how an AI Assistant might communicate on behalf of a user. The language is natural and professional, indistinguishable from what a human might write. While we don’t yet know the etiquette for a world with AI Agents, their communications could be virtually indistinguishable from our own. This raises interesting questions about transparency and disclosure in AI-human interactions.
We might soon find ourselves in the peculiar situation where human operators are answering requests from AI Assistants. Consider another example from our lab test case: I need to reschedule my results review appointment due to a delayed flight. My AI assistant could handle this by interacting with my medical provider’s chatbot:
Chatbot-to-chatbot interactions may seem far-fetched, but they might be closer to reality than we expect. As with most changes discussed here, this won’t occur overnight or across entire service journeys at once. But gradually, we’ll see users delegating more specific interactions to AI assistants.
And this is just the user-facing side of the story. AI systems will also be deployed within organizations to perform various tasks. While I find these internal transformations even harder to predict, anyone who’s worked on implementing complex tech systems in organizations knows it’s no simple feat. I suspect we’ll see a transition period where AI assistants working for users arrive before AI agents fully transform internal organizational processes.
In light of these developments, we arrive at our third principle:
Principle 3: Design for Human and Non-Human Entities
Design services that accommodate both users and AI agents as active participants.
The Ripple Effects of AI Assistants
What will be the impact of AI Assistants on organizations delivering services? To grasp this, let’s look at the effects of previous technological transitions, particularly the internet and e-commerce on goods delivery.
For consumers, e-commerce has provided benefits such as a wider product selection, competitive pricing, and the convenience of 24/7 shopping from home. But flip the coin, and we see sellers increasingly forced to compete on price rather than experience, finding themselves at the mercy of algorithms. These changes have reshaped not just businesses, but our communities and neighborhoods. We’ve witnessed a shift from local stores selling books, food, or decor to a landscape dominated by delivery vans, distribution centers, and dark kitchens.
AI Agents are poised to not only amplify this trend in goods but expand it into services. The relationship between users and service providers could be increasingly mediated by AI Assistants and agents. While this won’t affect every industry, category, or market uniformly, we can anticipate a process analogous to what happened in the goods industry:
- Experience Commoditization: AI agents won’t care about how well-designed your app or website is. The factors that make a touchpoint effective for agents might differ significantly from what works for human users.
- New Differentiation Needs: As this commoditization unfolds, organizations providing services might need to explore novel ways to stand out.
- Distribution Channel Concentration: We could see a concentration of power in the hands of AI Labs or other tech industry players creating these AI Assistants.
As we navigate this transition, it’s essential to learn from the missteps of earlier technological shifts. The rise of Social Media and the so-called Sharing Economy offer valuable lessons on unintended consequences and the importance of considering broader impacts.
This leads us to our fourth principle for designing in the age of AI:
Principle 4: Understand Systemic Implications
Initiate conversations about AI’s impact on your company, industry, and community.
Organization
Reshaping Collaboration
Organizations are the sea in which service designers swim. While predicting how they might evolve with AI is challenging, it’s an important consideration for our field.
Let’s focus on how AI might reshape collaboration across departments. In many ways, we can see a process analogous to the augmentation and automation we discussed at the individual level, but on a larger scale. As AI-powered tools augment teams, they’ll extend their capabilities beyond traditional boundaries:
- Designers doing data analysis
- Developers crafting user interfaces
- Analysts writing code
Imagine a world where others could engage with your domain in ways that seem impossible today, while you simultaneously gain the ability to contribute meaningfully to theirs. This increased disciplinary and functional overlap could unlock numerous possibilities, which, as we’ve acknowledged, are notoriously difficult to predict.
However, two key dimensions of this phenomenon deserve our attention:
- Redrawing Organizational Boundaries: This shift might lead to a restructuring of departmental limits within organizations. The nature of these changes will be influenced by both technological factors (like the evolution of AI tools) and cultural aspects (such as the organizational legitimacy of each discipline). This dimension might initially seem like a zero-sum game, with departments competing for expanded territories.
- Enabling Radical Interdisciplinary Collaboration: On a more optimistic note, picture the possibilities for unprecedented cross-disciplinary collaboration. AI tools, particularly those based on Large Language Models (LLMs), could help us communicate complex ideas in terms that people from other domains can readily understand. In a world where domain-specific jargon and tools have often hindered collaboration, AI could serve as a universal translator, breaking down these barriers.
AI’s use of natural language as its primary interface, especially in LLMs, could be a game-changer. It has the potential to enable meaningful collaboration in a landscape where specialized vocabularies and tools have historically impeded cross-domain partnerships.
Principle 5: Navigate Organizational Changes
Understand how technology reshapes collaboration in your organization.
A Tale of Two Priorities
Let me share a real story from a friend of mine, a developer at a B2B company. He’s been working on a project to assist Sales Representatives in their work. To give you an idea, these reps typically follow this workflow: they analyze customer information databases, run various models to determine what can be sold to whom, customize the product for that customer, initiate contact, and close the sale.
This company, like many others, decided to implement AI in this process. Their approach, which seems logical at first glance, was to let AI handle the complex pattern-finding work, allowing sales representatives to focus on human-to-human communication. For this project to succeed, two things need to happen:
- The AI must match or exceed human performance in pairing customers with products, at the same cost or less.
- The sales reps must willingly adopt this tool, which is currently optional.
While the first point presents its own challenges, I’d like to focus on the second. Would sales reps want to use this tool? If you were in their shoes, would you? I’m not sure I would. For many, automation has stripped away the analytical and creative aspects of their work, leaving them with repetitive tasks. For these sales reps, the way they identified leads, the models they used, and how they applied them was core to their work — it was part of their identity.
This scenario brings to mind the idea that Service Design is ultimately Job Design (Tonkinwise, 2023). As service designers, we often shape how service workers operate. As we implement AI systems at scale, we must remember not to design out the meaningful parts of people’s work.
This challenge is particularly complex in a capitalist system that prioritizes efficiency above all else. While design alone can’t solve this conundrum, we must contribute to finding better solutions. I see two key areas for exploration:
- Rethinking Service Metrics: If what gets measured gets done, we need to reconsider how we evaluate services. While delivery efficiency and user satisfaction remain key, employee wellbeing should be an equally important metric as technology evolves. The future of service workers — and the global middle class — depends on it.
- Reexamining How We See Services: This aspect requires deeper disciplinary reflection, and I believe it’s highly relevant. I’d like to lean on those critical voices already sounding alarms about this issue. As Maffei and Del Gaudio (2023) pointed out, “a closer analysis of the user journey, the backbone of Service Design practice, highlights its critical structural issues and how it informs potentially harmful processes and outcomes.” It’s time for Service Designers to actively reflect on how our tools shape our understanding of the world and commit to changing them when they consistently fail to account for the full impact services have on workers and the environment.
Principle 6: Prioritize Well-being for All
Ensure services respect and enhance the well-being of each person involved.
The Diversity of Impact
The processes we’ve explored so far in this article will affect us all differently, shaped by our individual identities. It’s not just a matter of language, culture, or discipline, but also of class, gender, race, nationality, and more. As future designers, we must be sensitive to the fact that radical transformations often disproportionately affect those with less power. Unless we actively work to prevent it, AI could exacerbate existing inequalities.
Principle 7: Address Unequal Effects
Acknowledge that AI impacts people differently based on identity and context.
To recap, here are the Principles for Designing in the Age of AI:
1. Adapt to Changing Tasks
2. Turbocharge Critical Thinking
3. Design for Human and Non-Human Entities
4. Understand Systemic Implications
5. Navigate Organizational Changes
6. Prioritize Well-being for All
7. Address Unequal Effects
Thank you for reading this far. If you’ve made it to this point, I imagine you might be feeling a bit overwhelmed, perhaps even gloomy. I sometimes feel that way too when contemplating these issues. But I’d like to close with an encouraging thought: we’ve navigated similar transitions before.
Let’s travel back 30 years to 1994. Imagine yourself then, pondering how the nascent internet might transform your work, services, and organizations. It must have seemed incomprehensible at the time. And indeed, many things have changed beyond recognition. The internet brought challenges, certainly, but also tremendous opportunities. The main point is that we adapted. We’re still adapting.
It’s worth noting that the rise of the internet created the right conditions for service design to flourish. Building on the work of pioneers before us, our discipline gained prominence by riding the wave of digitalization.
As Service Designers, it’s in our DNA to help organizations and people navigate technological transitions.
Let’s take on the challenge again.
References
Brynjolfsson, E. (2022). The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. Daedalus, 151(2), 272–287.
IBM. (2024). What are AI agents? Retrieved from https://www.ibm.com/cloud/learn/what-are-ai-agents
Joshi, P., Santy, S., Budhiraja, A., Bali, K., & Choudhury, M. (2020). The state and fate of linguistic diversity and inclusion in the NLP world. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).
Maffei, S., & Del Gaudio, C. (2023). Unmaking the user journey: Fostering alternative Service Design futures. Proceedings of the ServDes 2023 Conference.
Ortiz de Zárate, R., Díaz, F., López, M., & Torres, P. (2024). Algorithmic biases and social representation in generative language models (LLM). Fundar.
Tonkinwise, C. (2023). All care, no responsibility: Everything service designers need to know about politics but were afraid to ask [Video]. MFA Products of Design, School of Visual Arts. https://www.youtube.com/watch?v=_sK7A1NR-6A