Who Taught the Bot to Smile? Generative AI, Trust Theatre, and the UX Illusion
“We trust stories. We trust emotion.”
— Yuval Noah Harari, conversation with Reid Hoffman on AI
When Yuval Harari and Reid Hoffman sat down to discuss artificial intelligence, they addressed the headline issues: the risks of misaligned incentives, the persuasive power of generative models, and the possibility that autonomous agents might one day challenge human control over democratic institutions. These are urgent and widely discussed concerns — and rightly so.
But one critical layer was missing from their conversation: the interface. No one asked: Who taught the bot to smile?
No one brought up the dropdown menus, confirmation modals, or notification badges that shape how users actually interact with AI. These elements — seemingly neutral, often overlooked — serve as gatekeepers to user behaviour. They decide what gets surfaced, what stays hidden, and what actions are rewarded or discouraged. In short, they control the user’s frame of reference.
These aren’t just aesthetic or usability choices. They are deeply ideological. Interfaces carry values. They reflect assumptions about user intent, cultural norms, and cognitive behaviour. Behind every AI system lies a carefully constructed design system, and behind that design system is a set of political decisions — about inclusion, control, and default behaviours — that often go unexamined.
To truly understand AI’s societal impact, we must pay attention to the UI layer. That’s where trust is scaffolded, behaviours are shaped, and systems quietly teach us how to think, act, and believe.
Design Systems Aren’t Neutral
Design systems promise efficiency and consistency. But those aren’t apolitical goals — they’re structured choices that reflect values, norms, and assumptions baked into the system from the start.
As design strategist Amy Hupe observes, design systems are not just technical artefacts; they’re cultural artefacts. They shape not just how things look or behave, but what is legible, what is prioritised, and what is excluded. When a component is added or deprecated, it signals more than a UX decision — it signals who the system is designed for, and who it quietly leaves out.
For instance, a login form that separates “first” and “last” names assumes a Western naming convention. In South India, Indonesia, or parts of Africa, where names don’t follow this structure, the user is forced to comply or falsify. A toggle with rapid animation might feel slick in one context but create cognitive overload for neurodivergent users. Even seemingly neutral choices — like AI-generated avatars or emojis — often default to sanitised, Western, heteronormative aesthetics, erasing cultural specificity and gender diversity in the process.
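To make the name example concrete, here is a minimal, hypothetical sketch in TypeScript. Every field name and label is invented for illustration; none comes from a real design system. The point is that the inclusive version offers structure without enforcing it, while the familiar default hard-codes the split.

```typescript
// Hypothetical name-field configuration; names and labels are illustrative only.

interface NameFieldConfig {
  label: string;
  required: boolean;
  // Structured parts are offered as optional refinements, never enforced,
  // so mononyms, patronymics, and reversed name orders still fit.
  optionalParts?: Array<"givenName" | "familyName" | "patronymic" | "honorific">;
}

const inclusiveNameField: NameFieldConfig = {
  label: "Name (as you would like to be addressed)",
  required: true,
  optionalParts: ["givenName", "familyName", "patronymic"],
};

// The Western default, by contrast, hard-codes the split and forces anyone
// whose name does not fit it to comply or falsify.
const westernDefault = { firstName: "", lastName: "" };
```

The difference is small in code and large in consequence: one component asks people to describe themselves, the other asks them to conform.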
Design systems do ethics by stealth. They encode power, preference, and perception at the infrastructural level. Behind every AI, there’s a design system. And behind every design system, there’s a political story — one that too often goes untold. Making those stories visible is the first step toward an equitable, responsive, and culturally aware design practice.
Trust Isn’t Earned — It’s Designed
In the AI Monks conversation, Hoffman speaks about the importance of calibrating trust in artificial intelligence systems. The phrase suggests that trust is a matter of rational assessment — something users consciously measure and adjust. But in practice, trust is shaped less by reason and more by experience. It gets designed into every interaction: how the system responds to hesitation, whether it allows for mistakes, and how clearly it communicates its own limits.
These are not passive outcomes. They are active design choices made under tight deadlines, roadmap pressures, and organisational priorities. Every placeholder, animation, tooltip, or delay contributes to a broader emotional architecture. Design systems, whether part of an app or a generative model interface, guide users not just in what they do, but in how they feel about the system. This is where the aesthetics of empathy come in: bots that “listen” with timed typing indicators, or interfaces that soften error messages to sound apologetic and humble.
Used well, these techniques can humanise AI, making systems more intuitive and approachable. But if misused or unexamined, they can conceal underlying extractive goals — like gathering behavioural data or nudging users toward outcomes they didn’t choose. Trust, then, is never neutral — it’s a designed effect.
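These cues are not mystical; they are parameters. The deliberately stark sketch below, with hypothetical names throughout, shows how a simulated "thinking" pause and an apologetic error string are just configuration values: they change nothing about the answer and everything about how the system feels.

```typescript
// Hypothetical sketch of "empathy theatre" settings; all names are invented.

interface EmpathyTheatreConfig {
  typingIndicatorMs: number; // delay before the reply appears, simulating deliberation
  apologeticErrors: boolean; // wrap failures in a first-person apology
}

async function respond(answer: string, config: EmpathyTheatreConfig): Promise<string> {
  // The pause is pure performance: the answer is already computed.
  await new Promise((resolve) => setTimeout(resolve, config.typingIndicatorMs));
  return answer;
}

function renderError(code: string, config: EmpathyTheatreConfig): string {
  return config.apologeticErrors
    ? "I'm so sorry, I seem to be having trouble right now. Bear with me?"
    : `Request failed (${code}). Please retry or contact support.`;
}
```

Whether such a pause is humanising or manipulative depends entirely on what it is covering for, which is exactly why it deserves scrutiny as a design decision rather than a default.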
The Modularity Trap
Every design system markets modularity: “Build once. Deploy everywhere.”
It’s an attractive promise — reusability, speed, and consistency across platforms. For design teams under pressure, modularity offers relief: no need to reinvent buttons, toggles, alerts, or layout grids for every project. But this convenience hides a deeper issue. What appears efficient often leads to ideological flattening.
When a component designed for Silicon Valley is dropped into a local government AI portal in Tamil Nadu, it carries more than just code. It brings assumptions: about reading speed, screen size, internet reliability, user patience, icon literacy, and even what constitutes a “success” state.
Most design systems — Material, Carbon, Fluent — were shaped inside Western corporate environments, prioritising data capture, seamless interaction, and neurotypical workflows. They reflect high-trust digital cultures with legal protections and service expectations that don’t translate globally.
Reusing these systems without localisation isn’t neutral — it’s a form of infrastructure colonialism. It means enforcing someone else’s defaults without context or consent.
Design isn’t just visual. It’s structural governance. So if you wouldn’t copy a legal framework across borders without local debate, why treat a UI library any differently? A responsible design system doesn’t just scale. It listens, adapts, and asks who it serves before it spreads.
New Directions: Toward Situated AI Design
We need to move beyond accessibility checklists and ethical mood boards. These are important starting points, but they often become superficial fixes — compliance over care. What’s needed now is a design systems reckoning in AI: a shift from styling interfaces to rethinking the very structures we design through. This means treating design systems not just as technical artefacts, but as cultural frameworks with real-world implications.
Here’s what that could look like:
Design Ethnography
Designing with users means more than usability testing. It involves understanding the rituals, metaphors, constraints, and aspirations that shape how people relate to technology. By embedding design ethnography into the AI development process, we move toward systems that are meaningfully contextual, not just superficially localised.
Participatory Systems Design
Marginalised communities should have the tools and agency to modify, adapt, or entirely reshape design systems. Forking isn’t just for developers — it’s a political gesture. When communities can rewrite the logic of interaction, they begin to author their own futures with technology.
Refusal as Design Logic
Design systems often funnel users into yes/no binaries. But lived experience is complex. Users should be able to reject a question, challenge a framing, or opt out entirely. Refusal should be designed in, not handled as an error.
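One way to sketch this, using hypothetical type names and copy, is to model refusal, challenge, and deferral as legitimate response variants rather than validation failures:

```typescript
// Hypothetical sketch: refusal as a first-class response, not an error state.

type UserResponse =
  | { kind: "answer"; value: string }
  | { kind: "refuse"; reason?: string }  // the user declines the question itself
  | { kind: "challenge"; note: string }  // the user disputes the framing
  | { kind: "defer" };                   // the user opts to pause or skip

function handleResponse(response: UserResponse): string {
  switch (response.kind) {
    case "answer":
      return `Recorded: ${response.value}`;
    case "refuse":
      // Refusal is logged as a legitimate outcome, not retried or nagged.
      return "Understood. This question will not be asked again.";
    case "challenge":
      return `Noted. Your objection has been recorded: "${response.note}"`;
    case "defer":
      return "No problem. You can return to this whenever you like.";
  }
}
```

The type system here is doing normative work: once refusal exists as a variant, every downstream flow has to decide what to do with it instead of pretending it cannot happen.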
Federated, Localised Design Systems
A truly inclusive approach avoids the trap of one-size-fits-all. Instead of enforcing global standards, we can support federated, locally grounded design systems that are responsive to different linguistic, cultural, and infrastructural contexts. Each system can reflect its community’s priorities while remaining interoperable across networks.
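A minimal sketch of what that federation could look like in practice, with invented token names: a shared contract that local systems extend on their own terms, rather than inherit wholesale. The locale-specific values below are illustrative assumptions, not recommendations.

```typescript
// Hypothetical federated token model; token names and values are illustrative.

interface DesignTokens {
  fontFamilyBody: string;
  lineHeightBody: number;    // scripts with stacked diacritics benefit from extra leading
  motionDurationMs: number;  // local teams can slow or disable animation entirely
  dateFormat: string;
  consentPattern: "opt-in" | "opt-out";
}

const sharedBase: DesignTokens = {
  fontFamilyBody: "system-ui",
  lineHeightBody: 1.5,
  motionDurationMs: 200,
  dateFormat: "YYYY-MM-DD",
  consentPattern: "opt-in",
};

// A locally governed system keeps interoperability (the same token contract)
// while overriding the defaults that do not fit its context.
const tamilNaduPortal: DesignTokens = {
  ...sharedBase,
  fontFamilyBody: "Noto Sans Tamil, system-ui",
  lineHeightBody: 1.7,
  motionDurationMs: 0, // low-bandwidth, low-distraction default
  dateFormat: "DD-MM-YYYY",
};
```

The specific tokens matter less than the direction of authority: the local system decides which defaults to keep, rather than having them imposed.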
Together, these shifts lay the groundwork for an AI design practice that is adaptable, ethical, and rooted in care, not just efficiency.
The System Is the Message
Marshall McLuhan said, “The medium is the message.” With AI, it’s clearer than ever: the design system is the message. Every interface decision — how a chatbot listens, how it apologises, how it defaults to action — is already shaping how we interpret intelligence, consent, and care.
Even the most transparent AI will cause harm if the interface is coercive, nudging users toward predetermined choices or simulating empathy without accountability. Conversely, even the most biased model can appear trustworthy if the design system performs clarity, responsiveness, and emotional attunement. A polished surface often disguises structural imbalance.
This is why trust, in the age of AI, must be treated as a designed condition, not an earned virtue. We’ve embedded trust into systems without asking what kind of trust we’re modelling. Is it frictionless acceptance? Is it engineered compliance? Is it a manufactured feeling of being understood, without the system actually listening?
We don’t need more sleek, one-size-fits-all AI interfaces. We need design that is situated, plural, and responsive to context.
Interfaces that hold contradiction with grace.
That welcome ambiguity.
That make room for refusal and pause.
That ask: Should this AI speak at all? And if so, how, and to whom?
Designing for trust must begin with designing for doubt.
Final Thought: What we build is never just technical.
Every system, no matter how automated, carries traces of the people who built it, the values they held, and the exclusions they overlooked. Code doesn’t exist outside culture. Architecture, whether in physical cities or digital platforms, reflects priorities — what gets shown, what gets hidden, who gets included, and who gets left out.
Design is never neutral. It’s historical. It’s social. It’s shaped by inherited assumptions and silent defaults. From colour palettes and font hierarchies to the layout of a chatbot response window, every decision has cultural weight. And when we apply design to artificial intelligence — especially conversational systems, recommendation engines, and synthetic media — these choices don’t just influence experience. They shape knowledge itself.
AI won’t kill design. It will make design more visible. It will force us to confront what kind of designers we’ve become — and what kind we urgently need. As AI systems become more embedded in daily life, design moves from aesthetics and usability into the realm of ethics and politics.
We need designers who can hold complexity. Those who are trained to listen before building. Those who understand that interface decisions have downstream effects on perception, trust, and power. We need situated design — responsive to context, grounded in place, shaped by multiple voices. Plurality should be a feature, not a liability.
Messy design isn’t bad design. It’s honest design. We need systems that hold contradiction, not erase it. Interfaces that offer room for refusal. Dialogues where silence, ambiguity, and dissent are respected, not optimised away.
Maybe the AI shouldn’t speak just yet. Maybe the interface should pause, ask more questions, or simply hold space for uncertainty. These aren’t bugs — they’re design choices that honour real life, not a simulated one.
The future of design isn’t minimal, universal, or frictionless.
It’s layered. Situated. Disobedient when necessary.
And that’s where its strength lies.

