The problem with AI development today: Designers need to step up

Sarah Tan
4 min read · Nov 14, 2023

Over the past few decades, AI research and development have centered heavily on the theory and engineering of computer systems and algorithms.

Conversations on AI have been mainly technically oriented, led by data scientists and engineers.

And that is an increasingly pressing concern.

Why?

In the early days, AI mainly focused on automating tasks, standardizing processes, and making industries more efficient.

But AI has progressed way beyond that today. We are now in the 3rd Wave of AI, which is all about enhancing human intelligence and creating a seamless user experience.

Why is this more important than ever?

AI is no longer just a tool; it has become a partner that helps us do more and makes our lives easier.

With AI deeply woven into our daily lives, new problems arise… ⚠️

Despite efforts to prioritize human needs, issues like malicious use, lack of transparency, privacy concerns, and data bias pose significant challenges.

As AI-driven products and features are increasingly incorporated into the products and services we use every day, challenges begin to materialize: from UX problems in explainability and user feedback mechanisms to ethical issues like data bias.

The escalating adoption of AI technologies has prompted research into explainable and responsible AI practices.

And this gives rise to: Human-Centered AI development.

Contrary to the traditional view of humans being “in the loop” around AI, Human-Centered Artificial Intelligence (HCAI) places human users at the center, with AI operating in a supporting role.

Initiatives such as Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) and MIT’s substantial investment in AI education exemplify the global effort to prioritize a human-centric approach.

So what does this mean for us Designers?

Designers have this (untapped) superpower in problem-solving. We need to be more involved in making the process of building AI more Human-Centered.

This means using design thinking as a guide for mitigating AI’s impact on society through the design and development of human-centered AI systems and guidelines.

How?

1. Process — Designing for responsible AI through examining the human impact of AI systems

By exploring AI’s human impact through various human-centered design (HCD) approaches, design can align values between humans and machines, integrating ethics at the core of a project.

Advocating for user needs, design research examines AI implications in real-world contexts, addressing socio-economic dynamics.

Human-centric design practices, such as inclusive and participatory approaches, help ensure fairness in data models, mitigating bias and promoting inclusivity.

These perspectives inform decision-makers on potential human impacts and help anticipate unintended consequences.
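
To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of check a design-informed review might run on a training dataset before it reaches a model. The loan-approval scenario, column names, and groups are invented purely for illustration.

```python
# Minimal sketch: inspecting group representation and outcome rates in a
# (hypothetical) training dataset before it feeds a model.
import pandas as pd

# Toy loan-approval data; column names and values are illustrative only.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "A", "B", "A"],
    "approved": [1,    1,   0,   0,   0,   1,   1,   1],
})

# 1. How well is each group represented in the data?
representation = df["group"].value_counts(normalize=True)

# 2. Do outcomes already differ across groups (a simple disparity signal)?
approval_rates = df.groupby("group")["approved"].mean()
disparity = approval_rates.max() - approval_rates.min()

print(representation, approval_rates,
      f"approval-rate gap: {disparity:.2f}", sep="\n\n")
```

A check like this does not fix bias on its own, but it gives designers and researchers a shared, concrete starting point for conversations about who is missing from the data and who bears the consequences.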

2. Outcome — Designing for explainable AI experiences to build trust and feedback

The development of AI technology relies on trust. AI systems depend on datasets to garner insights, learning from that data to train and iterate their models.

When introducing new AI products and interfaces, careful consideration of how explainability is designed is essential to instill trust and transparency in AI-centered products.
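
As a loose illustration (not a prescription for any particular product), here is a small Python sketch of how a linear model’s output could be translated into plain-language copy for users. The feature names, weights, and wording are all invented for the example.

```python
# Minimal sketch: turning a linear model's score into a user-facing
# explanation. Feature names and weights are hypothetical.

weights  = {"on_time_payments": 0.6, "credit_utilization": -0.4, "account_age": 0.2}
features = {"on_time_payments": 0.9, "credit_utilization": 0.7, "account_age": 0.3}

# For a linear model, each feature's contribution is simply weight * value.
contributions = {name: weights[name] * features[name] for name in weights}
top_factors = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:2]

score = sum(contributions.values())
print(f"Estimated score: {score:.2f}")
print("Main factors behind this result:")
for name, value in top_factors:
    direction = "raised" if value > 0 else "lowered"
    print(f"  - {name.replace('_', ' ')} {direction} your score")
```

The technical part is trivial; the design work lies in deciding which factors to surface, how to phrase them without overpromising, and what the user can actually do in response.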

(I talk about UX principles for Human-Centered AI in my 3-part UX for AI series. Check it out!)

What does this mean for designers then? And what is the opportunity?

Design needs to better support actual Human-Centered AI practice.

To make sure algorithmic decisions create inclusive user experiences, designers must be part of engineering and coding discussions.

This collaboration ensures a multidisciplinary, human-centered approach to crafting AI systems.

For designers, this shift demands new capabilities:

  1. Understanding AI Capabilities: Grasping the conceptual foundations of AI as a design material.
  2. AI Opportunity Identification: Leveraging AI capabilities to identify new value additions.
  3. Tech-User Feasibility: Assessing the practicality of translating user needs into achievable data inputs.
  4. Cross-disciplinary Collaboration: Cultivating collaboration across disciplines when building AI systems.
  5. Explaining AI to Users: Helping users understand how AI features work through UX for AI.
  6. Ethical Awareness: Evaluating and navigating the societal impact of Responsible AI development.
  7. Impact Analysis: Designing AI solutions that address technical feasibility and business value.

(Psst… stay tuned to my next post! I’ll be sharing insights on making AI accessible for UX practices, highlighting key capabilities from interviews with 16+ AI designers. 👀)

The next frontier of AI is not only technological, but also humanistic and ethical. Designers must equip themselves with the necessary skills to navigate this new terrain and guide Responsible Human-Centered AI development.

The call is clear: more design-specific resources are needed to integrate human-centered design approaches within AI systems.

We are still early on the adoption curve, and that is where my goal lies: to educate and make Human-Centered AI more accessible to society.

If you found this post insightful, hit the like and follow buttons on Medium (it will mean a lot!). Always looking to chat about AI and design — let’s connect and drop me a message on LinkedIn! 💡


Sarah Tan

0-to-1 Strategic Design Partner for emerging tech/AI startups (formatif.co) 👉🏻 Upcoming HCAI Design Workshop in Europe: https://strat.events/europe/