🚀 Exploring AI & User-Centered Design: Where UCD Falls Short 🤖🔍
AI is a hot topic these days, but it's not a new concept. In fact, you're probably reading this because an AI recommended it to you. As AI evolves and steps out from behind the curtain to play a more prominent role in our daily lives, we're discovering new applications for it. This leads to new products and experiences for users, who may benefit from the technology or run into new challenges because of it.
At the same time, the core goal of user-centered design (UCD) has always been to humanize technology. This is a crucial moment for the design community to actively engage with AI development to ensure that user voices are protected and represented. In this brief blog, I share my insights and concerns, drawing on my academic background and comparing it with current industry challenges. My aim is to offer valuable observations and spark a discussion that gathers diverse perspectives.
The Dilemma of Unknown Unknowns: User-centered design (UCD) relies on ethnographic research to understand people's needs and demands, guiding the creation or improvement of products and services. While UCD effectively addresses current challenges, AI introduces new, unpredictable issues. Although UCD remains crucial for humanizing technology, its existing frameworks may fall short in anticipating and tackling future problems. To address this, we should collaborate with specialists from other disciplines, such as data scientists, AI ethicists, and cognitive psychologists. Their expertise can help us refine our current methods or develop new approaches, allowing us to better anticipate and solve emerging challenges.
Lack of Data: The issue of missing data is familiar and affects various fields, but it is especially problematic in AI projects. Designers often depend on the data they already have, which can lead to overlooking crucial gaps. This problem is even more pronounced in AI, where large and diverse datasets are essential for effective and fair system performance. When data is incomplete or unrepresentative, it can result in biased outcomes and limit the system's ability to serve all user groups equitably. In my experience, missing data has frequently hindered efforts to achieve inclusivity, as some demographics or needs may be neglected. To tackle this challenge, we must proactively identify and address data gaps to ensure that AI systems are both inclusive and effective. For instance, research in women's health has highlighted how missing data can lead to significant disparities. (See examples here: [Link]). Unfortunately, current design frameworks do not adequately address this issue, indicating a need for significant improvement. The good news is that there are existing frameworks in AI ethics and data analytics for identifying these gaps. We can leverage these methods as a starting point to develop new design frameworks.
“Medical research, by default, is male led. For example, The Physician’s Health Study examined the effect of aspirin on cardiovascular disease involving 22,000 patients. Not a single one of them was female! Women of childbearing age were even banned in the 1970s from being enrolled in phase I clinical trials. This means treatments have been developed whose effectiveness and safety are unknown in women.” — Forbes
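To make the point about data gaps more concrete, here is a minimal sketch of how a team might audit a research sample for under-represented groups before it feeds an AI feature. The file name and the `sex` column are hypothetical placeholders, and the reference shares would come from census or market data relevant to your product; this is an illustration, not a prescribed method.

```python
import pandas as pd

# Hypothetical research dataset; replace with your own export.
df = pd.read_csv("user_research_sample.csv")

# Reference shares for the population you intend to serve
# (e.g. from census data); values here are illustrative only.
population_share = {"female": 0.51, "male": 0.49}

# Compare the sample's demographic mix against the reference.
sample_share = df["sex"].str.lower().value_counts(normalize=True)

for group, expected in population_share.items():
    observed = sample_share.get(group, 0.0)
    gap = expected - observed
    flag = "UNDER-REPRESENTED" if gap > 0.05 else "ok"
    print(f"{group}: sample {observed:.0%} vs population {expected:.0%} -> {flag}")
```

Even a rough check like this gives designers a concrete artifact to bring into conversations with data teams about who is missing from the evidence.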
Complexity of Big Data: UCD designers typically focus on qualitative data and rarely engage with big data. In recent years, designers have concentrated on small user groups and short-term research to keep up with agile frameworks and industry demands, especially in the fast-paced digital world. This focus has limited their experience with analyzing and synthesizing large datasets, and many lack training in large-scale data assessment. To close this gap, we need to retrain ourselves and our peers in data visualization and analysis tools. Low-code and no-code software such as MS Power BI, which is free and user-friendly, can be taught to designers quickly, enabling them to manage and interpret big data more effectively. (List of low-code and no-code data visualization tools)
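For designers who want to go one step beyond point-and-click tools, even a short script can summarise a large dataset ahead of a workshop. Below is a minimal sketch assuming a hypothetical analytics export with `user_segment` and `task_time_seconds` columns; both names are placeholders, not a real product schema.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical product-analytics export; column names are assumptions.
events = pd.read_csv("events.csv")

# Summarise task completion time per user segment.
summary = (
    events.groupby("user_segment")["task_time_seconds"]
          .describe()[["count", "mean", "50%", "max"]]
)
print(summary)

# A quick visual to share with the team.
events.boxplot(column="task_time_seconds", by="user_segment")
plt.title("Task time by user segment")
plt.suptitle("")
plt.ylabel("seconds")
plt.savefig("task_time_by_segment.png", dpi=150)
```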
Complexity of AI Systems: User-centered design focuses on involving end-users in the design process, gathering their feedback, and incorporating it into the development of solutions. However, AI systems often rely on complex algorithms and data processing techniques that are difficult for most users to understand fully. Many users might not be aware of how this technology works or its potential impact on their lives, and they may have misconceptions about its challenges. For example, a research study found that users’ main concern with biometric verification was data privacy, even though the data storage method used was actually secure. This underscores the challenge of effectively involving users in the design process when dealing with such intricate technologies.
Unpredictability of AI Behavior: AI systems, particularly those that learn and adapt over time, can exhibit unpredictable behavior. As a result, even a successfully designed UCD platform will require ongoing maintenance and refinement. Designers may need to develop frameworks to support this continuous adjustment. Additionally, it may be necessary to train a new tier of designers whose responsibility will be to ensure the system continues functioning correctly over time.
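One lightweight way to support that ongoing maintenance is a recurring drift check that compares the model's recent outputs against a baseline and flags shifts for the team to review. Here is a minimal sketch using a two-sample Kolmogorov–Smirnov test; the score arrays and significance threshold are illustrative assumptions, not a prescribed monitoring setup.

```python
import numpy as np
from scipy.stats import ks_2samp

def scores_have_drifted(baseline_scores, recent_scores, alpha=0.01):
    """Flag a statistically significant shift between two score distributions."""
    statistic, p_value = ks_2samp(baseline_scores, recent_scores)
    return p_value < alpha, statistic

# Illustrative data: scores captured at launch vs. last week.
rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.70, scale=0.10, size=5_000)
recent = rng.normal(loc=0.62, scale=0.12, size=5_000)  # simulated drift

drifted, stat = scores_have_drifted(baseline, recent)
if drifted:
    print(f"Model behaviour has shifted (KS statistic {stat:.3f}); schedule a design review.")
```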
Bias and Fairness: Ensuring that AI systems are fair and unbiased is a significant challenge (check this case study). User-centered design might not adequately address the systemic biases that can be present in training data or algorithms, potentially leading to unfair outcomes. Designers generally have a solid understanding of the principles of a fair system, but many currently lack the skills to evaluate data and identify system pain points. However, their experience working in multidisciplinary teams and facilitating workshops can be invaluable in overcoming this challenge.
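As a concrete starting point, designers working alongside data teams can ask for simple group-level metrics. The sketch below computes approval rates per group and a demographic parity gap from a small illustrative table of model decisions; the column names and values are made up for the example.

```python
import pandas as pd

# Hypothetical table of model decisions; columns and values are illustrative.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   0],
})

# Approval (positive-outcome) rate per group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Demographic parity gap: difference between the best- and worst-treated group.
parity_gap = rates.max() - rates.min()
print(f"Demographic parity gap: {parity_gap:.2f}")
# A gap near 0 suggests similar treatment; a large gap is a prompt
# for deeper investigation, not a verdict on its own.
```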
Transparency and Explainability: In my view, the billion-dollar challenge facing both the industry and the design world is how to keep humans in the loop and in control of AI, particularly in the realm of deep learning. Understanding how and why AI makes decisions is crucial, as a lack of transparency can erode user trust and reduce our ability to anticipate and mitigate unintended consequences. While this is not a challenge that designers can solve alone, they can play a vital role in consulting with development and science teams to develop effective solutions.
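Designers cannot build explainability on their own, but they can prototype how an explanation might be surfaced to users. Here is a minimal sketch using scikit-learn's permutation importance to rank which inputs drive a model's decisions; the dataset and model are illustrative stand-ins, not a recommendation for any particular product.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model; in practice this would be your product's model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Rank features by how much shuffling each one degrades performance.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)

# The top few factors could feed a plain-language explanation in the UI.
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Turning a ranking like this into language users actually understand is exactly the kind of work designers are positioned to lead.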
Reflecting on my experiences across various domains, it's clear that while our current skills and frameworks are still relevant, they often fall short when applied to AI. While good designers know how to use existing frameworks, the best designers innovate and adapt, creating new frameworks tailored to each project's unique challenges.
Now is the time for our design community to step up and create AI-specific frameworks and guidelines. This is a collaborative effort that requires our collective expertise and creativity. I invite you to share your thoughts, experiences, and ideas on how we can address these challenges. Let's start a conversation and explore how we can work together to develop solutions that truly meet users' needs in the AI era. Your insights could be the key to shaping the future of design in this rapidly evolving field.