Taking a Human-Centered Approach to AI
While AI-based technology has vast potential to solve some of the world’s most complex problems, the inherent bias in algorithms can have severe and far-reaching implications. Does the solution lie with explainable AI, an emerging field of data science in which human control is designed into AI systems?
--
By Kartik Poria, Lead Designer at BCG Digital Ventures
Artificial intelligence (AI) has the power to exponentially multiply human creativity, judgment, and intelligence.
But in AI, as in life, power without control is dangerous.
AI extracts insights and patterns from large data sets, analyzing that information to predict outcomes and forecast trends. While this makes AI very valuable, it can also make its algorithms susceptible to harmful errors and biases. Unfortunately, this is something many organizations know all too well.
At one major medical center in the U.S., an algorithm screened patients for access to an intensive care management program. However, the model routinely prioritized healthier white patients over sicker black patients. Surprisingly, the algorithm was performing its task exactly as designed. So, what was causing the problem? When researchers investigated, they found that the algorithm assumed higher health costs meant sicker patients, without considering how socioeconomic factors disproportionately limit black patients’ access to preventative care and treatment.
Many people will also be aware of this oft-quoted example from Amazon. The business had to scrap its secret AI recruiting tool after discovering it disproportionately rejected female candidates for technical positions. The tool’s algorithm vetted applicants by observing patterns in resumes submitted to the company over the prior 10-year period. In the male-dominated tech sector, most resumes came from men and, over time, the algorithm essentially learned to prefer male candidates for technical roles.
Solving AI’s black box problem
These two examples demonstrate what’s known as the ‘black box’ problem in AI — when algorithmic processes make decisions that can’t be explained or understood by humans. And as you can see, this can have severe and far-reaching implications at individual, organizational, and societal levels.
Our responsibility is to build mutually beneficial relationships between users and AI-driven products. We can achieve this by introducing deliberate friction into the user experience: ‘moments of education’ that help users understand what the algorithm is doing, and ‘moments of control’ that let them influence how it works.
Lifting the curtain on AI-driven decisions, by allowing users and operators to scrutinize not only the data but also the decisions being made and the weighting applied in the decision-making process, enables us to move away from the inherent bias in our algorithms. Only with full transparency can we understand why an AI model came to a certain conclusion and be accountable for its output.
So, what can we do to build human oversight and accountability into AI and machine learning models?
One approach that enables a deeper understanding of the process is to present a clear explanation of how an algorithm works and why it makes the decisions it does. Doing so opens the model up to faster, more transparent review by a broader, more diverse set of reviewers. This is critical given that many decisions today require contextual understanding beyond the data and probabilities alone.
Creating explainability in AI
Explainable AI (XAI) is an emerging field of data science that can deliver the transparency needed to avoid the black box problem.
There are two key concepts in XAI: interpretability and explainability. Interpretability is the extent to which you can observe cause and effect within the system (i.e. see and interpret what the machine is doing and why). Explainability is the art of taking those observations and explaining them in human terms (i.e. communicating and explaining what the machine is doing and why).
Creating XAI always starts with interpretability. Data scientists can use several techniques to build models with observation in mind, such as rule extraction, model confidence scores, and model feature ranking.
This first step starts to move us away from closed-off, black box models that can’t be understood — toward white box models that we can open and interpret.
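To make this concrete, here is a minimal sketch of two of the interpretability techniques named above: model feature ranking and model confidence scores. The dataset and model choice are illustrative assumptions, not details from the cases discussed in this article.

```python
# A minimal sketch of two interpretability techniques: feature ranking
# and confidence scores. The dataset and model are illustrative
# assumptions, not drawn from the examples in this article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fit a tree-based model on a small public dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Feature ranking: which inputs most influence the model's decisions?
ranked = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
print("Top 5 features driving predictions:")
for name, importance in ranked[:5]:
    print(f"  {name}: {importance:.3f}")

# Confidence scores: how sure is the model about each prediction?
for i, probs in enumerate(model.predict_proba(X_test[:3])):
    print(f"Sample {i}: predicted class {probs.argmax()} "
          f"with confidence {probs.max():.2f}")
```

Outputs like these give reviewers something concrete to interrogate: a ranking that reveals which signals the model leans on, and confidence scores that flag the predictions the model itself is unsure about.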
The next step in making AI explainable involves designers working with the data scientists to translate the data into user-centric observations and features. Deliverables from this step may include process infographics, descriptions, output examples, test sandboxes, and more. These explanations can exist at a global level, breaking down how the whole system works — or at a local level, breaking down a single recommendation or prediction output.
This second step moves us from the interpretable white box to a see-through glass box, where the model can be seen and understood by everyday users.
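To illustrate what a local explanation might look like once it has been translated into human terms, here is a minimal sketch that decomposes a single prediction from a linear model into per-feature contributions and phrases them in plain language. The dataset, model, and wording are illustrative assumptions.

```python
# A minimal sketch of a "local" explanation: turning one prediction
# into a plain-language summary an everyday user could read. The
# dataset, model, and phrasing are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

data = load_diabetes()
model = LinearRegression().fit(data.data, data.target)

# For a linear model, each feature's local contribution relative to
# an "average" input is coef * (x - mean): a simple, exact breakdown
# of why this prediction differs from the average one.
x = data.data[0]
contributions = model.coef_ * (x - data.data.mean(axis=0))

prediction = model.predict(x.reshape(1, -1))[0]
print(f"Predicted disease progression score: {prediction:.1f}")
print("Main reasons for this prediction:")
for i in np.argsort(np.abs(contributions))[::-1][:3]:
    direction = "raised" if contributions[i] > 0 else "lowered"
    print(f"  '{data.feature_names[i]}' {direction} the score by "
          f"{abs(contributions[i]):.1f} versus an average patient")
```

For more complex models the same idea applies but the decomposition is harder; established local-explanation methods such as LIME and SHAP approximate per-feature contributions in a model-agnostic way.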
Explainable AI can stop users from taking a suggestion at face value, instead giving them insight into the machine’s decision-making process and bringing the responsibility back to where it belongs: the human.
Designers building for XAI must continue interpreting users’ needs and attitudes as a key part of the design process. But they also need to consider the algorithm as another factor within the design ecosystem. Designers will have to interpret the data and expose the model’s “thinking,” designing for both humans and machines.
Designing the next frontier in human-centric AI
Explainable AI can help create a more inclusive, equitable, and ethical future. If the algorithm that determined patients’ access to the intensive care program in the U.S. had been built on an explainable model, for example, hospital staff might have spotted and corrected the bias sooner, potentially saving lives.
Meanwhile, the European Commission has proposed AI regulation aimed at turning Europe into a global hub for trustworthy AI.
As we watch to see how the new regulation evolves, there is no doubt that AI that everyday users can understand, influence, and regulate puts humans back in control. After all, AI is not (nor was it ever meant to be) a replacement for human intelligence; rather, it should be an extension of it. And at a time when we are still figuring out the role and impact of technology in our society, designing human control into AI systems is essential to ensuring that AI reaches its full potential in serving humanity.