Conceiving Better Outcomes

Using AI to Address Racial Bias in US Maternal Mortality Rates

In Winter 2019, in Stanford’s Designing AI to Cultivate Human Well-Being class, interdisciplinary teams of students worked together to explore solutions to several important societal problems through the application of technology and AI. In this post and the few that follow, the class teaching team will highlight the top three teams: how they defined their problem statements and what their projects delivered.

A team of four students (two from the GSB, two from engineering) examined how AI could address racial bias in US maternal mortality rates. More American women die of pregnancy-related complications than in any other developed country, and the U.S. is the only developed country where that rate has been rising. According to the CDC, black mothers in the U.S. die at three to four times the rate of white mothers, one of the widest of all racial disparities in women’s health. Put another way, a black woman is 22 percent more likely than a white woman to die from heart disease and 71 percent more likely to die from cervical cancer, but 243 percent more likely to die from pregnancy- or childbirth-related causes. How can we use AI to reduce this racial gap and decrease the mortality rate?

In this project, the team tackled this challenge of high maternal mortality using AI and ML tools.

Here is how the team approached the problem in their own words:

When we first set out to find an AI solution to reduce the high maternal mortality rates in black mothers, we knew we needed to use a human-centered approach to be most impactful.

We completed a number of steps to dive deep into the issue and synthesize our information into actionable recommendations.

5 Stakeholder Interviews:
We gained 5 unique perspectives: an expectant black woman, a midwife serving minority populations, a Chief Academic Officer/MD in a hospital system, an OB-GYN, and a lawyer.

What We Learned:
Our interview data and secondary research revealed salient themes:
(1) This is a major issue that people in healthcare have known about for some time, but few efforts have been made to address it.
(2) Challenging systemic tensions in healthcare propagate the problem (for example, over-prescription of opioids versus systematic dismissal of pain based on race).
(3) All individuals are subject to biases, and medical professionals are not immune to them. The medical profession is extremely demanding, and many mental shortcuts are made in the name of efficiency.

Journey Map of User Experience: Based on these insights, we mapped the full experience of an expectant black woman to identify key pain points and opportunities where AI could offer a solution.

Our methodology allowed us to identify three key opportunities where AI might improve the experience and outcomes for black mothers: (1) diagnostic tools, (2) empathy/pain-sensitivity training, and (3) better context-driven feedback and onboarding.

During the last class session, the team delivered their presentation with their key insights and takeaways. Let us take a look. You can also read the team’s final paper here (pdf).



How do we design AI to promote human flourishing? In this Stanford class, our goal is to bridge the gap between technology and societal objectives. On Medium, we plan to democratize the takeaways from this class through student blogs, guest speakers, and session recaps.

Lamia Youseff, Ph.D.

CS Research Scientist in ML at Stanford and GSB Sloan Fellow; former MIT researcher; ex-FB (FBLearner); MSFT Azure & Google Cloud technology and product manager.