Published in Inception Health

Digital Health & Equity

As people who dedicate our lives to the health of others, we hold that the death of George Floyd, and the deaths of our neighbors, friends, partners, and family members caused by racism, whether systemic and institutional or rooted in implicit bias, are abhorrent and must lead to change. Of course, we know the problem is far from isolated to the criminal justice system. These problems are also deeply rooted in health care in America. We must acknowledge that disparities and gaps in health outcomes are tied to the unequal treatment of people of color. We must do more.

Our country is fighting two different but interrelated pandemics. One is a virus that hijacks our immune system (our body's sense of right and wrong, of self and not-self) in a way that appears almost random (it is undoubtedly not random; we simply do not yet know the details). The other is a pandemic of social injustice, racism, that dates to our country's founding and still permeates our society; it, too, eerily fails to distinguish right from wrong, self (a fellow person) from not-self, and it is not random. These two problems are deeply interrelated: they are synergistic, reinforcing a disproportionate effect on communities of color.

In digital health, we must address these issues. As we pivot to virtual care to keep people safe and reduce exposure during the coronavirus pandemic, we must NOT leave behind those we serve because of a new type of unequal access: a digital inequity, reinforced by social bias and racism expressed in the digital realm as divides in internet connectivity and bandwidth, device ownership, and trust.

Specifically in digital health, we must call for and implement changes proactively, not leave digital bias as an afterthought or misjudge it as an external effect outside the scope of digital health research and implementation. We see several areas that need immediate and ongoing attention: the gap in digital engagement by race, and a research-and-development focus on disentangling and repairing the biases already embedded in digital health, such as those produced through machine learning and artificial intelligence. Let's break down some of these issues into what we know and what we can do for starters. Further work will require much more conversation and collaboration with the people we serve.

Digital Engagement

An action item is to ensure that we are equitable in our approach to inviting patients to use the portal, to read their notes, to engage with us digitally, and to ask questions that arise after visits. But we must also go further. Universal protocols to invite everyone, every time, are a start. We must also go deeper to help and encourage, and to earn trust. Ultimately, not everyone will want to engage, regardless of race, and that is okay. But we should remember that research from the OpenNotes collaborative shows that minority or otherwise disadvantaged patients reported **more** benefit from OpenNotes than others. OpenNotes also showed us that even if patients don't always read their records, it is important for them to have access to them and to know that we are meeting the creed "Nothing about me, without me."

Digital Health Research & Development

JAMA, Rodriguez et al

I want to call attention to colleague (and former fellow) Jorge Rodriguez's article in JAMA, which summarized key areas of attention, including technology literacy, language inclusivity, technology access, and inclusive design.

We already have some examples of digital health programs we are part of that are trying to reshape access. Our digital maternity partner, Babyscripts, has engaged closely with expectant mothers facing socioeconomic disadvantages that put them at higher risk for poor perinatal outcomes and experiences. Our work with NowPow is all about creating better connections between those who would benefit from social services and community-based organizations. We have also partnered with Andy Slavitt's Town Hall Ventures and the AVIA Health Innovation Network to evaluate digital solutions across major public health areas, including substance abuse, perinatal care, and non-emergency ED usage.

With our telehealth program, we very quickly learned that we needed to work with our technology partners to shift the focus from high-resolution imagery to connection stability, so that we could support patients across a wider variety of internet connections. This is illustrative of a broader point. Rather than focusing on what we can do with the latest technology and the state of the art, how can we use digital tools to make care services more broadly accessible, and to fit into the care and home plans of those who stand to benefit the most?

Biases in Analytics & Data


A critical issue is that data contain biases, and therefore algorithms and advanced analytics incorporate and propagate those biases. Ziad Obermeyer, an emergency physician (and classmate of mine), showed that a common prediction algorithm for identifying patients with complex needs underestimated the benefit of services to minority patients. Obermeyer and colleagues found that because minority patients were less likely to receive care, they accrued lower healthcare costs, which led the model to underestimate their needs. When the outcome was redefined to include health status (rather than cost alone) and the model was rerun, the bias was largely mitigated. Their paper contains an important line that bears repeating: “This mechanism of bias is particularly pernicious because it can arise from reasonable choices: Using traditional metrics of overall prediction quality, cost seemed to be an effective proxy for health yet still produced large biases.” The bottom line is that most machine learning and artificial intelligence products can reinforce the intrinsic biases embedded in the way we deliver care to our communities. Unless we explicitly evaluate AI and ML tools for potential bias, rather than evaluating only the quality of their results, we will amplify bias and widen the divide at scale. This leaves us with the challenging responsibility to be careful about what we build, deploy, and base decisions on.
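The mechanism is easy to see in miniature. The following sketch (with entirely hypothetical numbers, not Obermeyer's data or model) simulates two groups with identical distributions of true health need, where one group faces access barriers and therefore accrues only half the cost for the same need. Ranking patients by observed cost, as a cost-trained model effectively does, then under-selects that group for extra services:

```python
# Hypothetical illustration of label-choice bias: cost reflects access
# to care, not only health need, so ranking by cost encodes the access
# gap. Numbers and the "access" factor are invented for illustration.
import random

random.seed(0)

def make_patient(group):
    need = random.uniform(0, 10)           # true health need
    access = 1.0 if group == "A" else 0.5  # group B under-uses care
    cost = need * access                   # observed spending tracks access, not just need
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient("A") for _ in range(500)] + \
           [make_patient("B") for _ in range(500)]

def share_of_b_selected(label):
    # "Algorithm": enroll the top 20% of patients ranked by the label.
    top = sorted(patients, key=lambda p: p[label], reverse=True)[:200]
    return sum(p["group"] == "B" for p in top) / len(top)

# Ranking by cost largely excludes group B despite equal true need;
# ranking by need itself selects both groups roughly equally.
print(f"share of group B in top 20% by cost: {share_of_b_selected('cost'):.2f}")
print(f"share of group B in top 20% by need: {share_of_b_selected('need'):.2f}")
```

Switching the label from cost to need is the same repair Obermeyer and colleagues report: the model architecture is unchanged; only the outcome definition moves.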

The New England Journal of Medicine recently published an article by Vyas et al. that surveyed common clinical algorithms and identified where race is incorporated to "'correct' outputs on the basis of a patient's race or ethnicity." The review highlighted areas where this adjustment can influence clinical decision making in ways that may put minority patients at greater risk. Across fields including cardiology, nephrology, obstetrics, and oncology, the use of race (again, a social construct) influences scoring.
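One widely discussed example in that review is the (since-revised) 4-variable MDRD eGFR equation, which multiplied its output by 1.212 for patients recorded as Black, so identical labs yield a higher estimated kidney function and can delay referral. A minimal sketch of the equation, for illustration only and not for clinical use:

```python
# Standard 4-variable MDRD eGFR equation, including its historical
# race coefficient; illustrative only, not clinical software.
def egfr_mdrd(scr_mg_dl, age, female, black):
    egfr = 175.0 * scr_mg_dl**-1.154 * age**-0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212  # race term: same labs, higher estimated function
    return egfr

# Identical labs, different recorded race: the race term alone can move
# a patient across the eGFR < 30 threshold often used for nephrology referral.
same_labs = dict(scr_mg_dl=2.3, age=60, female=False)
print(round(egfr_mdrd(**same_labs, black=False), 1))
print(round(egfr_mdrd(**same_labs, black=True), 1))
```

The point is not that any single coefficient is malicious; it is that a race flag, a social category, silently shifts a clinical score, and that shift deserves scrutiny.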

Examining the role race plays in algorithms, and checking well-intentioned algorithms for bias, should be pursued by anyone implementing them. Dr. Obermeyer and his team have offered to help examine these algorithms as well.

Moving Forward