Causal Artificial Intelligence: A New Way to Turn Data into Effective Health Interventions
The following is a simplified version of our original Frontiers in Artificial Intelligence article: Causal Datasheet for Datasets: An Evaluation Guide for Real-World Data Analysis and Data Collection Design Using Bayesian Networks.
In rural Uttar Pradesh, India, a state with more than 230 million people, 10 times more mothers and babies die during or after birth than in the US. Evidence shows that delivery at hospitals is vital to reducing maternal deaths. But, despite a government incentive scheme and community health workers deployed to spread the message, 20% of pregnant women still deliver their babies at home. We wondered, how could we change their decisions?
This is the type of complex problem that many lower and middle income countries are trying to solve: How can they identify the best interventions to drive behavior change, leading to better outcomes?
Data is the first step to solving these problems, and policy decisions have become increasingly reliant on data-first approaches to provide the necessary insights. This data-first mindset has spawned numerous data collection programs covering specific subjects at sub-national and national levels, which enable local and national governments and international agencies to monitor trends across health program areas and set priorities for health policy, interventions, and program funding.
Unfortunately, the way the global health field typically uses this explosion of data has not realized its revolutionary potential to inform health intervention design in a substantial, precise way.
Designing interventions for the real world without expensive and cumbersome trials
Since global health and development problems are complex, an in-depth understanding of the complexities on the ground — what informs people’s choices, who they listen to, what factors dissuade them, and more — is needed to ensure the right person receives the right intervention. This is known as a precision public health approach.
To find the right intervention — or the right “button to push” — to change a health outcome, we must identify the causal relationship between the intervention options and the outcome. Traditionally, randomized controlled trials (RCTs) are the gold standard for determining cause and effect and informing intervention design for a population. RCTs test hypotheses by comparing outcomes in a group that receives an intervention against outcomes in a group that does not (the control). However, due to cost, lack of infrastructure, ethical considerations, and other practical constraints, RCTs are not always possible. Importantly, cause and effect cannot be established this way from data that are observational only.
Without an RCT, determining causes and effects for a specific set of behaviors remains a challenge for global health practitioners. Moreover, RCTs are by design conducted with the intent to test a narrow set of hypotheses, not to explore unknown causal drivers, which is a missed opportunity to target public health solutions more precisely.
Even when an RCT is possible, it can only inform interventions at the population level, not at the individual level, since it averages across many individuals. This is particularly important because the “holy grail” of intervention design is a personalized one. Since circumstances vary greatly from person to person, an intervention should never be “one size fits all.”
So, with RCTs limited to population-level answers and observational data unable to establish cause and effect, we are not able to understand the precise reasons that drive individual behaviors. This is a far cry from precision public health.
Enter Causal AI: A more efficient way to turn data into interventions
Causal AI gets around this challenge by essentially recreating an RCT without running a trial, revealing the complex web of cause and effect among many variables. To do this, we first perform a process called “causal discovery” to extract all the causal links in the data. Then, we can isolate the specific causal effect of an intervention on an individual. Moreover, causal discovery allows us to explore cause-and-effect relationships that might have been overlooked by traditional approaches.
This is the basic premise behind causal AI — an ideal tool for exploring potential health interventions in a wide range of scenarios.
At Surgo Ventures, we are particularly interested in a type of causal AI called causal Bayesian Network (see title image). A causal Bayesian Network is a representation of all the complex conditional dependencies among a set of variables in a data set. Its visual nature makes it easy to illustrate and interpret complex relationships between cause and outcome.
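To make the idea concrete, here is a minimal sketch of a causal Bayesian Network in plain Python. The variables and probabilities are entirely made up for illustration, loosely inspired by the delivery-location example later in this piece; they are not from the study. The network encodes each variable's probability conditional on its parents in the graph, and queries are answered by enumerating the joint distribution:

```python
from itertools import product

# Hypothetical network (illustrative only): transport -> delivery <- plan
p_transport = {1: 0.6, 0: 0.4}   # P(access to transport)
p_plan = {1: 0.5, 0: 0.5}        # P(has a predetermined delivery plan)
# P(facility delivery | transport, plan)
p_delivery = {(1, 1): 0.9, (1, 0): 0.7, (0, 1): 0.5, (0, 0): 0.2}

def joint(t, p, d):
    """P(transport=t, plan=p, delivery=d) via the chain rule on the DAG."""
    pd = p_delivery[(t, p)]
    return p_transport[t] * p_plan[p] * (pd if d == 1 else 1 - pd)

# Marginal probability of a facility delivery, by enumeration
p_d1 = sum(joint(t, p, 1) for t, p in product([0, 1], repeat=2))

# Conditional query: P(delivery=1 | transport=1)
p_d1_given_t1 = sum(joint(1, p, 1) for p in [0, 1]) / p_transport[1]

print(round(p_d1, 3), round(p_d1_given_t1, 3))  # 0.62 and 0.8
```

Because transport has no parents in this toy graph, conditioning on it coincides with intervening on it; with confounders present, the graph is what tells us which adjustments are needed to isolate a causal effect.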
So why hasn’t the causal AI model caught on yet?
Despite many causal AI offerings, we have not seen wide adoption for real-world problems. The primary reason: unlike predictive AI approaches, where prediction errors (such as misclassifying an orange as an apple) tell us how good the model is, we usually have no efficient way to validate the causal relationships found by causal AI.
Take causal Bayesian Networks as an example. We usually have no way to tell how confident we should be in the result — rather, the resultant model of causality simply represents the “best” the algorithm could learn from the data available. But we don’t know what “best” means: the algorithm might have recovered 90% of the causal relationships, or only 10%.
This makes causal AI results somewhat difficult to defend when they contradict previous beliefs or doctrines, even if those beliefs or doctrines are not themselves backed by causal evidence. Thus, Bayesian Network results are often presented as a proof of concept, showing that the method can recover insights already known, rather than as an actionable model for discovery, change, or intervention.
A possible solution: the “Causal Datasheet”
However, all is not lost: computer scientists have previously developed methods to estimate the accuracy of Bayesian Network models using ‘synthetic’ datasets. A synthetic dataset is one where all the cause-and-effect relationships between variables are already known. Because the relationships are known, computer scientists can estimate how good an algorithm is at recovering the same network of cause and effect. However, because synthetic datasets can differ from real-world datasets in characteristics such as the number of variables or the sample size, we still don’t know how well an algorithm will perform with real-world data.
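When the ground-truth graph is known, scoring a discovery algorithm reduces to comparing edge sets. Below is a small sketch of that comparison using standard metrics (precision, recall, F1, and a simplified structural Hamming distance); the variable names and both edge sets are invented for illustration and do not come from the paper:

```python
# Ground-truth edges of a hypothetical synthetic dataset
true_edges = {("transport", "delivery"), ("plan", "delivery"),
              ("belief", "delivery"), ("distance", "transport")}
# Edges a discovery algorithm might return: two missed, one spurious
learned_edges = {("transport", "delivery"), ("plan", "delivery"),
                 ("distance", "delivery")}

tp = len(true_edges & learned_edges)   # correctly recovered edges
fp = len(learned_edges - true_edges)   # spurious edges
fn = len(true_edges - learned_edges)   # missed edges

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
shd = fp + fn  # structural Hamming distance (ignoring edge orientation)

print(tp, fp, fn, round(f1, 2), shd)  # 2 1 2 0.57 3
```

Averaging such scores over many synthetic datasets with matched characteristics is the essence of the estimate a Causal Datasheet reports.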
To solve this quandary and to empower practitioners to estimate uncertainty levels around the causal structures, we created tools to allow practitioners to generate synthetic data in a “Causal Datasheet” with a spectrum of properties that mimic existing or projected real-world data. We call it “Datasheet” because it is analogous to datasheets that accompany any product, like cars and electronic components, to inform customers of the product’s expected performance.
There are two primary goals and benefits of creating a Causal Datasheet.
- You can provide some expectation of performance given the basic, observable characteristics of a dataset.
- You can provide guidance on how many samples will be required to meet desired performance levels.
Causal Datasheets can be used in different stages of research and analysis. For planning purposes, we can use a Causal Datasheet to figure out the sample size or number of variables we need to get an answer we can trust. For existing datasets, we can use a Causal Datasheet to get a sense of how confident we can be in the cause and effect relationships we generate using a causal AI model.
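The planning use case can be sketched as a sample-size sweep: simulate data from a known toy structure at several candidate sizes, attempt recovery at each, and see where performance stabilizes. The structure (X drives Y; Z is independent), the detection heuristic, and the threshold below are all assumptions for illustration, far simpler than the structure-learning algorithms the paper evaluates:

```python
import random

random.seed(0)

def simulate(n):
    """Draw n samples from a known toy structure: X -> Y, Z independent."""
    data = []
    for _ in range(n):
        x = random.random() < 0.5
        z = random.random() < 0.5
        y = random.random() < (0.9 if x else 0.1)  # strong effect of X on Y
        data.append((x, y, z))
    return data

def detected_parents(data, threshold=0.3):
    """Flag V as a driver of Y if P(Y|V=1) and P(Y|V=0) differ by > threshold."""
    found = set()
    for idx, name in [(0, "X"), (2, "Z")]:
        grp1 = [row[1] for row in data if row[idx]]
        grp0 = [row[1] for row in data if not row[idx]]
        if grp1 and grp0:
            diff = abs(sum(grp1) / len(grp1) - sum(grp0) / len(grp0))
            if diff > threshold:
                found.add((name, "Y"))
    return found

# Sweep candidate sample sizes, as a datasheet would, and report recovery
for n in [50, 500, 5000]:
    print(n, sorted(detected_parents(simulate(n))))
```

At small n, sampling noise can produce spurious or missed edges; the smallest n at which recovery is consistently correct across repeated simulations is the kind of sample-size guidance a datasheet provides.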
How Causal Datasheets Helped Support Reproductive Health in India
The following real-life examples show how machine learning scientists and practitioners can use Causal Datasheets to design more effective health interventions:
Survey Design of a Study of Sexual and Reproductive Health
In 2019, we had the opportunity to use the Causal Datasheet to determine the sample size of a large-scale survey of sexual and reproductive health we conducted in Madhya Pradesh, India. Determining the sample size of this study was important because it had implications for the overall budget and timeline of our project. Typically, we want a survey to capture as many variables as possible (provided the survey is not too long) with as few samples as possible.
Our survey sought to quantify a wide range of causal drivers around family planning decisions. These variables included demographics, knowledge and beliefs, risk perceptions, past experiences, and structural determinants, such as accessibility.
We estimated that we would have between 30–60 variables that would be critical causal drivers of sexual and reproductive health decisions. From previous work, we estimated that causal variables would have, on average, three levels. Our project budget allowed a range of 5,000 to 15,000 samples, but we did not know which sample size would have sufficient performance for a causal Bayesian network model.
Comparing model performance metrics, we determined that a sample of around 15,000 respondents and 50 variables would strike the right balance and give us confidence in our Bayesian network models.
Thus, using the Causal Datasheet approach, we set up India’s most comprehensive survey on sexual and reproductive health in a decade. In collaboration with the Clinton Health Access Initiative and the government of Madhya Pradesh, we surveyed more than 15,300 married women, their husbands, and their community health workers to holistically explore all of the factors and people influencing family planning decisions. To learn more about the results of this work, see our case study: Getting a 360 Degree View on Family Planning Choices.
Analysis of Data for a Study on Family Planning Usage
Now, coming back to the question at the beginning of this essay: in Uttar Pradesh, how could we find the right intervention so expecting mothers would choose to have their babies in hospitals rather than delivering babies at home?
In 2016, we surveyed over 5,000 women on various reproductive, maternal, neonatal, and child health (RMNCH) behaviors and outcomes. From this, we initially identified 41 variables we thought represented direct causal drivers of RMNCH outcomes and behaviors, such as birth delivery location and early breastfeeding initiation. Was the problem simply that healthcare facilities were too far away?
The Uttar Pradesh government was already considering building facilities closer to villages, and our initial analysis of the observational data (using a predictive model based on correlations) supported this. It suggested distance to facilities, among other factors, was closely associated with the final delivery location.
To our surprise, using causal AI on the same data revealed that the distance to facilities did not drive mothers’ decisions to use a hospital. Instead, the direct causal drivers were primarily access to transportation, having a predetermined delivery plan, and the expecting mother being convinced that a healthcare facility would provide a safer delivery of her baby than her home.
How could we be confident in these surprising results? When we generated a Causal Datasheet on synthetic datasets with similar characteristics, we saw that the algorithm performed well: it was able to consistently capture most of the correct cause-and-effect relationships between variables.
These results had important policy and financial implications:
- The government would be better off investing in strengthening the ambulance system to existing facilities, rather than building new facilities closer to villages;
- Community health workers typically advertised only the financial rewards of hospital deliveries. Now it was clear that they should also focus on hospital safety and help women develop delivery plans during the antenatal period.
Had the Uttar Pradesh government relied on predictive models based on correlations, it would have missed an opportunity to save precious resources and implement a potentially more effective intervention. To learn more about this work, please see our case study: Getting More Women to Deliver Their Babies in Hospitals.
These two examples illustrate the value a Causal Datasheet can provide for both machine learning scientists and practitioners. Causal Datasheets can aid in the planning of studies whose analytical goal is causal discovery and inference, and in the analysis of studies after data have already been collected. This approach is particularly important when data characteristics are sub-optimal for data-driven learning, which is, unfortunately, often the case in low- and middle-income countries.
We hope that Causal AI continues to gain attention, as it has immense potential for designing effective interventions that improve health outcomes.
For detailed information on the materials and methods of Surgo Ventures’ Causal Datasheet, dataset characteristics, structure learning algorithms, and related metrics, please see our paper published in Frontiers in Artificial Intelligence: Causal Datasheet for Datasets: An Evaluation Guide for Real-World Data Analysis and Data Collection Design Using Bayesian Networks.