Fighting the Opioid Epidemic with Interpretable Causal Estimation of Individual Treatment Effect

MIT-IBM Watson AI Lab
Oct 9, 2018


Opioid misuse has been a growing problem throughout the U.S. since the 1990s and continues unabated to the present. In 2016, prescription opioid drugs contributed to 40 percent of all U.S. opioid overdose deaths, and in 2018, more than 115 people died each day from overdoses involving prescription opioids. Our team of MIT-IBM Watson AI Lab researchers is bringing the power of AI and machine learning to tackle this challenge. Our goal is to develop precision medicine models that can infer causal effects of opioid usage on individuals from observational data and do so in an interpretable way. The key words in this description are causal, to increase the confidence that ensuing interventions will have the desired effect; individual, to account for variations in effects due to patient characteristics; and interpretable, to communicate the results to medical experts, AI experts, and non-experts alike. Using these models, we hope to discover patterns that could guide opioid prescription practices and mitigate the opioid epidemic.

We are interested specifically in estimating the causal effects of opioid type (synthetic versus other classes), duration of supply, and dosage on the risk of adverse outcomes such as opioid addiction and long-term use. These variations in an opioid prescription (type, duration, dosage) are collectively referred to as “treatments”. The effects of these treatments may vary from person to person based on individual characteristics, for example, medical history and conditions, the procedure that may have led to the opioid prescription for pain, or demographic factors. Estimation of such individual treatment effects (ITEs, also referred to as conditional average treatment effects) is a major focus of our project.
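
In the potential-outcomes notation commonly used for this problem (standard notation, not reproduced from the manuscript), the ITE for a patient with characteristics x is the expected difference between the outcome the patient would experience with the treatment and without it:

```latex
% Individual (conditional average) treatment effect in potential-outcomes notation.
% Y(1) and Y(0) denote the outcomes a patient would experience with and without the
% treatment; X collects individual characteristics such as medical history,
% the associated procedure, and demographics.
\tau(x) = \mathbb{E}\left[\, Y(1) - Y(0) \mid X = x \,\right]
```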

One step toward estimating ITEs is to identify subgroups of patients whose treatment effect is similarly enhanced or diminished relative to the overall population. Figure 1 shows a stylized example in which the treatment effect is the highest possible in subgroup Z1 (all treated individuals become positive), while it is moderate in subgroup Z2 and still lower in Z3. In a recently submitted manuscript [1], we propose a method for discovering such subgroups that combines a mixture model for describing subgroups with an outcome model that can adjust for nonlinear effects of confounders. The mixture model, together with sparsity (few nonzero values) in its parameters, allows the discovered subgroups to be interpreted in terms of their important features. For example, chronic pain conditions may be more prevalent in a subgroup with an enhanced effect than in the general population. Further work on interpreting results is ongoing.

Figure 1: Subgroups with enhanced or diminished treatment effect
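
To make these two ingredients concrete, here is a deliberately simplified sketch in Python: it describes subgroups with an off-the-shelf Gaussian mixture model and then compares outcome rates between treated and control patients within each discovered subgroup. The data are synthetic, the treatment is randomized, and the per-subgroup comparison is naive, so this illustrates the idea behind Figure 1 rather than the model proposed in [1].

```python
# Illustration only: cluster patients with a mixture model, then compare outcomes by
# treatment within each cluster. Synthetic data, randomized treatment; not the
# mixture-plus-outcome model of [1].
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n = 6000
group = rng.integers(0, 3, size=n)            # latent subgroups (think Z1, Z2, Z3)
X = rng.normal(size=(n, 5))                   # patient characteristics
X[:, 0] += 3.0 * group                        # subgroups differ in the first feature
T = rng.binomial(1, 0.5, size=n)              # treatment indicator (randomized here)
Y = rng.binomial(1, 0.2 + 0.15 * T * group)   # treatment effect grows with subgroup index

# Step 1: describe subgroups with a mixture model over patient features
mixture = GaussianMixture(n_components=3, random_state=0).fit(X)
z = mixture.predict(X)

# Step 2: naive per-subgroup effect = difference in outcome rates, treated vs. control
for k in range(3):
    in_k = z == k
    effect = Y[in_k & (T == 1)].mean() - Y[in_k & (T == 0)].mean()
    print(f"subgroup {k}: estimated effect {effect:+.3f} (n = {in_k.sum()})")
```

In real claims data the treatment is not randomized, which is why the actual model also needs an outcome component that adjusts for confounders, as discussed below.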

Our main data source in this project is healthcare claims data. Claims are generated every time a patient has a healthcare encounter, creating, over time, a detailed record of the patient’s prescriptions, procedures, diagnoses, and accompanying enrollment and demographic data. Claims data present challenges for causal inference because they are observational, meaning that they reflect natural healthcare experiences, not interventions as in a randomized controlled experiment. As a consequence, the population of patients who received one treatment, say synthetic opioids, is likely to differ from one that received a different treatment in ways that also affect the outcome (e.g., patients who received synthetic opioids may have undergone more painful procedures). Thus, estimates of the effect of interest may be confounded without further measures. In previous work [2], our team members have developed machine learning methods to reduce confounding, and this will continue to be a focus in the current project.
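
As a concrete (and intentionally simple) illustration of confounding adjustment, the sketch below uses inverse propensity weighting, a textbook technique, on synthetic data: treated patients are "sicker", so the naive comparison overstates the effect, while reweighting by estimated propensity scores brings the estimate back toward the truth. This is not the representation-learning approach of [2]; the variable names and numbers are invented for illustration.

```python
# Inverse propensity weighting (IPW) on synthetic data: a textbook adjustment for a
# measured confounder, not the method of [2]. All quantities are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10000
pain = rng.normal(size=n)                               # confounder: e.g., painfulness of a procedure
T = rng.binomial(1, 1 / (1 + np.exp(-pain)))            # patients in more pain are more likely treated
Y = rng.binomial(1, 0.1 + 0.1 * T + 0.2 * (pain > 1))   # outcome depends on treatment AND the confounder

# Naive comparison is confounded: treated patients have more pain to begin with
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Estimate propensity scores and reweight each group to resemble the full population
ps = LogisticRegression().fit(pain.reshape(-1, 1), T).predict_proba(pain.reshape(-1, 1))[:, 1]
weights = T / ps + (1 - T) / (1 - ps)
ipw = np.average(Y, weights=T * weights) - np.average(Y, weights=(1 - T) * weights)

print(f"naive difference: {naive:.3f}   IPW-adjusted estimate: {ipw:.3f}   true effect: 0.100")
```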

A related problem in causal inference from observational data is to understand overlap and support. Overlap refers to the extent to which groups of similar patients include members who receive all possible treatments. Support refers to the regions of patient characteristics that are covered by the observed data at all, regardless of treatment. In Figure 2, regions B and C have no overlap while region D has no support. In such areas of poor overlap and support, estimates of causal effects are highly uncertain because of the lack of comparisons to similar individuals who received the under-represented treatments. In ongoing work, we have characterized this uncertainty in a mathematically precise way, showing that the error in estimating ITEs is bounded by a function of the degree of non-overlap between treated and control populations. This refines earlier work [2] by emphasizing the importance of non-overlap over other measures of dissimilarity between the distributions.

Figure 2: Illustration of overlap and support
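
In practice, a common (if coarse) way to flag poor overlap is to estimate propensity scores and mark patients whose estimated probability of receiving a given treatment is close to 0 or 1. The sketch below does this on synthetic data; the 0.05 threshold is an arbitrary illustrative choice, and this heuristic is separate from the formal non-overlap bound mentioned above.

```python
# Coarse overlap diagnostic via estimated propensity scores; synthetic data and an
# arbitrary 0.05 threshold, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 2))
# Treatment assignment depends strongly on the first feature, so overlap is poor in
# its tails (loosely analogous to regions B and C in Figure 2)
T = rng.binomial(1, 1 / (1 + np.exp(-3 * X[:, 0])))

ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]   # estimated propensity scores

eps = 0.05
poor_overlap = (ps < eps) | (ps > 1 - eps)                   # nearly deterministic assignment
print(f"{poor_overlap.mean():.1%} of patients fall in poor-overlap regions; "
      "ITE estimates for them rely on extrapolation and carry extra uncertainty.")
```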

To deploy policies based on causal estimates from observational data, it is crucial that practitioners be aware of the uncertainty due to low overlap and support. We aim to make this possible by developing interpretable descriptions of overlap and support in high-dimensional data. Our preliminary exploration has involved interpretable models such as trees (e.g., modified density estimation trees [3]) and rules [4]. Such models might indicate, for example, that overlap is lacking among patients with spinal conditions in a particular age group, information that practitioners could use to collect additional data. Moreover, measuring overlap in all variables may exclude larger groups than necessary: overlap is only required in confounding variables, those that affect both treatment and outcome as discussed above. Mathematical formulations have not yet been explored for this combined problem of learning overlap and support jointly, distinguishing confounders from non-confounders, and doing so with interpretability as a requirement.
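
As a toy example of what such an interpretable description could look like, the sketch below flags poor-overlap patients with propensity scores (as in the previous snippet) and then fits a shallow decision tree whose printed rules describe where overlap fails. It uses off-the-shelf scikit-learn pieces as a stand-in for the density estimation trees [3] and Boolean rules [4] mentioned above, and it does not yet address the harder problem of restricting attention to confounders.

```python
# Toy example: summarize where overlap fails using a shallow decision tree. Standard
# scikit-learn components as a stand-in for [3] and [4]; synthetic data as before.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 5000
X = rng.normal(size=(n, 2))
T = rng.binomial(1, 1 / (1 + np.exp(-3 * X[:, 0])))          # assignment driven by the first feature

ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]   # estimated propensity scores
poor_overlap = (ps < 0.05) | (ps > 0.95)                     # flag near-deterministic assignment

# A depth-2 tree "explains" the flagged region and prints as human-readable rules
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, poor_overlap)
print(export_text(tree, feature_names=["feature_0", "feature_1"]))
```

On real claims data the printed conditions would involve clinically meaningful variables, yielding statements like the spinal-condition example above that practitioners can act on.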

Our team’s ultimate goal is to work towards a science of interpretable, individual-level causal inference from observational data. In doing so, we hope to contribute solutions to the opioid epidemic as well as applications far beyond.

References

1. C. Nagpal, D. Wei, B. Vinzamuri, M. Shekhar, S. E. Berger, S. Das, K. R. Varshney. Interpretable Subgroup Discovery for Treatment Effect Estimation with Application to Opioid Prescribing Guidelines. Submitted 2018.

2. U. Shalit, F. D. Johansson, D. Sontag. Estimating Individual Treatment Effect: Generalization Bounds and Algorithms. International Conference on Machine Learning (ICML), Sydney, Australia, August 2017.

3. P. Ram and A. G. Gray. Density Estimation Trees. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), San Diego, CA, USA, August 2011.

4. S. Dash, O. Günlük, D. Wei. Boolean Decision Rules via Column Generation. Neural Information Processing Systems (NIPS), Montreal, Canada, December 2018.

Authored by Dennis Wei (IBM Research) and Fredrik D. Johansson (MIT)


