NeurIPS 2020 | Probabilistic Approaches for Algorithmic Recourse With Limited Causal Knowledge

Synced | Published in SyncedReview | Dec 4, 2020

The rise of machine learning (ML) has made centralized decision-making more efficient than ever, while also raising difficult but important questions. In real-world scenarios such as pre-trial bail, loan approval, or prescribing medications, it is not enough for black-box models to be accurate and robust: an algorithm’s decisions are also expected to be explainable, so that their impact in real-world settings can be aligned with socially relevant values such as fairness, privacy and accountability.

“Consider an individual has been rejected a loan from the bank. Why did the individual not get the loan? What could they have done, or what could they change to get a favourable outcome in the future?” asks Max Planck Institute for Intelligent Systems PhD student Amir Hossein Karimi.

The term “algorithmic recourse” refers to actionable recommendations that help individuals obtain a more favourable prediction from an ML system. In a NeurIPS 2020 spotlight paper, Karimi and joint first author Julius von Kügelgen teamed with co-authors from the Max Planck Institute for Intelligent Systems, ETH Zurich, University of Cambridge, and Saarland University to dive deep into this relatively young research field and introduce two probabilistic approaches designed to achieve algorithmic recourse in practice.

Currently, counterfactual explanations are a popular approach for explaining the predictions of black-box classifiers, and can be interpreted as recommendations for achieving a specifically requested goal. For example, a possible answer to the question “Why did the individual not get the loan?” might be that the loan application would have been accepted if the individual earned $500 more per month and did not have a second credit card. Such explanations, however, do not by themselves help individuals seeking algorithmic recourse, as efficiently computing high-quality counterfactual explanations of black-box models can be extremely difficult.
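To make this concrete, here is a minimal sketch of how such a counterfactual explanation could be computed for a simple differentiable classifier: gradient descent searches for the nearest input that crosses the decision boundary. The toy logistic model, its weights, and the feature semantics are illustrative assumptions, not the setup from the paper.

```python
# Minimal sketch: gradient search for a counterfactual explanation of a
# toy logistic classifier. All weights, features and numbers are invented
# for illustration only.
import numpy as np

w = np.array([3.0, -2.0])   # toy weights: [income (in $1000s), num_credit_cards]
b = -1.0

def predict_proba(x):
    """P(loan approved | x) under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.5, lam=0.1, lr=0.05, steps=2000):
    """Find x_cf close to x with predict_proba(x_cf) >= target."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        if p >= target:
            break
        # Gradient of squared prediction loss plus a distance penalty
        # that keeps the counterfactual close to the original input.
        grad_pred = 2 * (p - target) * p * (1 - p) * w
        grad_dist = 2 * lam * (x_cf - x)
        x_cf -= lr * (grad_pred + grad_dist)
    return x_cf

x = np.array([1.0, 2.0])             # rejected applicant
x_cf = counterfactual(x)
print(x, predict_proba(x))           # low approval probability
print(x_cf, predict_proba(x_cf))     # nearest point that crosses the boundary
```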

Although prior work on counterfactual explanations and algorithmic recourse treated features as independently manipulable inputs and ignored the causal relationships between them, some recent research has considered the causal structure between features in order to find actions and interventions for recourse. “While this approach is theoretically sound, it involves computing counterfactuals in the true underlying structural causal model (SCM),” note the researchers. This presents challenges: in real-world settings the true underlying SCM typically remains unknown, since specifying it requires complete knowledge of the true structural equations. In the new paper, Karimi and the team show that without access to the true structural equations it is impossible to guarantee recourse.
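The toy example below illustrates why counterfactuals depend on the true structural equations. The standard abduction-action-prediction recipe first recovers the individual’s noise term from the observation, then intervenes, then recomputes downstream variables; the two-variable SCM and its coefficient are invented purely for illustration.

```python
# Toy illustration of counterfactual reasoning in a known SCM
# (abduction -> action -> prediction). These structural equations are
# made up; in practice they are exactly what is unknown.

# Assumed ground-truth SCM over (income X1, savings X2):
#   X1 := U1
#   X2 := 0.3 * X1 + U2      (savings depend causally on income)

def counterfactual_savings(x1_obs, x2_obs, x1_new):
    # Abduction: recover the individual's noise term from the observation.
    u2 = x2_obs - 0.3 * x1_obs
    # Action: intervene on income, do(X1 = x1_new).
    # Prediction: recompute the descendant using the same noise term.
    return 0.3 * x1_new + u2

x1, x2 = 4.0, 2.0                                   # observed individual
print(counterfactual_savings(x1, x2, x1_new=5.0))   # 2.3: savings rise too

# With a misspecified equation (say, coefficient 0.0 instead of 0.3), the
# predicted counterfactual -- and hence the recommended action -- would be
# wrong, which is why recourse guarantees need the true SCM.
```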

In response to the question of how to achieve algorithmic recourse when causal knowledge is limited (for example, to only the causal graph), the team proposed two probabilistic methods that achieve recourse with high probability:

  • Individualized recourse based on counterfactuals: an individual-level approach that uses Gaussian processes (GPs) to approximate the counterfactual distribution by averaging over a family of additive Gaussian SCMs;
  • Subpopulation-based recourse: an approach that assumes only the causal graph is known and uses conditional variational autoencoders (CVAEs) to estimate the conditional average treatment effect of an intervention on a subpopulation similar to the individual seeking recourse (the shared idea behind both methods is sketched below).
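The sketch below captures the idea the two methods share, under stated assumptions: rather than trusting a single point-estimate SCM, sample post-intervention outcomes from an approximate model of the intervention’s effect and accept the cheapest action whose estimated success probability clears a threshold. The linear effect with an uncertain coefficient is a simple stand-in for the paper’s GP and CVAE machinery, and all names and numbers are hypothetical.

```python
# Hedged sketch of probabilistic recourse: choose the cheapest action whose
# *estimated* probability of a favourable outcome exceeds a threshold,
# averaging over model uncertainty instead of a single point-estimate SCM.
import numpy as np

rng = np.random.default_rng(0)

def classifier(x):
    """Toy decision rule; favourable outcome when True."""
    return x @ np.array([3.0, -2.0]) - 1.0 > 0

def sample_outcomes(x, action, n=1000):
    """Sample post-intervention features under model uncertainty."""
    # Assumption: the intervention's downstream effect is linear with an
    # uncertain coefficient, so we average over draws of that coefficient.
    coeffs = rng.normal(loc=0.3, scale=0.1, size=n)
    x_post = np.tile(x + action, (n, 1))
    x_post[:, 1] += coeffs * action[0]   # uncertain effect of x1 on x2
    return x_post

def recourse_action(x, candidate_actions, threshold=0.95):
    """Cheapest candidate action with success probability >= threshold."""
    for a in sorted(candidate_actions, key=lambda a: np.abs(a).sum()):
        success = classifier(sample_outcomes(x, a)).mean()
        if success >= threshold:
            return a, success
    return None

x = np.array([1.0, 2.0])   # individual currently denied
actions = [np.array([0.5, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 0.0])]
print(recourse_action(x, actions))
```

Requiring the success probability to clear a threshold, rather than optimizing a single predicted counterfactual, is what makes the recommendation robust to not knowing the true structural equations.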

In experiments on three synthetic datasets, the researchers found that the subpopulation-based approaches exhibit more predictable behaviour, making them preferable for individuals seeking recourse under imperfect causal knowledge. While existing methods based on point estimates of the true underlying SCM tend to make invalid recommendations or propose unnecessarily costly actions, the proposed probabilistic approaches deliver more robust recourse interventions, especially when true causal knowledge is scarce.

“Identifying these biases in real-world (decision-making) systems is very important, insofar as the systems can be temporarily reverted back to the (supposedly more fair) human-based systems. In the long term, however, to reap the benefits that may come with automated systems, we should research and develop methods that can directly account for such notions of fairness during training and before they are deployed,” Karimi told Synced. “I believe there are quite a number of important directions for future research, especially when considering the interplay of algorithmic recourse with other ethical ML criteria, such as fairness, security and privacy, robustness, manipulation, etc.”

The paper Algorithmic Recourse Under Imperfect Causal Knowledge: A Probabilistic Approach is available in the NeurIPS 2020 proceedings. It is a NeurIPS 2020 spotlight paper, scheduled for a ten-minute presentation on Wednesday, December 9, at 8:20 a.m. PST.

Reporter: Fangyu Cai | Editor: Michael Sarazen

Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we also introduced a database covering an additional 1,428 artificial intelligence solutions across 12 pandemic scenarios.


