Visualizing the Algorithm for Complex Decision-Making

Samarth Ashwathanarayana
Published in VisUMD
4 min read · Nov 3, 2023
Image credits: https://unsplash.com/@soymeraki

Making decisions can be quite a daunting task. Whether it’s as trivial as picking a flavor at your favorite ice cream parlor or as impactful as choosing the right career path, making decisions can be irksome, confusing, and time-consuming. If you are someone who likes making informed decisions, this article will help you discover how recent research on decision-making systems can help us understand the factors involved in making complex decisions.

In this article, I will break down the findings of a visualization paper by Md Naimul Hoque and Klaus Mueller. The two authors developed an interactive interface called “Outcome Explorer” that visualizes decision-making factors and how they relate to each other, in a way that lets even someone without machine learning expertise understand the causality involved.

Algorithmic Decision-Making Models

There has been a significant amount of research on algorithmic decision making and explainable AI, but much of it is understandable only by machine learning experts. The EU General Data Protection Regulation mandates that all users have a “right to explanation,” and ensuring this right for users was a driving force for the authors. Essentially, as non-experts in machine learning, we have questions about decisions made by such systems: “Why did the model make a particular decision?” or “How will my actions change a decision?”

Researchers use a Directed Acyclic Graph (DAG) to map out the causal relationships between the variables considered in a decision. For example, if the decision is which classes to pick for a semester, the variables would include class timings, the professor teaching the class, the location of the class, its mode (online vs. in-person), the number of credits, the syllabus, and so on. Each of these can be mapped out and connected with arrows showing how they relate to one another. The syllabus, for instance, depends on the professor teaching the class and their expertise. This is a directed relationship from the professor’s expertise to the syllabus and, through it, to the outcome of the course.
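To make this concrete, the class-selection example can be sketched as a tiny DAG in code. The node names and edges below are hypothetical illustrations of the example above, not part of the authors’ system; a quick topological sort confirms the graph really is acyclic (causes always come before effects).

```python
# A minimal sketch of the class-selection DAG described above.
# Edge (a -> b) means "a causally influences b". All names are made up.
edges = {
    "professor_expertise": ["syllabus"],
    "syllabus": ["course_outcome"],
    "class_timings": ["course_outcome"],
    "mode": ["course_outcome"],
    "credits": ["course_outcome"],
    "course_outcome": [],
}

def topological_order(graph):
    """Return nodes with every cause before its effects.
    Raises ValueError if the graph has a cycle (i.e., is not a DAG)."""
    visiting, done, order = set(), set(), []
    def visit(node):
        if node in done:
            return
        if node in visiting:
            raise ValueError("cycle detected - not a DAG")
        visiting.add(node)
        for child in graph.get(node, []):
            visit(child)
        visiting.discard(node)
        done.add(node)
        order.append(node)
    for node in graph:
        visit(node)
    return order[::-1]  # reverse post-order: causes first

print(topological_order(edges))
```

Because every variable feeds into `course_outcome`, it always appears last in the ordering, which is exactly the property that lets a DAG represent how upstream factors propagate to a final decision.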

A Directed Acyclic Graph showing the causal relationships between factors that affect the relevance of a class.

How is Outcome Explorer different from other explanation interfaces?

While other tools show what the outcome would be “if” some variables were changed, Outcome Explorer shows “why” it changes.

Most decision interfaces allow users to change input factors and see how they affect the final decision. Outcome Explorer goes further: it not only allows input factors to be changed, but also shows the user the connections between variables and offers the option to disconnect them. If we are searching for the right job opportunity, we can see the connection between the responsibilities and the salary offered. This makes the model far more intelligible to non-expert users.

Additionally, Outcome Explorer helps users compare the outcome across different variables. In the demo video provided by the authors, an expert first creates a model from a fixed dataset. The expert can manipulate the relationships on the Directed Acyclic Graph and then publish it for non-expert users to interact with and understand the causal relationships. The authors chose a dataset of housing prices in Boston. A non-expert user can then adjust variables such as distance and the number of rooms per dwelling, and see their impact on another variable, the proportion of lower-status residents in the neighborhood. Overall, they are able to trace these connections through to the price outcome.
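As a rough illustration of what disconnecting an edge does, here is a toy linear causal model loosely inspired by the Boston housing example. The structural equations and weights below are invented purely for illustration; Outcome Explorer fits its actual model from the data, and its variables and coefficients are not these.

```python
# Toy structural causal model (NOT the authors' fitted model):
# each variable is a simple linear function of its causal parents.
def predict_price(rooms, distance, edges_active=True):
    # Hypothetical equation: neighborhood status depends on rooms and distance.
    lower_status = 20.0 - 2.0 * rooms + 0.5 * distance
    if not edges_active:
        # "Disconnecting" the incoming edges freezes this node at a baseline,
        # mimicking the edge-removal interaction in the interface.
        lower_status = 20.0
    # Hypothetical equation: price depends on rooms and neighborhood status.
    price = 50.0 + 5.0 * rooms - 0.8 * lower_status
    return price

with_edge = predict_price(rooms=6, distance=4)                       # 72.0
without_edge = predict_price(rooms=6, distance=4, edges_active=False)  # 64.0
print(with_edge, without_edge)
```

Cutting the edge changes the predicted price even though the inputs are identical, which is the “why” that Outcome Explorer surfaces: adding rooms raises the price both directly and indirectly, through its effect on the neighborhood variable.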

How can you benefit from Outcome Explorer?

Outcome Explorer gives us a visual representation of the different factors involved in making big decisions and the relationships between those factors. Sometimes making important decisions is difficult, and we may forget to consider certain factors, leading us to make poor decisions.

If an expert has already built a model from the dataset we need to consider, it makes our lives a lot easier. We can trust Outcome Explorer because all the evidence is laid out transparently before our eyes, enabling us to make well-informed decisions.

TLDR:

  • Non-experts in machine learning find it difficult to understand how algorithmic decision-making models work.
  • Underlying causal relationships need to be transparent.
  • Directed Acyclic Graphs help visualize these relationships.
  • Outcome Explorer provides an interactive visualization that helps non-experts in ML understand how causal factors affect each other and the final outcome.
  • By letting users compare different factors, Outcome Explorer makes complex decision-making easier to interpret.

Watch Outcome Explorer in action in the authors’ demo video.

Find the paper in PDF format here.

Reference:

Hoque, N., & Mueller, K. (2022). Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making. IEEE Transactions on Visualization and Computer Graphics, 28(12), 4728–4740. https://doi.org/10.1109/tvcg.2021.3102051
