Diversity as a Computational Problem

How a computational approach to diversity, equity, and inclusion can help people and organizations make more diverse choices.

--

Implicit bias causes discrimination against marginalized groups, a phenomenon well recognized in society. In a utopian society, every demographic group would be represented equally in every domain. As we all know, that is most often not the case. The first step toward equal representation is the ability to identify and acknowledge gaps in representation. In this research, we take a computational approach to that goal. Before explaining the approach, let's look at an application.

Consider the scenario of a live telecast, where a producer controls several cameras to produce the broadcast. A producer typically uses an interface called a Switcher, shown below, to control the cameras. CAM1, CAM2, and CAM3 are the available camera angles. The producer uses the PRV (preview) frame to isolate an angle and inspect it before selecting it for broadcast. Finally, the PGM (program) frame is what we see on our televisions.

The Switcher Interface used by live telecast producers.

Now, suppose the producer wants to ensure balanced screen time for different demographic groups in the telecast to promote equal representation. This is not an easy task: the producer has to make instantaneous decisions while controlling the cameras, with no chance to revert or refine a decision. Moreover, the producer has to track the screen presence of each character mentally, all in a strenuous live environment. Enter Screen-Balancer!

To assist producers in this task, we introduced Screen-Balancer [1], an interactive visual interface that augments the Switcher with several bar charts. The bar charts show the gender and skin-tone distributions of each camera angle and their impact on the gender and skin-tone distributions of the broadcast. The distributions are extracted automatically from the camera feeds. This helps producers make an informed decision before changing a camera angle.

Screen-Balancer: An interactive visual tool to balance phenotypic representation in live telecasts.
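
To make the core idea concrete, here is a minimal sketch of the bookkeeping such a tool automates. This is not the published Screen-Balancer implementation: the `estimate_groups` callback stands in for the vision pipeline that extracts phenotypic traits from a frame, and all names and numbers are illustrative.

```python
from collections import Counter

class ScreenTimeTracker:
    """Tallies on-air screen time per demographic group so a producer can
    see how each camera choice would shift the overall distribution."""

    def __init__(self, estimate_groups):
        # estimate_groups(frame) -> list of group labels visible in the
        # frame; a placeholder for the automatic feed analysis.
        self.estimate_groups = estimate_groups
        self.seconds = Counter()

    def record(self, broadcast_frame, duration=1 / 30):
        # Every group visible in the aired frame accrues screen time.
        for group in self.estimate_groups(broadcast_frame):
            self.seconds[group] += duration

    def distribution(self):
        # Share of total on-air time per group.
        total = sum(self.seconds.values()) or 1.0
        return {g: t / total for g, t in self.seconds.items()}

    def preview_impact(self, candidate_frame, duration=5.0):
        # Projected distribution if the candidate angle aired next.
        projected = Counter(self.seconds)
        for group in self.estimate_groups(candidate_frame):
            projected[group] += duration
        total = sum(projected.values()) or 1.0
        return {g: t / total for g, t in projected.items()}

# Demo with a fake classifier: frames here are just lists of labels.
tracker = ScreenTimeTracker(estimate_groups=lambda frame: frame)
tracker.record(["man", "woman"], duration=10)
tracker.record(["man"], duration=10)
print(tracker.distribution())             # man: ~0.67, woman: ~0.33
print(tracker.preview_impact(["woman"]))  # shifts toward 'woman'
```

The `preview_impact` method mirrors the role of the PRV frame: it lets the producer see how a candidate angle would shift the distribution before committing it to PGM.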

A computational approach to promoting diversity

Now let's look at what else this approach can do. It can be divided into two parts:

  1. Computation: We apply data science and machine learning methods to extract and quantify the representation of demographic groups from raw data formats such as video, text, and tabular data (a small example follows this list).
  2. Interactive user interface: We build user-friendly interactive interfaces that present the extracted representation in easy-to-understand visual formats. The interfaces allow non-expert users to analyze diversity representation and take appropriate action.
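
To illustrate the computation step on the simplest of these formats, here is a small, hypothetical example of quantifying representation in tabular data with pandas. The table and column names are invented for illustration; they are not from our studies.

```python
import pandas as pd

# Hypothetical personnel table; in practice this would come from an
# institutional database or a CSV export.
staff = pd.DataFrame({
    "department": ["CS", "CS", "CS", "Math", "Math", "Math"],
    "gender":     ["man", "man", "woman", "woman", "man", "man"],
})

# Share of each gender within each department: raw counts are
# normalized into proportions that an interface can plot directly.
representation = (
    staff.groupby("department")["gender"]
         .value_counts(normalize=True)
         .rename("share")
         .reset_index()
)
print(representation)
```

The second part, the interactive interface, would render exactly this kind of normalized table as bar charts that a non-expert can read at a glance.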

A few potential usage scenarios:

  1. Help diversity professionals at a university understand patterns of (mis)representation among students and staff.
  2. Help journalists and creative writers avoid gendered and stereotyped language in their writing (a simple sketch follows this list).
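
For the second scenario, even a simple lexicon-based pass can surface gendered word choices for a writer to reconsider. The word list below is a tiny illustrative sample, not a vetted lexicon, and a production tool would need context-aware analysis rather than literal matching.

```python
import re

# Tiny illustrative lexicon of gendered terms and neutral alternatives;
# a real tool would rely on a curated, much larger resource.
GENDERED_TERMS = {
    "chairman": "chairperson",
    "policeman": "police officer",
    "stewardess": "flight attendant",
    "mankind": "humankind",
}

def flag_gendered_language(text):
    """Return (term, suggestion, position) for each gendered term found."""
    findings = []
    for term, suggestion in GENDERED_TERMS.items():
        for match in re.finditer(rf"\b{term}\b", text, flags=re.IGNORECASE):
            findings.append((match.group(), suggestion, match.start()))
    return sorted(findings, key=lambda f: f[2])

sample = "The chairman thanked the stewardess on behalf of mankind."
for term, suggestion, pos in flag_gendered_language(sample):
    print(f"{pos:>3}: '{term}' -> consider '{suggestion}'")
```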

A few considerations

1. Will our approach solve the overall bias problem in society?

Most probably not. Bias is ubiquitous in our society, and it is often multimodal and nuanced. A few computational tools will not solve the whole problem. However, we do believe our tools can work as probes that promote diversity, reduce stereotypes, and motivate further efforts in this area.

2. Is it the only approach to address bias and stereotypes?

Of course not. There are many commendable approaches with similar goals: computational fairness, social intervention, and participatory design, to name a few. We consider our work one part of the overarching pursuit of equality.

3. Are there certain pitfalls of such tools?

There are too many examples of technology gone wrong to name here. Given the sensitivity of the topic, it is important to acknowledge the pitfalls of our approach. A few of them: (1) designing systems without conducting formative research with users; (2) designing AI that can propagate unjust decisions; and (3) designing systems for people already in positions of power.

Acknowledging these pitfalls will inform the design of such systems and reduce the chances of misuse. Design alternatives such as including marginalized groups in the design process and ensuring the fairness and interpretability of the underlying AI can address these pitfalls.

Citation:

  • [1] Hoque, M. N., Saquib, N., Billah, S. M., & Mueller, K. (2020). Toward Interactively Balancing the Screen Time of Actors Based on Observable Phenotypic Traits in Live Telecast. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1–18.
