Here’s how AI researchers are thinking about the societal impacts of AI
Scientists have long subscribed to the idea that they shouldn’t bring ethical or political values into the scientific process. But what happens when their inventions raise safety concerns, exhibit racist behaviors, or threaten the environment? They might, as computer scientists recently decided to do, try formally integrating ethical reflection into their work to realize more prosocial outcomes.
Mobilizing an entire field to change something fundamental about the way it works is no simple task. One approach is to change a step in the research process that is crucial to credibility: peer review.
The NeurIPS 2020 Broader Impact Statement
In 2020, NeurIPS, a top machine learning conference, required that all authors submit a broader impact statement:
In order to provide a balanced perspective, authors are required to include a statement of the potential broader impact of their work, including its ethical aspects and future societal consequences. Authors should take care to discuss both positive and negative outcomes.
NeurIPS is a massive conference; in 2020, nearly 2,000 papers were accepted. While a paper could not be rejected solely on the basis of its broader impact statement, submissions could still be rejected on ethical grounds. Thus, thousands of researchers thought about the societal consequences of their work as part of the new requirement. The conference provided little additional guidance on how to write the statement, meaning that authors had to decide for themselves which topics to prioritize, what timeframe of impacts to discuss, how to write about the uncertainty of future outcomes, and so on.
The broader impact statements are a snapshot of a community at a crucial point of change. They give us a glimpse into how researchers are grappling with ethics, which can inform what might be working, what is potentially lacking, and how to move forward.
Our Analysis
With Jessica Hullman and Nicholas Diakopoulos, I conducted a qualitative thematic analysis of a sample of 300 NeurIPS 2020 broader impact statements (here’s the dataset). Our analysis surfaces several themes around what authors focus on and how they do so. We organized these themes into a framework with dimensions and sub-dimensions of variation (see the table below).
Broadly, we find that authors write about the Impacts of their work and Recommendations for how to mitigate negative consequences and ultimately realize better downstream outcomes.
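For readers who want to explore the released dataset themselves, here is a minimal sketch of how one might draw a comparable random sample and take a crude first keyword-based pass over the topics discussed below. The filename, the "statement" column name, and the keyword lists are illustrative assumptions on our part, not the dataset's actual schema or our coding procedure:

```python
import pandas as pd

# Load the broader impact statements. The filename and the free-text
# "statement" column are hypothetical placeholders for the real schema.
statements = pd.read_csv("neurips_2020_impact_statements.csv")

# Draw a reproducible random sample of 300 statements, mirroring the
# sample size used in the analysis.
sample = statements.sample(n=300, random_state=42)

# A rough first pass: flag statements mentioning a few recurring topics.
# These keyword lists are illustrative, not the study's coding scheme.
topics = {
    "efficiency": ["efficien", "training time", "compute"],
    "bias": ["bias", "fairness", "discriminat"],
    "privacy": ["privacy", "surveillance"],
}
for topic, keywords in topics.items():
    text_lower = sample["statement"].str.lower()
    hits = text_lower.apply(lambda t: any(k in t for k in keywords))
    print(f"{topic}: {hits.sum()} of {len(sample)} statements mention it")
```

To be clear, the themes in our analysis were derived through human qualitative coding, not keyword matching; a sketch like this is only useful for getting an initial feel for the data.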
Impacts
Under Impacts, we find themes for how authors describe impacts, what impacts they describe, who they say will be impacted, and when society will see impacts.
First, how do authors express impacts? We find that authors vary in how they express impacts along five dimensions:
- Valence of consequences as positive or negative
- Orientation toward technical outcomes or society-facing outcomes
- Specificity of consequences, ranging from highly contextual and specific to more general
- Uncertainty built into consequences such as acknowledging that a model could fail downstream
- Composition of the written statements along the prior dimensions
For some categories of impacts, such as efficiency, authors tend to write in terms of technical contributions. For instance, authors might state that their method reduces training time or resources (the implication being that this is, in and of itself, a positive consequence). On the other hand, authors tend to write about impacts on privacy in more socially-oriented terms, for example by discussing implications for surveillance. Framing consequences in purely technical terms seems to run counter to the spirit of the broader impact statement, which calls explicitly for discussion of societal consequences.
Second, what types of impacts are authors focusing on?
We find that authors focus on the topics shown in the graph below. In our paper, we describe specific patterns in impacts within each topic, as well as each topic’s primary orientation as more technical or more society-facing. Interestingly, we find that efficiency is a common topic that is predominantly technically-oriented. Bias (including issues of fairness) receives significant attention as well and is described in ways that are both technically-oriented and society-facing; privacy is yet another relatively common topic, but is written about primarily in society-facing ways. Though less prevalent, authors additionally write about impacts on the environment, threats of deepfakes (under Media), and impacts on employment (under Labor).
Authors are split on whether theoretical work has societal consequences: 9% of statements indicate that, due to the theoretical nature of the work, there are no foreseeable ethical or negative societal consequences, while 10% of statements draw a connection between the theoretical nature of the work and potential societal consequences. It would be interesting to find out what leads authors to these differing conclusions.
Third, who will be impacted?
Over half of the statements in our sample (64%) mention who might be impacted by the work. The groups of people mentioned include:
- Broad domain (e.g., healthcare)
- People with specific attributes or conditions
- People in a certain industry (e.g., creative industries)
- Broader public (e.g., people on social media)
- Historically disadvantaged groups
- Researchers and practitioners
Fourth, when will society see these impacts?
Only about 10% of statements in our sample include the timeframe of impacts. Authors write about timeframes in broad terms: short-term versus long-term impact, or how some outcome might be realized faster.
Recommendations
Authors focus far less on recommendations than on impacts. However, they do make various suggestions for how to achieve the following four outcomes:
- Ensure Safe and Effective Use of AI (21% of statements)
- Ensure “Fair” Outcomes (6%)
- Protect Privacy (5%)
- Reduce Environmental Impact (1%)
Clearly, the AI research community has a way to go in proactively addressing negative consequences and recommending positive paths forward.
At times (7% of statements), authors leave it ambiguous who is responsible for carrying out the actions they recommend. However, 24% of statements indicate in some way who is responsible. Responsible parties mentioned include researchers, policymakers, and stakeholders closer to deployment, such as practitioners and system designers.
Looking Ahead
So, was the broader impact statement requirement successful? This question is difficult to answer because the intended goals of the new requirement are somewhat ambiguous. We offer three potential goals below, along with ideas for how the broader impact statement might be modified to better achieve them in the future.
- Encourage reflexivity: To further encourage reflexivity, we suggest providing authors with further guidance around mapping technical results to societal impact (e.g., along the lines of Ashurst et al.’s “Impact Stack,” which was linked to in NeurIPS 2020’s official guidance).
- Initiate changes to future research: To influence future research directions based on potential societal consequences, researchers should write broader impact statements earlier in the research process so that they can make meaningful changes to their research.
- Minimize negligence and recklessness: Researchers might find it helpful to reference a set of relevant ethical issues in their area of work so that they may better identify and mitigate potential risks.
NeurIPS 2020’s broader impact statement marks a significant change in how the field of computer science thinks about ethics. It’s a tumultuous time to be studying AI ethics, as the field continues to experiment with new approaches to grappling with its impact on society. We welcome your thoughts on our findings and on the evolving role of broader impact statements in research.