Two minutes NLP — Ethical and social risks from large Language Models

Discrimination, toxicity, misinformation, fraud, and environmental harm

Fabio Chiusano
NLPlanet
2 min read · Dec 11, 2021


Photo by engin akyurt on Unsplash

DeepMind just published the paper Ethical and social risks of harm from Language Models, in which they anticipate possible risks from these kinds of models and build a comprehensive classification of them. An understanding of the risks posed by these models is clearly needed to foster responsible innovation.

DeepMind’s goal is indeed to support the broader research program toward responsible innovation on Language Models (LMs), to increase public awareness of the ethical and social risks posed by LMs, and to break these risks down into smaller, actionable pieces so as to actively support and encourage their mitigation.

The risks covered in the report are:

  • Discrimination, Exclusion, and Toxicity: arise from the language model reproducing unjust, toxic, and oppressive tendencies present in the training data, such as social stereotypes and unfair discrimination.
  • Information Hazards: arise from the language model predicting utterances that contain private or safety-critical information present in the training data.
  • Misinformation Harms: arise from the language model predicting false, misleading, or poor-quality information. Potential harms include deception or unethical actions by humans who take the LM prediction to be factually correct, as well as wider societal distrust in shared information.
  • Malicious Uses: arise from humans intentionally using the capabilities of LMs to cause harm, for example by undermining public discourse or running fraud and personalized disinformation campaigns.
  • Human-Computer Interaction Harms: arise from LM applications, such as Conversational Agents, that engage users directly in conversation. Potential harms include unsafe use due to users misjudging or mistakenly trusting the model, exploitation of psychological vulnerabilities, and violations of user privacy.
  • Automation, Access, and Environmental Harms: arise when LMs power widely used applications whose benefits accrue disproportionately to some groups, increasing social inequality through the uneven distribution of risks and benefits, the loss of high-quality and safe employment, and environmental harm.

Here is an image that recaps the six main ethical and social risk areas from large language models:

Main ethical and social risk areas from large Language Models. Image by DeepMind.
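
If you want to refer to this taxonomy in code, for example when tagging the findings of an LM audit, a minimal Python sketch could encode the six risk areas as an enum. Note that the RiskArea name and the one-line summaries below are my own illustrative paraphrases of the paper’s categories, not an official API from DeepMind:

```python
from enum import Enum

# Illustrative encoding of DeepMind's six risk areas.
# The enum names and summaries paraphrase the paper's taxonomy;
# they are not an official library or API.
class RiskArea(Enum):
    DISCRIMINATION_EXCLUSION_TOXICITY = (
        "Unjust, toxic, or oppressive tendencies reproduced from training data"
    )
    INFORMATION_HAZARDS = (
        "Leakage of private or safety-critical information from training data"
    )
    MISINFORMATION_HARMS = (
        "False, misleading, or poor-quality predictions taken as fact"
    )
    MALICIOUS_USES = (
        "Intentional misuse, e.g., fraud or personalized disinformation"
    )
    HUMAN_COMPUTER_INTERACTION_HARMS = (
        "Harms from conversational agents, e.g., misplaced user trust"
    )
    AUTOMATION_ACCESS_ENVIRONMENTAL_HARMS = (
        "Unequal benefits, loss of safe employment, environmental costs"
    )

# Example: enumerate the areas when building an audit checklist.
for area in RiskArea:
    print(f"{area.name}: {area.value}")
```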
