Published in Fair Bytes
How Biased is GPT-3?

Despite its impressive performance, the world’s newest language model reflects societal biases in gender, race, and religion

Last week, OpenAI researchers announced the arrival of GPT-3, a language model that blows its predecessor GPT-2 out of the water. GPT-2 was already widely regarded as the state-of-the-art language model; GPT-3 dwarfs it with 175 billion parameters, more than 100x the 1.5 billion parameters of GPT-2.

GPT-3 achieved impressive results: OpenAI

Catherine Yeo

Computer Science @ Harvard | I write about AI/ML in @fairbytes @towardsdatascience | Storyteller, innovator, creator | Visit me at catherinehyeo.com
