GPT-3: What it is and How it Could Affect Work, Education, and Life As We Know It

Thea Knobel
GSV Ventures
Feb 3, 2021

Sergey Karayev, PhD, Head of AI for STEM at Turnitin, Co-Founder of Gradescope, and GSV Ventures Advisor, shares his insights into GPT-3.

Each month, GSV Ventures hosts an advisory member to speak with our portfolio founders during our ClubGSV meetup. Advisors include the former U.S. Secretary of Education, university presidents, founders and CEOs of major companies, and more. Together, the GSV advisors represent some of the most important institutions driving the dawn of the age of digital learning. The goal of ClubGSV is to give founders access to education critical to their work. Due to popular demand, we will publish learnings from these sessions in our new series, ClubGSV. This is the first installment.

Sergey Karayev is focused on developing and deploying AI systems that improve human life. In 2014, he finished his PhD in Computer Science at UC Berkeley and co-founded Gradescope, which develops AI to transform grading into learning. In 2018, Gradescope was acquired as a standalone product by Turnitin, a leading edtech provider focused on upholding academic integrity. Sergey now runs AI for STEM at Turnitin and is an active advisor to GSV Ventures. Here’s a recap of his GPT-3 research from our recent meetup.

The Evolution of GPT-3

In the summer of 2020, the artificial intelligence lab OpenAI announced a truly revolutionary technology: GPT-3. GPT-3 (Generative Pre-Trained Transformer 3) is a deep learning model built for language modeling: predicting the next word in a sequence. It can answer homework questions, write poetry, tweet, translate languages, generate images, and even write code. GPT-3 is a neural network, a kind of model that learns to perform a task by analyzing training examples. It spent months analyzing nearly a trillion words from across the internet, from books to blogs to articles to social media. Its output is now often virtually indistinguishable from human writing.
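The core idea of language modeling, predicting the next word from the words that came before, can be illustrated at a vastly smaller scale than GPT-3’s neural approach with a simple word-pair (bigram) counter. The toy corpus and function name below are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for GPT-3's near-trillion-word training data.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

GPT-3 replaces these raw counts with 175 billion learned parameters, which is what lets it generalize far beyond word pairs it has literally seen.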

The first version of this network had just over a hundred million numerical parameters. To “train” a neural network is to find the parameter values that minimize error on training data; the more parameters a model has, the more complexity it can capture. Model sizes have grown at an exponential rate: OpenAI’s GPT-1 (June 2018) had 110 million parameters, GPT-2 (March 2019) had 1.5 billion, and GPT-3 (June 2020) has 175 billion. Neural networks with 1.6 trillion parameters have already been published, with no sign of slowing down.
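“Training” in this sense, nudging parameters to reduce error on data, can be sketched with the smallest possible model: a single parameter fit by gradient descent. This is a toy illustration of the principle, not GPT-3’s actual training procedure:

```python
# Fit a one-parameter model y = w * x to data generated with w = 3,
# by repeatedly nudging w in the direction that reduces squared error.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (x, y) pairs where y = 3x

w = 0.0    # the model's single parameter (GPT-3 has 175 billion of these)
lr = 0.01  # learning rate: how far to nudge w each step

for _ in range(500):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # w converges toward the true value, 3.0
```

Training GPT-3 is conceptually this same loop, repeated over hundreds of billions of words and 175 billion parameters at once.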

A graph shows the exponential growth in parameter counts from GPT-1 to GPT-3.

Out of concern that the technology could be misused, OpenAI did not release the model’s parameters. Instead, it provides access through an API, which lets the company monitor how GPT-3 is used. For now, access is limited to a private beta of select outside developers, who are helping OpenAI explore the tool’s capabilities. OpenAI eventually plans to release a commercial product in late 2021, offering businesses a paid subscription to the AI via the cloud. So what are the implications of GPT-3, according to Sergey?
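For beta participants, using GPT-3 means sending a plain HTTP request rather than running the model themselves. The sketch below only builds such a request without sending it; the endpoint, engine name, and fields follow OpenAI’s public beta documentation of the time and may have changed, and the API key is a placeholder:

```python
import json

# Sketch of a GPT-3 beta API call (request is built, not sent).
API_KEY = "YOUR_API_KEY"  # granted to private-beta participants
url = "https://api.openai.com/v1/engines/davinci/completions"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
body = json.dumps({
    "prompt": "Write a haiku about online learning:",
    "max_tokens": 32,    # cap the length of the generated text
    "temperature": 0.7,  # higher values make the output more varied
})

# Actually sending it would be e.g.: requests.post(url, headers=headers, data=body)
```

Because every call goes through this hosted endpoint, OpenAI can observe and rate-limit usage, which is the monitoring arrangement described above.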

What GPT-3 Can Do

GPT-3 can create everything from text to code to images. Its specialty is synthesizing millions of data points found on the internet into human-like language. Given just an author’s name and a single word, GPT-3 composed prose entirely on its own. It can also write code from a simple description and create images of things that don’t exist in real life. This could lead to a more efficient workforce in which everyone is essentially armed with the abilities of both a copywriter and a software engineer.

A Twitter user posts a photo of prose generated by GPT-3: given a single word as the prompt, the model generated an entire text.
A Twitter user posts an interactive video showing GPT-3 writing code.
A slide from Sergey’s presentation, showcasing how GPT-3 can be used for image generation.

The Perils of GPT-3

GPT-3 is not perfect. It has the potential to cause unintentional or intentional harm through computer-generated output posing as human language. Artificial intelligence leaders continue to think through these challenges as the technology evolves. Here are the top five threats:

  1. Language can be filled with bias or hate speech
  2. A flood of useless, machine-generated search results
  3. More fake articles, reviews, and social media posts
  4. Physical, social, and biological reasoning is not fully accurate
  5. Potential for academic dishonesty

Screenshots show GPT-3’s potential bias when given prompts to write tweets based on one word.

How Will GPT-3 Affect Work, Education, and Life?

GPT-3 has the potential to change life as we know it, from the way we do our jobs to the way we interact on the internet to the way we learn. The technology has reached the point where AI-generated writing can match the quality instructors expect from students. GPT-3 can create multiple-choice questions, write short essays, and even generate explanations. This is an exciting innovation, but it could also lead to widespread cheating.

A slide from Sergey’s presentation shows the potential for academic dishonesty.

Sergey suggests that GPT-3 and its successors may become a “calculator for writing.” It is worth asking whether students should learn to use such a calculator effectively, in addition to learning the fundamentals of good writing. Exactly how GPT-3 will affect the workforce and education is not yet known. But one thing is certain: GPT’s progress will not slow down.



Thea Knobel is Vice President of Platform & Marketing at GSV Ventures, investing in the globe’s top EdTech startups.