GPT-3 — A revolution in AI

Shaunak Inamdar · Published in Analytics Vidhya · 6 min read · Jul 4, 2021

“I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it. AI is made by humans, intended to behave by humans, and, ultimately, to impact humans’ lives and human society.” — Fei-Fei Li, Professor at Stanford University

Neurons firing in our brain during speech.

A long-standing goal of artificial intelligence has been to get computers to understand, process, and one day mimic human language so well that their writing is indistinguishable from a real person's, the benchmark famously proposed by the Turing test. With the development of the GPT-3 model, computers can produce text that reads as well as, and sometimes better than, human writing. This is a major leap for AI and for humanity, since it expands the horizons of what we can achieve with the help of AI.

An image by Microsoft explaining the goal of NLP.

To understand GPT-3, we first need to understand what is meant by natural language. This has nothing to do with particular languages like Chinese or Russian; it refers to the naturally evolved grammar and linguistic intuitions by which we understand what someone is saying to us, reinforced in our brains over millennia of evolution and years of daily use. Computers do not understand human speech this way. For example, we effortlessly grasp that olive oil is oil made FROM olives while baby oil is oil made FOR babies, even though the two phrases have identical grammar. The difference comes from context and world knowledge, not from the words alone.
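To see why literal composition is not enough, here is a toy sketch of my own (nothing to do with GPT-3's internals) of a naive rule that reads every phrase "X oil" as "oil made of X":

```python
# Naive compositional rule: interpret "X oil" as "oil made of X".
# It works for "olive oil" but fails for "baby oil", which is exactly
# the kind of ambiguity a language model must resolve from context.
def naive_interpretation(phrase: str) -> str:
    modifier, head = phrase.rsplit(" ", 1)
    return f"{head} made of {modifier}"

print(naive_interpretation("olive oil"))  # "oil made of olive"  (right idea)
print(naive_interpretation("baby oil"))   # "oil made of baby"   (clearly wrong)
```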

To help computers understand natural language, researchers developed a technique called language modelling. It works like this: the model is given some text and asked to predict what might come next. Its parameters are adjusted to reward correct predictions and penalise wrong ones, and training consists of repeating this process billions of times over as much text as possible.
GPT-3 is one such language model.
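To make the idea concrete, here is a minimal sketch of next-word prediction using a toy bigram counter. This is my own illustration: GPT-3 learns these statistics with a 175-billion-parameter transformer rather than a lookup table, but the objective of predicting the next token is the same.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a corpus,
# then predict the most frequent follower.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follower_counts = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    follower_counts[word][next_word] += 1

def predict_next(word: str):
    """Return the word most often seen after `word`, or None."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # 'cat' -- "the cat" occurs twice in the corpus
```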

GPT-3 is the third-generation language model in the GPT-n series. The name stands for Generative Pre-trained Transformer 3. It was developed by OpenAI, one of the most prominent AI research labs in the world, co-founded by Sam Altman (of Y Combinator) and Elon Musk, among others. The goal of OpenAI is to develop Artificial General Intelligence that can benefit humanity as a whole.

The GPT-3 model was trained on data from the internet, drawing on multiple datasets including a filtered version of Common Crawl of roughly 570GB of text containing hundreds of billions of words. What makes this particular model so powerful, however, is its sheer scale: it has 175 BILLION parameters. That is billion with a 'B', making it the largest language model of its time. For context, the human brain has on the order of 100 trillion synaptic connections, so by that crude comparison GPT-3 is roughly one-thousandth the size of a human brain. This is impressive in itself. Stored as 32-bit numbers, the parameters alone take up about 700GB. Language models also improve as they are trained on more data with more compute, which is why OpenAI spent several thousand petaFLOP/s-days of computation on training, around 3.14×10²³ floating-point operations in total. To put that in context, a human performing one addition per second would need over 31 billion years to carry out just 10¹⁸ operations. Training is estimated to have cost over $4.6 million.
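The back-of-the-envelope arithmetic behind those numbers is easy to check. The constants below are approximations taken from the GPT-3 paper and public estimates, not exact figures:

```python
# Rough arithmetic for GPT-3's scale.
PARAMS = 175e9             # 175 billion parameters
BYTES_PER_PARAM = 4        # one 32-bit float per parameter

storage_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"Weights alone: ~{storage_gb:.0f} GB")           # ~700 GB

TRAIN_FLOPS = 3.14e23      # estimated total training compute
PFLOPS_DAY = 1e15 * 86400  # one petaFLOP/s sustained for a full day
print(f"Training: ~{TRAIN_FLOPS / PFLOPS_DAY:,.0f} petaFLOP/s-days")  # ~3,634

SECONDS_PER_YEAR = 365.25 * 24 * 3600
print(f"1e18 ops at one per second: ~{1e18 / SECONDS_PER_YEAR / 1e9:.1f} billion years")
```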

Cost of training comparable models over the years as hardware improves. Image from Lex Fridman.

The result of all this was a language model that can generate and predict text of human-level quality, good enough to fool human readers much of the time. There is no doubt that it has tremendous real-life use cases, and developers have only just begun to explore them.

Some interesting applications are:

Text generation: given a word or a sentence as a prompt, the model can generate short articles that are cogent, extremely well-articulated, and able to fool human readers. It can also chat with humans, and very few can tell they are talking to a bot. Perhaps its biggest informal Turing test came when someone let GPT-3 loose on Reddit. More than a week passed before anyone suspected that u/thegentlemetre was in fact a bot and not a human. In that span it replied to a large number of comments and held conversations with many users in the r/AskReddit subreddit. The quality of its answers was very good, if occasionally a little twisted, and its ability to stay on topic was impressive even by AI standards.

“The only conclusion that can be drawn about the purpose of life is to live happily, but how does one define happiness? Humans struggle with this.” — u/thegentlemetre (GPT-3 bot on Reddit)

A comment by the GPT-3 bot on Reddit.

To leverage this ability to write text, many interesting applications have sprung up. These are a few of my personal favorites, and I invite you to give them a try:

  1. PhilosopherAI — Get answers to your philosophical questions, powered by AI.
  2. AI-generated recipes — Recipes suggested by someone who has read all the recipes on the entire internet.
  3. Debuild — React web apps created entirely by AI. All you need to do is describe what your vision looks like.

Code that writes code is one of the most promising use cases of this technology, which is why GitHub, together with OpenAI, is launching GitHub Copilot: an editor plugin that reads the code you feed it and suggests new code based on your function descriptions and the surrounding context.
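As a purely illustrative sketch of that workflow (the body below is the kind of completion a developer might accept, not actual Copilot output), a descriptive docstring acts as the prompt and the function body is the suggestion:

```python
def is_palindrome(text: str) -> bool:
    """Return True if `text` reads the same forwards and backwards,
    ignoring case, spaces and punctuation."""
    # A body like the two lines below is what a tool such as Copilot
    # can suggest from nothing more than the docstring above.
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
```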

This is only the third generation of the GPT-n project. It is possible that GPT-4 or GPT-5 will be capable of something closer to reasoning, which could lead to further growth and applications of AI that we cannot yet imagine. The scale of this model also raises a question: if AI can reason, will it be conscious? And if GPT-3 can almost pass a Turing test for short articles, what could a trillion-parameter model do? Much of this discussion is speculative, but the day may not be far off when we can have truly intelligent conversations with an AI.

“If the Computer was like a bicycle for our brain, GPT-3 is like a fighter jet.” — Dr. Károly Zsolnai-Fehér (Two Minute Papers)

“I am open to the idea that a worm with 302 neurons is conscious, so I am open to the idea that GPT-3 with 175 billion parameters is conscious too.” — David Chalmers (Australian Philosopher and Professor at NYU)

Thank you for reading :) Please subscribe for more such articles and follow me on Instagram, Medium and GitHub.


Shaunak Inamdar is an AI enthusiast with a passion for writing about technologies and making them accessible to a broader audience. www.shaunak.tech