No, GPT-3 Is Not Superintelligent. It's Not Tricking Humans, and It's Not Pretending to Be Stupid.

There is an unhealthy level of GPT-3-powered snake oil being sold to non-technical readers.

Jacob Bergdahl · Published in The Startup · Dec 13, 2020

There is a lot of hype surrounding OpenAI’s incredibly powerful machine learning algorithm GPT-3. The algorithm can generate convincing pieces of text from very little input. Give it a title such as “Feeling unproductive? Maybe you should stop overthinking,” and it can generate a full-length, cohesive article that could go viral (and it did). It can be used to perform a wide range of tasks: for instance, GPT-3 can summarize articles, generate fan fiction, and even write programming code.
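For those curious what “very little input” means in practice, here is a minimal sketch of how one might send a prompt to GPT-3 through OpenAI’s Python library, as it worked at the time. The API key, engine choice, and sampling parameters below are illustrative assumptions, not a recipe from OpenAI:

import openai

openai.api_key = "YOUR_API_KEY"  # assumes access to OpenAI's beta API

# The prompt is simply an article title; GPT-3 continues from it.
response = openai.Completion.create(
    engine="davinci",      # the largest GPT-3 engine
    prompt="Feeling unproductive? Maybe you should stop overthinking.\n\n",
    max_tokens=500,        # roughly the length of a short article
    temperature=0.7,       # higher values make the output more varied
)

print(response.choices[0].text)

That is the entire interface: a title goes in, paragraphs come out.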

There is no doubt that this is an outstanding machine learning algorithm. Some, myself included, might rightfully call it a game-changer. However, as is often the case with riveting technologies, some people jump to the most absurd conclusions.

Let me give you some examples.

One author, Bernard Mueller, wrote a popular article titled “I asked GPT-3 for the question to ‘42’. I didn’t like its answer and neither will you.” In the article’s conclusion, the author hypothesized: “Perhaps it [GPT-3] knows the question perfectly well, but considers humans as too immature
