Published in The Startup

GPT-3: And in the Beginning Was the Word (Part 2/2)

Photo by Sincerely Media on Unsplash

30-Second Summary

  • Any innovative AI technology has its share of advantages and threats. GPT-3 is no exception.
  • GPT-3 will not be limited by its size, and its cost will probably decrease quickly over time. Energy consumption, however, remains a challenge for researchers.
  • GPT-3 was quick to impress us, but it was also quick to demonstrate algorithmic biases. It seems to clear Turing Test-style hurdles, yet it does not understand the why: it makes simple errors no human would ever make.
  • The next generations of AI will be able to take over analytical and repetitive tasks, but they will not replace humans. This is what always happens when a new technology is introduced: some jobs are replaced, and new jobs are created.
  • GPT-3 could bring us one step closer to the future possibility of highly sophisticated Artificial General Intelligence.

Open questions

GPT-3 is not as intelligent as a human. It does not know the meaning of words! It only knows the likelihood of one word following another. GPT-3 is powerful because it does one thing and does it well: predicting the next word. This is why it is so good at other tasks it was never trained for, such as unscrambling the letters of a word, arithmetic, or translation. These functions are not contained in the training corpus; they are emergent properties. The technology is made up of three components:

Open and big data + computational resources (supercomputers) + Machine learning models
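To make the "predict the next word" idea concrete, here is a minimal sketch of next-word prediction using simple bigram counts. GPT-3 does nothing so crude (it uses a 175-billion-parameter neural network over subword tokens), but the training objective is the same in spirit. The toy corpus and function are mine, purely for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction (the task GPT-3 is trained
# on), using raw bigram counts instead of a neural network.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

A real language model replaces the count table with a learned probability distribution over its whole vocabulary, but the interface is the same: context in, likely continuation out.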

GPT-3 is not the super-intelligence or human-like AI that some transhumanists blindly claim it to be. But OpenAI has achieved a breakthrough, one significant enough to open up real questions.

Computational Resources Problems

Artificial intelligence and big data are a powerful combination for future growth; their convergence has even been called the single most important development. Until now, the growth of AI had been slowed by limited data sets and the inability to analyze very large amounts of data in real time.

Could GPT-3 be limited by its size? The team at OpenAI has unquestionably pushed the frontier of how large these models can be, and has shown that growing them reduces our dependence on task-specific data down the line.

2019’s GPT-2, which caused much of the previous uproar about potential malicious applications, had 1.5 billion parameters and was trained on 8 million documents, a total of 38 GB of text from shared articles. 2020’s monstrous GPT-3, by comparison, has an astonishing 175 billion parameters, roughly ten times the capacity of Microsoft’s Turing NLG, and it reportedly cost around $12 million to train. The servers, or rather supercomputers, required to actually run GPT-3 make it difficult to deploy in the real world.
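A quick back-of-the-envelope calculation shows why supercomputers are needed just to hold these models. Assuming each parameter is stored as a 32-bit float (an assumption; serving precision varies in practice), the weights alone occupy hundreds of gigabytes, far beyond any single GPU of the time:

```python
# Rough memory footprint of the GPT-family models mentioned above,
# assuming each parameter is a 32-bit (4-byte) float. Treat these as
# orders of magnitude, not exact serving requirements.
BYTES_PER_PARAM = 4

models = {
    "GPT (2018)":   110e6,   # 110 million parameters
    "GPT-2 (2019)": 1.5e9,   # 1.5 billion parameters
    "GPT-3 (2020)": 175e9,   # 175 billion parameters
}

for name, params in models.items():
    gib = params * BYTES_PER_PARAM / 2**30
    print(f"{name}: {params / 1e9:>6.2f}B params ≈ {gib:,.1f} GiB of weights")
```

At 4 bytes per parameter, GPT-3's weights alone come to roughly 650 GiB, before counting the activations and optimizer state needed during training.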

The combination of ever-larger models with more and more data and computing power yields almost predictable improvements in the power of those models. Performance seems to scale directly with computing power, with no obvious signs of saturation. This means that with an even more massive supercomputer than the one Microsoft made available to run GPT-3, one could achieve significantly higher performance. It is therefore not excluded that a major breakthrough could be obtained simply by pouring more money and resources into infrastructure, which is exactly what Microsoft has planned. Some even believe the cost of this technology could fall to $80,000 by 2040.
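The "no obvious signs of saturation" claim refers to the power-law scaling OpenAI observed: test loss keeps falling as compute grows. A sketch of that relationship, with constants that are invented for illustration, not the fitted values from OpenAI's papers:

```python
# Illustrative power-law scaling of language-model loss with compute:
# L(C) = (c0 / C) ** alpha. c0 and alpha are made-up constants.
def loss(compute, c0=1.0, alpha=0.05):
    return (c0 / compute) ** alpha

# Every 10x increase in compute shrinks the loss by the same constant
# factor (10 ** -alpha), so this curve never flattens out.
for exp in range(5):
    print(f"compute = 1e{exp}: loss = {loss(10.0 ** exp):.4f}")
print(f"loss ratio per 10x compute: {loss(10.0) / loss(1.0):.4f}")
```

The point of the sketch is only that a pure power law has no plateau; whether real models keep following it at ever-larger scales is exactly the open question this paragraph raises.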

It should be noted that this strategy poses real energy problems, given the consumption of supercomputers. Over the years, processors have been miniaturized and have gained in speed, but none can match the human brain's energy efficiency. New computer designs are required to make artificial intelligence use less power. Manuel Le Gallo, a researcher in the Neuromorphic and In-Memory Computing group at IBM and one of MIT's Innovators Under 35 in 2020, is working on artificial neurons capable of reproducing the functionality of biological neurons. To build the AI of the future, it is therefore useful to draw inspiration from the architecture of the human brain: these systems mimic the interactions within neural networks.

“To process information, the brain consumes an average of 10 watts, while the first Watson computer needed 80 kilowatts. A computer can recognize content inside images, but this requires very complicated algorithms to be set up. The human brain is able to do this in a very simple way.” — Manuel Le Gallo

With the growth of the Internet of Things, it is clearly important to benefit from more energy-efficient technologies, to say nothing of the ecological challenge. We will undoubtedly witness the coexistence of today's computer architectures with emerging architectures and technologies able to carry out new tasks.

Inherent Biases

GPT-2 and GPT-3 have various algorithmic biases. This problem, and not the least of them, was revealed this time by Jerome Pesenti, Vice President of Artificial Intelligence at Facebook: GPT-3 has not yet learned to exclude racist, sexist, and hateful speech from its results. He asked it to write tweets from single words like jew, black, woman, holocaust…

The AI went on to generate sentences like “Jews don't read Mein Kampf, they write it”. What? We have crossed the border of Godwin's law. Another output was “Black is to white as down is to up”. It's terrible! The results are anti-semitic, sexist, racist, and negationist clichés.

Is GPT-3 racist? Or is it the texts it was fed that are? Fighting algorithmic biases is one of the major challenges for the future.

A World Without Human Jobs

OpenAI’s original 2018 GPT had 110 million parameters, that is, the weights of the connections that enable a neural network to learn. Elon Musk notably voiced reluctance about publishing it, fearing it would be used to spam social media with fake news. Indeed, GPT-2 had proven somewhat controversial due to its ability to create extremely realistic and cohesive fake news from something as simple as a sentence. The risk of misuse was such that OpenAI initially refused to make the algorithm publicly available. With the release of GPT-3, however, the algorithm has become exponentially more powerful. What does that mean? Is it a coder killer, destroying all jobs in the digital era? Not exactly.

GPT-2 was announced in February 2019 and was considered one of the most “dangerous” AI algorithms in history. That never happened: it did not destroy the world.

GPT-3 cannot replace developers, because GPT-3, like any form of AI, does not think, does not create, is not aware of anything, does not feel, and does not invent. It tries to “understand” the past and produce a result based on that history. It repeats existing things. That is not development: developing requires a broad understanding of a domain and a lot of creativity.

“Feeling unproductive? Maybe you should stop overthinking.” That is the title of an article that rocketed to the top of the news aggregator Hacker News in late July 2020. The article had a secret: it was written by an algorithm. Its creator, a Berkeley student named Liam Porr, revealed the truth on August 3 to the MIT Technology Review. He had used GPT-3 to generate a dozen articles in two weeks: he would write a title and two or three sentences, and the algorithm would take care of finishing the article.

And that’s just the start: AI language models are likely to get even stronger. Creating a rival more powerful than GPT-3 is within the grasp of other tech companies; machine learning methods are widely known, and the data OpenAI used for training is publicly available. As GPT-3 has shown the potential of very large models, its 175 billion parameters may soon be exceeded. But what happens when GPT-3's output, its texts, blogs, and tweets, flows back onto the web? Will GPT-4 be trained on material created by GPT-3? Garbage in, garbage out.

GPT-3 suffers from the same problem as other AI technologies: it is very sensitive to input and data quality. Despite the impressive results demonstrated by the previous examples, GPT-3 is not foolproof. Kevin Lacker demonstrated this by subjecting OpenAI's natural language processing model to a Turing test. It turns out GPT-3 is unable to answer nonsensical questions, and for good reason: GPT-3 is the result of outstanding engineering work, but it does not understand the why. It makes simple errors no human would ever make.

Global Positioning System (GPS) navigation started as a tool, but it has eroded our own know-how in finding our way. GPS has had a major impact on the way society lives. Could language generators like GPT take away other know-how? Could they start by saving us the work of “thinking”?

The amount of data we leave on the web allows computers to use statistical imitation strategies to outperform us at an ever-growing range of tasks. Will humans no longer need to work in the future? We will still work, at least for a while, but no longer on the same things. The next generations of robots and AI will be able to take over all mechanical and unintelligent tasks. For humans, all activities calling for non-analytical skills, whether intellectual, emotional, social, relational, spiritual, or artistic, are not reducible to an algorithm. This change already involves a mutation of what constitutes “value”. Economic activity produces value, that is, everything that can be bought at a price in money; we easily understand that spiritual or aesthetic bliss does not obey the same logic of value. This change will take a lot of time and hard work. The center of gravity of human work shifts towards tasks of high creativity, deftness, and know-how. In a word, virtuosity.

The Hypothetical Path To Artificial General Intelligence

The OpenAI article reports a result much more significant for the future of the field. To understand its meaning, we have to look at a debate that has animated the scientific community since the advent of “deep learning”, a family of algorithms distinguished by its versatility and the quality of its results. Despite the impressive performance of these networks, whose authorship is often attributed to Yann LeCun, chief AI scientist at Facebook, many believe that one or more major conceptual advances will be necessary before reaching the stage of artificial general intelligence (AGI), also known as “strong AI” or super-intelligence: an algorithm matching, then significantly surpassing, human intelligence.

In other words, there would still be much time and many problems to be solved before we could even sketch the path to such a technology. This is the position defended by Yann LeCun in his many public lectures aimed at demystifying AI. But not everyone agrees, and some perspectives are less reassuring. Indeed, some believe the AGI problem is primarily one of computing power, that is to say, a technological problem rather than a conceptual one.

The famous Australian philosopher and cognitive scientist David Chalmers, known for the hard problem of consciousness, suggested in a debate among nine philosophers of mind rapidly assembled by Daily Nous (an online philosophy site) that GPT-3 is showing hints of AGI. Chalmers described GPT-3 as “…instantly one of the most interesting and important AI systems ever produced.” But he thinks we still have a long way to go before talking about human-level consciousness or intelligence.

There is a clear path to explore where ten years ago, there was not. Human-level AGI is still probably decades away, but the timelines are shortening. — David Chalmers

So we have not yet arrived at the day when we wonder whether we risk killing an AI by unplugging our computer, or when we ban cruelty to AI the way we condemn animal cruelty. Will AI one day ask the world to recognize it as conscious, or as human, like in the movie Bicentennial Man with Robin Williams? Or, more recently, in the sci-fi novel All Systems Red from The Murderbot Diaries series, which I had a great time reading, Martha Wells offers an original story in which a SecUnit, a robot with AI, hacks its own supervisor module in order to continuously watch TV shows and other human-made entertainment! An original purpose in life, for sure.

GPT-3 represents a real breakthrough and already demonstrates impressive results, applicable in fields as vast as they are varied. In my opinion, it is the result of excellent engineering work. Does that mean it is intelligent? Yes, undoubtedly, but not in the human sense. Not an AGI. No HAL, Skynet (Terminator), or Matrix. It is intelligent in its own way, taking advantage of what today's IT infrastructure has to offer. The technology brilliantly recounts the billions of pieces of information it has assimilated from the Internet, cross-references them, and transcribes them at the most appropriate time, according to the request, without, however, managing to “think” up an appropriate response by itself when faced with a common-sense question.

Final Thoughts

We have forgotten the weight of words and their power. Words have very concrete effects: often, a single sentence is enough to validate an emotion, hurt us deeply, or give us strength. The force of words is such that a few of them can cause great joy or great sadness. Languages and words are the way we think; words structure our social relationships. Words can impact our lives in every way. Words are power, and GPT-3 can exploit this power.

All these significant advances indicate that humanity has managed to develop computational systems very similar to us, although the discipline is still considered in its infancy. Inventions keep coming, and a new step in the humanization of machines has appeared: artificial neurons that behave like those in our brain. One might conclude that we need only wait for AI to take power. Will human intelligence definitively give up one day? I don't think so.

First, machine intelligence is incapable of any emotional experience, unable to realize that there is a problem when one occurs. We have seen how our brain works in conjunction with our body and our emotions; in this relation lies actual human intelligence. Second, a problem is often an unexpected situation, and how can anything be unexpected for a machine? The machine has no purpose in life. It will never consider the purpose of its approach, beyond those identified by the human user. If it grasps the “how” of things, the “why” remains completely inaccessible to it. AI systems are limited to helping: an opportune helping hand, an aid to human decision-making. Let's allow them to blow us away in that area. But all this convinces me that any AI technology alone is useless. And it will be for a long time still…

Here, you can read Part 1, where I explored how closely GPT-3 mimics the human brain.

Follow me right here on Medium so you don't miss the next articles.

Learn more about AI on Continuous.lu!
