Brainless brute force — overhyping GPT-3 in software development

Joao Simões de Abreu
Published in Quidgest’s blog
Sep 27, 2021

The Generative Pre-Trained Transformer 3, also known as GPT-3, is the recent state-of-the-art Natural Language Processing technology developed by OpenAI. It made headlines across the globe after “writing” an opinion piece in The Guardian — a piece that was, in fact, the result of a short but detailed briefing and the compilation of eight different outputs (essays). At its core, GPT-3 takes a cluster of words or a structured sentence as input and generates text consistent with it, a capability enabled by its 175 billion parameters.
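As a rough illustration of that interaction, here is a minimal sketch using OpenAI’s Python library as it existed at the time of writing; the prompt is arbitrary and the API key is a placeholder:

```python
import openai  # pip install openai (the pre-v1 client available in 2021)

openai.api_key = "YOUR_API_KEY"  # placeholder

# Give GPT-3 a short input; it predicts the text most likely to follow.
response = openai.Completion.create(
    engine="davinci",  # the base GPT-3 model
    prompt="Once upon a time, in a data center far away,",
    max_tokens=40,
)
print(response.choices[0].text)
```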

Such an advanced capability warned us, once again, that artificial intelligence might be coming for jobs: in this case, those of white-collar workers such as journalists, lawyers, accountants, and even software developers.

Overhyping Artificial Intelligence — what could go wrong?

Before going any further, it is essential to note that the craze around GPT-3 is another case of artificial intelligence overhype. It makes great media headlines, fuels contemporary Luddites’ rhetoric, and skyrockets everyone’s expectations. High hopes (often driven by companies developing solutions in the area) followed by low results were the main triggers of the past so-called Artificial Intelligence Winters — periods of reduced funding and interest in artificial intelligence research.

Since the beginning of the last decade, we have been living in Spring. Global corporate investment in artificial intelligence reached 67.85 billion USD in 2020 (nearly 5.5 times more than in 2015), research and development is stronger than ever, and venture capitalists are looking for disruptive startups focused on artificial intelligence.

Here is where we — companies developing and/or researching artificial intelligence — must be careful: overhyping what artificial intelligence can do will eventually lead to another Winter, which only benefits the Luddites who want to discredit the technology.

Be honest about what the technology can do, especially in marketing campaigns. Knowledgeable people will eventually find out on their own when they “look under the hood”. If there is no match between what you are promoting and what your technology can do, your marketing campaigns will fuel a generalized disbelief in artificial intelligence.

GPT-3 in software development

Since its launch, GPT-3’s use cases have gone beyond article writing. A Decentralized Creator article enumerates 25 tools created using OpenAI’s software, ranging from product descriptions and sales emails to A/B testing and document extraction.

Among the paraphernalia, there is one skill GPT-3 seems to have acquired during training: software development. One can simply write a brief describing an application, and GPT-3 generates the source code.

This is the premise behind products such as GitHub Copilot. And it fits the narrative of democratizing software development: people outside IT can finally develop their own applications without hiring an expensive professional (type what you want and — like magic — the solution will be presented to you in the form of code).
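To make the promise concrete, the interaction looks roughly like this: the briefing is a plain-language comment, and the completion below it is hypothetical, written here purely for illustration.

```python
# Briefing typed by the user:
# return the n cheapest products from a list of (name, price) tuples

# A plausible machine-generated completion (hypothetical example):
def cheapest_products(products, n):
    # Sort by price, the second element of each tuple, and keep n items.
    return sorted(products, key=lambda item: item[1])[:n]

print(cheapest_products([("pen", 2.5), ("mug", 7.0), ("cap", 4.0)], 2))
# [('pen', 2.5), ('cap', 4.0)]
```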

However, GPT-3 is not the answer to democratizing technology development. On the contrary, it falls far short of expectations and deviates from the path we should be pursuing.

Predicting is not good enough

OpenAI’s technology is good at PREDICTING text. It uses a probabilistic approach to forecast what makes the most sense coming next, given a particular context. The output derives from what is publicly available on the Internet — in GitHub Copilot’s case, whatever code is available on GitHub.
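A toy caricature of that mechanism, with an invented hand-made distribution standing in for the model’s 175 billion parameters:

```python
import random

# Invented probabilities for what might follow "def add(a, b): return a".
# A real model scores tens of thousands of candidate tokens this way.
next_token_probs = {
    " + b": 0.80,  # the most common continuation in the training data
    " - b": 0.12,  # plausible-looking, but wrong for an "add" function
    " * b": 0.08,
}

# Sampling in proportion to probability: usually right, sometimes wrong.
tokens, weights = zip(*next_token_probs.items())
print(random.choices(tokens, weights=weights, k=1)[0])
```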

A probabilistic approach is not the way to move towards the future of software development. Whatever is probabilistic has a chance of failing. Although GPT-3 draws on an enormous base of knowledge, such as GitHub, that does not mean the code is correct: there is no selection process in GPT-3 separating what is right from what is wrong. And if every chunk of source code retrieved from the Internet is prone to mistakes, then the larger and more complex the generated tool, the more likely it is to fail.
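A quick, purely illustrative calculation shows how such errors compound. If each generated chunk were independently correct with probability p, a program assembled from n chunks would be entirely correct with probability p to the power n:

```python
# Assumed per-chunk correctness of 99% -- an illustrative figure only.
p = 0.99
for n in (10, 100, 1000):
    print(f"{n} chunks: {p ** n:.3f} chance of a fully correct program")
# 10 chunks: 0.904, 100 chunks: 0.366, 1000 chunks: 0.000
```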

In software, what is not 100% correct is 100% wrong.

Cyberinsecurity

There are also cybersecurity concerns. Academics at New York University’s Tandon School of Engineering have put GitHub’s Copilot to the test on the cybersecurity front. In their paper released in August, they found that roughly 40% of the time, “code generated by the programming assistant is, at best, buggy, and at worst, potentially vulnerable to attack”. In addition, the would-be coding assistant tends to generate incorrect code, shows an inclination for exposing secrets, and has problems judging software licenses.
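The study’s scenarios were built around known software weaknesses. As a generic illustration of the kind of flaw at stake (not an actual Copilot output), compare a completion that formats user input straight into a query with the parameterized form a human reviewer has to insist on:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Attacker-controlled input is formatted into the query string,
    # the classic SQL-injection weakness (CWE-89).
    query = "SELECT * FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The parameterized query the database driver escapes for us.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(find_user_safe(conn, "alice"))             # [('alice',)]
print(find_user_unsafe(conn, "x' OR '1'='1"))    # every row leaks out
```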

Lookup-table disguised as Artificial Intelligence

Moreover, as Francesco Gadaleta states in his article, GPT-3 is simply a massive lookup-table that does not even “perform backpropagation due to the massive amount of parameters it is equipped with” — backpropagation being the essential machine learning mechanism through which a model learns from positive or negative feedback and corrects itself. “GPT-3 is similar to the developer who has some familiarity with the syntax of a programming language, without knowing any type of abstraction behind it, and who is constantly referring to a massive dictionary of coding snippets,” he adds.
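A deliberately simple sketch of that analogy: a snippet dictionary answers anything it has memorized and nothing it has not, because there is no model of why the snippets work.

```python
# A toy "massive dictionary of coding snippets" (two entries instead of
# terabytes of scraped code, but the principle is the same).
snippets = {
    "sort a list": "sorted(items)",
    "reverse a string": "text[::-1]",
}

def lookup(request):
    # No abstraction behind the answers: unseen requests yield nothing.
    return snippets.get(request, "# no idea")

print(lookup("sort a list"))                    # memorized -> an answer
print(lookup("sort a list without built-ins"))  # novel -> "# no idea"
```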

Centralized brute force

GPT-3 is centralized brute force instead of a decentralized smart unit. In fact, according to MIT Technology Review, “its enormous power consumption is bad news for the climate: researchers at the University of Copenhagen in Denmark estimate that training GPT-3 would have had roughly the same carbon footprint as driving a car the distance to the moon and back, if it had been trained in a data center fully powered by fossil fuels”.
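A back-of-envelope check of that comparison, with every figure assumed for illustration (an average Earth–Moon distance and a typical petrol car’s emissions):

```python
moon_round_trip_km = 2 * 384_400   # assumed average Earth-Moon distance
car_kg_co2_per_km = 0.120          # assumed average petrol car

total_tonnes = moon_round_trip_km * car_kg_co2_per_km / 1000
print(f"roughly {total_tonnes:.0f} tonnes of CO2e")  # ~92 tonnes
```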

In software development, we must be able to resolve vagueness and contradictions. If an app-development machine were to do so, it would need an extensive, unbiased data set, which the Internet is not.

Programming at a Human speed

Copilot’s methodology does not pursue any new way of working. Artificial intelligence has tremendous potential to enhance software development productivity, but this novel product works the other way around: it puts artificial intelligence to work at the speed of a programmer, instead of the programmer working at the speed of artificial intelligence.

We must rethink the process of developing technology through artificial intelligence. Using it to code like a regular programmer is not enough; the process itself must change if we are to reach new heights.

Building a Castle (of sand)

Creating a solution through GPT-3, especially for those who have never developed software before, might sound like building a castle — unfortunately, in this case, it is a sandcastle. Even the next generation of the Generative Pre-Trained Transformer, with 500 times more parameters than GPT-3, will be prone to the same mistakes. The problem is not strength; it is a lack of brains. We need more modelling and less experimentation with probabilities, more rules instead of randomness.

With everything pointed out above, you may still ask: “what if we build an application with GPT-3 and get a quality assurance squad to work on the code?”

That is still not the way forward. Developers and IT professionals must not waste their time skimming through machine-generated code for bugs and vulnerabilities. If we want a world where artificial intelligence helps us build applications, the “machine” should produce error-free software in the first place. The future of software development is a low-code to no-code approach that is highly unlikely to produce errors and is easy to change.

This future is based on seven ingenious software development trends.
