The AI Boom: Is Superior Intelligence Here?

For the past few months, we are witnessing some of our sci-fi fantasies coming to life. Whether it’s the AI shown in the movies like ‘HER’, or JARVIS AI from Ironman movies, dark movies like Terminator, don’t seem made up anymore, does it? For some, it’s truly concerning, and for others, it’s the best time to be alive! There is so much happening in such a short time that we didn’t even get a chance to gasp air and think about it. In this article, I’ll try to find out how much closer we are to the era of AGI (Artificial General Intelligence), an AI which can be superior to the human intelligence.

The LLMs (Large Language Model) AI Revolution

ChatGPT-3, GPT-3.5, and then, bang! GPT-4 within a year! And now, we are noticing other similar AI models popping out everywhere on the internet. On top of that, image generation models like DALL.E and Midjourney have spared wide chaos among artists.

But why did we see such sudden growth of LLM among all other AI models?

Well in simple words, it’s been given a good habitat. For example, we presently have the greatest computational reach ever due to large advances in GPU and TPU technology. Huge amount of data is also available on the internet due to the rise in network infrastructure all over the world. And great research breakthroughs have occurred in machine learning and artificial intelligence. One instance of such a breakthrough is the Transformers model, which became popularly known as the LLM and was developed by researchers at the University of Toronto and Google. At last, the cherry on the cake is that it continues to receive a huge amount of funding from tech giants.

Problems with LLM AI

I don’t want to go down a rabbit hole of explaining the whole working of LLMs, “Positional Encoding”, “Attention”, “Self-attention”, etc… etc… There are tons of articles and videos explaining all. I’m here to discuss the problems with this language model, and whether can it achieve “true intelligence”. To justify my points, I’ll try to compare them with human intelligence.

Bias

One striking issue that arises with this model is bias. This AI monster feeds on 100s of terabytes of data that is collected by scraping the internet. There is an old saying in India that goes, “You burp what you eat.” Likewise, LLM AI could be biased or incorrect on certain topics, as the data from which it’s fetching its answers, could have been deliberately been tweaked for personal interest. A major source of its data, for instance, could be Wikipedia, which does not proof check data inputted by its users. Using this loophole as a weapon for personal or even political interest could become very problematic in a demographic nation.

Hallucination

A second drawback to the LLM model is what experts call ‘hallucination’. During my research, I found a great explanation of this phenomenon given by Phaedra Boinodiris, who is IBM Consulting’s global leader for Trustworthy AI. She states LLMs work by predicting the best probable next word doing so, however, it’s not understanding the context of the user’s question. Therefore, in some cases, it can make classic statistical errors! This can lead to it giving false information with confidence and no proof! Of course, this defect can be solved by the model providing a simple explanation of its answer. But yet by this, it doesn’t acquire a sense of knowledge.

Reasoning

Yet another defect in this model is that it does not have its own decision-making power. Indeed, when we ask this model to justify a point or pick a side, it’s unlikely that it will choose one, neither will it provide a proper, valid reason for its choice. It does not have the ability to understand multiple perspectives. For example: If it were presented the option of whether to save the life of a human or an animal, it won’t be able to make a clear decision.

Soul

One major issue is that LLM lacks soul. Let me explain. According to Rajiv Malhotra, a profound writer who has built a career in the world of artificial intelligence, there is a great difference between intelligence and consciousness. I refer to consciousness as ‘soul’. When we put our efforts into any activity of interest, we sometimes say, “I put my soul into it.” But what does this expression really mean? Certainly, it’s our way of saying that we’ve invested our emotions, efforts, and experiences into the activity, and it means we’ve given it a unique personal touch. AI cannot achieve this. Yes, it could mimic individuality, but its efforts would not come with genuine emotions or feelings. This is because humans have this organic, original, and spontaneous way of thinking and processing. AI, on the other hand, is more probabilistic and mechanical.

Other

Artificial intelligence also comes with other functional problems like privacy and security risks. By privacy and security, I mean AI can inadvertently leak sensitive information from its training data or be manipulated to generate hidden information. Recently, I came across a YouTube video made by a hacker called Fabian Faessler on his channel, LiveOverflow. He shows how an LLM AI can be tricked with simple prompt manipulation to reveal a secret key given by a user. Surprisingly, the model can be fooled to make such a revelation even after being given clear instructions to safeguard the key.

Additionally, AI is also known to have a negative impact on the environment. For instance, we are noticing reports coming out about how ChatGPT consumes gallons of water to answer simple questions.

The Road Ahead

Certainly, we can see that with all these drawbacks, current AI models are nowhere near equivalent to human intelligence. Yet, no matter what we have to say, AI has already shown us its impact on our lives. Despite its problems, it’s mimicking us well and so well, in fact, that it’s going to replace a large amount of our working population in the near future. On top of that, people are finding new techniques to improve these models through projects like AutoGPT and Baby AGI.

In conclusion, Large Language Models have brought us to the brink of a new era, the AGI epoch. While the exponential growth in computational power, vast amounts of data, and breakthroughs in machine learning algorithms have fueled this boom, issues of bias, hallucination, and reasoning plague current models, and ethical and environmental concerns must be addressed to ensure a responsible and sustainable AI ecosystem. The ultimate question of whether AI can develop true consciousness remains uncertain, but ongoing dialogue is necessary to navigate the complexities of this technological revolution and ensure its benefits are shared equitably.

*Disclaimer: I’m not an AI expert. My background is in computer applications and visual effects. What is more, I’m a tech enthusiast. I find the field interesting, and I keep myself updated through such research. On some of the points mentioned in this article, I could be wrong. If so, please correct me in the comments section.*

--

--

Harikrishna Chauhan
𝐀𝐈 𝐦𝐨𝐧𝐤𝐬.𝐢𝐨

A technology enthusiast. Very passionate about creating next-generation creative content by converging Art and Technology together.