Is AI Something to Panic About?
--
…no but your personal lack of understanding of the technology just might be.
Last week I was watching the breakfast news broadcast on ABC. This week they are running daily segments informing people about scams, how they work, and what to look out for. One of the presenters asked what was making scams more prolific, suggesting that it could be AI.
[Cue the Judgement Day footage from the Terminator Movie]
To my minor amazement the interviewee avoided being drawn into a conversation focused on AI. But is AI making scamming easier? Well, of course it is, but this is akin to saying computers make many things easier.
Hell, for hundreds of years technology has been making things easier. Take the “wheel” for example. How many peasants do you think were concerned about losing their jobs to the horse and cart?
Then there is the industrial revolution, the technological revolution, and now the AI revolution.
If you think we are just now undergoing a new phase in….well, phasing out humans, you are truly misinformed. It’s been happening since the year dot and if you think AI is just now impacting our lives, think again.
During the 17th century, scholars like Leibniz, Thomas Hobbes, and René Descartes delved into the idea that rational thinking could potentially be structured in a manner akin to algebra or geometry. Hobbes, renowned for his work Leviathan, famously asserted that reason “is nothing but reckoning”.
It is desirable to guard against the possibility of exaggerated ideas that arise as to the powers of the machine. — Ada Lovelace
Replacing human thought with the processes of a machine dates back to the first calculating machines of the early 1800s. Imagine what people at the time thought of them: they were afraid that something not human was capable of replicating facets of human thought.
At the time, Ada Lovelace, who published the first computer program, stated, “It is desirable to guard against the possibility of exaggerated ideas that arise as to the powers of the machine”, and nowhere is this statement more relevant than today, amid what I can only describe as the hysteria surrounding AI.
You can’t turn on the television, log into YouTube, or check Facebook without being accosted by a bunch of scaremongering articles and titles focussing on AI. Just today on YouTube I saw the banner for a clip featuring the ex-CEO of Google saying “AI is reaching the point of no return”.
[Cue the Judgement Day footage from the Terminator Movie]
A plethora of articles and clips of AI developers shrugging and saying that they don’t know how their AI works, and that they are surprised by its outcomes, adds yet more fuel to the fire of misunderstanding and existential panic that seems to be getting out of control.
So, in order to douse the flames of this so-called “end of days” computing revolution, I’d like to inject some facts into the conversations and dispel some common and evolving myths.
Firstly, ChatGPT is not the embodiment of Artificial Intelligence; Mr. Data from Star Trek is. How can I say this? Well, first: ChatGPT is a Large Language Model (LLM) built atop a neural network.
What does this mean? In short, ChatGPT has read almost everything that has been written, has statistically analyzed which words are most likely to follow other words, and is therefore able to predict what the next best word should be (given human input). It does all these computations by means of a neural network. A neural network is a form of artificial intelligence. It is not the whole of artificial intelligence.
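The statistical idea above can be sketched in a few lines. To be clear, this is a toy bigram counter, nothing like ChatGPT’s actual architecture, and the tiny corpus here is invented purely for illustration, but it shows the core trick: count which words follow which, then predict the most frequent successor.

```python
from collections import Counter, defaultdict

# A tiny, made-up "corpus" standing in for "almost everything ever written".
corpus = "the cat sat on the mat and the cat slept".split()

# For each word, count every word that immediately follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word seen during "training".
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than "mat" does
```

A real LLM does this with context windows of thousands of words and billions of learned parameters rather than raw counts, but the job, predicting the next best word, is the same.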
If the domain of artificial intelligence were the Earth, then neural networks would be, say, Canada. Neural networks are a subdomain of the AI technique known as Machine Learning, which encompasses other techniques too; continuing the analogy, Machine Learning would be the continents of North and South America.
Now, given all the facets of Artificial Intelligence, Mr. Data is indeed a great role model because he physically embodies pretty much everything a human can do (including, with the addition of the emotion chip introduced in The Next Generation Season 4, Episode 3, emotions). However, the field of AI is nowhere near creating such a complete machine replica of a human, and right now such an “AI being” is a work of pure fiction.
Secondly, as for the interviews and clips of these AI (namely, neural network) researchers throwing their hands in the air, astonished that their AI works and claiming they don’t know how it does: well, I call… bullsh$%t.
The neural network concept is built on our understanding of how signals pass through the human brain. Human brain cells, called neurons, become perceptrons in a neural network. Without going into all the architecture and algorithms involved, an artificial neural network is essentially modelled on a biological one, right down to some of its low-level workings. Each perceptron, like a neuron, is a tiny switch that either allows or denies signals travelling through it.
Now, if you build an artificial simulation of something and it works like the thing you modelled it on, shouldn’t the response be “Hurrah”, not “Huh”?
A neural network, like the one ChatGPT is built on, may have billions of perceptrons. During the training of such a network, each perceptron calculates its own threshold value that determines when it switches on or off, like the actual neurons in our own brains do. So it’s indeed the case that researchers and developers might not know what these threshold values are for each individual perceptron, but that doesn’t make them some mystical, incomprehensible thing. If the developers want to know a perceptron’s threshold, they only have to look in the computer’s memory. Saying they don’t know how it works is just irresponsible.
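To make the point concrete, here is a minimal sketch of a single perceptron. The weights and threshold below are hand-picked, hypothetical values (real networks learn billions of them during training), chosen so the perceptron behaves like a logical AND gate. Note that every one of these values is just an ordinary number you can print and inspect.

```python
def perceptron(inputs, weights, threshold):
    # The perceptron "fires" (returns 1) only when the weighted sum of its
    # inputs reaches the threshold; otherwise it stays off (returns 0).
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hand-picked, illustrative values: with both inputs on, the weighted sum
# is 1.2, crossing the threshold; any single input (0.6) falls short.
weights, threshold = [0.6, 0.6], 1.0

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", perceptron([a, b], weights, threshold))
```

Scale this up to billions of perceptrons with learned rather than hand-picked values and you have the substance of a modern network: vast, but made entirely of inspectable arithmetic.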
[Cue the Judgement Day footage from the Terminator Movie]
Thirdly, while AI does seem to be bombarding us of late at an exponential rate, it’s not all of AI; it’s almost all applications and services built on top of OpenAI’s GPT architecture. Why? Because it has a wide range of applications and can be used in various contexts: customer support, content generation, language translation, brainstorming ideas, and more. Its versatility makes it appealing to businesses and individuals alike. Additionally, OpenAI has made its interface user-friendly, meaning that even individuals without technical expertise can easily interact with the model. This accessibility has contributed to its widespread adoption. So basically, anywhere human language is being used as a service, OpenAI’s tools can be used to build something that almost replaces the humans in those roles.
Last but not least, going back to my first point, we’ve been surrounded by supposed artificial intelligence since the first calculating machines. History records the definition of artificial intelligence changing over time as once-magical machines have become mainstream and developers have reached for the next level of human-like ability from machines.
In 1955, John McCarthy (later a Stanford professor, and the first to coin the term AI) defined it as “the science and engineering of making intelligent machines”. This in itself isn’t that remarkable, and you could even consider it a bit circular. The idea that artificial intelligence is the mechanisation of human thought stems back to the Chinese, Indian, and Greek philosophers of the first millennium BCE, who were concerned with the formalisation of logic and algorithmic thinking, the exact things the very first computers were capable of right out of the box. Do we consider those machines artificially intelligent? Maybe not now, but initially they were quite revered. In 1950, Alan Turing proposed that a machine can be said to have exhibited intelligent behaviour if it can mimic a human’s responses well enough that an average human cannot distinguish it from another human. More recently, Gary Marcus, a contemporary AI researcher, suggested that “intelligence is the capacity to learn and apply knowledge, to reason effectively, and to adapt to new situations.”
Turing devised a famous test based on his definition. While there were many attempts over the years at beating the Turing Test (which challenges a machine, via a chat-like interface, to fool a human into thinking they are interacting with another human), it wasn’t until recently, with these large language models, that it has arguably been achieved. Amid all the uproar about the success of these models and claims of their intelligence, Marcus has rebuked: “I don’t think it’s an advance toward intelligence. It’s an advance toward fooling people that you have intelligence.”
Personally, I stand with Marcus. The notion that these large language models are actually intelligent, conscious, or even sentient is preposterous. They are fooling us, quite well mind you: they satisfy Turing’s imitation criterion without coming anywhere near Marcus’s definition of intelligence.
This is a rabbit hole I could continue down for pages but won’t right now…and just one more thing: Will AI turn on humans and bring our extinction?
[Cue the Judgement Day footage from the Terminator Movie]
All technology can be used for good and evil. Machines are only capable of what they are programmed to do. It’s not like computers haven’t already made detrimental errors (programmed into them through human error) that have affected human lives.
One tragic example of a computer error costing lives occurred during the Gulf War. The Patriot missile system, deployed by the United States to intercept incoming Scud missiles, had a critical software error: a rounding error that accumulated in its internal clock caused it to miscalculate a Scud missile’s position. On February 25, 1991, a Scud struck a U.S. Army barracks in Dhahran, Saudi Arabia, killing 28 American soldiers and injuring around 100 others, because the Patriot system failed to intercept it due to the software error. This incident underscores the importance of rigorous testing and verification in critical computer systems, particularly those used for military defense, as computer errors in such high-stakes situations can have devastating consequences.
Another example of a computer causing human deaths, this time in a medical situation, is the Therac-25 incidents in the 1980s. Therac-25 was a radiation therapy machine that had a software design flaw, leading to a race condition and resulting in patients receiving severe overdoses of radiation. This flaw caused serious injuries, burns, and deaths.
These incidents highlight the need for robust software design, rigorous testing, and strict safety protocols to prevent such tragedies.
Certainly we don’t need a computer algorithm to destroy mankind when we are quite capable of doing it to ourselves…oops, I digress.
So, to wrap all this up and answer my initial question, “Is AI something to panic about?”: no, but your personal lack of understanding of the technology just might be. Without sufficient knowledge of the power of AI and how it is shaping our future, it is destined to be misunderstood and misused, with little effort applied to ensuring its responsible and beneficial use.
Learn all about the inner workings of AI in my upcoming course. Register your interest here.