Artificial General Intelligence as a Modern Day Spiritual Conjuring

Branko Blagojevic · ml-everything · Oct 10, 2019

Douglas Adams proposed that the answer to the Ultimate Question of Life, the Universe, and Everything is 42. According to Artificial General Intelligence (AGI) alarmists, he's at least on the right track.

Machine learning models consist entirely of a series of numbers, plus model meta-data specifying some structure. The goal of training a model is to arrange those numbers such that the model achieves some objective, usually by driving some final number (a loss or a score) toward a minimum or a maximum.
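To make that concrete, here is a minimal sketch in Python with made-up data and a made-up learning rate: training really is just nudging a pile of numbers until a final number lands where you want it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))           # inputs, as numbers
y = x @ np.array([2.0, -1.0, 0.5])      # targets, also numbers

w = np.zeros(3)                         # the numbers we rearrange
for _ in range(500):
    error = x @ w - y                   # how far the final number is off
    grad = 2 * x.T @ error / len(x)     # which direction to nudge
    w -= 0.1 * grad                     # nudge the numbers

print(w)  # ends up close to [2.0, -1.0, 0.5]
```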

The idea that these numbers can be arranged in such a way that they take on consciousness, purpose and meaning can only be described as a modern-day spiritual conjuring. The only difference between fictional conjurings and their pseudo-scientific AGI equivalent is the use of numbers rather than words and sounds.

What is it with numbers?

The way machine learning works today is: you take something (input), convert it into numbers, add | multiply | apply non-linearity using another group of numbers (layers), and finally get a set of numbers out (output). Then you make some inferences about the output.

input -> [n hidden layers] -> output

The final output can be anything you want it to be. It can be a single number, and you can say that if the final number is close to 1 the input is a hotdog, and if it's close to 0 it's not a hotdog. Or you can have 100 different outputs where each number represents a particular type of hotdog, the largest number is the most likely type, and maybe if they're all close to 0 there is no hotdog.
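Here is a toy version of that pipeline. The weights are random made-up numbers, so the hotdog verdict is meaningless, but the mechanics are exactly this:

```python
import numpy as np

rng = np.random.default_rng(42)

# made-up input: 4 numbers standing in for some real-world thing
x = rng.normal(size=4)

# one hidden layer: add | multiply | apply non-linearity
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
h = np.maximum(0, W1 @ x + b1)            # multiply, add, ReLU non-linearity

# output layer: one number we *declare* to mean "hotdog-ness"
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)
out = 1 / (1 + np.exp(-(W2 @ h + b2)))    # squash to (0, 1)

print("hotdog" if out[0] > 0.5 else "not a hotdog")
```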

All machine learning models work like this.

For textual information, each word, phrase or part of a word is converted into a numerical representation derived from another machine learning model. Then the numbers are processed by adding | multiplying | applying non-linearity through another series of numbers. The final output number can then be inferred as meaning one thing or another, or compared to other word vectors to produce a textual output.
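A sketch of the text case. The embedding table here is just random numbers standing in for vectors you would get from another model (word2vec or similar), and the vocabulary and labels are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up vocabulary with random embeddings standing in for
# vectors learned by another model
vocab = ["not", "a", "hotdog", "sandwich"]
embeddings = {w: rng.normal(size=16) for w in vocab}

sentence = ["not", "a", "hotdog"]
vectors = np.stack([embeddings[w] for w in sentence])   # words -> numbers

pooled = vectors.mean(axis=0)                  # combine the numbers
W = rng.normal(size=(2, 16))
scores = W @ pooled                            # add | multiply
label = ["negative", "positive"][scores.argmax()]  # infer a meaning
print(label)
```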

For images, the image is already represented by numbers (RGB or a similar encoding) and is scaled up or down and conformed to some pre-existing model's input. Then those numbers go through the same add | multiply | apply non-linearity treatment, and the output is another number that can be inferred to mean anything you want.
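The image case in the same spirit, using a fake image of random RGB numbers in place of a real one:

```python
import numpy as np

rng = np.random.default_rng(0)

# an image is already numbers: height x width x RGB channels
image = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float32)

# conform to a pre-existing model's input: scale values, flatten
x = (image / 255.0).reshape(-1)         # 12288 numbers

W = rng.normal(size=(10, x.size)) * 0.01
scores = W @ x                          # add | multiply
print(scores.argmax())                  # infer: "class 7", or whatever we decide it means
```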

Sound information is similarly represented in a numerical form and the same process is applied.

Reinforcement learning (RL) is the same concept with an intermediate step. RL consists of training a system to make decisions in a dynamic environment to maximize some goal (e.g. playing chess or Go, driving a car). The world is represented by (you guessed it) numbers. The RL system has some way to anticipate how the actions it can take will change the current state, and it can evaluate states in terms of its stated goal. So an RL system can have multiple models: one to represent the state, one to determine the outcome of actions, and one to evaluate (score) the state. But the underlying mechanism is the same and involves applying some addition | multiplication | non-linearity to a series of numbers.
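A sketch of that decomposition on a made-up one-dimensional world. The "models" here are trivial hand-written functions (predict_next and score are hypothetical stand-ins for learned ones), but the shape of the loop is the point:

```python
import numpy as np

# the world, represented by (you guessed it) numbers
state = np.array([0.0])
goal = np.array([5.0])

def predict_next(state, action):
    # stand-in model of how an action changes the state
    return state + action

def score(state):
    # stand-in model that evaluates a state against the stated goal
    return -abs(float(state[0] - goal[0]))

actions = [-1.0, 0.0, 1.0]
for _ in range(10):
    # pick the action whose predicted next state scores best
    best = max(actions, key=lambda a: score(predict_next(state, a)))
    state = predict_next(state, best)

print(state)  # ends at the goal: [5.]
```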

Magic Numbers

With a large enough model, there exist some magic numbers that can do it all. It may not even have to be that large of a model.

These numbers can pass the Turing Test. If you convert your words into numbers, you can add | multiply | apply non-linearity to those numbers as prescribed by the model's meta-data and get back some other numbers that can be interpreted as words. And those words would be convincing enough to make you believe you were talking to a person.
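A sketch of that loop. A table of random numbers stands in for the trained model, so the output is gibberish rather than convincing, but the plumbing is identical:

```python
import numpy as np

rng = np.random.default_rng(1)

vocab = ["hello", "I", "am", "definitely", "a", "person", "."]
word_to_id = {w: i for i, w in enumerate(vocab)}

# random numbers standing in for a trained language model
W = rng.normal(size=(len(vocab), len(vocab)))

def next_word(word):
    x = np.zeros(len(vocab))
    x[word_to_id[word]] = 1.0           # word -> numbers
    scores = W @ x                      # add | multiply
    return vocab[int(scores.argmax())]  # numbers -> word

word, reply = "hello", []
for _ in range(6):
    word = next_word(word)
    reply.append(word)
print(" ".join(reply))
```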

These numbers can pass the Coffee Test. You could embed these numbers into a mechanical system with some way to move around and receive numerical optical input. The system could then come into your home and make you coffee.

These numbers can pass pretty much any other artificially constructed proxy for intelligence. The problem with these tests is that they all focus on specific things a person can do. It's like trying to create a dog and using the reconstruction of a dog's bark as the measure of the creation's dogness.

Artificial intelligence is a bad name. It may have helped with funding and with reporters writing on the topic, but it is disconnected from the colloquial use of the term intelligence. Learning is less objectionable, since models can improve without being explicitly programmed how to do so. But even that is a bit of a stretch. Do we say a guitar learns to tune itself as we tighten its tuning pegs?

Magical Words

What if we imbued words with the same pseudo-scientific mysticism? What if we were worried that someone could write some beautiful prose that would take on meaning and overthrow our lives? In the 2002 novel Lullaby by Chuck Palahniuk, the protagonist discovers an African chant that kills anyone who hears it. He uses that power to exact revenge and ultimately falls victim to it.

There are dangerous words and ideas, but as a society we are so unconcerned about words sprouting consciousness and destroying humanity that Western civilization is built on their free expression. If only we could apply the same confidence when dealing with numbers.
