Alien Intelligences in our Midst

Carlos E. Perez
Published in Intuition Machine
Aug 6, 2017
Credit: http://www.ucmp.berkeley.edu/cnidaria/ctenophora.html

There is a mistaken notion that AGI will eventually behave like humans. This could be either a very good thing or a very bad thing. The reality, however, is that AGI will likely behave entirely differently. Deep Learning systems today seem to emulate some human skills well, but they are truly very different in nature. To begin with, the artificial neural network design is closer to a matrix multiplication than it is to a real biological neuron. Yet, despite this difference, these systems are able to perform impressive biological-like cognition such as face identification and locomotion.
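To make that concrete, here is a minimal sketch, in plain Python with illustrative names of my own, of a single dense layer: each "neuron" is just a row of weights multiplied against the input, followed by a simple nonlinearity. This is essentially the whole neuron abstraction in most Deep Learning systems:

```python
# A dense layer computes activation(W @ x + b): each "neuron" is a row
# of weights dotted with the input vector, passed through a nonlinearity.
# This is vastly simpler than a biological neuron's electrochemical dynamics.

def dense_layer(x, weights, biases):
    outputs = []
    for row, b in zip(weights, biases):
        pre_activation = sum(w * xi for w, xi in zip(row, x)) + b
        outputs.append(max(0.0, pre_activation))  # ReLU nonlinearity
    return outputs

# Three "neurons", each looking at the same two-element input.
x = [1.0, -2.0]
weights = [[0.5, 0.1],
           [-0.3, 0.8],
           [1.0, 1.0]]
biases = [0.0, 0.0, 0.5]
print(dense_layer(x, weights, biases))
```

Stacking many such layers, and learning the weights from data, is all that separates this toy from the systems that recognize faces.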

Anil Seth writes about the Octopus:

The octopus is our very own terrestrial alien, with eight prehensile arms lined with suckers; three hearts; an ink-based defense mechanism; highly developed jet propulsion; a body that can change size, shape, texture and color at will; and cognitive abilities to rival many mammals. They can retrieve hidden objects from nested Plexiglass cubes, find their way through complex mazes, utilize natural objects as tools, and even solve problems by watching other octopuses do the same.

The octopus has most of its neurons residing outside of its central brain, unlike humans or other mammals. How it integrates information is likely very different from how humans do. Its consciousness, as evidenced by its ability to watch and learn behavior from other octopuses, may be entirely alien to the kind of intelligence we find in other animal species. (Note: I use the word consciousness here to mean self-awareness.)

Douglas Fox writes about an even more alien species, the Ctenophore:

This type of animal, called a ctenophore (pronounced ‘ten-o-for’ or ‘teen-o-for’), was long considered just another kind of jellyfish. But that summer at Friday Harbor, Moroz made a startling discovery: beneath this animal’s humdrum exterior was a monumental case of mistaken identity. From his very first experiments, he could see that these animals were unrelated to jellyfish. In fact, they were profoundly different from any other animal on Earth.

The ctenophore has an advanced nervous system that uses a different set of molecules than any other animal on Earth. It evolved its nervous system from a different set of genes than any other known animal. So, despite starting from a different initial condition, it surprisingly evolved the same neural dynamics as other animals. In other words, neural behavior can apparently be constructed out of different building blocks. Therefore, some more general mechanism must be at work here. Fox writes:

Moroz now counts nine to 12 independent evolutionary origins of the nervous system — including at least one in cnidaria (the group that includes jellyfish and anemones), three in echinoderms (the group that includes sea stars, sea lilies, urchins and sand dollars), one in arthropods (the group that includes insects, spiders and crustaceans), one in molluscs (the group that includes clams, snails, squid and octopuses), one in vertebrates — and now, at least one in ctenophores.

‘There is more than one way to make a neuron, more than one way to make a brain,’ says Moroz.

Even more surprising is that these different paths evolved the same mechanisms but with different building blocks:

Nicholas Strausfeld is a neuro-anatomist at the University of Arizona in Tucson. He and others have found that the neural circuits underlying smell, episodic memory, spatial navigation, behaviour choice and vision in insects are nearly identical to those performing the same functions in mammals — despite the fact that different, though overlapping, sets of genes were harnessed to build each one.

It is as if there were some underlying universal principle, yet to be discovered, that self-organizes the development not only of neurons but of how these neurons are configured to perform certain functions. Why is it that smell, episodic memory, spatial navigation and the rest arrive at nearly identical structures despite starting from different genes?

So, the construction of a single neuron can differ; however, the structure of a collection of neurons supporting a given function tends to be identical for that function. Does form follow function? Does optimization for survival tend to lead to identical functional structures?

The above exploration gives a sense of the richness of intelligence that exists in our biological world, and suggests it is entirely conceivable that many other kinds of intelligence may exist. As a civilization, however, should we strive to create machines that think like humans (with all their cognitive biases)? Or should we strive to create tools that augment and enhance our currently limited cognitive capabilities?

If we taught a horse to perform long division, we might plausibly conclude that the horse was intelligent. However, very few people would say that a hand calculator has any intelligence. Our definition of intelligence may be of the biological, adaptive kind that is able to autonomously negotiate its environment. Alternatively, it may be one that can perform complex mathematical operations or answer questions derived from an encyclopedia. Our ancestors would certainly have thought that our smartphones embody intelligence. However, our evolved understanding of intelligence says this is obviously not the case. We attribute intelligence to an entity that is self-aware. But even though a horse is self-aware, we purposely ignore this on the argument that it isn't intelligent enough.

Human intelligence is caught between a rock and a hard place. On one side, there are computer systems that are able to perform all sorts of rigorous computations at a massive scale with extreme precision. On the other, there are Deep Learning systems (which also reside in computers) that are able to perform inductive inference, such as face recognition, at levels that exceed human capabilities.

Humans have already accepted that cognitive activities like long division or chess playing are better suited to computers. Humans are now realizing that other abilities once in the domain of biological cognition are being performed with higher precision by Deep Learning systems. Go, a game thought to be well suited to human intuitive capabilities, has been bested by a computer system. DeepMind's AlphaGo does not remotely function like a human brain. It is a hybrid system that combines Deep Learning with other computer algorithms (i.e., Monte-Carlo Tree Search and Reinforcement Learning). What this should tell us is that advanced cognitive reasoning capabilities can already be achieved by alternative methods, without the need for AGI capabilities.
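The search half of such a hybrid can be sketched on its own. Below is a minimal, purely illustrative Monte-Carlo Tree Search in Python, applied to a toy game (single-pile Nim) rather than Go; all function and class names are my own invention, and AlphaGo additionally uses deep networks to supply move priors and position evaluations that guide this kind of search.

```python
import math
import random

# Minimal Monte-Carlo Tree Search (MCTS) on a toy game: one pile of
# stones, take 1-3 per turn, whoever takes the last stone wins.

def moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile = pile            # stones remaining at this node
        self.parent = parent
        self.move = move            # move that led to this node
        self.children = []
        self.untried = moves(pile)  # moves not yet expanded
        self.wins = 0.0             # wins for the player who just moved here
        self.visits = 0

    def ucb_child(self, c=1.4):
        # UCB1 rule: trade off win rate against under-explored children.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits +
                       c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(pile):
    """Random playout; True if the player to move at `pile` wins."""
    turn = 0
    while pile > 0:
        pile -= random.choice(moves(pile))
        turn ^= 1
    return turn == 1  # the starting player made the final, winning move

def mcts_best_move(pile, iterations=3000):
    root = Node(pile)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while fully expanded.
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: add one unexplored child, if any.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.pile - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new node.
        win_for_just_moved = not rollout(node.pile)
        # 4. Backpropagation: flip the winner's perspective at each level.
        while node is not None:
            node.visits += 1
            node.wins += 1.0 if win_for_just_moved else 0.0
            win_for_just_moved = not win_for_just_moved
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

With enough iterations the search settles on taking 1 stone from a pile of 5, leaving the opponent a losing multiple of 4, which matches the game-theoretic answer for this trivial game. No part of this resembles a brain; it is statistics over simulated futures.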
