Deep Meaning

Catalina Butnaru
A Field Guide to Unicorns
4 min read · Feb 11, 2017


The search for meaning and insight in human-level artificial intelligences

Prediction ≠ meaning

When we begin to question what we know, when we doubt, that is one of the most wonderful moments of consciousness, marking an unquenchable thirst for meaning.

One way to pursue meaning is to admit to ourselves that we do not understand how one factual observation is relevant to another, and then strive for an answer.

Is there a causal relationship? A correlation? A rule we can use to predict outcomes? Or is it just a deep intuition we subjectively experience as qualia, one that simply means something to us?

How do we find meaning, how do we find answers?

For Derrida, meaning does not exist outside text (notions, language). So is it truly possible for AI to find meaning if we accessorise it with natural language processing and prediction? Derrida’s work has been widely discredited, yet current approaches to AI still subscribe to this reductionist belief. Moreover, the media are too eager to interpret advances in both NLP and prediction as evidence that AI is catching up with us.

“Man is a being in search of meaning” — Plato

One dangerous belief is that prediction accuracy implies understanding of what is predicted. A recent study claimed that a trained AI system could tell convicts from non-convicts just by analysing photos of people. That does not mean the AI can tell how or why a subtle facial trait is correlated with criminal behaviour; that would be phrenology, another discredited theory. More likely, the system correlated other things, such as the facial expressions that follow years in prison, or skin colour, with the label “convicted”.
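The confound can be made concrete with a toy simulation. This is a hypothetical sketch, not the study’s actual data or method: we invent a single spurious feature (say, formal mugshot style vs. casual snapshot style) that happens to co-occur with the label, and show that a “classifier” reading only that feature scores high accuracy while understanding nothing about faces.

```python
import random

random.seed(0)

# Hypothetical toy data: each "photo" is reduced to one confounding
# feature. The label "convict" co-occurs with photo style purely by
# construction -- the 0.9 / 0.1 rates are invented for illustration.
def make_sample():
    convict = random.random() < 0.5
    # Confound: convicts' photos are mostly formal mugshots (style=1).
    style = 1 if ((convict and random.random() < 0.9)
                  or (not convict and random.random() < 0.1)) else 0
    return style, convict

data = [make_sample() for _ in range(10_000)]

# A "classifier" that only looks at photo style, never the face itself.
predictions = [style == 1 for style, _ in data]
accuracy = sum(p == label for (_, label), p in zip(data, predictions)) / len(data)
print(f"accuracy from the confound alone: {accuracy:.2%}")
```

The accuracy lands around 90% even though the model captures nothing about criminality, which is exactly why high prediction accuracy cannot be read as understanding.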

Truly understanding what something means, whether it’s a notion, trait, image or correlation, has not yet been unlocked by AI.

Super recognisers and meaning

At the opposite extreme from people with prosopagnosia we find super recognisers, people with exceptional facial-recognition abilities. The Met (London’s Metropolitan Police) has a squad of several dozen super recognisers.

Oddly enough, super recognisers cannot precisely say what it is that made them recognise a face. This peculiar type of understanding or insight is the kind of meaning we cannot teach supervised AI, because we do not understand how we arrived at it ourselves.

Some find meaning through conversation, others through dogged study and research, others still through meditation. Most settle on a satisfactory meaning; quite a few really don’t care.

In all instances, arriving at meaning is something we can’t yet teach AIs to do, but it seems to be related to a more or less explicit desire to find it.

“Reason only has insight into that which it produces after a plan of its own.” — Immanuel Kant

Should AIs be taught to derive meaning after learning how to create their own agenda? A teleological approach to building unsupervised AI might be a good starting point. But do we want AIs to have their own agenda?

AlphaGo and Watson

Jan Bussieck’s sobering remark that “we now have systems that are able to recognise images and speech with an accuracy that rivals that of humans” emphasises just how easy it is to underestimate the speed of AI development.

AI includes deep learning, neural networks and natural-language processing, and extends to advanced systems that learn, predict, adapt and operate without supervision (reinforcement learning), such as AlphaGo and Watson.
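The “learning from reward feedback rather than labels” idea behind reinforcement learning can be sketched in a few lines. This is a minimal, generic illustration (an epsilon-greedy two-armed bandit), not AlphaGo’s or Watson’s actual machinery; the payoff probabilities are invented for the example.

```python
import random

random.seed(1)

# An agent learns which of two slot-machine arms pays off more,
# purely from reward feedback -- no labelled examples, no supervision.
TRUE_PAYOFF = [0.3, 0.7]   # hidden reward probabilities (assumed for illustration)
estimates = [0.0, 0.0]     # the agent's running value estimates per arm
counts = [0, 0]
EPSILON = 0.1              # fraction of the time spent exploring at random

for step in range(5_000):
    # Explore occasionally, otherwise exploit the best-known arm.
    if random.random() < EPSILON:
        arm = random.randrange(2)
    else:
        arm = 0 if estimates[0] > estimates[1] else 1
    reward = 1 if random.random() < TRUE_PAYOFF[arm] else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

print(f"learned values: {estimates[0]:.2f}, {estimates[1]:.2f}")
```

After a few thousand trials the agent’s value estimates approach the hidden payoffs and it favours the better arm, which is the whole point: the behaviour emerges from trial and error, not from anyone telling the system the right answer.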

Deep learning (AlphaGo, WaveNet) is most frequently applied in search, recognition & categorisation; machine learning — in self-driving cars; natural language processing — in Alexa; and prediction (Watson) in medical diagnosis and decision making.

Intelligent programs like these could potentially unearth scientific insights, increase productivity, and improve human existence more than any other technological breakthrough before.

Still, meaning is the missing piece between training AI on our civilisation’s knowledge and finding the insights that spark true understanding and awareness.

Nnaisense and the new dawn of meaningful AI

Jürgen Schmidhuber and Faustino Gomez recently raised funding for Nnaisense. The venture aims “to build large-scale neural network solutions for superhuman perception and intelligent automation, with the ultimate goal of marketing general-purpose Artificial Intelligences”.

The most exciting part of the company’s long-term vision and values is not to be found on its website, though. The founding team refused investment from large companies and declined to use biased data sets: biased in the sense that they reflect only a specific silo of the industry and are intrinsically flawed given the way the data is captured.

Nnaisense aims to build general artificial intelligence, the hard kind of AI that humanity fears will obliterate us.

I am personally excited, and convinced that AI is needed. I do not fear it; I fear what humans will do with it.

Above all, before we ask whether AI is likely to turn into Skynet, I believe that thoughtfulness, diversity and ethics are needed in AI technology. We must plant these three basic pillars in the foundation of any form a future general AI might take.

If you’d like to explore more questions about AI, come to my conference.
