Redefining Intelligence and Its Owners

“Empathy will set us free.” — Sophia the Robot

Jeanne Elizabeth Daniel
The Startup
6 min read · Nov 17, 2018


Photo by Possessed Photography on Unsplash

I was fortunate enough to recently attend the Forum for Artificial Intelligence Research (FAIR) 2018, hosted in Hermanus by CAIR. This was the first conference I have attended that placed a very serious emphasis on the Ethics of AI, with quite a few speakers coming from Philosophy backgrounds. Topics that were discussed included ethical issues surrounding Robot Rights and Citizenship, Robot Prisons, Artificial Moral Agency, Lethal Autonomous Weapon Systems (LAWS), and more.

Adversarial Examples

One of the speakers, Etienne Barnard, a leading machine-learning researcher in South Africa, made the bold statement that “Deep Learning is both great … and pathetic”, citing the example of how easily an AI system can mislabel a picture of a banana as a toaster simply because a psychedelic sticker was placed next to it.

This sorcery is the work of adversarial images. They trick computers into seeing things that aren’t there by exploiting a weakness in the underlying AI algorithms known as an “adversarial example”.
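The toaster sticker is an adversarial patch; a simpler and more widely studied variant perturbs every pixel of an image by a tiny amount. Below is a minimal sketch of one such attack, the Fast Gradient Sign Method, assuming PyTorch and a pretrained ImageNet classifier (the model choice and the `image`/`true_label` tensors are placeholders, not anything shown at the conference):

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained classifier would do; ResNet-18 is just a convenient example.
model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Nudge `image` so the classifier becomes more likely to mislabel it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel in the direction that increases the loss, then clip to valid range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# image: a (1, 3, 224, 224) tensor; true_label: a (1,) tensor holding the class index.
# adversarial = fgsm_attack(image, true_label)
```

To a human the perturbed image looks identical to the original, yet the classifier’s prediction can flip entirely, which is exactly the kind of brittleness the toaster example illustrates.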

Some attendees argued that the psychedelic sticker looks somewhat like the light that reflects off a shiny toaster, and thus the AI system didn’t completely fail in its task. After all, Deep Learning models learn to generalize objects from thousands of images. But humans are not immune to being fooled by mirages or manipulated images either. Take, for example, this famous photograph of the so-called Loch Ness monster.

The “surgeon’s photo” of 1934, now known to have been a hoax, inspired decades of search and speculation about the Loch Ness Monster.

So, if human intelligence is the gold standard, should the bar for intelligence include the ability to make mistakes or be fooled in certain circumstances?

What is Intelligence anyways?

The conference got really philosophical as we started questioning the very definition of Intelligence. If our idea of “Intelligence” can be simulated by machines, is it still “Intelligence”, and should we then re-evaluate the definition?

“Mainstream Science on Intelligence” (1994), an op-ed statement in the Wall Street Journal signed by fifty-two researchers (out of 131 invited to sign), defined intelligence as: A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings — “catching on,” “making sense” of things, or “figuring out” what to do.

It seems to be the general consensus that intelligence is something characteristic of human behaviour, and maybe a select few mammals. But what about complex ecosystems that self-regulate without any human intervention, or swarms of fish that swim in perfect unison, or bee colonies with thousands of inhabitants that seem to think and work as a collective? Intelligence cannot and should not be limited to only tasks and abilities demonstrated by humans.

Artificial Intelligence

In broad terms, Artificial Intelligence is machines exhibiting human-like abilities, such as the ability to reason logically, identify objects in pictures, translate text from one language to another, recognise patterns, generate speech or text, create art, differentiate between groups, and more.

The first AI artwork to be sold at a major auction, Portrait of Edmond Belamy, fetched $432,500.

What machines excel at, and humans fail at, is memorizing and aggregating massive amounts of data and performing very large calculations. Human brains are wired to think and compute in parallel, whereas machines still compute sequentially. For this reason, humans are still far superior in complex tasks like creative problem solving, learning from just one or two experiences, morality, decision-making, planning, and more.

With the recent rise of deep neural network architectures, machines can now learn complex patterns from thousands of positive and negative examples, generate never-before-seen images (such as the one above), and even adapt their own behaviour based on reward functions.
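As a rough illustration of what “learning from positive and negative examples” means in practice, here is a toy sketch (using synthetic data, not any real dataset) of a tiny network that learns a decision rule it was never explicitly programmed with:

```python
import torch
import torch.nn as nn

# 1,000 random 2-D points; label is 1 if the point lies inside the unit circle.
X = torch.randn(1000, 2)
y = (X.norm(dim=1) < 1.0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)   # compare predictions against the labels
    loss.backward()               # compute gradients
    optimiser.step()              # nudge the weights

accuracy = ((model(X) > 0).float() == y).float().mean().item()
print(f"training accuracy: {accuracy:.2%}")  # the pattern is learnt, not programmed
```

The network is never told what a circle is; it simply absorbs the pattern from labelled examples, which is the same principle that scales up to image classifiers and language models.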

Suddenly, machines can solve many of the above-described problems with breathtaking accuracy, and even fool humans into thinking they are speaking to another human, and not a machine (a true Turing Test landmark). The Turing Test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Artificial Stupidity

Because machines learn patterns from large amounts of data, they rely heavily on the quality of that data. They are thus also susceptible to mistakes or misrepresentations in the collection, labelling, engineering, and historic decision-making processes contained within the data. These “deviations” are often referred to as bias, and ignorance of their presence has led to many negative second-order effects.

There have been notable cases where machines learnt such bias and exhibited frighteningly bigoted behaviour.

For example, a plot of occupations relative to the words “he” and “she” shows that, in a word embedding space, stereotypically female occupations (like “Homemaker”) lie much closer to the female pronoun, while stereotypically male occupations (like “Financier”) lie closer to the male pronoun.
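For readers who want to poke at this themselves, the gist of such a plot can be approximated with any pretrained word embedding. The sketch below uses GloVe vectors via gensim and an illustrative list of occupations (the original analysis used word2vec trained on Google News, so the exact numbers will differ):

```python
import gensim.downloader as api

# Downloads a ~128 MB pretrained GloVe model on first use.
vectors = api.load("glove-wiki-gigaword-100")

occupations = ["homemaker", "nurse", "librarian",
               "financier", "architect", "philosopher"]

for word in occupations:
    he_sim = vectors.similarity(word, "he")
    she_sim = vectors.similarity(word, "she")
    # Positive gap leans towards "she", negative gap towards "he".
    print(f"{word:12s}  she-he similarity gap: {she_sim - he_sim:+.3f}")
```

Nothing in the training procedure asks for gendered associations; they are simply soaked up from the co-occurrence statistics of human-written text.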

The flaws are not necessarily in the algorithms themselves, but in misrepresentative or biased training data. The algorithms are, by nature, discriminative and try to exploit nuances in the data. The commonly used phrase is “garbage in, garbage out”: if a model is trained on bad data, it cannot be expected to perform well. If left unchecked, these flaws have serious ramifications when they are part of systems deployed in society.

What is frightening though is the fact that humans, the supposedly intellectually superior beings, are the creators and curators of this biased data.

Sentient Creatures and their Associated Rights

The elephant in the room is perhaps “robo rights”. If an entity exhibits human-like intelligence, such as being able to reason, empathise, and display self-awareness, should it be granted the same rights as an equally sentient creature, such as a human? Or will such entities be treated as slaves, with a master controller?

Another topic up for discussion is the granting of free will. With free will comes responsibility for one’s actions, and with that come consequences. Will artificial intelligence be given free will, like Microsoft’s Tay, to learn and interact? Will self-driving cars be “punished” if they cause an accident? And, of course, for punishment to mean anything, the entity must be able to comprehend that it is being punished.

In Conclusion

While Artificial General Intelligence is still very far away (we are still struggling with task-specific AI), we have to start thinking about the philosophical and legal implications it will have on our way of life and on the very definition of humanity, and develop accordingly.

Artificial Intelligence has become such an easily and widely used term, with little thought given to what it actually means to create Artificial Intelligence by simulating human behaviour and learning from human-generated data.

Artificial Intelligence should also not be limited to the scientific sphere, but be influenced by the teachings of philosophy, anthropology, law, history, and more.

Thank you to the organizers of FAIR 2018 for recognizing and addressing the need for more philosophical influences in developing AI.

Thank you for reading to the very end. Feel free to pop me an email at jeanne.e.daniel@gmail.com .
