The ethics of Artificial Intelligence

Justin Lee
Published in The Startup
Jun 26, 2018 · 14 min read

“Beware; for I am fearless, and therefore powerful” — Frankenstein’s Monster

In Mary Shelley’s famous 1818 novel, Dr. Frankenstein creates intelligent life out of inanimate matter — and later regrets meddling with nature.

But by that point, it’s already too late.

Today, a similar narrative runs through our society; but this time, it’s not confined to the pages of a book.

The monster?

Artificial Intelligence (AI).

For the first time ever, we’re building technology with the power to develop itself further — without human input. Technology with the capacity to outsmart us. Technology with the means to overthrow us.

This poses a radically new challenge.

Do You Trust This Computer?, a new documentary featuring Elon Musk, warns that AI is a new life form poised to “wrap its tentacles” around us.

And a recent influx of AI horror stories, including robots cooperating to open a door and reports of Google’s AI becoming “highly aggressive”, paints a similarly frightening picture.

Will our story with AI end in a tragedy of Frankenstein proportions? Or will we be able to live in harmony? Only time will tell.

But the discussion raises important moral and ethical questions about AI, which is just as much a frontier for societal risk as it is for progress.

So what are the issues keeping AI experts awake at night — and how can we address them?

Unemployment

What’s at risk of being automated?

From the Luddite movement to the internet, people have feared that technology will ‘steal all the jobs’.

Over the decades, we’ve continued to build machines to perform routine tasks more efficiently than humans.

All of these inventions have coincided with steep economic growth, making our lives faster and easier.

And despite some tricky periods of adjustment, the machines didn’t steal all the jobs. New jobs replaced old ones, and the majority of people were able to find employment. Happy days.

Why? Because when we automate manual tasks, we free up resources to create more complex roles that are concerned with cognitive, rather than physical labor.

That’s why the hierarchy of labour depends on whether a job can be automated or not (e.g. a university professor earns more than a plumber).

Until now.

From legal writing to detecting fraud, composing art to conducting research, AI is learning how to automate non-routine jobs, too.

A recent McKinsey report estimates that up to 800 million jobs could be lost worldwide to automation by 2030.

For the first time ever, we will start competing with machines on a cognitive level. The scariest part? They will ultimately have the capacity to be much, much smarter than us.

Many economists are concerned that as a society, we won’t be able to adapt — and will ultimately get left behind.

And what about the impact of automation on our personal lives?

Here’s how it works at the moment: we sell our time in exchange for the money to sustain ourselves.

What happens when this time is returned to us? Will the vacuum created by automation cause social unrest?

Or will our children’s children look back and think it inhuman that we had to auction off our waking hours in order to survive?

Inequality

What happens in a future without jobs?

Our current economic structure is simple: compensation in exchange for contribution.

Companies rely on a certain amount of work and pay out a salary or hourly wage.

But with the help of AI, a company can massively reduce its human workforce.

So, its overall revenue will go to fewer people. And those in charge of AI-driven companies will proportionally earn a much higher wage.

That wealth gap is already widening.

In 2008, Microsoft was the only tech company among the ten most valuable companies in the world; the next tech firm was Apple at 39, then Google at 51.

Fast forward to 2018, and the five most valuable companies, both in the USA and globally, are all tech giants.

Today, Silicon Valley fuels a ‘winner-takes-all’ economy, in which one company retains the majority of the market share.

And so, startups and smaller competitors struggle to compete with the likes of Alphabet and Facebook because of unequal access to data (more users = more data, more data = better service, better service = more users).

The other problem? These tech giants create few jobs relative to their hold on the market.

In 1990, Detroit’s three largest companies were valued at $65 billion with 1.2 million workers. In 2016, Silicon Valley’s three largest companies were valued at $1.5 trillion but with only 190,000 workers.

Will technology really fulfill its promise to replace the jobs it destroys?

Esteemed computer scientist Moshe Vardi isn’t so sure:

“What people are now realizing is that this formula that technology destroys jobs and creates jobs, even if it’s basically true, it’s too simplistic.”

And how will workers whose skills become redundant survive?

Worryingly, experts anticipate that high unemployment rates could lead to violence and uprisings amongst those left behind.

In his closing statement at the 2017 Asilomar AI Conference, Andrew McAfee made a grim prediction:

“If the current trends continue, the people will rise up before the machines do.”

So, is it possible to structure a fair post-work society and post-labor economy?

Many think Universal Basic Income (UBI) is the answer.

Implementation of UBI would mean all citizens receive a set income, regardless of their occupation, financial history, housing and demographics.

The solution has been lauded by thought leaders including Richard Branson, Elon Musk, Bill Gates, and Mark Zuckerberg; and a 2018 Gallup poll found that 48% of Americans support the idea.

However, 80% of those in favor of UBI believe that the businesses that benefit from AI should pay for it.

And as Evgeny Morozov concludes:

“Why bother to have a state at all, if Silicon Valley can magically provide basic services, from education to health, on its own? Even more important, why still pay taxes and fund non-existent public services, which are to be provided — on a very different model — by tech companies anyway? This is a question that neither the state nor Silicon Valley is prepared to answer.”

Relationships

How do machines trigger reward centers in the human brain?

Bots are becoming more sophisticated in their ability to model human conversations and relationships every day.

In 2014, a bot called Eugene Goostman was claimed to have passed the Turing Test for the first time in history.

This marks the beginning of an age in which we regularly speak to machines as if they were human.

We are already seeing some examples of machines triggering reward centers in our brains. Clickbait headlines, for instance, are built using A/B testing, an elementary form of algorithmic optimization.

The same goes for ‘pull-to-refresh’ and other features that make social media, mobile and video games so moreish.
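As a purely hypothetical sketch of the kind of A/B test mentioned above (not from the article; the headlines and click rates below are invented for illustration): serve two headline variants at random, count impressions and clicks, and keep whichever converts better.

    # Minimal, hypothetical headline A/B test: serve variants at random,
    # record clicks, and keep whichever earns the higher click-through rate.
    import random

    headlines = {"A": "You won't believe what AI did next", "B": "A sober look at AI ethics"}
    impressions = {"A": 0, "B": 0}
    clicks = {"A": 0, "B": 0}

    def serve_headline():
        """Pick a variant uniformly at random and record the impression."""
        variant = random.choice(list(headlines))
        impressions[variant] += 1
        return variant

    def record_click(variant):
        clicks[variant] += 1

    # Simulated traffic: variant A is (hypothetically) clicked more often.
    for _ in range(10_000):
        v = serve_headline()
        if random.random() < (0.08 if v == "A" else 0.05):
            record_click(v)

    winner = max(headlines, key=lambda v: clicks[v] / impressions[v])
    print(f"Winning headline: {headlines[winner]!r}")

The same optimization loop, run at scale and tuned purely for engagement, is what makes these features so effective at capturing attention.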

Technology has already become a powerful way of streamlining human behavior and sparking action. So instead of causing dependency and addiction, could it nudge people towards better, more altruistic behaviors?

Or perhaps AI will simply perform these behaviors itself?

While people have a finite capacity in terms of the patience and affection they can expend on others, AI has unlimited resources that it can channel towards building and nurturing relationships.

In “I, Robot,” protagonist Del Spooner discovers a robot in his grandmother’s house, baking a pie. This prospect looks more like a soon-to-be reality than a movie script every day, as demographers anticipate that by 2060, one in four Americans will be aged 65 or over.

Robots could provide social care and companionship for the disabled and elderly, helping with tasks such as cooking and managing medications, and even alleviating loneliness.

Ultimately, AI has great potential to transform and support human behaviour — as long as it ends up in the right hands.

Singularity

How do we stay in control of an intelligent system?

The Cambridge Handbook of Artificial Intelligence, co-edited by Keith Frankish, weighs both sides of that question:

Kurzweil (2005) holds that intelligence “… is inherently impossible to control,” and that despite any human attempts at taking precautions, intelligent entities by definition “have the cleverness to easily overcome such barriers.”

“Let us suppose that the AI is not only clever, but that, as part of the process of improving its own intelligence, it has unhindered access to its own source code: it can rewrite itself to anything it wants itself to be. Yet it does not follow that the AI must want to rewrite itself to a hostile form.”

Why are humans at the top of the food chain?

We aren’t bigger, faster or stronger than many animals.

No, our dominance is down to our intelligence. We can overcome pythons and lions and sharks because we’re capable of creating resources to control them with: physical tools such as cages, and cognitive tools such as training.

But what happens when we create AI that is smarter than us? This is the concept of the ‘singularity’: the point at which humans are no longer the most intelligent beings on Earth.

This brings to mind Isaac Asimov’s “Three Laws of Robotics”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Singularity would overthrow these rules.

British mathematician and cryptologist I.J. Good first warned of the singularity when he coined the term ‘intelligence explosion’ in his 1965 essay, Speculations Concerning the First Ultraintelligent Machine.

An intelligence explosion could occur when we succeed in building Artificial General Intelligence (AGI), whereby a system would be capable of recursive self-improvement, ultimately leading to Artificial Super Intelligence (ASI).

The AGI would understand its own design to such an extent that it could redesign itself or create a successor system, which would then redesign itself, and so on, with unknown limits.

In 2015, Stephen Hawking echoed these sentiments:

“It’s clearly possible for a something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help.”

There would be no hypothetical ‘off’ button if things got out of control.

No big red button.

ASI would be able to anticipate our every move.

It would be able to defend itself.

How do we define humane treatment of AI?

A rock has no rights.

We can pulverize it, throw it at a wall, build a house with it.

A tree has more rights than a rock, but less than an animal. An ant has more rights than a tree, but less than a cow.

Our society follows this hierarchy more-or-less without question. If a being has human-like qualities, it earns a moral status. We have an in-built moral obligation to treat it in a certain way, and not in others.

Moral status boils down to two criteria: sentience and sapience.

  • Sentience means the capacity to experience sensations like pain and suffering.
  • Sapience refers to a set of capacities associated with higher intelligence, such as self-awareness and responding to reason.

Neuroscientists haven’t yet pinned down what defines a conscious experience. But a large part of it comes down to underlying mechanisms that revolve around reward/pleasure vs. fear.

We share these mechanisms with animals. Can we implement them into machines? At the moment, AI still operates on a superficial level. But it is slowly becoming more complex and lifelike.

As soon as we start thinking of machines as entities that can perceive, act and feel, it’s not so crazy to consider their rights.

Should they be devoid of any moral status, despite their intelligence being higher than that of some humans?

Frankenstein’s monster eventually developed an emotional sensibility, but when it tried to slot into society, people violently rejected it out of fear. This sentiment is echoed in reactions to AI today.

For instance, as part of a study to test the limits of human kindness, researchers created a friendly robot to hitchhike around the country. Within a couple of days, the robot was decapitated by a human in an unprovoked attack.

A similar situation unfolded in Japan, where a customer-welcoming robot was destroyed by a group of children in a mall.

As AI becomes more widespread, will we begin to enforce punishment for this type of behaviour?

Racist robots

How do we eliminate AI bias?

The capacity of AI processing is far beyond that of humans. But AI can’t be trusted to be fair, or neutral, as it essentially reflects the conscious — or unconscious — biases of its developers (who are usually white males).

That’s where AI starts to go wrong.

One example is Google’s Photos service, which uses AI to identify people, objects and scenes, and which infamously mislabeled photos of black people as gorillas.

Another is software used in US courts to predict future criminals, which a ProPublica investigation found to be biased against black defendants.

And Tay, Microsoft’s chatbot that Twitter users taught to be racist within hours, won’t be forgotten any time soon.

AI is created by humans who can be racist, biased and judgemental.

And when AI takes over cognitive tasks previously performed by humans, it inherits their social requirements too: the expectation that decisions will be made on an unbiased, non-judgemental basis.

If an algorithm is based on a complex neural network or directed evolution, it could be almost impossible to understand why or how it is judging people in a certain way.

But a machine learner based on decision trees or Bayesian networks could be much more transparent to inspection from programmers.

This could then enable an auditor to discover the root behind a certain bias, and correct it.
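As a rough illustration of that auditability (a minimal sketch, not from the article; it uses scikit-learn and hypothetical loan-approval features), a decision tree’s learned rules can be printed and reviewed line by line:

    # Minimal sketch: train a small decision tree and dump its learned rules
    # so an auditor can inspect them. The "loan approval" features are hypothetical.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical training data: [income_in_thousands, years_employed]
    X = [[20, 1], [35, 2], [50, 5], [80, 10], [25, 0], [60, 7]]
    y = [0, 0, 1, 1, 0, 1]  # 0 = reject, 1 = approve

    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(X, y)

    # Unlike a deep neural network, the full decision logic can be printed as
    # readable if/else rules and checked for unwanted proxies or biased thresholds.
    print(export_text(model, feature_names=["income_k", "years_employed"]))

A deep neural network trained on the same data offers no comparable rule dump, which is exactly the contrast drawn above between opaque and transparent models.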

So, what if AI was created exclusively by those who strive for social justice and progress? It could become a catalyst for positive change.

That’s why it will become more and more important to develop algorithms that are transparent to inspection, and predictable to those they govern — and are governed by.

Why we shouldn’t freak out yet

Voluntary regulation and governance

Leaders in related fields have made a point of developing regulations in an attempt to mitigate some of the negative aspects of AI.

One of the most significant examples is Google’s declaration of ethical principles, the result of a revolt amongst the company’s programmers.

The new guidelines for AI use were outlined in a blog post from chief executive Sundar Pichai.

Pichai said that Google would not design AI for:

  • Technologies that cause or are likely to cause overall harm.
  • Weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people.
  • Technology that gathers or uses information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

He also laid out seven more principles which he said would guide the design of AI systems in future:

  • AI should be socially beneficial.
  • It should avoid creating or reinforcing bias.
  • Be built and tested for safety.
  • Be accountable.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Be made available only for uses that accord with these principles.

The Electronic Frontier Foundation called the guidelines a “big win for ethical AI principles”.

Another example comes from the British Academy and the Royal Society, which have brought together ‘leading academics, industry leaders, civil society and data and technology specialists’ to develop a set of principles to guide data governance.

These principles state that all systems of data governance across the varied ways data is managed and used should:

  • Protect individual and collective rights and interests.
  • Ensure that trade-offs affected by data management and data use are made transparently, accountably and inclusively.
  • Seek out good practices and learn from success and failure.
  • Enhance existing democratic governance.

AGI is still a long way away

Artificial General Intelligence (AGI) is the holy grail of AI.

At the moment, AI that matches or exceeds human intellect does so only within a specific, restricted domain, such as searching the internet or playing a game.

So, the AI is really good at that one thing — but nothing else.

Consider the case of Deep Blue, the chess computer that beat world champion Garry Kasparov. But Deep Blue can’t perform any task other than playing chess.

Similarly, a bee is skilled at building hives and a beaver at building dams, but a bee cannot build a dam, nor a beaver a hive.

The ability to do both from watching and learning is a uniquely human trait.

An AGI would be able to learn from watching, and ultimately would be able to accomplish a wide variety of tasks, think and rationalize in the same way a human can — possibly even at a super-human intelligence level.

However, there is no guarantee that the power of AGI would benefit the world and not become an existential threat to humanity.

So how far away are we from developing AGI?

While the pace of innovation is dizzying, the reality of today’s narrow AI is still relatively limited. It’s still unable to perform accurately 100% of the time, even with genius machine learning engineers and millions of dollars in funding behind it.

And in the meantime, narrow AI faces the same problems as many other inventions.

This does not make these problems better, or worse — just perhaps more manageable under the assumption that we can anticipate them.

Ultimately, it’s wrong to exploit people’s ignorance by giving the impression that AI is, or ever will be, human.

“Misidentification with machine intelligence leads to false ethical evaluations of AI’s potentials and threats. What could lead us to over-identify with machines? Quite simply, a misconception of human life that puts the capabilities of language, mathematics and ‘reason’ as its key characteristics.”

The unknown

As machines become faster, smarter and more capable, our lives become more efficient — and therefore, more prosperous.

AI is a boundless landscape with unimaginable power to transform the world we live in.

It’s exciting. It’s unfamiliar.

But with great change comes potential danger.

History has given us many warnings of how the widespread, rapid adoption of new technologies can result in public uprisings, fear and anxiety.

When will we have reached the point of no return, at which technological development becomes irreversible?

According to Armin Grunwald, it’s as soon as technology has expectations of us, and not the other way around.

That might seem like an impossible outcome right now.

But if we’ve learnt anything about exponential technology, it’s that it creeps up on us…

One moment it’s a speck in the distant future, the next we can’t remember life without it.

And as Mary Shelley wrote:

“Nothing is so painful to the human mind as a great and sudden change.”

Our world is on the cusp of being transformed forever.

Let’s tread carefully.

Originally published at blog.growthbot.org.
