Artificial Intelligence: Why We’re Not Out of the Woods Yet

Josh Sephton
Future Proof Briefings
Feb 5, 2016
Image courtesy of Baron Visuals

I’ve been talking a lot about artificial intelligence recently. I believe that 2016 is the year when it will become an integral part of our lives. However, a vocal minority of the tech elite have spoken out against AI, suggesting it’s a serious threat to the survival of the human race. Elon Musk, co-founder of PayPal and founder of SpaceX and Tesla, said

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like — yeah, he’s sure he can control the demon. Doesn’t work out.”

I’ve always believed that we’ll avoid serious calamity by simply not teaching computers to be homicidal, but exactly how likely are we to end up in the Matrix? Are we really building our own Skynet? Should we be wary of accidentally creating Replicants?

Well, more likely than I’d imagined actually.

As I started digging into the risks posed by AI, I came across a form of basic intelligence that’s been around for decades. It’s already come close to causing damage to the human race on several occasions. I also spent some time analysing the risks of a more recently popular technology.

High-Frequency Trading

In 1998, the U.S. Securities and Exchange Commission authorised electronic stock exchanges. By 1999, some enterprising traders had written software which would trade equities automatically. Computers can analyse data far more quickly than a human can. Taught to parse and interpret new information, they can make trades before their human counterparts have even moved their eyes across the screen.

The intelligence is the most basic imaginable. These computers are being taught only what they need to know to perform their job. In essence they’re simple “if this… then that…” systems.

  • If the 7 day moving price average is higher than the 7 minute moving price average then sell.
  • If the 1 hour moving volume average is lower than the 1 minute moving volume average then buy.
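
As a rough sketch, a rule like those above might look like this in code. Everything here is hypothetical: the moving averages are assumed to arrive pre-computed from some market-data feed, and the “order” is just a returned string rather than a real broker call.

```python
def decide(price_avg, volume_avg):
    """Toy 'if this... then that...' trading rules.

    `price_avg` and `volume_avg` are assumed to be dicts of
    pre-computed moving averages, keyed by window length.
    """
    if price_avg["7d"] > price_avg["7m"]:
        return "SELL"  # long-run average above short-run: price is falling
    if volume_avg["1h"] < volume_avg["1m"]:
        return "BUY"   # volume spiking relative to the past hour
    return "HOLD"

# Example: the 7-day average sits above the 7-minute average, so sell.
print(decide(price_avg={"7d": 101.2, "7m": 100.4},
             volume_avg={"1h": 5000, "1m": 4200}))  # -> SELL
```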

These systems don’t seem to pose a serious risk to my wellbeing.

By 2010, high-frequency trading accounted for about 80% of all equity trades. Then something unexpected happened. On May 6, 2010, the Dow Jones Industrial Average lost 9% of its value in a matter of minutes. It then recovered within about half an hour.

Image from CNBC

This blip was caused by automated high-frequency trading systems. It wasn’t a single malicious act but a complex network of systems that all reacted in the same way to an external stimulus.

Cliff Asness, founder of Applied Quantitative Research — one of the world’s leading quantitative-investment funds — said that losses like this could be triggered by

“a strategy getting too crowded…and then suffering when too many try and get out the same door”.

There’s the rub: lots of simple things can have unexpected effects when combined. We have less and less idea of how they’re going to behave. There are so many moving parts that they interact in ways we can’t predict.

Self-Driving Cars

There’s currently a self-driving car arms race happening. All of the major car manufacturers have autonomous car programs, Google’s work has been widely publicised, and London has even announced a pilot program.

The cars are fitted with a wide array of sensors, giving them the ability to ‘see’ their surroundings. They are not, however, simple “if this… then that…” systems. The cars are designed to learn from their surroundings using artificial intelligence. The more miles they drive, the more situations they encounter and the better they learn.

One method for teaching them is to use game theory to minimise or maximise certain metrics (or “payoffs”). When driving, for example, minimising the length of the route is good. Maximising the average speed is good. Minimising the amount of time the car is stationary is good.
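
To make this concrete, imagine the planner collapsing those metrics into a single cost it tries to minimise. The function and weights below are invented purely for illustration; they’re not drawn from any real autonomous-driving system.

```python
def route_cost(length_km, avg_speed_kmh, stationary_min,
               w_length=1.0, w_speed=0.5, w_wait=2.0):
    """Toy cost: lower is better. Short routes, high average
    speed and little time spent stationary all reduce the cost."""
    return (w_length * length_km
            - w_speed * avg_speed_kmh  # faster average speed lowers the cost
            + w_wait * stationary_min)

# The car picks whichever candidate route minimises the cost.
routes = {
    "motorway":   route_cost(12.0, 90, 1),  # longer but fast
    "back roads": route_cost(9.0, 40, 6),   # shorter but slow, more idling
}
print(min(routes, key=routes.get))  # -> motorway
```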

If every decision we made were perfectly optimal according to game theory, the world would be a very different place. Consider a two-lane road with one of the lanes closed ahead. Most drivers merge early and queue in a fair and orderly manner. However, there’s always one person who gets as close to the closure as possible before cutting in. It’s infuriating that this person thinks their time is more valuable than everyone else’s.

This situation is a classic game theory problem in disguise. It’s a version of the Prisoner’s Dilemma. There are four possible outcomes of the lane-closed scenario. Remember, we’re trying to minimise the amount of time the car is stationary.

  1. I join the queue with everyone else, no one cuts in at the end.
    Expected wait time: 4 minutes.
  2. I skip the queue, no one else takes the empty lane.
    Expected wait time: 1 minute.
  3. I take the empty lane, everyone tries the same tactic.
    Expected wait time: 4 minutes.
  4. Drivers split evenly between the lanes, I join either queue.
    Expected wait time: 2 minutes.

So my average wait times are:

  • Take the empty lane: (1 + 4)/2 = 2.5 minutes
  • Join the queue: (4 + 2)/2 = 3 minutes

It’s in my interest to act like a jerk, even if everyone else is acting like a jerk!
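
A few lines of code that simply encode the four outcomes above reproduce those averages. Weighting the scenarios equally is the same simplifying assumption made in the prose.

```python
# Expected wait in minutes for each of the four scenarios above,
# keyed by (my action, what everyone else does).
wait = {
    ("queue", "no one cuts in"):       4,
    ("skip",  "no one else skips"):    1,
    ("skip",  "everyone skips too"):   4,
    ("queue", "drivers split evenly"): 2,
}

for action in ("skip", "queue"):
    times = [t for (a, _), t in wait.items() if a == action]
    print(action, sum(times) / len(times))
# -> skip 2.5   (acting like a jerk wins on average)
# -> queue 3.0
```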

Aside from predicting more selfish drivers on the road, I wanted to demonstrate that teaching machines to optimise can lead to unexpected consequences. Humans aren’t perfect game players; we don’t always act in our own best interest. We have an innate sense of ‘us’. But computers are perfect players: they will behave exactly as instructed, without making mistakes.

It’s strange to think that we need to teach machines to be less than perfect in order to stay safe. If we get this wrong with the first wave of machines, we might not get a second chance. The machines might decide, in their infinite wisdom, that our existence is a barrier to a maximised payoff.

I now understand Elon Musk’s hesitance about creating artificial intelligence.

We’ve already built systems which are so complex that they behave in ways we can’t predict. We’re well on our way to building systems which won’t have “common sense” and will behave differently to humans.

It’s only a matter of time before we build something which could pose a serious threat to the existence of the human race. That’s not to say we should lock it away in Pandora’s box. James Barrat, author of Our Final Invention, said

“Advanced artificial intelligence is a dual-use technology, like nuclear fission, capable of great good or great harm. We’re just starting to see the harm.”

Artificial intelligence has the potential to bring real benefit to our lives. We just need to ensure we develop sensible ways of protecting ourselves. A three-pronged approach can help minimise the risk:

  • Education
    Make sure people understand the risks of automated systems in society.
  • Litigation
    Don’t let people build things without thought and attention to risks.
  • Economics
    Ensure labour market policy keeps a significant number of people employed.


Josh Sephton is the founder of Pritchatts Consulting Ltd., making companies more profitable by making their data work for them.