Computers doing the thinking

Artificial Intelligence is very good at handling uncertainty.

Future Lab
4 min read · Dec 5, 2016

How intelligent is Artificial Intelligence? Is it intelligent enough that we can rely on it? How much can it help us, and where are its limits? Many applications of AI analyze data, forecast outcomes and then react to them appropriately. As AI develops, it is becoming a feature of many of the things we live with, but it is also a tool to help us do what we do: the computer does some of the thinking, so that we can get on better with our job.

Learning

It’s dangerous being a robot: this is evident in the video below, which shows a robot patrolling a shopping mall in Osaka being attacked by children.

Researchers worked out an algorithm to minimize the danger — for example, noting that children in groups were a bigger risk. The robot learnt to avoid dangers, even approaching a child’s parents in the expectation that they would influence the child to behave.

This is an important use for Artificial Intelligence — analyzing data to determine risk and how to counter it. That’s one of the main activities of the Teamcore Research Group at the University of Southern California (USC), which has worked on scheduling for police patrols, airport security and wildlife rangers.

You can work out where attacks are more likely to occur, but you can’t forget the other places.

Milind Tambe — Director of the Teamcore Research Group, USC

Game theory

Teamcore director Milind Tambe says they’ve based their research on game theory: “The game between a defender and an adversary is a central pillar of our work, but we’ve relaxed some of the traditional assumptions because of difficulties in the real world.”

One of those difficulties is that traditional game theory assumes that players are rational, but real-world adversaries often aren’t.

“We look at past activities to see what adversaries have done,” explains Tambe, “and we use that data to make predictions.”
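
Teamcore’s published work is built around Stackelberg security games, in which the defender commits to a randomized strategy and the adversary responds to it. As a minimal sketch of that idea — the targets, payoff numbers and brute-force search below are invented for illustration, not taken from Teamcore’s systems — one defender resource can be spread probabilistically over three targets so that a rational attacker’s best response is as unprofitable as possible:

```python
import numpy as np

# Toy security game: one defender resource, three hypothetical targets.
# The attacker earns reward[t] if target t is unprotected when hit,
# and penalty[t] if it happens to be covered. All numbers are invented.
reward = np.array([10.0, 6.0, 3.0])
penalty = np.array([-2.0, -1.0, 0.0])

def attacker_values(coverage):
    """Attacker's expected payoff for each target, given defender coverage."""
    return coverage * penalty + (1.0 - coverage) * reward

def best_coverage(trials=50_000, seed=0):
    """Randomized search for coverage probabilities (summing to 1) that
    minimize the payoff of the attacker's best response."""
    rng = np.random.default_rng(seed)
    best_c, best_v = None, np.inf
    for _ in range(trials):
        c = rng.dirichlet(np.ones(len(reward)))  # random point on the simplex
        v = attacker_values(c).max()             # rational attacker best-responds
        if v < best_v:
            best_c, best_v = c, v
    return best_c, best_v

coverage, value = best_coverage()
print("coverage:", coverage.round(3))
print("attacker's best-response payoff:", round(value, 3))
```

The search settles near a coverage that equalizes the attacker’s payoff across the worthwhile targets, the signature of a well-mixed defense. Relaxing the rationality assumption, as Tambe describes, would mean replacing the `max` best response with a noisier model of attacker behavior.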

Luckily for the programmers, regular police work offers a lot of crime and plenty of experience to forecast from, but that approach doesn’t work for counter-terrorism, which (fortunately) is much rarer.

Humans are not very good at randomization. They tend to do on/off.

“You have to plan that very conservatively, assuming the adversary thinks very strategically.”

So while a police department can use past experience to forecast robbery locations and work out the robbers’ next move, airport security has to be based on more rigid structures.

This predictive analysis is only one part of their work. Teamcore also does prescriptive analysis, working out where resources can be most usefully applied.

“You can work out where attacks are more likely to occur,” says Tambe, “but you can’t forget the other places.”

An airport site which is unlikely to be attacked must still be inspected occasionally; otherwise, potential attackers will take advantage of its weakness. How often does that need to be? And when? A predictable inspection is almost an invitation to attack.
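
One way to picture that scheduling problem: draw each day’s inspections at random, weighted by risk, but keep every site’s probability above zero so that even the quiet corners get visited unpredictably. The sites, weights and two-inspections-a-day budget below are hypothetical — a sketch of the principle, not anything Teamcore deploys:

```python
import random

# Hypothetical airport sites with risk weights. Even the "disused gate"
# keeps a small but nonzero chance of inspection, so neglecting it never
# becomes a pattern an attacker can exploit.
SITES = {
    "main terminal": 0.50,
    "cargo area": 0.30,
    "perimeter fence": 0.15,
    "disused gate": 0.05,
}

def daily_inspections(n=2, rng=None):
    """Pick n distinct sites to inspect today, weighted by risk."""
    rng = rng or random.Random()
    names, weights = list(SITES), list(SITES.values())
    chosen = []
    while len(chosen) < n and names:
        pick = rng.choices(names, weights=weights, k=1)[0]
        i = names.index(pick)
        chosen.append(pick)
        del names[i], weights[i]  # remove so the same site isn't drawn twice
    return chosen

for day in range(5):
    print(f"day {day}:", daily_inspections(rng=random.Random(day)))
```

Because the draw itself is random, an observer logging the schedule can never say when the low-risk site will come up next — which removes the invitation to attack that a fixed rota would offer.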

AI’s ability to draw conclusions from unorganized material makes it ideal for such challenges.

Making sense out of disorder

“Humans are not very good at randomization,” explains Tambe. “They tend to do on/off.” And computers can work through a range of options which is beyond human capacity. “We have 20 sky marshals for 1,000 flights. That’s an astronomical number of combinations. We have patterns of deployment which no-one had ever come up with.”
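
That “astronomical number” is easy to verify even under the crudest reading of the problem. If deployment were nothing more than choosing which 20 of the 1,000 flights carry a marshal — the real problem, with connecting flights and tour constraints, is far harder — the count is:

```python
import math

# Ways to choose 20 flights out of 1,000 for the 20 sky marshals.
# This ignores tour and connection constraints, so it only bounds
# the simplest version of the problem.
print(f"{math.comb(1000, 20):.3e}")  # roughly 3.4e+41 combinations
```

At about 3.4 × 10^41 possibilities, no human scheduler could enumerate the space, let alone randomize over it convincingly.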

Not yet foolproof: Coastguard reported that their AI system told them to go faster than the boat’s top speed.

AI’s ability to draw conclusions from unorganized material makes it ideal for such challenges. “AI is very good at handling uncertainty and is very clever at determining which patterns it is even worth looking at.”

But it must be fed with the right information: Coastguards told Tambe’s team that their program was telling them to go somewhere faster than the boat’s top speed. “In the early stages, people often point out problems,” Tambe admits, “but in the end, we often find that, even if people say they didn’t think something would be a good idea, it actually worked. We have a lot of success stories.”

By Michael Lawton

Stay tuned as we continue this artificial intelligence discussion and focus on the safety concerns surrounding autonomous vehicles.

Future Lab

An initiative by ASSA ABLOY to observe future trends in security.