What can happen when we don’t have a clue why a somewhat autonomous gadget does what it does: the Gatwick Airport drone incident.
Roland Pihlakas, 28 January 2019
Here’s yet another example of what can happen when we don’t know why a somewhat autonomous gadget does what it does, or, as in the present case, even who it belongs to. It is almost too perfect an illustration. I strongly recommend reading it all the way through, for it features hysteria, witch-hunting, random accusations, and apologies flying in every direction.
All in all, this story illustrates that, considered in a broader sense, the problem of identifying the owners of autonomous devices, even of mere drones, can no longer be solved with robust methods.
I am posting this because I am deeply interested in possible solutions to similar problems with fully autonomous agents, in situations where no preparations have been made at the national level to deal with such incidents. I would also like to figure out what kinds of preparations would help mitigate such situations.
The problem lies in the fact that it is not possible to reconstruct a drone’s flight path the way forensic methods can determine the trajectory of a bullet, for example. And catching the drone does not equal catching the drone’s owner, let alone catching the true culprit.
What we do know, however, is that we’re off to a bumpy start, for drones are only modestly autonomous compared to the truly autonomous agents that will soon reach the market. Drones mostly become autonomous when they develop a malfunction. Not that this makes the aspect any less important to consider.
Anyhow, in the light of this incident, we can at least hypothetically ask: even if the drone’s owner is found, does that automatically mean the owner is liable for the damages, rather than, say, the manufacturer? The same question about the division of liability between owner and manufacturer will arise in the future, no longer hypothetically, once truly autonomous agents are in use.
To cut a long story short, here is an overview of what happened: a drone was spotted flying near Gatwick Airport in England, causing a great deal of trouble over an extended period right before Christmas. What followed were hysterics, a witch-hunt, and a flurry of accusations of negligence against the various parties involved. And all this took place in England, a fairly developed and prosperous country.
Both the media and the police, not to mention a member of parliament, were initially eager to put the whole affair to rest by finding a scapegoat as quickly as possible. This was done using the good old proximity heuristic: the most easily found suspect must be the real culprit. As if that logic still applied in the case of autonomous gadgets… The result was predictable: the urge to quick-fix a complicated problem grew into a full-blown witch-hunt in which the collateral damage far outweighed the gains. The publication of possible suspects, the 36 hours spent in interrogation, the rummaging through personal property… It is a wonder that nobody confessed to anything, for research indicates that during such arduous interrogations, given the combination of duration and fatigue, people become quite suggestible and can be made to believe just about anything about themselves. The investigators’ later comment of “sorry, but justified” simply does not compensate for the damage done to innocent parties. And their hope of perhaps finding the culprit by a stroke of luck never materialised.
For clarity, here is what this post is not about:
- Methods for shooting down or catching drones.
- Whether drones are autonomous.
- All kinds of airport security issues.
- Claiming that the police force is bad.
The story illustrates that what is needed is:
- An infrastructure for enforcing, on autonomous devices as well, the principles already found in existing laws.
- An infrastructure that enables justified accountability of the various persons related to a robot-agent (manufacturers, owners, or users).
The irony lies in the fact that once autonomous agents have entered the market, it will probably be impossible to regulate them after the fact, for retrofitting them with registries or accountability mechanisms is likely not possible. Those first-generation gadgets would therefore simply have to be banned. Of course, many would ignore such a ban, and all hell would break loose. The time to act to avoid this is right now, and we are running out of time.
- Project: Legal accountability in AI-based robot-agents’ user interfaces. Proposed legal, user-interface, and technical general principles for achieving accountability in advanced autonomous agents, along with clarifications of some popular confusions regarding AI technology in general. Determining why exactly a robot-agent made a particular decision is often difficult. But a whitelisting-based accountability algorithm can help greatly by readily answering the question of who enabled that decision in the first place. Whitelisting enables accountability and humanly manageable safety features in robot-agents with learning capability, making them both legally and technically robust and reliable.
- What happens when autonomous robots are not regulated, or, on the contrary, qualify as subjects of law? Proposals have been made that, in order to have a worthwhile dialogue on regulating autonomous agents, we should first determine which problems these regulations are supposed to solve.
- Gatwick Airport drone incident / Wikipedia.
- The famous slaughterbots drone-attack video on YouTube by the Future of Life Institute and Stuart Russell.
- The Islamic State of Iraq and the Levant, prompted by the events at Gatwick Airport, launched a new propaganda campaign against the West, posting online a poster threatening cities with drones.
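The whitelisting-based accountability idea mentioned above can be illustrated with a minimal sketch. This is my own hypothetical reading, not code from the project: every capability a robot-agent may use must first be enabled ("whitelisted") by an identified, accountable person, and every action the agent takes is logged together with who enabled it. The question "why did the agent decide this?" may remain hard, but "who enabled this kind of decision?" is always answerable. All names here (`Whitelist`, `enable`, `perform`) are illustrative assumptions.

```python
# Hypothetical sketch of whitelisting-based accountability for a robot-agent.
# Capabilities must be enabled by an accountable person before use; every
# action is logged with the identity of whoever enabled that capability.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Whitelist:
    # capability name -> identity of the person who enabled it
    enabled_by: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def enable(self, capability: str, approver: str) -> None:
        """An accountable person enables a capability for the agent."""
        self.enabled_by[capability] = approver

    def perform(self, capability: str, details: str) -> str:
        """The agent attempts an action; refused unless whitelisted.
        Returns the identity of the person who enabled the capability."""
        approver = self.enabled_by.get(capability)
        if approver is None:
            raise PermissionError(f"capability {capability!r} was never enabled")
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "capability": capability,
            "details": details,
            "enabled_by": approver,  # the accountability answer
        })
        return approver


wl = Whitelist()
wl.enable("fly_below_120m", approver="owner:alice")

# Allowed action: logged, and accountability traces back to the enabler.
who = wl.perform("fly_below_120m", details="survey flight over field 7")
print(who)  # owner:alice

# Action that was never enabled: refused outright.
try:
    wl.perform("fly_in_controlled_airspace", details="approach Gatwick")
except PermissionError as e:
    print(e)
```

The point of the design is that accountability is established before the fact (at enabling time) rather than reconstructed after an incident, which is exactly what the Gatwick investigators could not do.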