What can happen when we don’t have a clue why a somewhat autonomous gadget does what it does — the Gatwick Airport drone incident.

Roland Pihlakas, 28 January 2019

Gatwick Airport in 2012. Chris Sampson, https://en.wikipedia.org/wiki/File:Gatwick_Airport_aerial_view_-_Chris_Sampson.jpg

Here’s yet another example of what can happen when we don’t know why a somewhat autonomous gadget does what it does, or, as in the present case, even who it belongs to. As if on cue, this incident is a perfect illustration. I strongly recommend you read it all the way through, for it entails hysteria, witch-hunts, and random accusations and apologies this way and that.


All in all, this story illustrates that, considered in a broader sense, the problem of identifying the owners of autonomous devices, or even of drones, is no longer resolvable with robust methods.

I’m posting this because I am deeply interested in possible solutions to similar problems involving fully autonomous agents, in situations where no preparations have been made at the national level to deal with such incidents. I would also like to figure out what kinds of preparations would be beneficial for mitigating such situations.

The problem lies in the fact that it is not possible to reconstruct a drone’s flight route in the way that forensic methods allow us to determine, for example, the trajectory of a bullet. And catching the drone does not equal catching the drone’s owner, let alone catching the true culprit.


What we do know, however, is that we’re off to a bumpy start, for drones are actually only modestly autonomous compared to the truly autonomous agents that will soon surface on the market. Drones become autonomous mostly when they develop a malfunction. Not that this makes the problem any less important to consider.

Anyhow, in light of this incident, we can at least hypothetically ask whether, even if the owner of the drone is found, the owner is automatically liable for the damages, instead of, for example, the manufacturer. The same question of the owner’s versus the manufacturer’s share of liability will arise in the future, only no longer hypothetically, with the use of truly autonomous agents.

To cut a long story short, here’s an overview of what happened: a drone was spotted flying near Gatwick Airport in England, causing a lot of trouble over an extended period right before Christmas. This was followed by hysteria, witch-hunts, and a lot of accusations of negligence against the different parties involved. And all this took place in England, a fairly developed and prosperous country.

Both the media and the police, not to mention a member of parliament, were at first eager to make the whole situation go away by finding a scapegoat as quickly as possible. This was attempted using the good old proximity heuristic: the most easily detectable suspect must be the real culprit. As if this logic still applied in the case of autonomous gadgets… The result was predictable: the urge to find a quick fix for a complicated problem gave rise to a full-blown witch-hunt, where the collateral damage far outweighed the gains. The publication of possible suspects, the 36 hours spent in interrogation, the rummaging through personal property… It makes me wonder that nobody confessed to anything, for research indicates that during such arduous interrogations — given the combination of duration and fatigue — people become quite susceptible to influence and can be made to believe just about anything about themselves. The investigators’ later commentary of “sorry, but justified” simply does not compensate for the damage done to innocent parties. And their hope of perhaps finding the culprit through a lucky break simply did not materialise.


Clarifications for what this post is not about:

  • Methods for shooting down or catching drones.
  • Whether drones are autonomous.
  • All kinds of airport security issues.
  • Claiming that the police force is bad.

Conclusion.

The story illustrates that what is needed is:

The irony lies in the fact that once autonomous agents have entered the market, it will probably be impossible to regulate them after the fact, since retrofitting them with registries or accountability mechanisms is likely not possible. Those first-generation gadgets would therefore simply have to be banned. Of course, many would ignore such a ban, and all hell would break loose. The time to act to avoid this is right now, but we are running out of time.
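To make the idea of a built-in registry concrete, here is a minimal hypothetical sketch of an accountability mechanism of the kind argued for above: a registry that maps a drone’s broadcast identifier to an owner and a manufacturer of record, so that an observed device can be resolved to the parties who might bear liability. Every name and field below is illustrative — this is not any real system’s API, only an assumption about what such a mechanism could look like.

```python
# Hypothetical sketch of a drone registry, assuming each drone broadcasts
# a tamper-resistant ID (in the spirit of remote-identification proposals).
# All names and fields are illustrative, not any real system's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Registration:
    owner: str         # registered owner of record
    manufacturer: str  # relevant for the owner-vs-manufacturer liability question

class DroneRegistry:
    def __init__(self) -> None:
        self._records: dict = {}

    def register(self, broadcast_id: str, owner: str, manufacturer: str) -> None:
        self._records[broadcast_id] = Registration(owner, manufacturer)

    def lookup(self, broadcast_id: str) -> Optional[Registration]:
        # Returns None for unregistered IDs — exactly the gap the Gatwick
        # investigators faced, where no such registry existed at all.
        return self._records.get(broadcast_id)

registry = DroneRegistry()
registry.register("UAS-0001", owner="Alice", manufacturer="AcmeDrones")
print(registry.lookup("UAS-0001"))  # a registered drone resolves to its records
print(registry.lookup("UAS-9999"))  # an unregistered drone resolves to nothing
```

The point of the sketch is the asymmetry it makes visible: resolving an ID is trivial once the registry exists, and impossible to bolt on afterwards for devices that were never enrolled.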




Thanks for reading! If you liked this post, clap to your heart’s content and follow me on Medium. Do leave a response and please tell me how I can improve.

Connect with me —

Skype | Facebook | LinkedIn | E-mail

Three Laws

Topics about AI alignment, AI safety related problems, the "Three Laws of Robotics", and other proposed solutions. Join our Slack space: http://bit.ly/three-laws-slack. Currently the workspace consists of channels oriented towards technical and governance topics.

Roland Pihlakas

Written by

I studied psychology and have 15 years of experience in modelling natural intelligence and in designing various AI algorithms. My CV: http://bit.ly/rppro25028
