What happens when autonomous robots are not regulated or, on the contrary, qualify as subjects of law?

Roland Pihlakas
Feb 6

Roland Pihlakas, 17. January 2019

Proposals have been made that, in order to have a worthwhile dialogue on the subject of regulating autonomous agents, we should first determine which problems these regulations are supposed to solve.

Below I will present one set of possible introductory questions to be considered when dealing with the liability of autonomous agents, followed by my analysis of the subject. On top of that, I will scrutinise the suggestion, made by some, that autonomous agents should be made subjects of law.

In this article I will mainly touch upon the problems in dire need of solutions. I will not go into the details of the possible solutions themselves, as that subject far exceeds the scope of this article, and implementing them would probably require several phases. Another argument for separating the assessment of the problems from the deliberation of the solutions is that mixing the two easily leads to considerable confusion, especially when people of different backgrounds and expertise are involved. Thus far it appears that different people interpret even the basic terms differently. Therefore it would be helpful to start untangling this problem from the very beginning, by first asking the "whys".

General introductory questions for thought and discussion:

  • How is liability determined when an autonomous robot acts alone and there are no appropriate infrastructures or regulations concerning this matter?
  • Should the usufructuary / holder be able to install any software into their autonomous robots, in a similar manner as computers allow at the moment?
  • What happens when autonomous robots are made subjects of law and liable for their own actions (with the help of insurance companies and their own assets)?
  • Is the option of not regulating autonomous robots equal in impact to autonomous robots being subjects of law and liable for their actions?
  • What is the standpoint of insurance companies on this matter at the moment, and how could this field be regulated before gaining access to the necessary input data? How will this impact innovation? (At the moment, three viewpoints seem to prevail: 1) no regulation is necessary; 2) robots should be made subjects of law; 3) regulations should target the infrastructure that covers the data gathering and processing necessary for accountability and liability detection.)
  • Will insurance companies agree to insure all unaccompanied autonomous agents operating in public space?
  • Will it be possible for anybody to send autonomous robots out to act in public space in the future?
  • What are your views on the prospect of autonomous agents being used as weapons or for criminal purposes? How likely do you consider it to be, and how easy would it be to determine the real culprits?
  • Is a registry of drones necessary? Is it really conducive to notify a central authority of each and every flight?
  • How is the regulation of autonomous robots affected by the GDPR?

Analysis — What happens when autonomous robots are not regulated or, on the contrary, qualify as subjects of law?

1. Lack of accountability. When we do not have a state-controlled / national infrastructure that would allow the necessary information to be shared or accessed, or, even worse, do have a system, but one that lacks certificates and mechanisms for assigning liability / accountability, then in the event of damage it will be impossible to ask the major source of danger — the machine — where it came from, whom it belongs to, and why it behaved the way it did, and to get reliable answers.

  • A. When accountability mechanisms are inadequate as described in clause 1, it will become difficult to guarantee legal rights to the injured parties. In these cases it is difficult to determine liability (for, unlike any other machine, the autonomous agent can operate remotely and without human supervision. This is also a crucial differentiator between autonomous robots and organisations: the former can operate without the inclusion of human beings in the process). — 1) Consumer protection problems: the manufacturer's liability is hard to determine, and owners and users will have a hard time getting consumer protection in the case of a malfunction or other problems. — 2) Third parties and non-owners will be denied legal rights, for it is very hard to determine the liability of the robot-agent's owner / user. — 3) When society is denied legal rights, combined with the factors of clause 2 (below, lack of third-party feedback), this will lead to mistrust of and resistance towards autonomous agents. People will refuse to buy them on moral grounds. We see analogous situations with self-service checkouts in supermarkets, where people flock to the one human cashier at work, whilst the self-service booths remain empty.
  • B. Neither manufacturers nor users are sufficiently motivated to increase the safety of their products or of their use, leading to numerous accidents. — 1) Lack of technical supervision. Connected to best practices / standards / certificates / permits for manufacturers. — 2) Lack of proficiency on the part of users (users have not acquainted themselves with instruction manuals / videos). Connected to permits for users. — 3) Lack of customer support and help. Access to the necessary information may be hindered. A matter of general public interest. — 4) When autonomous agents become based on open-source code and anybody can build their own autonomous robots, there will be no more guarantees that the agents are safe and properly supervised / accounted for.
  • C. Arising from clause 1 (lack of accountability), the information necessary for identifying liability is unavailable to insurance companies, and due to 1.B (increased accident rate), insurance companies will not be willing to insure (for a reasonable price), for the risk is too big and erratic. This will inevitably impede innovation in the field of autonomous robots.
  • D. The private sector will abstain from developing and manufacturing certain autonomous products, owing to a lack of confidence in both the legislation and user acceptance. This has reportedly already happened to several prospective products.
  • E. When clause 1 (lack of accountability) is not solved, it will become very easy to use autonomous agents for criminal purposes, causing: — 1) Material loss. — 2) Physical injuries and suffering. — 3) Military or terrorist assaults. Connected to using the agents as weapons. — 4) Espionage and violation of privacy.
  • F. If these aspects are not taken into consideration in the development of autonomous agents, then it will be impossible for the manufacturer to trace the source of malfunctions and to analyse and correct these problems. Innovation will slow down.
  • G. Situations will arise where agents are used either carelessly or malevolently to profit their owners (financially or otherwise), at no cost to the owners but with significant risks or expense to third parties. This will lead to further financial stratification of society via unjust enrichment.

2. Third parties and even users will have difficulties finding the appropriate place to send their feedback in case of problems. Without feedback, the software cannot be improved.

  • A. This perceived helplessness will lead to frustration.
  • B. Lack of feedback hinders innovation on the manufacturers' side.

3. Interested parties will not have an adequate overview of agents in operation and their circumstances.

  • A. The state will have no overview of the types of agents in use and their functionalities, and will therefore have no grounds for new regulations or amendments. Connected to legislation.
  • B. The insurance companies will not have adequate information for risk appraisal and pricing policies, as well as for constructing all other clauses and claims for their contracts.

4. The state and real estate owners cannot inform the users of autonomous agents of variable, time- and situation-specific alterations / constraints on the use of agents.

5. The regulation of automatic warning / notifying and calling for help:

  • A. It will remain unspecified whether the autonomous agent must call for help in case of an accident.
  • B. It will remain unspecified whether autonomous agents should notify their usufructuaries / holders, owners, or manufacturers of other anomalous situations and mistakes, in order to avoid further possible damage.

6. Specific scenarios related to autonomous agents being considered subjects of law: the autonomous agent will either not be able to defend its actions in court, or the agent will raise a universal allegation against its own manufacturer, claiming that it was neither sufficiently prepared to act in a harmless manner nor competent to defend or explain its own actions in court. Again, this will stifle innovation, for no manufacturer wants this kind of open-ended liability.

  • A. If agents become subjects of law, they will need to be insured to a greater extent (for no party besides the insurance companies and the agent's own assets will reimburse those losses). In such a case the problem described in clause 1.C (the insurance companies' need to calculate risks and costs and determine liability) will become even more urgent, and insurance companies will refuse to take such risks (for a reasonable price).
  • B. When the maximum reimbursement rate and the agent's own assets do not suffice to cover the damages, the injured party will be left without full reimbursement (for no other party will step in in this matter).
  • C. Situations may arise where all damages are covered monetarily. Indeed, it is sometimes more convenient for the owner or manufacturer to compensate the damage by monetary means instead of admitting responsibility otherwise. But in actuality, most injured parties in accidents do not comfortably accept the notion of becoming simply an involuntary "damage-suffering service". Not all moral values can or should be measured in monetary terms. Furthermore, studies in social psychology demonstrate that when moral values are replaced by monetary fines (which, in turn, can be considered "fees" for certain actions), these moral values, judgements, and motives will be irrevocably impaired even once the monetary system is removed.

The irony lies in the fact that once autonomous agents have entered the market, it would probably be impossible to regulate them afterwards, for retrofitting them with registries or accountability mechanisms is likely not possible. Those first-round gadgets would therefore simply have to be banned. Of course, many would ignore such a requirement and all hell would break loose. The time to act to avoid this is right now, but we are running out of time.


I will conclude with a novel idea pertaining to the "solutions" part of regulating autonomous agents. This idea arose from the general resistance and the argument that regulating autonomous robots would simply be too difficult. I propose a type of regulation that would make following the regulations or entering the registries optional, not mandatory. This allows for the opportunity to follow best practices and to make use of the registry and other national services. In addition, compliance with the regulation could provide users with some additional benefits and bonuses as well. Making the system mandatory could then be a future prospect / project.


See also:

  • What can happen when we don't have a clue why a somewhat autonomous gadget does what it does — my analysis of the Gatwick Airport drone incident. All in all, the story illustrates the notion that, when considered in a broader sense, the problem of identifying the owners of autonomous devices is no longer resolvable with robust methods. What is needed is the infrastructure for enforcing the principles already found in existing laws on autonomous devices as well, and the infrastructure that enables justified accountability of the various persons related to the robot-agent (whether manufacturers, owners, or users).
  • Project: Legal accountability in AI-based robot-agents' user interfaces. Proposed legal, user interface, and general technical principles for achieving accountability in advanced autonomous agents. Also clarifications of some popular confusions regarding AI technology in general. Determining why exactly a particular decision was made by a robot-agent is often difficult. But the whitelisting-based accountability algorithm can greatly help by easily providing an answer to the question of who enabled making a particular decision in the first place. Whitelisting enables the accountability and humanly manageable safety features of robot-agents with learning capability, making the robot-agents both legally and technically robust and reliable.
  • Making the tax burden of robot usage equal to the tax burden of human labour. There have been proposals to introduce robot taxes. I would propose something slightly different as a potentially better alternative. The main point of my proposal is that the tax burden of technology is currently lower than the tax burden of human resources, and this will have to change sooner or later. By "technology" I mean all of the following: software, robots, even cyborgs, all other technological solutions, be it a trained bird on a branch — it makes no difference from the perspective of this proposed solution, and no registration or classification will be necessary. Therefore this solution cannot be bypassed by "legally correct" tricks for avoiding the taxes.
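To give a feel for the whitelisting-based accountability idea mentioned above, here is a minimal sketch of how such a registry could answer the question of "who enabled this decision". All names and the class design are hypothetical illustrations of the general principle, not code from the referenced project:

```python
# Hypothetical sketch: an agent may only perform explicitly whitelisted
# action types, and the registry records who enabled each action type.

class AccountableAgent:
    """Agent whose permitted actions are whitelisted by responsible persons."""

    def __init__(self):
        # Maps each permitted action type to the person who whitelisted it.
        self._whitelist = {}

    def permit(self, action, enabled_by):
        """A responsible person (manufacturer, owner, or user) enables an action type."""
        self._whitelist[action] = enabled_by

    def perform(self, action):
        """Refuse any action type that nobody has explicitly enabled."""
        if action not in self._whitelist:
            raise PermissionError(f"Action '{action}' was never whitelisted")
        return f"performed: {action}"

    def who_enabled(self, action):
        """Accountability query: who made this kind of decision possible?"""
        return self._whitelist.get(action)


agent = AccountableAgent()
agent.permit("deliver_package", enabled_by="owner: Jane Doe")
agent.perform("deliver_package")

# Even if *why* the agent chose this action is hard to explain,
# *who enabled it* is always answerable:
print(agent.who_enabled("deliver_package"))
```

Note that the accountability question is answered from the registry alone, without any need to interpret the agent's (possibly opaque) learned decision-making.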



Three Laws

Topics about the AI alignment, AI safety related problems, the "Three Laws of Robotics", and other proposed solutions. Join our Slack space: http://bit.ly/three-laws-slack. Currently the workspace consists of channels oriented towards technical and governance topics.

Roland Pihlakas

Written by

I studied psychology, have 15 years of experience in modelling natural intelligence and in designing various AI algorithms. My CV: http://bit.ly/rppro25028
