Accountability-related translated excerpts from the Estonian lawmakers’ report on regulating robot-agents — the so-called “kratt” law.
Roland Pihlakas, 2. November 2018
Note: Below you can find some of my personal highlights from the report, selected for their relevance to the topic of accountability. Though the highlights are not an exhaustive representation of the accountability-related statements in the report, they should provide an initial overview. Please note that the excerpts or highlights do not indicate whether I agree with the statements or not. They only indicate a subset of statements which I considered relevant. Topics I considered irrelevant, too broad or narrow, or too briefly represented in the report were left out, as were some relevant topics I had no time to cover here. I have also added some explanatory statements of my own. The text was unofficially translated into English by non-lawyers, utilising re-phrasing to a great extent, and it mostly uses common vocabulary.
The original report can be found here (it is unfortunately available only in Estonian language): https://www.mkm.ee/sites/default/files/loppraport_analuus_sae_tase_4_ja_5_soidukite_kasutusele_votmiseks_riigikantselei_2017_10_15_ver_10_final.pdf
The untranslated excerpts in Estonian language can be found here: “Vastutuse omistamise tehnoloogiate teemaga seonduvad väljavõtted “kratiseaduse” raportist.”
A publicly editable Google Doc with this text is available here in case you want to easily see updates (via history), ask questions, comment, or add suggestions.
This project is done in collaboration with the Department of Transportation in the Ministry of Economic Affairs and Communications.
The current law leaves the drivers/operators of self-driving cars without protection because they are responsible for the driving regardless of whether the car was in self-driving mode or not.
Since there have so far been no discussions about regulating the testing and use of self-driving cars in Estonia, the Ministry now plans to initiate such discussions.
The most attention in the report is paid to updating general laws and to creating new robotics specific regulations.
The operator is the person who sits in the driver’s seat; where there is no such person, the operator is the person who initiates the start of the autonomous technology.
In order to avoid drafting a new specific regulation every time a new technology is created or taken into use, it is proposed that a new robotics regulation is drafted and that self-driving cars are also referred to by the common denominator “intelligent robot”, in addition to the specific concept “robot-agent” (an exception would be one of the scenarios analysed in the report).
There are four scenarios proposed:
In the first scenario, the robot-agent is interpreted as an intelligent robot which obtains limited legal capacity as a result of an entry in the registry made by the owner. At the same time the robot-agent is not made equal to a person; instead it is treated either as an object or as a person, depending on the situation. An intelligent robot that has no limited legal capacity is treated simply as an object (including as a source of increased danger, imposing responsibilities on its holder).
In the second scenario, the intelligent robot is understood as a robot that can represent the owner or holder in a transaction, given their approval. Such an intelligent robot would generally be treated as an object, similarly to an animal, unless the law states otherwise. The intention in this case would be to highlight the autonomous, non-controllable, and autonomously learning aspects of the robot, while guaranteeing contract and tort law treatment.
In the third scenario, the robot holder does not have the option of giving the intelligent robot legal rights; instead, the legal definition of a robot and the rights, obligations, and responsibilities of the manufacturer and holder would be regulated by a separate robotics law.
In the fourth scenario, only autonomous cars would be treated in the updates to the law.
Need for a regulation — Chapter 1. (1) I.
There is a principle that the novelty of a technology does not by itself mean that existing laws are unable to regulate it.
In various fields the existing law can be applied with correct and fair results even to new technologies, and therefore updating the laws is not necessary. It is important to regulate as little as possible and as much as necessary. An important principle is the principle of technology neutrality.
A regulation for intelligent robots was created in 2008 in South Korea with the purpose of improving clarity and innovation. In this act they define the concept of the intelligent robot, pay attention to important ethical questions, and provide directions for using the country’s financial resources.
Practical examples — Chapter 1. (1) II.
The self-driving car collects an enormous amount of information during its movements, and although part of this information is by itself anonymous, most of it can still be connected to the user. Currently there is no legal basis for such processing of personal data. Moreover, the GDPR does not provide a legal basis for the car owner to use the data collected during the car’s use, nor does it clarify how, whether, by whom, and to whom this information can be transferred.
A question is whether the state should obligate the manufacturers of the self-driving cars in such a way that in case of an emergency, while driving empty, they should always “self-sacrifice”.
It is not important whether the robots are based on artificial intelligence or are autonomous in other ways. The future society needs new ethical, moral, and legal norms in any case. As Jaan Tallinn has said in his interview with Allan Aksiim: “The beings who could destroy the world do not necessarily need to be self-conscious. Computers do not need to be conscious in order to beat humans at chess, just as consciousness is not necessary for making world-changing management decisions.”
Concepts — Chapter 1. (2) I.
Being autonomous means that an individual is “turned on”, it has sufficient resources, and it is able to perform activities based on certain competences without additional external input. In this interpretation each act of an individual which can potentially be decided and made without immediate command, is autonomous.
One of the internationally discussed topics recently has been also autonomous weaponry.
Concepts — The concept of robotics and the robot — Chapter 1. (2) II.
A robot should: 1. Meet various criteria, including the ability to solve tasks and to function according to certain principles. 2. Be composed of a physical machine which is able to act according to its surroundings, or have some indirect physical support aspect (for example, a financial robot or algorithmic robot has server hardware, Apple’s Siri has a microphone and a speaker, etc).
An intelligent robot: 1. Obtains its autonomy thanks to its ability to analyse data which it collects through its sensors, or through the capability to exchange data with its environment. 2. Has the ability to learn, communicate, relate (these are meant to be elements of AI). 3. Has a physical support system. 4. Adjusts its behaviour and activities based on the environment.
A surgical robot is not autonomous because a human being takes part in its decision processes. But this does not mean that such surgical robots should not be regulated for safety and training purposes.
An intelligent robot has to make decisions in different situations which cannot be exactly predicted by the engineer. Therefore such a robot is described as an artificial object or system which perceives the world and acts based on that. This leads to “unpredictable beneficial behaviour”. The behaviour of such a robot depends largely on its programming, which is so complicated that the behaviour is impossible to predict.
About rights and obligations — The robot’s similarity with an animal — Chapter 1. (3) I.
In the legal literature there has been an increasing number of treatments applying the legal aspects of an object to the robot, as well as treating physical robots as animals. This has mainly two reasons. First, the question of how to assign responsibility in cases where the risk or manufacturer liability laws do not apply (in such a case a responsibility like the animal owner’s responsibility would be applied). Secondly, reasons similar to the ones found in animal protection law (public interest, mistreatment of robots, human values and morality).
In legal literature there have been warnings against equating the robots with animals too much because at least currently the technology does not have “sensations”, “instincts” like animals have, nor “behaviour from its own will” to the extent that animals have.
An approach comparing agents with animals has been proposed by US law researchers, but for various reasons it cannot be transferred to Estonian law.
Nor can the treatments of the animal analogy from international legal literature be transferred to Estonian law, because a “thing” is a physical object. Therefore the analogy is applicable to a robot only if we always require the presence of hardware.
About rights and obligations — Personhood and legal capacity — Chapter 1. (3) II.
In addition to the question whether the robot should be treated similarly to an animal in ownership and responsibility questions, there is also the question whether we should recognise the limited legal capacity of a robot. With the latter it is meant whether the robot should be treated as an agent, not as a person.
Currently there is no legal system yet in which the robot would have legal capacity, but this is under active discussion. /…/ It is claimed that providing the robot with limited legal capacity is beneficial socially, economically, and also politically.
The EU 2014 RoboLaw guidelines for regulating robotics suggest that robots could have a limited legal status similar to the legal status of a legal person. This would enable the robot to participate in contracts. The need for the status of a legal person is also stressed in an application to the EU Parliament and Committee about civil law norms in robotics, which states that at least the most complicated autonomous robots could have the status of a legal person with their own rights and responsibilities. That would include compensating the damage caused by them, as well as applying electronic personhood in cases where robots make autonomous decisions or interact in other autonomous ways with third parties.
About rights and obligations — Personhood — Chapter 1. (3) II. (i)
The current report does not analyse the option of creating a separate type of legal person. Instead it is proposed that the intelligent robot could be treated as an agent of a natural person or of a legal person.
A natural person has the status of a person because it is a human being; a robot is not a human being. In the current legal system legal persons can operate only because they are represented by humans.
In legal literature, a legally competent robot has been called an electronic person or also an agent.
About rights and obligations — Partial legal capacity — Chapter 1. (3) II. (ii)
Legal capacity is traditionally understood as the capability to hold civil rights and obligations. This capability is the same for all persons and it cannot be limited. In the civil code the subjects of civil rights are called persons. They can have rights and obligations and they can take part in legal relations. Both natural and legal persons have legal capacity, which has the same meaning in both cases but a different scope — legal persons cannot have the civil rights and obligations which apply only to human beings.
About rights and obligations — Limited active legal capacity — Chapter 1. (3) II. (iii)
The concepts of active legal capacity and limited active legal capacity of robots who need to participate in transactions in the name and interests of their holder need to be applied in such a manner that it would not disrupt civil trust via invalid transactions.
1. Guardianship and designated transactions. In the case of adults with restricted active legal capacity, the law allows the appointment of a guardian, and the extent to which the guardian operates in the name of the ward defines the extent of the restriction. The same analogy should be applied to the intelligent robot — the owner of the robot is the guardian in everything where the robot itself does not have permission to make agreements (in the interests of the guardian or robot holder). 2. Permissions / clearance. In the same manner, the analogy of permissions, or sometimes postponed permissions, can be applied. Valid transactions can be made by a robot with restricted active legal capacity when they are cleared by the guardian before the transaction. In a similar manner such a robot can receive the fulfilment of obligations by others. 3. The “pocket money” rule. One of the alternatives to consider is the “pocket money” rule, where the precise form of the allowed transaction is not dictated, but any transaction / operation remaining within a clearly defined limited amount of resources is automatically cleared for the agent with restricted active legal capacity.
In all three of those cases the list of permitted transactions or exact monetary amount of the allowance should be described in a real-time accessible robot-agent registry (see chapter 3. — Main problems and probable solutions: the “Registry of Robot-Agents”).
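The interplay of the three clearance mechanisms and a registry can be illustrated with a minimal sketch. All names and the registry structure here are hypothetical illustrations, not taken from the report — the report only requires that the permitted transactions or the allowance amount be accessible in real time:

```python
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    """Hypothetical entry in the proposed robot-agent registry."""
    agent_id: str
    # transaction types pre-cleared by the guardian/holder (mechanisms 1 and 2)
    permitted_transactions: set[str] = field(default_factory=set)
    # "pocket money" limit (mechanism 3), e.g. in euros
    allowance: float = 0.0

def is_transaction_cleared(entry: RegistryEntry, kind: str, amount: float) -> bool:
    """A transaction is valid if its type was pre-cleared by the guardian,
    or if it stays within the automatically cleared allowance."""
    if kind in entry.permitted_transactions:
        return True
    return amount <= entry.allowance

entry = RegistryEntry("EE-KRATT-0001", {"parcel_delivery_fee"}, allowance=20.0)
print(is_transaction_cleared(entry, "parcel_delivery_fee", 150.0))  # True: designated transaction
print(is_transaction_cleared(entry, "toll_payment", 5.0))           # True: within the pocket-money limit
print(is_transaction_cleared(entry, "toll_payment", 50.0))          # False: would need guardian clearance
```

A third party could query such a registry before a transaction to verify whether the robot-agent acts within the scope its guardian has defined, which is the civil-trust property the report asks for.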
Furthermore we must point out that penal law does not recognise the concept of active legal capacity, only the concept of guilt capacity (the capacity to be aware of and to regulate one’s behaviour according to the norms accepted by society). Therefore, in the foreseeable future we have no grounds to accept the ability of intelligent robots to recognise the damage they may cause, for they operate only in the interest of their owners and are thereby not subjects of penal law or offence law.
Responsibility — Civil liability — Chapter 1. (4) I.
Responsibility for any losses sustained by another human being can derive from: — defects in the machinery or program; — the absence or inadequacy of instructions by or for the owner/guardian; — the negligent or criminal behaviour of the owner/guardian. In the latter case, the regular contractual or non-contractual liability institute can be applied. In the former cases, many questions arise — the professionalism of the owner, the robot’s capacity to evolve and become distanced from its creator; in the case of open-source robots, even the issue of many contributors and how their responsibility should be distributed.
In addition to that, it is predicted, that the main users of self-driving cars will not be laymen but companies, who will start hiring or leasing such machines to the final user (car-sharing companies) — in other words, there will be multiple layers of responsible parties.
There have been indications in civil liability related sources that with the rise of new technologies, the institutes of the major source of danger and of manufacturer’s liability will not be able to distribute the burden of proof fairly.
Responsibility — Risk liability — Chapter 1. (4) I. (i)
The general idea behind liability for a major source of danger is that the owner utilises the motor vehicle in his own interest, and therefore any potential accompanying damage is a risk of his operation. The risk of operation entails the risk of automatisation. This might entail the incorrect use of autonomy as well as a technical system failure leading to an accident. Therefore accidents caused by autonomous vehicles, including self-driving cars, will be covered by the liability of the possessor of the vehicle.
We do not agree with the argument in Estonian legal literature that risk liability will definitely discourage people from buying autonomous vehicles, and that we should therefore consider abolishing the whole concept of risk liability in the context of autonomous vehicles. It is understandable that when risk liability applies regardless of actual fault, people do not want to be liable in situations where they have no control over the vehicle and therefore cannot prevent accidents. /…/ Therefore, in the case of autonomous vehicles as well as regular ones, it is not the avoidance of risk liability but mandatory motor third party liability insurance that will help society accept the corollary risks.
However, motor third party liability insurance does not cover all losses — e.g. damage to the owner of the vehicle himself. The first practical solution would be, among others, casco insurance.
Legal literature refers to a solution to the problem via placing the liability on the next potentially responsible party — the manufacturer.
Responsibility — Manufacturer’s liability — Chapter 1. (4) I. (ii)
Manufacturer’s liability is defined and laid out in the Law of Obligations Act: if damage to or the death of a user has occurred because of a fault or a defect in the product, the liability falls on the manufacturer.
Various literature implies that although self-driving cars are considered safer, the manufacturer’s responsibilities still increase to the extent of hindering innovation. We do not agree with such a statement, since the manufacturer’s responsibilities are part of its operating risk, and they help in finding a justified balance in situations where the operator of the vehicle and the insurance company do not bear a sufficient amount of responsibility, or their responsibilities are not relevant.
According to the directive, a product is faulty to the extent where it is not as safe as could be justifiably expected by a person.
Safety is the main concern pertaining to self-driving cars. Even the most meticulous research cannot exclude all possible glitches in software. However, the user has a reasonable right to expect a safe product, and therefore the manufacturer is liable in case an accident has occurred due to a faulty program or similar reasons.
One of the problems pertaining to the responsibilities of the manufacturer lies in the fact that obtaining proof about the liability of the manufacturer is a complicated and costly process considering the complexity of the technology. The burden of proof statute in the Law of Obligations Act states that the injured party must prove that the product was faulty and that the damage done happened due to this defective state of the product.
The provisions of the manufacturer’s liability directive have been criticised specifically in the context of possible accidents of self-driving cars, arguing that it is too complicated to claim the liability of the manufacturer, since the technological complexity of self-driving cars makes it difficult for a layman to understand and prove the causality between the defect and the accident.
Also, future courts must be able to identify and assess software malfunctions as well as plain physical defects, which requires a whole new level of competency from our courts. It is advisable to promote the creation of credible standards for manufacturers on a national level.
It is predicted, that the main source of dispute between the manufacturers and users will be on the subject of faulty designs.
Marketing errors of self-driving cars. One of the possible shortcomings of the product may be that the user is not sufficiently informed about how to use the product safely. The manufacturers need to guarantee the education of the users, because autonomous cars and agents are a new technology and the risk from misuse is heightened. One of the possible solutions would be mandatory instructional videos.
It is necessary to determine how much user education by the manufacturer is sufficient after which the manufacturer is freed from education related responsibility.
Responsibility — Evidence-related technology — Chapter 1. (4) I. (iii)
It is probably necessary to use black boxes in order to find out who was responsible at the time of the accident. One of the related questions is who is the owner of the collected information and in which circumstances are the manufacturers obligated to provide the information from the black box.
One of the circumstances which makes the assignment of responsibility more complicated is the issue of learning in intelligent robots. It is assumed that in an unusual situation, or due to some other unforeseeable cause, the robot may behave differently from its original programming, or even act entirely autonomously because such a situation is missing from its programming.
A self-driving car may not stop at a crossing even when it detects another approaching vehicle, because it may “assume” from previous experience that it is driving on a main road.
Still, it has already been argued in legal literature that this does not mean that existing regulations for risks and manufacturer responsibility cannot be applied here.
Responsibility — New risks — Chapter 1. (4) I. (iv)
A new risk is that a pedestrian may transfer their previous life experience with cars to autonomous cars and assume that when the vehicle is slowing down, this also means that the vehicle is going to stop. But self-driving cars are slowing down all the time and may ultimately not stop in case they did not detect the pedestrian.
Using recording technology for analysing accidents has been criticised in association with privacy and data protection laws, as well as in light of one of the base principles of criminal law — the right of people not to incriminate themselves (the nemo tenetur principle).
Responsibility — Criminal liability — Chapter 1. (4) II. (i)
Because the robot’s own responsibility is currently a very futuristic topic (if it will ever be possible), it is very important that the robot holder or manufacturer can be held responsible. Otherwise there will be a legal loophole where the victim and society suffer because nobody is held responsible.
On the one hand, it is not possible to foresee all the actions of a robot. The robot is operating autonomously without the immediate presence of the robot’s holder. On the other hand, the robot’s holder should still be de facto responsible as they should be able to foresee all the possible harmful situations.
In Germany the manufacturer responsibility regulations are also a part of the criminal code. They cover the situations of negligence. The product should meet certain standards and its safety must be tested before it is permitted to enter the market. Even more, the manufacturer has the obligation to continuously gather data from the users and to react to any damage situations while the product is in the market.
In criminal law, blame can be assigned in three ways: — First, execution through somebody else. Analogous situations would be using a dog, a child, or some machine for the perpetration of the crime. This can be done by the holder intentionally modifying the settings of the machine, by an unauthorised third party modifying the settings, or also by the manufacturer modifying the settings. — Secondly, execution through indirect intent, where the negative consequences should have been foreseeable. Mostly this covers not accepting the robot’s software updates or not performing maintenance procedures. — Finally, the crime might be perpetrated by the robot itself.
There are also new kinds of violations regarding where the autonomous technology can be used and where it must not be used.
One hypothetical scenario is that the software that is built for protecting the computer from malware finds a new way for achieving its goals: it hacks a website it considers dangerous and erases all suspicious files there. But that would be destruction of the property of others.
Responsibility — Ethical choice issues: collision — Chapter 1. (4) II. (ii)
One of the topics that has been covered in literature is the question of ethical choices in case the accident is unavoidable, but there is a choice regarding the potential victim. This is called the Trolley problem. /…/ The question here is who should decide the rules for such situations, should they be based on the lawmaker’s, manufacturer’s or user’s preferences?
At the moment, such choices of life-or-death cannot be regulated since such situations are uncontrollable, but in the case of autonomous technology the situations might become partially controllable.
Responsibility — Carrying out the state supervision — Chapter 1. (4) II. (iii)
Another new topic is the question whether a self-driving car with only an under-age passenger in it should stop when the police request it to stop. What are the permissions of the police in case the self-driving car fails to do so? On one hand there is the risk of self-driving cars being hacked, but on the other hand it is feared that the police might misuse their power.
The question is how to communicate with the owner of the car in case it is driving empty or with an under-age passenger. One potential solution is that such cars must have a special piece of equipment which enables contacting the owner.
Responsibility — The robot’s responsibility — Chapter 1. (4) III.
The current report does not propose a solution where the robot has its own responsibility. In legal literature there have been discussions about the issue of the unpredictable nature of the autonomous robots’ actions which are therefore unrelated to their owners’ or manufacturers’ will.
It has been proposed to establish an insurance fund for compensating the victims of such scenarios.
Still, in criminal law it has been found that even though the robot can be unpredictable, it has no self-consciousness and therefore it is meaningless to talk about a robot’s own responsibility. This applies even while acknowledging that legal persons can be made responsible.
Insurance — Chapter 1. (6)
Arguably, one of the unsolved questions is how the insurance companies should calculate their prices, given that self-driving cars may learn from their accidents and improve their systems.
It has been argued in legal research literature that the public interest for accepting self-driving cars is so great that we should avoid any responsibility issues. Instead, we should arguably simply cover all accidents from a specialised national fund. Otherwise it is feared that the responsibility issues will slow down the innovation due to associated business risks.
Privacy and data protection — Chapter 1. (7)
While much of the information collected by the self-driving car is anonymous, it can nevertheless be connected to the location and habits of the user.
Non-legislative issues — Code of Ethics — Chapter 1. (9) II.
AI safety has become a branch of science which is researching ways to formulate robust, universal, and generally helpful base ethical principles.
Non-legislative issues — Infrastructure — Connected driving — Chapter 1. (9) III. (iv)
Connected driving can make use of two additional communication channels: data exchanged from vehicle to vehicle, and data transmitted from the infrastructure to the vehicle. For example, data about accident locations can be shared, which enables smoother braking and acceleration depending on the road conditions.
One of the ways for ensuring data integrity and for protecting against malicious modifications is utilising blockchain-based technologies.
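As an illustration of the integrity idea (a minimal sketch, not a description of any specific technology mentioned in the report), a hash chain links each record to the hash of the previous one, so that any later modification of an earlier record invalidates the whole chain. The record contents below are invented examples:

```python
import hashlib
import json

def append_record(chain: list[dict], payload: dict) -> None:
    """Append a record whose hash covers both the payload and the
    previous record's hash, making later tampering detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and check the links between records."""
    prev_hash = "0" * 64
    for rec in chain:
        body = json.dumps({"payload": rec["payload"], "prev": rec["prev"]},
                          sort_keys=True)
        if rec["prev"] != prev_hash or \
           hashlib.sha256(body.encode()).hexdigest() != rec["hash"]:
            return False
        prev_hash = rec["hash"]
    return True

chain: list[dict] = []
append_record(chain, {"event": "accident warning", "location": "59.437,24.754"})
append_record(chain, {"event": "braking", "road": "icy"})
print(verify(chain))                    # True
chain[0]["payload"]["road"] = "dry"     # tamper with an earlier record
print(verify(chain))                    # False
```

Distributed blockchain systems add consensus and replication on top of this chaining, which is what makes malicious modification impractical rather than merely detectable.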
Non-legislative issues — Expertise centres — Chapter 1. (9) IV.
In legal research literature it has been found that specialised institutions and expertise centres for supporting the private and public sector in deployment of robots are necessary. Such centres can provide information about consumer rights, product safety, etc.
Additional key issues related to legal entity — Chapter 1. (10)
The authors of the report believe that robots should not be given personhood, neither today nor in the foreseeable future. Regardless of the intelligence of the robot, the goal of a robot is fulfilling a task which was given by the manufacturer or by the user. In that sense the robot is legally a representative of another person.
Additional key issues related to legal entity — Other robot-related rights and freedoms — Chapter 1. (10) V.
With regard to the right to free speech, robots are again different: instead of protecting a robot’s right to self-expression, we should ask whether the receivers of the information have the right to receive this particular information. This applies even though legal persons have the right to self-expression.
The main problems and suggestions — Chapter 3. (1) I.
The authors of the report propose to distinguish between the general concept of “intelligent robot” and the special concept of “robot-agent”. They propose that an “intelligent robot” qualifies as a “robot-agent” only when it is registered.
The authors of the report propose that an intelligent robot is treated as an object (in the context of sale, rent, storage, etc), but with respect to its autonomous and learning properties it is also treated as an animal. The authors propose that the declaration of will should be rephrased so that the agreement for a transaction given by the holder via the intelligent robot is valid both in the context of the civil code and in the context of the Law of Obligations Act.
The main problems and suggestions — The definition of “intelligent robot” — Chapter 3. (2) I. (i)
One of the problems in Estonia is that the word “robot” is already in use in our legal space, where it denotes the “mover robot”, also called the package transport robot. This concept is currently defined in a very particular way, describing the methods by which the robot obtains information from its environment. The authors of the report consider this implementation detail unimportant and impractical. For the proposed concept of “intelligent robot” they suggest including only the use of information in its definition, regardless of how the information is obtained.
The intelligent robot is a device, machine, technology, or method which is able to perform its tasks fully without human control, deciding its actions and evaluating their consequences based on the information that has been obtained from the environment.
- “Project: Legal accountability in AI-based robot-agents’ user interfaces.” The central subject of this project is legal accountability in artificial intelligence. We are going to show who can justifiably and fairly be made responsible for the actions of artificial agents, and how whitelisting can help both artificial intelligence developers and legislators in making sure that we will have as few surprises as possible. In other words, the project will research the possible ways to control and limit the agents’ actions and learning from a legal point of view, by utilising specialised, humanly comprehensible user interfaces, resulting in clearer distinctions of accountability between the manufacturers, owners, and operators.
- An older document with a general technical overview of the original idea of the above referred project: “Permissions-then-goals based AI user “interfaces” & legal accountability: First law of robotics and a possible definition of robot safety”
- Another older document with a slightly more detailed technical overview of the original idea of the above referred project: “Implementing permissions-then-goals based AI user “interfaces” & legal accountability: Implementing a framework of safe robot planning”
- The untranslated excerpts in Estonian language: “Vastutuse omistamise tehnoloogiate teemaga seonduvad väljavõtted “kratiseaduse” raportist.”
- Estonia considers a ’kratt law’ to legalise Artificial Intelligence (AI)
- The original report can be found here (it is unfortunately available only in Estonian language): https://www.mkm.ee/sites/default/files/loppraport_analuus_sae_tase_4_ja_5_soidukite_kasutusele_votmiseks_riigikantselei_2017_10_15_ver_10_final.pdf