Estonia considers a ’kratt law’ to legalise Artificial Intelligence (AI)

Marten Kaevats
E-Residency Blog, E-residentsuse blogi
9 min read · Sep 25, 2017

In Estonian mythology, a kratt is a creature brought to life from hay or household objects. Estonia now faces the very real challenge of regulating the rise of autonomous machines in order to support AI entrepreneurs and protect the public interest.

The mythological creature kratt in the Estonian film “November”, Homeless Bob Production 2016

Estonia is known for its ’firsts’. We were the first country to declare internet access as a human right, the first country to hold a nationwide election online, the first country in Europe to both legalise ride sharing and delivery bots, and — of course — the first country to offer e-Residency.

Countries around the world now face the challenge of understanding the rise of Artificial Intelligence, which increasingly affects the daily lives of their populations. So which country will be the first to develop a comprehensive legal framework that ensures the technology can be developed in an ethical and sustainable way? We think the answer should once again be Estonia.

This work to understand AI in Estonia started with our self-driving vehicles task force. However, it quickly became clear that their scope was too limited, as working on traffic regulations alone is simply not enough given the far-reaching implications of the technology. Regulating mobility on its own will only lead to more complexity and possible misunderstandings for society. Instead, we need to streamline the whole process and legalise AI. To introduce better regulations, society must also play a role in co-creating the necessary framework so that the end result is understandable for everyone. The task force has suggested four different options regarding how to regulate AI in a user-friendly way.

The work, started in November 2016, is led by the task force together with the Ministry of Economic Affairs and Communications and the Government Office. Experts from all walks of life have been included in the discussions on how to solve the problem of accountability in machine- and deep-learning algorithms. These algorithms are very different from typical programs because they lack the usual ‘if-then’ type of logic. In the event of an incident, even the creators of the algorithm may not know exactly where the mistake occurred, because the decision-making of these systems is intuitive. These ‘black box’ type algorithms possess great potential for value creation in a digital society, but are legally hard to define.
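As a rough illustration of that ‘black box’ point (the data, names and numbers below are invented for this sketch, not taken from any real system), compare a transparent ‘if-then’ rule with a tiny learned model: the rule can be read and blamed, while the learned model’s decision is buried in fitted weights.

```python
import math

# Transparent 'if-then' logic: anyone can point to the exact condition.
def rule_based(speed_kmh: float, limit_kmh: float) -> bool:
    return speed_kmh > limit_kmh  # True means "flag as a violation"

def train_logistic(samples, labels, steps=2000, lr=0.1):
    """Tiny logistic regression fitted by gradient descent.
    The decision lives in the learned weights w and b; no single
    'if-then' line explains any individual prediction."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in zip(samples, labels):
            p = 1 / (1 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Hypothetical training data: normalised speeds and violation labels.
w, b = train_logistic([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])

def learned(x: float) -> bool:
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5

print(rule_based(55, 50))  # transparent: True, and we can say exactly why
print(learned(0.85))       # opaque: the 'why' is buried in w and b
```

Real deep-learning systems have millions of such weights rather than two, which is what makes after-the-fact accountability so hard to assign.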

The liability question is not technically difficult; it is an ethical dilemma. Technically, there are several existing options to choose from: personal, producer, service-provider and even governmental liability, where the government covers the cost. But the focus of this question is ethics. When my child is harmed by a self-driving car or other type of robot, I really want to point my finger at somebody and say: “You are guilty and will go to prison.” At a time when algorithms are ensuring the safety of our society, and doing it far better than our current sets of rules, we have to accept emotionally that there is not someone to blame in every case — just as in most train accidents today. Train drivers cannot always bear the burden of blame when the laws of physics sometimes make it impossible for them to avoid accidents.

Testing self-driving vehicles has been legal on all public roads in Estonia since 2 March 2017

To define the scope and domain of this process even better, it is important to understand that we are currently working on narrow AI, while taking into account the possibilities of general AI. The aim is not to solve the issue of super-intelligence, which is still far off and a far more complex problem. So for the sake of clarity — we are not working on a ‘Terminator Skynet’ scenario. Rather, we are solving the problem of liability in systems that are already quite common (e.g. financial bots). The number of these kinds of expert systems is growing, and the lack of legal clarity in this domain is a major obstacle to their implementation in the physical world. The easiest examples are self-driving vehicles, but we must also consider smart refrigerators, some big data analytics tools, predictive algorithms of various natures etc. In this context, the aim should be to give representative rights to algorithms. But rights also mean responsibilities.

Agenda

The legalisation of AI will have a deep and far-reaching impact on the everyday lives of our citizens. For the local economy, this means pulling down barriers to the further digitalisation of our industries, bringing in new investment, and creating new jobs in ICT, while also abolishing some jobs at the same time. The lack of legal clarity is the biggest obstacle to wide-scale implementation. Potential investors need to know what will happen when things go sour. Local entrepreneurs and civil society may then start to experiment with new technologies and service models, thus actually enabling the next industrial revolution.

Legalising AI will remove barriers to enable the next industrial revolution

For the citizens it means lots of new types of services and products that are easy to use and remove a lot of mundane tasks from their lives. It also means more free time and a rise in their productive time. It will be our choice how to make the best use of this.

The global perspective is different. Estonia, as a country of 1.3 million people, is a perfect test ground for new and bold ideas, and a place to experiment at relatively small capital cost. At the same time, being bold and implementing new ideas also means that the local culture is open-minded towards failure. The key is to learn from each failure. We see Estonia as a pathfinder, constantly moving into uncharted territory. The practical experience and know-how from these experiments will be our contribution to the global discussion, so that governments with far bigger headcounts can avoid strategic mistakes.

In addition, Estonia is the first country to introduce e-Residency — a programme that is successfully attracting skilled entrepreneurs from around the world and providing them with access to our business environment. Many e-residents have focused their entrepreneurial activities on emerging industries, such as Artificial Intelligence, so providing a better legal framework can further enhance the value of e-Residency and bring even more benefits to Estonia.

Options

The law firm Triniti, with a team led by Karmen Turk and Maarja Pild, has outlined the options for giving representative rights to AI. Representative rights mean that AI can buy and sell products and services on its owner’s behalf. The owner might be a private individual using Siri, or it might be, for example, a brokerage firm that uses algorithms to buy and sell shares. The legal work is not yet complete. We are still exploring these options and want you to participate in this discussion.

The biggest conversation starter is probably the idea of giving separate legal subjectivity to AI. This might seem like an overreaction or unnecessary given the status quo, but legal analysis from around the world suggests that in the long term this is the most reasonable solution. Some technology-minded legal experts even claim that this is inevitable within 5 to 8 years. But when drafting laws we need to take the longest possible perspective and try to future-proof our decisions now as much as possible. In this case, AI would be a separate legal entity with both rights and responsibilities. It would be similar to a company but would not necessarily have any humans involved. Its responsibilities would probably be covered by some new type of insurance policy, similar to vehicle/motor insurance today. In Finland there is already a company with an AI as a voting board member. Can you imagine a company that has no humans in its operations?

Another option is changing and broadening the scope of something lawyers refer to as the ‘declaration of intent’. This opens the philosophical discussion of what ‘will’ is. Currently, intent is understood as quite a regular and straightforward thing. When I go to a bar and tell the barman I want a beer — that’s obviously quite easy. But broadening the scope means that I would say: “I want something for the next three years.” Now the barman has to assess correctly, each and every time I walk into the bar, whether I would like a beer, coffee, tea, sandwich or Cuba Libre. And the barman will do it based on the particular time of day, my mood and habits, the group of people I am with, etc.

Also, there is a need to put in place a robotics/AI act to outline the necessary principles and to underpin the technological advancements. Even though when we talk about AI we are referring to algorithms, the border conditions also need to be defined in a clear way. What are sensors, legally? How is sensor data managed? Who owns what? In Estonia, the underlying value of our information society is that citizens and other users have ownership of their own data; the government or private companies merely provide the service of keeping it safe and private. The same core value applies here, but the difficult question is how exactly to enforce it. A robotics act would also try to draw some red lines that decision-making algorithms must not cross. These lines would be based on values and ethics.

Communication

The work of the self-driving vehicles task force has indicated that the idea of driverless vehicles is strong and understandable enough for a non-specialist that it can be used as a communication frontline to explain other, more complex ideas to society. This technology also embeds all of the critical issues of the digital era: data privacy, openness, transparency, trust, ethics, liability, integrity etc., making it the perfect conversation starter for much wider topics such as AI, the internet of things, robotics etc.

In Estonia, we have another trick up our sleeve: we can use our rich culture of linguistics and mythology as a vehicle for understanding more complex technological issues. For example, in Estonian mythology we have a character called the kratt, a creature which has existed in our cultural space for hundreds of years and which is composed of a number of unique features. When the owner acquires from the devil a soul for its kratt (in modern tech talk, this means an algorithm), the kratt begins to serve its master.

From a communication point of view, the “kratt” narrative is useful because every Estonian knows this story. Kratts are something that society understands; AI is something complex and difficult to understand. From a technological point of view, the kratt character has exactly the same features as AI. When the Czech writer Karel Čapek introduced the word ‘robot’ in 1920, the inspiration came from the Slavic word ‘robota’, meaning forced labour. Yes, a robot is something made to fulfil certain tasks, but we can also say that a kratt is a robot with super powers — and thus legal representative rights.

Ethical enforcement

Estonia has recognised the complexity, scope and possibilities of this issue. Our aim is to contribute to the global discussion with positive case studies, with an emphasis on ethics and cyber measures. The immense and sometimes jaw-dropping possibilities of AI cannot be realised unless we have both the right values and the right regulations.

From a governmental perspective, it is crucial to consider the practical enforcement side of implementing these kinds of measures as well. The Estonian government is working together with the Estonian blockchain company Guardtime to ensure tamper-evidence and data integrity within these algorithms. In this type of system, hacks can be detected in just one second, compared to the current global average of 7 months! The first real-life pilot will go live next year.

Blockchain enables transparency and integrity, thus making these systems trustworthy

The Estonian government authorities have acknowledged that the biggest obstacle to mass implementation of AI is our current cyber capabilities, particularly regarding, firstly, the integrity of these systems and, secondly, their security. Take, for example, my blood type. I’m A positive, and I personally don’t really care who knows — nefarious criminal or not. But if somebody changes my blood type in a medical database, it is a great threat to my life and could be considered attempted murder. Just as with life-and-death decisions made by self-driving cars, I want to be sure that the decision-making algorithm has not been tampered with.

Join in the discussion

The main reason to start this discussion now is that the Estonian public administration feels these challenges are imminent and we need to be able to discuss them. The public discussion will take time, because the issue at hand has wide implications for our everyday lives. We need the know-how and contribution of the best global experts and, perhaps most importantly, we need to start discussing AI in our kitchens and saunas and with our e-residents around the world. So please feel free to contribute to the discussions with the hashtags #krattlaw, #eResidency and #Estonia.
