Governance of Internet of Things and Ethics of Intelligent Algorithms

ThingsCon
The State of Responsible IoT 2018
Aug 24, 2018 · 13 min read

By Prof. Dr. Eduardo Magrani & Dr. Ronaldo Lemos

The ThingsCon report The State of Responsible IoT is an annual collection of essays by experts from the ThingsCon community. With the RIoT Report 2018 we want to investigate the current state of responsible IoT. In this report we explore observations, questions, concerns and hopes from practitioners and researchers alike. The authors share the challenges and opportunities they perceive right now for the development of an IoT that serves us all, based on their experiences in the field. The report presents a variety of differing opinions and experiences across the technological, regional, social, and philosophical domains the IoT touches upon. You can read all essays as a Medium publication and learn more at thingscon.com.

New technical artifacts connected to the Internet constantly share, process, and store huge amounts of data. This practice is what links the concept of the Internet of Things (“IoT”) to the concept of Big Data. With the growing dissemination of Big Data¹ and computing techniques, technological evolution and economic pressure spread rapidly, and algorithms have become a great resource for innovation and business models. This rapid diffusion of algorithms and their increasing influence, however, have consequences for the market and for society, consequences which include questions of ethics and governance.²

Automated systems that turn on the lights and warm up dinner when they detect that you are returning home from work; smart bracelets and insoles that share with your friends how far you have walked or cycled through the city during the day; sensors that automatically warn farmers when an animal is sick or pregnant. All of these are examples of innovative technologies associated with the still-evolving concept of the Internet of Things.
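To make the pattern behind these examples concrete, the following is a minimal sketch of the event-driven rule such automations run on. The event format, device topics, and command strings are invented for illustration and do not correspond to any real platform’s API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # e.g. "geofence"
    payload: dict  # sensor data attached to the event

def on_event(event: Event) -> list:
    """Map an incoming sensor event to device commands (illustrative only)."""
    commands = []
    if event.kind == "geofence" and event.payload.get("state") == "approaching_home":
        commands.append("lights/livingroom:on")   # hypothetical device topic
        commands.append("oven/preheat:180C")      # hypothetical device topic
    return commands

print(on_event(Event("geofence", {"state": "approaching_home"})))
# -> ['lights/livingroom:on', 'oven/preheat:180C']
```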

However, there are strong divergences about the IoT concept, and no single definition can be considered unanimous. In general, it can be understood as an environment of physical objects interconnected with the Internet through small, embedded sensors, creating a ubiquitous computing ecosystem aimed at facilitating people’s daily lives by introducing functional solutions into everyday processes.

In this sense, the combination of intelligent objects and Big Data can significantly change the way we live. Some research estimates that by 2020 the number of interconnected objects will grow from 25 billion to 50 billion intelligent devices. The projections for the economic impact of this scenario of hyperconnection are impressive: an estimated global impact of more than $11 trillion by 2025.³ Driven by these estimates, IoT has received strong investment from the private sector and has emerged as a possible solution to the new challenges of public management, promising, through the use of integrated technologies and massive data processing, more effective answers to problems such as pollution, congestion, and crime, as well as gains in productive efficiency. In addition, IoT can bring countless benefits to consumers.

All this hyperconnectivity and continuous interaction between devices, sensors, and people has altered the way we act communicatively and make decisions in the public and private spheres. Increasingly, the information circulating on the Internet will no longer be placed on the network by people alone, but by Things and artificially intelligent algorithms that exchange information among themselves, forming a space of increasingly automated connections and information.

We are witnessing the construction of new relationships with machines and other interconnected devices, in which algorithms begin to make decisions and to guide evaluations and actions that were previously taken by humans. This culture is still relatively recent, and it implies important ethical considerations in view of the ever-increasing impact of algorithmic communication on society.

Taking into account how recent this scenario of hyperconnectivity and IoT is, based on the close relationship between intelligent objects, Big Data, and computational intelligence (the so-called “ABC” of Information and Communication Technologies: Analytics + Big Data + Cloud Computing), we are not yet fully aware of its potential benefits and risks. We must nevertheless seek an adequate balance in legal regulation, one that does not hinder innovation while ensuring that the law also advances in this area, establishing appropriate standards for new technologies and the IoT scenario.

Considering this scenario and the lack of adequate legal regulation, we are experiencing self-regulation by the market itself, as well as regulation that is often done through the design of technology, known as “techno-regulation”. IoT is advancing faster than our ability to safeguard individual and collective rights.

Given the context of constant and intense storage, processing, sharing, and monetization of data, it is crucial to discuss the notions of privacy and ethics that should guide technological advances, reflecting on the world in which we want to live and how we see ourselves in this world of data and machines. The way we relate to Things tends to become more and more intense. Data governance and an adequate comprehension of the agency of human and non-human ‘actants’ in this hyperconnected environment are fundamental. Benefits and risks for companies, the State, and consumers should be weighed cautiously. The law must be attentive to its role in this context: on the one hand, not to hamper current economic and technological development; on the other hand, to regulate technological practices effectively, restraining abuses and protecting constitutional rights.

Since algorithms can permeate countless branches of our lives, as they become more sophisticated, useful, and autonomous, there is a risk that they will make important decisions on our behalf. To foster the integration of algorithms into social and economic processes, algorithm governance tools are needed.⁴

The governance of algorithms can range from the strictly legal and regulatory to the purely technical. Researchers at the University of Zurich argue that algorithm governance must be based on identified threats and suggest a risk-based approach, highlighting risks related to manipulation, bias, censorship, social discrimination, privacy breaches, property rights, and abuse of market power. To prevent these risks from materializing, it is necessary to resort to governance.⁵

One of the main themes raised by legal scholarship when it comes to governance is the opacity of algorithms. The problem of opacity is associated with the difficulty of decoding their results: humans are becoming less and less able to understand, explain, or predict the inner workings, biases, and eventual problems of algorithms. Experts have therefore been discussing the need for greater transparency, aiming at a better understanding of algorithmic decisions and processes.
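To illustrate what transparency can mean at the level of a single decision, here is a deliberately simple scoring model whose output decomposes into per-feature contributions that could be reported to the affected person; opaque systems such as deep neural networks do not admit this kind of direct reading. The feature names, weights, and threshold are invented for illustration.

```python
# A transparent-by-construction decision rule: the outcome can be fully
# explained as a sum of per-feature contributions (all values invented).
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_at_address": 0.2}
THRESHOLD = 0.0

def decide(applicant: dict):
    """Return the decision together with the reasons behind it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, reasons = decide({"income": 2.0, "debt": 3.0, "years_at_address": 1.0})
print(approved)  # False
print(reasons)   # {'income': 0.8, 'debt': -1.8, 'years_at_address': 0.2}
```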

In view of this concern, we will now focus on advanced algorithms endowed with machine learning, such as intelligent robots equipped with artificial intelligence, considering that they are technical artifacts (Things) attached to sociotechnical systems and have a greater potential for autonomy (based largely on the processing of Big Data) and unpredictability.

The implementation of programs capable of learning how to execute functions typically performed by humans creates new ethical and regulatory challenges, since it increases the possibility of obtaining results other than those intended, or even totally unexpected ones. This is because, as previously argued, these mechanisms also act as agents in society and end up influencing the environment around them, even though they are non-human entities. It is not, therefore, a matter of thinking only about the “use” and “repair” of new technologies, but mainly about the proper ethical orientation for their development.⁶
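The Tay case cited above⁶ is the canonical example of this dynamic. The toy sketch below, built on entirely invented data, shows in miniature how a system that learns from the interactions it encounters can shift its behavior through exposure rather than design.

```python
from collections import Counter

model = Counter()  # the entire "model": word frequencies seen so far

def learn(interaction: str) -> None:
    """Update the model from whatever input the system encounters."""
    model.update(interaction.lower().split())

def respond() -> str:
    # The system reproduces whatever pattern dominates its experience,
    # whether or not its designers intended that pattern.
    word, _ = model.most_common(1)[0]
    return word

for msg in ["hello there", "hello friend"]:
    learn(msg)
print(respond())  # 'hello' -- benign so far

for msg in ["spam spam spam", "spam spam"]:  # skewed, adversarial input
    learn(msg)
print(respond())  # 'spam' -- behavior shifted by exposure, not by design
```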

In addition, the more adaptable artificial intelligence programs become, the more unpredictable their actions are, bringing new risks. This makes it necessary for developers of this type of program to be more aware of the ethical responsibilities involved. The Code of Ethics of the Association for Computing Machinery indicates that professionals in the field should develop “comprehensive and thorough assessments of computer systems and their impacts, including the analysis of possible risks”.

The ability to amass experiences and learn from massive data processing, coupled with the capacity to act independently and make choices autonomously, can be considered preconditions for liability for damages. However, since Artificial Intelligence is not recognized today as a subject of law, it cannot be held individually liable for the potential damage it may cause. In this sense, according to Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts, the person (natural or legal) on whose behalf a program was created must, ultimately, be liable for any action generated by the machine. This reasoning is based on the notion that a tool has no will of its own.

From another perspective, in the case of damage caused by the acts of an artificial intelligence, another type of liability draws an analogy with the responsibility attributed to parents for the actions of their children (strict vicarious liability). Under the theory of “robots as tools”, the responsibility for the acts of an AI could thus fall on its producers, users, or programmers, who are responsible for its “training”.

Should an act of an Artificial Intelligence cause damage by reason of deceit or negligence, manufacturing defect, or design failure resulting from blameworthy programming, existing liability rules would most often point to the “fault” of its creators. However, it is often not easy to know how these programs come to their conclusions, or why they lead to unexpected and possibly unpleasant consequences. This harmful potential is especially dangerous in Artificial Intelligence programs that rely on machine learning, in which the very nature of the software involves developing actions that are not predictable and that will only be determined by the data and events with which the program comes into contact.

Since the behavior of an AI is not totally predictable, and results from the interaction between the several human and non-human agents that make up the sociotechnical system, and even from self-learning processes, it can be extremely difficult to determine the causal nexus between the damage caused and the action of any particular human being or legal entity.

Under the legal framework we have today, this can lead to a situation of “distributed irresponsibility” (the name given in the present work to the possible effect resulting from the failure to identify the causal nexus⁷ between an agent’s conduct and the damage caused) among the different actors involved in the process. This would happen mainly when the damage arises from the involvement of different agents within a complex sociotechnical system, in which liability is not obvious, possibly involving at the same time the actions of the intelligent thing itself, of natural persons, and of legal entities, all linked together. This reflects a serious liability challenge that some scholars call the “problem of many hands”.⁸

The ideal regulatory scenario would guide the development of technical artifacts and manage it from the perspective of fundamental rights protection. But no reliable answers have yet been found on how to deal with the potential damages that may arise from programming errors, or from machine learning processes that incorporate into the machine’s behavior undesired conducts that were not predicted by its developers. Therefore, establishing minimum ethical foundations for regulatory purposes is just as important as developing these new technologies, as part of the governance strategy.

Thus, when dealing with Artificial Intelligence, it is essential to promote an extensive debate about the ethical guidelines that should steer the construction of these machines. However, clear parameters on how to conduct this inquiry from the point of view of ethics are yet to be defined. The need to establish a regulatory framework for this type of technology has been highlighted by several initiatives.

The General Data Protection Regulation (GDPR) in Europe has already established important guidelines concerning, for example, data collection, storage, and privacy, setting out key principles such as Purpose Limitation, Data Minimization, Storage Limitation, Accuracy, Transparency, Integrity and Confidentiality, and Accountability. It is important to note that, for some scholars, the GDPR also provides for a “right to explanation” of decisions made by automated or artificially intelligent algorithmic systems, as well as a “right not to be subject to automated decision-making”, aiming to enhance the transparency and accountability of automated decisions.
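As a concrete illustration, the sketch below applies two of these principles, Data Minimization and Storage Limitation, to a hypothetical IoT telemetry record. The schema and the retention period are assumptions made for the example, not requirements taken from the GDPR text.

```python
from datetime import datetime, timedelta, timezone

ALLOWED_FIELDS = {"device_id", "temperature", "timestamp"}  # purpose-bound schema
RETENTION = timedelta(days=30)  # illustrative retention period

def minimize(record: dict) -> dict:
    """Data minimization: keep only fields needed for the declared purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def expired(record: dict, now: datetime) -> bool:
    """Storage limitation: records past the retention window must be deleted."""
    return now - record["timestamp"] > RETENTION

raw = {"device_id": "d1", "temperature": 21.5,
       "location": (52.52, 13.40),  # collected but not needed: dropped below
       "timestamp": datetime.now(timezone.utc)}
print(minimize(raw))  # the location field never reaches storage
```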

From another angle, a conference held in January 2017 in Asilomar, CA, aimed to define a series of principles so that the development of Artificial Intelligence programs can be beneficial to humanity. Twenty-three principles were laid out, the most notable among them being:⁹

  1. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards;
  2. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible;
  3. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why;
  4. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications;
  5. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

Therefore, designers and builders of advanced AI systems should be considered stakeholders in the moral implications of their use and of their damaging autonomous actions. Additionally, we should consider the designers’ and engineers’ responsibility for guaranteeing values such as privacy, security, and ethics in the design phase of these artifacts, in order to avoid damages a posteriori. Hence, the challenge is to think of a “value-sensitive design”. As examples, we can mention the approaches of “privacy by design”, “security by design”, and “ethics by design”, always taking into account what lies within the designer’s sphere of control and influence. This should bring civil society, policymakers, and law enforcement agencies closer to the work of engineers.
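As one possible illustration of privacy by design within the designer’s sphere of control, the sketch below replaces a raw identifier with a keyed hash before the reading ever leaves the collection layer. Key management is deliberately simplified here; in practice the key would live in a key-management system, not in the process.

```python
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # illustrative only; use a managed key in practice

def pseudonymize(user_id: str) -> str:
    """Replace a raw identifier with a keyed hash at the point of collection."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

# Downstream components only ever see the pseudonym, never the raw identifier.
reading = {"user": pseudonymize("alice@example.com"), "steps": 8421}
print(reading)
```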

From a legal standpoint, it is fundamental to keep in mind the new nature of a diffused, shared liability, potentially dispersed across the space, time, and agency of the various actants in the public sphere, human and non-human. We need to think about the context in which assumptions about liability are made. The question presented to us is not only how to make computational agents liable, but how to apply such liability reasonably and fairly. We must, therefore, think of a “shared responsibility” among the different actors entangled in the sociotechnical network, according to their spheres of control and influence over the damage situations presented. At the same time, we need to reflect on ethical foundations as part of a governance strategy for intelligent algorithms, which requires a whole new interpretation of the role of Law in this techno-regulated context.

Ronaldo Lemos

Ronaldo Lemos (@lemos_ronaldo) is one of the co-founders and a Director of the Institute for Technology & Society of Rio de Janeiro (ITS Rio). He holds a PhD in Law from the University of São Paulo (USP), a Master’s Degree in Law from Harvard University, and an undergraduate law degree, also from USP. He is a Law professor at the Rio de Janeiro State University (UERJ) and a visiting researcher at the MIT Media Lab. He was a visiting professor at Princeton University, affiliated with the Center for Information Technology Policy, and a visiting professor at Oxford University (Michaelmas term, 2005). He is the Director of the Creative Commons project in Brazil and co-founder of the project Overmundo, which won the Golden Nica in the Digital Communities category. He is a member of the Social Communications Council, created by article 224 of the Brazilian Constitution, with headquarters in the Federal Senate. He is the MIT Media Lab Liaison Officer for Brazil and a member of the Board of the Mozilla Foundation.

Eduardo Magrani

Eduardo Magrani (@eduardomagrani) is Coordinator at the Institute for Technology & Society of Rio de Janeiro (ITS Rio) and a Senior Fellow at the Alexander von Humboldt Institute for Internet and Society in Berlin. He has been working with public policy, Internet regulation, and Intellectual Property since 2008. He is Professor of Law and Technology and Intellectual Property at FGV Law School, UERJ, IBMEC, and PUC-Rio, and was a researcher and project leader at FGV’s Center for Technology & Society (2010–2017). He is the author of the books “Digital Rights: Latin America and the Caribbean” (2017), “Internet of Things” (2017), and “Connected Democracy” (2014), in which he discusses the ways and challenges of improving the democratic system through technology. He is an associated researcher at the Law Schools Global League and a member of the Global Network of Internet & Society Research Centers. He holds a Ph.D. and a Master of Philosophy (M.Phil.) in Constitutional Law from the Pontifical Catholic University of Rio de Janeiro, with a thesis on Internet of Things and Artificial Intelligence regulation through the lenses of privacy protection and ethics. A lawyer, he works actively in the fields of Digital Rights, Corporate Law, and Intellectual Property. Magrani has been strongly engaged in discussions about Internet regulation and was one of the developers of Brazil’s first comprehensive Internet legislation: the Brazilian Civil Rights Framework for the Internet (“Marco Civil da Internet”). He is coordinator of Creative Commons Brazil and of the Digital Rights: Latin America and the Caribbean project, alongside prestigious Latin American organizations.

ThingsCon is a global community & event platform for IoT practitioners. Our mission is to foster the creation of a human-centric & responsible Internet of Things (IoT). With our events, research, publications and other initiatives — like the Trustable Tech mark for IoT — we aim to provide practitioners with an open environment for reflection & collaborative action. Learn more at thingscon.com

This text is licensed under Creative Commons (attribution/non-commercial/share-alike: CC BY-NC-SA). Images are provided by the author and used with permission. Please reference the author’s or the authors’ name(s).

  1. Big Data is an evolving term that describes any large volume of accumulated, semi-structured or unstructured data that has the potential to be mined for information.
  2. SAURWEIN, Florian; JUST, Natascha; LATZER, Michael. Governance of algorithms: options and limitations. Info, v. 17, n. 6, p. 35–49, 2015.
  3. http://www.zdnet.com/article/25-billion-connected-devices-by-2020-to-build-the-Internet-of-things/
  4. DONEDA, Danilo; ALMEIDA, Virgilio A. F. What Is Algorithm Governance? IEEE Internet Computing, v. 20, p. 60, 2016.
  5. SAURWEIN, Florian; JUST, Natascha; LATZER, Michael. Governance of algorithms: options and limitations. Info, v. 17, n. 6, p. 37, 2015.
  6. WOLF, Marty, et al. Why We Should Have Seen That Coming: Comments on Microsoft’s tay “Experiment”, and Wider Implications. 2017. Available at: http://digitalcommons.sacredheart.edu/computersci_fac/102/.
  7. ‘Causal nexus’ is the link between the agent’s conduct and the result produced by it. “Examining the causal nexus determines which conducts, whether positive or negative, gave rise to the result provided by law. Thus, to say that someone has caused a certain fact, it is necessary to establish a connection between the conduct and the result generated, that is, to verify whether the result stemmed from the action or omission.” Available at: https://www.jusbrasil.com.br/topicos/291656/nexo-causal. Accessed on 27 September 2017.
  8. Available at: http://unesdoc.unesco.org/images/0025/002539/253952E.pdf.
  9. Available at: https://futureoflife.org/ai-principles/.

