The ART of AI — Accountability, Responsibility, Transparency

Virginia Dignum
Mar 4, 2018

Artificial Intelligence (AI) is increasingly affecting our lives, in ways both small and large. To ensure that AI systems uphold human values, we need design methods that incorporate ethical principles and address societal concerns. In this article, I introduce the ART design principles (Accountability, Responsibility and Transparency) for the development of AI systems that are sensitive to human values.

There is an increasing awareness that a responsible approach to Artificial Intelligence (AI) is needed: to ensure the safe, beneficial and fair use of AI technologies, to consider the implications of moral decision making by machines, and to define the ethical and legal status of AI. Several initiatives aim to propose guidelines and principles for the ethical and responsible development and use of AI (see e.g. IEEE Ethically Aligned Design, the Asilomar principles, the UNI Global Union reflection on the future of work, the Barcelona declaration, or the EESC opinion, to cite just a few).

Developments in autonomy and machine learning are rapidly enabling AI systems to decide and act without direct human control. Greater autonomy must come with greater responsibility, even though these notions necessarily mean something different when applied to machines than to people.

Ensuring that systems are designed responsibly contributes to our trust in their behavior. It requires both accountability, i.e. being able to explain and justify decisions, and transparency, i.e. understanding how systems make decisions and which data they use. To this effect, we propose the principles of Accountability, Responsibility and Transparency (ART). ART implements a Design for Values approach to ensure that human values and ethical principles, and the priorities and choices among them, are explicitly included in the design process in a transparent and systematic manner.

Ethical AI rests on three pillars of equal importance, the ART of AI:
1. Accountability refers to the need to explain and justify the system’s decisions and actions to the partners, users and others with whom it interacts. To ensure accountability, decisions must be derivable from, and explained by, the decision-making algorithms used. This includes the need for a representation of the moral values and societal norms that hold in the context of operation, which the agent uses for deliberation. Accountability in AI requires both the function of guiding action (by forming beliefs and making decisions) and the function of explanation (by placing decisions in a broader context and by classifying them along moral values).

2. Responsibility refers to the role of people themselves and to the capability of AI systems to answer for their decisions and to identify errors or unexpected results. As the chain of responsibility grows, means are needed to link an AI system’s decisions to the fair use of data and to the actions of the stakeholders involved in those decisions.

3. Transparency refers to the need to describe, inspect and reproduce the mechanisms through which AI systems make decisions and learn to adapt to their environment, and to the governance of the data used and created. Current AI algorithms are essentially black boxes, yet regulators and users demand explanation and clarity about the data being used. Methods are needed to inspect algorithms and their results, and to manage data, their provenance and their dynamics; the sketch after this list gives a flavor of what such record-keeping could look like.
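
As a purely illustrative sketch (my own toy example, not an established method or API), consider a decision procedure that records, alongside each outcome, the data it relied on, where that data came from, and how each input contributed to the result. The feature names, weights, threshold and model version below are hypothetical.

```python
# Toy illustration: log every decision together with its data provenance
# and a simple per-feature explanation, so the record can be inspected later.
# All names (DecisionRecord, explain_linear, "loan-model-0.1") are hypothetical.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    model_version: str      # which model produced the decision
    inputs: dict            # the feature values used
    data_provenance: dict   # where each feature value came from
    decision: str           # the outcome
    explanation: dict       # per-feature contribution to the score
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def explain_linear(weights: dict, inputs: dict) -> dict:
    """Per-feature contributions of a simple linear scoring model."""
    return {name: weights[name] * inputs[name] for name in weights}

def decide(weights: dict, inputs: dict, provenance: dict,
           threshold: float = 0.5) -> DecisionRecord:
    contributions = explain_linear(weights, inputs)
    score = sum(contributions.values())
    return DecisionRecord(
        model_version="loan-model-0.1",
        inputs=inputs,
        data_provenance=provenance,
        decision="approve" if score >= threshold else "reject",
        explanation=contributions,
    )

record = decide(
    weights={"income": 0.6, "debt": -0.4},
    inputs={"income": 1.2, "debt": 0.8},
    provenance={"income": "payroll feed, 2018-02-28",
                "debt": "credit bureau, 2018-03-01"},
)
print(json.dumps(asdict(record), indent=2))  # an auditable, reproducible log entry
```

Even this trivial example shows what inspection requires: the decision can be traced back to specific data, the data can be traced back to its sources, and the explanation can be checked against the values and norms the system is supposed to uphold.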

Responsible AI is more than ticking some ethical ‘boxes’ or developing a few add-on features for AI systems. It requires the participation and commitment of all stakeholders, and the active inclusion of all of society. This means training, regulation and awareness.

Researchers and developers should be trained to be aware of their own responsibility when it comes to developing AI systems with a direct impact on society. Governments and citizens should determine how issues of liability should be regulated. For example, who is to blame if a self-driving car harms a pedestrian? The builder of the hardware (e.g. of the sensors the car uses to perceive its environment)? The builder of the software that enables the car to decide on a path? The authorities that allow the car on the road? The owner who personalized the car’s decision-making settings to match her preferences? The car itself, because its behavior is based on its own learning? All these questions, and more, must inform the regulations that societies put in place for the responsible use of AI systems.

All of this requires participation. To develop frameworks for responsible AI, it is necessary to understand how different people work with and live with AI technologies across cultures. AI does not stand on its own, but must be understood as part of socio-technical relations. Here again education plays an important role, both to ensure that knowledge of the potential of AI is widespread and to make people aware that they can participate in shaping its societal development. A new and more ambitious form of governance is one of the most pressing needs to ensure that the inevitable advances in AI serve the societal good. Only then are accountability, responsibility and transparency possible.

For more information: http://designforvalues.tudelft.nl/projects/responsible-artificial-intelligence/
