A Framework for Ethical and Trustworthy AI

Jean-Francois Gagné
Element AI
April 9, 2019

A little over nine months ago I was honoured to be chosen as the sole non-European expert to provide input, both from a Canadian and an industry perspective, into the European Commission’s High-Level Expert Group on AI (AI HLEG). Yesterday we published our final Ethics Guidelines for Trustworthy Artificial Intelligence.

In these guidelines, we pursue a “human-centric approach,” informed by human and fundamental rights such as dignity, freedom, equality and justice. AI isn’t contained by national borders, and ethical principles and requirements need to be rooted in concepts that promote the inherent value of all human beings, no matter their geographical location.

The AI HLEG is composed of 52 experts from academia, civil society, research and many other fields. We published our first draft in December, and our group benefited from comments and insights from more than 500 submissions. Feedback came from engaged members of civil society as much as from international AI labs. You will see this diversity of thought reflected in our final Guidelines, and I hope it continues in their application as other regions use them as a reference point for AI policy.

Trustworthy AI

The Ethics Guidelines for Trustworthy AI, published yesterday, could not be more timely.

Most products we use today come with certain guarantees and responsibilities of use. The performance of AI across the value chain, however, is highly dynamic. Accountability in this area is complex: control over a system's behaviour cannot always be clearly attributed, and expectations between stakeholders such as the researcher, data vendor, designer, user and the broader environment need to be defined more clearly.

If there is a lack of trust in the technology, in the developers, in the governmental framework and between stakeholders, we run the risk of rejecting technology that could empower us. As a business owner, I see it as one of my responsibilities to go the extra mile to calibrate expectations and to provide sufficient explanations to all involved.

But commonly accepted ground rules are also important. The process of explaining systems and negotiating accountability imposes a huge transaction cost, and the industry's current self-regulation is ultimately limited by self-interest. A clear set of rules of the game is critical to instilling trust in the technology and to ensuring it is used to the best possible effect.

The Guidelines

I am proud to say that our Guidelines are much more than a list of values and abstract principles for approaching AI. They are grounded in and built upon international human rights and the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union (EU Charter), which inform the core values and principles necessary to achieve trustworthy AI. This contextualisation is important: we need to take trustworthy and ethical AI from a conceptual level to a practical one.

We realised that there is no need to treat AI much differently from other technologies, or to reinvent the wheel. Our first step was to embed adaptive AI systems, which engage with a complex stakeholder value chain, in what has already been established: human rights.

From there, we moved to the topic on which most current debates focus and end: how can core values and principles be embedded into AI? This is similar to frameworks we have already seen from various tech-oriented organisations and centres, such as IBM and Microsoft. In our Guidelines, we focused on four principles derived from human rights:

  • Respect for Human Autonomy
  • Prevention of Harm
  • Fairness
  • Explicability

Next, we identified seven stakeholder-oriented requirements, implementable through technical and non-technical methods (e.g., testing and validation, or standardisation).

  1. Human Agency and Oversight forms the first requirement, grounded in adherence to fundamental and human rights and in the necessity for AI to enable human agency.
  2. Technical Robustness and Safety concerns the development of the AI system and focuses on the system's resilience both against outside attacks (e.g. adversarial attacks) and against failures from within, such as a miscommunication of the system's reliability.
  3. Privacy and Data Governance bridges responsibilities between system developers and deployers. It addresses salient issues such as the quality and integrity of the data used in developing the AI system, and the need to guarantee privacy throughout the entire life cycle of the system.
  4. Transparency demands that both technical and human decisions can be understood and traced.
  5. Diversity, Non-Discrimination and Fairness is a requirement that ensures the AI system is accessible to everyone. It includes, for example, the avoidance of bias, the consideration of universal design principles and the avoidance of a one-size-fits-all approach.
  6. Societal and Environmental Well-Being is the broadest requirement and includes the largest stakeholder: our global society and the environment. It tackles the need for AI that is sustainable and environmentally friendly, as much as its impact on the democratic process.
  7. Accountability complements all the previous requirements, as it is relevant before, during and after the development and deployment of the AI system.

The seven key requirements are not yet government regulation, nor are they standards.

The next logical step is for the requirements to be translated into actual international industry standards; however, a complete system of standards can take years to develop.

The human rights framework upon which the Guidelines are built helps to identify the existing applicable laws and policies that could be adapted into rules of the game for AI's rapid development.

A global standard of “human-centric” AI

Global discussions have come a long way. The Asilomar Conference on Beneficial AI was among the first to propose principles for AI, and it was followed by a rapid proliferation of complementary initiatives, ranging from the Montreal Declaration to large multi-stakeholder efforts such as the IEEE's Global Initiative on Ethics of Autonomous and Intelligent Systems.

The Guidelines weave the connecting thread of human rights through this discourse, and this is the general direction in which the international debate is heading. You can see it in more recent initiatives such as the Toronto Declaration, and in the work undertaken by the OECD through its Expert Group on AI in Society and its AI Policy Observatory.

Governments, too, are publishing ever more refined AI strategies, with Canada as a forerunner in 2017. All of this is leading industry standards to take shape through bodies such as the IEEE and the International Organization for Standardization (ISO), which even has a working group on trustworthiness. Linking back to the root of our Guidelines, the Australian Human Rights Commission is undertaking groundbreaking work, exploring the relationship between human rights and technology in great depth. I look forward to its findings, due in late 2019, and hope that more governments will follow suit. Comprehensive, multi-stakeholder consultations on the impact of AI on human rights and freedoms are necessary for regulators to understand how the Guidelines can be complemented by legally binding, enforceable protections.

Next Steps

Now is the time to work hard to implement the requirements set out in the Guidelines. The Guidelines include an indicative assessment list to operationalise the requirements and to create the first instance of a framework towards future industry standards. This list will be piloted and revised throughout 2019.

In turn, governments should use the Guidelines as a benchmark for developing and implementing new legislative and regulatory mechanisms that safeguard the protection and promotion of human rights and freedoms in the digital age.

The Guidelines, together with the European Commission's Communication on Building Trust in Human-Centric AI, encourage stakeholders and EU Member States to engage with the key requirements and to use them to build consensus on human-centric AI. The Communication further calls for stronger cooperation with like-minded international partners such as Canada, Japan and Singapore.

Canada already has multiple engagements with the European Union in the area of AI, from a research perspective with l’institut PRAIRIE, as well as from a governmental perspective with the recently established International Panel on AI between France and Canada. I’m particularly excited about the International Panel on AI in the context of pushing our Guidelines towards international standards, given the Panel’s mandate “to promote a vision of human-centric artificial intelligence”.

We have the Guidelines. We have the international platforms. Now governments across the world should implement and develop them, stakeholders push and support them, citizens demand them, and organisations adopt them.

Ethical and trustworthy AI needs to become an indisputable international norm.

Image derived from the European Commission's High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI.

Originally published at jfgagne.com on April 9, 2019.

