It is time to refocus our AI ethics discussion on people

Meeri Haataja
Saidot
Feb 25, 2019 · 3 min read

I am writing this in anticipation of the upcoming high-level conference Governing the Game Changer — Impacts of artificial intelligence development on Human Rights, Democracy and the Rule of Law, taking place 26–27 February in Helsinki. To contribute to this debate, I want to raise my concern about the focus of our AI ethics discussion and suggest what I believe is the critical shift we need in order to move forward in realizing human-centric AI in Europe and beyond.

During the past year I have been involved in several initiatives for defining ethics principles for AI, and I have witnessed many collective learning processes while working on them. As part of Finland’s national initiative, we have engaged with over 60 organizations committed to the ethical use of AI. Principles are essential in building a shared view, commitment and language for guiding organizations’ AI initiatives with values.

We start with ethics principles, because AI ethics is essentially about breathing values into our algorithms.

Europe has a relatively well-defined set of core values, widely accepted throughout our societies. In AI governance, we seek to create mechanisms for securing value alignment in a society where authority and power are increasingly embedded in algorithmic systems and automated decision making. These systems impact our thoughts, emotions and actions, and increasingly the opportunities, privileges or penalties we are given. We have become used to agreeing on values with other people, but technology is changing this: we need new mechanisms for agreeing on the values we build into our technology infrastructure.

The most burning question of today is: how do we ensure that people, as individuals and as a democratic society, continue to have their say on whether the systems we develop and use align with their core values?

I have become increasingly concerned by my observations of the ongoing AI ethics discussion. We strive for human-centric use of AI, yet we leave people away from the table, both literally and figuratively. We regard AI governance as a mechanism for setting rules and restrictions for companies, while it should also be a mechanism for empowering people to have their say on the ways AI impacts their own lives and our society. This is a major challenge, but I believe addressing it can also be a powerful key to moving from principles to action.

We need a radical mindset shift to collectively empower people and secure their agency. AI governance is about establishing and enforcing people’s right to understand the ways AI influences their thoughts, emotions or actions, and securing their right to contest automated decisions made by AI.

The impacts of this right can be immense. We create both a common target and a common language for the whole AI industry to act on. Responsibility becomes a fundamental service rather than mere compliance. The role of AI governance shifts from stating competing versions of ethics principles to empowering people with the right skills and information, and enforcing their realization with laws, standards and best practices.

I believe we are entering a time when responsible organizations challenge old market practices with openness. Transparency becomes the sign of trustworthiness. It empowers people and allows trusted third parties to assess, review and guide organizations in their initiatives towards responsible and human-centric AI innovation.

Transparency is proof of ethics, and a critical means for agreeing on the values embedded in technology.

This is the time to move beyond ethics principles. It is also increasingly clear that the worlds of AI ethics and innovation are too often deeply conflicted in our discussions, with little common ground or willingness to find consensus. I believe this shared focus on inclusion, on securing people’s agency with skills and information, can be the trigger for new, meaningful AI governance with an enormously important impact: taking democracy into the algorithmic age.

Meeri Haataja is CEO & Co-Founder of Saidot, an Affiliate at the Berkman Klein Center at Harvard, and Chair of IEEE’s Ethics Certification Program for Autonomous & Intelligent Systems.