AI ethical principles are for us

Virginia Dignum
Apr 16, 2019


Just last week, the European Union published its Ethics Guidelines for Trustworthy AI, and a few weeks earlier version 1 of the IEEE initiative on Ethically Aligned Design of Intelligent and Autonomous Systems was presented. (Full disclosure: I am a member of the EU High-Level Expert Group on AI and of the executive committee of the IEEE Ethically Aligned Design initiative, the bodies behind these two reports.) The potential impact of these two reports, coming from the European Union and from the leading international professional organisation of engineers, is very large. Engineers are the ones who will ultimately implement AI to meet ethical principles and human values, but it is policy makers, regulators and society in general that can set and enforce the purpose. We are thus all responsible.

Both documents go well beyond proposing a list of principles: they aim to provide concrete guidelines for the design of ethically aligned AI systems. Systems that we can trust, systems that we can rely on. Based on the results of a public consultation process, the EU guidelines put forward seven requirements that are necessary (but not sufficient) to achieve trustworthy AI, together with methods to realise them and an assessment list to check them. The IEEE-EAD report is a truly bottom-up international effort, resulting from the collaboration of many hundreds of experts across the globe, including Asia and the Global South. It goes deeper than a list of requirements or principles and provides in-depth background on many different topics. The IEEE-EAD community is already hard at work defining standards for the future of ethical intelligent and autonomous technologies, ensuring the prioritization of human well-being. The EU will be piloting its assessment list in the coming months, through an open call for interest.

As Norbert Wiener said already in 1960, and as Stuart Russell often quotes: “We need to be sure that the purpose put into the machine is the purpose which we really want.” Moreover, we need to ensure that we put in place the social and technical constructs that keep that purpose in place as algorithms and their contexts evolve.

Ensuring an ethically aligned purpose is more than designing systems whose results can be trusted. It is about the way we design them, why we design them, and who is involved in designing them. It is a work of generations. It is a work always in progress. Obviously, errors will be made and disasters will happen. A lot of bling will be added in attempts to draw attention away from less shiny places. Several organisations have been accused of ‘ethics washing’, and Google’s effort to set up, and just as quickly dissolve, an ethics board does not contribute to trust in these efforts. Rather than merely condemning these failures, we need to learn from them and try again, try better.

It is not an option to ignore our responsibility. AI systems are artifacts decided, designed, implemented and used by us. We are responsible. We are responsible for trying again when we fail (and we will fail), for observing and denouncing when we see things going wrong (and they will go wrong), for being informed and informing others, for rebuilding and improving. The principles put forward by the EU and the IEEE are the latest in a long list of sets of principles, from governments, civil organisations, private companies, think tanks and research groups (Asilomar, Barcelona, Montreal, Google, Microsoft,… just to mention a few). However, it is not just about checking that a system meets the principles on whatever your favorite list happens to be.

These principles are not checklists, or boxes to tick once and forget. They are directions for action, codes of behavior. For AI systems, but most importantly for us. It is we who need to be fair, non-discriminatory and accountable, to ensure privacy for ourselves and others, and to aim at social and environmental well-being. The codes of ethics are for us. AI systems will follow.

There is work to be done. We (people) are the ones who can and must do it. We are responsible.
