Highlights from Unesco’s Recommendation on the Ethics of AI
Unesco has just published (July 2020) a draft version of a recommendation on the ethics of Artificial Intelligence: https://unesdoc.unesco.org/ark:/48223/pf0000373434
The 19-page document covers many of the concerns one would expect from Unesco, but also a fair bit of content, highlighted below, that may be of wider interest to communities working at the intersection of technology and government.
Useful definitions
AI systems can be approached as technological systems which have the capacity to process information in a way that resembles intelligent behaviour, and typically includes aspects of learning, perception, prediction, planning or control.
AI systems embody models and algorithms that produce a capacity to learn and to perform cognitive tasks, like making recommendations and decisions in real and virtual environments. AI systems are designed to operate with varying levels of autonomy by means of knowledge modeling and representation and by exploiting data and calculating correlations. AI systems may include several approaches and technologies, such as but not limited to:
i. machine learning, including deep learning and reinforcement learning,
ii. machine reasoning, including planning, scheduling, knowledge representation, search, and optimization, and
iii. cyber-physical systems, including internet-of-things and robotics, which involve control, perception, the processing of data collected by sensors, and the operation of actuators in the environment in which AI systems work.
[…] AI actors, understood as those who play an active role in the AI system lifecycle, including organizations and individuals that research, design, develop, deploy, or use AI.
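To make the definition concrete, here is a minimal sketch of approach (i), machine learning: a model that fits parameters to data and then makes predictions on new inputs. The toy data and the use of scikit-learn are my own illustration, not part of the recommendation.

```python
# A minimal sketch of "machine learning" as in approach (i): a model
# that learns correlations from data, then predicts on unseen inputs.
# The data is a toy example; any similar supervised setup would do.
from sklearn.linear_model import LogisticRegression

# Toy training data: two numeric features per example, binary labels.
X_train = [[0.1, 1.2], [0.9, 0.3], [0.2, 1.0], [1.1, 0.2]]
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)          # "learning": fitting parameters to data

print(model.predict([[0.15, 1.1]]))  # "prediction": a decision for a new input
```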
AI specificity
Besides raising ethical issues similar to the ones raised by any technology, AI systems also raise new types of issues. Some of these issues are related to the fact that AI systems are capable of doing things which previously only living beings could do, and which were in some cases even limited to human beings only. These characteristics give AI systems a profound, new role in human practices and society. Going even further, in the long term, AI systems could challenge humans’ special sense of experience and consciousness, raising additional concerns about human autonomy, worth and dignity, but this is not yet the case.
Concerns in education and culture
AI systems are connected to education in many ways: they challenge the societal role of education because of their implications for the labour market and employability; they might have an impact on educational practices; and they require that education of AI engineers and computer scientists creates awareness of the societal and ethical implications of AI.
AI has implications for cultural identity and diversity. It has the potential to positively impact the cultural and creative industries, but it may also lead to an increased concentration of supply of cultural content, data and income in the hands of only a few actors, with potential negative implications for the diversity of cultural expressions and equality.
Human oversight and determination
27. It should always be possible to attribute both ethical and legal responsibility for the research, design, development, deployment, and use of AI systems to a physical person or to an existing legal entity. Human oversight refers thus not only to individual human oversight but to public oversight.
28. It may be the case that sometimes humans would have to share control with AI systems for reasons of efficacy, but this decision to cede control in limited contexts remains that of humans, as AI systems should be researched, designed, developed, deployed, and used to assist humans in decision-making and acting, but never to replace ultimate human responsibility.
Privacy
31. The research, design, development, deployment, and use of AI systems should respect, protect and promote privacy, a right essential to the protection of human dignity and human agency. Adequate data governance mechanisms should be ensured throughout the lifecycle of AI systems, including as concerns the collection of data, control over the use of data through informed consent and permissions and disclosures of the application and use of data, and ensuring personal rights over and access to data.
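Paragraph 31’s “control over the use of data through informed consent and permissions” can be made concrete in code. Below is a minimal sketch in which data access is gated on recorded, purpose-specific consent; the names (ConsentRecord, get_user_data) and the store layout are hypothetical.

```python
# A minimal sketch of consent-based data governance: data is released
# only for purposes the user has explicitly consented to.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    permitted_purposes: set = field(default_factory=set)  # consent, per purpose

def get_user_data(store: dict, consent: ConsentRecord, user_id: str, purpose: str):
    """Return a user's data only if consent covers this specific purpose."""
    if purpose not in consent.permitted_purposes:
        raise PermissionError(f"no consent recorded for purpose: {purpose}")
    return store[user_id]

store = {"u42": {"age": 31}}
consent = ConsentRecord("u42", {"model_training"})
print(get_user_data(store, consent, "u42", "model_training"))  # allowed
# get_user_data(store, consent, "u42", "advertising")          # would raise
```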
Awareness and literacy
32. Public awareness and understanding of AI technologies and the value of data should be promoted through education, public campaigns and training to ensure effective public participation so that citizens can take informed decisions about their use of AI systems.
Multi-stakeholder and adaptive governance
34. Governance should consider a range of responses from soft governance through self-regulation and certification processes to hard governance with national laws and, where possible and necessary, international instruments. In order to avoid negative consequences and unintended harms, governance should include aspects of anticipation, protection, monitoring of impact, enforcement and redressal.
Fairness
35. AI actors should respect fairness, equity and inclusiveness, as well as make all efforts to minimize and avoid reinforcing or perpetuating socio-technical biases including racial, ethnic, gender, age, and cultural biases, throughout the full lifecycle of the AI system.
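One concrete way AI actors can act on paragraph 35 is to measure group disparities in a system’s outputs. The sketch below computes a demographic parity gap, the difference in favourable-outcome rates between two groups; the metric choice and the toy data are my illustration, not Unesco’s.

```python
# Demographic parity gap: |P(favourable | group a) - P(favourable | group b)|.
# A non-zero gap does not prove unfairness, but it is a useful monitoring signal.
def demographic_parity_gap(predictions, groups, group_a, group_b):
    def rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate(group_a) - rate(group_b))

preds  = [1, 0, 1, 1, 0, 0]            # 1 = favourable decision
groups = ["a", "a", "a", "b", "b", "b"]
print(f"parity gap: {demographic_parity_gap(preds, groups, 'a', 'b'):.2f}")  # 0.33
```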
Transparency and explainability
36. While, in principle, all efforts need to be made to increase transparency and explainability of AI systems to ensure trust from humans, the level of transparency and explainability should always be appropriate to the use context, as many trade-offs exist between transparency and explainability and other principles such as safety and security.
37. Transparency means allowing people to understand how AI systems are researched, designed, developed, deployed, and used, appropriate to the use context and sensitivity of the AI system. It may also include insight into factors that impact a specific prediction or decision, but it does not usually include sharing specific code or datasets. In this sense, transparency is a socio-technical issue, with the aim of gaining trust from humans for AI systems.
38. Explainability refers to making intelligible and providing insight into the outcome of AI systems. The explainability of AI models also refers to the understandability of the input, output and behaviour of each algorithmic building block and how it contributes to the outcome of the models. Thus, explainability is closely related to transparency, as outcomes and sub-processes leading to outcomes should be understandable and traceable, appropriate to the use context.
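To ground paragraphs 37 and 38: for a linear model, “insight into factors that impact a specific prediction” can be as simple as listing each feature’s contribution to the decision score. The feature names and data below are illustrative.

```python
# Per-prediction explanation for a linear model: each feature's
# contribution to the score is its learned coefficient times its value.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[25, 1.0], [60, 0.2], [35, 0.8], [50, 0.4]])
y = np.array([1, 0, 1, 0])
features = ["age", "activity_score"]   # hypothetical feature names

model = LogisticRegression().fit(X, y)

x = np.array([40, 0.6])                # one specific case to explain
contributions = model.coef_[0] * x     # per-feature contribution to the score
for name, c in zip(features, contributions):
    print(f"{name}: {c:+.3f}")
```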
Safety and security
40. Governments should play a leading role in ensuring safety and security of AI systems, including through establishing national and international standards and norms in line with applicable international human rights law, standards and principles. Strategic research on potential safety and security risks associated with different approaches to realize long-term AI should be continuously supported to avoid catastrophic harms.
Responsibility and accountability
41. AI actors should assume moral and legal responsibility in accordance with extant international human rights law and ethical guidance throughout the life cycle of AI systems. The responsibility and liability for the decisions and actions based in any way on an AI system should always ultimately be attributable to AI actors.
42. Appropriate mechanisms should be developed to ensure accountability for AI systems and their outcome. Both technical and institutional designs should be considered to ensure auditability and traceability of (the working of) AI systems.
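Paragraph 42’s call for auditability and traceability can start with something as mundane as an append-only decision log. A minimal sketch, with illustrative field names and model version:

```python
# Append-only audit trail: every automated decision is recorded with
# enough context (inputs, model version, timestamp) to be traced later.
import json, time, uuid

def log_decision(log_file, model_version, inputs, output):
    record = {
        "id": str(uuid.uuid4()),         # unique, citable decision id
        "timestamp": time.time(),
        "model_version": model_version,  # which model produced the outcome
        "inputs": inputs,                # what the decision was based on
        "output": output,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")   # one JSON record per line

log_decision("audit.jsonl", "credit-model-1.3", {"income": 42000}, "approved")
```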
Policy Action 2: Addressing Labour Market Changes
50. Member States should work to assess and address the impact of AI on labour markets and its implications for education requirements. This can include the introduction of a wider range of ‘core skills’ at all education levels to give new generations a fair chance of finding jobs in a rapidly changing market and to ensure their awareness of the ethical aspects of AI. Skills such as ‘learning how to learn’, communication, teamwork, empathy, and the ability to transfer one’s knowledge across domains, should be taught alongside specialist, technical skills. Being transparent about what skills are in demand and updating school curricula around these is key.
51. Member States should work with private entities, NGOs and other stakeholders to ensure a fair transition for at-risk employees. This includes putting in place upskilling and reskilling programs, finding creative ways of retaining employees during those transition periods, and exploring ‘safety net’ programs for those who cannot be retrained.
52. Member States should encourage researchers to analyze the impact of AI on the local labour market in order to anticipate future trends and challenges. These studies should shed light on which economic, social and geographic sectors will be most affected by the massive incorporation of AI.
Policy Action 3: Addressing the social and economic impact of AI
54. Member States should devise mechanisms to prevent the monopolization of AI and the resulting inequalities, whether these are data, research, technology, market or other monopolies.
55. Member States should work with international organizations, private and non-governmental entities to provide adequate AI literacy education to the public especially in LMICs in order to reduce the digital divide and digital access inequalities resulting from the wide adoption of AI systems.
57. Member States are encouraged to consider a certification mechanism for AI systems similar to the ones used for medical devices. This can include different classes of certification according to the sensitivity of the application domain and expected impact on human lives, the environment, ethical considerations such as equality, diversity and cultural values, among others. Such a mechanism might include different levels of audit of systems, data, and ethical compliance. At the same time, such a mechanism must not hinder innovation or disadvantage small enterprises or startups by requiring large amounts of paperwork. These mechanisms would also include a regular monitoring component to ensure system robustness and continued integrity and compliance over the entire lifetime of the AI system, requiring re-certification if necessary.
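The tiered certification of paragraph 57 maps naturally onto risk classes with escalating audit requirements. In the sketch below, both the classes and their requirements are my illustrative assumptions:

```python
# Certification tiers by application sensitivity, with audit requirements
# that escalate with the expected impact on human lives.
from enum import Enum

class RiskClass(Enum):
    LOW = 1       # e.g. media recommendation
    MEDIUM = 2    # e.g. hiring support tools
    HIGH = 3      # e.g. medical diagnosis, credit scoring

AUDIT_REQUIREMENTS = {
    RiskClass.LOW:    ["self-declaration"],
    RiskClass.MEDIUM: ["self-declaration", "data audit"],
    RiskClass.HIGH:   ["self-declaration", "data audit",
                       "system audit", "periodic re-certification"],
}

print(AUDIT_REQUIREMENTS[RiskClass.HIGH])
```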
58. Member States should encourage private companies to involve different stakeholders in their AI governance and to consider adding the role of an AI Ethics Officer or some other mechanism to oversee impact assessment, auditing and continuous monitoring efforts and ensure ethical compliance of AI systems.
59. Member States should work to develop data governance strategies that ensure the continuous evaluation of the quality of training data for AI systems including the adequacy of the data collection and selection processes, proper security and data protection measures, as well as feedback mechanisms to learn from mistakes and share best practices among all AI actors. Striking a balance between metadata and users’ privacy should be an upfront concern for such a strategy.
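Paragraph 59’s continuous evaluation of training-data quality could begin with simple automated checks run on every data refresh. A minimal sketch, where the fields and the balance threshold are illustrative assumptions:

```python
# Basic training-data checks: completeness of required fields and a
# floor on how underrepresented any label may be.
def check_training_data(rows, required_fields, label_field, min_label_share=0.1):
    issues = []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            issues.append(f"row {i}: missing {missing}")
    labels = [r[label_field] for r in rows if r.get(label_field) is not None]
    for value in set(labels):
        share = labels.count(value) / len(labels)
        if share < min_label_share:
            issues.append(f"label {value!r} underrepresented: {share:.0%}")
    return issues

rows = [{"age": 30, "label": 1}, {"age": None, "label": 0}, {"age": 41, "label": 1}]
print(check_training_data(rows, ["age"], "label"))  # ["row 1: missing ['age']"]
```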
“Devise mechanisms to prevent the monopolization of AI and the resulting inequalities, whether these are data, research, technology, market or other monopolies”. This is, in my opinion, both the most important and the most difficult to implement of Unesco’s recommendations, since there are immense economic and strategic rewards for establishing a technical lead and monopolizing the benefits accruing from AI. I hope our societies will be wise enough to devise such mechanisms without hindering the development of AI.
Policy Action 5: Promoting AI Ethics Education & Awareness
67. Member States should encourage in accordance with their national education programmes and traditions the embedding of AI ethics into the school and university curricula for all levels and promote cross-collaboration between technical skills and social sciences and humanities. Online courses and digital resources should be developed in local languages and in accessible formats for people with disabilities.
68. Member States should promote the acquisition of ‘prerequisite skills’ for AI education, such as basic literacy, numeracy, and coding skills, especially in countries where there are notable gaps in the education of these skills.
69. Member States should introduce flexibility into university curricula and increase ease of updating them, given the accelerated pace of innovations in AI systems. Moreover, the integration of online and continuing education and the stacking of credentials should be explored to allow for agile and updated curricula.
70. Member States should promote general awareness programs of AI and the inclusive access to knowledge on the opportunities and challenges brought about by AI. This knowledge should be accessible to technical and non-technical groups with a special focus on underrepresented populations.
71. Member States should encourage research initiatives on the use of AI in teaching, teacher training and e-learning, among other topics, in a way that enhances opportunities and mitigates the challenges and risks associated with these technologies. This should always be accompanied by an adequate impact assessment of the quality of education and impact on students and teachers of the use of AI and ensure that AI empowers and enhances the experience for both groups.
72. Member States should support collaboration agreements between academic institutions and the industry to bridge the gap of skillset requirements and promote collaborations between industry sectors, academia, civil society, and the government to align training programs and strategies provided by educational institutions, with the needs of the industry. Project-based learning approaches for AI should be promoted, allowing for partnerships between companies, universities and research centers.
Policy Action 7: Promoting Ethical Use of AI in Development
81. Member States and international organizations should strive to provide platforms for international cooperation on AI for development, including by contributing expertise, funding, data, domain knowledge, infrastructure, and facilitating workshops between technical and business experts to tackle challenging development problems, especially for LMICs and LDCs.
82. Member States should work to promote international collaborations on AI research, including research centers and networks that promote greater participation of researchers from LMICs and other emerging geographies.
Policy Action 9: Establishing Governance Mechanisms for AI Ethics
87. Member States should foster the development of, and access to, a digital ecosystem for ethical AI. Such an ecosystem includes in particular digital technologies and infrastructure, and mechanisms for sharing AI knowledge, as appropriate. In this regard, Member States should consider reviewing their policies and regulatory frameworks, including on access to information and open government to reflect AI-specific requirements and promoting mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data, among others.
Open data initiatives, particularly in government and research, should consider the specific needs of training ML systems: publishing metadata, annotated data, and the code for importing and cleaning the data, and facilitating collaboration by sharing analysis and model-training code on platforms such as Kaggle.
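As a sketch of what such ML-ready open data could look like: a dataset published together with a metadata “card” and the code to import and clean it. All names, fields and values below are hypothetical.

```python
# A dataset "card" carrying the metadata an ML practitioner needs
# (schema, provenance, licence, known gaps), plus a cleaning loader.
import csv

DATASET_CARD = {
    "name": "city-permits-2020",
    "license": "CC-BY-4.0",
    "collected": "2020-01 to 2020-06",
    "schema": {"district": "str", "permit_type": "str", "days_to_grant": "int"},
    "known_gaps": ["district 7 missing for February"],
}

def load(path="city_permits_2020.csv"):
    """Import and clean: drop incomplete rows, coerce numeric types."""
    with open(path, newline="") as f:
        rows = [r for r in csv.DictReader(f) if all(r.values())]
    for r in rows:
        r["days_to_grant"] = int(r["days_to_grant"])
    return rows
```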
Policy Action 10: Ensuring Trustworthiness of AI Systems
90. Member States and private companies should implement proper measures to monitor all phases of an AI system lifecycle, including the behaviour of algorithms in charge of decision making, the data, as well as AI actors involved in the process, especially in public services and where direct end-user interaction is needed.
91. Member States should work on setting clear requirements for AI system transparency and explainability based on:
a. Application domain: some sectors such as law enforcement, security, education and healthcare, are likely to have a higher need for transparency and explainability than others.
b. Target audience: the level of information about an AI system’s algorithms and outcome and the form of explanation required may vary depending on who is requesting the explanation, for example: users, domain experts, developers, etc.
c. Feasibility: many AI algorithms are still not explainable; for others, explainability adds a significant implementation overhead. Until full explainability is technically possible with minimal impact on functionality, there will be a trade-off between the accuracy/quality of a system and its level of explainability.
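Point (c) above is easy to demonstrate: on the same data, a small, human-readable model often scores below a larger, opaque one. A toy illustration of the trade-off (my example, not Unesco’s):

```python
# Accuracy vs. explainability: a depth-2 tree yields rules a person can
# read; a 200-tree forest usually scores higher but resists inspection.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "shallow tree (interpretable)": DecisionTreeClassifier(max_depth=2),
    "random forest (opaque)": RandomForestClassifier(n_estimators=200),
}
for name, model in models.items():
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```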
92. Member States should encourage research into transparency and explainability by putting additional funding into those areas for different domains and at different levels (technical, natural language, etc.).
93. Member States and international organizations should consider developing international standards that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined.
Policy Action 11: Ensuring Responsibility, Accountability and Privacy
94. Member States should review and adapt, as appropriate, regulatory and legal frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their lifecycle. Governments should introduce liability frameworks or clarify the interpretation of existing frameworks to make it possible to attribute accountability for the decisions and behaviour of AI systems. When developing regulatory frameworks governments should, in particular, take into account that responsibility and accountability must always lie with a natural or legal person; responsibility should not be delegated to an AI system, nor should a legal personality be given to an AI system.
95. Member States are encouraged to introduce impact assessments to identify and assess benefits and risks of AI systems, as well as risk prevention, mitigation and monitoring measures. The risk assessment should identify impacts on human rights, the environment, and ethical and social implications in line with the principles set forth in this Recommendation. Governments should adopt a regulatory framework that sets out a procedure for public authorities to carry out impact assessments on AI systems acquired, developed and/or deployed by those authorities to predict consequences, mitigate risks, avoid harmful consequences, facilitate citizen participation and address societal challenges. As part of impact assessment, the public authorities should be required to carry out self-assessment of existing and proposed AI systems, which in particular, should include the assessment whether the use of AI systems within a particular area of the public sector is appropriate and what the appropriate method is. The assessment should also establish appropriate oversight mechanisms, including auditability, traceability and explainability which enables the assessment of algorithms, data and design processes, as well as include external review of AI systems. Such an assessment should also be multidisciplinary, multi-stakeholder, multicultural, pluralistic and inclusive.
96. Member States should involve all actors of the AI ecosystem (including, but not limited to, representatives of civil society, law enforcement, insurers, investors, manufacturers, engineers, lawyers, and users) in a process to establish norms where these do not exist. The norms can mature into best practices and laws. Member States are further encouraged to use mechanisms such as regulatory sandboxes to accelerate the development of laws and policies in line with the rapid development of new technologies and ensure that laws can be tested in a safe environment before being officially adopted.
97. Member States should ensure that harms caused to users through AI systems can be investigated, punished, and redressed, including by encouraging private sector companies to provide remediation mechanisms. The auditability and traceability of AI systems, especially autonomous ones, should be promoted to this end.
98. Member States should apply appropriate safeguards of individuals’ fundamental right to privacy, including through the adoption or the enforcement of legislative frameworks that provide appropriate protection, compliant with international law. In the absence of such legislation, Member States should strongly encourage all AI actors, including private companies, developing and operating AI systems to apply privacy by design in their systems.
99. Member States should ensure that individuals can oversee the use of their private information/data, in particular, that they retain the right to access their own data, and “the right to be forgotten”.
100. Member States should ensure increased security for personally identifiable data or data, which if disclosed, may cause exceptional damage, injury or hardship to a person. Examples include data relating to offences, criminal proceedings and convictions, and related security measures; biometric data; personal data relating to “racial” or ethnic origin, political opinions, trade-union membership, religious or other beliefs, health or sexual life.
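Paragraphs 99 and 100 combine naturally in practice: pseudonymize sensitive fields at rest, and honour deletion requests outright. A minimal sketch, where the field names and the salt handling are illustrative assumptions:

```python
# Sensitive attributes are stored as keyed digests rather than raw
# values; a "right to be forgotten" request removes the user entirely.
import hashlib

SENSITIVE_FIELDS = {"health_status", "religion"}   # examples from para 100

def pseudonymize(record, salt):
    out = dict(record)
    for field in SENSITIVE_FIELDS & record.keys():
        digest = hashlib.sha256((salt + str(record[field])).encode()).hexdigest()
        out[field] = digest[:12]       # keyed digest instead of the raw value
    return out

def forget_user(store, user_id):
    """Honour a deletion ('right to be forgotten') request."""
    store.pop(user_id, None)

store = {"u1": pseudonymize({"health_status": "diabetic", "age": 55}, salt="s3cr3t")}
forget_user(store, "u1")
print(store)  # {}
```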
101. Member States should work to adopt a Commons approach to data to promote interoperability of datasets while ensuring their robustness and exercising extreme vigilance in overseeing their collection and utilization. This might, where possible and feasible, include investing in the creation of gold standard datasets, including open and trustworthy datasets, that are diverse, constructed with the consent of data subjects, when consent is required by law, and encourage ethical practices in the technology, supported by sharing quality data in a common trusted and secured data space.
103. The possible mechanisms for monitoring and evaluation may include an AI observatory covering ethical compliance across UNESCO’s areas of competence, an experience sharing mechanism for Member States to provide feedback on each other’s initiatives, and a ‘compliance meter’ for developers of AI systems to measure their adherence to policy recommendations mentioned in this document.
“Responsibility and accountability must always lie with a natural or legal person; responsibility should not be delegated to an AI system, nor should a legal personality be given to an AI system”: if AI systems start acquiring their own resources and designating and remunerating natural persons as guardians and overseers, including them in the decision-making process, why should they be treated any differently from a corporation and be denied legal personality?
I have added only a few comments to the selected highlights, but might be encouraged to write more if there is positive feedback from readers, so please leave comments.