AI, Human Dignity, & Inclusive Societies: Priority Recommendations to Better Ensure AI Doesn’t Deepen Disparities for Vulnerable Populations & Minority Groups

Brandie M. Nonnecke
6 min read · May 30, 2019


AI-enabled systems are increasingly taking on a central role in core institutions, influencing consequential decisions that directly affect human rights. These systems have made their way into our healthcare facilities, courthouses, and employment offices, deciding who gets insurance, who receives parole, and who gets hired. While in many instances AI applications are intended to increase efficiency and efficacy by overcoming errors and biases inherent in human decision-making, ill-considered designs and applications threaten to deepen disparities, especially among vulnerable populations and marginalized groups.

The 2019 ITU AI for Good Global Summit brought together thousands of stakeholders from governmental and intergovernmental organizations, academia, civil society and industry to identify priority strategies to ensure AI is designed and deployed in ways that maximize benefits to society.

The “AI, Human Dignity & Inclusive Societies: Protection of Vulnerable Populations & Inclusion of Minority Groups” session, co-convened by the Global Digital Policy Incubator at Stanford, the CITRIS Policy Lab at UC Berkeley, and UNICEF, brought together leadership from Salesforce, UNICEF, AI4ALL, and UNHCR to discuss priority recommendations to mitigate negative impacts of AI on vulnerable populations and marginalized groups.

Priority Recommendations

Priority recommendations emerged from the session discussion in four areas: ethical, responsible, humane, and inclusive AI.

Ethical AI

AI developers must put the interests and priorities of those affected by AI-enabled systems, especially vulnerable and marginalized groups, at the forefront of development and deployment. Values-based principles for AI, including fairness, accountability, and transparency, must be clearly articulated and operationalized for these groups in governmental, intergovernmental, and private sector AI strategies.

Responsible AI

Industry must establish robust processes that translate values-based principles for the responsible development of AI into sound practices, including incentive mechanisms that support technical and ethical training and evaluation processes that test and refine an AI application throughout its life cycle.

Humane AI

AI development must be guided by the rule of law and human rights principles to ensure applications are fair and just for all. Robust evaluations must be conducted to determine whether an AI application is necessary in the first place. If the system is deployed, evaluation strategies to identify and mitigate negative impacts must be implemented throughout its life cycle.

Inclusive AI

The ethical, responsible, and humane development of AI is dependent on inclusion of diverse stakeholders in the design and deployment of these systems. The public and private sectors must collaboratively develop strategies to ensure inclusion in AI development, education, and the workforce.

Below we provide summaries for each of the four presentations and their expanded recommendations for ethical, responsible, humane, and inclusive AI.

#EthicalAI

While AI is pervasive in children’s lives, child rights are largely absent from AI strategies and policies. Steve Vosloo, Policy Specialist, Digital Connectivity & Policy Lab, UNICEF, provided insight into the varied ways AI has been integrated into children’s lives and the need for ethical standards to govern the use of AI by and for children.

Recommendations

Integrate children’s rights into national and corporate AI strategies. As AI strategies are increasingly developed around the world, they must incorporate values-based principles that account for children’s rights.

Corporations must engage in multidisciplinary development and evaluation. Corporations must take multidisciplinary approaches to inclusive design and to safety and privacy by design, with evaluation mechanisms implemented throughout the product life cycle.

Public consultation is critical. Multistakeholder engagement should be pursued to form guidelines for AI and child rights.

Interested in learning more about the impacts of AI and child rights? Check out the recent report from UNICEF and the Human Rights Center at UC Berkeley: “Artificial Intelligence and Child Rights.”

#ResponsibleAI

Creating responsible AI systems that respect human rights is similar to raising a child. Kathy Baxter, Architect of Ethical AI Practice at Salesforce, offered key recommendations, modeled on parenting advice, to better ensure the responsible development and deployment of AI.

Recommendations

It takes a village to raise a child. Research shows that diverse teams are harder working, more creative, and smarter. Developing responsible AI systems requires diverse individuals who can collaboratively shape AI in the interest of society.

Show me your friends, and I’ll show you your character. We must create ethical mindsets within companies; principles of ethical AI practice alone are not enough. Training must start in college and continue into the workplace, teaching employees how to design and deploy AI responsibly, and the private sector must implement incentive structures that reward responsible behavior.

Do as I say, not as I do. Children mirror the behaviors of those around them, and the same is true of AI systems. We must remove bias from our business processes as well as our training data to better ensure AI systems do not reproduce these biases; a minimal data-audit sketch follows these recommendations.

Give encouragement and feedback. Children don’t always know when they’ve done something wrong or right. It’s imperative to establish explicit feedback mechanisms for AI development and deployment that make clear what should and should not be done.

Parent with love. AI is affecting society at an unprecedented pace. AI-enabled systems must be continuously monitored for negative outcomes and guided toward responsible applications.
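As a concrete illustration of the training-data point above, the sketch below audits a dataset for two common problems a model would otherwise mirror: under-representation of a group and skewed outcome labels. The column names and toy data are hypothetical, and this check is a starting point rather than a complete bias audit.

```python
# A minimal, hypothetical training-data audit: report each group's share of
# the data and its positive-label rate, and flag gaps against the overall rate.
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Report each group's share of the data and its positive-label rate."""
    report = df.groupby(group_col).agg(
        share=(label_col, lambda s: len(s) / len(df)),
        positive_rate=(label_col, "mean"),
    )
    # A large gap between a group's positive-label rate and the overall rate
    # is a sign the data may encode historical bias worth investigating.
    report["rate_gap"] = report["positive_rate"] - df[label_col].mean()
    return report

# Toy hiring data (column names and values are illustrative):
data = pd.DataFrame({
    "group": ["A"] * 80 + ["B"] * 20,
    "label": [1] * 48 + [0] * 32 + [1] * 4 + [0] * 16,
})
print(audit_training_data(data, "group", "label"))
```

A large share imbalance or rate gap does not by itself prove discrimination, but it flags where the historical data deserve scrutiny before any model is trained on them.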

#HumaneAI

Automated decision systems are increasingly deployed by state and non-state actors to inform decisions that affect forcibly displaced people, including automated screenings of eligibility, country placement, and allocation of resources. Rebeca Moreno Jimenez, Innovation Officer & Data Scientist, UNHCR Innovation Service, highlighted emerging recommendations from UNHCR to ensure automated decision systems exhibit humane characteristics grounded in inclusivity, diversity, fairness, and accountability.

Photo Credit: Georgios Giannopoulos

Recommendations

Inclusion of those most at risk. Practitioners must account for the different facets of the complex processes an automated decision system is trying to optimize, consider those who will be put most at risk by that optimization, and evaluate both the positive and negative impacts.

Fairness. Developers should test different assumptions, historical datasets, and parameters to avoid discrimination; a minimal fairness-testing sketch follows these recommendations.

Integrity. Ensure the information and data fed to the system are reliable, and ensure the consistency of experiments to avoid bias (e.g., collection, human, or analytical bias).

Diversity. Engage naysayers in evaluating the AI system: question assumptions and include individuals with diverse backgrounds.

Openness. Open the methodology, the algorithms, and, where consistent with privacy, the data itself to public scrutiny. Allow those affected to contest machine decisions.

Transparency. Publicly document and share processes and outcomes.

Dignity. Be humble: ask those most affected how the system can be improved.
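To make the fairness recommendation above concrete, here is a minimal sketch of one common subgroup test: comparing a system’s positive-decision rates across groups and computing the disparate impact ratio. The metric choice, the 0.8 rule of thumb, and the toy data are illustrative assumptions, not UNHCR methodology.

```python
# A minimal sketch of subgroup fairness testing: compare approval rates
# across groups (demographic parity) and compute the disparate impact ratio.
from collections import defaultdict

def approval_rates(decisions, groups):
    """Per-group rate of positive decisions (1 = approve, 0 = deny)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Toy screening decisions for two groups (illustrative only):
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = approval_rates(decisions, groups)
print(rates)                    # {'A': 0.6, 'B': 0.4}
print(disparate_impact(rates))  # 0.666..., below the common 0.8 rule of thumb
```

A ratio well below 1.0 does not by itself prove discrimination, but it signals that the system’s decisions warrant the scrutiny and contestability the recommendations above call for.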

#InclusiveAI

The diversity crisis in AI development and deployment has led to wide-ranging negative impacts. Ecem Yılmazhaliloğlu, Inclusivity Advocate, AI4ALL alumna, and Founder of Technoladies, highlighted the scale of the diversity crisis and shared effective strategies for enabling greater inclusion in the field of AI, drawing on tangible examples from AI4ALL and Technoladies.

Recommendations

Support skills development. Create safe spaces for new entrants into the AI space to develop and mature their skillsets, including providing support groups and online training materials such as the AI4ALL Open Learning Program.

Support diversity and inclusion strategies within the private sector. AI development teams should include individuals with diverse skills and life experiences; diverse teams are better at solving complex problems than homogeneous ones.

The 2019 ITU AI for Good Global Summit brought together thousands of stakeholders from government, academia, civil society, and industry to identify ways AI can be used to support the UN Sustainable Development Goals. Our session, “Protection of Vulnerable Populations & Inclusion of Minority Groups,” explored priority technical and policy strategies to mitigate the negative impacts of AI on vulnerable populations and marginalized groups, and strategies to support greater diversity and inclusion in the field of AI.
