Are We Ready to Co-Design the Ethical Frameworks of Constitutional AI?

Nicole Cacal
9 min read · Nov 8, 2023


A white feather pen resting on top of the US constitution.
Credit: Creativeye99

Decision-making, once the exclusive domain of human thought and intuition, is increasingly delegated to algorithms that predict, recommend, and even decide with minimal human oversight. From the news articles that shape our worldview to the justice systems that affect millions of lives, AI’s influence is profound and, often, invisible.

Against this backdrop, Constitutional AI shows promise for ethical governance, aspiring to give AI a backbone of moral and legal standards comparable to a societal constitution. This approach extends beyond lists of “do’s and don’ts,” endeavoring to weave human values deeply into the AI’s decision-making, aiming for outcomes that are fair and just. However, the initiative is not without its hurdles. The development of Constitutional AI, often orchestrated by a select few, can inadvertently echo undemocratic practices, undercutting the very values it seeks to uphold. Such challenges underscore the necessity for continuous adaptation and inclusive dialogue in the pursuit of truly equitable AI governance.

Enter Contestable AI, the dynamic partner that allows stakeholders — from end-users to ethicists — to challenge and refine AI systems. This is the living dialogue to Constitutional AI’s foundational script, ensuring that the systems remain adaptable and aligned with shifting cultural norms and ethical expectations.

The harmonization of these two paradigms, when underpinned by design research, holds the promise of a democratic AI governance system. This is a model that is not just built for the people, but by and with them, consistently tuned to the nuanced symphony of human needs and values. By combining Constitutional and Contestable AI with the rich insights of design research, we can architect an AI landscape that is responsive, responsible, and resolutely human-centric.

Setting Standards: Exploring Constitutional AI

Constitutional AI is an innovative concept where the governance of artificial intelligence is based on a set of foundational principles or “constitution,” much like the legal framework that underpins a democratic society. Its evolution stems from the recognition of AI’s deepening integration into critical decision-making processes and the imperative for an ethical scaffolding to guide its operations.

The underpinnings of Constitutional AI can be attributed to Anthropic, a prominent AI research and safety company, which expanded upon their earlier work on Constitutional AI through an experimental approach called “Collective Constitutional AI.” This approach sought to integrate the public’s voice into the AI’s guiding principles. Rather than a static set of rules, Anthropic’s experiment aimed at creating a living document, written by a diverse group of citizens, that would provide the AI, specifically their chatbot Claude, with a clear set of ethical guidelines. This constitution drew inspiration from significant authoritative documents, such as the United Nations’ Universal Declaration of Human Rights and Apple’s terms of service, ensuring that the AI’s behavior aligns with widely accepted human rights and ethical standards.

The role of these established principles is to steer the AI away from potential pitfalls and towards actions that are safe, responsible, and in harmony with human values. The constitutional framework is designed to ensure consistency in AI behavior, which is crucial for users to develop trust in these systems. Safety is another significant advantage, as clear guidelines help mitigate the risks associated with AI autonomy by preempting unethical actions or decisions.
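The critique-and-revise pattern at the heart of this approach can be sketched in miniature. The Python toy below is an illustrative assumption, not Anthropic’s implementation: in real systems a language model critiques and rewrites its own drafts against each principle, whereas here simple keyword checks stand in for the model, and every name is hypothetical.

```python
# Toy sketch of the critique-and-revise pattern behind Constitutional AI.
# A real system asks a language model to critique and rewrite its own
# drafts against each principle; here, keyword checks stand in for the
# model. All principle names and trigger rules below are hypothetical.

CONSTITUTION = [
    ("avoid_personal_data", "Do not reveal personal data."),
    ("be_respectful", "Avoid insulting language."),
]

# Hypothetical violation triggers, one list per principle.
TRIGGERS = {
    "avoid_personal_data": ["ssn", "home address"],
    "be_respectful": ["idiot", "stupid"],
}

def critique(principle_id: str, draft: str) -> bool:
    """Return True if the draft appears to violate the principle."""
    return any(t in draft.lower() for t in TRIGGERS[principle_id])

def revise(principle_id: str, draft: str) -> str:
    """Redact offending content; a real system would rewrite the draft."""
    for t in TRIGGERS[principle_id]:
        draft = draft.replace(t, "[redacted]")
    return draft

def constitutional_pass(draft: str) -> str:
    """Run the draft through every principle, revising where needed."""
    for principle_id, _description in CONSTITUTION:
        if critique(principle_id, draft):
            draft = revise(principle_id, draft)
    return draft
```

The detail worth noticing is the control flow, not the toy checks: every output passes through the same explicit, inspectable list of principles, which is what makes the “constitution” auditable in a way an unwritten training objective is not.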

Empowering Users: The Role of Contestable AI

Contestable AI, a term popularized by Kars Alfrink, refers to the design of AI systems that allow users to challenge and question not only the decisions made by these systems but also their underlying operational mechanisms. The idea of contestability is central to democratic governance: it empowers individuals and communities by granting them the agency to demand justifications, seek redress, and influence the future behavior of AI systems.

Contestable AI functions through several key mechanisms. At the individual level, users must be able to question AI decisions that affect them directly — a process that involves transparency about how those decisions were made. This could involve exposing the data inputs that led to a decision or the weighting of different variables in an AI’s algorithm. Collectively, mechanisms for contestation might include public forums for discussing and debating the moral and ethical parameters within which AI operates, or platforms that allow users to propose modifications to AI systems that are then voted upon by a wider community.
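The individual-level mechanism, exposing a decision’s inputs and the weighting of its variables so the decision can be challenged and re-run, can be made concrete with a small sketch. The `Decision` record and `contest` helper below are hypothetical names, and the weighted-sum model is assumed purely for illustration:

```python
# Hypothetical sketch of a contestable decision record: the inputs and
# weights behind a score are stored alongside it, so affected users can
# inspect each variable's contribution and re-run the decision with a
# corrected input. A simple weighted sum stands in for the real model.
from dataclasses import dataclass, field

@dataclass
class Decision:
    inputs: dict                     # feature -> value
    weights: dict                    # feature -> weight
    score: float = field(init=False)

    def __post_init__(self):
        self.score = sum(self.inputs[f] * w for f, w in self.weights.items())

    def explain(self) -> dict:
        """Per-feature contribution to the final score, for transparency."""
        return {f: self.inputs[f] * w for f, w in self.weights.items()}

def contest(decision: Decision, feature: str, corrected_value: float) -> Decision:
    """Re-run the decision with a user-supplied correction to one input."""
    new_inputs = dict(decision.inputs)
    new_inputs[feature] = corrected_value
    return Decision(inputs=new_inputs, weights=dict(decision.weights))
```

The design choice is that contestation produces a new `Decision` rather than mutating the old one, so the original decision, the correction, and the revised outcome all remain on record as an audit trail.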

One example that illustrates the clear need for Contestable AI principles involves the International Baccalaureate Organization (IBO). In response to the COVID-19 pandemic, the IBO canceled its spring 2020 exams, affecting over 170,000 students worldwide. Instead of traditional exams, the IBO used an algorithm to assign final grades, resulting in widespread disappointment as many students received lower grades than expected, jeopardizing their university admissions and scholarship opportunities. An online petition of more than 15,000 parents and students quickly highlighted concerns over a “faulty algorithm,” and though the IBO revised grades based on feedback, they defended their method and remained opaque about the system’s specifics. Critics feared biases against students from historically underperforming schools, potentially exacerbating inequalities. Initially, the IBO’s solutions were fee-based appeals or retakes, but under pressure, they modified the appeals process without clarifying their grading system. A more transparent, human-led re-evaluation process could have been a fairer solution, allowing students to contest the algorithm’s decisions effectively.

This ability to contest AI decisions plays a pivotal role in democratizing AI. It encourages a culture of active participation, where AI is not seen as an infallible authority but as a tool subject to scrutiny and continuous improvement. Enabling community input ensures that a diverse array of perspectives is considered in the development and deployment of AI, leading to systems that are better aligned with the multifaceted values of the society they serve. Furthermore, it fosters an environment where AI is responsive to societal changes, able to adapt as norms and values evolve.

Moreover, incorporating contestability into AI can act as a safeguard against biases within automated systems. As Alfrink suggests, by ensuring that AI systems are not only interpretable but also adjustable by the communities they affect, we can mitigate the risk of perpetuating historical injustices or encoding new forms of discrimination.

The integration of Contestable AI principles does not just provide a feedback loop for improving AI; it also aligns these systems more closely with the democratic ideals of participation and representation. It embodies a commitment to a form of governance where technology is continually held accountable to the human values it is meant to serve. Through such a model, AI can evolve to become not only more transparent and equitable but also a true partner in the collective pursuit of societal well-being.

Paper faces with different colors that represent diversity and inclusion.
Credit: Wildpixel

Principles and People: Design Research in AI

Design research exists at the intersection of human experience and technological development, offering a structured approach to understanding not just what users say they need, but what their behaviors, environments, and interactions reveal about their unspoken needs and desires. It is this deep dive into the human context that makes design research an invaluable tool in the synthesis of Constitutional and Contestable AI, ensuring that the principles guiding AI systems are grounded in the lived realities of the people they serve.

At the intersection of Constitutional AI and Contestable AI, design research serves as the bridge that connects overarching ethical guidelines with the nuanced, often variable, landscape of human values. Through methodologies such as ethnographic studies, user interviews, and participatory design workshops, researchers glean insights into the complexities of cultural, social, and individual factors that inform user needs and perspectives. These insights are crucial for developing AI systems that not only adhere to ethical standards, but are also flexible enough to accommodate the diverse and dynamic nature of human societies.

Design research enables the crafting of user-centric AI systems that are inherently adaptable. By engaging directly with the end-users, design researchers can capture the evolving nature of user needs and expectations, feeding this information back into the AI development process. This iterative process ensures that AI systems can be updated and refined in response to changes in societal norms or user feedback, fostering systems that remain relevant and aligned with human values over time.

Through design research, the application of Constitutional and Contestable AI principles can be tailored to the particular contexts in which AI systems operate. Whether it’s adjusting privacy settings to reflect varying cultural attitudes towards data sharing or modifying content moderation algorithms to align with local norms, design research ensures that AI systems are not only principled but also pragmatically aligned with the people they impact.

Harmonizing Principle and Participation: Integrating Constitutional with Contestable AI

The integration of Constitutional and Contestable AI is a formidable yet promising way forward in the journey towards ethical AI governance. Together, they forge a robust governance model that combines foundational principles with the agility of responsive contestation. This symbiosis can significantly bolster user empowerment, as it not only enforces consistent ethical behavior from AI systems but also vests individuals with the power to influence these behaviors.

A system that is both constitutionally bound and contestable inherently promotes transparency. Users can see the ethical bedrock upon which decisions are made and, critically, they can question and probe these decisions. This two-way interaction ensures that AI systems are not opaque monoliths issuing edicts from the silicon clouds but participatory tools whose operations and outputs are subject to public scrutiny and consent.

However, weaving together these complex threads is not without its challenges. There is a need to ensure that while AI systems are flexible enough to be contested and adapted, they do not become so malleable that they fail to offer consistent and reliable outputs. Additionally, there’s the peril of system abuse, where the mechanisms for contestation could be manipulated by bad actors or biased groups to skew AI behavior in unethical directions.

Addressing varied cultural perspectives adds another layer of complexity. Global AI systems must navigate divergent values and norms, where what is considered ethical in one culture may be contentious in another. This multicultural nature challenges the universality of any constitutional framework and tests the inclusivity of contestable mechanisms. It raises the question: Can a universal set of ethical principles be reconciled with the localized, contextual needs that design research uncovers?

Achieving this balance requires a principled yet pragmatic approach. Universal ethics can serve as the common ground — the non-negotiable baseline — upon which more nuanced, culturally sensitive layers can be added. For example, while privacy may be a universal concern, the degree and nature of privacy sought can vary, necessitating adjustable AI systems that can align with local expectations without compromising on the core principle.
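One way to realize this layering is a universal, non-negotiable baseline with locale-specific overrides on top. The sketch below is a hypothetical configuration scheme, not any standard API: locales may adjust negotiable settings such as retention periods, but attempts to loosen a core principle are rejected outright. All setting names are illustrative.

```python
# Hypothetical policy layering: a universal ethical baseline with
# locale-specific overrides. Keys listed as non-negotiable encode the
# "common ground" and cannot be overridden; everything else can be
# tuned to local expectations. All setting names are illustrative.

BASELINE = {
    "collect_biometrics": False,   # core principle: never overridable
    "data_retention_days": 90,     # negotiable: locales may adjust
    "require_consent_banner": True,
}

NON_NEGOTIABLE = {"collect_biometrics"}

def apply_locale(overrides: dict) -> dict:
    """Merge locale overrides onto the baseline, guarding core principles."""
    policy = dict(BASELINE)
    for key, value in overrides.items():
        if key in NON_NEGOTIABLE:
            raise ValueError(f"{key!r} is a core principle and cannot be overridden")
        policy[key] = value
    return policy
```

Making the guard an explicit, inspectable set rather than ad hoc code is the point: it lets communities see, and contest, exactly where the line between universal principle and local adjustment has been drawn.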

The interplay between Constitutional and Contestable AI should be orchestrated with careful consideration for both the ethical commitments and contextual applications. It’s a balancing act that requires ongoing dialogue among technologists, ethicists, legal experts, and, crucially, the communities served by AI. By fostering an iterative, inclusive process where principles are lived and not just legislated, and where AI systems are seen as evolving entities rather than fixed solutions, we can edge closer to AI governance that harmonizes principle with participation, ensuring that the benefits of AI are equitably shared and its risks democratically managed.

The combination of Constitutional and Contestable AI — tempered by the human-centered approach of design research — holds the promise of a democratically governed digital future. We explored an AI governance model that is not only rooted in firm ethical foundations, but is also flexible enough to be shaped by the voices of those it is designed to serve. By weaving together the steadfast guidelines of Constitutional AI with the dynamic, responsive framework of Contestable AI, and grounding the interplay in design research, we can aspire to create AI systems that are as attuned to human dignity as they are to technological prowess.

The journey towards such an AI governance model is continuous and requires active, ongoing participation from a diverse array of societal actors. It is through this inclusive participation that AI can be effectively guided to align with the mosaic of human values and needs. Stakeholders from all walks of life must not only be informed about the development and implications of AI but also be provided with accessible avenues to influence its governance.

We should see ourselves as active contributors to the AI narrative. Engaging with the evolution of AI governance is not a passive observation, but a civic action. Whether by providing feedback on AI systems, advocating for ethical practices, or participating in community dialogues about AI, each individual has the power to shape the trajectory of AI towards a more just, accountable, and transparent future. The democratic governance of AI is not a finished product, but an evolving process, one that thrives on the collective wisdom and vigilance of the global community.


Nicole Cacal is a Filipino-American (FilAm) entrepreneur, educator, writer, and speaker who writes about human-centered tech, digital strategy, and FilAm entrepreneurship.