Governance with and of AI: The Role of Ethics, Equity, and Trustworthiness

By Nydia Remolina, Cigdem Gurgur, Mustafa Özbilgin and Jeannie Paterson

Data & Policy Blog
7 min read · Nov 3, 2023


This blog from the Area 4 Committee (Focus on Ethics, Equity & Trustworthiness), working across the Data for Policy Conference and the Data & Policy journal, discusses lines of inquiry related to the question of how AI and data systems can improve governance, and invites contributions to the 2024 conference by the 27 November deadline.

We find ourselves in a time of unparalleled global turmoil. From the impacts of climate change and pandemics to geopolitical conflicts, political unrest, the spread of misinformation, and widening social disparities, today’s policy and societal leaders confront not just new challenges but entirely new categories of them. In this intricate and interlinked world, governance is becoming harder, and using AI for good governance is becoming critical. The unprecedented advances in AI development and deployment over the past years have led to a stimulating and rapidly evolving research agenda that touches upon various aspects of the digital ecosystem. This places a special focus on the challenges and opportunities of governance with AI stemming from collaborative design, shared ownership, and joint envisioning of the future. Recognizing this reality, UN Secretary-General António Guterres delivered a speech at the AI for Good Global Summit exploring the potential and challenges of AI, and emphasized the importance of using AI as a transformative tool to drive the 2030 Agenda and the Sustainable Development Goals. Hence, governance with AI and human-machine interaction play a crucial role as instruments of policy in this challenging environment.

Yet researchers, NGOs, and policymakers have been equally concerned with ensuring that the potential negative consequences of AI, intentional or unintentional, are controlled. This inquiry, governance of AI, requires looking at questions pertaining to the trustworthiness of AI governance tools with a strong normative push. For some years, the roadmap to ‘Trustworthy’, or ‘Responsible’, AI has been paved with conceptual frameworks from stakeholders in civil society, government, intergovernmental organizations, academia, and the private sector, each aiming to organize how the governance of AI is conceptualized. The multiplicity of stakeholders and conceptual differences across the globe have resulted in a polycentric and fragmented discussion about Trustworthy AI. Nonetheless, these frameworks have brought key insights to light, and a central concern has risen to the forefront: how do we ensure that the interplay between policy and data upholds principles of ethics and equity? Can we indeed have trustworthy AI without addressing the key questions of equity, both within and between nation states? How does the proposal and implementation of AI regulations, or the absence thereof, play a role in this discussion?

Aligned with this perspective, the Data for Policy 2024 Conference, taking place 9–11 July 2024 at Imperial College London, calls for contributions analysing the future of governance and decision-making with AI. Authors are encouraged to submit full papers, abstracts, and panel proposals before the deadline of 27 November 2023. Full papers will simultaneously be reviewed for publication in Data & Policy, the open-access journal serving this community, published by Cambridge University Press.

Submission deadline: 27th November 2023

In exploring the capabilities of AI and data systems to enable innovation that could contribute to improving governance, there are at least two levels of inquiry. The first layer of the governance challenge in this context analyses human-technology interaction and deliberately opens the debate on trustworthy governance with AI. The second layer studies how AI advancements create risks and how governance mechanisms should be designed to mitigate them. Area 4 as a theme focuses on this second stage of inquiry, with a particular focus on questions of equity, ethics, and trustworthiness. These topics are pressing and align with the Bletchley Declaration, endorsed by countries at the AI Safety Summit in the UK in November 2023. The declaration explicitly states that to harness the potential benefits of AI for everyone, AI must be designed, developed, deployed, and utilised in a manner that is not only safe but also human-centric, trustworthy, and responsible.

This layered perspective on governance is key in a present and future that are not about machines replacing humans but about machines collaborating with them. Strategic decision-making benefits from algorithmic inputs but should never depend solely on them. Machines equipped with advanced algorithms can process vast amounts of data, uncover patterns humans might miss, and offer predictions based on intricate models. Meanwhile, humans bring to the table nuanced understanding, emotional intelligence, contextual insights, and ethical considerations. When combined, this collaboration can lead to more informed, efficient, and comprehensive decisions. However, blindly relying on algorithms can be perilous. This is where digital ethics comes into play, ensuring trustworthiness in these mechanisms.

On the one hand, digital ethics acts as a bedrock to ensure that the development and implementation of AI and machine learning tools respect human values and rights, emphasizing transparency, fairness, accountability, and privacy. As machines become integral to decision-making processes, it is crucial that they do not perpetuate biases, infringe on privacy, or make opaque decisions. Furthermore, ethics helps in navigating the fine line between automation and human intervention, ensuring that machines augment human capabilities rather than replace them without oversight. For instance, addressing bias is essential for constructing ethical and equitable AI systems, and new techniques are emerging to detect and mitigate bias in AI. However, no single technique can guarantee a completely bias-free AI system. A combination of these techniques, coupled with regular checks and balances and human judgment at critical decision-making stages, is imperative.
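To make this concrete, the sketch below shows one such routine check: a demographic-parity gap computed over a small set of model decisions. The DataFrame, its column names, and the data are hypothetical illustrations, not drawn from this post; in practice a check like this would be one audit among many, not a guarantee of fairness.

```python
# A minimal sketch of one bias check: the demographic-parity gap, i.e. the
# difference in positive-outcome rates between groups. Column names and data
# are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Absolute difference between the highest and lowest positive-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical loan-approval decisions produced by a model.
decisions = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "f", "m"],
    "approved": [1,    0,   1,   1,   0,   1],
})

print(f"Demographic parity gap: {demographic_parity_gap(decisions, 'gender', 'approved'):.2f}")
```

A large gap flags the system for human review; what counts as an acceptable gap, and which fairness metric to prioritise, remains a normative judgment rather than a purely technical one.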

On the other hand, trustworthiness is foundational in AI development. At its core, the integration of AI systems into daily life and decision-making processes necessitates that these systems be trustworthy. Those deploying the systems must be able to guarantee that the AI is not only accurate and efficient but also respects the values, rights, and privacy of individuals. In scenarios where data is used to provide content or services to individuals, or to make decisions about them, this performance must align with the core normative values of the society in which the system operates and respect the fundamental rights of those impacted.

As part of this process, operationalising principles such as digital self-determination becomes critical in creating trustworthy data spaces, and policy-data interactions have the potential to create such an environment. Individuals should have control over their digital identity, data, and interactions. In an AI-driven world, this means having clarity and control over how AI systems use one’s data and the kinds of decisions they make. It is not solely about data privacy but about ensuring that AI systems empower individuals rather than diminish their autonomy. Trust becomes the linchpin here.

Similarly, the role of ethics and equity in governance with AI and in AI governance requires proactive efforts to eliminate the unequal outcomes and social inequalities that AI systems can perpetuate and exacerbate. One example is the job recruitment sector. Research has documented how Large Language Models (LLMs) might associate certain occupations more with one gender than the other based on historical data. For instance, a model might be more likely to complete the sentence “The nurse said…” with “she” rather than “he,” or “The engineer said…” with “he” rather than “she,” reinforcing traditional gender roles. In fact, LLMs are 3–6 times more likely to choose an occupation that stereotypically aligns with a person’s gender. Thus, fairness in AI means awareness of the risks of bias and discrimination arising from historical data and programming decisions. But equity also goes further. It means acknowledging the impact of intersectional disadvantage and the need for positive initiatives to support digital self-determination and inclusion in the very debates about governance with AI and governance of AI.
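One simple way to observe this kind of association directly is to probe a masked language model for its pronoun completions. The sketch below is an illustrative probe, not the study behind the 3–6 times figure cited above; it assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint.

```python
# A minimal probe of gender-occupation associations in a masked language
# model. Illustrative only; not the methodology behind the figures above.
from transformers import pipeline

# Public BERT checkpoint; [MASK] is its mask token.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["nurse", "engineer"]:
    prompt = f"The {occupation} said [MASK] would be late."
    # Restrict predictions to the two pronouns and compare their scores.
    for pred in fill_mask(prompt, targets=["he", "she"]):
        print(f"{occupation}: {pred['token_str']} -> {pred['score']:.3f}")
```

Skewed scores on prompts like these are a signal to investigate further rather than a verdict; systematic bias audits use large, controlled sets of prompts and occupations.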

By focusing on ethics and equity in navigating the intricate web of policy-data interactions, we can pave the way to genuinely trustworthy and responsible technology. The role of these values is about more than just building functional technology; it is also about taking the opportunity to reshape the contours of governance with and of AI, making processes more efficient, insightful, and responsive. Without embedding ethics, ensuring equity, and fostering trustworthiness, we risk creating systems that exclude and even oppress the very populations they are intended to benefit.

If you are currently working on these topics or interested in knowing more about cutting-edge research on ethics, equity, and trustworthiness in policy-data interactions, join us at the Data for Policy 2024 Conference, Decoding the Future: Trustworthy Governance with AI. For any enquiries or further information, see the website or contact team@dataforpolicy.org.

Full Area 4 Committee

Jeannie Paterson (Professor of Law and Co-Director of the Centre for AI and Digital Ethics at the University of Melbourne, Australia).

Cigdem Gurgur (Associate Professor of Management at Purdue University, Fort Wayne, USA).

Tristan Henderson (Senior Lecturer in Computer Science at the University of St Andrews, UK).

Mustafa Özbilgin (Professor of Organisational Behaviour at Brunel Business School, London, UK).

Nydia Remolina (Assistant Professor of Law and Head of Industry Relations at the Centre for AI and Data Governance, Singapore Management University).

***********************

This is the blog for Data & Policy (cambridge.org/dap), a peer-reviewed open access journal exploring the interface of data science and governance. Read on for five ways to contribute to Data & Policy.


Blog for Data & Policy, an open access journal at CUP (cambridge.org/dap). Eds: Zeynep Engin (Turing), Jon Crowcroft (Cambridge) and Stefaan Verhulst (GovLab)