Trustworthy Governance with AI?

By Zeynep Engin, Stefaan Verhulst, David Hand, Jon Crowcroft, Mark Kennedy, Rossella Arcucci

Data & Policy Blog · Jul 26, 2023


The first quarter of the 21st century has been marked by seemingly intractable global challenges. Complex issues, such as climate change and global inequality, pose threats to economic stability, environmental health, and human survival. Traditional societal institutions have largely failed to address these issues effectively and legitimately, leading to widespread frustration. The solution to these complex problems necessitates innovation in the design of how we govern.

Technological progress and renewed interest in new modes of citizen engagement have spurred political scientists, technologists, non-profit organisations, and government officials to reevaluate our governance methods. When designed responsibly, Artificial Intelligence (AI) may offer promising prospects for innovating our approach to public problem-solving, public service design, and policy development. In essence, AI has the potential to augment, rather than just automate, our governance methods — making them more trustworthy in the process.

It is, therefore, time to chart a more informed and empirical exploration of governance powered by AI.

We are thrilled to announce the launch of our Data for Policy 2024 conference with the theme “Decoding the Future: Trustworthy Governance with AI?”. Our aim is to explore both the operational and technical models of human-machine collaboration in decision-making and governance. We would like to cut across ‘tool AI’, ‘agent AI’ and ‘regulation of AI’ discussions to build a holistic vision for the future of critical decision-making, accounting for both the opportunities and risks emerging from AI-driven transformations. Our emphasis on ‘trustworthy governance’ rather than ‘trustworthy AI’ is intentional, as we seek to shift away from anthropomorphic language when referring to data-driven algorithmic processes, and to underscore the ultimate responsibility of humans in AI-assisted decision-making.

The UN’s AI for Good summit, held in Geneva earlier in July, conveyed numerous positive messages highlighting the potential roles for AI-powered technologies in addressing pressing global challenges, ranging from climate change adaptation to pandemic control and combating long-standing inequalities. Notably, the summit concluded with the world’s first human-robot press conference. The summit also took place amidst heated discussion about the potentially apocalyptic impact of AI. Following the sudden rise of generative AI technologies like ChatGPT, several high-profile statements emerged warning about the profound, even existential, risks these technologies pose to humankind. Meanwhile, many other critics are drawing attention to the more immediate harms AI technologies are already causing in society: discriminatory practices, manipulation of election processes, and the concentration of power among a small group of tech elites are well-documented and very current challenges.

The opposing positions — positive vs negative messages — need to be put in context: no technology is in itself either good or bad. It is the use we make of it that matters; as noted above, responsibility lies with humans. That this is recognised is illustrated by the fact that, in parallel to these developments, the pursuit of ‘taming’ AI has gained momentum. From the EU’s draft AI Act to the global roadshow of OpenAI’s CEO discussing the issue with world leaders, there is no shortage of statements of intent to prevent potential harms emerging from these technologies. The current emphasis on AI regulation is highly justified, even more so if we recognise that it is long overdue. However, it will also require innovation in governance itself to ensure AI technology development and deployment is governed appropriately and effectively.

Through the Data for Policy 2024 conference, we are interested in broadening the conversation and shifting our attention to critical decision-making and governance processes holistically. In a world where human cognitive abilities are increasingly enhanced by machine capabilities, we want to build a collective vision that steers technology development towards shared progress. Our focus on ‘trustworthy governance with AI’ promotes a symbiotic relationship, combining human and AI capabilities for better outcomes while maintaining human responsibility for decisions. Such human-AI collaboration, already familiar from medicine and games like chess, highlights the potential to harness AI’s transformative power while upholding human values for a more equitable and efficient future.

Data for Policy 2024, 9–11 July at Imperial College London. Find out more at dataforpolicy.org

The way citizens across the globe interact with cutting-edge AI technologies has already shifted fundamentally and irreversibly, thanks to recent large language models (LLMs) and foundation models combined with user-friendly interfaces like ChatGPT. The demand for these technologies in government, education, legal services, medical practice, financial investment, and countless other applications is only set to increase. Even if we take a step back and focus on the more established and relatively narrow AI applications — such as those we see in hiring, urban planning, autonomous driving, trading, dispute resolution or political campaigning — it does not take much imagination to see that AI-powered decision-making and governance is here to stay, with all its potential and perils. We simply cannot “un-learn” or “un-see” what has been invented; as Alan Penn put it nicely in his recent blog piece, “we must instead learn to take advantage of the good and to manage the negative consequences in the hope that on balance we come out better off than we were”.

As a community, our primary interest is in the creative transformation needed to regenerate, upscale and enhance governance functions themselves, given the new types of ‘intelligence’ we are able to incorporate into the process. Admittedly, an intrinsic problem remains: how do we avoid regressing on the hard-earned democratic values and governance principles that form the foundation of our societies under such massive-scale transformations? We are therefore looking to chart a new path towards more trustworthy decision-making and governance with the transformative potential of AI, while fully acknowledging that this conversation cannot be decoupled from conversations about AI technology and regulation. More trustworthy processes certainly require safer, more robust and more transparent components. But in our view, focusing on those components in isolation will not suffice if we are interested in collective human progress — hence our invitation to broaden the conversation.

We are, in particular, interested in the following questions (non-exclusive list):

  • How can machine learning contribute to more effective and trustworthy policy-making? What strategies can maximise technology development to support good governance?
  • At both micro and macro levels, how can AI-driven technologies address specific governance problems? What are the limitations of AI-assisted decision-making in governance?
  • How can we ensure safety, transparency, fairness, accountability, and trust in AI-driven decision-making processes?
  • What are the essential legal, regulatory, and technical frameworks and solutions required to ensure optimum AI behaviour in socio-economic contexts?
  • Components of AI models in governance: to what extent do data, algorithm design and interaction models affect AI-assisted decision-making?
  • What are the most effective and legitimate models of human-machine cooperation in governance, from technical, practical, and philosophical perspectives?
  • To what extent can AI-powered predictive analysis be utilised for proactive decision-making in governance?
  • How can government services be personalised and localised with the help of AI? What are the best strategies, processes and tools for more effective and responsive citizen engagement and service delivery?
  • Can AI chatbots make the government more accessible and responsive by handling large volumes of public queries?
  • How can AI be used in infrastructure management to optimise resource allocation and predict maintenance needs?
  • How can AI assist in monitoring environmental conditions, predicting climate patterns, and informing sustainable development policies?
  • How can AI aid law enforcement agencies in predicting crime hotspots and optimising resource deployment without discriminating against certain groups and individuals in society?
  • In what ways can AI systems improve educational planning and personalised learning for more effective education?

Data for Policy 2024 will be the eighth edition in our international conference series, which captures early scholarly contributions and cross-sector synergies at the interface of data science and AI innovation and governance. We have been fortunate to drive global conversations in this space since 2014, curating knowledge from across the globe and fostering countless new collaborations. We started with the “Policy-making in the Big Data Era” theme for our 2015 conference, when hyped expectations still largely centred on ‘data speaking for itself’. In 2017, we posed the “Government by Algorithm?” question at a time when doing so was deemed a risky and daring intellectual undertaking. This was followed by the “Digital Trust and Personal Data” theme we highlighted in 2019. Looking ahead, our chosen highlight theme for next year, “Trustworthy Governance with AI?”, represents a natural progression and maturation of our previous community discussions. As AI tools continue to advance and permeate our societies, we remain committed to fostering debate and new types of collaboration to ensure new capabilities ‘optimally’ serve the public interest.

We are excited to embark on this next chapter and invite you to join us in shaping our joint futures, ensuring trustworthy governance in the age of AI.

For more information, please refer to the conference website: https://dataforpolicy.org/data-for-policy-2024/

About the Authors

Zeynep Engin is the Founding Director of Data for Policy CIC, a global community of interest that runs the Data for Policy Conferences, and one of the founding Editors-in-Chief of Data & Policy, the first open-access journal in this field, published by Cambridge University Press. She is the lead General Chair for Data for Policy 2024.

Stefaan Verhulst is Co-Founder and Chief Research and Development Officer as well as Director of The GovLab’s Data Program at NYU. He is one of the founding Editors-in-Chief of Data & Policy and a General Chair for Data for Policy 2024.

David Hand is Emeritus Professor of Mathematics and Senior Research Investigator at Imperial College London, where he formerly held the Chair in Statistics. He is a Fellow of the British Academy and an Honorary Fellow of the Institute of Actuaries, and has served twice as President of the Royal Statistical Society. He sits on the Board of Directors of Data for Policy CIC.

Jon Crowcroft is the Marconi Professor of Communications Systems in the Computer Lab at the University of Cambridge, Researcher-at-Large at the Alan Turing Institute and Visiting Professor at IX and the Department of Computing at Imperial College London. He is the co-Founder of the Data for Policy Conferences, one of the founding Editors-in-Chief of Data & Policy journal and a General Chair for Data for Policy 2024.

Mark Kennedy is Associate Professor of Strategy & Organisation at Imperial College Business School, Director of Imperial Business Analytics — a research lab at Imperial College London’s Data Science Institute (DSI) — and co-Director of the DSI. He is one of the Imperial Local Chairs for Data for Policy 2024.

Rossella Arcucci is Associate Professor in Data Science and Machine Learning at Imperial College London, where she leads the Data Assimilation and Machine Learning (Data Learning) Group. She is one of the Imperial Local Chairs for Data for Policy 2024.

***

This is the blog for Data & Policy (cambridge.org/dap), a peer-reviewed open access journal exploring the interface of data science and governance. Read on for five ways to contribute to Data & Policy.
