Illumination on standardisation in a data-driven world

A Report From Adjacent Workshops to Examine Standards and Assurance Regimes for AI and Data-Driven Services

Henry Fraser
Automated Decision-Making and Society
9 min read · Oct 13, 2023

Lead author: Adam Green

Contributing authors: Henry Fraser, Fiona Haines, Rotem Medzini, Christine Parker, Kimberlee Weatherall, Karen Yeung

Standards and assurance regimes are seen as an important way to promote ‘trustworthy’ AI and data-driven services. Workshops hosted by ADM+S and centre partners at the University of Birmingham brought together diverse participants to build a multi-disciplinary understanding of challenges and opportunities for this new regulatory ecosystem.

The emergence of the first publicly accessible Generative AI tools in late 2022 intensified the debate about the consequences and reach of contemporary artificial intelligence for livelihoods, governance and the fate of humanity itself. Governments are simultaneously hopeful that AI and emerging data-driven services can optimise public services and significantly improve economic productivity, and worried about foreseeable and unknown risks. AI’s lead developers are positioning themselves as advisers and oracles for an era of superintelligence.

The task for governments, and the citizens they represent, is daunting. There is already a thick rulebook for mitigating the risks of technologies, but nothing is as mercurial and ubiquitous as AI. The term AI is a shorthand for a plethora of data technologies at different levels of maturity and with varied risk profiles. Foundation AI models, systems trained on massive amounts of internet data that can be used for a wide range of purposes, pose significant risks. Their flexibility, and the way they are developed, make them hard to contain within any sector or horizontal category, and perhaps impossible to interrogate for potential problems in their training data or for how they perform when deployed in real-world settings.

Conventional concepts like safety, anchored in domains like medical device regulation, break down in the AI era, when safety is increasingly a systems property arising from the interactions between algorithms, data, people, and other automated systems. AI is also harder than other technologies to monitor once it is released for broad public or private use, making it difficult to ensure it continues to fall within set standards. At the same time, viewed from the perspective of innovation and potential national benefit, governments worry about regulatory overreach, which could damage international competitiveness and stifle innovation.

Rules and standards for AI are both critical and challenging to develop. Technical standards are private governance tools that record best practice as determined by respected standards organisations such as the International Organization for Standardization (ISO). Assurance is a process that generally involves assessing and certifying technical systems against standards. Standards do not have legal force by themselves, but may be adopted into law: for example, through legislation stating that certification against a standard will be deemed to meet some legal requirement. Europe plans to use standards in this way for AI.

Done well, standards can encourage innovation in AI and data services, provide a common language and benchmark for developers, and build trust among the public. Standards need not be inhibitors. At their best, they can encourage a race to the top; create jobs and specialisms, including in academia; foster credibility; provide a vocabulary; project a rational image of practice; and permit interoperability. But to achieve this, they need to be inclusive of different perspectives while being pragmatic and deliverable; their implementation needs to be valid and meaningful. Policymakers around the globe and the AI community are now tasked with building a regulatory and assurance architecture that is tough enough to mitigate harm without being so onerous that good ideas go undeveloped.

Two back-to-back workshops

In collaboration with our partners at the University of Birmingham, a number of ADM+S researchers were involved in two workshops intended to critically evaluate standards for data-driven technologies, including AI. The first workshop, on 22 May 2023 at the Birmingham City Library, was hosted by Professor Karen Yeung and Dr Rotem Medzini of the University of Birmingham as part of their research within the European Lighthouse on Secure and Safe AI Network of Excellence (ELSA), funded by EU Horizon and UKRI, for which Professor Yeung is joint principal investigator. This closed workshop brought together professionals in standards, certification, accreditation and enforcement from across Europe, with experience in assurance regimes for the GDPR, medical devices and artificial intelligence. Its purpose was to gather insight, under the Chatham House Rule, from those actively involved in the operation and implementation of these certification regimes, and thereby to help refine the research questions Yeung’s team is investigating. Participants included policy officers, public regulators, accreditation bodies, non-governmental organisations, advocacy groups, scholars and legal professionals, as well as invited members of the ADM+S institutions stream.

The second workshop, on 23 May 2023 at the Exchange, was organised by Professors Christine Parker, Fiona Haines, and Kimberlee Weatherall, and Dr Henry Fraser from ADM+S, and co-hosted by Professor Yeung and colleagues at UoB. This was an open international academic workshop with participants from a range of disciplines including law, computer science, political science, regulatory theory, criminology, science and technology studies, and platform governance. The workshop included panels on the role of standards in AI governance; lessons for AI standards from platform governance; environmental standards for AI; and the standards development process. A keynote lecture for both workshops, titled “Intermediation and Trust in the Regulatory State: More Regulation, Less Trust?”, was presented by Professor David Levi-Faur, a leading scholar of regulatory governance.

The participation imperative

The blend of expertise and perspectives at both workshops allowed different groups to better understand each other’s objectives and constraints. The interaction between participants helped build a shared understanding of the challenges associated with developing standards for AI. Advocacy groups were able to stress-test recommendations by learning about practical constraints faced by regulators in areas like medical devices and data privacy. Industry participants heard from lawyers and civil society groups about blind spots, and provided their perspectives on the most viable interventions for optimal governance of data-driven services. Regulators conveyed to academic researchers the realities of standards and assurance, to inform the creation of frameworks that can be operationalised and iterated. Academic researchers were able to understand obstacles to data-driven standards in practice, down to prosaic but important factors such as legacy IT systems in healthcare and their implications for compliance.

Without this proactive sharing of expertise and perspective across domains, there is a risk of echo chambers in the AI governance discourse(s) in which each community develops practices, or critiques of practice, that lack empathic context. Conversely, decision makers benefit from academic perspectives on the risks and downsides of current approaches which they may not have considered in the rush to respond to a growing need.

The convening of stakeholders across disciplines, sectors and geographies is especially important given the already gaping participation gap in AI governance, which is dominated by a narrow community of powerful actors to the exclusion of voices from civil society, academia and the global South. NGOs and academic researchers often cannot afford the time or cost of attending and meaningfully participating in the conversations that are now establishing the AI frameworks of tomorrow. Supporting more inclusive participation is especially valuable when developing socio-technical standards for a capability as broad and consequential as AI, which involve value judgements and normative frameworks on issues including privacy, safety and rights. Diverse perspectives can improve foresight around future risks, as different communities can anticipate applications or failures that others may not consider. Both workshops helped to bring together communities that may not otherwise interact in an open, informal forum.

Defining the problem

Powerful technology businesses and governments are already defining the ‘problem space’ of AI regulation, whether it be Sam Altman’s calls for an International Atomic Energy Agency (IAEA) for AI or the UK Prime Minister’s suggestion for a ‘CERN’-like institution. Existing standards bodies like the International Organization for Standardization (ISO), International Electrotechnical Commission (IEC), Institute of Electrical and Electronics Engineers Standards Association (IEEE), and International Telecommunication Union (ITU) are seeking to participate in the new era and have a wealth of existing tools that could support standards in the AI age. The proliferation of certification providers is already causing confusion, and this will likely accelerate in the months ahead. Then comes the question of how to synchronise the large number of existing instruments such as the Digital Markets Bill, the Online Safety Bill, and the EU AI Act.

A major concern with this current jostling for influence, and the race to construct standards and regulations for data-driven services, is the risk of locking into systems and institutions that are not fit for purpose, or of issues being framed in ways that preclude broader or differently posed conversations about the rules and guardrails around tools like AI. The opportunity to regulate high-risk products might be missed, while products that pose low risks might fall under the shadow of overly constraining regulation. To be sure, past efforts should not be discounted. ISO is a treasure trove of standards, some of which could address the kinds of risks that Generative AI may aggravate.

However, existing frameworks may not cope with the unique challenges of AI and contemporary digital technologies. During the first workshop, instructive lessons were offered from the history of regulation and assurance failures, from the role of credit ratings agencies in the financial crisis to the Rana Plaza disaster in Bangladesh and class action lawsuits in areas like silicone breast implants. All of these episodes provide instructive insights on the limits of regulation, the importance of incentives in shaping outcomes, and blind spots that could be avoided or forestalled as a new rule book is written.

The discussions helped identify issues including the risk of safety becoming a narrow technical compliance objective that misses the broader democratic questions that need to be asked of a powerful new technology, and the unclear, or absent, normative framework for AI regulation. Should there be red lines, and where? The sessions revealed the lack of consensus and transparency over what ‘good’ looks like, or how to define ‘foreseeable risk’ in a technology domain as unpredictable as AI, and identified emerging trends like the growing embrace of synthetic data (AI-generated data used to train other AI systems) in areas like healthcare. Other challenges include the lack of competence among corporate boards to oversee investments in AI, the limited resources of market surveillance authorities, and the difficulty posed by regulatory bodies relying on industry funding, as well as the need to ensure that safeguards like third-party conformity assessment are accessible and affordable for SMEs.

Identifying academic opportunities

During the second workshop in particular, academic researchers were reminded of the rare opportunity they have to play a role in the development of rules and standards for data-driven services. This could include expanding the scope of assessment beyond the narrow confines of safety to include rights-critical considerations; interrogating how and whether standards and assurance regimes work as they purport to; learning from the successes and failures of historical regulatory regimes; and widening the framing of AI standards and governance issues to include, for instance, environmental and ecological considerations. Academic research can also identify sources of leverage that could strengthen governance beyond standards alone.

Academic experts and NGOs can play a crucial role in shaping the digital economy for good; witness the work of Max Schrems, the privacy campaigner who took on Facebook and won. But academic researchers must ensure their research communicates with and connects to those working in related fields, and to stakeholders outside academia. Universities themselves can be sandboxes for experimentation and can convene networks to participate in public-good initiatives that create evidence for regulators to draw from, such as the ADM+S Centre Ad Observatory project.

These workshops offered early career researchers an opportunity to hear from regulatory experts from the likes of Ofcom about decision-making processes on issues like regulatory inspection, or auditing, of social media algorithms, and a chance for regulators to consider academic perspectives as they develop practices and policies. It helped improve understanding of how different terms and concepts are interpreted across communities and jurisdictions. For example, platform governance scholars and practitioners have shown that concepts such as ‘safety’ may become richer and more useful than they first appear.

The workshops allowed researchers to interface with experts directly engaged in policy discussions at the European Commission level, to closely follow the latest discussions and permutations of the EU AI Act, and to hear success cases, such as environmental activism in Europe that led to negotiations to pause the licensing of new data centres in the absence of a national plan. It also allowed researchers to identify commonalities and synergies in their work, especially around political economy factors and the multiple challenges of measurement and transparency.

The coming months will be crucial in laying down the rails on which data-driven technologies run. Building confidence requires understanding trustworthiness: there needs to be a golden thread connecting the top-down policy and political intent around regulating norms and values with the bottom-up work of practitioners, academics, data scientists and engineers who are building these data-driven technologies. Both groups need to develop pragmatic, workable, informed, and trustworthy ‘quality’ standards for a data-driven future.
