AI legislation is coming – what does it mean for Black and marginalised communities?

Data, Tech & Black Communities
Mar 7, 2022


People protesting in the street, demanding justice for all. Geo-location markers overlaid on their faces indicate that the protestors are being digitally tagged; images of a computer server and CCTV cameras show how the protest is being digitally recorded.

Photo adapted from Canva: RODNAE Productions

The UK government’s National AI Strategy sets out its framing of the potential opportunities and risks of AI, and of strategies for mitigating those risks, ahead of a consultation due in 2022. In this post, we summarise the strategy and explain why we think Black and marginalised communities need to pay attention to it. In our opinion, the strategy does not adequately support Black and marginalised communities in realising the opportunities of AI and data-centric technologies, and it lacks measures for mitigating the real harms associated with AI, which we know fall more heavily on communities like ours. Lastly, we set out our view of what needs to happen to create an environment in which AI facilitates flourishing for all.

Don’t believe the hype

Last month, we wrote about the government’s consultation on its proposals to erode our data rights in order to facilitate vaguely defined ‘innovation’. Around the time DCMS opened that consultation (which was closer to a white paper), a strategy paper was released in another part of the government universe: BEIS published the National AI Strategy, its vision for an AI-fuelled tech utopia. As is always the case with narratives that hype the potential benefits of new technologies, it speaks in general terms rather than addressing specific use cases or policy challenges, and it is light on detail. This is troublesome for at least two reasons. First, it makes it difficult for experts and affected groups to offer useful insights about the nature and scope of the policy challenges the government is looking to address, or about the suitability of these data-centric technologies for resolving them. Second, it makes it nearly impossible to meaningfully assess what benefits, if any, implementing the proposed strategy would deliver. But perhaps that’s the point.

Getting into the details of the strategy

So what’s in the AI strategy? It is laid out as three pillars, each targeting an aspect of the UK’s AI landscape. Here is our summary:

  • Pillar 1 (Investing in the long-term needs of the AI ecosystem) acknowledges the necessity of an enabling environment for AI to flourish. Further, it identifies limited access to skills, data, large-scale computing capacity, funding (including from venture capital) and trade opportunities as some of the gaps that need to be plugged.
  • Pillar 2 (Ensuring AI benefits all sectors and regions) highlights the importance of boosting the use of AI in so-called “high potential, lower-AI-maturity” sectors; health and defence are two prominent examples deemed to fall within this category. Interestingly, despite the title, the paper has very little to say about how funding for the development of AI should be distributed, or how to ensure that ‘benefits’ from it are felt across the UK’s geographic regions.
  • Pillar 3 (Governing AI effectively) sets out a hodgepodge of initiatives and programmes that the government proposes will serve as the basis for a national governance framework for AI technologies. Success criteria for this framework include being ‘pro-innovation’ and imbuing organisations with the confidence to adopt AI technologies.

If we take the Pillar headlines at face value, they might seem innocuous. It does make sense to try to identify and plug the structural gaps within the existing AI landscape. And having seen some of the fallout from inappropriately applied AI, who could argue against the need for better governance? The trouble is, these sensible summary headlines aren’t a good reflection of the content of the government’s AI strategy.

Do the pillars stand up?

In a word, no. The government’s strategy paper doesn’t provide a clear evidence-based assessment of the current state of AI development, deployment practices or their impact here in the UK. Further, it fails to set out a clear set of policy objectives or provide a good explanation of how its proposed actions might deliver them. Nor does it provide any measurable targets. Instead we get grand aspirations and wishful thinking — press release as policy, if you like.

This isn’t just our opinion; the Royal Statistical Society (RSS), in a blog post that was published around the time the strategy paper was released, had this to say:

The emerging UK National AI Strategy is out of step with the needs of the nation’s technical community and […] is unlikely to result in a well-functioning AI industry.

In the same blog post, the RSS also notes that of the 52 individuals who contributed to the AI Roadmap (which informed the government’s AI strategy), only four represented the interests of software companies. We think it even more noteworthy that none of them represented, or could claim direct access to, the interests of vulnerable or negatively impacted groups. The skew of contributors tells us a lot about the government’s focus and priorities. The missing voices should tell us how likely this strategy is to deliver an AI ecosystem that supports equitable human flourishing.

Take Pillar 1, for example. The AI strategy cites a lack of skills, data, funding (especially from venture capital) and trade opportunities as hindrances to the development of a vibrant AI ecosystem. However, the government’s plans to “train and attract the brightest and best people at developing AI” are very much focused on tertiary education programmes that have not demonstrated a strong commitment to reaching, admitting and teaching racially and socio-economically diverse cohorts. There are no plans to support (or even seriously explore) the emergent worker-led data science practice that is driving the development of applications helping gig workers hold their employers to account. The paper makes much of the government’s plans to attract experienced, world-class specialists via its Global Talent visa scheme, despite the scheme’s well-publicised failure to attract its target audience.

In Pillar 2 the government sets out a vision for “Leveraging the whole public sector’s capacity to create demand for AI and markets for new services.” We can infer, from the many other examples provided in the AI strategy, that this is a nod to handing data collected in the delivery of public services over to the private sector. This is concerning: we know it is people on the lowest incomes who have the most interactions with the public sector, so creating new markets based on public sector data means primarily exploiting low-income communities. These communities are both less likely to be aware of such markets and more likely to face the brunt of the harms arising from their use. Research by Big Brother Watch found that one in three councils already use opaque algorithms developed by private sector firms to risk-score people who receive housing benefit and council tax support. With the government’s active encouragement, the use of these data-centric technologies will only grow in councils, schools and other public institutions. The NHS is repeatedly highlighted within this strategy and, given UK government plans to further centralise health records in England, health data will be further exploited by the private sector in pursuit of innovation.

It would be reasonable to expect Pillar 3, with its focus on “a national governance framework for AI technologies”, to address the clear risks opened up by Pillar 2. It does no such thing. The paper pays lip service to the need for AI assurance infrastructure to provide some basis for accountability with regard to the safety and proper application of AI systems. This is both too weak and too vague given the scale, complexity and risks of the terrain the government wants to so boldly explore. Later, the text alludes to an emergent “AI assurance ecosystem […] within both the public and private sectors, with a range of companies including established accountancy firms and specialised start-ups, beginning to offer assurance services”. It fails to mention that relatively few firms offer these services, that take-up is patchy, and that organisations are under no obligation to follow or implement any recommendations, or even to declare the results of such audits. In fact, there have been high-profile cases of organisations using such audits to whitewash ethically dubious practices. As a result, these audits offer no protection to end users or affected groups. Rather than address this with concrete action, all the government promises is that the CDEI will publish the AI assurance roadmap; it doesn’t commit to implementing anything within it. Even the lowest of low-hanging fruit is beyond the reach of our government: it won’t even set a target for introducing a mandatory transparency obligation for all public sector organisations (although it notes that the Commission on Race and Ethnic Disparities recommended this). The promised Algorithmic Transparency Standard has now been published, but there is still no commitment to mandate its adoption.

The AI consultation is heading our way

Strategy papers like this one are important because they give us clues about what to expect in the consultation that follows. For example, the government started fairly cautiously with the National Data Strategy in 2020, which presented strong rhetoric on the need to advance the use of data whilst acknowledging the importance of data protection rights. Not satisfied with the feedback it received on that strategy, the government commissioned a Taskforce on Innovation, Growth and Regulatory Reform (TIGRR) in 2021, which took a more bullish tone and advocated the shredding and burning of our existing data rights. It neglected to share any details about what would come after the bonfire. The TIGRR report was easy to dismiss as the ravings of senior Conservative ideologues. However, the rights-eroding Data: A New Direction consultation made us sit up and take notice that this proposed act of lunacy was real. The government views data protection rights as a barrier to innovation which must be dismantled, rather than as a safeguard that minimises the harms that often accompany innovation by allowing the public to scrutinise what happens to data collected about us. The government has promised to release a consultation on its proposals for delivering its AI strategy, and a separate consultation on AI within the NHS, at some point this year. We need to get ready.

What needs to happen next?

Hopefully we have convinced you of the importance of scrutinising the AI consultation when it’s published. We are prepared to put in the hours to respond, but we can make a stronger argument if we can draw on the expertise and experience of others. We would love to hear your ideas or suggestions for people and groups we should be speaking to. Offers to help with drafting a response are also welcome. Do please get in touch; together we can insist on the creation and use of data-centric technologies in ways that actually support flourishing for all.


Data, Tech & Black Communities

DTBC is a group of diverse Black/Black heritage people working together to ensure data & data-driven technologies enhance rather than curtail Black lives.