AI.Westminster #4 (23rd April, 2019)

Elliot Jones

A summary of AI developments and policy in the United Kingdom.

Welcome to AI.Westminster, a newsletter covering developments in AI policy and the wider ecosystem in the United Kingdom. Subscribe here.

Hope you’ve all had a great Easter! With everyone enjoying the sunshine and four-day weekend, this edition is a short one.

Contents

  • UKRI partners with France and Canada to run workshops on AI and society
  • West Midlands Police ethics committee declines to approve use of predictive policing (for now)
  • FCA asks for additional funds to upgrade regulation to cope with growing use of AI in finance
  • Google shuts down UK-based AI & health ethics board
  • Interesting upcoming events

UKRI partners with France and Canada to run workshops on AI and society

What happened: UKRI, a government-funded body which directs and promotes research and innovation funding, has partnered with its counterparts in France (CNRS) and Canada (CIFAR) to spend ~£231,000 on eight international workshops on the ethical, legal and policy implications of AI.

Each workshop will produce a non-technical publication aimed at public and private policy-makers as well as the general public. The first workshop, on Fairness, Interpretability and Privacy for Algorithmic Systems, will take place at the Alan Turing Institute on 3rd–4th June.

Why this matters: AI is a global technology, with technical developments and associated socio-economic impacts spreading easily across borders. The US and China are clear leaders in the development and implementation of AI, so their national policies will have a significant impact on how AI develops in the UK and what ethical and legal standards are feasibly enforceable.

However, the UK, France and Canada are not insignificant players; they have relatively mature AI ecosystems and geopolitical clout. Initiatives like this, which bring together academics and policy-makers to develop collective understanding and positions, will allow all the countries involved to have a much more significant influence on the direction of global AI developments and regulation. While the EU is already pulling together its own initiatives and positions, the UK will need more (and likely much heftier) collaborations like this to remain a relevant player in the global AI ecosystem and policy landscape.

Some of the workshops also address genuinely unexplored policy implications of AI. One workshop focuses on the role of AI in potential geopolitical conflicts resulting from a changing Arctic climate and the accompanying access to new resources. When setting AI policy, it is important to keep in mind how the development of stronger AI will interact with other dramatic, civilisation-shaping trends over the 21st century, and potentially amplify or ameliorate their impact on humanity.

West Midlands Police ethics committee declines to approve use of predictive policing (for now)

What happened: The West Midlands Police Ethics Committee has raised concerns over the use of a predictive policing system, the Integrated Offender Management Model, and refused to approve its use until the developers address their concerns. The committee highlighted several issues with the proposed system:

  • A lack of privacy impact assessments.
  • Gaps in the legal case around human rights and data protection, e.g. the implications of labelling someone as high-risk.
  • Uncertainty about the process for deciding what data is reliable enough to be used in the model, which risks wrongly implicating people simply by association with known offenders. In particular, there are concerns that using Stop & Search data could reinforce police racial bias.
  • Related concerns regarding the age of the data being used, and the use of data relating to young people, which could bias the model’s predictions of future criminality.

Why this matters: This appears to be a case of an ethics committee successfully intervening before the deployment of a potentially biased system. This is a hopeful sign given the significant potential for reinforcing systematic bias and erosion of privacy from law enforcement’s use of AI systems.

As the Guardian points out in its coverage, these systems are being deployed partly as a result of increasing pressure on the police to keep up standards despite government cuts; 12 police forces are considering, or already using, predictive systems of some kind. Durham Police’s HART system has already come under fire from civil society groups for ‘crude profiling’.

So it remains to be seen whether ethical standards can be maintained as predictive and surveillance systems become more powerful and the economic incentives grow stronger.

FCA asks for additional funds to upgrade regulation to cope with growing use of AI in finance

What happened: The Financial Conduct Authority’s (FCA) chief executive has said that the FCA needs additional funds to update regulations in light of the growing use of AI by financial firms.

The FCA’s 2019–2020 plan also highlights ‘Data and Data Ethics’ and the ‘Future of Regulation’ (including its interaction with technology) as two of its three long-term strategic challenges. It plans to review how data and machine learning will shape financial services, the potential implications for consumers, and whether its current approach is sufficient to cover ethical data use.

Why this matters: The plan indicates the adoption of ‘duty of care’ style regulation in financial services, in the same vein as the regime outlined in the Online Harms White Paper published a couple of weeks ago. So we might expect this kind of regulation to be the go-to when the government comes to regulate other areas of technology, including AI and its applications across sectors.

The FCA also plans to explore machine-executable regulations, so that regulatory constraints can be embedded into autonomous financial systems during development. Machine-executable regulation may be necessary to keep up with the increasing pace of financial trading, increase consistency in reporting and enforcement, and improve the efficiency of the regulator by reducing manpower requirements.

On the flip side, code leaves no room for interpretation and demands preemptive specificity: it may have unintended consequences, or fail to prevent bad behaviour, if the regulator’s coders miss edge cases or grey areas that would previously have been covered by ambiguity in the written regulation.
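To make this concrete, here is a minimal sketch of what a machine-executable rule might look like, in Python. The rule, its 5% threshold and every name in it are invented for illustration; nothing here reflects an actual FCA regulation or any real trading system.

    # Hypothetical sketch of a machine-executable rule. The constraint and
    # its threshold are invented for illustration, not drawn from any
    # actual FCA rulebook.
    from dataclasses import dataclass

    @dataclass
    class Order:
        trader_id: str
        instrument: str
        quantity: int

    # Invented constraint: a single order may not exceed 5% of the
    # instrument's average daily volume (ADV).
    MAX_ADV_FRACTION = 0.05

    def pre_trade_check(order: Order, average_daily_volume: int) -> bool:
        """Return True if the order complies with the encoded rule.

        Embedded in a trading system, a check like this rejects
        non-compliant orders before execution, rather than relying on
        after-the-fact reporting and human interpretation.
        """
        return order.quantity <= MAX_ADV_FRACTION * average_daily_volume

    # 60,000 shares against an ADV of 1,000,000 breaches the 5% limit,
    # so this order would be blocked at the point of submission.
    order = Order("T-123", "XYZ", quantity=60_000)
    assert pre_trade_check(order, average_daily_volume=1_000_000) is False

Even this toy example shows the brittleness described above: a legitimate exception, such as a pre-approved block trade, only gets through if someone anticipated and encoded it, where a human regulator could have accommodated it under the ambiguity of the written rule.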

The prominence of the City of London as a financial hub (even post-Brexit) means decisions by the FCA play a significant role in shaping international financial regulation. So any decisions it makes on machine-learning systems in financial services, or on the automation of regulatory regimes, will set a precedent for regimes in other countries.

Google shuts down UK-based AI & health ethics board

What happened: The Wall Street Journal (£) reports that Google has shut down its UK-based Independent Review Board on AI & Healthcare. The board’s abolition was reported at the time of Google’s takeover of its subsidiary DeepMind’s health division, but it was then unclear whether it would be scrapped or reformed to fit the new arrangements. Months later, with this new report, it seems to be deader than disco.

The WSJ alleges that problems arose after disputes between panel members and Google over access to information. Further, panel members were apparently unhappy about the lack of binding power for recommendations made by the board.

This follows the failure of Google’s international AI ethics board, which was announced and quickly folded only weeks ago, in part due to internal employee pressure against the appointment of an anti-LGBT conservative think-tanker and a drone company CEO.

Why this matters: As Yoshua Bengio, winner of the prestigious Turing Award for his work on deep learning, highlighted earlier this month, voluntary ethics boards are natural victims of market pressures:

Self-regulation is not going to work. Do you think that voluntary taxation works? It doesn’t. Companies that follow ethical guidelines would be disadvantaged with respect to the companies that do not.

If even Google, which previously prided itself on its unofficial motto: “don’t be evil”, is unable (or perhaps unwilling) to impose independent ethical oversight, then it seems apparent that government will have to step in and impose external ethical standards on the use of AI. Indeed the EU already seems to be heading down that path.

However, given the international nature of leading AI companies, the UK may well find itself unable to impose ethical oversight during the development phase of most AI systems operating within its borders, unless it invests heavily in its own AI capacity (and even then, those systems will realistically be less powerful than international competitors’). Instead, it will face a choice: accept systems as they come, and likely bend to economic pressures even when those systems don’t live up to the vision of leading in ethical AI.

Interesting Upcoming Events

Rules for Robots: Building Legal Infrastructure for Artificial Intelligence

18th June, UCL Faculty of Laws

Gillian Hadfield (Professor of Law and Professor of Strategic Management, University of Toronto & Senior Policy Advisor, OpenAI) will be lecturing on the responsibility facing legal professionals and institutions to reform quickly to address the mounting need for more effective regulatory tools to ensure safe, beneficial AI.

Monopolies of Intelligence: Questioning the Political Economy of AI

29th May, King’s College London Strand Campus

KCL Digital Humanities will be hosting a panel discussion on the political economy of AI, preceded by short presentations by Nick Srnicek (Lecturer, Digital Economy, KCL & author of Platform Capitalism) on how AI can facilitate the greater concentration of capital and power; Leif Weatherby (Co-founder, Digital Theory Lab) on how data has altered the structure of capital; & Mercedes Bunz (Senior Lecturer, Digital Society, KCL) on how AI could be more accessible, collaborative and distributed rather than resulting in monopolies of intelligence.

Thanks for reading. If you found it useful, share the subscription link with someone else who might find it useful too: https://mailchi.mp/e9c4303fce5b/aiwestminster

If you have suggestions, comments, thoughts or feelings, you can contact me at: aiwestminsternewsletter@gmail.com or @elliot_m_jones


Elliot Jones

Researcher at Demos; Views expressed here are entirely my own