Candidates in the Liberal Democrat and Conservative leadership races address automation in their bids

Liberal Democrats: Jo Swinson, one of the two candidates for leader of the Liberal Democrats, has made “Harnessing the technological revolution for Britain’s future” one of the three pillars of her potential leadership. In this she follows the outgoing leader Vince Cable, who made preparing Britain for advances in automation and artificial intelligence one of his three national priorities near the start of his leadership. In contrast, her opponent Ed Davey made no mention of AI or automation in his launch speech (which, to his credit, focused instead on combatting the far right and tackling climate breakdown).

She argues that

  • Greater use of automation and robotics could lead to a shorter working week
  • The benefits from automation need to be felt by people, rather than just translating into increased profit margins for corporates. She highlights retraining and upskilling as one way of doing this.
  • We need to clarify our digital rights and responsibilities in the face of AI

In this, her solutions sound closer to ideas espoused by more left-wing organisations, like Autonomy’s plan for a shorter working week or the newly launched Commonwealth’s idea of a Digital Commons. It’s clear Swinson takes these issues seriously. She has been talking about the effects of automation since at least November 2017, and the vision she has outlined in her leadership pitch flows naturally from the focus of the Technology and Artificial Intelligence Commission, which she set up last year and which is due to report back by the Liberal Democrat Conference in mid-September.

With the Liberal Democrats performing the best they have in a decade in Westminster polls after their success in the local and European elections, a new leader putting the issues of automation and data ethics at the heart of her policy could have significant influence over how the UK approaches AI. See here for more on the Liberal Democrats’ approach to AI.

Conservative Party: On BBC Question Time, Conservative Party leadership candidate Rory Stewart endorsed a universal training income to enable mid-career retraining to ameliorate the effect that automation and the deployment of robotics will have on the labour market.

Matt Hancock, the Health Secretary and the only other candidate with a track record suggesting he will seriously prioritise automation and digital technology, has pledged to raise the amount the UK spends on research and development to 3% of GDP by 2025, bringing forward and substantially raising the government’s current target of 2.4% by 2027. With artificial intelligence and the future of mobility as two of the government’s current grand challenges, there’s no doubt that reaching this target would mean significant investment in AI and supporting technologies.
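
For a rough sense of scale, here is a back-of-envelope comparison of the two targets. The GDP figure (roughly £2.1 trillion, around the 2018 level, held flat) is my own simplifying assumption for illustration, not part of either pledge.

```python
# Back-of-envelope comparison of the two R&D spending targets.
# Assumes UK GDP of roughly £2.1 trillion (around the 2018 level), held flat
# for simplicity -- an illustrative assumption, not an official projection.
GDP_GBP = 2.1e12

current_target = 0.024 * GDP_GBP   # 2.4% of GDP (existing target, by 2027)
hancock_target = 0.030 * GDP_GBP   # 3.0% of GDP (Hancock's pledge, by 2025)

print(f"2.4% of GDP: £{current_target / 1e9:.0f}bn per year")
print(f"3.0% of GDP: £{hancock_target / 1e9:.0f}bn per year")
print(f"Difference:  £{(hancock_target - current_target) / 1e9:.0f}bn per year extra")
```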

However, Stewart is well behind many of the other candidates (the favourite of only 1% of Conservative voters in a recent poll) and has pledged not to serve in the government of a Prime Minister in favour of no deal, such as the favourite Boris Johnson. Hancock is polling similarly poorly. So the effects of automation are likely to remain a second-tier issue in the political executive, another victim of Brexit’s effect on domestic policy.

See also — David Cameron appointed to chair advisory board of Afiniti, a US-based firm using machine learning to automatically pair call centre staff with customers based on behavioural profiling.

Information Commissioner’s Office calls for code to regulate police use of facial recognition and Greater London Authority’s policing ethics panel sets out future framework

The ICO: South Wales Police was taken to court by Ed Bridges over claims that it violated his privacy and data protection rights by using automated facial recognition, in the first legal challenge to police use of this technology.

During the case, a barrister for the ICO told the court that the current guidelines around automated facial recognition were “ad hoc” and that a clear code was needed. Such a legal framework should address what constitutes a watchlist and in what circumstances the technology may be deployed. The ICO also raised questions about the training operators should have, how to ensure the technology is not hacked, and whether people can refuse to be scanned.

Greater London Authority: Just days after the case and the ICO’s call, the Greater London Authority’s independent policing ethics panel set out new guidelines on how facial recognition technology should be used by the Met Police. The panel recommends that live facial recognition software should only be deployed if the five conditions below can be met, and that the Met conduct no further trials until it has fully reviewed the results of the independent evaluations and is confident it can meet those conditions:

1. The benefit to public safety must be great enough to outweigh any potential public distrust in facial recognition technology

2. There is evidence it will not generate gender or racial bias in policing operations

3. Each deployment must be assessed and authorised to ensure it is both necessary and proportionate for a specific policing purpose

4. Operators are trained to understand the risks of use and understand they are accountable

5. Both the Met and the Mayor’s Office for Policing and Crime develop strict guidelines to ensure that deployments balance the benefits of this technology with the potential intrusion on the public

Why this matters: London isn’t going as far or as fast as San Francisco, which pre-emptively prohibited all use of facial recognition by public agencies. However, if the ethics panel’s recommendations are fully implemented, this will be a pretty significant restriction on police use of facial recognition.

Given that the Met Police is by far the largest force in the country, its adoption of these conditions is likely to set a strong informal standard for what’s expected of police forces across the country. Further, given the increasing public awareness of, and therefore political pressure around, the use of facial recognition, and the legal questions raised by the aforementioned case, if these conditions prove sufficient to retain public trust then they may well form the basis of any forthcoming national guidelines.

Law Society Commission examining the use of algorithms in the justice system finds a lack of standards, best practice, openness or transparency

What happened: The Law Society’s Public Policy Technology and Law Commission report on algorithms in the justice system has found that, at the most basic level, there is a lack of explicit standards, best practice, and openness or transparency about the use of algorithmic systems in criminal justice across England and Wales. They find that in-house analytical capacity to analyse, oversee and maintain these systems is generally lacking.

Heavily individualised legal safeguards proposed for algorithmic systems in commercial domains, such as individual explanation rights, are unlikely to be very helpful in criminal justice, where imbalances of power can be extreme and are exacerbated by dwindling levels of legal aid. Further, some systems and databases operating today, such as facial recognition in policing or some uses of mobile device extraction, lack a clear and explicit lawful basis.

They recommend that:

  • A National Register of Algorithmic Systems should be created as a crucial initial scaffold for further openness, cross-sector learning and scrutiny (a rough sketch of what a register entry might record follows this list).
  • In-house capacity is built and retained for overseeing and steering these systems.
  • The legal basis for the use of these technologies must be urgently examined, publicly clarified and rectified if necessary.
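
To make that first recommendation a little more concrete, here is a rough sketch of the kind of fields a register entry might record. The schema is entirely my own illustration, not something the Commission has specified.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmicSystemEntry:
    """Illustrative entry for a National Register of Algorithmic Systems.

    All field names are hypothetical: a sketch of the kind of information
    that would support openness and scrutiny, not a proposed standard.
    """
    system_name: str
    operating_body: str                 # e.g. a police force or court service
    purpose: str                        # what decisions the system informs
    vendor: str                         # supplier, or "in-house"
    lawful_basis: str                   # the claimed legal basis for use
    data_sources: List[str] = field(default_factory=list)
    independent_evaluations: List[str] = field(default_factory=list)  # links or citations
    oversight_contact: str = ""         # in-house team responsible for the system

# Hypothetical example entry
entry = AlgorithmicSystemEntry(
    system_name="Example live facial recognition trial",
    operating_body="Example police force",
    purpose="Identifying people on a watchlist at public events",
    vendor="Example vendor",
    lawful_basis="To be clarified",
    data_sources=["custody images"],
)
```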

Information Commissioner’s Office and The Alan Turing Institute publish interim report on public and industry views on explaining AI decision-making

What happened: The ICO and The Turing Institute have published an interim report from their joint Project ExplAIn. The project is intended to produce practical guidance for organisations to assist them in explaining AI decisions to the individuals affected. So far, they have conducted citizens’ juries and industry roundtables to gather views from a range of stakeholders, the results of which form the basis for the interim report.

They identified three key themes:

1. The importance of context in explaining AI decisions. The importance of explanations to individuals, and their reasons for wanting them, depended significantly on what the decision was about, e.g. justice requiring a much greater level of explanation than healthcare.

2. The need for education and awareness around AI.

3. Technical issues were not seen as a barrier to explainability by industry representatives. However, cost, commercial sensitivities such as intellectual property, gaming of the system, and the lack of a standard approach to establishing internal accountability are more difficult challenges for industry.

Why this matters: Citizen engagement in the development and governance of AI systems is important, and the ICO appears to have deployed citizens’ juries very effectively to gauge the informed views of a wide cross-section of the public.

The point that technical issues aren’t the barrier seems particularly interesting to me. The report says that: “Some organisations used a perceived lack of technical feasibility as an excuse for not implementing explainable AI decision-systems. Participants thought that, in reality, cost and resource were more likely the overriding factors.”
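
As a toy illustration of why the purely technical side is often the easy part, here is a minimal sketch of one common approach: reporting how much each input contributed to a specific decision made by a simple linear scoring model. The model, features, weights and threshold are all invented for illustration; this is not the method the ICO or the Turing Institute are proposing.

```python
# Toy sketch: a per-decision explanation for a simple linear scoring model.
# All names and numbers are invented; real deployed systems are more complex,
# but the principle -- reporting how much each input contributed to a specific
# decision -- is the same.

features = {"missed_payments": 2, "income_thousands": 28, "years_at_address": 1}
weights = {"missed_payments": -1.5, "income_thousands": 0.05, "years_at_address": 0.3}
threshold = 0.0  # score above the threshold -> application approved

contributions = {name: weights[name] * value for name, value in features.items()}
score = sum(contributions.values())
decision = "approved" if score > threshold else "declined"

print(f"Decision: {decision} (score {score:.2f})")
for name, contribution in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {name}: {contribution:+.2f}")
```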

This suggests that if legislation making explainability compulsory were implemented, companies could deliver. However, if cost and resources are the limiting factors, then increased compliance and legal burdens could actually empower the existing large technology companies whose influence is rightly being questioned. Is oligopoly the price of meaningful transparency?

Are there policy levers to allow competition without sacrificing standards? Is it possible to create an easily transferable and deployable transparency overlay? I look forward to seeing what answers the ICO and Turing have in their final report, due out in the autumn.

UK signs up to OECD’s Principles on Artificial Intelligence

What happened: The UK, along with 41 other countries, has signed up to the OECD’s Principles on Artificial Intelligence, the first set of intergovernmental policy guidelines on AI. The OECD sets out five principles for the responsible stewardship of trustworthy AI:

  • AI should benefit people and planet
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity. They should include safeguards to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure people understand when they are engaging with AI and can challenge the outcomes of those systems.
  • AI systems must function in a robust, secure and safe way with risks continually assessed and managed.
  • AI developers and deployers should be held accountable for the proper functioning of AI systems in line with the above principles.

The OECD recommends governments:

  • Facilitate investment in R&D towards trustworthy AI.
  • Foster accessible AI ecosystems with digital infrastructure and mechanisms to share data and knowledge.
  • Create a policy environment to enable the deployment of trustworthy AI systems.
  • Provide training for the skills needed in an automated economy and ensure a just transition to that economy.
  • Co-operate across borders and sectors to share information, develop standards and work towards responsible stewardship of AI.

Why this matters: These principles and recommendations are not legally binding. However, they set a clear precedent for future international binding standards and treaties. They match fairly closely with the European Commission’s Ethics Guidelines for Trustworthy AI and have the endorsement of the United States, unlike previous international principles.

The UK has been leading on implementing a national AI strategy and explicitly making ethics part of it. However, it lacks the clout on its own to deal with the likely major players in AI. If these principles translate into national-level policies across even a plurality of the states that have endorsed them, they may have enough influence on the multinational corporations leading the development of AI to set an effective ethical and safety standard around the world.

Department of Health and Social Care bans NHS Trusts from signing deals giving a tech company exclusive access to patient data & interim NHS People Plan released

What happened: According to the Health Service Journal, the Department of Health and Social Care will tell hospital trusts they should not enter an exclusive commercial arrangement to share patient data.

The NHS Interim People Plan (p53) emphasises the need for the NHS to “attract the best technologists, informaticians and data scientists by making the NHS a destination employer for people with these skills.” With tight budgets, the NHS will struggle to compete with tech giants on salary and, unlike GCHQ, will not have quite the same allure of coolness and serving the national interest to attract talent either. Their plan is to:

  • Work to build new and innovative relationships with industry to share and develop scarce and specialist resource.
  • Undertake a technology skills audit to understand our current position and then explore and address the factors affecting recruitment and retention in the NHS.

Why it matters: The instruction not to enter into exclusive commercial arrangements fits into the wider strategy, led by the newly formed NHSX, of open standards and interoperable systems to prevent the NHS again being locked into systems from particular providers.

However, this still doesn’t address how the NHS will prevent the collective value of public health data being extracted by private companies when the models they train on that data are deployed outside the NHS. The People Plan also makes clear that, on its current trajectory, the NHS is not in a position to retain that value by developing a wide range of machine learning tools in-house.

Instead, as NHSX makes clear in its latest blog, the NHS will focus on “creating the platform for digital innovation and creating the standards that will allow that innovation to plug in safely. It means not competing against the market and resisting the urge to build or commission everything ourselves.” So it is likely to continue to depend on external private provision, despite its scale and the size of its datasets putting it in a position to become an AI player in its own right.

Civil Aviation Authority launches innovation sandbox, including commercial autonomous drones and automated air traffic control

What happened: The Civil Aviation Authority has launched an ‘Innovation Sandbox’. The sandbox will allow companies to discuss, explore, trial and test emerging concepts with the regulator before deployment. Announced participants include an Amazon delivery system using unmanned aerial vehicles and the air traffic control body NATS, which is working to implement new technology such as AI into traffic control towers.

Why it matters: Launching an innovation sandbox is an important part of the UK achieving more agile aerospace regulation, a key part of the upcoming Aviation 2050 strategy currently being consulted on. For example, as Jack Clark explores in the most recent Import AI, it is increasingly possible to train and test drones in a completely simulated environment.

For regulators, this makes it possible to assess the capabilities of drones, and to set guidelines accordingly, with very limited real-world trials. It would allow them to do so in a programmatic way and to set outcome-based regulation, e.g. a maximum failure rate across simulated trials, which allows regulation to keep pace with exponential developments in the technology and places more of the technical burden on the companies rather than on regulators, who often lack technical capacity.
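
Here is a minimal sketch of what such an outcome-based check could look like, assuming the regulator sets a maximum acceptable failure rate and a large batch of simulated flights is run against it. The threshold, trial count and per-flight failure model are all invented for illustration; a real assessment would run the actual control software in a proper flight simulator.

```python
import random

# Illustrative outcome-based check: a drone system passes if its failure rate
# across simulated trials stays under a regulator-set threshold. The numbers
# below are hypothetical.

MAX_FAILURE_RATE = 0.001   # hypothetical threshold: at most 1 failure per 1,000 flights
N_TRIALS = 100_000

def simulated_flight(failure_probability: float = 0.0004) -> bool:
    """Return True if this simulated flight fails (stand-in for a real simulator)."""
    return random.random() < failure_probability

failures = sum(simulated_flight() for _ in range(N_TRIALS))
failure_rate = failures / N_TRIALS

print(f"Observed failure rate: {failure_rate:.4%} over {N_TRIALS:,} simulated flights")
print("PASS" if failure_rate <= MAX_FAILURE_RATE else "FAIL")
```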

Miscellaneous Links

Interesting Upcoming Events

Drones, Swarming and the Future of Warfare

11th June, Committee Room 15, Palace of Westminster

David Hambling, Journalist and Author of ‘Swarm Troopers’, and Sebastian Brixey-Williams, Programme Director at BASIC, will outline the opportunities and risks of drone swarms and how they will affect global security; examine increasing autonomy in the future of warfare; and provide recommendations on the way forward.

AI Strategy Discussion Group

17th June, Rawthmells Coffeehouse, RSA House

I’m part of a regular AI policy discussion group and I thought I’d share the details of the next meeting if you want to come along.

We’ll be discussing AI timelines:

  • How should we measure progress in AI capabilities?
  • Who is best placed to predict when AI will reach various stages of development?
  • What’s our current best guess for when we will get AGI (AI of equal intelligence to a human in all domains)?
  • What’s the impact of different AI timelines on how we should focus our efforts in ensuring AI is used for the benefit of humanity?

Rather than all reading the same article, everyone is encouraged to do their own research into these questions. Some suggested sources include:

--

--

Elliot Jones

Researcher at Demos; Views expressed here are entirely my own