Government sets out plans to alter the UK’s regulatory system for the Fourth Industrial Revolution

What happened: The government has published its white paper setting out plans to reform the UK’s regulatory system in the face of disruption from the increasing pace of technological change, particularly the ‘Fourth Industrial Revolution’, including applications of AI and robotics. The reforms focus particularly on promoting innovation, and this white paper will be followed by further papers describing how the government will modernise consumer and competition regulation.

They have identified six aims for regulation going forward:

· Being proactive in reforming regulation in response to technological innovation

· Ensuring regulation is sufficiently flexible and outcomes-focused to enable innovation to thrive

· Enabling greater experimentation, testing and trialling of innovations under regulatory supervision

· Supporting innovators to navigate the regulatory landscape and comply with regulation

· Building dialogue with society and industry on how technological innovation should be regulated

· Working internationally to reduce regulatory barriers to trade in innovative products and services

They propose to achieve these through:

· A new Regulatory Horizons Council, tasked with preparing a regular report on technological innovation with recommendations on broad priorities for regulatory reform and greater public engagement, to facilitate the rapid and safe introduction of emerging technologies, including applications of AI. The Council may prepare ‘deep dive’ reports into the regulatory implications of specific areas of innovation, though expert bodies are expected to handle most of the specifics.

· Piloting new requirements for departments to monitor and evaluate the impact of legislation on innovation, reviewed by the Regulatory Policy Committee.

· Promoting new ways of triggering post-implementation reviews of legislation, to ensure that legislation does not ‘lock in’ outdated technologies or approaches.

· Supporting the use of standards as a complement to outcome-focused legislation

· A review of the Regulators’ Pioneer Fund, which backs projects testing new technology in a safe environment in partnership with regulators, to determine whether to expand it to more regulators and local authorities.

· A digital Regulation Navigator to guide firms through the regulatory landscape; bring their ideas to market, for example, through regulatory sandboxes; and potentially allow firms to raise concerns about processes they believe are constraining innovation.

· A partnership with the World Economic Forum to shape global rules on innovation, including machine learning, autonomous vehicles and drones.

Why it matters: This white paper signals a pretty strong intention to move towards outcomes-based regulation. This may help in addressing the limited bandwidth of democracy we keep running up against in the modern era in a few ways.

First, by placing the burden of achieving the goal on businesses, it changes the technical capacities required of government. For example, when regulating the large social media platforms, instead of needing significant in-house platform expertise to set specific controls for each quite different system, regulators just need to be able to audit and verify the data provided by the companies and investigate inconsistencies. This should make it easier for government to acquire the necessary technical talent.

Secondly, it provides more policy stability for the organisations being regulated, as goals are unlikely to change wildly, except possibly at critical junctures like a radical change in political control. It also means that companies can adjust to meet regulation as soon as they change their technology, rather than waiting for regulators to catch up, making compliance easier and allowing them to keep regulated goals in mind when designing new parts of their systems, rather than acting defensively in anticipation of a particular feature being squashed by the regulator. Jack Clark, at OpenAI, and Gillian Hadfield, at the University of Toronto, propose a similar approach to tackling AI safety through global markets for AI regulation, in which companies buy regulatory services from private regulators.

Thirdly, as machine learning, particularly reinforcement learning, is deployed more widely across government, government itself will be better suited to an outcome-oriented approach internally as well as externally. Politicians can spend more time ironing out questions of goals and ideology, which they are better suited to, while allowing AI systems to translate those goals into the most effective actions to achieve them, complementing attempts to implement more ‘agile’ working across government.

Government announces next stage of Quantum Computing funding

What happened: The government announced a further £153m of funding for the National Quantum Technologies Programme, which began in 2013, to enable the commercialisation of quantum technologies developed in previous waves of funding. This is on top of the up to £235m promised in the 2018 Autumn Budget to establish a National Quantum Computing Centre, intended to develop a universal quantum computer.

Why it matters: Quantum technologies are those which exploit the ability of subatomic particles to exist in multiple states at once. IBM and WIRED have a pretty accessible introduction to quantum computing, but essentially it should allow certain problems to be solved much faster than on conventional computers. Quantum computers are anticipated to be used alongside conventional computers, rather than replacing them.

The government has already funded prototype technologies that exploit quantum effects, including clocks, communications systems and a gravity sensor for mapping beneath the ground, e.g. to find pipes before building work. The government views quantum technology as an important part of the Industrial Strategy Grand Challenges, including AI and Data. In a speech, Science Minister Chris Skidmore stated that “quantum combined with AI and robotics can revolutionise autonomous vehicles”.

There certainly seems to be a fair amount of hype around the implications of quantum computing for AI (and, to some extent, vice versa). However, Google and IBM seem to be investing in quantum computing at least in part because they believe it has the potential to enable machine learning, with Google suggesting that quantum computing could allow machine learning to model complex patterns in physical systems that conventional computers cannot handle in any reasonable amount of time.

Thus, one plausible avenue by which quantum computing might increase the impact of AI is through accelerating DeepMind’s work applying machine learning to scientific discovery, e.g. by allowing them to better model proteins and chemicals, which could lead to greater drug discovery and a better understanding of drugs’ effects before testing. Equally, it could be applied to climate modelling or fluid dynamics — understanding exactly what systems quantum computing can model will give us an indication of what its impact might eventually be.

Still, developing the UK’s quantum computing capacity may be an important part of staying competitive in the research and development of machine learning systems, especially as we begin to run up against the limits of the available conventional hardware, both domestically and in the cloud. If you have any further insights or recommended reading on this topic and untangling hype from reality, do let me know.

UKRI partners with Canadian federal agencies to announce Canada-UK Artificial Intelligence Initiative

What happened: UKRI has partnered with three Canadian federal research funding agencies to launch the Canada-UK AI Initiative. This will provide £8.2m across approximately 10 projects, requiring joint leadership from academics in both the UK and Canada. The projects will need to be interdisciplinary and the call intends to support ‘responsible’ AI, though it doesn’t clarify what it means by that.

Why it matters: This announcement is the latest in a series of collaborations between the UK and Canada on AI including:

· The UK-Canada AI Innovation Challenge, focused on improving aircraft performance, launched in September 2018

· The 2018 Canada-UK Colloquium hosted by the Canada-UK Council focusing on Artificial Intelligence & Society

· The UK/Canada AI week, which included ElementAI, one of Canada’s most successful AI start-ups, opening an office in the UK, matching DeepMind’s existing operations in Edmonton and Montreal.

· Ongoing joint AI & Society workshops between CIFAR, UKRI and France’s CNRS

There are clearly strong links developing at governmental, business and academic levels, and Canada seems to be emerging as Britain’s most significant strategic partner on AI at a governmental level, as the UK distances itself from its European neighbours.

The Surveillance Camera Commissioner criticises government for lack of legislation to control facial recognition

What happened: The Surveillance Camera Commissioner, according to The Times (£), said it was unacceptable that no law had been introduced to control how intrusive technologies such as facial recognition were used. Speaking at the Ifsec conference, the Commissioner revealed that he had intervened to stop several police forces from using the technology “disproportionately”.

‘Overt’ surveillance systems, such as police cameras with facial recognition or automatic number plate recognition, do not require approval from a senior police officer or judge, unlike covert surveillance.

Why it matters: With the Commissioner’s Surveillance Camera Day this Thursday, 20th of June, the debate around the use of facial recognition and other automated biometric and recognition systems by law enforcement shows no sign of going away. It also seems to put the Commissioner somewhat at odds with the Minister for Policing, Nick Hurd, who, answering questions in the House of Commons last week on facial recognition, stated that he believes there is a legal framework for the use of the technology. The Minister did, though, recognise the controversy and lack of trust around the technology, stating there is urgent work to review the regulatory framework, including new oversight and advisory boards.

With Darren Jones of the Science and Technology Committee having brought a debate on the facial recognition and biometrics strategy (having previously indicated he would like an inquiry into facial recognition), and the Minister receiving multiple questions a week specifically on facial recognition, we may eventually see some legislative action, especially if the courts rule against South Wales Police.

The Defence and Security Accelerator launches competition for projects that enable the use of machine learning and autonomous systems within warships

What happened: The Defence and Security Accelerator (DASA) will launch the ‘Intelligent Ship — The Next Generation’ competition to fund projects that enable and improve the use of autonomous systems and machine learning within future warships. £1m is available to fund phase 1 projects, with an additional £3m potentially available to fund follow-on phases. The key challenges they seek to address are:

· High-level mission planning and decision tools

· Information fusion

· Sensor management

· Novel human-machine interaction in a naval environment

They also want entrants to consider how to keep the human engaged in the processing and fully cognisant of what is going on. Yet at the same time they emphasise system integration to enable more autonomous functionality with limited human interaction.

Where it fits: The competition document mentions related workstreams, as this competition is just one part of the Navy’s move towards automated warfare. For example, there is the Maritime Autonomous Platform Exploitation (MAPLE) project, which is developing a system to allow a single control station on a warship to direct semi-autonomous maritime unmanned vehicles (UxVs) integrated with the rest of the weapons systems. We may end up with submarine carriers to complement aircraft carriers (or even have carriers pull double duty).

An interesting example is the Open Architecture Combat System (OACS), which seems to be a move towards more open and interoperable standards within combat systems, allowing technology to be developed to be ‘plug and play’ and mirroring moves towards open standards and interoperable systems across civilian parts of government, like NHSX.

Interesting Events

The Technology trap: Capital, labour and power in the age of automation

24th June, Resolution Foundation, Westminster

Carl Frey, co-director of the Oxford Martin Programme on Technology and Employment at the University of Oxford, and Diane Coyle, Bennett Professor of Public Policy at the University of Cambridge, will be discussing Frey’s new book on how technological innovations can boost living standards, disrupt labour markets, shift power balances both between new and established firms, and between capital and labour.

Unlocking the potential for AI to help tackle the climate crisis

2nd July, WeWork, 2 Southbank Place

A workshop hosted by ElementAI to scope how and where AI technology is being applied to the climate crisis, identify concrete opportunities for short and medium term impact, and scale up cross-sector collaboration. The workshop will focus on measuring carbon emissions and air pollution, land use, and urban development and city design.

--

Elliot Jones

Researcher at Demos; Views expressed here are entirely my own