AI.Westminster #5 (30th April, 2019)

Elliot Jones

A summary of AI developments and policy in the United Kingdom.

Welcome to AI.Westminster, a newsletter covering developments in AI policy and the wider ecosystem in the United Kingdom. Subscribe here.

Contents

- Oxford Technology & Elections Commission launched
- Intellectual Property Office focuses on the impact of AI on the global IP framework
- Surveillance Camera Day and a Westminster Hall debate on facial recognition
- UK Space Agency, NHS England and the European Space Agency fund diagnostic AI
- Interesting Upcoming Events

Oxford Technology & Elections Commission launched

What happened: The Oxford Internet Institute has launched a commission to find ways to safeguard democracy by analysing how social media, big data, AI and digital platforms are impacting elections and democratic participation across the world.

They aim to aggregate and promote policy solutions that protect the right to privacy, access to information, and participation in free and fair elections, and to set out what specific research projects and policy actions need to be undertaken.

The commissioners include Christina Blacklaws, President of The Law Society; Dame Helen Ghosh, former Permanent Secretary at the Home Office; and Paddy McGuinness, former UK Deputy National Security Adviser.

Why this matters: While claims of armies of bots spreading misinformation are currently somewhat overhyped, AI is going to be increasingly capable of micro-targeting auto-generated content, which state and non-state actors can use to interfere with elections and with democracy more generally.
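For a rough sense of how accessible machine-generated text already is, here is a minimal sketch of sampling from the small GPT-2 model discussed below. It assumes today’s Hugging Face transformers library and its “gpt2” checkpoint name, neither of which comes from OpenAI’s release, so treat it as illustrative rather than a reproduction of their setup.

```python
# A minimal sketch of machine text generation, assuming the Hugging Face
# `transformers` library and the small public "gpt2" checkpoint.
# Illustrative only: not a reproduction of OpenAI's own setup.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # the small released model

prompt = "The local council election was decided by"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) produces the varied, fluent-looking
# continuations that drive the misinformation concerns discussed here.
output = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A few lines like these can churn out endless plausible-sounding variations on a prompt, which is exactly what makes micro-targeted, auto-generated content cheap to produce at scale.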

The decision by OpenAI to restrict the full release of their text-generation model, GPT-2, over concerns it could be used to spread misinformation is a welcome sign of corporate self-restraint, but we can’t rely solely on self-regulation. However, when it comes to state regulation, there is a fine balance to be struck between ensuring free and fair elections, and unnecessarily restricting people’s freedom of expression or banning potentially positive innovations. After all, personalisation isn’t all bad; if voters only care about specific issues, receiving more targeted adverts could help them make better decisions.

The commission clearly recognises these difficulties, and I’m optimistic about what will come of it.

Intellectual Property Office focuses on the impact of AI on the global IP framework

What happened: The Intellectual Property Office has published its 2019–20 plan, which includes the impact of AI on the global IP framework as one of its focuses for the year.

They will explore how AI will affect IP rights both conceptually and in operational enforcement. They will do this through a conference with the World Intellectual Property Organization, seminars with universities and industry across the UK, and using the Regulators’ Pioneer Fund to explore how AI can be used in the IP filing process.

They plan to publish a report setting out their understanding of AI’s impact on the IP framework, and their key questions and actions, by March 2020.

Why this matters: AI systems can already generate new works that are, in theory, protectable by copyright, such as new artwork or music; just this week, OpenAI released MuseNet. Under current UK copyright law, the legal author of a computer-generated literary, dramatic, musical or artistic work is the person “by whom the arrangements necessary for the creation of the work are undertaken.”

But who is this? If all the data and models are produced by a single individual or company, it’s pretty clear; but when different groups have generated the data, designed the model and trained the model, things become increasingly difficult to untangle. And when systems can start to generate their own data and design their own architectures, at what point was a living human necessary for the creation of the work? This may make it difficult to monetise the output of sufficiently automated generation under current rules, whether through complex distribution of ownership or outright uncertainty over whether the work can be attributed to any human author at all.

However, any new IP regulation needs to be mindful of the potential to entrench inequality. If IP rights for large-scale auto-generated content belong to the developers of the model, and they can generate vast amounts of monetisable content compared to traditional content producers, this risks exacerbating concentrations of wealth and capital in the creative fields. And if copyright and patents were originally intended as incentives and rewards for the effort and skill required to produce a work, but automated systems can produce vast amounts of novel content at relatively low unit cost, perhaps time-limited protections shouldn’t apply at all.

Surveillance Camera Day and a Westminster Hall debate on facial recognition

What happened: The Surveillance Camera Commissioner’s Office has announced Surveillance Camera Day (20th June, for your diaries). This is intended to open up a conversation about how surveillance cameras, including automatic facial recognition, are actually used in practice, why they’re used and who is using them.

They will be asking surveillance camera control centres to allow the public to see first-hand how they operate, and requesting that organisations publish a fact-sheet giving an overview of their surveillance system, including what it was designed for and the number of cameras.

Darren Jones MP has argued that there is not enough clarity in the law as to how facial recognition is used by the state and shared between government departments, for example in policing databases. He is holding a Westminster Hall debate on facial recognition at 2:30pm on the 1st of May, and you can tell him what you think about it here.

Why this matters: These come just as Heathrow announces it will spend £50m deploying facial recognition software throughout the airport, giving Heathrow the world’s largest deployment of biometric products. The UK is already infamous for its density of CCTV cameras, so facial recognition and its application to surveillance could pervade British public life at pace if given the chance.

While some have argued facial recognition technology will be an essential part of ensuring cost-effective policing and national security, others, like Luke Stark, a Microsoft researcher and affiliate of Harvard’s Berkman Klein Center for Internet & Society, have described facial recognition as ‘the Plutonium of AI’.

A national debate on what uses of facial recognition software are acceptable, if any, especially in state surveillance, is crucial to have now, while the technology is still developing, rather than waiting until it becomes embedded in our public life by default.

UK Space Agency, NHS England and the European Space Agency fund diagnostic AI

What happened: Researchers at Odin Vision and UCL have been given a £1m grant by the UK Space Agency (in partnership with NHS England and the European Space Agency) to develop a computer vision machine learning tool that can diagnose bowel cancer from a live colonoscopy feed.

What makes this system different is that it will be deployed via satellite communication traditionally used for space missions. This should allow the AI diagnostic tool to be deployed anywhere on Earth with the same speed and reliability but without onsite computational hardware or data infrastructure.

Why this matters: Developing powerful machine learning systems is one thing, but for them to have a transformational impact, they need to be robustly deployed against real-world problems. One part of the problem is obtaining good datasets for specific tasks and ensuring the model can handle the imperfections and unpredictability of the real world.

But another big challenge is being able to access the AI system at all. While our mobile devices are increasingly capable, and projects like MobileNet are enabling efficient deployment on those devices, they won’t always be up to the task at hand. Investing in reliable and fast data infrastructure, whether fibre, 5G or, in this case, satellites, will be necessary to deploy AI successfully in places and contexts where onsite computational power is impractical and continuous service is essential.
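As a rough illustration of the on-device end of that trade-off, here is a minimal sketch of running a lightweight MobileNetV2 classifier with the weights bundled with TensorFlow/Keras. Everything in it is an illustrative assumption; in particular, this is not the Odin Vision/UCL diagnostic system, whose whole point is offsite compute delivered over satellite.

```python
# A minimal sketch of lightweight, on-device-style inference using the
# MobileNetV2 weights bundled with TensorFlow/Keras. Illustrative only:
# not the Odin Vision/UCL colonoscopy system described above.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input,
)

model = MobileNetV2(weights="imagenet")  # ~14MB of weights: small enough for phones

# Stand-in for a captured frame; a real pipeline would decode live video input.
frame = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype("float32")

predictions = model.predict(preprocess_input(frame))
for _, label, score in decode_predictions(predictions, top=3)[0]:
    print(f"{label}: {score:.3f}")
```

Models this small can run on a phone, but a diagnostic-grade system may need far more compute than the bedside hardware can offer, which is where fast, reliable links to offsite infrastructure come in.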

Interesting Upcoming Events

WIRED Pulse: AI at the Barbican

15th June, Barbican Hall

A day of keynote speeches featuring Terah Lyons, Founding Executive Director of the Partnership on AI and previously Policy Advisor to the U.S. Chief Technology Officer during the Obama administration, and Sandra Wachter, Research Fellow in data ethics, AI and robotics at the Oxford Internet Institute and Fellow of the Alan Turing Institute, among other AI policy and industry experts.

Thanks for reading. If you found it useful, share the subscription link with someone else who might find it useful too: https://mailchi.mp/e9c4303fce5b/aiwestminster

If you have suggestions, comments, thoughts or feelings, you can contact me at: aiwestminsternewsletter@gmail.com or @elliot_m_jones


Elliot Jones

Researcher at Demos; Views expressed here are entirely my own