A view of the Elizabeth Tower, courtesy of UK Parliament/Jessica Taylor. Parliamentary copyright images are reproduced with the permission of Parliament.

UK Parliament wakes up to AI

Libby Kinsey
Project Juno AI


At Project Juno, we spend a lot of our time thinking about machine intelligence in terms of opportunity — what problems can it solve today and tomorrow; where can money be made? But of course we don’t operate in a vacuum, and this focus is regularly punctuated by wondering what government is doing about this or that… and finding it rather difficult to find out.

(Excerpt from Future Advocacy’s ‘An Intelligent Future?’ report)

In particular, shouldn’t we have an industrial policy to build on the AI expertise here (*)? Shouldn’t Parliament step in to regulate before there’s a wholesale transfer of power (via our data) to corporates operating outside of any effective jurisdiction (assuming that’s not closing the stable door after the horse has bolted)? Shouldn’t there be much more public discourse about related legal, social, and ethical matters?

So it was with great enthusiasm that I attended the first meeting of the prospective All-Party Parliamentary Artificial Intelligence Group on Monday evening, convened by Stephen Metcalfe (Conservative MP) and Lord Clement-Jones CBE (Liberal Democrat peer) to consider ‘the future and implications of Artificial Intelligence’. The audience (I’m told this APPG was unusually well attended) was packed with experts from all domains, and the discussion was kicked off by an excellent panel who, between them, covered research, corporate, ethical, practical, and economic perspectives, each speaking for five minutes.

I won’t attempt to reproduce the whole discussion here, only some highlights:

How do we currently talk about AI?

  • Public discourse has tended to polarise between AI as a ‘silver bullet’, solver-of-all-ills, and complete scepticism and fear. The reality currently sits somewhere in the middle, but this is a fast-moving, complex space that the average individual cannot hope to keep up with. We need authorities and public bodies to do this work for us, much as the HFEA (Human Fertilisation and Embryology Authority) does for human embryo research or the BSI’s kitemark does in certifying quality.

What are the UK’s strengths?

  • AI is of course a global concern, and attendees noted that other countries may have greater resources, a more relaxed attitude to ethics and data, or greater success at exploiting university IP.
  • In contrast, aside from the UK’s general research excellence, two specific competitive attributes emerged from the discussion. The first is the UK’s expertise across diverse AI techniques, whereas recent interest and investment have focused on a narrow subset (deep learning). In particular, data-driven approaches alone may not suffice for many decision-making and planning tasks, and symbolic AI is required: ‘[Machine learning] threw the baby out with the bathwater.’
  • Secondly, global regulation and standards will be required, and the UK is good at this: skilled at building consensus amongst stakeholders and at creating frameworks that are flexible and permit innovation. The BSI’s track record, and its recent work on robot ethics (BS 8611), were highlighted.

Standards and regulation

  • We need governments to take the lead on matters of trust and trustworthiness in AI. Standards and regulation should measure more than just the performance of the technology (although transparency via third-party benchmarking in realistic scenarios is required here); they should take into account purpose, social value, quality, and sustainability too.
  • More concretely, regulation needs to encompass this type of scenario (referring to the dynamic pricing used by airlines): ‘Data has changed the terms of business between corporates and customers. Are there limits to what we will permit? Is it OK to scrape social media data to find out that a relative is very sick, and to adjust the price to reflect the urgent desire to visit?’
  • As followers of debate around autonomous vehicles will know, regulation must also consider the Trolley Problem, and its innumerable variations, where a choice must be made between human lives. European Union regulations on algorithmic decision-making demand that such choices can be explained, which is a bit tricky when even philosophers have not yet agreed that there is a correct answer. It was noted that we won’t naturally have the same sympathy for hard decisions made by AI that we do for humans in analogous situations. Without agreed standards, innovation will grind to a halt in the litigation courts.
  • On purpose, a research paper published this month came to mind, in which researchers attempted to infer criminality from images of faces. It illustrates very nicely the points made about technology applications needing a purpose, solving well-defined problems, and having measures of success that include ethical dimensions, all of which seem to have been entirely absent from that ill-conceived research.

Data

  • The machine learning component of AI is fuelled by data, and much of that data has been, and will continue to be, personal. The contracts between corporates (and others) and individuals that permit use of this data are so numerous, verbose, and opaque that questions of consent and ownership have ceased to have meaning.
  • The problem is such that perhaps a complete rethink is required, and the idea of a charter on how data can be used was suggested. That is (I think) a framework that governs data use and is ‘opt-out’, rather than the data protection laws that we currently have. In this ‘sharing revolution’, the individual would have control of his/her data, with consent, transparency, fairness, and accountability built in.
  • We could also look to the field of medicine to learn from the work already done on consent and data sharing.

Jobs

  • AI will impact jobs in two ways. In the short term, there will be demand for workers with the STEM skills needed to design and implement AI systems. Given the rest of the discussion, it also became apparent to me that there will be a need for smart generalists: those who can bring critical thinking, diplomacy, communication, and other skills to the work of global regulation and public discourse.
  • In the longer term, whilst it is arguable whether AI will create more jobs than it destroys, it will certainly change the types of jobs available substantially. Education will be important, but in the context of universal basic income, under-employment, and so on, it is also critical to keep in mind ‘what it is to be human and live well’.
  • The pace of change will differ between industries: for instance, the automation of the professions (as AI takes on repetitive cognitive tasks) will leave fewer qualified professionals available for industries that still require them.

In conclusion, the session highlighted to me the extent to which government (via multiple agencies and institutions) is engaged in wrestling with the challenges and opportunities created by AI. I hope the group becomes a focal point for gathering evidence, increasing understanding, and coordinating and guiding activity.

Thank you to Justin Anderson (@jpeanderson) of Hypercat for extending me an invitation to the meeting.

(*) I look forward to hearing more about Prime Minister May’s new Industrial Strategy Challenge Fund to ‘help Britain capitalise on its strengths in cutting-edge research like AI and biotech’ (bit.ly/2ffB03u) announced on the same day as the APPG.

