Doteveryone report suggests UK AI developers feel the products they work on might harm society, but are sceptical of regulatory solutions

What happened: Doteveryone has published People, Power and Technology: The Tech Workers’ View, a study examining the attitudes of UK tech workers. As part of this, they surveyed 1,010 tech workers, 192 of whom self-identified as working in AI. So what do people working in AI think?

  • 59% have experience of working on products they felt might be harmful to society, vs 28% of all UK tech workers. 27% of those who experienced such a situation quit their jobs as a result, compared to 18% of all tech workers. Taken together (see the worked arithmetic below this list), this suggests around 16% of all people in AI have left their company over such issues, compared to 5% of all tech workers.
  • 81% would like more opportunities to assess the potential impacts of their technology, vs 63% of all tech workers. However, 23% see companies’ focus on revenue and growth as the greatest barrier to considering the consequences of technologies, vs 15% of all tech workers, and 15% cite a lack of interest, their own or their colleagues’, as the greatest barrier, vs 8% of all tech workers.
  • They consider company policies, rather than regulation, to be the most effective mechanism for ensuring the consequences of technologies are taken into account; the rest of the tech workforce believes regulation is.
  • 36% think there’s too much regulation and 31% think there’s too little, compared to 14% and 45% respectively among UK tech workers as a whole. Still, 36% do believe regulation has the potential to ensure tech workers consider the consequences of their actions.
  • Only 3% believed voluntary ethical guidelines would be the most effective solution for ensuring tech workers consider the consequences of their work on society, though 25% did believe they had some potential.
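For the curious, the 16% and 5% whole-workforce figures are simply the product of the two survey proportions (my own back-of-the-envelope reconstruction from the percentages above, not a calculation taken from the report):

\[
0.27 \times 0.59 \approx 0.16, \qquad 0.18 \times 0.28 \approx 0.05
\]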

Why this matters: A significant amount of time, energy and intellect has been invested in voluntary guidelines. Leaving aside the competitive dynamics and the profit-driven nature of business, which are in tension with the unilateral adoption of voluntary ethics agreements, if the employees themselves don’t have much faith in these guidelines having an impact, their failure is likely to become a self-fulfilling prophecy.

One suggested reason for this is that the workers themselves are not involved in creating the guidelines and so don’t feel invested in them.

In contrast, while there is clearly some resistance to greater regulation of AI, the report suggests a plurality of the public, and of tech workers as a whole, are supportive; even those who think there’s too much regulation still recognise its efficacy in forcing companies to consider the societal implications of what they are developing (in part because it can help solve the coordination problem in a competitive arena).

The Royal Society publishes report analysing the data science job market

What it says: The Royal Society has published Dynamics of data science skills, which finds that demand (measured by UK job listings) for workers with specialist data skills, such as data scientists and data engineers, is up 231% over five years, compared to 36% for all workers over the same period.

The report recommends that:

  • School curriculums need to allow students to study a wide range of subjects up to age 18, and to focus more on communication, problem solving, and teamwork.
  • Training needs to change. Professional-level courses should be flexible and responsive. Training may need to be industry-approved and accredited, and coordination is needed between industry and universities. More informal mechanisms such as online material are also needed to allow people to (re)train through self-learning.
  • Universities need to train and retain the staff required to meet the teaching demands this training creates. Funding bodies like UKRI could support joint appointments for the UK’s most talented researchers to work in both industry and academia, to reduce the brain drain.
  • Further, the public and university sectors should invest more in computing power, as the availability of data and computing power is a major draw for talent. Industry currently dominates this domain, offering both at considerable scale.

Why it matters: First, the statistics on demand for data skills highlighted by the report support the decisions and investment made in the AI Sector Deal this time last year, particularly the £406 million allocated for maths, digital, and technical education, including funding to train 8,000 computer science teachers and to create a National Centre for Computing.

Secondly, retaining academics in universities seems important, both to ensure the next generation of computer and data scientists get the training they need, and so that more basic, foundational research takes place rather than the short- to medium-term applications private companies are incentivised to prioritise. Yet, in a March survey of AI researchers and university administrators by Times Higher Education and Microsoft, 89% said it was “difficult” or “very difficult” to hire and retain AI experts. The recommendations in this report, particularly on addressing the hardware imbalance between the public and private sectors, seem like a solid way of increasing retention.

Work and Pensions Secretary lays out vision for the future of the labour market in the face of automation

What happened: In a speech to the Recruitment and Employment Confederation, the Work and Pensions Secretary Amber Rudd laid out her vision for the future of the labour market. Alongside globalisation and ‘uberisation’, she spoke about automation and artificial intelligence disrupting the demand for labour.

In particular, she highlighted that “automation is driving the decline of banal and repetitive tasks. So the jobs of the future are increasingly likely to be those that need human sensibilities, with personal relationships, qualitative judgement and creativity coming to the fore.”

Why it matters: While there seems to be some consensus that AI will automate highly routine tasks, that does not mean we will be left free of banal tasks, or free to exercise our creativity.

As Daniel Susskind points out, much of the work that does not appear easily automatable, such as care work, is badly paid and of poor quality, and demand for it is going to balloon as society ages. Further, a lot of work that humans find interesting can be automated: AI is already well on the way to creating music and art, writing newspaper articles, and designing chairs. For now, this is augmenting rather than replacing humans, but the decomposition of work into more banal tasks and the replacement of human creatives is not an unrealistic possibility. (I would highly recommend his book, The Future of the Professions, if you’re interested in knowing more.)

And as Hettie O’Brien points out in this week’s New Statesman, even if automation destroys more meaningful work than it creates, how the rewards from increased productivity are distributed ultimately depends as much on the existing distribution of capital ownership, and on the political decisions taken on redistribution, as it does on the form work takes in the future.

Bureau of Investigative Journalism finds adoption of AI in government driven by austerity, and a lack of transparency in its procurement and operation

Key findings: The Bureau of Investigative Journalism has published a report, ‘Government Data Systems: The Bureau Investigates’, on how data and AI are currently used by the UK government. They found that:

  • Development of algorithmic and data-driven systems is frequently predicated on austerity. The adoption of such systems, the combining of legacy databases, and “digital by default” services are major driving forces in current policy.
  • Procurement of data infrastructure and automated systems is still dominated by traditional big names, but many smaller companies are now entering the game. The Home Office’s plans for new data-driven systems have involved 40 different companies in the last two years. However, many authorities were unwilling or unable to specify how and why they purchased these services, or what their precise specifications were.
  • There seems to be a failure of transparency at a time when the state is driving a data-driven revolution predicated on saving money through digital transformation programmes and legacy system overhauls. Public authorities aren’t keeping transparent, accessible records of the services they purchase as obligated under the Public Contracts Regulations 2015. The current transparency datasets don’t offer sufficient detail to understand purchases, particularly from large companies which offer a multiplicity of services.

Why it matters: In line with a recent report from the Data Justice Lab, this report highlights the key role political decisions around austerity have played in driving the adoption of these automated decision-making systems for the sake of efficiency. It also notes that the onus is therefore placed on staff to explain why they deviated from what these systems recommend, and that a lack of resources means there is often no proper accountability for bias in these systems, whether from poor data or from optimising for the wrong goals.

Staff being overstretched and unable to ensure proper internal accountability makes the need for transparency all the more important, and its apparent absence especially concerning. If autonomous systems are making important decisions, such as which at-risk families need early intervention, then it needs to be clear how those systems make those decisions, who is responsible for providing them, and how those decisions can be contested.

Finally, interviews in the report highlight that sellers, buyers and the media all sometimes exaggerate the sophistication of systems and package traditional statistical modelling as ‘AI’. AI is a pretty nebulous concept and, as this newsletter demonstrates, it often encapsulates a wide range of areas, from autonomous drones to hospitality chatbots. While some of this stems from the omni-use nature of the underlying technology, it is a good reminder (for me especially) to be clear and specific about what we mean when we use ‘AI’.

Parties publish their European Election manifestos, with the Liberal Democrats, Greens and Change UK acknowledging automation and autonomous weapons

Ahead of the European Elections on the 23rd of May, parties have begun to publish their manifestos. While these elections are seen as somewhat irrelevant policy-wise, since the MEPs elected are unlikely to sit for very long, if at all, it’s still a chance for parties to signal their priorities in a concrete way. The Liberal Democrats, the Greens and Change UK have addressed AI or automation in their manifestos, so I’ll be looking at them this week.

Neither the Labour Party, UKIP nor Plaid Cymru addressed AI or its applications in their European manifestos. The Conservative Party, the Brexit Party and the Scottish National Party have yet to publish theirs, and will be covered in future issues. If you’re interested in where parties appear to stand on AI and how they’ve been talking about it, I’ve written up a longer summary here.

Liberal Democrats

What the manifesto says: The Liberal Democrats’ manifesto section on innovation states that they “will encourage competition among companies in the digital space, and support the decisive use of European and UK competition powers to prevent the tech giants from exploiting consumers and to ensure innovation through competition.”

Specifically on AI, they believe “the EU should be the first to create a solid legal framework for new technologies such as artificial intelligence to be used in the economy and public life. Legislation should, however, be focused on applications that use these new technologies and not on the underlying technologies themselves, since this would otherwise limit innovation and the creation of new applications.”

Why it matters: The Liberal Democrats have been resurgent in recent local elections and could plausibly gain a number of seats at the next general election, which, especially in a likely hung parliament, would make their perspective on AI more important than it otherwise would be as the third party.

This is especially true as the Liberal Democrats have taken a more proactive and targeted approach to AI than other parties. In their 2017 general election manifesto, the Liberal Democrats were one of only two parties (along with the Conservatives) to even name-check artificial intelligence, and the only party to mention ‘machine learning’.

Further, in November 2018, they set up a Technology and Artificial Intelligence Commission focused on developing policies on core ethical principles for data scientists developing new technologies; ensuring minority groups are involved in the development and application of technology; and an industrial strategy for British technology. The latest reports suggest it is currently looking at AI ethics. The commission is due to report back by the Liberal Democrat Conference in mid-September.

The Greens

What the manifesto says: The Green Party manifesto states that they believe “Europe must not seek profits from unscrupulous exports of arms, police and security equipment and surveillance technologies to those who use them for harm. New stronger controls over surveillance technology, drones, and artificial intelligence are needed.”

Why this matters: The Greens are unlikely to gain much ground in either the European elections or a subsequent general election unless climate change shoots up the agenda and stays there (not unthinkable given the rise of Extinction Rebellion and the school strikes, but not very likely in the short term either).

At a European level, working with the rest of the Green group, they have been successful in putting lethal autonomous weapons on the agenda. This led to the European Parliament calling for an international treaty to ban killer robots and demanding that no money from the European Defence Fund go into their research and development, which is a significant win.

So, with only one MP, and likely staying that way, their stance isn’t likely to prove very directly important in how UK AI policy develops. However, they might still have a role in putting certain issues on the agenda and acting as a coalition builder for potential cross-party initiatives.

Change UK

What the manifesto says: The Change UK manifesto states they believe: “We must focus on skills, training and a proper system which helps people as they change career. We must prepare for the rise of automation and support investment in the industries of the future where, thanks to its world-leading universities and science infrastructure, Britain is well placed to lead.”

Why this matters: Given that Change UK are set for a fairly poor performance at the EU elections and lack a party infrastructure, they probably won’t survive the next general election. For now, they have 11 MPs and so do have some influence over the current parliament, but they aren’t suggesting anything beyond what other parties have already said on AI.

Interesting Upcoming Events

AI for Good London Meetup

16th May, WeWork Space Paddington

Milena Marin, Senior Advisor in Amnesty International’s Evidence Lab, will be presenting on how Amnesty is adapting and increasing its abilities to both scrutinise biased AI systems and implement the technology in large scale investigations.

DataKind Data Science Ethics Book Club: Session 2, Facial Recognition

22nd May, The Bower, 211 Old Street, EC1V 9NR

DataKind are starting a book club, meeting every six weeks, focusing on ethical issues in AI and data science. This first public session will debate the value of facial recognition systems.

Breakfast ThinkIn — Who is winning the global AI race?

4th June, Fora Fitzrovia

A discussion about the race for dominance in AI, especially the role second-tier players like the UK, Israel, Germany, Finland and other states might play compared to the US and China. No panellists are confirmed yet, but Tortoise always manages to pull in great speakers, so I’m confident it will be good regardless.

Thanks for reading. If you found it useful, share the subscription link with someone else who might find it useful too: https://mailchi.mp/e9c4303fce5b/aiwestminster

If you have suggestions, comments, thoughts or feelings, you can contact me at: aiwestminsternewsletter@gmail.com or @elliot_m_jones

--

Elliot Jones

Researcher at Demos; Views expressed here are entirely my own