The Centre for Data Ethics and Innovation calls for evidence on online targeting and bias in algorithmic decision-making

What happened: The Centre for Data Ethics and Innovation is seeking evidence on online targeting and bias in algorithmic decision-making to support its forthcoming reviews.

Online targeting: Their review of online targeting focuses on the customisation of products and services online (including content, service standards, and prices) based on data about individual users. Questions they are particularly interested in include:

  • What are the key technical aspects of online targeting? (What data is used; how is it collected, processed, and/or shared; what customisation is carried out; on what media; and how is this monitored, evaluated, and iterated on?)
  • How is this work governed within organisations? (How are risks, including unintended consequences, monitored and mitigated; who is accountable within organisations; and how are complaints dealt with?)
  • What emerging technologies might change the way that online targeting is carried out? Might these present novel issues?

Bias in algorithmic decision-making: Their review of algorithmic bias will take a sector-by-sector approach because they believe the ethical considerations arising from algorithmic decisions are significantly context-dependent. They have chosen to review four sectors where they believe decision-making has a significant impact on individuals and there is a record of bias:

  • Financial services
  • Crime and justice
  • Recruitment
  • Local government

Interim reports on both will be published over the summer. The final report and recommendations to government on online targeting will be published in December 2019, and the final report and recommendations on bias in algorithmic decision-making in March 2020.

Plans announced to introduce new laws for internet-connected devices

What happened: Following the government’s voluntary Secure by Design Code of Practice for consumer Internet of Things (IoT) devices, the government is consulting on further regulation, including a new labelling scheme telling consumers how secure their IoT devices are. The label will initially be launched as a voluntary scheme, helping consumers identify products that have basic security features and those that don’t, before becoming mandatory.

They are also considering making some security requirements a condition of sale, including:

  • IoT device passwords must be unique and not resettable to any universal factory setting.
  • Manufacturers of IoT products must provide a public point of contact as part of a vulnerability disclosure policy.
  • Manufacturers must explicitly state the minimum length of time for which the device will receive security updates, through an end-of-life policy.

Why this matters: While not directly related to AI, these proposals provide a signal about the UK’s preferred approach to technology regulation, and IoT devices are likely to be one significant way consumer-facing AI systems manifest themselves in the near future.

The government had previously said its preference was for industry to self-regulate, adopting high standards voluntarily and improving the level of transparency with consumers regarding the standard of in-built cyber security measures.

However, the government is concerned that the risk to individuals and the economy has grown without sufficient action from industry. These proposals signal that the government’s preferred approach to technology policy is moving away from self-regulation and towards upstream regulation that builds safety and security into the design stage of technology, an approach which also features heavily in the recent Online Harms White Paper.

As the paper The Malicious Use of Artificial Intelligence notes, IoT devices are generally highly insecure and are one way in which AI controlling key systems, e.g. the power grid, could be subverted, potentially causing more damage than would have been possible were those systems under human control. Better cybersecurity could make these attacks more difficult, but individuals affected by these failures, e.g. victims of DDoS attacks using AI botnets, are not typically able to improve cybersecurity themselves and instead must rely on what the market provides.

There are also poor incentives for companies to deploy greater safety features: doing so makes their products the more expensive option, leaving them out-competed by other firms unless consumers display a strong preference for security features. Regulation can help prevent this race to the bottom.

Therefore, regulation raising compulsory IoT security standards could help improve resilience to the malicious use of AI in future. The current proposals don’t seem sufficient to make much difference on their own, but as part of a trajectory towards stronger regulatory standards they may be promising.

UKRI and the Japan Science & Technology Agency to fund collaborative research on Artificial Intelligence and Society

What happened: From 8th May to mid-July, UKRI and the Japan Science and Technology Agency (JST) will be accepting joint proposals from British and Japanese researchers to explore how AI will impact the economy and society, under three broad areas:

  • Impacts on humans and society
  • Economic implications, skills, work and education
  • Transparency, responsibility, governance and ethics

The total budget is ~£2.8m, and successful projects are expected to commence in January 2020 for a period of three years.

Why this matters: With the Huawei 5G controversy of the last few weeks, the UK is clearly trying to tread a fine line between courting Chinese and American interests as it leaves the European Union. Positioning itself to build links with other second-tier AI players will help the UK stay relevant and increase its influence without being seen to pick a side.

Further, the AI Narratives project from the Leverhulme Centre for the Future of Intelligence and the Royal Society highlights that different cultures perceive AI in distinct ways, e.g. “As in Western narratives, AI is predominantly portrayed in Japanese fiction in embodied form. However, it is represented less as a slave or servant, and more frequently as a friend or tool.”

These narratives will shape the approach different countries take to the governance of their AI systems and the ways AI manifests in society. As the safe governance of AI is very likely to require international collaboration, greater common understanding will be useful when it comes to setting international standards and finding common ground for legitimising the use of AI (or not). Collaborations like this, which promote links between social scientists and humanities scholars in the East and West, will be important in bridging that gap.

UK CEOs believe government should play an integral role in AI development but are torn on self-regulation and safety nets

What happened: PwC has published analysis of UK CEOs’ thoughts on AI, drawn from its annual global survey of CEOs (covering companies with 500+ employees or more than $50m in revenue). The key takeaways from UK CEOs are:

On their plans:

  • Only 2% (Global: 6%) have introduced widespread AI initiatives, but 35% (Global: 35%) say they plan to introduce AI into the business in the next three years.
  • 36% (Global: 23%) have no plans at all, and of those, 76% said a deficit in the supply of skilled workers was their primary reason.

On government and governance:

  • 70% support government-led national strategies and policies on AI.
  • 63% believe government should play an integral role in AI development.
  • 47% think organisations should be allowed to self-regulate the use of AI, while 44% disagree.
  • 49% believe government should provide a safety net for workers displaced by AI, but 41% disagree.
  • 65% think government should incentivise organisations to retrain workers whose jobs are automated.
  • 82% agree that AI-based decisions need to be explainable to be trusted.

On the future:

  • 42% believe AI will become as smart as humans, but 42% disagree.
  • Only 32% believe AI will remove human bias (Global: 48%).

Why this matters: Some CEOs, like Mark Zuckerberg, are calling for regulation to take the responsibility (and blame) away from the tech giants and provide clarity on what they should be doing. This survey suggests that, while industry is divided, the UK government may find allies in domestic big business as it pivots away from self-regulation and starts to impose standards and controls on AI development.

APPG on Heart and Circulatory Disease publishes inquiry into the impact of AI on patients

What happened: The APPG on Heart and Circulatory Disease has published the results of its inquiry into the impact of AI on heart and circulatory disease patients, with its conclusions endorsed by Health Secretary Matt Hancock. The report argues that policymakers, charities, industry, clinicians, and the research community should immediately begin engaging patients about AI in healthcare, during policy development, service design, and technology implementation.

Surveying heart and circulatory disease patients, the inquiry found that 85% supported using AI in diagnostics and treatment, but only 17% were aware of any current use of AI in the diagnosis and treatment of heart and circulatory disease. 86% of respondents were happy for their anonymised health data to be shared.

The report recommends:

  • NHSX should set up discussions with charities and the public, to explore patients’ views and concerns about the use of AI in healthcare.
  • Understanding Patient Data (UPD) should work with charities, patients and the healthcare sector to develop tools and resources for engaging the public on AI.
  • NHSX should work with UPD, charities, and patient organisations to ensure that policy development in AI is designed with the explicit purpose of understanding, promoting and protecting public values and that this is clearly and openly communicated.
  • NHS England and NICE should develop standards for the publication of AI research, providing trustworthy guidelines for researchers, the media and the public.
  • Academic Health Science Networks should facilitate the exchange of information around new developments in AI between patients, charities, and industry partners.
  • NHS England and NHS Digital should explore the impact of AI on health inequalities.

Why this matters: This builds on the recent Topol review into the NHS’s workforce and the digital future, which emphasised that patient benefit must remain the driving criterion for AI design and use.

The Ada Lovelace Institute’s recent report highlights that increasing public understanding of AI alone isn’t sufficient, or possibly even necessary, for the ethical deployment of AI, and that the focus should instead be on communicating the impact of the technology. Meaningful public engagement means building mutual understanding between researchers, developers, policymakers, and users, using a range of methods, from uninformed polling to engagement strategies such as citizens’ assemblies, which aim first to increase the knowledge base of the surveyed groups before investigating their informed opinions.

So while the report and its recommendations are promising, especially the focus on bringing patients into the discussion, it remains to be seen whether this translates into concrete, meaningful engagement.

Interesting Upcoming Events

How to Future-Proof Humanity

23rd May, RSA House

Paul Mason, left-wing economics commentator and journalist, will be arguing that as we enter a future defined by AI, we must make a choice: will we accept machine control of human beings, or resist it? He will argue we need a theory of humanity that protects our rights and freedoms against the forces eroding who we are. This talk is based on his upcoming book, Clear Bright Future.

Regulating Unreality

11th July, Barbican Centre

Professor Lilian Edwards (University of Strathclyde) will be presenting on models of governance for ‘deep fakes’, including standards for evidence, whether we should focus on technological or legal solutions, and whether the right to know what is real versus computer-generated should be a new human right.

Thanks for reading. If you found it useful, share the subscription link with someone else who might find it useful too: https://mailchi.mp/e9c4303fce5b/aiwestminster

If you have suggestions, comments, thoughts or feelings, you can contact me at: aiwestminsternewsletter@gmail.com or @elliot_m_jones

--

Elliot Jones

Researcher at Demos; Views expressed here are entirely my own