An EU Strategy on Artificial Intelligence: What the Global Experts Think

EPSC
6 min read · Apr 26, 2018


Courtesy of the EPSC’s Mario Mariniello (@MarioEPSC), Lewin Schmitt (@lewinontheedge) & Rachel Smit

On 25 April 2018, the European Commission released its much-awaited Communication on Artificial Intelligence for Europe. It outlines a series of measures geared towards putting Artificial Intelligence (AI) at the service of Europeans and boosting Europe’s competitiveness in this field. The Communication was preceded by a Declaration of Cooperation signed by 24 Member States, as well as Norway — a clear signal of the combined determination to make a success of this file. The Commission aims to agree on a coordinated plan of action with Member States by the end of the year.

With this in mind, the European Commission’s in-house think-tank — the European Political Strategy Centre — had previously convened six of the world’s leading experts on AI to discuss the main issues at stake in any EU strategy on AI.

Here’s a short summary of what they had to say. You can access a full transcript here.

It’s not just about technology…

AI is a broad and multidisciplinary concept. It goes beyond narrow, techno-centric aspects to incorporate psychological, societal and geopolitical dimensions.

The potential benefits are huge…

  • AI technologies and applications have the potential to drastically improve performance in multiple areas, including medicine (especially for diagnostics and treatment optimisation), finance (through better control and monitoring), and space exploration.
  • Public services can also benefit in many ways, e.g. through improving digital content accessibility, environmental protection tools or climatology forecasting.
  • Development in AI is also driving advances in consumer applications such as navigation, speech translation, and personal assistant apps.
  • Significant cost reductions in the manufacturing and services industry are foreseeable thanks to increases in efficiency and productivity.
  • Most foreseeable applications would require human interaction with AI, as the technology would mostly be used as a supplementary tool assisting the user.

But so can the potential harm…

  • Harm from AI is most likely to be human-driven. This is because AI is primarily a tool which, like other tools, can and probably will be abused for actively malicious purposes. Intentional weaponisation of AI already takes place through hacking, mass manipulation, and the exploitation of data or of vulnerabilities in deficient code.
  • Unintended and unforeseen harm could result from imprudent applications of AI technology in sensitive areas. Inadvertent outcomes could include software that repeats or even reinforces patterns of discrimination and bias; unanticipated interactions within complex systems that result in catastrophic accidents; or tracking systems intended to protect wildlife populations being exploited instead by poachers.
  • There is also a concern that the intensification of AI developments could have structural ramifications, leading to an asymmetry between AI ‘haves’ and ‘have-nots’. Geopolitically, this might not only strengthen US and Chinese clout, but also reinforce the global North-South divide, further enshrining existing power imbalances.
  • From a civic and political perspective, the proliferation of AI in the public sphere may erode civil rights and personal liberties. More generally, the ascent of AI, in combination with other technological developments, may pose new challenges to liberal democratic forms of governance.
  • At the individual level, growing reliance on AI systems may reduce our self-sufficiency and decision-making ability, and erode basic human skills such as map-reading.

Policy intervention will be needed…

  • The current lack of diversity among AI developers, both in demographic terms and in disciplinary background, should be addressed through public policy initiatives that promote greater diversity in the sector. Ethical guidelines, codes of conduct, or even a Hippocratic Oath for AI practitioners could also be considered.
  • Human-centred regulatory frameworks might help to uphold principles of accountability, intelligibility and transparency, thus ensuring citizens’ trust in AI applications. The experts also noted the need for clear mechanisms for assigning responsibility when deploying AI. To enforce regulations, effective auditing and assessment procedures need to be put in place.
  • The education system should be updated to reflect the demand for new skill sets. Preparing the human workforce for the implementation of AI in the workplace will be a key vector for a successful transition. Equipping humans with the necessary common sense and a healthy level of distrust when dealing with AI will be essential. In the same vein, digital and AI literacy and ethics, critical thinking, and lateral skills were all seen as necessary focus points for new curricula. More generally, bridging the STEM-humanities divide in current education systems was seen as important by several of the speakers.
  • While the public sector and policymakers alike should embrace AI, the experts called for caution when deploying AI systems in core public agencies (such as law enforcement, justice, health and welfare), where critical decision-making should remain a human responsibility.

The EU has a major card to play…

  • Strengths and opportunities were seen in the EU’s diversity and cultural richness, experience in providing quality STEM education, its skilled workforce, and a strong position regarding fundamental research on AI.
  • Among the biggest challenges and competitive disadvantages vis-à-vis the US and China is the EU’s lack of a homogeneous (language) market. This correlates with the absence of leading big-data corporations (such as Facebook, Amazon, Alibaba or Tencent) and relatively low private-sector activity in AI R&D.
  • The EU is perceived as having high credibility to lead the global debate on AI governance, as a promoter of ethical standards and a champion of data privacy protection. Here, there may be room for a third, European way, strategically positioned between the US and China.
  • The experts expressed their hope that the EU would act as a guardian of public trust, due process, ethics, and high standards of accountability — guaranteeing safety and fairness for people within the EU and beyond.
  • A comprehensive EU strategy was deemed essential not only for fostering the healthy development of the AI sector, but also for defining the parameters within which this shall take place.

By the way, the EPSC has also published its own thoughts on the opportunities and ethical challenges that come with AI in a paper that focuses on how Europe can sharpen its competitive edge vis-à-vis other leading economies, such as the United States and China. You can read it here.
