Summary of EU white paper on Artificial Intelligence — A European approach to excellence and trust

Tiba Razmi
7 min read · Feb 22, 2020


European initiatives that aim to increase the availability of quantum testing and experimentation facilities will help apply these new quantum solutions to a number of industrial and academic sectors.

In parallel, Europe will continue to lead progress in the algorithmic foundations of AI, building on its own scientific excellence. There is a need to build bridges between disciplines that currently work separately, such as machine learning and deep learning (characterized by limited interpretability, a need for large volumes of training data, and learning through correlations) and symbolic approaches (where rules are created through human intervention). Combining symbolic reasoning with deep neural networks may help improve the explainability of AI outcomes.
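As a minimal sketch of what such a hybrid could look like (the loan scenario, scoring function, and rules below are invented for illustration and are not taken from the white paper), a learned model's opaque score can be gated by human-written symbolic rules that supply readable justifications:

```python
# Hypothetical sketch: an opaque "neural" score combined with symbolic
# rules to produce an explainable decision. The loan scenario and all
# thresholds are invented for illustration only.

def neural_score(applicant: dict) -> float:
    """Stand-in for an opaque learned model (e.g. a deep net)."""
    # Pretend this weighted sum was learned from data.
    return 0.6 * applicant["income"] / 100_000 + 0.4 * applicant["years_employed"] / 10

SYMBOLIC_RULES = [
    # (predicate, human-readable explanation)
    (lambda a: a["age"] >= 18, "applicant must be an adult"),
    (lambda a: a["income"] > 0, "applicant must declare an income"),
]

def decide(applicant: dict, threshold: float = 0.5) -> tuple[bool, list[str]]:
    """Combine the opaque score with rules that yield explanations."""
    reasons = [why for rule, why in SYMBOLIC_RULES if not rule(applicant)]
    if reasons:                      # a symbolic rule vetoes the decision
        return False, reasons
    score = neural_score(applicant)
    return score >= threshold, [f"model score {score:.2f} vs threshold {threshold}"]

print(decide({"age": 30, "income": 50_000, "years_employed": 5}))
```

The symbolic layer cannot make the network itself transparent, but every veto comes with an explicit, human-auditable reason.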

Developing the skills necessary to work in AI and upskilling the workforce to become fit for the AI-led transformation will be a priority of the revised Coordinated Plan on AI to be developed with the Member States.

The Commission published a Communication welcoming the seven key requirements identified in the Guidelines of the High-Level Expert Group:

  • Human agency and oversight,
  • Technical robustness and safety,
  • Privacy and data governance,
  • Transparency,
  • Diversity, non-discrimination, and fairness,
  • Societal and environmental well-being, and
  • Accountability.

The specific characteristics of many AI technologies, including opacity (the ‘black box’ effect), complexity, unpredictability, and partially autonomous behavior, may make it hard to verify compliance with, and may hamper the effective enforcement of, rules of existing EU law meant to protect fundamental rights. Enforcement authorities and affected persons might lack the means to verify how a given decision made with the involvement of AI was taken and, therefore, whether the relevant rules were respected. Individuals and legal entities may face difficulties with effective access to justice in situations where such decisions may negatively affect them.

Member States are pointing to the current absence of a common European framework. The German Data Ethics Commission has called for a five-level risk-based system of regulation that would go from no regulation for the most innocuous AI systems to a complete ban for the most dangerous ones. Denmark has just launched the prototype of a Data Ethics Seal. Malta has introduced a voluntary certification system for AI. If the EU fails to provide an EU-wide approach, there is a real risk of fragmentation in the internal market, which would undermine the objectives of trust, legal certainty, and market uptake.

From 2025, the rules on accessibility requirements for goods and services set out in the European Accessibility Act will apply. In addition, fundamental rights need to be respected when implementing other EU legislation, including in the fields of financial services, migration, and the responsibility of online intermediaries.

The EU has a strict legal framework in place to ensure inter alia consumer protection, to address unfair commercial practices and to protect personal data and privacy. In addition, the acquis contains specific rules for certain sectors (e.g. healthcare, transport).

A risk-based approach is important to help ensure that the regulatory intervention is proportionate. However, it requires clear criteria to differentiate between different AI applications, in particular on the question of whether or not they are ‘high-risk’. The determination of what is a high-risk AI application should be clear, easily understandable, and applicable to all parties concerned. Nevertheless, even if an AI application does not qualify as high-risk, it remains entirely subject to existing EU rules.

An AI application should be considered high-risk where it meets the following two cumulative criteria:

  • First, the AI application is employed in a sector where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur. The sectors covered should be specifically and exhaustively listed in the new regulatory framework: for instance, healthcare, transport, energy, and parts of the public sector.
  • Second, the AI application in the sector in question is, in addition, used in such a manner that significant risks are likely to arise. This second criterion reflects the acknowledgment that not every use of AI in the selected sectors necessarily involves significant risks. For example, whilst healthcare generally may well be a relevant sector, a flaw in the appointment-scheduling system in a hospital will normally not pose risks of such significance as to justify legislative intervention.

In addition, the use of AI applications for certain purposes is to be considered high-risk as such, irrespective of the sector (both tests are combined in the sketch after this list):

  • The use of AI applications in recruitment processes, as well as in situations impacting workers’ rights, would always be considered “high-risk”, and therefore the requirements below would at all times apply.
  • The use of AI applications for the purposes of remote biometric identification and other intrusive surveillance technologies would always be considered “high-risk”, and therefore the requirements below would at all times apply.
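A minimal sketch of how this determination could be expressed in code; the sector and purpose entries below are placeholders standing in for the exhaustive lists that the regulatory framework would have to define:

```python
# Hypothetical sketch of the white paper's high-risk test: two cumulative
# criteria (sector AND manner of use), plus purposes that are high-risk
# as such. The entries below are illustrative placeholders, not the
# exhaustive list the regulatory framework would define.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}
ALWAYS_HIGH_RISK_PURPOSES = {"recruitment", "remote biometric identification"}

def is_high_risk(sector: str, purpose: str, significant_risk_in_use: bool) -> bool:
    # Exception: some purposes are high-risk regardless of sector.
    if purpose in ALWAYS_HIGH_RISK_PURPOSES:
        return True
    # Cumulative test: high-risk sector AND a manner of use likely
    # to produce significant risks.
    return sector in HIGH_RISK_SECTORS and significant_risk_in_use

# A hospital appointment-scheduling flaw: relevant sector, but the use
# itself does not pose significant risks, so it is not high-risk.
print(is_high_risk("healthcare", "appointment scheduling", False))  # False
print(is_high_risk("retail", "recruitment", False))                 # True
```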

The types of mandatory legal requirements for high-risk AI applications could consist of the following key features:

  • training data;
  • data and record-keeping;
  • information to be provided;
  • robustness and accuracy;
  • human oversight;
  • specific requirements for certain particular AI applications, such as those used for purposes of remote biometric identification.

The following requirements relating to the data sets used to train AI systems could be envisaged (a toy coverage check is sketched after the list):

  • Requirements ensuring that AI systems are trained on data sets that are sufficiently broad and cover all relevant scenarios needed to avoid dangerous situations.
  • Requirements to take reasonable measures aimed at ensuring that subsequent use of AI systems does not lead to outcomes entailing prohibited discrimination, in particular by ensuring that all relevant dimensions of gender, ethnicity, and other possible grounds of prohibited discrimination are appropriately reflected in those data sets.
  • Requirements aimed at ensuring that privacy and personal data are adequately protected during the use of AI-enabled products and services.
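As a toy illustration of the coverage idea (the attribute name and the 5% floor are invented for this sketch; the white paper prescribes no threshold), an automated check might flag groups that are badly under-represented in a training set:

```python
from collections import Counter

# Hypothetical sketch: flag protected groups that are under-represented
# in a training set. The 5% floor is an invented threshold, not a
# figure from the white paper.

def underrepresented(records: list[dict], attribute: str, floor: float = 0.05) -> list[str]:
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < floor]

data = [{"gender": "f"}] * 96 + [{"gender": "m"}] * 4
print(underrepresented(data, "gender"))  # ['m'] -> investigate before training
```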

On the keeping of records and data, the regulatory framework could prescribe that the following be kept (one possible record structure is sketched after the list):

  • accurate records regarding the data set used to train and test the AI systems, including a description of the main characteristics and how the data set was selected;
  • in certain justified cases, the data sets themselves;
  • documentation on the programming and training methodologies, processes and techniques used to build, test and validate the AI systems, including where relevant in respect of safety and avoiding the bias that could lead to prohibited discrimination.
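One way to picture such record-keeping is a structured training record stored alongside each model. The field names below are assumptions sketching what the listed categories could cover; the white paper only names the categories of information to retain:

```python
from dataclasses import dataclass, field
import datetime

# Hypothetical sketch of a training record. Field names are invented;
# the white paper only lists the categories of information to retain.

@dataclass
class TrainingRecord:
    model_name: str
    dataset_description: str      # main characteristics of the data set
    selection_method: str         # how the data set was selected
    methodology_docs: list[str]   # programming/training methodology references
    bias_checks: list[str]        # tests run for prohibited discrimination
    created: datetime.date = field(default_factory=datetime.date.today)

record = TrainingRecord(
    model_name="triage-v2",
    dataset_description="120k anonymised triage notes, 2015-2019",
    selection_method="stratified sample across hospital sites",
    methodology_docs=["train_pipeline.md"],
    bias_checks=["gender representation", "age representation"],
)
print(record.model_name, record.created)
```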

The following requirements could be considered for the provision of information:

  • Ensuring clear information to be provided as to the AI system’s capabilities and limitations.
  • Citizens should be clearly informed when they are interacting with an AI system and not a human being; EU data protection legislation already contains certain rules of this kind.

For robustness and accuracy, the following elements could be considered (reproducibility is sketched in code after the list):

  • Requirements ensuring that the AI systems are robust and accurate, or at least correctly reflect their level of accuracy, during all life cycle phases;
  • Requirements ensuring that outcomes are reproducible;
  • Requirements ensuring that AI systems can adequately deal with errors or inconsistencies during all life cycle phases;
  • Requirements ensuring that AI systems are resilient against both overt attacks and more subtle attempts to manipulate data or algorithms themselves and that mitigating measures are taken in such cases.
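Of these, reproducibility has the most direct operational counterpart: pinning every source of randomness so a training run can be repeated exactly. A minimal, standard-library-only sketch (real pipelines would also pin framework seeds, library versions, and note any non-deterministic hardware behavior):

```python
import random

# Minimal sketch of the reproducibility requirement: with all sources
# of randomness seeded, two training runs produce identical outcomes.

def train(seed: int) -> list[float]:
    random.seed(seed)
    return [random.random() for _ in range(3)]  # stand-in for learned weights

assert train(42) == train(42)   # same seed -> reproducible outcome
print("run is reproducible")
```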

Human oversight could have the following, non-exhaustive, manifestations (the first mode is sketched in code after the list):

  • The output of the AI system does not become effective unless it has been previously reviewed and validated by a human;
  • The output of the AI system becomes immediately effective, but human intervention is ensured afterward;
  • Monitoring of the AI system while in operation and the ability to intervene in real time and deactivate it;
  • In the design phase, by imposing operational constraints on the AI system.
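The first two manifestations differ only in when the human review happens. Here is a sketch of the first mode, where the output is held until a human validates it (all names are illustrative, not prescribed by the white paper):

```python
# Hypothetical sketch of the first oversight mode: an AI output is held
# in a queue and only becomes effective after a human validates it.

pending: list[dict] = []

def ai_decide(case_id: str, recommendation: str) -> None:
    """Model output is queued, not applied."""
    pending.append({"case": case_id, "recommendation": recommendation})

def human_review(approve: bool) -> str | None:
    """A reviewer validates (or rejects) the oldest pending output."""
    item = pending.pop(0)
    return item["recommendation"] if approve else None

ai_decide("A-17", "reject benefit claim")
effective = human_review(approve=False)   # human overrides the AI output
print(effective)  # None -> the AI recommendation never took effect
```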

In relation to the addressees of the legal requirements that would apply to high-risk AI applications:

  • It is the Commission’s view that, in a future regulatory framework, each obligation should be addressed to the actor(s) who is (are) best placed to address any potential risks. For example, while the developers of AI may be best placed to address risks arising from the development phase, their ability to control risks during the use phase may be more limited. In that case, the deployer should be subject to the relevant obligation.
  • There is a question about the geographic scope of the legislative intervention. In the view of the Commission, it is paramount that the requirements are applicable to all relevant economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU or not.

Audit AI

A prior conformity assessment would help ensure that AI is trustworthy, secure, and respectful of European values and rules. It could include procedures for testing, inspection, or certification, as well as checks of the algorithms and of the data sets used in the development phase. The conformity assessments for high-risk AI applications should be part of the conformity assessment mechanisms that already exist for a large number of products being placed on the EU’s internal market.

When designing and implementing a system relying on prior conformity assessments, particular account should be taken of the following:

  • Not all requirements outlined above may be suitable to be verified through a prior conformity assessment.
  • Particular account should be taken of the possibility that certain AI systems evolve and learn from experience, which may require repeated assessments over the lifetime of the AI systems in question (a re-assessment trigger is sketched after this list).
  • The need to verify the data used for training and the relevant programming and training methodologies, processes and techniques used to build, test and validate AI systems.
  • Where the conformity assessment shows that an AI system does not meet the requirements, for example those relating to the data used to train it, the identified shortcomings will need to be remedied.
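The evolving-systems point implies that conformity cannot be a one-off check. Below is a sketch of one way to trigger re-assessment when a deployed model changes; fingerprinting artefacts by hash is an assumption of this sketch, not a mechanism the white paper prescribes:

```python
import hashlib

# Hypothetical sketch: re-run the conformity assessment whenever the
# model artefact (or its training data) changes. Hash-based
# fingerprinting is an illustrative mechanism only.

def fingerprint(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

assessed: dict[str, str] = {}  # system id -> fingerprint last assessed

def needs_reassessment(system_id: str, model_bytes: bytes) -> bool:
    current = fingerprint(model_bytes)
    if assessed.get(system_id) == current:
        return False               # unchanged since the last assessment
    assessed[system_id] = current  # record the newly assessed version
    return True

print(needs_reassessment("triage-v2", b"weights-v1"))  # True: first assessment
print(needs_reassessment("triage-v2", b"weights-v1"))  # False: unchanged
print(needs_reassessment("triage-v2", b"weights-v2"))  # True: system evolved
```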

Governance

  • A European governance structure on AI in the form of a framework for cooperation of national competent authorities is necessary to avoid fragmentation of responsibilities, increase capacity in the Member States, and make sure that Europe equips itself progressively with the capacity needed for testing and certification of AI-enabled products and services.
  • A European governance structure could have a variety of tasks: acting as a forum for regular exchange of information and best practice, identifying emerging trends, and advising on standardization activity as well as on certification. It should also play a key role in facilitating the implementation of the legal framework, for example by issuing guidance, opinions, and expertise.
  • Testing centers should enable the independent audit and assessment of AI systems in accordance with the requirements outlined above. Independent assessment increases trust and ensures objectivity. It could also facilitate the work of the relevant competent authorities.

Reference:

https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
