Published in Digital Diplomacy

Abstract, Distorted view of Computer Motherboard. Photo by @lazycreekimages on Unsplash.

The EU Artificial Intelligence Act

Screenshot of the page where the proposal and annexes were retrieved.

1. Harmonised rules on AI and amending legislation

2. Explanatory Memorandum?

“The purpose of the explanatory memorandum is to explain the reasons for, and the context of, the Commission’s proposal drawing on the different stages of the preparatory process.” Tool #38 in the regulation toolbox (yes, there is such a thing).

“By improving prediction, optimising operations and resource allocation, and personalising service delivery, the use of artificial intelligence can support socially and environmentally beneficial outcomes and provide key competitive advantages to companies and the European economy.” (p. 1)

“The common foundation that unites these rights can be understood as rooted in respect for human dignity — thereby reflecting what we describe as a “human-centric approach” in which the human being enjoys a unique and inalienable moral status of primacy in the civil, political, economic and social fields.”

Here is an illustration from ESHRE on the three main institutions involved in EU legislation.

“This proposal constitutes a core part of the EU digital single market strategy.”

“…the proposal defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market that will be further operationalised through harmonised technical standards.”

“…the principle that a central authority should have a subsidiary function, performing only those tasks which cannot be performed at a more local level.”

“Compliance with these requirements would imply costs amounting to approximately EUR € 6000 to EUR € 7000 for the supply of an average high-risk AI system of around EUR € 170000 by 2025.”

“Those have been estimated at approximately EUR € 5000 to EUR € 8000 per year. Verification costs could amount to another EUR € 3000 to EUR € 7500 for suppliers of high-risk AI.”
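To put these figures in context, here is a quick back-of-the-envelope calculation using only the numbers quoted above; the grouping into one-off and recurring items is my own reading of the memorandum, added for illustration:

```python
# Rough calculation based on the cost figures quoted above.
# The split into one-off vs. recurring items is an assumption for
# illustration, not part of the proposal's text.

avg_system_value = 170_000  # EUR, average high-risk AI system by 2025

cost_ranges = {
    "Requirements compliance (per system)": (6_000, 7_000),
    "Recurring costs (per year)": (5_000, 8_000),
    "Verification": (3_000, 7_500),
}

for label, (low, high) in cost_ranges.items():
    share_low = low / avg_system_value
    share_high = high / avg_system_value
    print(f"{label}: {share_low:.1%} to {share_high:.1%} of system value")
```

In other words, each item lands in the low single digits as a percentage of the value of an average high-risk system, which is roughly the order of magnitude the Commission uses to argue the burden is proportionate.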

“Member States will have to designate supervisory authorities in charge of implementing the legislative requirements. Their supervisory function could build on existing arrangements, for example regarding conformity assessment bodies or market surveillance, but would require sufficient technological expertise and human and financial resources. Depending on the pre-existing structure in each Member State, …”

“The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation.”
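To make that risk-based logic concrete, here is a minimal sketch (my own illustration, not the legal text) of how classification keys off the stated intended purpose rather than the underlying technology. The area names loosely paraphrase the high-risk use cases listed in Annex III of the proposal and are only indicative:

```python
# Minimal sketch of intended-purpose-based classification.
# The area names loosely paraphrase Annex III of the proposal and are
# illustrative only; the legal text is far more specific and conditional.

HIGH_RISK_AREAS = {
    "biometric identification and categorisation",
    "management of critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def is_high_risk(intended_purpose: str) -> bool:
    """Classify a system by its stated intended purpose, not its design."""
    return intended_purpose.lower() in HIGH_RISK_AREAS

print(is_high_risk("law enforcement"))       # True
print(is_high_risk("music recommendation"))  # False
```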

“The conformity assessment procedure is a proof that the general safety and performance requirements are fulfilled.”

“New ex ante re-assessments of the conformity will be needed in case of substantial modifications to the AI systems.”

“AI regulatory sandboxes establish a controlled environment to test innovative technologies for a limited time on the basis of a testing plan agreed with the competent authorities.”

“Ex-post enforcement should ensure that once the AI system has been put on the market, public authorities have the powers and resources to intervene in case AI systems generate unexpected risks, which warrant rapid action.”

“Those codes may also include voluntary commitments related, for example, to environmental sustainability, accessibility for persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of development teams.”


Alex Moltzau

AI Policy and Ethics at www.nora.ai. Student at University of Copenhagen MSc in Social Data Science. All views are my own. twitter.com/AlexMoltzau
