the global race to REGULATE AI

marta gaia zanchi
nina capital
7 min read · Nov 17, 2023

and its implications for healthtech founders

NOVEMBER 2023

by Nadin Youssef

Recently, the rapid and widespread adoption of artificial intelligence (AI) tools has caused a significant surge in policy discussions surrounding their use. Governments are recognizing the potential risks posed by AI and the need for regulatory measures.

paper vs AI: 1-0

what does this mean for manufacturers of AI-based medical devices?

As governments acknowledge the critical importance of safeguarding the public from AI’s potential risks, medical device manufacturers will now face the challenge of aligning existing practices with evolving regulatory frameworks. This entails not only addressing the usual concerns related to the effectiveness and safety of their technologies but also navigating the ethical considerations surrounding AI in healthcare. Striking a balance between innovation and compliance becomes crucial, as regulatory measures will likely shape the future landscape of AI-driven medical solutions.

Here, we’ll break down the most recent AI initiatives in three key markets — the EU, the UK, and the US — and discuss how medical device manufacturers should prepare for what could change.

European Parliament votes in favor of adopting the EU AI Act

In the summer of 2023, the European Parliament voted in favor of the proposed AI Act draft legislation, marking the start of the final phase of the legislative process. During this stage, discussions among the Council, Parliament, and Commission will ensue, culminating in the negotiation and finalization of an agreement on the final version of the EU AI Act. The landmark bill adopts a risk-based approach to the design, development, marketing, and use of AI and will likely be negotiated for the following 12–15 months before a final agreement is reached. Once adopted, it is expected that a 3-year transition period will be put in place, similar to that of the EU MDR 2017/745.

Under the AI Act, medical devices and IVDs with AI components requiring approval from a Notified Body under the EU MDR or IVDR will be automatically classified as high-risk devices. The proposed framework categorizes devices as:

Minimal risk — AI systems in this category are not subject to any legal obligations. It has been proposed that manufacturers of minimal-risk systems follow codes of conduct to encourage voluntary compliance with the requirements the legislation makes mandatory for higher-risk systems.

Limited risk — AI systems that interact with people, such as chatbots, emotion recognition programs, and deep-fake systems that can manipulate images, sounds, and videos. This category requires manufacturers to comply with a set of transparency requirements.

High risk — AI systems that require third-party conformity assessment, such as that from a Notified Body, will fall under ‘high risk’. This will include clinical decision support systems, diagnostics, biometric systems, and critical infrastructure programs.

Unacceptable risk — AI systems considered a threat to the population will be banned from the market. These include devices that can manipulate the cognitive behavior of people, social scoring devices based on people’s socio-economic status or personal characteristics, and facial recognition databases.
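As a rough mental model — illustrative only, not legal guidance — the four tiers above can be sketched as a simple triage function. The input criteria and their order of precedence here are our simplified assumptions, not the Act’s actual legal tests:

```python
from enum import Enum


class AIActRisk(Enum):
    """The four risk tiers proposed in the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned from the market
    HIGH = "high"                  # third-party conformity assessment required
    LIMITED = "limited"            # transparency requirements apply
    MINIMAL = "minimal"            # voluntary codes of conduct only


def classify_ai_system(prohibited_practice: bool,
                       needs_notified_body: bool,
                       interacts_with_people: bool) -> AIActRisk:
    """Simplified triage: check the most restrictive tier first.

    prohibited_practice:   e.g. cognitive manipulation or social scoring
    needs_notified_body:   e.g. an MDR/IVDR device above Class I
    interacts_with_people: e.g. a patient-facing chatbot
    """
    if prohibited_practice:
        return AIActRisk.UNACCEPTABLE
    if needs_notified_body:
        return AIActRisk.HIGH
    if interacts_with_people:
        return AIActRisk.LIMITED
    return AIActRisk.MINIMAL


# A diagnostic algorithm requiring Notified Body approval lands in 'high':
print(classify_ai_system(False, True, False).value)  # prints "high"
```

Note how a medical device’s existing MDR/IVDR classification, not any property of the AI itself, is what drives the jump to ‘high risk’ — which is exactly why the Class I gray area discussed below matters.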

As you’ve probably noticed, Class I devices are in a gray area — they conform to the MDR but don’t require a Notified Body assessment. It is still unclear if the EU will provide guidance around this, although it’s suspected that some form of best practice ‘guidance’ may be published to encourage Class I manufacturers to voluntarily apply for Notified Body assessments. This highlights the lack of harmonization between current medical device regulations and the proposed AI Act, creating uncertainty around the up-classing of AI-based medical devices. Even more concerning is the lack of clarity around who will be held accountable for assessing these technologies for compliance with the AI Act. It seems Notified Bodies will need to audit compliance with the EU AI Act in parallel to the existing EU MDR, but given the current 18-month waitlists, it is unclear how challenging this will be to implement in practice.

UK catches up

The consequences of Brexit on the UK’s medical device landscape have left the country in an awkward position — the UK currently requires manufacturers to comply with the EU MDD (referred to as the UK MDR), unlike the rest of Europe, which has now fully transitioned to the EU MDR. However, efforts to develop new regulations are underway. In the meantime, the UK has committed to several initiatives that will minimize the risk of disrupting the supply of innovative medical devices for UK patients.

In 2021, the UK Government opened a public consultation on the future of the country’s medical device regulation. As a result of the feedback, the MHRA aims to harmonize future regulations with the International Medical Device Regulators Forum guidance, on which the EU MDR is based, and has also committed to establishing recognition routes for trusted jurisdictions such as the EMA and the FDA. Moreover, a Statutory Instrument has also come into effect to ensure that CE-marked devices complying with the EU MDR are granted access to the market until June 30, 2030. The MHRA has also expressed its ambition to tighten post-market surveillance requirements and is expected to publish legislation that will apply to both CE- and UKCA-marked devices at the end of this year.

In further efforts to remain competitive, the UK Government, NICE, and MHRA announced the launch of the Innovative Devices Access Pathway (IDAP) pilot scheme. The initiative aims to accelerate NHS access to innovative technologies by providing manufacturers with regulatory and market access support at the key stages of development. It is believed that this will also facilitate reimbursement and, therefore, wider adoption.

Specifically for AI technologies, the MHRA has most recently announced a ‘regulatory sandbox’, termed the AI-Airlock, to support AI manufacturers in safely developing and deploying AI-based medical devices in the NHS. In practice, the initiative will allow manufacturers to identify where they need to gather more evidence within a safe and controlled environment. The hope is that using a regulatory sandbox will allow patients quicker access to novel technologies.

In contrast to the EU, the UK’s AI approach seems much more conservative. Earlier this month, the UK AI Safety Summit brought the topic of regulation to the wider public. Although no legislation has been published, the government has announced plans for non-statutory frameworks that will apply to all AI-based technologies.

across the pond: Biden’s executive order

Biden’s administration recently published an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence — marking the US’s entry into the global race to regulate AI.

In the EO, the Biden administration puts the Department of Health and Human Services (HHS) at the forefront of addressing AI’s implications for US patients. The HHS will be responsible for establishing an AI Task Force to develop a safety program that avoids harmful and unsafe healthcare practices involving AI and encourages responsible development and use. The department will need to consult with relevant agencies, most notably the FDA, to determine how AI-enabled technologies are assessed, deployed, and monitored. Specifically, the EO sets out several directives:

Establishing an AI Assurance Policy

As part of the broader strategy, the HHS has also been ordered to develop an AI Assurance Policy that sets out infrastructure requirements for pre-market and post-market performance evaluations against real-world data.

Establishing an AI Safety Programme

The HHS will need to coordinate with recognized Patient Safety Organisations to produce a common framework for capturing AI-induced clinical errors, identify appropriate methods to capture these clinical errors, develop best practices and guidelines, and disseminate those best practices amongst key stakeholders.

Developing a Strategy for Use of AI in Drug Development

A strategy for the regulation of AI-enabled drug development will set out principles and goals for suitable regulation across each phase of the drug development process. This will also include a framework for public-private partnerships to bolster this new regulation.

Grants, Awards, and Promotion

Finally, the HHS will also be responsible for supporting responsible AI manufacturers by collaborating with private sector companies, allocating awards to projects that improve healthcare data quality, and accelerating relevant grants through the NIH.

As the FDA and the HHS initiate discussions on how they will approach the published directives, it will be interesting to see whether the US can effectively implement patient safety safeguards while maintaining its highly competitive and innovative landscape.

a call to (informed) action

Today, there’s no immediate action required for founders of companies offering AI-based medical devices. However, leadership teams should remain aware of the rapidly evolving landscape. The extent of the changes that developers of AI technologies in healthcare will need to make to their regulatory strategies remains to be seen. A vigilant approach and early planning are critical to making optimal use of resources while navigating the shifting landscape and staying ahead of the market.

For those developing AI technologies that have yet to reach the market, it is essential to pay attention to these proposed changes and understand how they could impact your product’s risk classification and post-market surveillance processes. Not only that, but there is a high probability that the already monumentally busy bodies responsible for regulating medical devices, such as the FDA, the MHRA, and Europe’s Notified Bodies, will also be responsible for ensuring these new requirements are adhered to — increasing timelines for approval.

Finally, keeping up to date with the International Medical Device Regulators Forum’s (IMDRF) publications can help build your understanding of standards associated with AI medical devices and ensure you’re following globally established guidance as best practice.

at last, a reminder

We started this blog with an important statement — that the adoption of artificial intelligence (AI) has been rapid and widespread. While this is undeniable, in our experience, much of that adoption has been relegated to insipid marketing claims in a technology-push approach that has lost sight of the true utility of the technology in specific applications. Until now, there has been no downside for companies freely attaching an “AI-enabled” tag to their device packaging and pitch presentations. Now more than ever, we encourage founders to focus on finding the best solution to a significant problem — which may or may not include AI — rather than finding what they can do with AI.

Nadin


marta gaia zanchi
nina capital

health∩tech. recognizing the need = primary condition for innovation. founder, managing partner @ninacapital