How EU Proposals to Regulate AI Will Stifle Innovation

MIT’s Andrew McAfee advocates for permission-less innovation to encourage the flow of breakthrough ideas

Sep 10

By Andrew McAfee

While some of us under lockdown churned through streaming services and sourdough starters, others decided to use the time for a little self-improvement — taking up Dutch or Danish, Swahili or Esperanto.

Duolingo, the free app many downloaded, has become the world’s most popular way to learn a second language. The company is now hoping to ride that interest into an initial public offering: last week it said it wanted to be valued at up to $3.4 billion in its IPO.

But EU proposals for regulating AI threaten the use of one of Duolingo’s niftiest innovations, the English Test, in its current form. They also make it less likely that the next round of similar innovations will be developed in the bloc. That’s a problem.

The English Test provides a way for people to demonstrate their language proficiency to more than 3,000 educational institutions around the world. Test-takers don’t need to register in advance or travel anywhere; they just need an internet-connected device with a webcam and an hour to spare. The test guards against cheating (that’s what the webcam is for); assesses literacy, conversation, and comprehension; returns results in two days; and costs less than $50. It’s also a high-risk AI system, according to the EU proposal.

This label applies because the test uses AI, both for personalization — questions appropriate to the taker’s skill level are generated on the fly — and for grading. Systems that use AI for “assessing participants in tests commonly required for admission to educational institutions” are put in the high-risk category by the EU’s proposal. Under the proposed regulations, providers of high-risk AI systems must satisfy requirements spanning a long list of factors: data and data governance, transparency, oversight, robustness, accuracy, and security, among others. The requirements for high-risk AI systems and the obligations placed on their providers take up 10 pages in the proposal. And as far as I can tell, all of them would kick in before the systems can be offered in any form: before a first beta is released, and before the provider has any indication of product-market fit, or of whether there’s a market at all.

It’s a safe bet that AI-using entrepreneurs and early-stage investors — which these days means essentially all tech entrepreneurs and investors — will balk at these expensive and time-consuming requirements and direct their energies away from high-risk application areas.

The EU, then, will generate less tech innovation in, among other important activities, admitting students to schools and grading their exams, making hiring and promotion decisions, establishing creditworthiness, dispatching first responders, letting prisoners out on parole and using analytics to fight crime.

On the other hand, Europeans will know that all AI used in these areas has gone through an extensive vetting process. Your views about that trade-off indicate which of the two main schools of thought about tech regulation you belong to.

One school holds that precisely because some technologies are so powerful, they must be rolled out with a lot of upfront planning and continued oversight. This is the “upstream governance” approach, of which the EU’s proposed AI regulation is a clear example.

Advocates of the other approach, often called “permission-less innovation,” say the activities listed above need to be exposed to lots of attempts to improve them, even wacky ones that come from unknowns and outsiders on a shoestring budget (like Duolingo when it started). That’s the best way to find real breakthroughs.

Restricting the field of potential innovators to those who can afford high upfront costs is a bad idea. Slower progress and growth, and fewer hometown success stories, are risks too, and they aren’t just theoretical. The EU’s General Data Protection Regulation, which came into effect in May 2018, has been accompanied by increased market share for Google and reduced venture investment in Europe.

And the benefits to the EU of all the extra governance are not obvious; the EU’s own progress report, released in June 2020, found fragmented enforcement and insufficient resources.

The EU is a rich and well-educated region with great technological strengths. Yet, it is lagging as we move deeper into the “second machine age,” and by some measures, lagging badly. There are many reasons why. One of them, I believe, is that more upstream governance translates to less downstream innovation.

This blog first appeared on July 25 here and is re-posted with permission.

Andrew McAfee is a principal research scientist at MIT and cofounder of the MIT Initiative on the Digital Economy.

MIT Initiative on the Digital Economy

The IDE explores how people and businesses work, interact, and prosper in an era of profound digital transformation. We are leading the discussion on the digital economy.
