AI in Europe

Ensuring a Competitive Future for the EU Market

Nathan Summers
Luxembourg Tech School
9 min read · Mar 25, 2024


Written by Nathan Summers and Dr Sergio Coronado

AI can help propel Europe to a brighter future — but we must continue to be proactive — Generated using DALL·E 3 (Prompt: Anthropomorphic AI helping Europe to a prosperous future)

On Wednesday, March 13th, 2024, the European Parliament voted to approve the AI Act, after reaching a provisional consensus last December. This is a significant milestone in the global regulation of Artificial Intelligence: the EU AI Act is the first law of its kind anywhere in the world. Once the regulations are fully implemented, AI systems developed and deployed within Europe will be divided into four categories based on an assessment of their risk and will be subject to oversight accordingly (for more information on these risk categorizations and the corresponding legal obligations, please see our previous publication on the EU AI Act). The risk categories are as follows:

EU AI Act Risk Hierarchy (data from European Commission)
  1. Unacceptable Risk: These are systems deemed too dangerous to ever be permitted within the EU, such as systems for social scoring and real-time biometric identification in public spaces.
  2. High Risk: These systems will need to comply with strict obligations before being placed on the market. AI systems used in healthcare, education and vocational training, and critical infrastructure, among other areas, will be classified here.
  3. Limited Risk: These systems will require deployers to be transparent about AI usage. This will apply to chatbots, AI-generated text and images, etc.
  4. Minimal or No Risk: These systems require no further compliance. This category will encompass AI used in video games or spam filters, among others.

While it is undeniable that the AI Act will provide necessary protections for consumers, there is reasonable concern about how it will affect the future of AI investment and development within Europe. European investment in AI is already significantly lower than in the United States (roughly 7% of the US figure) or China (roughly 20% of the Chinese figure), and there is fear that these regulations could create additional hurdles that stifle innovation, hinder growth, and deter investment in European AI firms.

When it comes to investment in AI, the United States leads the way, with over $29 billion in private funding dedicated to the sector last year. China follows with almost $10 billion. The European Union (EU), however, lags far behind, with just over $2 billion.

Relative Investment into AI Research and Development by Region (data from above source)

Although it remains to be seen whether this will be the case, some of Europe’s largest AI firms are already eyeing the US for expansion and investment opportunities.

Mistral AI, one of Europe’s most influential AI companies, recently entered into a partnership with Microsoft with an apparent goal of expanding its operations into the US market. Mistral is one of Europe’s few AI “unicorns,” and where it leads, others are inclined to follow. Hungary’s OnCompass and Denmark’s Corti, both AI healthcare startups, have also recently expanded into the US. Given that both companies’ systems are set to be classified as “High Risk” within the EU, and given the level of AI investment in the US, these moves are unsurprising.

In order to maintain competitiveness in the ever-growing AI market, it is crucial for the EU to offer compelling incentives and funding for firms to stay within the region. The European Commission’s initiative to provide European AI firms with access to EU supercomputers is a strong start, but on its own it will likely not go far enough. The question remains: how best to ensure a competitive future for the EU AI market?

Expand Public Funding

Although Europe may lack the extensive venture capital infrastructure found in the US, it compensates with robust public funding initiatives. Horizon Europe stands out as a prime example, ensuring that European technology and innovation maintain their global competitiveness. Currently, public funding for AI research falls within this initiative, laying a strong foundation for advancement. However, this also implies that AI research will compete with other innovative endeavors for a share of the budget, presenting a potential challenge amidst competing priorities.

Horizon Europe’s Broad Innovation Domains (from Horizon Europe)

To address this challenge and to further incentivize AI firms to remain and develop in Europe, the EU should establish dedicated initiatives for the public funding of AI research and development. This will ensure that AI projects do not need to compete for funding within the Horizon Europe framework. By creating such a dedicated funding mechanism, the EU can provide tangible support to firms, promoting competitiveness and fostering innovation in the AI sector. This initiative will help make Europe an attractive destination for future AI research and development.

Foster Stakeholder Collaboration

Because of the far-reaching potential impact of AI, its stakeholders are uniquely interdisciplinary. Academics, entrepreneurs, regulators, industry, and the public are all invested in both the development and the success of AI technologies. These stakeholders hold a variety of positions on what AI should be used for, how it should be regulated, and how its benefits and risks should be balanced. Furthermore, the expertise required to develop safe, robust, and effective AI goes beyond that of computer scientists alone: to develop effective healthcare AI, for example, healthcare professionals must be involved.

The EU can bolster its AI market by investing in initiatives that foster interdisciplinary collaboration among relevant domain experts and AI developers, ensuring that AI development benefits from a wide range of perspectives and expertise. By organizing workshops, conferences, competitions, hackathons, and other events that encourage interdisciplinary dialogue, and by establishing collaborative platforms for knowledge sharing, the EU can foster the creation of AI technologies that are not only technologically advanced but also ethically robust, socially responsible, and aligned with societal needs. This emphasis on interdisciplinary collaboration strengthens the EU’s position in the global AI market by cultivating innovation, fostering public trust, and driving sustainable growth.

Generated using DALL·E 3 (Prompt: Depict the harmonious collaboration between academics, entrepreneurs, policymakers, and other interdisciplinary stakeholders)

Facilitate Data and Resource Availability

Key to the development of safe, robust, and effective AI is the availability of data. In line with the European Commission’s initiative to provide access to EU supercomputers, access to high quality data could provide a significant incentive to AI firms. This could be particularly true for startups and small companies, which may be unable to acquire such data themselves.

By generating, curating, and providing access to data, the EU would not only save AI firms the expense of sourcing this data themselves but also spare them much of the legal burden of ensuring GDPR compliance, as data the EU provides can reasonably be assumed to comply with its own laws. Such an initiative would thus minimize regulatory burdens while stimulating the AI startup sector within Europe.

Strengthen Talent Development

Beyond data and computational resources, an educated, talented, and experienced workforce is a prerequisite for a thriving AI research and development sector. By supporting initiatives such as AI education programs, research grants, and entrepreneurship schemes, the EU can cultivate a skilled workforce and provide European tech firms with the human resources to drive innovation and create value in the AI sector.

Facilitate Regulatory Compliance

Regulations must avoid needlessly burdening AI development — Generated using DALL·E 3 (Prompt: An artificial intelligence constrained by legality)

A pressing concern in adapting to these new regulations is the cost of compliance. The new requirements will inevitably incur time, resource, and financial expenses, all of which will present challenges for AI companies, especially those in their early stages.

One approach to mitigate this issue involves streamlining the regulatory process to minimize the impact of these costs. Simplifying compliance procedures and offering tax incentives for AI research and development could attract more investment into the European AI sector. Clear and predictable regulations can reduce uncertainty for AI firms, making it easier for them to navigate compliance requirements and operate within the region. By implementing such measures, the EU can create a more conducive environment for AI firms to thrive, ultimately incentivizing them to remain in Europe.

Institutional Responsibilities

Photo by Frederic Köberl on Unsplash

To some extent, early versions of these recommended initiatives already exist within the EU. However, their current iterations are not sufficient to ensure that the European AI market remains robust. The recommendations discussed herein seek to address this proactively, complementing the new legal framework by streamlining regulatory compliance, expanding public funding initiatives, fostering interdisciplinary collaboration, facilitating access to data and resources, and strengthening talent development.

By implementing these measures comprehensively and effectively, the EU can create an environment that not only safeguards consumer interests but also fosters innovation, bolsters competitiveness, and sustains growth in the European AI market. As the global landscape of AI regulation and development continues to evolve, the EU’s proactive approach in advancing these initiatives will be critical in shaping a future where AI technologies benefit society while maintaining ethical standards and promoting economic prosperity.

With the passage of the AI Act, the EU has reiterated its commitment to consumers. Now it must also reassure AI developers that Europe will remain home to a robust and competitive AI market.

Organizational Recommendations

Photo by Felicia Varzari on Unsplash

While there remains much that European Institutions should do to alleviate concerns surrounding the competitiveness of the European AI market, private organizations can take certain preemptive measures to continue the development of AI within the new context of the AI Act.

  1. Leverage Existing Public Initiatives: Private organizations based in the EU should actively seek out and leverage public initiatives supporting AI research and development, such as funding provided through Horizon Europe and access to EU supercomputers. By using these existing public structures, organizations will not only gain access to resources and funding to advance their AI initiatives, but also demonstrate to EU Institutions that these public initiatives are valuable and that their continued support is necessary for the success of the EU AI market.
  2. Forge Strategic Partnerships: Collaboration with other private organizations, research institutions, and relevant domain experts is essential for navigating the complex landscape of AI development and regulation. Private organizations should actively seek out strategic partnerships that enable knowledge sharing, technology transfer, and joint research initiatives. By pooling resources and expertise, organizations can reduce resource drain, accelerate innovation, overcome common challenges, and position themselves competitively in the global AI market.
  3. Advance Skill and Talent Development: By prioritizing the development of AI talent within Europe, private organizations can cultivate a skilled workforce equipped with the expertise and capabilities needed to drive innovation and maintain competitiveness in the AI market. This could include investing in educational programs, training initiatives, and skill-building workshops focused on AI technologies and applications. By nurturing a diverse and highly skilled talent pool, organizations not only strengthen their own capacity for innovation but also contribute to the growth and sustainability of the European AI ecosystem as a whole.
  4. Engage in Policy Advocacy: Private organizations should actively engage in advocacy efforts aimed at shaping AI policy and standards in the EU. This includes participating in industry associations, contributing to policy discussions, and providing feedback to policymakers on the potential impact of regulatory measures on AI innovation and competitiveness. By advocating for policies that support responsible AI development while fostering innovation and investment, organizations can help create a regulatory environment conducive to long-term growth and success for Europe within the global AI market.

By adopting these proactive measures, private organizations can continue to drive innovation and maintain competitiveness in the evolving landscape of the global AI market while upholding the ethical AI practices outlined in the EU AI Act.
