The State of GenAI for 2025: Observations and Predictions — Part 2: Industry, Safety & Politics
The year 2024 has been a landmark for generative AI research, as highlighted in the first part of our series. Beyond technological breakthroughs, it has also ushered in major shifts across industries, safety standards, and the political landscape. As AI adoption speeds up, conversations about regulations, ethical considerations, and market trends have become more intense.
In this second part of our article series, we will dive into the major developments shaping the AI industry, examine the evolving challenges around safety and misuse, and analyze the growing impact of AI on global politics and regulations.
Industry
Rapid Revenue Growth in GenAI vs. Traditional SaaS
According to recent analyses [1], the 100 highest-grossing AI companies are generating revenue at a far faster pace than their SaaS counterparts did in previous waves of software innovation. Indeed, Stripe company reported that from the first payment on their platform, GenAI companies took on average 11 months to reach a 1 million dollars annual revenue vs 15 months on average for previous tech companies, while reaching 30 million in annual revenue is 5 times faster for GenAI companies. This revenue acceleration reflects GenAI’s ability to capture a variety of high-demand markets quickly, spanning from public use cases to specialized professional applications. As GenAI products continue to mature, industry analysts expect revenue to keep ramping up, setting GenAI apart from traditional software models with its versatility and faster route to market adoption.
Revenue Growth Amid AI Bubble Concerns
As pointed out Nathan Benaich’s state of AI report [2], GenAI investments have strongly increased this year (see fig. 2). Mega rounds such as OpenAI and xAI raising $6B each dominate AI financings, despite a drift in M&A activities (it drifted 23% from its 2021 peak). Yet, for prominent model providers, questions around long-term profitability have given rise to concerns about an AI bubble. The most striking example is the most popular genAI company OpenAI, whose valuation rose at 157B$ in October 2024 even if revenues should represent $3.7 billion in 2024 with a 5B$ in losses. To balance that, OpenAI’s revenue is expected to jump around 11.6B$ next year making its business model more sustainable and its valuation more coherent
The Shift from Models to Products
Driven by the need for revenue and long-term viability, GenAI-focused companies have thus shifted from developing foundational models to creating practical, market-ready products, like ChatGPT Entreprise or Microsoft Copilot.
Some GenAI-first companies have already begun to generate significant revenues. As an example, ElevenLabs, which became the market leader in text-to-speech technology, has achieved unicorn status this year with a valuation of $1.1 billion and now sees its tools used by 62% of Fortune 500 companies.
Expansion of AI-Powered Applications
In 2024, GenAI began making significant inroads into professional sectors, notably in law, where AI-powered tools are now used for tasks like drafting documents, case management, discovery, and due diligence. Major U.S. law firms have started hiring in-house AI experts to integrate these tools and streamline operations.
For developers, AI coding assistants, with GitHub Copilot as the frontrunner, are changing the way they work, while Anthropic’s Claude and Vercel’s V0 launched new capabilities to code and execute requests directly in-browser, expanding the potential of AI agents in the field.
While law and software development are key examples, generative AI is rapidly expanding into many other industries. A study by IBM found that 74% of business leaders plan to implement GenAI in their organizations within the next three years [3]. Additionally, a Terra Nova report highlights its impact on hiring, career development, and workforce transformation [4]. As adoption grows, AI is set to play an increasingly central role across diverse professional sectors.
The perspective of on-device AI
As seen in our previous article, the performance of LLMs increased significantly, along with improvements in their performance-to-size trade-off, thanks to techniques such as quantization, hybrid SSM architectures, and small LLMs. Models like the Gemma family, SmolLM or the smallest versions of Llama 3 now enable direct on-device inference. This advancement unlocks new opportunities for confidentiality, a critical concern for industries handling sensitive data.
Moreover, the ability to perform on-device inference extends to smartphones, opening new possibilities for secure and private AI-powered applications directly on users’ personal devices. On that topic, Google released Gemini Nano, a lightened version of its well-known model, to let android application developers integrate its capacities to their products [5].
This is particularly impactful as it enables real-time, privacy-preserving AI functionalities without dependence on cloud infrastructure. Apple, recognizing its lag in this area, recently announced a partnership with OpenAI to address this gap [6], signaling the potential rise of a new major player in the sector. The partnership will allow Apple to integrate ChatGPT in its products UX and Siri’s resources.
Regulatory and Legal Challenges
Concerns are raising about copyright issues and potential monopolistic practices.
With AI-generated content on the rise, legal complaints are mounting from creators who point out the use of their work to train AI models. After facing complaints from the French newspaper Le Monde about OpenAI using its article content without authorization, the two companies established a partnership. This agreement granted OpenAI access to some Le Monde data in exchange for clear attribution whenever that content was utilized. However, OpenAI recently scored a significant victory in a copyright case, even though it might be a temporary one [7]. OpenAI won’t have to compensate its plaintiffs because the court ruled that they failed to demonstrate concrete harm caused by the alleged misuse of their copyrighted materials.
Furthermore, regulatory bodies (parliaments, courts…) are scrutinizing the partnerships and alliances among top GenAI companies for anticompetitive behavior, particularly in the U.S. which leads the global GenAI market.
Market Leadership and Hardware Dominance
Confirmation of Nvidia’s dominance in AI hardware: In 2024, Nvidia solidified its dominance in the AI hardware market, supplying high-performance GPUs, notably the H100, which remain essential for training and running large GenAI models. This market control still places Nvidia significantly ahead of competitors like Intel and AMD (see fig. 5), who continue to trail in revenue despite efforts to develop competitive AI-ready hardware. Nvidia’s chips are now at the core of most advanced GenAI applications, with major players such as Meta and xAI relying on thousands of Nvidia GPUs to drive their models training.
However, at the end of January 2025, Deepseek-R1 caused a sensation and weakened Nvidia’s position. Although this event significantly shook Nvidia’s stock value in the short term, its market value appears to be gradually recovering, reaffirming its dominance in the AI hardware market.
Actions to prevent over-dependence on Nvidia: Those firms are yet willing to be less dependent on Nvidia: Google unveils its new processor Axion [8a], and Meta considers using its in-house AI inference chip for GenAI training in the future [8b]. On top of that, newcomers like Cerebras and Groq are trying to challenge Nvidia’s monopoly with innovative products such as LPU, which are designed to optimize processing efficiency for AI workloads.
Safety
Models’ alignment
A wave of questions followed the release of Google’s Gemini 1.5 model, which exhibited significant biases [9]. For instance, the AI generated images of Black or Asian individuals when prompted to depict German soldiers from World War II or the Founding Fathers of the United States. Even if this behavior was fixed, this incident has thrown light on the possible failures of models, especially on very sensitive topics.
Models misuse, and its threat to democracy
Another concern is the use of deepfakes. 2024 has seen scandals involving defamation through degrading generated images of real individuals — ranging from celebrities to high school students. Deepfakes are also a strong threat to democracy because of its disinformation potential, as Indian elections recently witnessed [10] with various posts online of dead politicians. On top of that, with open-source models like Stable diffusion 3.5, proprietary ones like SORA or X’s recent release of Grok-2 usage inside its application, it is increasingly easy to generate high quality malicious contents.
Numerous disinformation campaigns on social media have relied on artificial intelligence. Russia, for instance, has created fake accounts and generated false images for political destabilization purposes [11]. Even if currently, there is still little evidence suggesting these practices have had a significant impact, this practice is watched closely.
Some initiatives have emerged to address these issues, such as an annual artificial intelligence summit organized by the United Kingdom and South Korea.
GenAI for hacking
The ability of models to generate malicious code is also concerning, as it makes cybercriminality more accessible. A malicious individual could, for example, prompt a chatbot to produce code for an SQL injection attack, or generate phishing emails quicker.
Additionally, the models tend to have inappropriate behavior if the user applies the right prompts. Even if this behavior and the associated exploit, known as jail breaking, is often fixed by model providers, it has remained a studied phenomenon in the field of LLM security.
As many LLM users send confidential information when using tools like ChatGPT [12], models are sometimes trained on proprietary data (found on the internet or users’ data). Then, when these models are used, the right prompt can reveal this confidential information. For instance, entering the beginning of Samsung documentation leading to the LLM answering with the full documentation. This phenomenon is known as data resurgence.
Politics
Emerging regulations
Initial regulations for AI are beginning to emerge. Beyond the European AI Act that is the strongest regulation until now, California issued a controversial AI safety bill, SB 1047. The bill intended to make it mandatory for AI model providers to implement a “kill switch” to be able to deactivate their systems if needed. However, it was vetoed by Governor Newsom, highlighting the difficulty of implementing relevant legislation for the field [13].
Geostrategic growing stakes
Generative AI is increasingly regarded as a geostrategic challenge, as several events tend to show it:
- The United States mandated the relocation of a TSMC factory to Arizona — TSMC being one of the world’s leading semiconductor manufacturers — and opposed a Saudi fund’s partial acquisition of the AI company Anthropic
- Although Chinese companies can still utilize U.S.-based data centers, U.S. policies have restricted Chinese access to Nvidia hardware.
- Retired U.S. army general Paul M. Nakasone was nominated at OpenAI’s board
- Anthropic and Palantir recently partnered so that the latter could use Claude models [14]
- Japan views GenAI sector as an opportunity to boost economic growth and has stated its intent to implement regulations favorable to AI practices [2].
Our Predictions For 2025
Industry
On-device GenAI will diversify unlocking new use cases
On-device GenAI is poised to deeply impact industrial applications by enabling advanced functionalities directly on user devices. As an example, the fashion sector has been leveraging GenAI for personalized shopping experiences with use cases like virtual try-ons and more innovative AI-driven recommendations [15].
These on-device GenAI will go even deeper in our life with the integration of GenAI in our day-to-day devices. Smartphones are obviously the first devices with such capabilities as we have seen with the Samsung Galaxy S24 series having all GenAI capabilities. This trend will grow as historical major players like Apple announced introducing GenAI [16] in their devices but also more recent ones like Realme, Oppo or Vivo have also commercialized their GenAI-enabled smartphones. Even if smartphones are the most striking example, other devices tend to integrate GenAI for a more personalized user experience. For instance, the Ray-Ban | Meta smart glasses, announced a few months ago, enable real-time language translation and improved voice control thanks to GenAI [17].
GenAI leaders will tend to build applications
As seen previously, OpenAI has raised a record $6.5 billion in funding in the last months, making investors very careful about revenues and potential benefits. To improve their balance sheet, OpenAI needs to monetize more deeply their innovations. Consequently, OpenAI will develop new business lines in parallel with developing cutting edge models. Concretely, this will be mainly new apps and products to differentiate themselves from other AI leaders, generate higher margins and create greater retention from customers. This is even more likely that open-source models such as Llama or Qwen are narrowing the gap in terms of performance. OpenAI is the most striking example but many other companies such as xAI or Anthropic are in a similar situation and are likely to do so.
Politics
As previously discussed, GenAI is facing increasing political and safety challenges. In 2025, these issues will become even more pronounced, with two opposing blocs emerging:
- The Pro-Regulation Bloc — Led by European policymakers and segments of the U.S. political landscape, this group advocates for stronger regulation and oversight of GenAI to ensure ethical development and minimize risks.
- The Deregulation Bloc — Comprising major tech companies and influential figures such as Donald Trump and Elon Musk, this faction argues that excessive regulation stifles innovation and economic growth.
The Growing Demand for AI Regulation
Public protests will intensify
Since ChatGPT’s release in late 2022, GenAI innovations have accelerated dramatically. In response, concerns about AI’s impact have fueled global protests. In May 2024, PauseAI, a movement founded in 2023 to halt AI advancements beyond GPT-4 until safety measures are in place, staged demonstrations worldwide ahead of the 2nd AI Safety Summit in Seoul [18].
Initially led by tech experts, the protest movement has now expanded. Artists, one of the most affected groups, are increasingly vocal, with over 10,000 creatives — including Kevin Bacon, Thom Yorke, and Julianne Moore — signing petitions against GenAI’s impact on their industry [19].
As AI continues to disrupt various fields, new groups — including journalists — are beginning to push back. For instance, The Guardian’s staff recently expressed outrage over AI being used to generate content during a strike. This signals that public resistance against AI will likely escalate in frequency and scale [20].
Increased AI Regulations in Europe
The AI Act, approved in August 2024, marked a turning point in AI governance. As GenAI becomes further embedded in daily life, European leaders are expected to push for even stricter oversight. This push for regulation will coincide with Donald Trump returning to the White House, backed by Elon Musk, creating a sharp contrast between regulatory approaches in the U.S. and Europe.
One of the key events driving AI governance discussions will be the Artificial Intelligence Action Summit in Paris (February 2025). This summit aims to establish concrete strategies for responsible AI development, bringing together world leaders, corporate CEOs, and NGO representatives [21].
The Push for Deregulation
Tech leaders and politicians advocating minimal AI regulations
Elon Musk has made his stance against AI regulation clear by making Grok, his AI chatbot, available to all X users. This move signals his resistance to government-imposed restrictions on AI technologies. Grok offers “unfiltered answers” and has recently introduced an image generation model named Aurora [22].
However, the “unfiltered” nature of Grok has raised concerns about the lack of regulation. Experts warn that the release of X’s AI software, Grok, was followed by a surge in online racism. Signify, an organization tracking online hate in sports, reported an increase in abusive content. Organizations like the Premier League and FA are actively addressing the issue and urging social media companies to combat online abuse. [23]
With Trump’s re-election, the U.S. is unlikely to impose strict AI regulations — if any at all. His administration is expected to take a hands-off approach, allowing AI companies to operate with greater freedom [24].
Meta has also shifted toward deregulation by ending content moderation on its platforms. The company discontinued its third-party fact-checking program and replaced it with a user-driven “Community Notes” system, similar to X’s approach. This reflects a broader move towards self-regulation and minimal oversight in AI-driven content moderation [25].
Conclusion
In 2024, the AI industry is undergoing rapid transformation. GenAI companies are achieving record-breaking revenue growth, shifting from foundational models to more product-driven strategies, and expanding into professional applications such as law and software development. Companies like OpenAI and Anthropic are accelerating their efforts to develop GenAI-based applications, leveraging their expertise in foundational models to justify the massive investments they have received.
Additionally, the push for on-device AI is unlocking new possibilities for privacy and efficiency, while emerging competitors like Cerebras and Groq seek to challenge Nvidia’s dominance. These trends in on-device AI are expected to diversify further in 2025, making GenAI even more integral to daily life.
However, these advancements come with risks. Concerns around AI safety, misuse, and potential monopolistic practices continue to grow. Issues such as deepfakes, data resurgence, and cyber threats highlight the urgent need for responsible AI deployment. Despite increasing investment and commercial success, companies must navigate legal scrutiny and ethical challenges.
As GenAI profoundly impacts the economic and industrial landscape while also raising safety and political concerns, governments are taking an increasingly active role in shaping its future, with a strong focus on security and geopolitical strategy. In this context, and with the election of Donald Trump, the political landscape in 2025 is expected to be highly polarized regarding AI regulation:
- Europe will likely strengthen AI regulations, particularly following the implementation of the AI Act.
- The U.S., under Trump’s administration, is expected to minimize AI oversight, favoring rapid innovation.
- Tech leaders such as Elon Musk and Meta are advocating for deregulation, prioritizing free-market dynamics over government intervention.
As AI continues to reshape industries, economies, and global politics, the coming years will determine how well innovation and regulation can coexist. The upcoming Paris AI Summit and similar initiatives signal a growing global effort to ensure AI develops in a secure, fair, and sustainable way.
General conclusion of both articles
The year 2024 has been a pivotal one for generative AI, marked by groundbreaking advancements in model performance, multimodal capabilities, and industry adoption. AI systems have become more efficient, adaptable, and widely applicable across diverse fields, from corporate applications to fundamental research. However, these rapid innovations also bring growing challenges related to safety, regulation, and geopolitical power struggles.
As AI continues to reshape industries and societies, governments are increasingly stepping in — not solely for ethical reasons, but to secure strategic advantages in an ever more competitive landscape. While AI regulation remains a contentious issue, the focus is shifting toward controlling AI infrastructure rather than restricting research itself.
Looking ahead to 2025, the rise of agentic models and the expansion of multimodal AI, particularly in robotics, will further redefine the boundaries of artificial intelligence. These advancements will necessitate new evaluation methods, regulatory frameworks, and ethical considerations. As we move forward, striking a balance between innovation, governance, and societal impact will be more critical than ever.
Credits: Théophile Loiseau , Louri Charpentier
References
[1] Murgia, M. (2024, September 27). AI start-ups generate money faster than past hyped tech companies. Financial Times. Retrieved from https://www.ft.com
[2] Benaich, N. (2024, October 10). State of AI report. Retrieved from https://www.stateof.ai/
[3] Kobielski, M. (2024b, May 30). Comment l’IA Générative redéfinit les entreprises de demain — IBM-France.
[4] Ce que l’IA générative fait au travail et à l’emploi | Terra Nova. (n.d.).
[5] Gemini 1.0 Nano. (2024b, December 17).
[6] OpenAI. (2024, June 10). OpenAI and Apple announce partnership to integrate ChatGPT into Apple experiences.
[7] Knibbs, K. (2024, November 13). OpenAI scored a legal win over progressive publishers — but the fight’s not finished | WIRED Middle East.
[8a] Vahdat, A. (2024, April 9). Introducing Google’s new Arm-based CPU.
[8b] (2024, April 10). Our next-generation Meta Training and Inference Accelerator.
[9] Anand, N. (2024, February 22). Google’s Gemini AI accused of acting too “woke”, company admits mistake.
[10] Landrin, S. (2024). India’s general election is being impacted by Deepfakes.
[11] Iyengar, R. (2024, November 4). Russia behind fake Haitian voter election videos, U.S. officials say. Foreign Policy. Retrieved from https://foreignpolicy.com
[12] Writer, R. L. C. (2023, December 8). Employees are feeding sensitive Biz data to ChatGPT, raising security fears.
[13] Lesnes, C. (2024b, September 30). California Governor Gavin Newsom vetoes AI safety bill. Le Monde.fr. Retrieved from https://www.lemonde.fr
[15] Team, B., & Company, M. &. (2024, July 2). The Year Ahead: How Gen AI is reshaping fashion’s creativity. The Business of Fashion. Retrieved from https://www.businessoffashion.com
[16] Apple. (2025, January 13). Introducing Apple Intelligence, the personal intelligence system that puts powerful generative models at the core of iPhone, iPad, and Mac. Apple Newsroom. Retrieved from https://www.apple.com
[17] Company, F., & Meta. (2024, September 25). Ray-Ban | Meta Glasses are getting new AI features and more partner integrations. Meta. Retrieved from https://about.fb.com
[18] Gordon, A. (2024, May 13). Why protesters around the world are demanding a pause on AI development. TIME. Retrieved from https://time.com
[19] Milmo, D. (2024, October 23). Thom Yorke and Julianne Moore join thousands of creatives in AI warning. The Guardian. Retrieved from https://www.theguardian.com
[20] Farber, A. (2025, January 17). Guardian staff ‘deeply disturbed’ over AI use during strike. The Times. Retrieved from https://www.thetimes.com
[21] Artificial Intelligence Action Summit. (n.d.).[21
[22] GROK Image Generation Release. (n.d.).
[23] Boyd, R. (2025, January 14). ‘Just the start’: X’s new AI software driving online racist abuse, experts warn. The Guardian. Retrieved from https://www.theguardian.com
[24] Transcript: Tech in 2025 — Trump and the tech bros. (2025, January 21). Financial Times. Retrieved from https://www.ft.com
[25] Lunden, I. (2025, January 7). Meta drops fact-checking, loosens its content moderation rules. TechCrunch. Retrieved from https://techcrunch.com

