Limitations of current AI rapid development and what is beyond
Artificial Intelligence (AI) has emerged as a transformative force across multiple industries, owing largely to the near-exponential growth of its capabilities in recent months. This explosion of functionality has sparked debate about the limitations of current generative AI development, and about what could eventually bend the exponential growth of AI into an S-shaped curve. Let me share the perspective of an investor, former technology executive and AI scientist.
I see the following bottlenecks to explosive AI development:
1. AI Security
AI is both a core technology for building robust cybersecurity defences and a vector for novel threats. Attackers are leveraging all forms of AI to automate, scale and personalise complex cybersecurity attacks. I see four categories of AI security, in terms of how we need to protect people against AI weaponisation and AI vulnerabilities:
• Malware Creation and Campaign Automation: Generative AI enables simplified malware development, automated discovery of new software vulnerabilities, and advanced obfuscation techniques. Tools like AI copilots facilitate these attacks, pushing cybersecurity response teams to adopt radical automation.
• Human-Level Cyber Attacks: AI amplifies the effectiveness of targeted scams, including personalised phishing attempts, social engineering, and real-time deepfake impersonations. Such advancements challenge traditional defences, requiring innovations in behavioural analytics and machine learning-based threat detection.
• Adversarial Attacks on AI Systems: These attacks exploit vulnerabilities within AI itself, such as poisoning training datasets, introducing adversarial samples, or injecting malicious prompts into large language models (LLMs). Vulnerabilities in the supply chain of AI models further exacerbate risks.
• Exploitation of AI-Generated Systems: AI-generated systems, such as code generated by tools like GitHub Copilot, often contain security flaws (e.g., SQL injection vulnerabilities, improper input validation). The hallucination of incorrect responses by AI systems adds to the complexity of addressing these issues.
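To make the last category concrete, here is a minimal, illustrative sketch of the SQL injection pattern frequently found in generated code, next to the parameterised fix. The table schema and function names are invented for the example, not taken from any real codebase:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern often seen in generated code: string interpolation
    # lets an attacker inject SQL, e.g. username = "x' OR '1'='1"
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver treats the value as a literal,
    # closing the injection hole
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo on an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 rows leak: injection succeeded
print(len(find_user_safe(conn, payload)))    # 0 rows: payload matched nothing
```

Static analysis can flag the unsafe variant, but hallucinated code is produced at a volume that makes manual review impractical, which is exactly the scaling problem described above.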
AI-powered threats represent a paradigm shift in the cybersecurity landscape, paralleling the transformative impact of the introduction of Windows 95, an operating system that was poorly secured at the time, to the general market.
The explosion of AI security challenges can, in effect, slow down AI deployment in critical applications and divert resources from developing new AI capabilities to fighting AI threats.
2. AI Safety
Fear of superintelligence (or AGI, Artificial General Intelligence), an advanced form of AI that surpasses human intelligence in all domains, can indeed act as a limitation on further AI development. Even the current level of AI capability has a profound negative effect on human agency, intellectual fitness and decision-making autonomy: shorter attention spans driven by social media algorithms, narrower choices online shaped by recommendation algorithms, and a reduced ability to think critically and accept other people's opinions due to hyper-personalisation. A prevalent fear of AI can lead to public demand for restrictive policies that limit AI development. This could slow down innovation and the deployment of beneficial AI applications.
The potential for superintelligent AI raises ethical questions about control, accountability, and the implications of creating entities that could operate beyond human oversight. These concerns necessitate careful consideration in AI development, potentially leading to slower progress as developers navigate complex moral landscapes. There is a legitimate concern about the unintended consequences of deploying advanced AI systems. Fears about job displacement, privacy violations, and misuse (such as deepfakes or autonomous weapons) can lead to hesitancy in advancing AI technologies without comprehensive safeguards in place.
Companies and researchers may adopt a more risk-averse stance in response to fears surrounding superintelligence. This could limit investment in ambitious projects that push the boundaries of what AI can achieve, thereby slowing overall technological progress. If fears surrounding AI lead to negative public sentiment or restrictive policies, talented researchers may choose to pursue careers in less contentious fields, further hindering innovation within the AI sector.
As concerns about the risks associated with advanced AI grow, regulatory bodies may prioritize safety measures over the exploration of new capabilities. This could result in a cautious approach that limits the scope of AI research and applications.
3. Economic Challenges: Balancing Costs and ROI
The rising operational costs of AI systems are becoming a limiting factor in their scalability and further development. These costs include:
(i) Research and Development: the human capital required to design, train, and maintain models, including the research and innovation work that drives technology progress;
(ii) Hardware Expenses: GPUs and other computational resources are significant cost drivers;
and (iii) Energy Costs: data centres hosting AI models account for an increasing share of global energy consumption. Presently, data centres' energy demand represents 4% of all U.S. energy consumption, and it is expected to grow to 9% within the next 5 years. AI will represent 30–40% of net new energy demand over that period. Future energy costs limit the exponential growth of generative AI and the transformer architecture.
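A quick back-of-the-envelope calculation using the figures cited above (4% of U.S. energy today, 9% in five years, AI at 30–40% of net new demand) shows how steep the implied growth is. The assumption that total U.S. consumption stays roughly flat is a simplification made purely for illustration:

```python
# Figures cited in the text
current_share = 0.04   # data centres' share of U.S. energy today
future_share = 0.09    # projected share in five years
years = 5
ai_low, ai_high = 0.30, 0.40  # AI's share of net new demand

# Implied compound annual growth of the data-centre share,
# assuming total U.S. consumption stays roughly flat
cagr = (future_share / current_share) ** (1 / years) - 1
print(f"Implied CAGR of data-centre energy share: {cagr:.1%}")

# Of the 5-point increase, the AI-attributable slice of total U.S. energy
delta = future_share - current_share
print(f"AI-attributable increase: {delta * ai_low:.1%} to {delta * ai_high:.1%} "
      f"of total U.S. energy")
```

The implied growth rate of the data-centre share is roughly 17–18% per year, a pace that makes the energy bottleneck hard to dismiss.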
While rising energy and hardware costs will call for more innovation, they will also affect ROI expectations for model deployment. The AI industry attracted $142.3 billion in investments in 2023, reflecting high expectations of return on investment in the respective AI use cases. The uneven expected success of AI applications creates a risk of unmet expectations, potentially dampening future investment. Successful AI use cases remain areas of strong growth and sustained investor interest.
4. The Data Bottleneck
Foundation models are trained primarily on multimodal data available on the internet. Important bodies of public data, such as Wikipedia, books, magazines and news, as well as video and audio, have already been used to train models. AI companies are therefore looking for new modalities and types of training data. The industry is now exploring alternative data streams, including:
• Simulation and Synthetic Data: Autonomous driving systems and fraud detection platforms have long used synthetic datasets to improve system precision. We are now seeing code generation companies use execution logs from generated code as a new data source for further fine-tuning of their models.
• IoT Data: We expect huge demand for data from the physical world and from human behaviour in it: the movement of people, cars and goods; evolving health patterns; building occupancy and store traffic; and long-term changes in the physical world related to climate change and weather patterns.
• Privacy-Preserving Models: There is growing concern about the privacy implications of the data used for training and fine-tuning models. Very promising AI applications, such as healthcare or personalised education, are privacy-restricted, and the privacy concerns around data sharing in such domains represent a major bottleneck. We see growing interest in technologies that would allow training and inference on fully encrypted data, such as confidential computing.
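As a minimal sketch of the synthetic-data approach, the generator below produces labelled transactions for a hypothetical fraud detector. All field names, value ranges and the fraud rate are invented assumptions for illustration; real synthetic-data pipelines fit these distributions to production data:

```python
import random

def synth_transactions(n, fraud_rate=0.02, seed=0):
    """Generate synthetic labelled transactions for training a fraud detector.

    All field ranges here are illustrative assumptions, not real distributions.
    """
    rng = random.Random(seed)  # fixed seed makes the dataset reproducible
    rows = []
    for i in range(n):
        is_fraud = rng.random() < fraud_rate
        # Toy signal: fraudulent transactions are large and happen at night
        amount = rng.uniform(500, 5000) if is_fraud else rng.uniform(1, 300)
        rows.append({
            "tx_id": i,
            "amount": round(amount, 2),
            "hour": rng.randint(0, 4) if is_fraud else rng.randint(6, 22),
            "label": int(is_fraud),
        })
    return rows

data = synth_transactions(10_000)
print(sum(r["label"] for r in data))  # roughly 2% of 10,000 are fraud
```

The appeal is that labels come for free and rare events can be oversampled at will; the risk, equally well known, is that a model trained on such data learns the generator's assumptions rather than the real world.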
The scarcity of novel training data, coupled with privacy concerns, poses a barrier to the development of advanced AI models.
5. Beyond Transformer Architecture
While transformer-based architectures (e.g. GPT, the Generative Pre-trained Transformer) have proved very powerful and are credited with the last 5 years of AI success, they are now known for several key limitations: high computational and memory cost, high carbon footprint, long training times, lack of interpretability, difficulty with sequential processing, compositionality challenges, limited robustness, and hallucinations.
As AI progresses, there is a growing emphasis on systems that move beyond language processing to reasoning, planning, and specialised problem-solving. Innovations include:
- Reasoning Algorithms: We expect further research and development of AI systems that algorithmically solve complex problems which cannot be easily approximated by an LLM. Besides training new action models, we will see the adoption of new planning algorithms, chain-of-reasoning techniques, and specialised solvers that augment the functionality of foundation models.
- Small and specialised models: We expect further progress in small language models: models that are well distilled and fine-tuned to deliver highly specialised, high-precision capability. While generic and powerful models can augment a wide range of human activity, for many automation tasks working with small, fast, inexpensive and energy-efficient models will be the preferred approach.
- Energy-efficient models: We expect substantial progress in energy-efficient solutions for pre-training, fine-tuning, inference and maintenance of models, aiming to reduce computational demands while maintaining superb performance. Despite the major AI companies' efforts to achieve energy independence by building new energy sources, we are convinced that energy-focused AI innovation will complement these efforts.
- AI Agents and Multi-Agent Systems: We see an opportunity for a competing paradigm to the concept of massive, hugely powerful frontier models. We expect energy-efficient, specialised small models to operate as AI agents: autonomous computational entities, orchestrated dynamically, that encapsulate various software tools and data sources into complex multi-agent systems. Advances in agent routing, orchestration, and payment mechanisms for agents are central to this evolution. The multi-agent approach will also bring new AI security challenges.
- Quantum AI: Quantum computing's unprecedented parallelisation capabilities promise transformative advances in training AI models and building AI solvers for complex inference and problem solving. Just as AI made a groundbreaking leap on one of biology's biggest challenges, predicting the 3D structure of proteins from their amino acid sequences, we expect Quantum AI to be at the centre of solving other big scientific problems of our society. Quantum AI will also play a key role in cybersecurity by helping us develop quantum-resistant cryptography.
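The agent-routing idea above can be sketched in a few lines. This is a toy illustration, not any real framework's API: plain Python functions stand in for specialised small models and solvers, and a dictionary stands in for a learned router:

```python
import ast
import operator

def math_agent(task: str) -> str:
    # Stand-in for a specialised solver: safely evaluates basic arithmetic
    # by walking the AST instead of calling eval()
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return str(ev(ast.parse(task, mode="eval").body))

def translation_agent(task: str) -> str:
    # Stand-in for a small fine-tuned translation model
    return f"[translated] {task}"

ROUTES = {"calc": math_agent, "translate": translation_agent}

def route(task_type: str, payload: str) -> str:
    # The orchestrator dispatches by declared task type; a real system would
    # use a learned router and support agent-to-agent hand-offs and payments
    agent = ROUTES.get(task_type)
    if agent is None:
        raise KeyError(f"no agent for task type {task_type!r}")
    return agent(payload)

print(route("calc", "2 * (3 + 4)"))   # "14"
print(route("translate", "dobrý den"))
```

Even this toy version shows the security surface the text warns about: every agent boundary is a place where a malicious payload or a compromised agent can enter the system.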
6. Top 8 Projections for AI in 2035
Despite the limitations of the current approach to AI, we remain technology optimists: we believe that AI will become the biggest invention of our times and will change our society and the way we live for the better. Below are 8 predictions for 2035:
1. Workforce automation: We expect that within 10 years, workforce automation will complete the transition from AI augmentation to AI replacement. Most current jobs that can be automated will be automated by 2035, and most hazardous and dangerous work will be taken over by AI and robots. People will no longer compete with AI for the economic value created through work; instead, they will focus on long-term control of AI, AI innovation, and creating new AI businesses. The AI revolution will create a new class of jobs for people.
2. Human knowledge and problem solving will be ubiquitous and democratised: Interactive access to most of human knowledge will be available for free via AI infrastructure, with training costs well amortised and minimal inference costs covered by not-for-profits.
3. AI will be a driving force in security, safety and defence: Most of the security and safety of our civilisation will be delivered through ever-evolving and improving AI systems. Technology progress will focus on system diversity, recovery scenarios, game theory, and maximising security resilience as a function of a given budget.
4. Agentic Systems: With the massive development of decentralised AI and multi-agent systems, AI-enabled wealth creation will go beyond the current AI oligopoly. Applications will be built on top of AI agents integrating small, cost-efficient models, solvers, tools and data sources. This will be made possible by further convergence of AI and Web3 technologies.
5. Human-Centric Systems: Tools and systems will be human-centred and human-friendly, and will maximise efficiency in AI-human interaction. Humans will be surrounded by highly personalised AI systems, with personalisation running exclusively client-side and under the full control of users. There will be massive progress in AI transparency and explainability, and a new trust layer on the Internet that removes digital threats and unwanted or harmful content.
6. AI-enabled healthcare and well-being: AI-based diagnostics, disease treatment, drug discovery and predictive medicine, together with massive automation, will substantially reduce costs and increase the effectiveness of healthcare, making it accessible to many times more people. AI-augmented psychology and psychiatry will eliminate the pandemic of mental disease.
7. AI-based scientific discovery: Following the 2024 Nobel Prize in Chemistry awarded to Demis Hassabis and the DeepMind team, AI-enabled automation of scientific discovery across most disciplines will accelerate technology progress beyond our current knowledge. In effect, AI will help solve the most pressing problems of humanity, such as energy abundance, climate safety, healthcare and longevity, and space exploration.
8. Software development: The vast majority of software will be AI-generated, more than 50% of legacy code will be AI-refactored, and the majority of code will be AI-maintained. Most programmers will become no-code software creators, system architects, and AI safety and security engineers.
Conclusion
AI is a transformative force with immense potential to address global challenges. However, its development must be guided by a thorough understanding of its limitations and risks. By prioritizing ethical considerations, robust safeguards, and sustainable practices, society can harness AI’s power to create a safer, more equitable future.
Written by: Michal Pěchouček