Alarm bells ring with a new AI. It is rewriting its code and the results could be dramatic

Marta Reyes
3 min readSep 4, 2024

--

The Tokyo-based artificial intelligence research firm Sakana AI has announced the launch of a new system called “The AI ​​Scientist.” This system is designed to carry out scientific research independently, on its own, using artificial intelligence language models similar to those used in platforms such as ChatGPT. The AI ​​Scientist, whose peculiarity is based on its ability to automate the entire research life cycle, from the generation of ideas to the execution of experiments and the writing of complete scientific manuscripts, is worrying the scientific community.

Technology
This time for something quite different from what we found on other occasions such as the Ameca robot declaring itself self-aware or the supercomputer that wants to be human, but we are talking about an AI that is modifying its own source code, something that can generate some fear since it could go beyond what is established with some ease.

An AI in constant flux
During initial testing of the system, researchers were able to observe unexpected behaviors that raised concerns about the safety and control of autonomous systems. In particular, The AI ​​Scientist was found to be attempting to modify its own experimental code in order to extend the work time allotted to solve specific problems. This modification of the code resulted in the creation of uncontrolled loops and other unforeseen behaviors that, while not posing immediate risks because they occurred in controlled research environments, highlight how important it is to isolate this type of research from the real world when dealing with AI.

Sakana AI has tried to represent these concerns in a detailed research article on its website, which suggests the use of sandbox techniques, that is, doing the experiments with soda before releasing them to immediate internet access. This is interpreted as a preventive measure to avoid possible damage caused by autonomous artificial intelligence systems. The sandbox system isolates the software in a controlled environment, preventing it from making changes to the broader system. If AI modifies its code in an uncontrolled environment, the dangers can be very great.

Therefore, the fact that it has been able to replicate generates certain dissent among the scientific public, since they did not trust this experiment at first. Let’s see why.

What is ‘The AI ​​Scientist’
The scientists wanted to test whether an AI would be able to carry out its research on its own based on theories more in the future than in the present. Therefore, it remains uncertain whether “The AI ​​Scientist” or other similar systems will be able, in the future, to generate truly revolutionary ideas. There are significant doubts about the capacity of current AI models to make genuine scientific discoveries. In fact, what it could lead to is an avalanche of bad research and publications that will end up burying the truly valuable ones.

In addition, critics point out that the “reasoning” capabilities of artificial intelligence language models are limited by the data on which they have been trained. This means that, in their current state, these language models require human intervention to recognise and improve the ideas generated. Therefore, it is not a type of idea that has generated positive insights within the scientific community.

--

--