Is it possible to “generate AI by AI”?

Toma Tanaka
5 min read · Mar 10, 2024


It has been almost 70 years since humans began researching AI, and the field has advanced so much that today’s large language models and diffusion models can generate a wide variety of content with accuracy that rivals or even exceeds that of human creations.

This raises a question:

Is it possible to “generate AI by AI”?

Of course, we are talking about weak AI here, but it would represent a paradigm shift in AI research if AI could generate algorithms whose accuracy exceeds that of algorithms designed and engineered by humans.

Japanese version of this article

In this article, we introduce our pioneering work on “generating AI by AI”, titled Inductive-bias Learning: Generating Code Models with Large Language Models.

First, let us review related research. The first approach that comes to mind is “AutoML”. This technology automates the process of developing machine learning models by automatically performing data preprocessing, feature selection, model selection, hyperparameter tuning, and other tasks, enabling the creation of high-performance machine learning models without data science expertise. AutoML can indeed create highly accurate models automatically, but it is not “generating AI by AI”: it is a human-engineered system that searches within predefined patterns, rather than an AI creating new prediction logic on its own.

Reference: https://venturebeat.com/ai/adopting-automl-lets-do-a-reality-check/
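To make the contrast concrete, here is a minimal sketch of the kind of automation AutoML performs, written with scikit-learn’s Pipeline and GridSearchCV (illustrative only; real AutoML systems automate far more of the search):

# A minimal sketch of AutoML-style automation: preprocessing, a model,
# and hyperparameter tuning combined into one automated search.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing, model choice, and tuning bundled into a single pipeline.
pipeline = Pipeline([("scale", StandardScaler()),
                     ("clf", RandomForestClassifier(random_state=0))])
search = GridSearchCV(pipeline,
                      {"clf__n_estimators": [50, 100],
                       "clf__max_depth": [3, None]},
                      cv=3)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))

Note that everything in this sketch, from the preprocessing steps to the hyperparameter grid, is fixed in advance by a human; the system only searches within it.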

“Generating AI by AI” is also closely related to Meta Learning, in which an AI learns how to learn. MAML, a seminal Meta Learning study, proposes a method for learning a parameter initialization that enables Few-shot Learning. Previous Meta Learning studies have focused on adapting quickly to new tasks. The LLM-based method presented here, by contrast, generates predictive models from a given dataset without any additional training, so the LLM may be said to be truly “learning how to learn”.

Reference: https://arxiv.org/abs/1703.03400
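For readers unfamiliar with MAML, here is a minimal first-order sketch (FOMAML) of its inner/outer update on toy linear-regression tasks. It illustrates the update structure only; full MAML differentiates through the inner step, and the task setup here is our own toy assumption:

# First-order MAML sketch: adapt to each task with one gradient step,
# then move the meta-parameters using the gradient at the adapted point.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)          # meta-parameters [slope, intercept]
alpha, beta = 0.1, 0.01      # inner / outer learning rates

def grad(theta, X, y):
    # Gradient of mean squared error for predictions X @ theta
    return 2 * X.T @ (X @ theta - y) / len(y)

for step in range(2000):
    # Sample a task: a random line y = a*x + b
    a, b = rng.uniform(-2, 2, size=2)
    x = rng.uniform(-1, 1, size=20)
    X = np.stack([x, np.ones_like(x)], axis=1)
    y = a * x + b
    # Inner adaptation on the support half, meta-update on the query half
    theta_task = theta - alpha * grad(theta, X[:10], y[:10])
    theta = theta - beta * grad(theta_task, X[10:], y[10:])

print(theta)  # an initialization from which one gradient step adapts well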

Our approach differs from the “AutoML” and “Meta Learning” approaches described above in that it introduces a method for truly “generating AI by AI”.

We focused on the ability of Large Language Models (LLMs) to grasp the logical structure of text, acquired by training on large amounts of text data. This ability is known as In-context Learning (ICL): given a prompt like the one below, an LLM can understand the pattern of input-output relationships in the context and predict the corresponding value for an unseen input.

In-context Learning
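As a concrete illustration of this behavior (a toy example of our own, not the prompt in the figure above), an ICL prompt might look like the following:

# An illustrative ICL-style prompt: input-output pairs in context,
# followed by a query for the model to complete.
prompt = """Input: 2, 3 -> Output: 5
Input: 10, 4 -> Output: 14
Input: 7, 8 -> Output: 15
Input: 6, 9 -> Output:"""
# Given such a prompt, an LLM typically infers the pattern (addition)
# and completes the last line with 15.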

In our method, we input pairs of inputs and outputs into the prompt, just as in ICL. The difference is that the output is designed to be Python code that predicts labels from features (here, features x1, x2 and a label y). We call this approach Inductive-bias Learning (IBL), and the output code is referred to as the Code Model. Despite this simple approach, IBL is capable of generating highly accurate prediction models.

Inductive-bias Learning
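To make this concrete, here is a hypothetical sketch of what an IBL-style prompt could look like; the exact prompts we used are available in the IBLM repository linked at the end of this article:

# Hypothetical IBL-style prompt: training pairs go into the context, and
# the LLM is asked to return Python code instead of a single prediction.
rows = [(0.5, 1.2, 1), (-0.3, 0.8, 0), (1.1, -0.4, 1)]  # (x1, x2, y) examples

examples = "\n".join(f"x1={x1}, x2={x2} -> y={y}" for x1, x2, y in rows)
prompt = (
    "Below are pairs of features (x1, x2) and labels y.\n"
    f"{examples}\n"
    "Write a Python function predict(x1, x2) that returns the probability "
    "that y = 1, capturing the pattern in the data. Output only the code."
)
# The LLM's response (the Code Model) is then executed as an ordinary
# Python function to make predictions on unseen data.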

The name “Inductive-bias Learning (IBL)” is an homage to ICL, and it derives from the behavior of the prediction models generated by IBL, which act as though they are determining the inductive bias itself.

Below, we present the results of applying our method to actual data. The data used in the experiment was synthetic data generated with scikit-learn, and the test was set up so that the LLMs could not draw on any prior knowledge related to the data. This means that the LLMs had to recognize the patterns in the given data and generate the prediction logic on their own.
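As a sketch of this setup, the data could be generated with scikit-learn’s make_classification (the exact settings used in our experiments may differ):

# Sketch of a synthetic binary-classification dataset of the kind used here.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)
# X_train/y_train go into the prompt; X_test/y_test evaluate the Code Model.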

The first LLM we used was OpenAI’s gpt-4-0125-preview. Below is the Code Model generated by this method, together with the model’s prediction accuracy.

AUC: 0.988, Accuracy: 0.903
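To give a feel for what a Code Model looks like, here is a hypothetical example of the kind of Python function IBL produces (not the actual gpt-4 output; the real generated models are available in the repository linked below):

# Hypothetical Code Model: plain Python mapping features to a probability.
import math

def predict(x1, x2):
    # Illustrative logic: a weighted combination of the features passed
    # through a sigmoid to yield the probability that y = 1.
    z = 1.8 * x1 - 0.9 * x2 + 0.2
    return 1.0 / (1.0 + math.exp(-z))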

We also tested the recently released Anthropic claude-3-opus-20240229. The Code Model it output contains very detailed logic and was found to achieve high prediction accuracy.

AUC: 0.991, Accuracy: 0.978

The following are the results of evaluating typical machine learning models on the same dataset used above. These models achieve high prediction accuracy because the task prepared for this study was a simple one. Even so, it is very interesting that LLMs can generate highly accurate prediction models from the data alone, without any training.

Comparison with typical machine learning models
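As a sketch of this comparison, a baseline can be trained on the same split and scored with the same metrics (logistic regression here is an assumption; the models actually compared may differ):

# Baseline comparison sketch: a standard model scored with AUC and accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score

X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, proba), 3))
print("Accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 3))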

Currently, IBL can only be applied successfully to small datasets, and its accuracy degrades on large ones. In the future, we plan to improve IBL so that it can be applied to a wider variety of datasets. For example, we are considering using statistical machine learning techniques such as bagging and boosting to improve accuracy, along the lines of the sketch below.
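As a hypothetical sketch of the bagging direction, several Code Models could be generated on bootstrap resamples of the training data and their predictions averaged (generate_code_model below is a placeholder for an LLM call, not a real function):

# Bagging sketch over IBL: average the outputs of several Code Models.
import numpy as np

def bag_predict(code_models, x1, x2):
    # Average the probability returned by each generated predict() function.
    return float(np.mean([m(x1, x2) for m in code_models]))

# Usage sketch: each model would come from one IBL prompt over a bootstrap
# resample of the training data.
# code_models = [generate_code_model(resample(train_data)) for _ in range(10)]
# p = bag_predict(code_models, 0.4, -1.2)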

We believe that research on “generating AI by AI” will enable us to generate models that surpass the AI (machine learning algorithms) researched and developed by humans, and that this work will serve as a pioneering study in that direction.

All these results can be verified at the following GitHub repository.

https://github.com/fuyu-quant/IBLM

If you are interested in this research, please contact us.

ulti4939@gmail.com
