Announcing Optuna 3.2

Toshihiko Yanase
Published in Optuna
May 30, 2023

We are pleased to announce the release of Optuna 3.2. In version 3.2, we have included two new features aimed at broadening Optuna’s applicability. In addition, numerous improvements have been made to the sampling algorithms and visualization features. We will walk you through these updates.

Summary:

  • Highlight 1: Human-in-the-loop optimization
  • Highlight 2: Automatic optimization terminator (Optuna Terminator)
  • New sampling algorithms
    - NSGA-III for many-objective optimization
    - BI-population CMA-ES: a new restart strategy
  • New visualization functions
    - Timeline plot for trial life cycle
    - Rank plot to understand input-output relationship
  • Optuna Dashboard
    - New Optuna Dashboard UI
    - Markdown note
    - Official documentation of Optuna Dashboard
  • Other improvements
    - Isolating integration modules as an independent package
    - Starting support for Mac & Windows

Highlight 1: Human-in-the-loop optimization

Recent advances in Generative AI have attracted an increasing number of people to tasks such as image generation, natural language processing, and speech synthesis (*1, 2). In such tasks, results are often hard to evaluate mechanically, and human evaluation becomes crucial. Until now, managing such tasks with Optuna has been challenging.

With the latest release, however, we have added support for human-in-the-loop optimization, which enables an interactive optimization process between users and the optimization algorithm. This opens up new opportunities for applying Optuna to tuning Generative AI.

Overview of human-in-the-loop optimization. Generated images and sounds are displayed on Optuna Dashboard, and users can directly evaluate them there.

The key to human-in-the-loop optimization lies in the integration with Optuna Dashboard. Images or sounds generated by Generative AI can be saved as Artifacts and then displayed on the dashboard. Furthermore, widgets allow users to enter evaluations via buttons or sliders. For further details, see the tutorial.
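
The snippet below is a minimal sketch (not the full tutorial code) of how a study could collect human ratings through a slider widget on Optuna Dashboard. The study name, storage URL, suggested parameter, and rating scale are placeholder choices, and the artifact-upload step is only indicated as a comment.

import optuna
from optuna_dashboard import ObjectiveSliderWidget, register_objective_form_widgets

# Use persistent storage so that Optuna Dashboard can read and update the study.
study = optuna.create_study(
    study_name="human-in-the-loop",
    storage="sqlite:///db.sqlite3",
    load_if_exists=True,
)

# Show a 1-5 slider on the dashboard so that users can rate each trial.
register_objective_form_widgets(
    study,
    widgets=[ObjectiveSliderWidget(min=1, max=5, step=1, description="How good is this output?")],
)

# Prepare trials to be evaluated by humans; parameters are suggested via study.ask().
for _ in range(5):
    trial = study.ask({"strength": optuna.distributions.FloatDistribution(0.0, 1.0)})
    # ... generate an image or sound using trial.params["strength"], save it as an
    # artifact, and let users evaluate it on the dashboard ...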

*1 ChatGPT: https://openai.com/blog/chatgpt
*2 Stable Diffusion: https://arxiv.org/abs/2112.10752

Highlight 2: Automatic optimization terminator (Optuna Terminator)

Optuna Terminator is a new feature that quantitatively estimates the remaining room for optimization and automatically stops the optimization process. It is designed to relieve users of the burden of figuring out an appropriate number of trials (n_trials) and to avoid wasting computational resources by running the optimization loop indefinitely.

Transition of estimated room for improvement. It steadily decreases towards the level of cross-validation errors.

Here is an example of the code. There are three key points:

  1. In the objective function, the cross-validation error is reported to the Terminator.
  2. The TerminatorCallback compares the estimated room for optimization (i.e., the estimated regret bound) with the reported cross-validation error and automatically stops the optimization if necessary.
  3. With plot_terminator_improvement, users can visualize the relationship between the potential for optimization and the variance of the evaluation values.
import optuna
from optuna.terminator import report_cross_validation_scores
from optuna.terminator import TerminatorCallback
from optuna.visualization import plot_terminator_improvement

from sklearn.model_selection import KFold, cross_val_score


def objective(trial):
    ...  # Set up a classifier `clf` and training data `X`, `y` from the suggested hyperparameters.

    # 1. Calculate the cross-validation score and report it to Optuna Terminator.
    scores = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True))
    report_cross_validation_scores(trial, scores)

    return scores.mean()


# cross_val_score returns accuracy for classifiers, so we maximize it.
study = optuna.create_study(direction="maximize")

# 2. Add TerminatorCallback to stop the optimization loop automatically.
study.optimize(objective, n_trials=50, callbacks=[TerminatorCallback()])

# 3. Visualize the cross-validation error and the estimated room for improvement.
plot_terminator_improvement(study, plot_error=True).show()

Please note that this feature is implemented with the algorithm presented at the AutoML Conference 2022 (*3) and uses a probabilistic model to estimate the room for optimization, so it will not always work perfectly. If you find that the termination decision comes too early, Optuna's Study allows you to resume the optimization (see the FAQ page). We would appreciate your feedback via GitHub issues or discussions if you encounter any problems while using it.

*3 Anastasiia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cédric Archambeau, “Automatic termination for hyperparameter optimization”, In Proc. AutoML Conference 2022, 2022

New sampling algorithms

NSGA-III for many-objective optimization

We’ve introduced NSGAIIISampler, a new multi-objective optimization sampler. It implements NSGA-III (*4), an extended variant of NSGA-II designed to optimize efficiently even when the number of objectives is large (especially four or more). NSGA-II has an issue in which the search becomes biased towards specific regions in such many-objective settings, whereas NSGA-III is designed to distribute the points more uniformly. This feature was introduced by #4436. For more detailed information about the algorithm and more practical uses, please look forward to our feature blog, which will be published soon.

Objective value space for multi-objective optimization (minimization problem). Red points represent Pareto solutions found by NSGA-II. Blue points represent those found by NSGA-III. NSGA-II shows a tendency for points to concentrate towards each axis (corresponding to the ends of the Pareto Front). On the other hand, NSGA-III displays a wider distribution across the Pareto Front.
study = optuna.create_study(
    directions=["maximize", "maximize"],
    sampler=optuna.samplers.NSGAIIISampler(),
)

*4 Kalyanmoy Deb and Himanshu Jain, “An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints”, IEEE Transactions on Evolutionary Computation, 2014, link

BI-population CMA-ES

Continuing from v3.1, significant improvements have been made to the CMA-ES sampler. As a new feature, we’ve added the BI-population CMA-ES algorithm (*5), a restart strategy that mitigates the problem of getting stuck in local optima. Whether the IPOP CMA-ES, which we have provided so far, or the new BI-population CMA-ES works better depends on the problem. If you’re struggling with local optima, please try BI-population CMA-ES as well. For more details, please see #4464.

An overview of the optimization of a benchmark function (Himmelblau function) with BI-population CMA-ES. When the optimization is determined to have converged, the optimization restarts with a different population size and initial value.
# To enable BI-population CMA-ES, specify "bipop" for 
# the restart_strategy argument of the CmaEsSampler.
sampler = optuna.samplers.CmaEsSampler(restart_strategy="bipop")
study = optuna.create_study(sampler=sampler)

*5 Nikolaus Hansen. Benchmarking a BI-Population CMA-ES on the BBOB-2009 Function Testbed. ACM-GECCO Genetic and Evolutionary Computation Conference, Jul 2009, Montreal, Canada.

New visualization functions

Timeline plot for trial life cycle

The timeline plot visualizes the progress (status, start and end times) of each trial. In this plot, the horizontal axis represents time, and trials are plotted in the vertical direction. Each trial is represented as a horizontal bar, drawn from the start to the end of the trial. With this plot, you can quickly get an understanding of the overall progress of the optimization experiment, such as whether parallel optimization is progressing properly or if there are any trials taking an unusually long time.

An example of the timeline plot. The horizontal axis represents time and trials are plotted vertically. Each horizontal bar corresponds to the duration of a trial. In this example, we can see that optimization is being performed in four parallel processes.

Similar to other plot functions, all you need to do is pass the study object to plot_timeline. For more details, please refer to the reference. Also, we have an upcoming feature blog post that will explain the details and practical use-cases of this function. Stay tuned!

study = optuna.create_study()
study.optimize(objective, n_jobs=4, n_trials=50)
optuna.visualization.plot_timeline(study).show()

Rank plot to understand input-output relationship

A new visualization function, plot_rank, has been introduced. This plot provides valuable insights into the landscape of the objective function, i.e., the relationship between parameters and objective values. In this plot, the vertical and horizontal axes represent parameter values, and each point represents a single trial. The points are colored according to their ranks.

This plot is similar to a contour plot, and which one is easier to read depends on the situation, so when analyzing the landscape of the objective function, be sure to try plot_rank as well. For instance, outliers (trials with exceptionally large objective values) can drastically expand the range of objective values and collapse the contours around valid trials. plot_rank, on the other hand, is robust against outliers because it ignores the objective values themselves and focuses only on their order.

An example of a Rank Plot. It represents the relationship between four parameters and rank of trials. Because this is a maximization problem, good points are represented by red colors.

Similar to other plot functions, all you need to do is pass the study object to plot_rank. For more details, please refer to the reference. In addition, stay tuned for an upcoming feature blog post that will explain the details of this feature and more practical use-cases.

study = optuna.create_study()
study.optimize(objective, n_trials=500)
optuna.visualization.plot_rank(study).show()

Optuna Dashboard

New Optuna Dashboard UI

Optuna Dashboard, a web dashboard that allows you to interact with studies and visualize optimization results in graphs and tables, has been updated to v0.10.0. This release comes with a significantly enhanced UI, and various new features and improvements have been added. Please try it out!

$ pip install optuna-dashboard
$ optuna-dashboard sqlite:///db.sqlite3

GitHub: https://github.com/optuna/optuna-dashboard

Markdown note

Optuna Dashboard now supports notes in Markdown format. You can leave notes like the following, which allow you to embed formulas, code blocks, and images.

import optuna
from optuna_dashboard import save_note


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10, 10)

    save_note(trial, f"""
## What is this feature for?

Here you can freely take a note in *(GitHub flavored) Markdown format*.
In addition, **code blocks with syntax highlights** and **formulas** are also supported here, as shown below.

```python
def hello():
    print("Hello World")
```

$$
L = \\frac{{1}}{{2}} \\rho v^2 S C_L
$$
""")
    return (x - 5) ** 2

Official documentation of Optuna Dashboard

The official documentation for Optuna Dashboard has been published. It includes not only tutorials for human-in-the-loop optimization but also API references and advanced usage. Please check it out.

https://optuna-dashboard.readthedocs.io/en/latest/

Other improvements

Isolating the integration module as an independent package

We have separated Optuna’s integration module into a different package called optuna-integration. As of v3.1, the number of supported libraries had increased to 20, and maintaining them within the Optuna package was becoming costly (e.g., CI failures due to updates in external libraries). By separating the integration module, we aim to improve the development speed of both Optuna itself and its integration module.

Considering the breaking changes and the impact on users, the separation of the integration module will be done in stages. As of the v3.2 release, we have migrated six integrations: allennlp, catalyst, chainer, keras, skorch, and tensorflow (except for the TensorBoard integration). To use these integration modules, pip install optuna-integration is necessary. For further details, please refer to the optuna-integration reference.
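
As a rough illustration, using one of the migrated integrations might look like the following. This is a sketch under the assumption that the migrated modules remain importable through optuna.integration once optuna-integration is installed; KerasPruningCallback is used here only as an example.

$ pip install optuna-integration

# Assumption: the Keras integration, now provided by the optuna-integration package,
# is still imported through optuna.integration as before.
from optuna.integration import KerasPruningCallback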

Contributions towards the separation work, as well as feedback on usage, are highly appreciated. The tracking issue is #4484.

Starting support for Mac & Windows

We have started supporting Optuna on Mac and Windows. While many features already worked in previous versions, we have fixed issues that arose in certain modules, such as Storage (#4457, #4458).

Until now, we have verified functionality through CI on Linux. Moving forward, we will also verify through CI on Mac and Windows. However, passing these checks is not a mandatory requirement for merging pull requests, so CI may sometimes fail. Contributions from Mac and Windows users are very welcome.

What’s Ahead

In v3.2, we introduced several experimental features, including two highlights. These have room for improvement in both functionality and interface, and we plan to continuously enhance them in the future.

For instance, in human-in-the-loop optimization, most of the development has taken place in Optuna Dashboard, but functions such as Artifacts could also be useful outside human-in-the-loop scenarios and might be worth implementing in Optuna itself. In addition, optimization algorithms specialized for human-in-the-loop optimization have not yet been implemented in Optuna, which remains a challenge for the future.

For Optuna Terminator, we could consider improving the precision of the estimated room for improvement.

We would be delighted if you would share use cases or bug reports about Optuna v3.2 on blogs, social media and GitHub.

Contributors

@Alnusjaponica, @HideakiImamura, @Ilevk, @Jendker, @Kaushik-Iyer, @amylase, @c-bata, @contramundum53, @cross32768, @eukaryo, @g-votte, @gen740, @gituser789, @harupy, @himkt, @hvy, @jrbourbeau, @keisuke-umezawa, @keisukefukuda, @knshnb, @kstoneriv3, @li-li-github, @nomuramasahir0, @not522, @nzw0301, @toshihikoyanase, @tungbq

Next Step

Check out the release notes for more information. To get the latest news from Optuna, follow us on Twitter. For feedback, please file an issue or create a pull request on GitHub.
