
# Introduction

We have been offering a beta version of OptunaHub, a new feature-sharing platform for Optuna, since July. In this article, we introduce a newly implemented optimization algorithm called Implicit Natural Gradient Optimization (INGO) [1], which performs optimization based on the natural gradient. INGO is closely related to CMA-ES (Covariance Matrix Adaptation Evolution Strategy), a powerful evolutionary algorithm. We demonstrate that the registered INGO outperforms CMA-ES in our experiment.

# INGO Algorithm Registered on OptunaHub

In this section, we will run the INGO-based sampler registered on OptunaHub. Yuhei Otomo helped us with this implementation. Please check here for more details.

The implementation of this sampler was verified by basic test cases and by checking that the benchmarking results are consistent with the claims made in the original paper [1]. Figure 1 shows that the registered INGO outperforms CMA-ES on the 100-dimensional L₁-Norm Ellipsoid function, as claimed in the paper. The horizontal axis shows the number of trials, and the vertical axis shows the objective function value (lower is better). The solid lines (red: INGO, blue: CMA-ES) show the minimum objective function value found up to each trial, and the light-colored bands are collections of dots representing the objective function values observed at each trial; they appear as bands because the dots are so dense. As in Figure 1 (b) of the paper, INGO improves steadily, whereas CMA-ES does not improve as effectively.

The code used for benchmarking is as follows:

```python
from argparse import ArgumentParser

import numpy as np
import optuna
import optunahub

parser = ArgumentParser()
parser.add_argument("--use-ingo", action="store_true")
args = parser.parse_args()


def objective(trial):
    dimension = 100
    X = np.abs([trial.suggest_float(f"x{d}", -10.0, 10.0) for d in range(dimension)])
    coefficients = [10 ** (6 * d / (dimension - 1)) for d in range(dimension)]
    return float(coefficients @ X)


if args.use_ingo:
    mod = optunahub.load_module("samplers/implicit_natural_gradient")
    sampler = mod.ImplicitNaturalGradientSampler(sigma0=0.5, seed=0)
else:
    sampler = optuna.samplers.CmaEsSampler(sigma0=0.5, seed=0)

study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=20000)
```

Suppose the code above is saved as `demo.py`; it can then be run with the commands below.

In addition to the INGO sampler, many other packages are already registered on OptunaHub, and a number of them can be used simply by installing `optunahub` via `pip`! (Of course, some packages require additional dependencies, but their requirements are usually documented on each package's page.)

```shell
# Installation of dependencies.
$ pip install optuna optunahub cmaes
# Run INGO.
$ python demo.py --use-ingo
# Run CMA-ES.
$ python demo.py
```

# Brief Description of the INGO Algorithm

Implicit Natural Gradient Optimization (INGO) is a black-box optimization method that does not require the explicit gradient of the objective function. Lyu and Tsang [1] showed how to apply the natural gradient method, well known in information geometry, to black-box optimization. INGO approximates the natural gradient based on a Fisher information matrix estimated from observations, so no explicit gradient with respect to the search space is needed.
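At a high level, natural-gradient methods update the parameters of a search distribution by preconditioning the ordinary gradient of the expected objective with the inverse Fisher information matrix. The following is a standard textbook formulation of this update (the notation here is ours, not taken from [1]):

```latex
% J(\theta) = \mathbb{E}_{x \sim p_\theta}[f(x)] is the expected objective
% under the search distribution p_\theta, F(\theta) is the Fisher
% information matrix of p_\theta, and \beta is a step size.
\theta_{t+1} = \theta_t - \beta \, F(\theta_t)^{-1} \nabla_\theta J(\theta_t)
```

Preconditioning by the inverse Fisher information makes the update invariant to how the distribution is parameterized, which is what distinguishes the natural gradient from the vanilla gradient.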

Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is widely used in practice and available in Optuna, is closely related to INGO. Both methods determine the search region by updating the parameters (mean vector and covariance matrix) of a multivariate normal distribution defined over the search space. While INGO updates these parameters via the natural gradient, the parameter update in CMA-ES can, under certain conditions, be interpreted as a Monte-Carlo approximation of the natural gradient [2]. Since INGO estimates the natural gradient in a mathematically less biased manner, it is expected to be more efficient. Indeed, the original paper reports benchmark results in continuous spaces that are competitive with those of CMA-ES [1].
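To make the Monte-Carlo natural-gradient idea concrete, here is a minimal, illustrative sketch (not the registered INGO sampler, and not CMA-ES) that updates only the mean of a fixed-variance 1-D Gaussian search distribution using a sampled natural-gradient estimate. The objective `f` and all hyperparameter values are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)


def f(x):
    # Toy 1-D objective to minimize (minimum at x = 3).
    return (x - 3.0) ** 2


# Search distribution: N(mu, sigma^2) with sigma held fixed for simplicity.
mu, sigma, lr, n_samples = 0.0, 1.0, 0.1, 64

for _ in range(300):
    x = rng.normal(mu, sigma, size=n_samples)
    fx = f(x)
    # The vanilla gradient of E[f] w.r.t. mu is E[f(x) (x - mu)] / sigma^2.
    # Multiplying by the inverse Fisher information for the mean of a
    # fixed-variance Gaussian (which is sigma^2) cancels the 1/sigma^2
    # factor, leaving this simple Monte-Carlo natural-gradient estimate.
    nat_grad_mu = np.mean(fx * (x - mu))
    mu -= lr * nat_grad_mu

print(f"final mu: {mu:.2f}")  # should end up near the optimum at 3.0
```

A full method like INGO or CMA-ES also adapts the (co)variance, which controls how the search region shrinks or expands; this sketch fixes it only to keep the natural-gradient step easy to see.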

# Conclusion


On OptunaHub, we welcome many contributions from the community, not only pull requests for new packages, but also code refactoring, bug reports, and suggestions for desired packages. If you’re interested, please check out the OptunaHub GitHub repository and help us make OptunaHub even better together with your contributions!

# References

[1] Yueming Lyu and Ivor Tsang (2020). Black-box Optimizer with Implicit Natural Gradient. arXiv preprint arXiv:1910.04301.

[2] Youhei Akimoto, Yuichi Nagata, Isao Ono, and Shigenobu Kobayashi (2010). Bidirectional Relation between CMA Evolution Strategies and Natural Evolution Strategies. In Proceedings of PPSN, pp. 154–163.


I obtained my Ph.D. in Bioinformatics from the University of Tokyo. I am presently working at Preferred Networks Inc.