Results can be consulted at https://benchopt.github.io/results/benchmark_bilevel.html
BenchOpt is a package that makes comparisons of optimization algorithms simpler, more transparent, and more reproducible. This benchmark is dedicated to solvers for bilevel optimization:
$$\min_{x} f(x, z^*(x)) \quad \text{with} \quad z^*(x) = \arg\min_z g(x, z),$$
where $f$ and $g$ are two functions of two variables, usually called the outer and inner objectives.
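For intuition, the hypergradient machinery underlying such solvers can be illustrated on a toy problem with a closed-form inner solution. This is a standalone sketch, not code from the benchmark:

```python
# Toy bilevel problem solved by hypergradient descent.
# Inner: g(x, z) = 0.5 * (z - x)^2  ->  z*(x) = x
# Outer: f(x, z) = 0.5 * (z - 1)^2  ->  value function 0.5 * (x - 1)^2, minimized at x = 1.

def inner_solution(x):
    # argmin_z g(x, z), known in closed form for this toy problem
    return x

def hypergradient(x):
    # Implicit function theorem: d/dx f(x, z*(x)) = grad_x f + (dz*/dx) * grad_z f.
    # Here grad_x f = 0, dz*/dx = 1 and grad_z f = z* - 1.
    z_star = inner_solution(x)
    return z_star - 1.0

x = 5.0
for _ in range(100):
    x -= 0.5 * hypergradient(x)  # plain gradient descent on the value function

print(round(x, 6))  # converges to 1.0
```

Practical solvers differ mainly in how they approximate `inner_solution` and the linear system hidden in the hypergradient when no closed form is available.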
This benchmark currently implements two bilevel optimization problems: regularization selection and hyper data cleaning.
In this problem, the inner function $g$ is defined by
$$g(x, z) = \frac{1}{n} \sum_{i=1}^{n} \ell(d_i; z) + \mathcal{R}(x, z)$$
where the $d_i$ are training data samples, $z$ are the parameters of the machine learning model, $\ell$ measures the loss of the model $z$ on a sample, and $\mathcal{R}(x, z)$ is a regularization term whose strength is controlled by $x$.
The outer function $f$ is the unregularized loss on unseen data,
$$f(x, z) = \frac{1}{m} \sum_{j=1}^{m} \ell(d'_j; z)$$
where the $d'_j$ are fresh samples from the same dataset.
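The two objectives above can be sketched in NumPy for the binary logistic case. The shapes, names, and the choice $\mathcal{R}(x, z) = \frac{e^x}{2}\|z\|^2$ below are illustrative assumptions, not the benchmark's actual code:

```python
import numpy as np

def logistic_loss(A, y, z):
    # Mean logistic loss for labels y in {-1, +1} and features A.
    return np.mean(np.log1p(np.exp(-y * (A @ z))))

def inner_obj(x, z, A_train, y_train):
    # g(x, z): regularized training loss; exp(x) keeps the strength positive.
    return logistic_loss(A_train, y_train, z) + 0.5 * np.exp(x) * z @ z

def outer_obj(z, A_val, y_val):
    # f(x, z): unregularized loss on held-out data (does not depend on x directly).
    return logistic_loss(A_val, y_val, z)
```

Parametrizing the regularization strength as $e^x$ is a common trick to turn a positivity-constrained hyperparameter into an unconstrained one.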
There are currently two datasets for this regularization selection problem.
Homepage : https://archive.ics.uci.edu/dataset/31/covertype
This is a logistic regression problem, where the data are pairs $d_i = (a_i, y_i)$ of features $a_i$ and binary labels $y_i \in \{-1, 1\}$.
Homepage : https://www.openml.org/search?type=data&sort=runs&id=1575&status=active
This is a multiclass logistic regression problem, where the data are pairs $d_i = (a_i, y_i)$ of features $a_i$ and class labels $y_i$.
This problem was first introduced by [Fra2017]. The data is the MNIST dataset, whose training set has been corrupted: with some probability, the label of each training sample is replaced by another random label. The goal is to learn, together with the model parameters $z$, a weight $x_i$ per training sample so that corrupted samples are discarded. The inner function is defined by
$$g(x, z) = \frac{1}{n} \sum_{i=1}^{n} \sigma(x_i)\,\ell(d_i, z) + \frac{C}{2} \|z\|^2$$
where the $d_i$ are the corrupted training samples, $\sigma$ is the sigmoid function, and $C$ is a small regularization constant. The outer function is the loss on uncorrupted data,
$$f(x, z) = \frac{1}{m} \sum_{j=1}^{m} \ell(d'_j, z)$$
where the $d'_j$ are uncorrupted samples.
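The weighted inner objective above can be sketched as follows for multiclass logistic regression. Shapes and names are assumptions for illustration; this is not the benchmark's implementation:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def inner_obj(x, Z, A, Y, C=1e-3):
    # g(x, Z): per-sample cross-entropy reweighted by sigmoid(x_i), plus a ridge term.
    # A: (n, p) features, Y: (n,) integer labels, Z: (p, k) classifier weights,
    # x: (n,) one learnable weight per training sample.
    logits = A @ Z
    logits -= logits.max(axis=1, keepdims=True)  # for numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(Y)), Y]  # multiclass log loss per sample
    return np.mean(sigmoid(x) * per_sample) + 0.5 * C * np.sum(Z ** 2)
```

Driving $x_i$ toward $-\infty$ sends $\sigma(x_i)$ to 0, which is how the outer problem can effectively discard a corrupted sample.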
This benchmark can be run using the following commands:
$ pip install -U benchopt
$ git clone https://github.com/benchopt/benchmark_bilevel
$ benchopt run benchmark_bilevel
Apart from the problem, options can be passed to benchopt run to restrict the benchmark to some solvers or datasets, e.g.:
$ benchopt run benchmark_bilevel -s solver1 -d dataset2 --max-runs 10 --n-repetitions 10
You can also use config files to set up the benchmark run:
$ benchopt run benchmark_bilevel --config config/X.yml
where X.yml is a config file. See https://benchopt.github.io/index.html#run-a-benchmark for an example of a config file. Note that this may launch a large grid search. When available, you can instead use the file X_best_params.yml to launch an experiment with a single set of parameters per solver.
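As an illustration, a config file might look like the following. The solver and dataset names below are placeholders, not necessarily ones shipped with this benchmark; check the benchmark's solvers/ and datasets/ folders for the exact names:

```yaml
# Hypothetical benchopt config file (keys mirror the CLI options).
n-repetitions: 10
max-runs: 25
dataset:
  - dataset2
solver:
  - solver1
```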
Use benchopt run -h for more details about these options, or visit https://benchopt.github.io/api.html.
If you use this benchmark in your research project, please cite the following paper:
@inproceedings{saba,
title = {A Framework for Bilevel Optimization That Enables Stochastic and Global Variance Reduction Algorithms},
booktitle = {Advances in {{Neural Information Processing Systems}} ({{NeurIPS}})},
author = {Dagr{\'e}ou, Mathieu and Ablin, Pierre and Vaiter, Samuel and Moreau, Thomas},
year = {2022}
}
- [Fra2017] Franceschi, Luca, et al. "Forward and reverse gradient-based hyperparameter optimization." International Conference on Machine Learning (ICML). PMLR, 2017.