
Add CMAES optimizer from nevergrad and refactor existing code #591


Open · wants to merge 23 commits into main
Conversation

gauravmanmode (Collaborator) commented Apr 23, 2025

Updated PR Description

This PR aims to

  1. Add CMAES optimizer from the nevergrad library
  2. Refactor the existing nevergrad PSO optimizer code with the aid of the internal helper function _nevergrad_internal.

Hi @janosg
I am wrapping the CMA-ES optimizer from nevergrad
Will be adding tests and docs shortly.
Drawing on the discussion in existing PRs and issues, some things I have experimented with are:

  1. Refactored the code (using a helper function, _nevergrad_internal, to simplify it)
  2. Tried using a custom executor that calls problem.batch_fun internally. For time-consuming objective functions, benchmarking with the custom executor showed a benefit from parallelism (benchmark screenshot omitted), whereas with lightweight objective functions, n_cores = 1 seemed preferable.

Is this in the right direction?
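For context, the executor protocol nevergrad accepts only requires a `submit()` method that returns a future-like object. A minimal sketch (illustrative only, not optimagic's actual implementation; a real batching variant would buffer submissions and evaluate them together via problem.batch_fun):

```python
from concurrent.futures import Future

class ImmediateExecutor:
    """Minimal sketch of a custom executor: satisfies the
    submit()/result() protocol that nevergrad's minimize() accepts.
    Evaluates eagerly; a batching version would collect candidates
    and evaluate them in one problem.batch_fun call."""

    def submit(self, fn, *args, **kwargs):
        # Evaluate immediately and wrap the result in a completed
        # Future, matching the concurrent.futures interface.
        future = Future()
        future.set_result(fn(*args, **kwargs))
        return future
```

For example, `ImmediateExecutor().submit(lambda x: x * x, 3).result()` returns 9.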

janosg (Member) commented Apr 28, 2025

Hi @gauravmanmode, thanks for the PR.

I definitely like the idea of your nevergrad_internal function. We currently have several independent nevergrad PRs open and a function like this is good to avoid code duplication.

Regarding the Executor: There was an argument brought forward by @r3kste that suggests it would be better to use the low-level ask-and-tell interface if we want to support parallelism. While I still think the solution with the custom Executor can be made to work, I think that the ask-and-tell interface is simpler and more readable for this.
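The batched ask-and-tell pattern can be sketched with a toy optimizer (illustrative only; nevergrad's real optimizers expose the same `ask()`/`tell()` interface shape, but `ToyAskTell` and `sphere` here are stand-ins, not library code):

```python
import random

class ToyAskTell:
    """Toy random-search optimizer exposing a nevergrad-style
    ask()/tell() interface (illustrative stand-in)."""

    def __init__(self, x0, sigma=0.5):
        self.best_x = list(x0)
        self.best_loss = float("inf")
        self.sigma = sigma

    def ask(self):
        # Propose a Gaussian perturbation of the current best point.
        return [v + random.gauss(0.0, self.sigma) for v in self.best_x]

    def tell(self, candidate, loss):
        # Record the candidate if it improves on the best loss so far.
        if loss < self.best_loss:
            self.best_x, self.best_loss = candidate, loss

def sphere(x):
    return sum(v * v for v in x)

random.seed(0)
opt = ToyAskTell(x0=[2.0, -3.0])
n_cores = 4
for _ in range(100):
    # Ask for one batch of candidates, evaluate the batch together
    # (this is where problem.batch_fun would parallelize), then tell.
    batch = [opt.ask() for _ in range(n_cores)]
    losses = [sphere(c) for c in batch]
    for candidate, loss in zip(batch, losses):
        opt.tell(candidate, loss)
```

The batch size naturally maps to n_cores, which is why this loop reads more simply than routing parallelism through a custom executor.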

janosg (Member) commented Apr 28, 2025

Currently your tests fail because nevergrad is not compatible with numpy 2.0 and higher. You can pin numpy in the environment file for now.

janosg (Member) commented Apr 28, 2025

Or better: Install nevergrad via pip instead of conda. The conda version is outdated. Then you don't need to pin any numpy versions.
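In a conda environment file this amounts to moving nevergrad into the pip section (a sketch; the exact file contents in the repository may differ):

```yaml
# environment.yml sketch (illustrative): installing nevergrad through
# pip avoids the outdated conda package, so numpy need not be pinned.
dependencies:
  - python
  - numpy
  - pip
  - pip:
      - nevergrad
```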

codecov bot commented Apr 30, 2025

Codecov Report

Attention: Patch coverage is 95.41284% with 5 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/optimagic/optimizers/nevergrad_optimizers.py | 95.77% | 3 Missing ⚠️ |
| src/optimagic/config.py | 60.00% | 2 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| src/optimagic/algorithms.py | 86.10% <100.00%> (+0.15%) ⬆️ |
| src/optimagic/config.py | 68.83% <60.00%> (-0.62%) ⬇️ |
| src/optimagic/optimizers/nevergrad_optimizers.py | 96.93% <95.77%> (-0.74%) ⬇️ |

gauravmanmode (Collaborator, Author) commented May 5, 2025

Hi @janosg,
Installing nevergrad with pip solved the failing tests.

Here is the list of parameter names I have referred to

nevergrad_cmaes

| Old Name | Proposed Name | Source optimizer |
|---|---|---|
| tolx | xtol | scipy |
| tolfun | ftol | scipy |
| budget | stopping_maxfun | scipy |
| CMA_rankmu | learning_rate_rank_mu_update | pygmo_cmaes |
| CMA_rankone | learning_rate_rank_one_update | pygmo_cmaes |
| popsize | population_size | pygmo_cmaes |
| fcmaes | use_fast_implementation | needs review |
| diagonal | diagonal | needs review |
| elitist | elitist | needs review |
| seed | seed | |
| scale | scale | needs review |
| num_workers | n_cores | optimagic |
| high_speed | high_speed | needs review |

What kind of tests should I have for the internal helper function? Should I add tests for ftol and stopping_maxfun?
Also, in nevergrad, recommendation.loss returns None for some optimizers like CMA. Is this a nevergrad issue, or am I missing something? (screenshot omitted)
For reference, I have attached a notebook I used while exploring here

gauravmanmode (Collaborator, Author) commented

Hi @janosg,
I am thinking of refactoring the code for the already added nevergrad_pso optimizer together with nevergrad_cmaes in this PR. Does this sound good?
Also, I would like your thoughts on this:

  1. Currently I am passing the optimizer object to the helper function _nevergrad_internal (screenshot omitted).
  2. Another approach is to pass the optimizer name as a string, as in pygmo (screenshots omitted).

Which would be a better choice?

janosg (Member) commented May 10, 2025

Hi @gauravmanmode, yes please go ahead and refactor the code for pso as well.

I would stick to approach one, i.e. passing the configured optimizer object to the internal function. It is more in line with the design philosophy shown here.

janosg (Member) commented May 10, 2025

> Installing nevergrad with pip solved the failing tests. Here is the list of parameter names I have referred to […] What kind of tests should I have for the internal helper function? Should I have tests for ftol, stopping_maxfun? Also, in nevergrad, recommendation.loss returns None for some optimizers like CMA.

About the names:

  • xtol and ftol are convergence criteria, so the name would be convergence_xtol. Ideally you would also find out whether this is an absolute or relative tolerance and add the corresponding abbreviation (e.g. convergence_xtol_rel). You can find examples of the naming scheme here
  • The other names are good

I would mainly add a test for stopping_maxfun. Other convergence criteria are super hard to test.
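Such a test can be sketched by counting objective evaluations (a self-contained sketch with a toy optimizer standing in for the real wrapper; `minimize_with_budget` is hypothetical, not optimagic's API):

```python
def minimize_with_budget(fun, x0, stopping_maxfun):
    """Toy optimizer (illustrative) that respects an evaluation budget."""
    best = list(x0)
    best_val = fun(best)
    n_evals = 1
    while n_evals < stopping_maxfun:
        trial = [v * 0.5 for v in best]  # toy contraction step
        val = fun(trial)
        n_evals += 1
        if val < best_val:
            best, best_val = trial, val
    return best

def test_stopping_maxfun():
    # Count objective evaluations and assert the budget is respected.
    calls = {"n": 0}

    def fun(x):
        calls["n"] += 1
        return sum(v * v for v in x)

    minimize_with_budget(fun, [1.0, 2.0], stopping_maxfun=20)
    assert calls["n"] <= 20

test_stopping_maxfun()
```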

If you cannot get a loss out of nevergrad for some optimizers, you can evaluate problem.fun at the solution for now and create an issue with a minimal example at nevergrad to get feedback. I wouldn't frame it as a bug report (unless you are absolutely sure) but rather as a question about whether you are using the library correctly.
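The suggested workaround amounts to a small fallback (names here are illustrative, not optimagic's real signatures):

```python
def extract_solution(recommendation_value, recommendation_loss, fun):
    """Sketch of the workaround: when the optimizer reports no loss
    for its recommendation, re-evaluate the objective (problem.fun
    in optimagic) at the recommended point."""
    if recommendation_loss is None:
        recommendation_loss = fun(recommendation_value)
    return recommendation_value, recommendation_loss
```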

@gauravmanmode gauravmanmode changed the title Add CMAES optimizer from nevergrad Add CMAES optimizer from nevergrad and refactor existing code May 22, 2025
gauravmanmode (Collaborator, Author) commented Jun 3, 2025

Update

Refactored the code and improved type annotations.
I am unable to get the optimal loss for the CMAES optimizer. Here is a related issue, and I have also created an issue at nevergrad here.
In keeping with the styleguide, I have added the algorithm info to the docstrings themselves.
These are the additional options:

"nevergrad_cmaes"

| Old Name | Proposed Name |
|---|---|
| scale | scale |
| elitist | elitist |
| popsize | population_size |
| diagonal | diagonal |
| high_speed | high_speed |
| fcmaes | fast_cmaes |
| random_init | random_init |
| AdaptSigma | step_size_adaptive |
| CMA_active | negative_update |
| CMA_cmean | learning_rate_mean_update |
| CMA_const_trace | normalize_cov_trace |
| CMA_diagonal | diag_covariance_iters |
| CMA_diagonal_decoding | learning_rate_diagonal_update |
| CMA_mirrormethod | mirror_sampling_strategy |
| CMA_mu | num_parents |
| CMA_on | learning_rate_cov_mat_update |
| CMA_rankmu | learning_rate_rank_mu_update |
| CMA_rankone | learning_rate_rank_one_update |
| CMA_dampsvec_fade | step_size_damping_rate |
| CSA_dampfac | step_size_damping_factor |
| CSA_squared | step_size_update_squared |
| CSA_invariant_path | CSA_invariant_path |
| eval_final_mean | eval_final_mean |
| maxfevals | stopping_maxfun |
| maxiter | stopping_maxiter |
| timeout | stopping_timeout |
| tolconditioncov | stopping_cov_mat_cond |
| tolfun | convergence_ftol_abs |
| tolfunrel | convergence_ftol_rel |
| tolstagnation | convergence_iter_noimprove |
| tolx | convergence_xtol_abs |
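A renaming like this is typically implemented as a simple translation table applied before the underlying optimizer is constructed. A sketch under the assumption that the mapping follows the table above (only a few entries shown; `NAME_MAP` and `to_cma_options` are hypothetical names, not optimagic code):

```python
# Hypothetical sketch: translate proposed optimagic-style option names
# back to the underlying cma/nevergrad names.
NAME_MAP = {
    "convergence_ftol_abs": "tolfun",
    "convergence_ftol_rel": "tolfunrel",
    "convergence_xtol_abs": "tolx",
    "stopping_maxfun": "maxfevals",
    "stopping_maxiter": "maxiter",
    "population_size": "popsize",
}

def to_cma_options(options):
    # Names without an entry (e.g. seed) pass through unchanged.
    return {NAME_MAP.get(k, k): v for k, v in options.items()}
```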

janosg (Member) commented Jun 12, 2025

Hi @gauravmanmode, thanks for the update. For all names that start with CSA_ or CMA_, I would stick to the original. Your names are a bit more descriptive, but I would still need a longer description to understand what they do and what values they can take. So we can stick to the originals to make it easier to switch from nevergrad, and put the rest into the documentation.

janosg (Member) left a review comment
Thanks @gauravmanmode, this looks very good already. I just made a few minor comments.
