Introduce the LTMADS solver #433

Open · wants to merge 38 commits into `master` from `kellertuer/LTMADS`
Changes from 27 commits
Commits (38)
8baa2f0
Initial sketch of (LT)MADS
kellertuer Sep 27, 2024
d0ee13f
Merge branch 'master' into kellertuer/LTMADS
kellertuer Jan 2, 2025
4c6d740
Design concrete search and poll structs further and document them.
kellertuer Jan 2, 2025
fb945c2
Add remaining todos.
kellertuer Jan 2, 2025
fa6004d
continue docs.
kellertuer Jan 3, 2025
435aea1
Implement most of the logic, just not yet the updates(vector transpor…
kellertuer Jan 3, 2025
88b9683
forgot to store poll_size.
kellertuer Jan 3, 2025
9d40813
Merge branch 'master' into kellertuer/LTMADS
kellertuer Jan 5, 2025
3c2b537
first MADS variant that includes all necessary functions.
kellertuer Jan 5, 2025
354c8ff
extend docs.
kellertuer Jan 5, 2025
7b8ad8d
Fix a few typos.
kellertuer Jan 6, 2025
939d7b0
Fix two typos.
kellertuer Jan 7, 2025
3a4e2ac
Fix typos add a first running, but failing test.
kellertuer Jan 8, 2025
9167d93
Stabilize I
kellertuer Jan 9, 2025
2b29824
Finally found the bug in scaling the mesh to be the culprit
kellertuer Jan 9, 2025
57ee145
Fix state print a bit.
kellertuer Jan 9, 2025
a7e9f8c
change poll and mesh size to be internal parameters.
kellertuer Jan 9, 2025
e430d73
unify naming and add docstrings to all new (small) functions
kellertuer Jan 9, 2025
5a59142
Fix docs.
kellertuer Jan 9, 2025
aff7900
work on code coverage.
kellertuer Jan 26, 2025
df0f042
Cover a final line.
kellertuer Jan 26, 2025
a8d47e4
improve typing and performance a little
mateuszbaran Jan 26, 2025
1d6454e
formatting
mateuszbaran Jan 26, 2025
83da62e
fix some typos, add some types
mateuszbaran Jan 27, 2025
0c6322b
A bit of work on typos.
kellertuer Feb 4, 2025
5e7f232
Update metadata.
kellertuer Feb 4, 2025
577dfb5
Rearrange the order of names.
kellertuer Feb 4, 2025
6ba7b9a
Update docs/src/references.bib
kellertuer Feb 4, 2025
58f3b1a
fix 2 more typos.
kellertuer Feb 4, 2025
20901fc
Bring vale to zero errors.
kellertuer Feb 4, 2025
8cd4880
Fix a few more typos.
kellertuer Feb 5, 2025
b81d0c8
Add input to docs.
kellertuer Feb 10, 2025
5063d8f
Merge branch 'master' into kellertuer/LTMADS
kellertuer Feb 10, 2025
4d87704
Fix dependency on ADTypes.
kellertuer Feb 11, 2025
e89895e
Fix the seed and remove an unnecessary case, which could also be reso…
kellertuer Feb 11, 2025
f69f660
add decorate and output section notes.
kellertuer Feb 11, 2025
94ea68b
complete the list of keyword arguments.
kellertuer Feb 12, 2025
22733d9
Add literature section to the docs page.
kellertuer Feb 12, 2025
16 changes: 11 additions & 5 deletions .vale.ini
@@ -7,12 +7,16 @@ Packages = Google
[formats]
# code blocks with Julia in Markdown do not yet work well
qmd = md
jl = md

[docs/src/*.md]
BasedOnStyles = Vale, Google

[docs/src/contributing.md]
BasedOnStyles =
BasedOnStyles = Vale, Google
Google.Will = false ; given the format we really do intend a _will_
Google.Headings = false ; some might really have [] in their headers
Google.FirstPerson = false ; we pose a few contribution points as first-person questions

[Changelog.md, CONTRIBUTING.md]
BasedOnStyles = Vale, Google
@@ -39,12 +43,14 @@ TokenIgnores = \$(.+)\$,\[.+?\]\(@(ref|id|cite).+?\),`.+`,``.*``,\s{4}.+\n
Google.Units = false # to ignore formats= for now.
TokenIgnores = \$(.+)\$,\[.+?\]\(@(ref|id|cite).+?\),`.+`,``.*``,\s{4}.+\n

[tutorials/*.md] ; actually .qmd for the first, second autogenerated
[tutorials/*.qmd] ; actually .qmd for the first, second autogenerated
BasedOnStyles = Vale, Google
; ignore (1) math (2) ref and cite keys (3) code in docs (4) math in docs (5,6) indented blocks
TokenIgnores = (\$+[^\n$]+\$+)
Google.We = false # For tutorials we want to address the user directly.

[docs/src/tutorials/*.md]
; ignore since they are derived files
BasedOnStyles =
[docs/src/tutorials/*.md] ; actually .qmd for the first, second autogenerated
BasedOnStyles = Vale, Google
; ignore (1) math (2) ref and cite keys (3) code in docs (4) math in docs (5,6) indented blocks
TokenIgnores = (\$+[^\n$]+\$+)
Google.We = false # For tutorials we want to address the user directly.
5 changes: 5 additions & 0 deletions .zenodo.json
@@ -25,6 +25,11 @@
"name": "Riemer, Tom-Christian",
"type": "ProjectMember"
},
{
"affiliation": "NTNU Trondheim",
"name": "Oddsen, Sander Engen",
"type": "ProjectMember"
},
{
"name": "Schilly, Harald",
"type": "Other"
10 changes: 8 additions & 2 deletions Changelog.md
@@ -5,7 +5,13 @@ All notable Changes to the Julia package `Manopt.jl` will be documented in this
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.5.5] Januaey 4, 2025
## [0.5.6] unreleased

### Added

* A mesh adaptive direct search algorithm (MADS), for now with the LTMADS variant using a lower triangular random matrix in the poll step.

## [0.5.5] January 4, 2025

### Added

@@ -122,7 +128,7 @@ In general we introduce a few factories, that avoid having to pass the manifold
* the previous `stabilize=true` is now set with `(project!)=embed_project!` in general,
and if the manifold is represented by points in the embedding, like the sphere, `(project!)=project!` suffices
* the new default is `(project!)=copyto!`, so by default no projection/stabilization is performed.
* the positional argument `p` (usually the last or the third to last if subsolvers existed) has been moved to a keyword argument `p=` in all State constructors
* the positional argument `p` (usually the last or the third to last if sub solvers existed) has been moved to a keyword argument `p=` in all State constructors
* in `NelderMeadState` the `population` moved from positional to keyword argument as well,
* the way to initialise sub solvers in the solver states has been unified In the new variant
* the `sub_problem` is always a positional argument; namely the last one
2 changes: 1 addition & 1 deletion Readme.md
@@ -36,7 +36,7 @@ In Julia you can get started by just typing
using Pkg; Pkg.add("Manopt");
```

and then checkout the [Get started: optimize!](https://manoptjl.org/stable/tutorials/Optimize/) tutorial.
and then check out the [🏔️ Get started with Manopt.jl](https://manoptjl.org/stable/tutorials/Optimize/) tutorial.

## Related packages

3 changes: 2 additions & 1 deletion docs/make.jl
@@ -35,7 +35,7 @@ tutorials_in_menu = !("--exclude-tutorials" ∈ ARGS)
# (a) setup the tutorials menu – check whether all files exist
tutorials_menu =
"How to..." => [
"🏔️ Get started: optimize." => "tutorials/Optimize.md",
"🏔️ Get started with Manopt.jl." => "tutorials/Optimize.md",
"Speedup using in-place computations" => "tutorials/InplaceGradient.md",
"Use automatic differentiation" => "tutorials/AutomaticDifferentiation.md",
"Define objectives in the embedding" => "tutorials/EmbeddingObjectives.md",
@@ -200,6 +200,7 @@ makedocs(;
"Gradient Descent" => "solvers/gradient_descent.md",
"Interior Point Newton" => "solvers/interior_point_Newton.md",
"Levenberg–Marquardt" => "solvers/LevenbergMarquardt.md",
"MADS" => "solvers/mesh_adaptive_direct_search.md",
"Nelder–Mead" => "solvers/NelderMead.md",
"Particle Swarm Optimization" => "solvers/particle_swarm.md",
"Primal-dual Riemannian semismooth Newton" => "solvers/primal_dual_semismooth_Newton.md",
5 changes: 3 additions & 2 deletions docs/src/about.md
@@ -13,6 +13,7 @@ Thanks to the following contributors to `Manopt.jl`:
* [Hajg Jasa](https://www.ntnu.edu/employees/hajg.jasa) implemented the [convex bundle method](solvers/convex_bundle_method.md) and the [proximal bundle method](solvers/proximal_bundle_method.md) and a default subsolver each of them.
* Even Stephansen Kjemsås contributed to the implementation of the [Frank Wolfe Method](solvers/FrankWolfe.md) solver.
* Mathias Ravn Munkvold contributed most of the implementation of the [Adaptive Regularization with Cubics](solvers/adaptive-regularization-with-cubics.md) solver as well as its [Lanczos](@ref arc-Lanczos) subsolver
* [Sander Engen Oddsen](https://github.com/oddsen) contributed to the implementation of the [LTMADS](solvers/mesh_adaptive_direct_search.md) solver.
* [Tom-Christian Riemer](https://www.tu-chemnitz.de/mathematik/wire/mitarbeiter.php) implemented the [trust regions](solvers/trust_regions.md) and [quasi Newton](solvers/quasi_Newton.md) solvers as well as the [truncated conjugate gradient descent](solvers/truncated_conjugate_gradient_descent.md) subsolver.
* [Markus A. Stokkenes](https://www.linkedin.com/in/markus-a-stokkenes-b41bba17b/) contributed most of the implementation of the [Interior Point Newton Method](solvers/interior_point_Newton.md) as well as its default [Conjugate Residual](solvers/conjugate_residual.md) subsolver
* [Manuel Weiss](https://scoop.iwr.uni-heidelberg.de/author/manuel-weiß/) implemented most of the [conjugate gradient update rules](@ref cg-coeffs)
@@ -28,8 +29,8 @@ to clone/fork the repository or open an issue.
* [ExponentialFamilyProjection.jl](https://github.com/ReactiveBayes/ExponentialFamilyProjection.jl) package uses `Manopt.jl` to project arbitrary functions onto the closest exponential family distributions. The package also integrates with [`RxInfer.jl`](https://github.com/ReactiveBayes/RxInfer.jl) to enable Bayesian inference in a larger set of probabilistic models.
* [Caesar.jl](https://github.com/JuliaRobotics/Caesar.jl) within non-Gaussian factor graph inference algorithms

Is a package missing? [Open an issue](https://github.com/JuliaManifolds/Manopt.jl/issues/new)!
It would be great to collect anything and anyone using Manopt.jl
If you are missing a package that uses `Manopt.jl`, please [open an issue](https://github.com/JuliaManifolds/Manopt.jl/issues/new).
It would be great to collect anything and anyone using Manopt.jl in this list.

## Further packages

4 changes: 2 additions & 2 deletions docs/src/index.md
@@ -20,7 +20,7 @@ or in other words: find the point ``p`` on the manifold, where ``f`` reaches its
It belongs to the “Manopt family”, which includes [Manopt](https://manopt.org) (Matlab) and [pymanopt.org](https://www.pymanopt.org/) (Python).

If you want to delve right into `Manopt.jl` read the
[🏔️ Get started: optimize.](tutorials/Optimize.md) tutorial.
[🏔️ Get started with Manopt.jl.](tutorials/Optimize.md) tutorial.

`Manopt.jl` makes it easy to use an algorithm for your favourite
manifold as well as a manifold for your favourite algorithm. It already provides
@@ -94,7 +94,7 @@ The notation in the documentation aims to follow the same [notation](https://jul
### Visualization

To visualize and interpret results, `Manopt.jl` aims to provide both easy plot functions as well as [exports](helpers/exports.md). Furthermore a system to get [debug](plans/debug.md) during the iterations of an algorithms as well as [record](plans/record.md) capabilities, for example to record a specified tuple of values per iteration, most prominently [`RecordCost`](@ref) and
[`RecordIterate`](@ref). Take a look at the [🏔️ Get started: optimize.](tutorials/Optimize.md) tutorial on how to easily activate this.
[`RecordIterate`](@ref). Take a look at the [🏔️ Get started with Manopt.jl.](tutorials/Optimize.md) tutorial on how to easily activate this.
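
Both can be activated through the `debug=` and `record=` keywords of any solver call; a small sketch, assuming `M`, `f`, `grad_f`, and `p0` are already defined:

```julia
# Print the iteration number and cost, and record cost and iterate per iteration.
state = gradient_descent(
    M, f, grad_f, p0;
    debug=[:Iteration, :Cost, "\n"],  # printed output per iteration
    record=[:Cost, :Iterate],         # values stored per iteration
    return_state=true,                # return the full state to access records
)
recorded = get_record(state)          # retrieve the recorded values
```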

## Literature

7 changes: 7 additions & 0 deletions docs/src/references.bib
@@ -325,6 +325,13 @@ @article{DiepeveenLellmann:2021
VOLUME = {14},
YEAR = {2021},
}
@techreport{Dreisigmeyer:2007,
AUTHOR = {Dreisigmeyer, David W.},
INSTITUTION = {Optimization Online},
TITLE = {Direct Search Algorithms over Riemannian Manifolds},
URL = {https://optimization-online.org/?p=9134},
YEAR = {2007}
}
@article{DuranMoelleSbertCremers:2016,
AUTHOR = {Duran, J. and Moeller, M. and Sbert, C. and Cremers, D.},
TITLE = {Collaborative Total Variation: A General Framework for Vectorial TV Models},
5 changes: 3 additions & 2 deletions docs/src/solvers/index.md
@@ -22,6 +22,7 @@ For derivative free only function evaluations of ``f`` are used.

* [Nelder-Mead](NelderMead.md) a simplex based variant, that is using ``d+1`` points, where ``d`` is the dimension of the manifold.
* [Particle Swarm](particle_swarm.md) 🫏 use the evolution of a set of points, called swarm, to explore the domain of the cost and find a minimizer.
* [Mesh adaptive direct search](mesh_adaptive_direct_search.md) performs a mesh based exploration (poll) and search.
* [CMA-ES](cma_es.md) uses a stochastic evolutionary strategy to perform minimization robust to local minima of the objective.

## First order
@@ -98,7 +99,7 @@ For these you can use
* [Steihaug-Toint Truncated Conjugate-Gradient Method](truncated_conjugate_gradient_descent.md) a solver for a constrained problem defined on a tangent space.


## Alphabetical list List of algorithms
## Alphabetical list of algorithms

| Solver | Function | State |
|:---------|:----------------|:---------|
@@ -202,7 +203,7 @@ also use the third (lowest level) and just call
solve!(problem, state)
```

### Closed-form subsolvers
### Closed-form sub solvers

If a subsolver solution is available in closed form, `ClosedFormSubSolverState` is used to indicate that.
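
A rough sketch of the pattern, with a hypothetical closed-form solution function:

```julia
# The sub problem is passed as a plain function (here a hypothetical
# placeholder that returns its input), and `ClosedFormSubSolverState`
# marks that no iterative sub solver has to run.
closed_form_solution(M, p, λ) = p        # placeholder closed-form map
sub_problem = closed_form_solution       # positional `sub_problem` argument
sub_state = ClosedFormSubSolverState()   # assumes the default allocating evaluation
```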

62 changes: 62 additions & 0 deletions docs/src/solvers/mesh_adaptive_direct_search.md
@@ -0,0 +1,62 @@
# Mesh adaptive direct search (MADS)


```@meta
CurrentModule = Manopt
```

```@docs
mesh_adaptive_direct_search
mesh_adaptive_direct_search!
```

## State

```@docs
MeshAdaptiveDirectSearchState
```

## Poll

```@docs
AbstractMeshPollFunction
LowerTriangularAdaptivePoll
```

as well as the internal functions

```@docs
Manopt.get_descent_direction(::LowerTriangularAdaptivePoll)
Manopt.is_successful(::LowerTriangularAdaptivePoll)
Manopt.get_candidate(::LowerTriangularAdaptivePoll)
Manopt.get_basepoint(::LowerTriangularAdaptivePoll)
Manopt.update_basepoint!(M, ltap::LowerTriangularAdaptivePoll{P}, p::P) where {P}
```
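
Schematically, a solver can drive a poll functor through these accessors; the following is a hypothetical sketch of that control flow with an assumed call signature, not the actual implementation:

```julia
# Hypothetical sketch of how a solver could use the poll functor and its
# accessors; the functor call signature is assumed for illustration only.
function poll_step!(M, poll!::LowerTriangularAdaptivePoll, p, mesh_size)
    poll!(M, mesh_size)                     # evaluate the cost at all poll points
    if Manopt.is_successful(poll!)          # did a poll point improve the cost?
        return Manopt.get_candidate(poll!)  # accept the improving candidate
    end
    return p                                # otherwise keep the current point
end
```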

## Search

```@docs
AbstractMeshSearchFunction
DefaultMeshAdaptiveDirectSearch
```

as well as the internal functions

```@docs
Manopt.is_successful(::DefaultMeshAdaptiveDirectSearch)
Manopt.get_candidate(::DefaultMeshAdaptiveDirectSearch)
```

## Additional stopping criteria

```@docs
StopWhenPollSizeLess
```

## Technical details

The [`mesh_adaptive_direct_search`](@ref) solver requires the following functions of a manifold to be available

* A [`retract!`](@extref ManifoldsBase :doc:`retractions`)`(M, q, p, X)`; it is recommended to set the [`default_retraction_method`](@extref `ManifoldsBase.default_retraction_method-Tuple{AbstractManifold}`) to a favourite retraction. If this default is set, a `retraction_method=` does not have to be specified.
* Within the default initialization [`rand`](@extref Base.rand-Tuple{AbstractManifold})`(M)` is used to generate the initial population.
* A [`vector_transport_to!`](@extref ManifoldsBase :doc:`vector_transports`)`(M, Y, p, X, q)`; it is recommended to set the [`default_vector_transport_method`](@extref `ManifoldsBase.default_vector_transport_method-Tuple{AbstractManifold}`) to a favourite vector transport. If this default is set, a `vector_transport_method=` does not have to be specified.
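
A minimal usage sketch, assuming the high-level signature `mesh_adaptive_direct_search(M, f, p; kwargs...)` introduced here, combined with the new stopping criterion:

```julia
using Manopt, Manifolds

# Minimize a smooth cost on the sphere S² using only cost evaluations.
M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = 1 - p' * q   # minimal exactly at p = q

p0 = [1.0, 0.0, 0.0]
p_star = mesh_adaptive_direct_search(
    M, f, p0;
    stopping_criterion=StopWhenPollSizeLess(1e-7) | StopAfterIteration(500),
)
```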
5 changes: 5 additions & 0 deletions docs/styles/config/vocabularies/Manopt/accept.txt
@@ -23,6 +23,7 @@ Cartis
canonicalization
canonicalized
Constantin
[Cc]ubics
Dai
deactivatable
Diepeveen
@@ -76,9 +77,11 @@ Munkvold
[Mm]ead
[Nn]elder
Nesterov
Nesterovs
Newton
nonmonotone
nonpositive
[Nn]onsmooth
[Pp]arametrising
Parametrising
[Pp]ock
@@ -110,9 +113,11 @@ Stephansen
Stokkenes
[Ss]ubdifferential
[Ss]ubgradient
[Ss]ubgradients
subsampled
[Ss]ubsolver
summand
summands
superlinear
supertype
th
7 changes: 7 additions & 0 deletions src/Manopt.jl
@@ -198,6 +198,7 @@ include("solvers/FrankWolfe.jl")
include("solvers/gradient_descent.jl")
include("solvers/interior_point_Newton.jl")
include("solvers/LevenbergMarquardt.jl")
include("solvers/mesh_adaptive_direct_search.jl")
include("solvers/particle_swarm.jl")
include("solvers/primal_dual_semismooth_Newton.jl")
include("solvers/proximal_bundle_method.jl")
@@ -340,6 +341,7 @@ export AbstractGradientSolverState,
InteriorPointNewtonState,
LanczosState,
LevenbergMarquardtState,
MeshAdaptiveDirectSearchState,
NelderMeadState,
ParticleSwarmState,
PrimalDualSemismoothNewtonState,
@@ -430,6 +432,8 @@ export WolfePowellLinesearch, WolfePowellBinaryLinesearch
export AbstractStateAction, StoreStateAction
export has_storage, get_storage, update_storage!
export objective_cache_factory
export AbstractMeshPollFunction, LowerTriangularAdaptivePoll
export AbstractMeshSearchFunction, DefaultMeshAdaptiveDirectSearch
#
# Direction Update Rules
export DirectionUpdateRule
@@ -479,6 +483,8 @@ export adaptive_regularization_with_cubics,
interior_point_Newton!,
LevenbergMarquardt,
LevenbergMarquardt!,
mesh_adaptive_direct_search,
mesh_adaptive_direct_search!,
NelderMead,
NelderMead!,
particle_swarm,
@@ -540,6 +546,7 @@ export StopAfter,
StopWhenKKTResidualLess,
StopWhenLagrangeMultiplierLess,
StopWhenModelIncreased,
StopWhenPollSizeLess,
StopWhenPopulationCostConcentrated,
StopWhenPopulationConcentrated,
StopWhenPopulationDiverges,
2 changes: 0 additions & 2 deletions src/plans/cache.jl
@@ -250,8 +250,6 @@ which function evaluations to cache.
number of (least recently used) calls to cache
* `cache_sizes=Dict{Symbol,Int}()`:
a named tuple or dictionary specifying the sizes individually for each cache.


"""
struct ManifoldCachedObjective{E,P,O<:AbstractManifoldObjective{<:E},C<:NamedTuple{}} <:
AbstractDecoratedManifoldObjective{E,P}
4 changes: 2 additions & 2 deletions src/plans/constrained_plan.jl
@@ -27,9 +27,9 @@ A common supertype for fucntors that model constraint functions with slack.

This supertype additionally provides access for the fields
* `μ::T` the dual for the inequality constraints
* `s::T` the slack parametyer, and
* `s::T` the slack parameter, and
* `β::R` the the barrier parameter
which is also of typee `T`.
which is also of type `T`.
"""
abstract type AbstractConstrainedSlackFunctor{T,R} end

4 changes: 2 additions & 2 deletions src/plans/debug.jl
@@ -158,7 +158,7 @@ Whether internal variables are updates is determined by `always_update`.

This method does not perform any print itself but relies on it's children's print.

It also sets the subsolvers active parameter, see |`DebugWhenActive`}(#ref).
It also sets the sub solvers active parameter, see |`DebugWhenActive`}(#ref).
Here, the `activattion_offset` can be used to specify whether it refers to _this_ iteration,
the `i`th, when this call is _before_ the iteration, then the offset should be 0,
for the _next_ iteration, that is if this is called _after_ an iteration, it has to be set to 1.
@@ -185,7 +185,7 @@ function (d::DebugEvery)(p::AbstractManoptProblem, st::AbstractManoptSolverState
elseif d.always_update
d.debug(p, st, -1)
end
# set activity for this iterate in subsolvers
# set activity for this iterate in sub solvers
set_parameter!(
st,
:SubState,