Introduce the LTMADS solver #433

Open · wants to merge 38 commits into base: master

Commits (38)
8baa2f0
Initial sketch of (LT)MADS
kellertuer Sep 27, 2024
d0ee13f
Merge branch 'master' into kellertuer/LTMADS
kellertuer Jan 2, 2025
4c6d740
Design concrete search and poll structs further and document them.
kellertuer Jan 2, 2025
fb945c2
Add remaining todos.
kellertuer Jan 2, 2025
fa6004d
continue docs.
kellertuer Jan 3, 2025
435aea1
Implement most of the logic, just not yet the updates(vector transpor…
kellertuer Jan 3, 2025
88b9683
forgot to store poll_size.
kellertuer Jan 3, 2025
9d40813
Merge branch 'master' into kellertuer/LTMADS
kellertuer Jan 5, 2025
3c2b537
first MADS variant that includes all necessary functions.
kellertuer Jan 5, 2025
354c8ff
extend docs.
kellertuer Jan 5, 2025
7b8ad8d
Fix a few typos.
kellertuer Jan 6, 2025
939d7b0
Fix two typos.
kellertuer Jan 7, 2025
3a4e2ac
Fix typos add a first running, but failing test.
kellertuer Jan 8, 2025
9167d93
Stabilize I
kellertuer Jan 9, 2025
2b29824
Finally found the bug in scaling the mesh to be the culprit
kellertuer Jan 9, 2025
57ee145
Fix state print a bit.
kellertuer Jan 9, 2025
a7e9f8c
change poll and mesh size to be internal parameters.
kellertuer Jan 9, 2025
e430d73
unify naming and add docstrings to all new (small) functions
kellertuer Jan 9, 2025
5a59142
Fix docs.
kellertuer Jan 9, 2025
aff7900
work on code coverage.
kellertuer Jan 26, 2025
df0f042
Cover a final line.
kellertuer Jan 26, 2025
a8d47e4
improve typing and performance a little
mateuszbaran Jan 26, 2025
1d6454e
formatting
mateuszbaran Jan 26, 2025
83da62e
fix some typos, add some types
mateuszbaran Jan 27, 2025
0c6322b
A bit of work on typos.
kellertuer Feb 4, 2025
5e7f232
Update metadata.
kellertuer Feb 4, 2025
577dfb5
Rearrange the order of names.
kellertuer Feb 4, 2025
6ba7b9a
Update docs/src/references.bib
kellertuer Feb 4, 2025
58f3b1a
fix 2 more typos.
kellertuer Feb 4, 2025
20901fc
Bring vale to zero errors.
kellertuer Feb 4, 2025
8cd4880
Fix a few more typos.
kellertuer Feb 5, 2025
b81d0c8
Add input to docs.
kellertuer Feb 10, 2025
5063d8f
Merge branch 'master' into kellertuer/LTMADS
kellertuer Feb 10, 2025
4d87704
Fix dependency on ADTypes.
kellertuer Feb 11, 2025
e89895e
Fix the seed and remove an unnecessary case, which could also be reso…
kellertuer Feb 11, 2025
f69f660
add decorate and output section notes.
kellertuer Feb 11, 2025
94ea68b
complete the list of keyword arguments.
kellertuer Feb 12, 2025
22733d9
Add literature section to the docs page.
kellertuer Feb 12, 2025
13 changes: 4 additions & 9 deletions .vale.ini
@@ -12,13 +12,7 @@ jl = md
[docs/src/*.md]
BasedOnStyles = Vale, Google

[docs/src/contributing.md]
BasedOnStyles = Vale, Google
Google.Will = false ; given format and really with intend a _will_
Google.Headings = false ; some might jeally ahabe [] in their headers
Google.FirstPerson = false ; we pose a few contribution points as first-person questions

[Changelog.md, CONTRIBUTING.md]
[{docs/src/contributing.md, Changelog.md, CONTRIBUTING.md}]
BasedOnStyles = Vale, Google
Google.Will = false ; given the format, a _will_ is really intended
Google.Headings = false ; some might really have [] in their headers
@@ -49,8 +43,9 @@ BasedOnStyles = Vale, Google
TokenIgnores = (\$+[^\n$]+\$+)
Google.We = false # For tutorials we want to address the user directly.

[docs/src/tutorials/*.md] ; actually .qmd for the first, second autogenerated
[docs/src/tutorials/*.md] ; Can I somehow just deactivate these?
BasedOnStyles = Vale, Google
; ignore (1) math (2) ref and cite keys (3) code in docs (4) math in docs (5,6) indented blocks
TokenIgnores = (\$+[^\n$]+\$+)
Google.We = false # For tutorials we want to address the user directly.
Google.Spacing = false # one reference uses this
51 changes: 29 additions & 22 deletions Changelog.md
@@ -1,10 +1,17 @@
# Changelog

All notable Changes to the Julia package `Manopt.jl` will be documented in this file. The file was started with Version `0.4`.
All notable Changes to the Julia package `Manopt.jl` are documented in this file.
The file was started with Version `0.4`.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.5.7] February 17, 2025

### Added

* A mesh adaptive direct search algorithm (MADS), for now with the LTMADS variant using a lower triangular random matrix in the poll step.
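
As a minimal usage sketch (the manifold and cost here are illustrative assumptions, not part of the release notes), the new solver is called like any other derivative-free solver:

```julia
using Manopt, Manifolds

# an illustrative cost on the sphere; only cost evaluations are needed, no gradient
M = Sphere(2)
f(M, p) = 1 - p[3]^2

p0 = [1.0, 0.0, 0.0]
q = mesh_adaptive_direct_search(M, f, p0)
```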

## [0.5.6] February 10, 2025

### Changed
@@ -29,16 +36,16 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

* The geodesic regression example, first because it is not correct, second because it should become part of ManoptExamples.jl once it is correct.

## [0.5.4] - December 11, 2024
## [0.5.4] December 11, 2024

### Added

* An automated detection whether the tutorials are present;
if not, no Quarto run is done and an automated `--exclude-tutorials` option is added.
* Support for ManifoldDiff 0.4
* icons upfront external links when they link to another package or wikipedia.
* icons in front of external links when they link to another package or Wikipedia.

## [0.5.3] October 18, 2024

### Added

@@ -48,9 +55,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

* stabilize `max_stepsize` to also work when `injectivity_radius` does not exist.
It however warns new users who activate tutorial mode.
* Start a `ManoptTestSuite` subpackage to store dummy types and common test helpers in.
* Start a `ManoptTestSuite` sub package to store dummy types and common test helpers in.

## [0.5.2] October 5, 2024

### Added

@@ -61,7 +68,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
* fix a few typos in the documentation
* improved the documentation for the initial guess of [`ArmijoLinesearchStepsize`](https://manoptjl.org/stable/plans/stepsize/#Manopt.ArmijoLinesearch).

## [0.5.1] September 4, 2024

### Changed

@@ -71,17 +78,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

* the `proximal_point` method.

## [0.5.0] August 29, 2024

This breaking update is mainly concerned with providing a unified experience through all solvers
and with some usability improvements, such that, for example, the different gradient update rules are easier to specify.

In general we introduce a few factories, that avoid having to pass the manifold to keyword arguments
In general this introduces a few factories that avoid having to pass the manifold to keyword arguments

### Added

* A `ManifoldDefaultsFactory` that postpones the creation/allocation of manifold-specific fields in, for example, direction updates, step sizes, and stopping criteria. As a rule of thumb, internal structures, like a solver state, should store the final type. Any high-level interface, like the functions to start solvers, should accept such a factory in the appropriate places and call the internal `_produce_type(factory, M)`, for example before passing something to the state; a short sketch follows after this list.
* a `documentation_glossary.jl` file containing a glossary of often used variables in fields, arguments, and keywords, to print them in a unified manner. The same for usual sections, tex, and math notation that is often used within the doc-strings.
* a `documentation_glossary.jl` file containing a glossary of often used variables in fields, arguments, and keywords, to print them in a unified manner. The same for usual sections, text, and math notation that is often used within the doc-strings.
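
A rough sketch of this factory pattern, using the conjugate gradient coefficient as an assumed example (`_produce_type` is internal API, shown for illustration only):

```julia
using Manopt, Manifolds

# the user-facing constructor needs no manifold and returns a factory ...
coefficient = PolakRibiereCoefficient()

# ... which a solver resolves internally once the manifold is known
rule = Manopt._produce_type(coefficient, Sphere(2))
```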

### Changed

@@ -106,12 +113,12 @@ In general we introduce a few factories, that avoid having to pass the manifold
* `HestenesStiefelCoefficient` is now called `HestenesStiefelCoefficientRule`. For the `HestenesStiefelCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
* `LiuStoreyCoefficient` is now called `LiuStoreyCoefficientRule`. For the `LiuStoreyCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
* `PolakRibiereCoefficient` is now called `PolakRibiereCoefficientRule`. For the `PolakRibiereCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
* the `SteepestDirectionUpdateRule` is now called `SteepestDescentCoefficientRule`. The `SteepestDescentCoefficient` is equivalent, but creates the new factory interims wise.
* the `SteepestDirectionUpdateRule` is now called `SteepestDescentCoefficientRule`. The `SteepestDescentCoefficient` is equivalent, but creates the new factory temporarily.
* `AbstractGradientGroupProcessor` is now called `AbstractGradientGroupDirectionRule`
* the `StochasticGradient` is now called `StochasticGradientRule`. The `StochasticGradient` is equivalent, but creates the new factory interims wise, so that the manifold is not longer necessary.
* the `StochasticGradient` is now called `StochasticGradientRule`. The `StochasticGradient` is equivalent, but creates the new factory temporarily, so that the manifold is no longer necessary.
* the `AlternatingGradient` is now called `AlternatingGradientRule`.
The `AlternatingGradient` is equivalent, but creates the new factory interims wise, so that the manifold is not longer necessary.
* `quasi_Newton` had a keyword `scale_initial_operator=` that was inconsistently declared (sometimes bool, sometimes real) and was unused.
The `AlternatingGradient` is equivalent, but creates the new factory temporarily, so that the manifold is no longer necessary.
* `quasi_Newton` had a keyword `scale_initial_operator=` that was inconsistently declared (sometimes boolean, sometimes real) and was unused.
It is now called `initial_scale=1.0` and scales the initial (diagonal, unit) matrix within the approximation of the Hessian in addition to the $\frac{1}{\lVert g_k\rVert}$ scaling with the norm of the oldest gradient for the limited memory variant. For the full matrix variant the initial identity matrix is now scaled with this parameter; a usage sketch follows below.
* Unify doc strings and presentation of keyword arguments
* general indexing, for example in a vector, uses `i`
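
A brief usage sketch of the renamed keyword (manifold, cost, and gradient are assumptions for illustration):

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, p) = (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2  # Rosenbrock cost
grad_f(M, p) = [
    -2 * (1 - p[1]) - 400 * p[1] * (p[2] - p[1]^2),
    200 * (p[2] - p[1]^2),
]

# initial_scale= replaces the removed scale_initial_operator= keyword
q = quasi_Newton(M, f, grad_f, [0.0, 0.0]; initial_scale=1.0)
```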
@@ -144,14 +151,14 @@ In general we introduce a few factories, that avoid having to pass the manifold
* `AdaptiveRegularizationState(M, sub_problem [, sub_state]; kwargs...)` replaces
the (anyway unused) variant to only provide the objective; both `X` and `p` moved to keyword arguments.
* `AugmentedLagrangianMethodState(M, objective, sub_problem; evaluation=...)` was added
* ``AugmentedLagrangianMethodState(M, objective, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
* `AugmentedLagrangianMethodState(M, objective, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
* `ExactPenaltyMethodState(M, sub_problem; evaluation=...)` was added and `ExactPenaltyMethodState(M, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
* `DifferenceOfConvexState(M, sub_problem; evaluation=...)` was added and `DifferenceOfConvexState(M, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
* `DifferenceOfConvexProximalState(M, sub_problem; evaluation=...)` was added and `DifferenceOfConvexProximalState(M, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
* bumped `Manifolds.jl` to version 0.10; this mainly means that any algorithm working on a product manifold and requiring `ArrayPartition` now has to explicitly do `using RecursiveArrayTools`.

### Fixed

* the `AverageGradientRule` filled its internal vector of gradients wrongly or mixed it up in parallel transport. This is now fixed.

### Removed

@@ -171,31 +178,31 @@ In general we introduce a few factories, that avoid having to pass the manifold
* to update a stopping criterion in a solver state, replace the old `update_stopping_criterion!(state, :Val, v)` that passed down to the stopping criterion by the explicit pass down with `set_parameter!(state, :StoppingCriterion, :Val, v)`
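
For example, adapting an iteration limit migrates as follows; `state` stands for any solver state, and the parameter name `:MaxIteration` is an assumed illustration for a `StopAfterIteration` criterion:

```julia
# old (removed): passed directly down to the stopping criterion
# update_stopping_criterion!(state, :MaxIteration, 100)

# new: explicit pass down through the state
set_parameter!(state, :StoppingCriterion, :MaxIteration, 100)
```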


## [0.4.69] August 3, 2024

### Changed

* Improved performance of the Interior Point Newton Method.

## [0.4.68] August 2, 2024

### Added

* an Interior Point Newton Method, the `interior_point_newton`
* a `conjugate_residual` algorithm to solve a linear system on a tangent space.
* `ArmijoLinesearch` now allows for additional `additional_decrease_condition` and `additional_increase_condition` keywords to add further conditions for when to accept a decrease or increase of the stepsize; a sketch follows at the end of this list.
* add a `DebugFeasibility` to print the feasibility of points in constrained optimisation, employing the new `is_feasible` function
* add a `InteriorPointCentralityCondition` check that can be added for step candidates within the line search of `interior_point_newton`
* add an `InteriorPointCentralityCondition` that can be added for step candidates within the line search of `interior_point_newton`
* Add several new functors
* the `LagrangianCost`, `LagrangianGradient`, `LagrangianHessian`, that based on a constrained objective allow to construct the hessian objective of its Lagrangian
* the `LagrangianCost`, `LagrangianGradient`, `LagrangianHessian`, which, based on a constrained objective, allow constructing the Hessian objective of its Lagrangian
* the `CondensedKKTVectorField` and its `CondensedKKTVectorFieldJacobian`, which are used to solve a linear system within `interior_point_newton`
* the `KKTVectorField` as well as its `KKTVectorFieldJacobian` and `KKTVectorFieldAdjointJacobian`
* the `KKTVectorFieldNormSq` and its `KKTVectorFieldNormSqGradient` used within the Armijo line search of `interior_point_newton`
* New stopping criteria
* A `StopWhenRelativeResidualLess` for the `conjugate_residual`
* A `StopWhenKKTResidualLess` for the `interior_point_newton`
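
A sketch of such an additional condition follows; the predicate, its `(M, q) -> Bool` signature, and the keyword-only constructor are assumptions shown for illustration:

```julia
using Manopt, LinearAlgebra

# only accept step candidates q that stay within a ball of radius 2
in_ball(M, q) = norm(q) <= 2.0

stepsize = ArmijoLinesearch(; additional_decrease_condition=in_ball)
```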

## [0.4.67] July 25, 2024

### Added

@@ -241,7 +248,7 @@ In general we introduce a few factories, that avoid having to pass the manifold
* Remodel `ConstrainedManifoldObjective` to store an `AbstractManifoldObjective`
internally instead of directly `f` and `grad_f`, allowing also Hessian objectives
therein and implementing access to this Hessian
* Fixed a bug that Lanczos produced NaNs when started exactly in a minimizer, since we divide by the gradient norm.
* Fixed a bug that Lanczos produced NaNs when started exactly in a minimizer, since the algorithm initially divides by the gradient norm.

### Deprecated

2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "Manopt"
uuid = "0fc0a36d-df90-57f3-8f93-d78a9fc72bb5"
authors = ["Ronny Bergmann <[email protected]>"]
version = "0.5.6"
version = "0.5.7"

[deps]
ColorSchemes = "35d6a980-a343-548e-a6ea-1d62b119f2f4"
1 change: 1 addition & 0 deletions docs/make.jl
@@ -201,6 +201,7 @@ makedocs(;
"Gradient Descent" => "solvers/gradient_descent.md",
"Interior Point Newton" => "solvers/interior_point_Newton.md",
"Levenberg–Marquardt" => "solvers/LevenbergMarquardt.md",
"MADS" => "solvers/mesh_adaptive_direct_search.md",
"Nelder–Mead" => "solvers/NelderMead.md",
"Particle Swarm Optimization" => "solvers/particle_swarm.md",
"Primal-dual Riemannian semismooth Newton" => "solvers/primal_dual_semismooth_Newton.md",
1 change: 1 addition & 0 deletions docs/src/about.md
@@ -13,6 +13,7 @@ Thanks to the following contributors to `Manopt.jl`:
* [Hajg Jasa](https://www.ntnu.edu/employees/hajg.jasa) implemented the [convex bundle method](solvers/convex_bundle_method.md) and the [proximal bundle method](solvers/proximal_bundle_method.md) and a default subsolver each of them.
* Even Stephansen Kjemsås contributed to the implementation of the [Frank Wolfe Method](solvers/FrankWolfe.md) solver.
* Mathias Ravn Munkvold contributed most of the implementation of the [Adaptive Regularization with Cubics](solvers/adaptive-regularization-with-cubics.md) solver as well as its [Lanczos](@ref arc-Lanczos) subsolver
* [Sander Engen Oddsen](https://github.com/oddsen) contributed to the implementation of the [LTMADS](solvers/mesh_adaptive_direct_search.md) solver.
* [Tom-Christian Riemer](https://www.tu-chemnitz.de/mathematik/wire/mitarbeiter.php) implemented the [trust regions](solvers/trust_regions.md) and [quasi Newton](solvers/quasi_Newton.md) solvers as well as the [truncated conjugate gradient descent](solvers/truncated_conjugate_gradient_descent.md) subsolver.
* [Markus A. Stokkenes](https://www.linkedin.com/in/markus-a-stokkenes-b41bba17b/) contributed most of the implementation of the [Interior Point Newton Method](solvers/interior_point_Newton.md) as well as its default [Conjugate Residual](solvers/conjugate_residual.md) subsolver
* [Manuel Weiss](https://scoop.iwr.uni-heidelberg.de/author/manuel-weiß/) implemented most of the [conjugate gradient update rules](@ref cg-coeffs)
7 changes: 7 additions & 0 deletions docs/src/references.bib
@@ -325,6 +325,13 @@ @article{DiepeveenLellmann:2021
VOLUME = {14},
YEAR = {2021},
}
@techreport{Dreisigmeyer:2007,
AUTHOR = {Dreisigmeyer, David W.},
INSTITUTION = {Optimization Online},
TITLE = {Direct Search Algorithms over Riemannian Manifolds},
URL = {https://optimization-online.org/?p=9134},
YEAR = {2007}
}
@article{DuranMoelleSbertCremers:2016,
AUTHOR = {Duran, J. and Moeller, M. and Sbert, C. and Cremers, D.},
TITLE = {Collaborative Total Variation: A General Framework for Vectorial TV Models},
1 change: 1 addition & 0 deletions docs/src/solvers/index.md
@@ -22,6 +22,7 @@ For derivative-free solvers, only function evaluations of ``f`` are used.

* [Nelder-Mead](NelderMead.md) a simplex-based variant using ``d+1`` points, where ``d`` is the dimension of the manifold.
* [Particle Swarm](particle_swarm.md) uses the evolution of a set of points, called swarm, to explore the domain of the cost and find a minimizer.
* [Mesh adaptive direct search](mesh_adaptive_direct_search.md) performs a mesh-based exploration (poll) and search.
* [CMA-ES](cma_es.md) uses a stochastic evolutionary strategy to perform minimization robust to local minima of the objective.

## First order
69 changes: 69 additions & 0 deletions docs/src/solvers/mesh_adaptive_direct_search.md
@@ -0,0 +1,69 @@
# Mesh adaptive direct search (MADS)


```@meta
CurrentModule = Manopt
```

```@docs
mesh_adaptive_direct_search
mesh_adaptive_direct_search!
```

## State

```@docs
MeshAdaptiveDirectSearchState
```

## Poll

```@docs
AbstractMeshPollFunction
LowerTriangularAdaptivePoll
```

as well as the internal functions

```@docs
Manopt.get_descent_direction(::LowerTriangularAdaptivePoll)
Manopt.is_successful(::LowerTriangularAdaptivePoll)
Manopt.get_candidate(::LowerTriangularAdaptivePoll)
Manopt.get_basepoint(::LowerTriangularAdaptivePoll)
Manopt.update_basepoint!(M, ltap::LowerTriangularAdaptivePoll{P}, p::P) where {P}
```

## Search

```@docs
AbstractMeshSearchFunction
DefaultMeshAdaptiveDirectSearch
```

as well as the internal functions

```@docs
Manopt.is_successful(::DefaultMeshAdaptiveDirectSearch)
Manopt.get_candidate(::DefaultMeshAdaptiveDirectSearch)
```

## Additional stopping criteria

```@docs
StopWhenPollSizeLess
```

## Technical details

The [`mesh_adaptive_direct_search`](@ref) solver requires the following functions of a manifold to be available

* A [`retract!`](@extref ManifoldsBase :doc:`retractions`)`(M, q, p, X)`; it is recommended to set the [`default_retraction_method`](@extref `ManifoldsBase.default_retraction_method-Tuple{AbstractManifold}`) to a favourite retraction. If this default is set, a `retraction_method=` does not have to be specified.
* Within the default initialization [`rand`](@extref Base.rand-Tuple{AbstractManifold})`(M)` is used to generate the initial point
* A [`vector_transport_to!`](@extref ManifoldsBase :doc:`vector_transports`)`(M, Y, p, X, q)`; it is recommended to set the [`default_vector_transport_method`](@extref `ManifoldsBase.default_vector_transport_method-Tuple{AbstractManifold}`) to a favourite vector transport. If this default is set, a `vector_transport_method=` does not have to be specified.
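
A short sketch of how these requirements interact with the solver call; manifold and cost are assumptions for illustration, and `Sphere` provides both defaults:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2 + p[2]

# rely on the defaults of M ...
q = mesh_adaptive_direct_search(M, f, rand(M))

# ... or pass retraction and vector transport explicitly
q = mesh_adaptive_direct_search(
    M, f, rand(M);
    retraction_method=ExponentialRetraction(),
    vector_transport_method=ParallelTransport(),
)
```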

## Literature

```@bibliography
Pages = ["mesh_adaptive_direct_search.md"]
Canonical=false
```
4 changes: 2 additions & 2 deletions docs/src/tutorials/InplaceGradient.md
@@ -63,7 +63,7 @@ We can also benchmark this as
Time (median): 55.145 ms ┊ GC (median): 9.99%
Time (mean ± σ): 56.391 ms ± 6.102 ms ┊ GC (mean ± σ): 9.92% ± 1.43%

▅██▅▃▁
▅███████▁▅▇▅▁▅▁▁▅▅▁▁▁▅▅▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅ ▁
53 ms Histogram: log(frequency) by time 81.7 ms <

@@ -121,7 +121,7 @@ We can again benchmark this
Time (median): 37.559 ms ┊ GC (median): 0.00%
Time (mean ± σ): 38.658 ms ± 3.904 ms ┊ GC (mean ± σ): 0.73% ± 2.68%

██▅▅▄▂▁ ▂
███████▁██▁▅▁▁▁▅▁▁▁▁▅▅▅▁▁▁▅▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅▁▁▁▁▁▁▁▁▁▅ ▅
36.6 ms Histogram: log(frequency) by time 61 ms <

2 changes: 2 additions & 0 deletions docs/styles/config/vocabularies/Manopt/accept.txt
@@ -34,6 +34,7 @@ eigen
eigendecomposition
elementwise
Ehresmann
Engen
Fenchel
Ferreira
Frank
@@ -82,6 +83,7 @@ Newton
nonmonotone
nonpositive
[Nn]onsmooth
Oddsen
[Pp]arametrising
Parametrising
[Pp]ock