small changes to explanation files. (#531)
ChristianZimpelmann authored Sep 20, 2024
1 parent 42fd8a5 commit 216b41e
Showing 3 changed files with 8 additions and 7 deletions.
@@ -14,8 +14,9 @@ The main principles we describe here are:
- Derivative free trust region algorithms
- Derivative free direct search algorithms

-This covers a large range of the algorithms that come with optimagic. We do currently
-not cover:
+This covers a large range of the algorithms that come with optimagic. In contrast, the
+following classes of optimizers are also accessible via optimagic, but not yet covered
+in this overview:

- Conjugate gradient methods
- Genetic algorithms
2 changes: 1 addition & 1 deletion docs/source/explanation/internal_optimizers.md
@@ -9,7 +9,7 @@ internal optimizer interface.

The advantages of using the algorithm with optimagic over using it directly are:

-- optimagic turns an unconstrained optimizer into constrained ones.
+- optimagic turns unconstrained optimizers into constrained ones.
- You can use logging.
- You get great error handling for exceptions in the criterion function or gradient.
- You get a parallelized and customizable numerical gradient if the user did not provide
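The constraint handling in the first bullet is worth a concrete illustration: optimagic reparametrizes the problem so that the wrapped algorithm only ever sees an unconstrained internal problem. Below is a minimal sketch using optimagic's documented `minimize` interface; the `FixedConstraint` helper and its `selector` argument are recalled from the optimagic docs and should be treated as assumptions rather than a verified API.

```python
# Sketch: run scipy's L-BFGS-B (which supports at most box bounds) on a
# problem with a "fix this parameter" constraint that the optimizer itself
# does not support. The FixedConstraint spelling is an assumption; check
# the optimagic docs for the exact constraint API.
import numpy as np
import optimagic as om


def sphere(params):
    # Quadratic test function with its minimum at the origin.
    return params @ params


res = om.minimize(
    fun=sphere,
    params=np.arange(5, dtype=float),
    algorithm="scipy_lbfgsb",
    # optimagic enforces the constraint via reparametrization, so
    # scipy_lbfgsb still solves an unconstrained internal problem.
    constraints=om.FixedConstraint(selector=lambda params: params[0]),
)
print(res.params)  # first entry stays at its start value, the rest go to 0
```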
8 changes: 4 additions & 4 deletions docs/source/explanation/numdiff_background.md
@@ -1,4 +1,4 @@
-# Background and methods
+# Numerical differentiation: methods

In this section we explain the mathematical background of forward, backward and central
differences. The main ideas in this chapter are taken from {cite}`Dennis1996`. x is used
@@ -24,9 +24,9 @@ The central difference for the gradient is given by:

$$
\nabla f(x) =
-\begin{pmatrix}\frac{f(x + e_0 * h_0) - f(x - e_0 * h_0)}{h_0}\\
-\frac{f(x + e_1 * h_1) - f(x - e_1 * h_1)}{h_1}\\.\\.\\.\\ \frac{f(x + e_n * h_n)
-- f(x - e_n * h_n)}{h_n} \end{pmatrix}
+\begin{pmatrix}\frac{f(x + e_0 * h_0) - f(x - e_0 * h_0)}{2 h_0}\\
+\frac{f(x + e_1 * h_1) - f(x - e_1 * h_1)}{2 h_1}\\.\\.\\.\\ \frac{f(x + e_n * h_n)
+- f(x - e_n * h_n)}{2 h_n} \end{pmatrix}
$$

For the optimal stepsize h the following rule of thumb is applied:
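The substantive fix in this hunk is the denominator: the two evaluation points of the central difference lie `2 * h_i` apart, so dividing by `h_i` would overstate every derivative by a factor of two. A minimal NumPy sketch of the corrected formula follows; the cube-root-of-machine-epsilon step size is a common textbook choice and not necessarily the rule of thumb the truncated paragraph above refers to.

```python
import numpy as np


def central_difference_gradient(f, x):
    """Gradient of f at x via central differences with the 2 * h_i denominator."""
    x = np.asarray(x, dtype=float)
    # Common textbook step size for central differences, scaled by |x_i|;
    # an assumption, not optimagic's documented rule of thumb.
    h = np.cbrt(np.finfo(float).eps) * np.maximum(np.abs(x), 1.0)
    grad = np.empty_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h[i]  # e_i * h_i in the formula above
        grad[i] = (f(x + step) - f(x - step)) / (2 * h[i])
    return grad


x0 = np.array([1.0, 2.0, 3.0])
print(central_difference_gradient(lambda x: np.sum(x**3), x0))
# close to the analytic gradient 3 * x0**2 = [3., 12., 27.]
```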
