Currently, pydgq only supports Picard (fixed-point) iteration in all implicit solvers (including the Galerkin solvers). This is the simplest thing that works, but convergence is only linear (a constant number of correct bits gained per iteration).

Newton-Raphson would be a useful option for improving performance, because it converges quadratically (near the root, the number of correct bits roughly doubles per iteration).
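For concreteness, here is a minimal sketch of the Picard iteration described above, using a backward Euler step as the simplest stand-in for pydgq's actual implicit solvers; the name `picard_step` and its signature are illustrative only, not part of the pydgq API.

```python
import numpy as np

def picard_step(f, u_n, t_np1, dt, maxit=100, tol=1e-12):
    """One backward Euler step, u_{n+1} = u_n + dt*f(u_{n+1}, t_{n+1}),
    solved by Picard (fixed-point) iteration.

    f : callable f(u, t) -> ndarray, the RHS of u' = f(u, t).
    Each sweep adds roughly a constant number of correct bits
    (linear convergence), provided dt is small enough for the
    fixed-point map to be contractive.
    """
    u = u_n.copy()                      # initial guess: previous value
    for _ in range(maxit):
        u_new = u_n + dt * f(u, t_np1)  # one fixed-point sweep
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u  # not converged within maxit; caller may shrink dt
```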
To make this easy to use, approximate the Jacobian with finite differences by default, so that the user does not need to supply a Jacobian function in order to use Newton iteration.
Allow the user to optionally provide such a function, if they prefer.
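A sketch of what this option could look like, again for a backward Euler step (assumed here for simplicity; the Galerkin solvers would need the analogous residual). The optional `jac` argument covers the user-supplied case; when it is omitted, a forward-difference approximation is used. All names are hypothetical.

```python
import numpy as np

def newton_step(f, u_n, t_np1, dt, jac=None, u0=None, maxit=20, tol=1e-12):
    """Backward Euler step solved by Newton iteration on the residual
    F(u) = u - u_n - dt*f(u, t_{n+1}).

    jac : optional callable jac(u, t) -> (n, n) ndarray, the Jacobian
          df/du. If omitted, it is approximated by forward differences.
    u0  : optional initial guess (defaults to u_n).
    Near the root the number of correct bits roughly doubles per
    iteration (quadratic convergence).
    """
    n = u_n.size

    def fd_jacobian(u, t):
        # Forward-difference approximation of df/du: perturb one
        # component at a time, step scaled to the component magnitude.
        J = np.empty((n, n))
        f0 = f(u, t)
        for j in range(n):
            h = np.sqrt(np.finfo(float).eps) * max(abs(u[j]), 1.0)
            u_pert = u.copy()
            u_pert[j] += h
            J[:, j] = (f(u_pert, t) - f0) / h
        return J

    dfdu = jac if jac is not None else fd_jacobian
    u = (u_n if u0 is None else u0).copy()
    for _ in range(maxit):
        F = u - u_n - dt * f(u, t_np1)
        J = np.eye(n) - dt * dfdu(u, t_np1)  # Jacobian of the residual
        du = np.linalg.solve(J, -F)          # Newton update
        u = u + du
        if np.linalg.norm(du) < tol:
            return u
    return u  # not converged within maxit; caller may shrink dt
```

Note that the finite-difference Jacobian costs n extra evaluations of f per Newton iteration, so for large systems a user-supplied (or automatically differentiated) Jacobian is worth having.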
The Right Thing for producing a Jacobian automatically would be (forward-mode) automatic differentiation, but in a low-level language such as Cython it is hard (if possible at all) to make the AD number type conform to the same interface as standard double-precision floats.
Maybe also allow an optional mixed scheme that starts with a user-configurable number of Picard iterations and then switches to Newton. This can sometimes help convergence: if the final value from the previous timestep lies outside the basin of attraction of the Newton method, a few Picard sweeps can bring the iterate into it (Picard is much less sensitive to the initial guess).
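Combining the two sketches above, the mixed scheme might look like this (again hypothetical; `n_picard` would be the user-configurable switch point):

```python
def mixed_step(f, u_n, t_np1, dt, n_picard=3, jac=None):
    """Hybrid nonlinear solve: a fixed number of Picard sweeps to move
    the iterate toward the root, then Newton for fast final convergence."""
    # tol=0.0 forces exactly n_picard sweeps (the tolerance test never fires)
    u_guess = picard_step(f, u_n, t_np1, dt, maxit=n_picard, tol=0.0)
    return newton_step(f, u_n, t_np1, dt, jac=jac, u0=u_guess)
```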