elsa solvers
Solver

template <typename data_t = real_t>
class elsa::Solver : public elsa::Cloneable<Solver<real_t>>

Base class representing a solver for an optimization problem.

This class represents abstract (typically iterative) solvers acting on optimization problems.

Author
  Matthias Wieczorek - initial code
  Maximilian Hornung - modularization
  Tobias Lasser - rewrite, modernization

Template Parameters
  data_t: data type for the domain and range of the problem, defaulting to real_t
Public Functions

Solver(const Problem<data_t> &problem)
Constructor for the solver, accepting an optimization problem.

Parameters
  [in] problem: the problem that is supposed to be solved

~Solver() override = default
Default destructor.

const DataContainer<data_t> &getCurrentSolution() const
Return the current estimated solution (const version).

DataContainer<data_t> &getCurrentSolution()
Return the current estimated solution.

DataContainer<data_t> &solve(index_t iterations = 0)
Solve the optimization problem (most likely iteratively).

Please note: this method calls solveImpl, which has to be overridden in derived classes.

Returns
  a reference to the current solution (after solving)

Parameters
  [in] iterations: number of iterations to execute (optional argument; the default value of 0 lets the solver choose how many iterations to execute)
Protected Functions

DataContainer<data_t> &solveImpl(index_t iterations) = 0
The solve method to be overridden in derived classes.
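The split between the public solve and the protected solveImpl is the template method pattern. A minimal, self-contained sketch of that structure (simplified stand-in types, not the actual elsa classes - std::vector replaces DataContainer, and Cloneable is omitted):

```cpp
#include <vector>

// Simplified stand-in for the Solver design: the public solve() dispatches to
// the pure virtual solveImpl(), which concrete solvers override. An argument
// of 0 lets the solver fall back to its own default iteration count.
class ToySolver {
public:
    virtual ~ToySolver() = default;

    std::vector<double>& solve(int iterations = 0) {
        return solveImpl(iterations > 0 ? iterations : _defaultIterations);
    }

    const std::vector<double>& getCurrentSolution() const { return _solution; }

protected:
    virtual std::vector<double>& solveImpl(int iterations) = 0;

    std::vector<double> _solution{0.0};
    int _defaultIterations{100};
};

// Toy concrete solver: each "iteration" halves the distance to the target 1.0.
class HalvingSolver : public ToySolver {
protected:
    std::vector<double>& solveImpl(int iterations) override {
        for (int i = 0; i < iterations; ++i)
            _solution[0] += 0.5 * (1.0 - _solution[0]);
        return _solution;
    }
};
```

Derived elsa solvers follow the same shape: they only implement solveImpl (and the clone machinery), while iteration-count defaulting stays in the base class.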
CG

template <typename data_t = real_t>
class elsa::CG : public elsa::Solver<real_t>

Class implementing the conjugate gradient method.

CG is an iterative method for minimizing quadric functionals \( \frac{1}{2} x^T A x - x^T b \) with a symmetric positive-definite operator \( A \). Some common optimization problems, e.g. weighted least squares or Tikhonov-regularized weighted least squares, can be reformulated as quadric problems and solved using CG, even with non-spd operators through the normal equation. Conversion to the normal equation is performed automatically, where possible.
Author
  Matthias Wieczorek - initial code
  David Frank - modularization and modernization
  Nikola Dinev - rewrite, various enhancements

Convergence is considered reached when \( \| Ax - b \| \leq \epsilon \| Ax_0 - b \| \) is satisfied for some small \( \epsilon > 0 \). Here \( x \) denotes the solution obtained in the last step, and \( x_0 \) denotes the initial guess.

References:
  https://doi.org/10.6028%2Fjres.049.044
  https://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf
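The stopping rule above can be seen in a minimal, self-contained CG sketch for a small dense spd system (illustrative only - plain arrays stand in for elsa's DataContainer and LinearOperator):

```cpp
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<Vec2, 2>;

static Vec2 matVec(const Mat2& A, const Vec2& x) {
    return {A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]};
}
static double dot(const Vec2& a, const Vec2& b) { return a[0] * b[0] + a[1] * b[1]; }

// Plain conjugate gradient for the spd system Ax = b, stopping when
// ||Ax - b|| <= eps * ||A x0 - b|| (the convergence criterion quoted above).
Vec2 conjugateGradient(const Mat2& A, const Vec2& b, Vec2 x, double eps, int maxIters) {
    const Vec2 Ax = matVec(A, x);
    Vec2 r = {b[0] - Ax[0], b[1] - Ax[1]};   // initial residual b - A x0
    const double r0norm = std::sqrt(dot(r, r));
    Vec2 p = r;                              // first search direction
    for (int i = 0; i < maxIters; ++i) {
        if (std::sqrt(dot(r, r)) <= eps * r0norm)
            break;                           // convergence reached
        const Vec2 Ap = matVec(A, p);
        const double alpha = dot(r, r) / dot(p, Ap);  // exact line search
        x = {x[0] + alpha * p[0], x[1] + alpha * p[1]};
        const Vec2 rn = {r[0] - alpha * Ap[0], r[1] - alpha * Ap[1]};
        const double beta = dot(rn, rn) / dot(r, r);  // new/old residual norms
        p = {rn[0] + beta * p[0], rn[1] + beta * p[1]};
        r = rn;
    }
    return x;
}
```

For an n-dimensional spd system, exact arithmetic CG terminates in at most n steps; the epsilon-based criterion stops it much earlier on well-conditioned problems.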
Public Functions

CG(const Problem<data_t> &problem, data_t epsilon = std::numeric_limits<data_t>::epsilon())
Constructor for CG, accepting an optimization problem and, optionally, a value for epsilon.

If the problem is not a QuadricProblem, a conversion will be attempted. Throws if the conversion fails. See QuadricProblem for details on problems that are convertible to quadric form.

Parameters
  [in] problem: the problem that is supposed to be solved
  [in] epsilon: affects the stopping condition

CG(const Problem<data_t> &problem, const LinearOperator<data_t> &preconditionerInverse, data_t epsilon = std::numeric_limits<data_t>::epsilon())
Constructor for preconditioned CG, accepting an optimization problem, the inverse of the preconditioner, and, optionally, a value for epsilon.

If the problem is not a QuadricProblem, a conversion will be attempted. Throws if the conversion fails. See QuadricProblem for details on problems that are convertible to quadric form.

Parameters
  [in] problem: the problem that is supposed to be solved
  [in] preconditionerInverse: the inverse of the preconditioner
  [in] epsilon: affects the stopping condition

~CG() override = default
Default destructor.
Private Functions

DataContainer<data_t> &solveImpl(index_t iterations) override
Solve the optimization problem, i.e. apply iterations number of iterations of CG.

Returns
  a reference to the current solution

Parameters
  [in] iterations: number of iterations to execute (the default value of 0 executes _defaultIterations iterations)
GradientDescent

template <typename data_t = real_t>
class elsa::GradientDescent : public elsa::Solver<real_t>

Class representing a simple gradient descent solver with a fixed, given step size.

This class implements a simple gradient descent iterative solver with a fixed, given step size. No particular stopping rule is currently implemented (only a fixed number of iterations, defaulting to 100).

Author
  Tobias Lasser - initial code

Template Parameters
  data_t: data type for the domain and range of the problem, defaulting to real_t
Public Functions

GradientDescent(const Problem<data_t> &problem, real_t stepSize)
Constructor for gradient descent, accepting a problem and a fixed step size.

Parameters
  [in] problem: the problem that is supposed to be solved
  [in] stepSize: the fixed step size to be used while solving

GradientDescent(const Problem<data_t> &problem)
Constructor for gradient descent, accepting a problem. The step size will be computed as \( \frac{1}{L} \), with \( L \) being the Lipschitz constant of the function.

Parameters
  [in] problem: the problem that is supposed to be solved

GradientDescent(const GradientDescent<data_t>&) = delete
Make copy constructor deletion explicit.

~GradientDescent() override = default
Default destructor.
Private Functions

DataContainer<data_t> &solveImpl(index_t iterations) override
Solve the optimization problem, i.e. apply iterations number of iterations of gradient descent.

Returns
  a reference to the current solution

Parameters
  [in] iterations: number of iterations to execute (the default value of 0 executes _defaultIterations iterations)

GradientDescent<data_t> *cloneImpl() const override
Implement the polymorphic clone operation.
Nesterov’s Fast Gradient Method

template <typename data_t = real_t>
class elsa::FGM : public elsa::Solver<real_t>

Class representing Nesterov’s Fast Gradient Method.

This class implements Nesterov’s Fast Gradient Method, a first-order method for efficiently optimizing convex functions with Lipschitz-continuous gradients. No particular stopping rule is currently implemented (only a fixed number of iterations, defaulting to 100).

Author
  Michael Loipführer - initial code

Template Parameters
  data_t: data type for the domain and range of the problem, defaulting to real_t

References:
  https://doi.org/10.1007/s10107-015-0949-3
Public Functions

FGM(const Problem<data_t> &problem, data_t epsilon = std::numeric_limits<data_t>::epsilon())
Constructor for FGM, accepting an optimization problem and, optionally, a value for epsilon.

Parameters
  [in] problem: the problem that is supposed to be solved
  [in] epsilon: affects the stopping condition

~FGM() override = default
Default destructor.
Private Functions

DataContainer<data_t> &solveImpl(index_t iterations) override
Solve the optimization problem, i.e. apply iterations number of iterations of FGM.

Returns
  a reference to the current solution

Parameters
  [in] iterations: number of iterations to execute (the default value of 0 executes _defaultIterations iterations)
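The core of the method is the standard Nesterov iteration: a gradient step with step size \( \frac{1}{L} \) taken from an extrapolated point. A self-contained sketch of that general scheme on a small quadratic (not elsa's internal implementation):

```cpp
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;

// Test problem: f(x) = 2 x0^2 + 0.5 x1^2 - x0 - x1; gradient Lipschitz
// constant L = 4, minimizer (1/4, 1).
static double f(const Vec2& x) {
    return 2.0 * x[0] * x[0] + 0.5 * x[1] * x[1] - x[0] - x[1];
}
static Vec2 gradient(const Vec2& x) { return {4.0 * x[0] - 1.0, x[1] - 1.0}; }

// Nesterov's fast gradient method: extrapolate with momentum
// (t_k - 1) / t_{k+1}, then take a 1/L gradient step from that point.
Vec2 fgm(Vec2 x, double L, int iterations) {
    Vec2 xPrev = x;
    double t = 1.0;
    for (int i = 0; i < iterations; ++i) {
        const double tNext = 0.5 * (1.0 + std::sqrt(1.0 + 4.0 * t * t));
        const double momentum = (t - 1.0) / tNext;
        const Vec2 y = {x[0] + momentum * (x[0] - xPrev[0]),
                        x[1] + momentum * (x[1] - xPrev[1])};  // extrapolation
        const Vec2 g = gradient(y);
        xPrev = x;
        x = {y[0] - g[0] / L, y[1] - g[1] / L};                // gradient step
        t = tNext;
    }
    return x;
}
```

The first iteration has zero momentum (t_0 = 1), so it reduces to a plain gradient step; afterwards the momentum factor approaches 1, giving the method its \( O(1/k^2) \) convergence rate in function value.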
Optimized Gradient Method

template <typename data_t = real_t>
class elsa::OGM : public elsa::Solver<real_t>

Class representing the Optimized Gradient Method.

This class implements the Optimized Gradient Method as introduced by Kim and Fessler in 2016. OGM is a first-order method for efficiently optimizing convex functions with Lipschitz-continuous gradients, and can be seen as an improvement on Nesterov’s Fast Gradient Method. No particular stopping rule is currently implemented (only a fixed number of iterations, defaulting to 100).

Author
  Michael Loipführer - initial code

Template Parameters
  data_t: data type for the domain and range of the problem, defaulting to real_t

References:
  https://doi.org/10.1007/s10107-015-0949-3
Public Functions

OGM(const Problem<data_t> &problem, data_t epsilon = std::numeric_limits<data_t>::epsilon())
Constructor for OGM, accepting an optimization problem and, optionally, a value for epsilon.

Parameters
  [in] problem: the problem that is supposed to be solved
  [in] epsilon: affects the stopping condition

~OGM() override = default
Default destructor.
Private Functions

DataContainer<data_t> &solveImpl(index_t iterations) override
Solve the optimization problem, i.e. apply iterations number of iterations of OGM.

Returns
  a reference to the current solution

Parameters
  [in] iterations: number of iterations to execute (the default value of 0 executes _defaultIterations iterations)
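Compared with FGM, OGM adds a second momentum term \( \frac{\theta_k}{\theta_{k+1}} (y_{k+1} - x_k) \) and modifies the final \( \theta \) update. A self-contained sketch of the scheme as published by Kim and Fessler, on the same small quadratic (not elsa's internal implementation):

```cpp
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;

// Test problem: f(x) = 2 x0^2 + 0.5 x1^2 - x0 - x1; gradient Lipschitz
// constant L = 4, minimizer (1/4, 1).
static double f(const Vec2& x) {
    return 2.0 * x[0] * x[0] + 0.5 * x[1] * x[1] - x[0] - x[1];
}
static Vec2 gradient(const Vec2& x) { return {4.0 * x[0] - 1.0, x[1] - 1.0}; }

// Optimized gradient method: like Nesterov's method, but with an extra
// momentum term theta/thetaNext * (yNext - x), and a special theta update
// (factor 8 instead of 4) in the last iteration.
Vec2 ogm(Vec2 x, double L, int iterations) {
    Vec2 y = x;
    double theta = 1.0;
    for (int i = 0; i < iterations; ++i) {
        const Vec2 g = gradient(x);
        const Vec2 yNext = {x[0] - g[0] / L, x[1] - g[1] / L};  // gradient step
        const double thetaNext =
            (i < iterations - 1)
                ? 0.5 * (1.0 + std::sqrt(1.0 + 4.0 * theta * theta))
                : 0.5 * (1.0 + std::sqrt(1.0 + 8.0 * theta * theta));
        const double a = (theta - 1.0) / thetaNext;  // Nesterov-style momentum
        const double b = theta / thetaNext;          // OGM's extra momentum
        x = {yNext[0] + a * (yNext[0] - y[0]) + b * (yNext[0] - x[0]),
             yNext[1] + a * (yNext[1] - y[1]) + b * (yNext[1] - x[1])};
        y = yNext;
        theta = thetaNext;
    }
    return x;
}
```

The extra momentum term is what halves OGM's worst-case bound on \( f(x_N) - f^\ast \) relative to FGM, which is the sense in which it is "optimized".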