# elsa solvers¶

## Solver¶

template<typename data_t = real_t>
class elsa::Solver : public elsa::Cloneable<Solver<data_t>>

Base class representing a solver for an optimization problem.

This class represents abstract (typically iterative) solvers acting on optimization problems.

Author

Matthias Wieczorek - initial code

Author

Maximilian Hornung - modularization

Author

Tobias Lasser - rewrite, modernization

Template Parameters
• data_t: data type for the domain and range of the problem, defaulting to real_t

Public Functions

Solver(const Problem<data_t> &problem)

Constructor for the solver, accepting an optimization problem.

Parameters
• [in] problem: the problem that is supposed to be solved

~Solver() override = default

default destructor

const DataContainer<data_t> &getCurrentSolution() const

return the current estimated solution (const version)

DataContainer<data_t> &getCurrentSolution()

return the current estimated solution

DataContainer<data_t> &solve(index_t iterations = 0)

Solve the optimization problem (most likely iteratively)

Please note: this method calls solveImpl, which has to be overridden in derived classes.

Return

a reference to the current solution (after solving)

Parameters
• [in] iterations: number of iterations to execute (optional; the default value of 0 lets the solver choose how many iterations to execute)

Protected Functions

virtual DataContainer<data_t> &solveImpl(index_t iterations) = 0

the solve method to be overridden in derived classes

bool isEqual(const Solver<data_t> &other) const override

implement the polymorphic comparison operation

Protected Attributes

std::unique_ptr<Problem<data_t>> _problem

the optimization problem

## CG¶

template<typename data_t = real_t>
class elsa::CG : public elsa::Solver<data_t>

Class implementing the conjugate gradient method.

CG is an iterative method for minimizing quadric functionals $$\frac{1}{2} x^T A x - x^T b$$ with a symmetric positive definite (SPD) operator $$A$$. Some common optimization problems, e.g. weighted least squares or Tikhonov-regularized weighted least squares, can be reformulated as quadric problems and solved using CG, even with non-SPD operators, through the normal equation. Conversion to the normal equation is performed automatically, where possible.

Author

Matthias Wieczorek - initial code

Author

David Frank - modularization and modernization

Author

Nikola Dinev - rewrite, various enhancements

Convergence is considered reached when $$\| Ax - b \| \leq \epsilon \|Ax_0 - b\|$$ is satisfied for some small $$\epsilon > 0$$. Here $$x$$ denotes the solution obtained in the last step, and $$x_0$$ denotes the initial guess.

Public Functions

CG(const Problem<data_t> &problem, data_t epsilon = std::numeric_limits<data_t>::epsilon())

Constructor for CG, accepting an optimization problem and, optionally, a value for epsilon.

If the problem is not a QuadricProblem, a conversion will be attempted; the constructor throws if the conversion fails. See QuadricProblem for details on which problems are convertible to quadric form.
Parameters
• [in] problem: the problem that is supposed to be solved

• [in] epsilon: affects the stopping condition

CG(const Problem<data_t> &problem, const LinearOperator<data_t> &preconditionerInverse, data_t epsilon = std::numeric_limits<data_t>::epsilon())

Constructor for preconditioned CG, accepting an optimization problem, the inverse of the preconditioner, and, optionally, a value for epsilon.

If the problem is not a QuadricProblem, a conversion will be attempted; the constructor throws if the conversion fails. See QuadricProblem for details on which problems are convertible to quadric form.
Parameters
• [in] problem: the problem that is supposed to be solved

• [in] preconditionerInverse: the inverse of the preconditioner

• [in] epsilon: affects the stopping condition

CG(const CG<data_t>&) = delete

make copy constructor deletion explicit

~CG() override = default

default destructor

Private Functions

DataContainer<data_t> &solveImpl(index_t iterations) override

Solve the optimization problem, i.e. apply iterations number of iterations of CG.

Return

a reference to the current solution

Parameters
• [in] iterations: number of iterations to execute (the default value of 0 executes _defaultIterations iterations)

CG<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Solver<data_t> &other) const override

implement the polymorphic comparison operation

Private Members

const index_t _defaultIterations = {100}

the default number of iterations

std::unique_ptr<LinearOperator<data_t>> _preconditionerInverse = {}

the inverse of the preconditioner (if supplied)

data_t _epsilon

variable affecting the stopping condition

## GradientDescent¶

template<typename data_t = real_t>
class elsa::GradientDescent : public elsa::Solver<data_t>

Class representing a simple gradient descent solver with a fixed, given step size.

This class implements a simple iterative gradient descent solver with a fixed, given step size. No particular stopping rule is currently implemented; only a fixed number of iterations is executed (defaulting to 100).

Author

Tobias Lasser - initial code

Template Parameters
• data_t: data type for the domain and range of the problem, defaulting to real_t

Public Functions

GradientDescent(const Problem<data_t> &problem, real_t stepSize)

Constructor for gradient descent, accepting a problem and a fixed step size.

Parameters
• [in] problem: the problem that is supposed to be solved

• [in] stepSize: the fixed step size to be used while solving

GradientDescent(const Problem<data_t> &problem)

Constructor for gradient descent, accepting a problem. The step size will be computed as $$\frac{1}{L}$$, with $$L$$ being the Lipschitz constant of the problem's gradient.

Parameters
• [in] problem: the problem that is supposed to be solved

GradientDescent(const GradientDescent<data_t>&) = delete

make copy constructor deletion explicit

~GradientDescent() override = default

default destructor

Private Functions

DataContainer<data_t> &solveImpl(index_t iterations) override

Solve the optimization problem, i.e. apply iterations number of iterations of gradient descent.

Return

a reference to the current solution

Parameters
• [in] iterations: number of iterations to execute (the default value of 0 executes _defaultIterations iterations)

GradientDescent<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Solver<data_t> &other) const override

implement the polymorphic comparison operation

Private Members

real_t _stepSize

the step size

const index_t _defaultIterations = {100}

the default number of iterations

## FGM¶

template<typename data_t = real_t>
class elsa::FGM : public elsa::Solver<data_t>

Class representing Nesterov’s Fast Gradient Method.

This class implements Nesterov’s Fast Gradient Method (FGM), a first-order method for efficiently optimizing convex functions with Lipschitz-continuous gradients.
Author

Michael Loipführer - initial code

Template Parameters
• data_t: data type for the domain and range of the problem, defaulting to real_t

No particular stopping rule is currently implemented; only a fixed number of iterations is executed (defaulting to 100).

References: https://doi.org/10.1007/s10107-015-0949-3

Public Functions

FGM(const Problem<data_t> &problem, data_t epsilon = std::numeric_limits<data_t>::epsilon())

Constructor for FGM, accepting an optimization problem and, optionally, a value for epsilon.

Parameters
• [in] problem: the problem that is supposed to be solved

• [in] epsilon: affects the stopping condition

FGM(const FGM<data_t>&) = delete

make copy constructor deletion explicit

~FGM() override = default

default destructor

Private Functions

DataContainer<data_t> &solveImpl(index_t iterations) override

Solve the optimization problem, i.e. apply iterations number of iterations of FGM.

Return

a reference to the current solution

Parameters
• [in] iterations: number of iterations to execute (the default value of 0 executes _defaultIterations iterations)

FGM<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Solver<data_t> &other) const override

implement the polymorphic comparison operation

Private Members

const index_t _defaultIterations = {100}

the default number of iterations

data_t _epsilon

variable affecting the stopping condition

## OGM¶

template<typename data_t = real_t>
class elsa::OGM : public elsa::Solver<data_t>

Class representing the Optimized Gradient Method.

This class implements the Optimized Gradient Method as introduced by Kim and Fessler in 2016.

OGM is a first-order method for efficiently optimizing convex functions with Lipschitz-continuous gradients. It can be seen as an improvement on Nesterov’s Fast Gradient Method.
Author

Michael Loipführer - initial code

Template Parameters
• data_t: data type for the domain and range of the problem, defaulting to real_t

No particular stopping rule is currently implemented; only a fixed number of iterations is executed (defaulting to 100).

References: https://doi.org/10.1007/s10107-015-0949-3

Public Functions

OGM(const Problem<data_t> &problem, data_t epsilon = std::numeric_limits<data_t>::epsilon())

Constructor for OGM, accepting an optimization problem and, optionally, a value for epsilon.

Parameters
• [in] problem: the problem that is supposed to be solved

• [in] epsilon: affects the stopping condition

OGM(const OGM<data_t>&) = delete

make copy constructor deletion explicit

~OGM() override = default

default destructor

Private Functions

DataContainer<data_t> &solveImpl(index_t iterations) override

Solve the optimization problem, i.e. apply iterations number of iterations of OGM.

Return

a reference to the current solution

Parameters
• [in] iterations: number of iterations to execute (the default value of 0 executes _defaultIterations iterations)

OGM<data_t> *cloneImpl() const override

implement the polymorphic clone operation

bool isEqual(const Solver<data_t> &other) const override

implement the polymorphic comparison operation

Private Members

const index_t _defaultIterations = {100}

the default number of iterations

data_t _epsilon

variable affecting the stopping condition