A manifold objective

The Objective describes the actual cost function and all its properties.

Manopt.AbstractManifoldObjective β€” Type
AbstractManifoldObjective{E<:AbstractEvaluationType}

Describe the collection of the optimization function $f: \mathcal M → ℝ$ (or even a vectorial range) and its corresponding elements, which might for example be a gradient or (one or more) proximal maps.

All these elements should usually be implemented as functions (M, p) -> ..., or (M, X, p) -> ..., that is

  • the first argument of these functions should be the manifold M they are defined on
  • the argument X is present if the computation is performed in place of X (see InplaceEvaluation)
  • the argument p is the point at which the function ($f$ or one of its elements) is evaluated.

The type E indicates the global AbstractEvaluationType.

source

The functions contained in such an objective come in two possible evaluation modes; this does not necessarily concern the cost, but for example the gradient in an AbstractManifoldGradientObjective.
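
For example, for a gradient objective the two modes look as follows; a minimal sketch on the sphere, where the cost and its gradient are illustrative choices:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
# AllocatingEvaluation (default): the gradient function returns a new tangent vector
grad_f(M, p) = project(M, p, [2p[1], 0.0, 0.0])
obj_alloc = ManifoldGradientObjective(f, grad_f)

# InplaceEvaluation: the gradient function writes its result into X
grad_f!(M, X, p) = project!(M, X, p, [2p[1], 0.0, 0.0])
obj_inplace = ManifoldGradientObjective(f, grad_f!; evaluation=InplaceEvaluation())
```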

Decorators for objectives

An objective can be decorated using the following trait and function to initialize

Manopt.dispatch_objective_decorator β€” Function
dispatch_objective_decorator(o::AbstractManifoldObjective)

Indicate internally whether an AbstractManifoldObjective o is of decorating type, that is, it stores (encapsulates) an objective in itself, by default in the field o.objective.

Decorators indicate this by returning Val{true} for further dispatch.

The default is Val{false}, so by default an objective is not decorated.

source
Manopt.decorate_objective! β€” Function
decorate_objective!(M, o::AbstractManifoldObjective)

Decorate the AbstractManifoldObjective o with specific decorators.

Optional arguments

Optional arguments provide necessary details on the decorators; a specific keyword is used to activate each decorator.

  • cache: (missing) specify a cache. Currently :Simple is supported, as well as :LRU if you load LRUCache.jl. In the latter case a tuple can be provided that specifies what to cache and how many values to keep, for example (:LRU, [:Cost, :Gradient], 10) states that the last 10 used cost function evaluations and gradient evaluations should be stored. See objective_cache_factory for details.
  • count: (missing) specify calls to the objective to be counted, see ManifoldCountObjective for the full list
  • objective_type: (:Riemannian) specify whether an objective is :Riemannian or :Euclidean. The :Euclidean symbol is equivalent to specifying it as :Embedded, since in the end both refer to converting an objective from the embedding (whether it is Euclidean or not) to the Riemannian one.

See also

objective_cache_factory

source

Embedded objectives

Manopt.EmbeddedManifoldObjective β€” Type
EmbeddedManifoldObjective{P, T, E, O2, O1<:AbstractManifoldObjective{E}} <:
   AbstractDecoratedManifoldObjective{O2, O1}

Declare an objective to be defined in the embedding. This also declares the gradient to be defined in the embedding, and especially being the Riesz representer with respect to the metric in the embedding. The types can be used to still dispatch on the undecorated objective type O2 as well.

Fields

  • objective: the objective that is defined in the embedding
  • p: (nothing) a point in the embedding.
  • X: (nothing) a tangent vector in the embedding

When a point p in the embedding is provided, embed! is used in place of this point to reduce memory allocations. Similarly, X is used when embedding tangent vectors.

source
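
As a sketch of how this is used: state the cost and gradient in the embedding ℝ³ of the sphere and let the decorator convert the Euclidean gradient. The Rayleigh quotient cost here is illustrative, and it is assumed that the objective_type keyword described above is passed through by the solver to decorate_objective!:

```julia
using Manopt, Manifolds, LinearAlgebra

M = Sphere(2)
A = Diagonal([3.0, 2.0, 1.0])   # illustrative symmetric matrix
f(M, p) = p' * A * p            # Euclidean formula in the embedding
grad_f(M, p) = 2 * A * p        # Euclidean gradient, not tangent to M yet

p0 = normalize([1.0, 1.0, 1.0])
# objective_type=:Euclidean wraps everything in an EmbeddedManifoldObjective,
# so the gradient is converted via riemannian_gradient
q = gradient_descent(M, f, grad_f, p0; objective_type=:Euclidean)
```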

Cache objective

Since single function calls, for example to the cost or the gradient, might be expensive, a simple cache objective exists as a decorator that caches one cost value or gradient.

It can be activated/used with the cache= keyword argument available for every solver.
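
For example, a sketch with an LRU cache, assuming LRUCache.jl is loaded; the cost and gradient are illustrative:

```julia
using Manopt, Manifolds, LRUCache

M = Sphere(2)
f(M, p) = p[3]
grad_f(M, p) = project(M, p, [0.0, 0.0, 1.0])
p0 = [1.0, 0.0, 0.0]
# cache the 25 most recently used cost and gradient evaluations
q = gradient_descent(M, f, grad_f, p0; cache=(:LRU, [:Cost, :Gradient], 25))
```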

Manopt.reset_counters! β€” Function
reset_counters!(co::ManifoldCountObjective, value::Integer=0)

Reset all values in the count objective to value.

source
Manopt.objective_cache_factory β€” Function
objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Symbol)

Generate a cached variant of the AbstractManifoldObjective o on the AbstractManifold M based on the symbol cache.

The following caches are available

  • :Simple generates a SimpleManifoldCachedObjective
  • :LRU generates a ManifoldCachedObjective, where you should use the form (:LRU, [:Cost, :Gradient]) to specify what should be cached or (:LRU, [:Cost, :Gradient], 100) to also specify the cache size. The first form defaults to a cache size of 100, caching up to 100 cost and gradient values.
source
objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Tuple{Symbol, Array, Array})
objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Tuple{Symbol, Array})

Generate a cached variant of the AbstractManifoldObjective o on the AbstractManifold M based on the symbol cache[1], where the second element cache[2] contains further arguments for the cache, and the optional third element is passed down as keyword arguments.

For all available caches see the simpler variant with symbols.

source

A simple cache

A first generic cache is always available, but it only caches one gradient and one cost function evaluation (for the same point).

Manopt.SimpleManifoldCachedObjective β€” Type
 SimpleManifoldCachedObjective{O<:AbstractManifoldGradientObjective{E,TC,TG}, P, T,C} <: AbstractManifoldGradientObjective{E,TC,TG}

Provide a simple cache for an AbstractManifoldGradientObjective, that is, for a given point p this cache stores the point p and a gradient $\operatorname{grad} f(p)$ in X as well as a cost value $f(p)$ in c.

Both X and c are accompanied by booleans to keep track of their validity.

Constructor

SimpleManifoldCachedObjective(M::AbstractManifold, obj::AbstractManifoldGradientObjective; kwargs...)

Keyword

  • p: (rand(M)) a point on the manifold to initialize the cache with
  • X: (get_gradient(M, obj, p) or zero_vector(M,p)) a tangent vector to store the gradient in, see also initialized
  • c: (get_cost(M, obj, p) or 0.0) a value to store the cost function in, see also initialized
  • initialized: (true) whether to initialize the cached X and c or not.
source
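
A sketch of activating this cache directly via the factory; the objective is the illustrative gradient objective from the first code example:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = project(M, p, [2p[1], 0.0, 0.0])
obj = ManifoldGradientObjective(f, grad_f)
# wrap obj so that repeated calls at the same point reuse the stored values
simple_cached = Manopt.objective_cache_factory(M, obj, :Simple)
p = rand(M)
get_cost(M, simple_cached, p)  # evaluates and stores f(p)
get_cost(M, simple_cached, p)  # second call returns the cached value
```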

A generic cache

For the more advanced cache, you need a cache type that provides a get! method and for which init_caches is implemented. Both are provided, for example, if you load LRUCache.jl. Then you obtain

Manopt.ManifoldCachedObjective β€” Type
ManifoldCachedObjective{E,P,O<:AbstractManifoldObjective{<:E},C<:NamedTuple{}} <: AbstractDecoratedManifoldObjective{E,P}

Create a cache for an objective, based on a NamedTuple that stores some kind of cache.

Constructor

ManifoldCachedObjective(M, o::AbstractManifoldObjective, caches::Vector{Symbol}; kwargs...)

Create a cache for the AbstractManifoldObjective where the Symbols in caches indicate, which function evaluations to cache.

Supported symbols

| Symbol | Caches calls to (incl. ! variants) | Comment |
|:--- |:--- |:--- |
| :Constraints | get_constraints | vector of numbers |
| :Cost | get_cost | |
| :EqualityConstraint | get_equality_constraint | numbers per (p,i) |
| :EqualityConstraints | get_equality_constraints | vector of numbers |
| :GradEqualityConstraint | get_grad_equality_constraint | tangent vector per (p,i) |
| :GradEqualityConstraints | get_grad_equality_constraints | vector of tangent vectors |
| :GradInequalityConstraint | get_grad_inequality_constraint | tangent vector per (p,i) |
| :GradInequalityConstraints | get_grad_inequality_constraints | vector of tangent vectors |
| :Gradient | get_gradient(M,p) | tangent vectors |
| :Hessian | get_hessian | tangent vectors |
| :InequalityConstraint | get_inequality_constraint | numbers per (p,j) |
| :InequalityConstraints | get_inequality_constraints | vector of numbers |
| :Preconditioner | get_preconditioner | tangent vectors |
| :ProximalMap | get_proximal_map | point per (p,λ,i) |
| :StochasticGradients | get_gradients | vector of tangent vectors |
| :StochasticGradient | get_gradient(M, p, i) | tangent vector per (p,i) |
| :SubGradient | get_subgradient | tangent vectors |
| :SubtrahendGradient | get_subtrahend_gradient | tangent vectors |

Keyword arguments

  • p: (rand(M)) the type of the keys to be used in the caches. Defaults to the default representation on M.
  • value: (get_cost(M, objective, p)) the type of values for numeric values in the cache
  • X: (zero_vector(M,p)) the type of values to be cached for gradient and Hessian calls.
  • cache: ([:Cost]) a vector of symbols indicating which function calls should be cached.
  • cache_size: (10) number of (least recently used) calls to cache
  • cache_sizes: (Dict{Symbol,Int}()) a named tuple or dictionary specifying the sizes individually for each cache.
source
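
A direct construction might look as follows, a sketch that requires LRUCache.jl to be loaded so the :LRU caches can be set up; cost and gradient are illustrative:

```julia
using Manopt, Manifolds, LRUCache

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = project(M, p, [2p[1], 0.0, 0.0])
obj = ManifoldGradientObjective(f, grad_f)
# cache the 50 most recently used cost values and gradients
cached_obj = ManifoldCachedObjective(M, obj, [:Cost, :Gradient]; cache_size=50)
```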
Manopt.init_caches β€” Function
init_caches(caches, T::Type{LRU}; kwargs...)

Given a vector of symbols caches, this function sets up the NamedTuple of caches, where T is the type of cache to use.

Keyword arguments

  • p: (rand(M)) a point on a manifold, to both infer its type for keys and initialize caches
  • value: (0.0) a value for both typing and initialising number caches; the default is for (Float) values like the cost
  • X: (zero_vector(M, p)) a tangent vector at p to both type and initialize tangent vector caches
  • cache_size: (10) a default cache size to use
  • cache_sizes: (Dict{Symbol,Int}()) a dictionary of sizes for the caches to specify different (non-default) sizes
source
init_caches(M::AbstractManifold, caches, T; kwargs...)

Given a vector of symbols caches, this function sets up the NamedTuple of caches for points/vectors on M, where T is the type of cache to use.

source

Count objective

Manopt.ManifoldCountObjective β€” Type
ManifoldCountObjective{E,P,O<:AbstractManifoldObjective,I<:Integer} <: AbstractDecoratedManifoldObjective{E,P}

A wrapper for any AbstractManifoldObjective of type O to count different calls to parts of the objective.

Fields

  • counts: a dictionary of symbols mapping to integers keeping the counted values
  • objective: the wrapped objective

Supported symbols

| Symbol | Counts calls to (incl. ! variants) | Comment |
|:--- |:--- |:--- |
| :Constraints | get_constraints | |
| :Cost | get_cost | |
| :EqualityConstraint | get_equality_constraint | requires vector of counters |
| :EqualityConstraints | get_equality_constraints | does not count single access |
| :GradEqualityConstraint | get_grad_equality_constraint | requires vector of counters |
| :GradEqualityConstraints | get_grad_equality_constraints | does not count single access |
| :GradInequalityConstraint | get_grad_inequality_constraint | requires vector of counters |
| :GradInequalityConstraints | get_grad_inequality_constraints | does not count single access |
| :Gradient | get_gradient(M,p) | |
| :Hessian | get_hessian | |
| :InequalityConstraint | get_inequality_constraint | requires vector of counters |
| :InequalityConstraints | get_inequality_constraints | does not count single access |
| :Preconditioner | get_preconditioner | |
| :ProximalMap | get_proximal_map | |
| :StochasticGradients | get_gradients | |
| :StochasticGradient | get_gradient(M, p, i) | |
| :SubGradient | get_subgradient | |
| :SubtrahendGradient | get_subtrahend_gradient | |

Constructors

ManifoldCountObjective(objective::AbstractManifoldObjective, counts::Dict{Symbol, <:Integer})

Initialise the ManifoldCountObjective to wrap objective, initializing the set of counts.

ManifoldCountObjective(M::AbstractManifold, objective::AbstractManifoldObjective, count::AbstractVector{Symbol}, init=0)

Count function calls on objective using the symbols in count, initialising all entries to init.

source
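
A sketch of wrapping an objective and querying the counters; get_count is assumed here to be the accessor for the stored counts, and cost and gradient are illustrative:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = project(M, p, [2p[1], 0.0, 0.0])
mgo = ManifoldGradientObjective(f, grad_f)
co = ManifoldCountObjective(M, mgo, [:Cost, :Gradient])

p = rand(M)
get_cost(M, co, p)      # counted cost evaluation
get_gradient(M, co, p)  # counted gradient evaluation
get_count(co, :Cost)    # -> 1
```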

Internal decorators

Manopt.ReturnManifoldObjective β€” Type
ReturnManifoldObjective{E,O2,O1<:AbstractManifoldObjective{E}} <:
   AbstractDecoratedManifoldObjective{E,O2}

A wrapper to indicate that get_solver_result should return the inner objective.

The types are such that one can still dispatch on the undecorated type O2 of the original objective as well.

source

Specific objective types and their access functions

Cost objective

Manopt.AbstractManifoldCostObjective β€” Type
AbstractManifoldCostObjective{T<:AbstractEvaluationType} <: AbstractManifoldObjective{T}

Representing objectives on manifolds with a cost function implemented.

source
Manopt.ManifoldCostObjective β€” Type
ManifoldCostObjective{T, TC} <: AbstractManifoldCostObjective{T, TC}

specify an AbstractManifoldObjective that only has information about the cost function $f: \mathcal M → ℝ$, implemented as a function (M, p) -> c to compute the cost value c at p on the manifold M.

  • cost: a function $f: \mathcal M β†’ ℝ$ to minimize

Constructors

ManifoldCostObjective(f)

Generate an objective of this type. While it does not have any allocating functions, the type T can be set for consistency reasons with other objectives.

Used with

NelderMead, particle_swarm

source

Access functions

Manopt.get_cost β€” Function
get_cost(amp::AbstractManoptProblem, p)

evaluate the cost function f stored within the AbstractManifoldObjective of an AbstractManoptProblem amp at the point p.

source
get_cost(M::AbstractManifold, obj::AbstractManifoldObjective, p)

evaluate the cost function f defined on M stored within the AbstractManifoldObjective at the point p.

source
get_cost(M::AbstractManifold, mco::AbstractManifoldCostObjective, p)

Evaluate the cost function from within the AbstractManifoldCostObjective on M at p.

By default this implementation assumes that the cost is stored within mco.cost.

source
get_cost(TpM, trmo::TrustRegionModelObjective, X)

Evaluate the TrustRegionModelObjective on the tangent space TpM

\[m(X) = f(p) + ⟨\operatorname{grad} f(p), X ⟩_p + \frac{1}{2} ⟨\operatorname{Hess} f(p)[X], X⟩_p.\]

source
get_cost(TpM, trmo::AdaptiveRagularizationWithCubicsModelObjective, X)

Evaluate the AdaptiveRagularizationWithCubicsModelObjective on the tangent space TpM

\[m(X) = f(p) + ⟨\operatorname{grad} f(p), X ⟩_p + \frac{1}{2} ⟨\operatorname{Hess} f(p)[X], X⟩_p + \frac{Οƒ}{3} \lVert X \rVert^3,\]

at X, cf. Eq. (33) in [ABBC20].

source
get_cost(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p, i)

Evaluate the ith summand of the cost.

If you use a single function for the stochastic cost, then only the index i=1 is available, and it evaluates the whole cost.

source
get_cost(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)

Evaluate the cost function of an objective defined in the embedding by first embedding p before calling the cost function stored in the EmbeddedManifoldObjective.

source

and internally

Manopt.get_cost_function β€” Function
get_cost_function(amco::AbstractManifoldCostObjective)

return the function to evaluate (just) the cost $f(p)=c$ as a function (M,p) -> c.

source
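
For example, a minimal sketch with an illustrative cost:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
mco = ManifoldCostObjective(f)
fc = Manopt.get_cost_function(mco)
p = rand(M)
fc(M, p) == get_cost(M, mco, p)  # true, both evaluate f at p
```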

Gradient objectives

Manopt.ManifoldGradientObjective β€” Type
ManifoldGradientObjective{T<:AbstractEvaluationType} <: AbstractManifoldGradientObjective{T}

specify an objective containing a cost and its gradient

Fields

  • cost: a function $f: \mathcal M β†’ ℝ$
  • gradient!!: the gradient $\operatorname{grad}f: \mathcal M β†’ \mathcal T\mathcal M$ of the cost function $f$.

Depending on the AbstractEvaluationType T the gradient can have two forms:

  • as a function (M, p) -> X that allocates memory for X, an AllocatingEvaluation
  • as a function (M, X, p) -> X that works in place of X, an InplaceEvaluation

Constructors

ManifoldGradientObjective(cost, gradient; evaluation=AllocatingEvaluation())

Used with

gradient_descent, conjugate_gradient_descent, quasi_Newton

source
Manopt.ManifoldAlternatingGradientObjective β€” Type
ManifoldAlternatingGradientObjective{E<:AbstractEvaluationType,TCost,TGradient} <: AbstractManifoldGradientObjective{E}

An alternating gradient objective consists of

  • a cost function $F(x)$
  • a gradient $\operatorname{grad}F$ that is either
    • given as one function $\operatorname{grad}F$ returning a tangent vector X on M or
    • an array of gradient functions $\operatorname{grad}F_i$, $i=1,…,n$, each returning a component of the gradient,
    which might be allocating or mutating variants, but not a mix of both.
Note

This objective is usually defined using the ProductManifold from Manifolds.jl, so Manifolds.jl needs to be loaded.

Constructors

ManifoldAlternatingGradientObjective(F, gradF::Function;
    evaluation=AllocatingEvaluation()
)
ManifoldAlternatingGradientObjective(F, gradF::AbstractVector{<:Function};
    evaluation=AllocatingEvaluation()
)

Create an alternating gradient problem with an optional cost and the gradient either as one function (returning an array) or a vector of functions.

source
Manopt.ManifoldStochasticGradientObjective β€” Type
ManifoldStochasticGradientObjective{T<:AbstractEvaluationType} <: AbstractManifoldGradientObjective{T}

A stochastic gradient objective consists of

  • a(n optional) cost function $f(p) = \displaystyle\sum_{i=1}^n f_i(p)$
  • an array of gradients, $\operatorname{grad}f_i(p), i=1,\ldots,n$ which can be given in two forms
    • as one single function $(\mathcal M, p) ↦ (X_1,…,X_n) ∈ (T_p\mathcal M)^n$
    • as a vector of functions $\bigl( (\mathcal M, p) ↦ X_1, …, (\mathcal M, p) ↦ X_n\bigr)$.

Both variants can also be provided as InplaceEvaluation functions (M, X, p) -> X, where X is the vector of X1,...,Xn, and (M, X1, p) -> X1, ..., (M, Xn, p) -> Xn, respectively.

Constructors

ManifoldStochasticGradientObjective(
    grad_f::Function;
    cost=Missing(),
    evaluation=AllocatingEvaluation()
)
ManifoldStochasticGradientObjective(
    grad_f::AbstractVector{<:Function};
    cost=Missing(), evaluation=AllocatingEvaluation()
)

Create a Stochastic gradient problem with the gradient either as one function (returning an array of tangent vectors) or a vector of functions (each returning one tangent vector).

The optional cost can also be given as either a single function (returning a number) or a vector of functions, each returning a value.

Used with

stochastic_gradient_descent

Note that this can also be used with gradient_descent, since the (complete) gradient is just the sum of the single gradients.

source
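
For example, for a Riemannian mean of data points, the summands and their gradients can be given as a vector of functions; a sketch where the data and formulas are illustrative:

```julia
using Manopt, Manifolds, LinearAlgebra

M = Sphere(2)
data = [normalize(randn(3)) for _ in 1:10]  # illustrative points on the sphere
n = length(data)
f(M, p) = sum(distance(M, p, d)^2 for d in data) / (2n)
grad_fi = [(M, p) -> -log(M, p, d) / n for d in data]  # gradient of each summand

sgo = ManifoldStochasticGradientObjective(grad_fi; cost=f)
p = rand(M)
get_cost(M, sgo, p)             # full cost
X = get_gradient(M, sgo, p, 3)  # gradient of the third summand only
```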
Manopt.NonlinearLeastSquaresObjective β€” Type
NonlinearLeastSquaresObjective{T<:AbstractEvaluationType} <: AbstractManifoldObjective{T}

A type for nonlinear least squares problems. T is an AbstractEvaluationType for the f and Jacobian functions.

Specify a nonlinear least squares problem

Fields

  • f a function $f: \mathcal M β†’ ℝ^d$ to minimize
  • jacobian!! Jacobian of the function $f$
  • jacobian_tangent_basis the basis of tangent space used for computing the Jacobian.
  • num_components number of values returned by f (equal to d).

Depending on the AbstractEvaluationType T the function $f$ has to be provided:

  • as a function (M::AbstractManifold, p) -> v that allocates memory for v itself for an AllocatingEvaluation,
  • as a function (M::AbstractManifold, v, p) -> v that works in place of v for an InplaceEvaluation.

Also the Jacobian jacF!! is required:

  • as a function (M::AbstractManifold, p; basis_domain::AbstractBasis) -> v that allocates memory for v itself for an AllocatingEvaluation,
  • as a function (M::AbstractManifold, v, p; basis_domain::AbstractBasis) -> v that works in place of v for an InplaceEvaluation.

Constructors

NonlinearLeastSquaresObjective(M, F, jacF, num_components; evaluation=AllocatingEvaluation(), jacobian_tangent_basis=DefaultOrthonormalBasis())

See also

LevenbergMarquardt, LevenbergMarquardtState

source

There is also a second variant, for the case that just one function is responsible for computing the cost and the gradient:

Manopt.ManifoldCostGradientObjective β€” Type
ManifoldCostGradientObjective{T} <: AbstractManifoldObjective{T}

specify an objective containing one function to perform a combined computation of cost and its gradient

Fields

  • costgrad!!: a function that computes both the cost $f: \mathcal M β†’ ℝ$ and its gradient $\operatorname{grad}f: \mathcal M β†’ \mathcal T\mathcal M$

Depending on the AbstractEvaluationType T, the gradient within costgrad!! can be computed allocating a new tangent vector (AllocatingEvaluation) or in place (InplaceEvaluation).

Constructors

ManifoldCostGradientObjective(costgrad; evaluation=AllocatingEvaluation())

Used with

gradient_descent, conjugate_gradient_descent, quasi_Newton

source
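
A sketch of such a combined function, assuming the convention that the cost and gradient are returned as a tuple; the cost itself is illustrative:

```julia
using Manopt, Manifolds

M = Sphere(2)
function costgrad(M, p)
    c = p[1]^2
    X = project(M, p, [2p[1], 0.0, 0.0])
    return (c, X)  # assumption: cost and gradient returned as a tuple
end
obj = ManifoldCostGradientObjective(costgrad)
```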

Access functions

Manopt.get_gradient β€” Function
X = get_gradient(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)
get_gradient!(M::ProductManifold, P::ManifoldAlternatingGradientObjective, X, p)

Evaluate all summands' gradients at a point p on the ProductManifold M (in place of X).

source
X = get_gradient(M::AbstractManifold, ago::ManifoldAlternatingGradientObjective, p, k)
get_gradient!(M::AbstractManifold, ago::ManifoldAlternatingGradientObjective, X, p, k)

Evaluate one of the component gradients $\operatorname{grad}f_k$, $k∈\{1,…,n\}$, at p (in place of X).

source
get_gradient(s::AbstractManoptSolverState)

return the (last stored) gradient within the AbstractManoptSolverState s. By default this also undecorates the state beforehand.

source
get_gradient(amp::AbstractManoptProblem, p)
get_gradient!(amp::AbstractManoptProblem, X, p)

evaluate the gradient of an AbstractManoptProblem amp at the point p.

The evaluation is done in place of X for the !-variant.

source
get_gradient(M::AbstractManifold, mgo::AbstractManifoldGradientObjective{T}, p)
get_gradient!(M::AbstractManifold, X, mgo::AbstractManifoldGradientObjective{T}, p)

evaluate the gradient of an AbstractManifoldGradientObjective{T} mgo at p.

The evaluation is done in place of X for the !-variant. An objective with T=AllocatingEvaluation might still allocate memory internally. When the non-mutating variant is called with a T=InplaceEvaluation, memory for the result is allocated.

Note that the order of parameters follows the philosophy of Manifolds.jl, namely that even for the mutating variant, the manifold is the first parameter and the (in-place) tangent vector X comes second.

source
get_gradient(agst::AbstractGradientSolverState)

return the gradient stored within gradient options. The default returns agst.X.

source
get_gradient(TpM, trmo::TrustRegionModelObjective, X)

Evaluate the gradient of the TrustRegionModelObjective

\[\operatorname{grad} m(X) = \operatorname{grad} f(p) + \operatorname{Hess} f(p)[X].\]

source
get_gradient(TpM, trmo::AdaptiveRagularizationWithCubicsModelObjective, X)

Evaluate the gradient of the AdaptiveRagularizationWithCubicsModelObjective

\[\operatorname{grad} m(X) = \operatorname{grad} f(p) + \operatorname{Hess} f(p)[X] + Οƒ\lVert X \rVert X,\]

at X, cf. Eq. (37) in [ABBC20].

source
get_gradient(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p, k)
get_gradient!(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, Y, p, k)

Evaluate one of the summands' gradients $\operatorname{grad}f_k$, $k∈\{1,…,n\}$, at p (in place of Y).

If you use a single function for the stochastic gradient, that works in-place, then get_gradient is not available, since the length (or number of elements of the gradient required for allocation) can not be determined.

source
get_gradient(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p)
get_gradient!(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, X, p)

Evaluate the complete gradient $\operatorname{grad} f = \displaystyle\sum_{i=1}^n \operatorname{grad} f_i(p)$ at p (in place of X).

If you use a single function for the stochastic gradient, that works in-place, then get_gradient is not available, since the length (or number of elements of the gradient required for allocation) can not be determined.

source
get_gradient(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)
get_gradient!(M::AbstractManifold, X, emo::EmbeddedManifoldObjective, p)

Evaluate the gradient function of an objective defined in the embedding, that is embed p before calling the gradient function stored in the EmbeddedManifoldObjective.

The returned gradient is then converted to a Riemannian gradient calling riemannian_gradient.

source
Manopt.get_gradients β€” Function
get_gradients(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p)
get_gradients!(M::AbstractManifold, X, sgo::ManifoldStochasticGradientObjective, p)

Evaluate all summands gradients $\{\operatorname{grad}f_i\}_{i=1}^n$ at p (in place of X).

If you use a single function for the stochastic gradient, that works in-place, then get_gradients is not available, since the length (or number of elements of the gradient) cannot be determined.

source

and internally

Manopt.get_gradient_function β€” Function
get_gradient_function(amgo::AbstractManifoldGradientObjective, recursive=false)

return the function to evaluate (just) the gradient $\operatorname{grad} f(p)$, where either the gradient function of the decorated objective or of the undecorated one is used.

By default recursive is set to false, since when one just passes the gradient function on, one usually still wants, for example, the cached version or the one that still counts calls.

Depending on the AbstractEvaluationType E this is a function (M, p) -> X or (M, X, p) -> X.

source

Internal helpers

Manopt.get_gradient_from_Jacobian! β€” Function
get_gradient_from_Jacobian!(
    M::AbstractManifold,
    X,
    nlso::NonlinearLeastSquaresObjective{InplaceEvaluation},
    p,
    Jval=zeros(nlso.num_components, manifold_dimension(M)),
)

Compute the gradient of the NonlinearLeastSquaresObjective nlso at point p in place of X, with a temporary Jacobian stored in the optional argument Jval.

source

Subgradient objective

Manopt.ManifoldSubgradientObjective β€” Type
ManifoldSubgradientObjective{T<:AbstractEvaluationType,C,S} <:AbstractManifoldCostObjective{T, C}

A structure to store information about an objective for a subgradient based optimization problem

Fields

  • cost: the function $f$ to be minimized
  • subgradient: a function returning a subgradient $βˆ‚f$ of $f$

Constructor

ManifoldSubgradientObjective(f, βˆ‚f)

Generate the ManifoldSubgradientObjective for a subgradient objective, consisting of a (cost) function f(M, p) and a function βˆ‚f(M, p) that returns a not necessarily deterministic element from the subdifferential at p on a manifold M.

source
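
For example, the Riemannian distance to a fixed point is not differentiable at that point; a sketch of a subgradient objective for it, returning the zero vector there (which is a valid subgradient at the minimizer):

```julia
using Manopt, Manifolds

M = Sphere(2)
q_ref = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q_ref)
function ∂f(M, p)
    d = distance(M, p, q_ref)
    d == 0 && return zero_vector(M, p)  # any subgradient is valid here
    return -log(M, p, q_ref) / d        # the gradient where f is differentiable
end
obj = ManifoldSubgradientObjective(f, ∂f)
# used, for example, with subgradient_method(M, f, ∂f, rand(M))
```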

Access functions

Manopt.get_subgradient β€” Function
get_subgradient(amp::AbstractManoptProblem, p)
get_subgradient!(amp::AbstractManoptProblem, X, p)

evaluate the subgradient of an AbstractManoptProblem amp at point p.

The evaluation is done in place of X for the !-variant. The result might not be deterministic, one element of the subdifferential is returned.

source
X = get_subgradient(M::AbstractManifold, sgo::ManifoldSubgradientObjective, p)
get_subgradient!(M::AbstractManifold, X, sgo::ManifoldSubgradientObjective, p)

Evaluate the (sub)gradient of a ManifoldSubgradientObjective sgo at the point p.

The evaluation is done in place of X for the !-variant. The result might not be deterministic, one element of the subdifferential is returned.

source

Proximal map objective

Manopt.ManifoldProximalMapObjective β€” Type
ManifoldProximalMapObjective{E<:AbstractEvaluationType, TC, TP, V <: Vector{<:Integer}} <: AbstractManifoldCostObjective{E, TC}

specify a problem for solvers based on the evaluation of proximal maps.

Fields

  • cost - a function $F:\mathcal M→ℝ$ to minimize
  • proxes - proximal maps $\operatorname{prox}_{Ξ»\varphi}:\mathcal Mβ†’\mathcal M$ as functions (M, Ξ», p) -> q.
  • number_of_proxes - (ones(length(proxes))) number of proximal maps per function; to specify when one of the maps is a combined one, such that the proximal map functions return more than one entry per function, you have to adapt this value. If not specified, it is set to one prox per function.

See also

cyclic_proximal_point, get_cost, get_proximal_map

source
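
A sketch: for the (half) squared distance $f(p) = \frac{1}{2}d^2(p,q)$ the proximal map is a geodesic step towards q, moving a fraction $λ/(1+λ)$ of the way; the data and cost here are illustrative:

```julia
using Manopt, Manifolds

M = Sphere(2)
data = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
f(M, p) = sum(distance(M, p, d)^2 / 2 for d in data)
# prox of p ↦ d²(p, q)/2 moves a fraction λ/(1+λ) along the geodesic to q
proxes = [(M, λ, p) -> shortest_geodesic(M, p, d, λ / (1 + λ)) for d in data]
q = cyclic_proximal_point(M, f, proxes, data[1])
```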

Access functions

Manopt.get_proximal_map β€” Function
q = get_proximal_map(M::AbstractManifold, mpo::ManifoldProximalMapObjective, Ξ», p)
get_proximal_map!(M::AbstractManifold, q, mpo::ManifoldProximalMapObjective, Ξ», p)
q = get_proximal_map(M::AbstractManifold, mpo::ManifoldProximalMapObjective, Ξ», p, i)
get_proximal_map!(M::AbstractManifold, q, mpo::ManifoldProximalMapObjective, Ξ», p, i)

evaluate the (ith) proximal map of the ManifoldProximalMapObjective mpo at the point p on M with parameter $λ>0$.

source

Hessian objective

Manopt.ManifoldHessianObjective β€” Type
ManifoldHessianObjective{T<:AbstractEvaluationType,C,G,H,Pre} <: AbstractManifoldHessianObjective{T,C,G,H}

specify a problem for Hessian based algorithms.

Fields

  • cost: a function $f:\mathcal M→ℝ$ to minimize
  • gradient: the gradient $\operatorname{grad}f:\mathcal M β†’ \mathcal T\mathcal M$ of the cost function $f$
  • hessian: the Hessian $\operatorname{Hess}f(x)[β‹…]: \mathcal T_{x} \mathcal M β†’ \mathcal T_{x} \mathcal M$ of the cost function $f$
  • preconditioner: the symmetric, positive definite preconditioner as an approximation of the inverse of the Hessian of $f$, a map with the same input variables as the hessian to numerically stabilize iterations when the Hessian is ill-conditioned

Depending on the AbstractEvaluationType T, the gradient and Hessian can have two forms:

  • as functions (M, p) -> X and (M, p, X) -> Y that allocate memory for their result, an AllocatingEvaluation
  • as functions (M, X, p) -> X and (M, Y, p, X) -> Y that work in place of the result, an InplaceEvaluation

Constructor

ManifoldHessianObjective(f, grad_f, Hess_f, preconditioner = (M, p, X) -> X;
    evaluation=AllocatingEvaluation())

See also

truncated_conjugate_gradient_descent, trust_regions

source
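
A sketch with the Rayleigh quotient on the sphere, whose Riemannian Hessian has a closed form; the matrix is illustrative:

```julia
using Manopt, Manifolds, LinearAlgebra

M = Sphere(2)
A = Symmetric([2.0 1.0 0.0; 1.0 3.0 0.0; 0.0 0.0 1.0])
f(M, p) = p' * A * p
grad_f(M, p) = project(M, p, 2 * A * p)
# Riemannian Hessian of the Rayleigh quotient on the sphere
Hess_f(M, p, X) = 2 * (project(M, p, A * X) - (p' * A * p) * X)
mho = ManifoldHessianObjective(f, grad_f, Hess_f)
# used, for example, with trust_regions(M, f, grad_f, Hess_f, rand(M))
```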

Access functions

Manopt.get_hessian β€” Function
Y = get_hessian(amp::AbstractManoptProblem{T}, p, X)
get_hessian!(amp::AbstractManoptProblem{T}, Y, p, X)

evaluate the Hessian of an AbstractManoptProblem amp at p applied to a tangent vector X, computing $\operatorname{Hess}f(p)[X]$, which can also happen in-place of Y.

source
get_hessian(TpM, trmo::TrustRegionModelObjective, X)

Evaluate the Hessian of the TrustRegionModelObjective

\[\operatorname{Hess} m(X)[Y] = \operatorname{Hess} f(p)[Y].\]

source
get_hessian(M::AbstractManifold, emo::EmbeddedManifoldObjective, p, X)
get_hessian!(M::AbstractManifold, Y, emo::EmbeddedManifoldObjective, p, X)

Evaluate the Hessian of an objective defined in the embedding, that is embed p and X before calling the Hessian function stored in the EmbeddedManifoldObjective.

The returned Hessian is then converted to a Riemannian Hessian calling riemannian_Hessian.

source
Manopt.get_preconditioner β€” Function
get_preconditioner(amp::AbstractManoptProblem, p, X)

evaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function f) of an AbstractManoptProblem amp's objective at the point p applied to a tangent vector X.

source
get_preconditioner(M::AbstractManifold, mho::ManifoldHessianObjective, p, X)

evaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function f) of a ManifoldHessianObjective mho at the point p applied to a tangent vector X.

source

Primal-dual based objectives

Manopt.AbstractPrimalDualManifoldObjective β€” Type
AbstractPrimalDualManifoldObjective{E<:AbstractEvaluationType,C,P} <: AbstractManifoldCostObjective{E,C}

A common abstract super type for objectives that consider primal-dual problems.

source
Manopt.PrimalDualManifoldObjective β€” Type
PrimalDualManifoldObjective{T<:AbstractEvaluationType} <: AbstractPrimalDualManifoldObjective{T}

Describes an objective for the linearized or exact Chambolle-Pock algorithm, cf. [BHS+21], [CP11]

Fields

All fields with !! can either be in-place or allocating functions, which should be set depending on the evaluation= keyword in the constructor and stored in T <: AbstractEvaluationType.

  • cost: $F + G(Ξ›(β‹…))$ to evaluate interim cost function values
  • linearized_forward_operator!!: linearized operator for the forward operation in the algorithm $DΞ›$
  • linearized_adjoint_operator!!: the adjoint differential $(DΞ›)^* : \mathcal N β†’ T\mathcal M$
  • prox_f!!: the proximal map belonging to $f$
  • prox_G_dual!!: the proximal map belonging to $g_n^*$
  • Ξ›!!: (fordward_operator) the forward operator (if given) $Ξ›: \mathcal M β†’ \mathcal N$

Usually either the linearized operator $DΛ$ or $Λ$ is required.

Constructor

PrimalDualManifoldObjective(cost, prox_f, prox_G_dual, adjoint_linearized_operator;
    linearized_forward_operator::Union{Function,Missing}=missing,
    Ξ›::Union{Function,Missing}=missing,
    evaluation::AbstractEvaluationType=AllocatingEvaluation()
)

The last optional argument can be used to provide the 4 or 5 functions as allocating or mutating (in place computation) ones. Note that the first argument is always the manifold under consideration, the mutated one is the second.

source
Manopt.PrimalDualManifoldSemismoothNewtonObjective β€” Type
PrimalDualManifoldSemismoothNewtonObjective{E<:AbstractEvaluationType, TC, LO, ALO, PF, DPF, PG, DPG, L} <: AbstractPrimalDualManifoldObjective{E, TC, PF}

Describes a Problem for the Primal-dual Riemannian semismooth Newton algorithm. [DL21]

Fields

  • cost: $F + G(Ξ›(β‹…))$ to evaluate interim cost function values
  • linearized_operator: the linearization $DΞ›(β‹…)[β‹…]$ of the operator $Ξ›(β‹…)$.
  • linearized_adjoint_operator: the adjoint differential $(DΞ›)^* : \mathcal N β†’ T\mathcal M$
  • prox_F: the proximal map belonging to $F$
  • diff_prox_F: the (Clarke Generalized) differential of the proximal maps of $F$
  • prox_G_dual: the proximal map belonging to $g_n^*$
  • diff_prox_dual_G: the (Clarke Generalized) differential of the proximal maps of $G^\ast_n$
  • Ξ›: the exact forward operator. This operator is required if Ξ›(m)=n does not hold.

Constructor

PrimalDualManifoldSemismoothNewtonObjective(cost, prox_F, prox_G_dual, forward_operator, adjoint_linearized_operator, Λ)
source

Access functions

Manopt.adjoint_linearized_operator β€” Function
X = adjoint_linearized_operator(N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, m, n, Y)
adjoint_linearized_operator(N::AbstractManifold, X, apdmo::AbstractPrimalDualManifoldObjective, m, n, Y)

Evaluate the adjoint of the linearized forward operator $(DΛ(m))^*[Y]$ stored within the AbstractPrimalDualManifoldObjective (in place of X). Since $Y∈T_n\mathcal N$, both $m$ and $n=Λ(m)$ are necessary arguments, mainly because the forward operator $Λ$ might be missing in the objective.

source
Manopt.forward_operator β€” Function
q = forward_operator(M::AbstractManifold, N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, p)
forward_operator!(M::AbstractManifold, N::AbstractManifold, q, apdmo::AbstractPrimalDualManifoldObjective, p)

Evaluate the forward operator $Λ(x)$ stored within the AbstractPrimalDualManifoldObjective (in place of q).

source
Manopt.get_differential_dual_prox β€” Function
Ξ· = get_differential_dual_prox(N::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, n, Ο„, X, ΞΎ)
get_differential_dual_prox!(N::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, Ξ·, n, Ο„, X, ΞΎ)

Evaluate the differential proximal map of $G_n^*$ stored within PrimalDualManifoldSemismoothNewtonObjective

\[D\operatorname{prox}_{Ο„G_n^*}(X)[ΞΎ]\]

which can also be computed in place of Ξ·.

source
Manopt.get_differential_primal_prox β€” Function
y = get_differential_primal_prox(M::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, σ, x)
get_differential_primal_prox!(M::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, y, σ, x)

Evaluate the differential proximal map of $F$ stored within AbstractPrimalDualManifoldObjective

\[D\operatorname{prox}_{ΟƒF}(x)[X]\]

which can also be computed in place of y.

source
Manopt.get_dual_prox β€” Function
Y = get_dual_prox(N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, n, Ο„, X)
get_dual_prox!(N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, Y, n, Ο„, X)

Evaluate the proximal map of $g_n^*$ stored within AbstractPrimalDualManifoldObjective

\[ Y = \operatorname{prox}_{Ο„G_n^*}(X)\]

which can also be computed in place of Y.

source
Manopt.get_primal_prox β€” Function
q = get_primal_prox(M::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, σ, p)
get_primal_prox!(M::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, q, σ, p)

Evaluate the proximal map of $F$ stored within AbstractPrimalDualManifoldObjective

\[\operatorname{prox}_{σF}(p)\]

which can also be computed in place of q.

source
Manopt.linearized_forward_operator β€” Function
Y = linearized_forward_operator(M::AbstractManifold, N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, m, X, n)
linearized_forward_operator!(M::AbstractManifold, N::AbstractManifold, Y, apdmo::AbstractPrimalDualManifoldObjective, m, X, n)

Evaluate the linearized operator (differential) $DΞ›(m)[X]$ stored within the AbstractPrimalDualManifoldObjective (in place of Y), where n = Ξ›(m).

source

Constrained objective

Besides the AbstractEvaluationType there is one further property to distinguish among constraint functions, especially the gradients of the constraints.

Manopt.FunctionConstraint β€” Type
FunctionConstraint <: ConstraintType

A type to indicate that constraints are implemented as one whole function, for example $g(p) ∈ ℝ^m$.

source
Manopt.VectorConstraint β€” Type
VectorConstraint <: ConstraintType

A type to indicate that constraints are implemented as a vector of functions, for example $g_i(p) ∈ ℝ, i=1,…,m$.

source

The ConstraintType is a parameter of the corresponding Objective.

Manopt.ConstrainedManifoldObjective β€” Type
ConstrainedManifoldObjective{T<:AbstractEvaluationType, C<:ConstraintType} <: AbstractManifoldObjective{T}

Describes the constrained objective

\[\begin{aligned} \operatorname*{arg\,min}_{p ∈\mathcal{M}} & f(p)\\ \text{subject to } &g_i(p)\leq0 \quad \text{ for all } i=1,…,m,\\ \quad &h_j(p)=0 \quad \text{ for all } j=1,…,n. \end{aligned}\]

Fields

  • cost: the cost $f$
  • gradient!!: the gradient of the cost $f$
  • g: the inequality constraints
  • grad_g!!: the gradient of the inequality constraints
  • h: the equality constraints
  • grad_h!!: the gradient of the equality constraints

It consists of

  • a cost function $f(p)$
  • the gradient of $f$, $\operatorname{grad}f(p)$
  • inequality constraints $g(p)$, either a function g returning a vector or a vector [g1, g2, ..., gm] of functions.
  • equality constraints $h(p)$, either a function h returning a vector or a vector [h1, h2, ..., hn] of functions.
  • gradients of the inequality constraints $\operatorname{grad}g(p) ∈ (T_p\mathcal M)^m$, either a function or a vector of functions.
  • gradients of the equality constraints $\operatorname{grad}h(p) ∈ (T_p\mathcal M)^n$, either a function or a vector of functions.

There are two ways to specify the constraints $g$ and $h$.

  1. as one function returning a vector in $ℝ^m$ and $ℝ^n$ respectively. This might be easier to implement but requires evaluating all constraints even if only one is needed.
  2. as an AbstractVector{<:Function} where each function returns a real number. This requires each constraint to be implemented as a single function, but it is possible to evaluate also only a single constraint.

The gradients $\operatorname{grad}g$, $\operatorname{grad}h$ have to follow the same form. Additionally they can be implemented as in-place functions or as allocating ones. The gradient $\operatorname{grad}f$ has to be of the same kind. This difference is indicated by the evaluation keyword.

Constructors

ConstrainedManifoldObjective(f, grad_f, g, grad_g, h, grad_h;
    evaluation=AllocatingEvaluation()
)

Where f, g, h describe the cost, inequality and equality constraints, respectively, as described previously, and grad_f, grad_g, grad_h are the corresponding gradient functions in one of the 4 formats. If the objective does not have inequality constraints, you can set G and gradG to nothing. If the problem does not have equality constraints, you can set H and gradH to nothing or leave them out.

ConstrainedManifoldObjective(M::AbstractManifold, F, gradF;
    G=nothing, gradG=nothing, H=nothing, gradH=nothing;
    evaluation=AllocatingEvaluation()
)

A keyword argument variant of the preceding constructor, where you can leave out either G and gradG or H and gradH but not both pairs.

source
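
A sketch using the VectorConstraint style, with one inequality constraint and no equality constraints; cost and constraint are illustrative:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[3]
grad_f(M, p) = project(M, p, [0.0, 0.0, 1.0])
g = [(M, p) -> -p[1]]                                 # g_1(p) = -p[1] ≤ 0, i.e. p[1] ≥ 0
grad_g = [(M, p) -> project(M, p, [-1.0, 0.0, 0.0])]
# no equality constraints, so h and grad_h are set to nothing
co = ConstrainedManifoldObjective(f, grad_f, g, grad_g, nothing, nothing)
get_constraints(M, co, [1.0, 0.0, 0.0])  # evaluate all constraints at a point
```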

Access functions

Manopt.get_constraints β€” Function
get_constraints(M::AbstractManifold, co::ConstrainedManifoldObjective, p)

Return the vector $(g_1(p),…,g_m(p),h_1(p),…,h_n(p))$ from the ConstrainedManifoldObjective co containing the values of all constraints at p.

source
get_constraints(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)

Return the vector $(g_1(p),...g_m(p),h_1(p),...,h_n(p))$ defined in the embedding, that is embed p before calling the constraint functions stored in the EmbeddedManifoldObjective.

source
Manopt.get_equality_constraint β€” Function
get_equality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j)

evaluate the jth equality constraint $(h(p))_j$ or $h_j(p)$.

Note

For the FunctionConstraint representation this still evaluates all constraints.

source
get_equality_constraint(M::AbstractManifold, emo::EmbeddedManifoldObjective, p, j)

evaluate the jth equality constraint $h_j(p)$ defined in the embedding, that is embed p before calling the constraint function stored in the EmbeddedManifoldObjective.

source
Manopt.get_equality_constraints β€” Function
get_equality_constraints(M::AbstractManifold, co::ConstrainedManifoldObjective, p)

evaluate all equality constraints $h(p)$ or $\bigl(h_1(p), h_2(p),\ldots,h_n(p)\bigr)$ of the ConstrainedManifoldObjective co at p.

source
get_equality_constraints(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)

Evaluate all equality constraints $h(p)$ or $\bigl(h_1(p), h_2(p),\ldots,h_n(p)\bigr)$ defined in the embedding, that is embed p before calling the constraint functions stored in the EmbeddedManifoldObjective.

source
Manopt.get_inequality_constraint β€” Function
get_inequality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, i)

evaluate one inequality constraint $(g(p))_i$ or $g_i(p)$.

Note

For the FunctionConstraint representation this still evaluates all constraints.

source
get_inequality_constraint(M::AbstractManifold, ems::EmbeddedManifoldObjective, p, i)

Evaluate the ith inequality constraint $g_i(p)$ defined in the embedding, that is embed p before calling the constraint function stored in the EmbeddedManifoldObjective.

source
Manopt.get_inequality_constraints β€” Function
get_inequality_constraints(M::AbstractManifold, co::ConstrainedManifoldObjective, p)

Evaluate all inequality constraints $g(p)$ or $\bigl(g_1(p), g_2(p),\ldots,g_m(p)\bigr)$ of the ConstrainedManifoldObjective co at p.

source
get_inequality_constraints(M::AbstractManifold, ems::EmbeddedManifoldObjective, p)

Evaluate all inequality constraints $g(p)$ or $\bigl(g_1(p), g_2(p),\ldots,g_m(p)\bigr)$ defined in the embedding, that is embed p before calling the constraint functions stored in the EmbeddedManifoldObjective.

source
Manopt.get_grad_equality_constraint β€” Function
get_grad_equality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j)

evaluate the gradient of the j th equality constraint $(\operatorname{grad} h(p))_j$ or $\operatorname{grad} h_j(x)$.

Note

For the FunctionConstraint variant of the problem, this function still evaluates the full gradient. For the InplaceEvaluation and FunctionConstraint variant of the problem, this function currently also calls get_equality_constraints, since this is the only way to determine the number of constraints. It also allocates a full tangent vector.

source
X = get_grad_equality_constraint(M::AbstractManifold, emo::EmbeddedManifoldObjective, p, j)
get_grad_equality_constraint!(M::AbstractManifold, X, emo::EmbeddedManifoldObjective, p, j)

evaluate the gradient of the jth equality constraint $\operatorname{grad} h_j(p)$ defined in the embedding, that is embed p before calling the gradient function stored in the EmbeddedManifoldObjective.

The returned gradient is then converted to a Riemannian gradient calling riemannian_gradient.

source
Manopt.get_grad_equality_constraints β€” Function
get_grad_equality_constraints(M::AbstractManifold, co::ConstrainedManifoldObjective, p)

evaluate all gradients of the equality constraints $\operatorname{grad} h(p)$ or $\bigl(\operatorname{grad} h_1(p), \operatorname{grad} h_2(p),\ldots, \operatorname{grad}h_n(p)\bigr)$ of the ConstrainedManifoldObjective co at p.

Note

For the InplaceEvaluation and FunctionConstraint variant of the problem, this function currently also calls get_equality_constraints, since this is the only way to determine the number of constraints.

source
X = get_grad_equality_constraints(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)
get_grad_equality_constraints!(M::AbstractManifold, X, emo::EmbeddedManifoldObjective, p)

evaluate the gradients of the equality constraints $\operatorname{grad} h(p)$ defined in the embedding, that is embed p before calling the gradient function stored in the EmbeddedManifoldObjective.

The returned gradients are then converted to a Riemannian gradient calling riemannian_gradient.

source
Manopt.get_grad_equality_constraints! β€” Function
get_grad_equality_constraints!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p)

evaluate all gradients of the equality constraints $\operatorname{grad} h(p)$ or $\bigl(\operatorname{grad} h_1(p), \operatorname{grad} h_2(p),\ldots,\operatorname{grad} h_n(p)\bigr)$ of the ConstrainedManifoldObjective co at p in place of X, which is a vector of n tangent vectors.

source
Manopt.get_grad_equality_constraint! β€” Function
get_grad_equality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, j)

Evaluate the gradient of the jth equality constraint $(\operatorname{grad} h(x))_j$ or $\operatorname{grad} h_j(x)$ in place of $X$

Note

For the FunctionConstraint variant of the problem, this function still evaluates the full gradient. For the InplaceEvaluation and FunctionConstraint variant of the problem, this function currently also calls get_equality_constraints, since this is the only way to determine the number of constraints, and it allocates a full vector of tangent vectors.

source
Manopt.get_grad_inequality_constraint β€” Function
get_grad_inequality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, i)

Evaluate the gradient of the i th inequality constraints $(\operatorname{grad} g(x))_i$ or $\operatorname{grad} g_i(x)$.

Note

For the FunctionConstraint variant of the problem, this function still evaluates the full gradient. For the InplaceEvaluation and FunctionConstraint of the problem, this function currently also calls get_inequality_constraints, since this is the only way to determine the number of constraints.

source
X = get_grad_inequality_constraint(M::AbstractManifold, emo::EmbeddedManifoldObjective, p, i)
get_grad_inequality_constraint!(M::AbstractManifold, X, emo::EmbeddedManifoldObjective, p, i)

evaluate the gradient of the ith inequality constraint $\operatorname{grad} g_i(p)$ defined in the embedding, that is embed p before calling the gradient function stored in the EmbeddedManifoldObjective.

The returned gradient is then converted to a Riemannian gradient calling riemannian_gradient.

source
Manopt.get_grad_inequality_constraint! β€” Function
get_grad_inequality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, i)

Evaluate the gradient of the ith inequality constraint $(\operatorname{grad} g(p))_i$ or $\operatorname{grad} g_i(p)$ of the ConstrainedManifoldObjective co in place of X.

Note

For the FunctionConstraint variant of the problem, this function still evaluates the full gradient. For the InplaceEvaluation and FunctionConstraint variant of the problem, this function currently also calls get_inequality_constraints, since this is the only way to determine the number of constraints.

source
Manopt.get_grad_inequality_constraints β€” Function
get_grad_inequality_constraints(M::AbstractManifold, co::ConstrainedManifoldObjective, p)

evaluate all gradients of the inequality constraints $\operatorname{grad} g(p)$ or $\bigl(\operatorname{grad} g_1(p), \operatorname{grad} g_2(p),…,\operatorname{grad} g_m(p)\bigr)$ of the ConstrainedManifoldObjective co at p.

Note

for the InplaceEvaluation and FunctionConstraint variant of the problem, this function currently also calls get_inequality_constraints, since this is the only way to determine the number of constraints.

source
X = get_grad_inequality_constraints(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)
get_grad_inequality_constraints!(M::AbstractManifold, X, emo::EmbeddedManifoldObjective, p)

evaluate the gradients of the inequality constraints $\operatorname{grad} g(p)$ defined in the embedding, that is embed p before calling the gradient function stored in the EmbeddedManifoldObjective.

The returned gradients are then converted to a Riemannian gradient calling riemannian_gradient.

source
Manopt.get_grad_inequality_constraints! β€” Function
get_grad_inequality_constraints!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p)

evaluate all gradients of the inequality constraints $\operatorname{grad} g(p)$ or $\bigl(\operatorname{grad} g_1(p), \operatorname{grad} g_2(p),\ldots,\operatorname{grad} g_m(p)\bigr)$ of the ConstrainedManifoldObjective co at p in place of X, which is a vector of $m$ tangent vectors.

source

Subproblem objective

This objective can be used when the objective of a sub problem solver still needs access to the (outer/main) objective.

Manopt.AbstractManifoldSubObjective β€” Type
AbstractManifoldSubObjective{O<:AbstractManifoldObjective} <: AbstractManifoldObjective

An abstract type for objectives of sub problems within a solver that still store the original objective internally, to generate generic objectives for sub solvers.

source

Access functions

Manopt.get_objective_cost β€” Function
get_objective_cost(M, amso::AbstractManifoldSubObjective, p)

Evaluate the cost of the (original) objective stored within the sub objective.

source
Manopt.get_objective_gradient β€” Function
X = get_objective_gradient(M, amso::AbstractManifoldSubObjective, p)
get_objective_gradient!(M, X, amso::AbstractManifoldSubObjective, p)

Evaluate the gradient of the (original) objective stored within the sub objective amso.

source
Manopt.get_objective_hessian β€” Function
Y = get_objective_hessian(M, amso::AbstractManifoldSubObjective, p, X)
get_objective_hessian!(M, Y, amso::AbstractManifoldSubObjective, p, X)

Evaluate the Hessian of the (original) objective stored within the sub objective amso.

source
Manopt.get_objective_preconditioner β€” Function
Y = get_objective_preconditioner(M, amso::AbstractManifoldSubObjective, p, X)
get_objective_preconditioner!(M, Y, amso::AbstractManifoldSubObjective, p, X)

Evaluate the preconditioner of the (original) objective stored within the sub objective amso.

source