List of Objectives defined for the Examples

Rayleigh Quotient on the Sphere

See the Rayleigh example (TODO) to see these in use.

ManoptExamples.RayleighQuotientCostType
RayleighQuotientCost

A functor representing the Rayleigh Quotient cost function.

Let $A ∈ ℝ^{n×n}$ be a symmetric matrix. Then we can specify the Rayleigh Quotient in two forms. Either

\[f(p) = p^{\mathrm{T}}Ap,\qquad p ∈ 𝕊^{n-1},\]

or extended into the embedding as

\[f(x) = x^{\mathrm{T}}Ax, \qquad x ∈ ℝ^n,\]

which, for performance reasons, omits the division by $\lVert x\rVert^2$ of the classical Rayleigh quotient; on the sphere both definitions agree, and this form is useful if you want to use this as the Euclidean cost in the embedding of $𝕊^{n-1}$.

Fields

  • A – storing the matrix internally

Constructor

RayleighQuotientCost(A)

Create the Rayleigh cost function.

See also

RayleighQuotientGrad!!, RayleighQuotientHess!!

source
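
For illustration, a minimal sketch of constructing and evaluating this cost on a sphere; the call convention f(M, p) is assumed to follow the functor pattern of this package, and the concrete matrix is an assumption for illustration:

using LinearAlgebra, Manifolds, ManoptExamples

A = Symmetric([2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0])
M = Sphere(2)
f = ManoptExamples.RayleighQuotientCost(A)
p = [1.0, 0.0, 0.0]
f(M, p) # p' * A * p, here 2.0
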
ManoptExamples.RayleighQuotientGrad!!Type
RayleighQuotientGrad!!

A functor representing the Rayleigh Quotient gradient function.

Let $A ∈ ℝ^{n×n}$ be a symmetric matrix. Then we can specify the gradient of the Rayleigh Quotient in two forms. Either

\[\operatorname{grad} f(p) = 2Ap - 2(p^{\mathrm{T}}Ap)p,\qquad p ∈ 𝕊^{n-1},\]

or, for the extension into the embedding, its Euclidean gradient

\[∇f(x) = 2Ax, \qquad x ∈ ℝ^n.\]

For details, see Example 3.62 of [Bou23].

Fields

  • A – storing the matrix internally

Constructor

RayleighQuotientGrad!!(A)

Create the Rayleigh quotient gradient function.

See also

RayleighQuotientCost, RayleighQuotientHess!!

source
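
A minimal sketch of both call variants, assuming the usual !! convention of an allocating call grad_f(M, p) and an in-place call grad_f(M, X, p):

using LinearAlgebra, Manifolds, ManoptExamples

A = Symmetric([2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0])
M = Sphere(2)
grad_f = ManoptExamples.RayleighQuotientGrad!!(A)
p = [1.0, 0.0, 0.0]
X = grad_f(M, p) # allocating variant
grad_f(M, X, p)  # in-place variant, overwrites X
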
ManoptExamples.RayleighQuotientHess!!Type
RayleighQuotientHess!!

A functor representing the Rayleigh Quotient Hessian.

Let $A ∈ ℝ^{n×n}$ be a symmetric matrix. Then we can specify the Hessian of the Rayleigh Quotient in two forms. Either

\[\operatorname{Hess} f(p)[X] = 2 \bigl(AX - (p^{\mathrm{T}}AX)p - (p^{\mathrm{T}}Ap)X\bigr),\qquad p ∈ 𝕊^{n-1}, X \in T_p𝕊^{n-1}\]

or, for the extension into the embedding, its Euclidean Hessian

\[∇^2f(x)[V] = 2AV, \qquad x, V ∈ ℝ^n.\]

For details, see Example 5.27 of [Bou23].

Fields

  • A – storing the matrix internally

Constructor

RayleighQuotientHess!!(A)

Create the Rayleigh quotient Hessian function.

See also

RayleighQuotientCost, RayleighQuotientGrad!!

source
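
Together, the three functors provide all ingredients for a second-order solver. The following sketch, assuming Manopt.jl and its trust_regions solver are available, approximates an eigenvector to the smallest eigenvalue of A:

using LinearAlgebra, Manifolds, Manopt, ManoptExamples

n = 4
A = Symmetric(randn(n, n))
M = Sphere(n - 1)
f = ManoptExamples.RayleighQuotientCost(A)
grad_f = ManoptExamples.RayleighQuotientGrad!!(A)
Hess_f = ManoptExamples.RayleighQuotientHess!!(A)
q = trust_regions(M, f, grad_f, Hess_f, rand(M))
f(M, q) # approximately eigmin(A)
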

Bézier Curves

See the Bezier Curves example to see these in use.

ManoptExamples.BezierSegmentType
BezierSegment

A type to capture a Bézier segment. With $n$ points, a Bézier segment of degree $n-1$ is stored. On the Euclidean manifold, this yields a polynomial of degree $n-1$.

This type is mainly used to encapsulate the points within a composite Bézier curve, which consists of an AbstractVector of BezierSegments, where each of the points might be a nested array on a PowerManifold already.

Note that this can also be used to represent tangent vectors at the control points of a segment.

See also: de_Casteljau.

Constructor

BezierSegment(pts::AbstractVector)

Given an abstract vector pts, generate the corresponding Bézier segment.

source
ManoptExamples.L2_acceleration_BezierMethod
L2_acceleration_Bezier(M,B,pts,λ,d)

compute the value of the discrete acceleration of the composite Bézier curve together with a data term, i.e.

\[\frac{λ}{2}\sum_{i=0}^{N} d_{\mathcal M}(d_i, c_B(i))^2+ \sum_{i=1}^{N-1}\frac{d^2_2 [ B(t_{i-1}), B(t_{i}), B(t_{i+1})]}{\Delta_t^3}\]

where for this formula the pts along the curve are equispaced and denoted by $t_i$, $d_2$ refers to the (squared) second order absolute difference second_order_Total_Variation, the junction points are denoted by $p_i$, and to each $p_i$ corresponds one data item in the manifold points given in d. For details on the acceleration approximation, see acceleration_Bezier. Note that the Bézier curve is given in reduced form as a point on a PowerManifold; together with the degrees of the segments and assuming a differentiable curve, the segments can internally be reconstructed.

See also

grad_L2_acceleration_Bezier, acceleration_Bezier, grad_acceleration_Bezier

source
ManoptExamples.acceleration_BezierMethod
acceleration_Bezier(
    M::AbstractManifold,
    B::AbstractVector{P},
    degrees::AbstractVector{<:Integer},
    T::AbstractVector{<:AbstractFloat},
) where {P}

compute the value of the discrete acceleration of the composite Bézier curve

\[\sum_{i=1}^{N-1}\frac{d^2_2 [ B(t_{i-1}), B(t_{i}), B(t_{i+1})]}{\Delta_t^3}\]

where for this formula the pts along the curve are equispaced and denoted by $t_i$, $i=1,…,N$, and $d_2$ refers to the (squared) second order absolute difference second_order_Total_Variation. Note that the Bézier curve is given in reduced form as a point on a PowerManifold; together with the degrees of the segments and assuming a differentiable curve, the segments can internally be reconstructed.

This acceleration discretization was introduced in [BG18].

See also

grad_acceleration_Bezier, L2_acceleration_Bezier, grad_L2_acceleration_Bezier

source
ManoptExamples.adjoint_differential_Bezier_control_pointsMethod
adjoint_differential_Bezier_control_points(
    M::AbstractManifold,
    T::AbstractVector,
    X::AbstractVector,
)
adjoint_differential_Bezier_control_points!(
    M::AbstractManifold,
    Y::AbstractVector{<:BezierSegment},
    T::AbstractVector,
    X::AbstractVector,
)

Evaluate the adjoint of the differential with respect to the control points at several times T. This can be computed in place of Y.

See de_Casteljau for more details on the curve.

source
ManoptExamples.adjoint_differential_Bezier_control_pointsMethod
adjoint_differential_Bezier_control_points(
    M::AbstractManifold,
    B::AbstractVector{<:BezierSegment},
    t,
    X
)
adjoint_differential_Bezier_control_points!(
    M::AbstractManifold,
    Y::AbstractVector{<:BezierSegment},
    B::AbstractVector{<:BezierSegment},
    t,
    X
)

evaluate the adjoint of the differential of a composite Bézier curve on the manifold M with respect to its control points B, based on points T$=(t_i)_{i=1}^n$ with $t_i∈[0,1]$ on the curve and corresponding tangent vectors $X = (η_i)_{i=1}^n$, $η_i∈T_{β(t_i)}\mathcal M$. This can be computed in place of Y.

See de_Casteljau for more details on the curve.

source
ManoptExamples.adjoint_differential_Bezier_control_pointsMethod
adjoint_differential_Bezier_control_points(
    M::AbstractManifold,
    b::BezierSegment,
    t::AbstractVector,
    X::AbstractVector,
)
adjoint_differential_Bezier_control_points!(
    M::AbstractManifold,
    Y::BezierSegment,
    b::BezierSegment,
    t::AbstractVector,
    X::AbstractVector,
)

evaluate the adjoint of the differential of a Bézier curve on the manifold M with respect to its control points b, based on points T$=(t_i)_{i=1}^n$ with $t_i∈[0,1]$ on the curve and corresponding tangent vectors $X = (η_i)_{i=1}^n$, $η_i∈T_{β(t_i)}\mathcal M$. This can be computed in place of Y.

See de_Casteljau for more details on the curve, and [BG18].

source
ManoptExamples.adjoint_differential_Bezier_control_pointsMethod
adjoint_differential_Bezier_control_points(M::AbstractManifold, b::BezierSegment, t, η)
adjoint_differential_Bezier_control_points!(
    M::AbstractManifold,
    Y::BezierSegment,
    b::BezierSegment,
    t,
    η,
)

evaluate the adjoint of the differential of a Bézier curve on the manifold M with respect to its control points b based on a point t$∈[0,1]$ on the curve and a tangent vector $η∈T_{β(t)}\mathcal M$. This can be computed in place of Y.

See de_Casteljau for more details on the curve.

source
ManoptExamples.de_CasteljauMethod
de_Casteljau(M::AbstractManifold, b::BezierSegment) -> Function

return the Bézier curve $β(⋅;b_0,…,b_n): [0,1] → \mathcal M$ defined by the control points $b_0,…,b_n∈\mathcal M$, $n∈\mathbb N$, given as a BezierSegment. This function implements de Casteljau's algorithm [Cas59; Cas63], generalized to manifolds in [PN07]: Let $γ_{a,b}(t)$ denote the shortest geodesic connecting $a,b∈\mathcal M$. Then the curve is defined by the recursion

\[\begin{aligned} β(t;b_0,b_1) &= \gamma_{b_0,b_1}(t)\\ β(t;b_0,…,b_n) &= \gamma_{β(t;b_0,…,b_{n-1}), β(t;b_1,…,b_n)}(t), \end{aligned}\]

Here, the control points in b are points on the manifold M.

de_Casteljau(M::AbstractManifold, B::AbstractVector{<:BezierSegment}) -> Function

Given a vector of Bézier segments, i.e. a vector of control points $B=\bigl( (b_{0,0},…,b_{n_0,0}),…,(b_{0,m},… b_{n_m,m}) \bigr)$, where the different segments might be of different degrees $n_0,…,n_m$, this returns the composite Bézier curve $c_B:[0,m] → \mathcal M$, which consists of $m$ segments that are Bézier curves themselves:

\[c_B(t) := \begin{cases} β(t; b_{0,0},…,b_{n_0,0}) & \text{ if } t ∈[0,1]\\ β(t-i; b_{0,i},…,b_{n_i,i}) & \text{ if } t∈(i,i+1], \quad i∈\{1,…,m-1\}. \end{cases}\]

de_Casteljau(M::AbstractManifold, b::BezierSegment, t::Real)
de_Casteljau(M::AbstractManifold, B::AbstractVector{<:BezierSegment}, t::Real)
de_Casteljau(M::AbstractManifold, b::BezierSegment, T::AbstractVector) -> AbstractVector
de_Casteljau(
    M::AbstractManifold,
    B::AbstractVector{<:BezierSegment},
    T::AbstractVector
) -> AbstractVector

Evaluate the Bézier curve at time t or at times t in T.

source
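
A minimal sketch of constructing and evaluating a Bézier segment on the sphere; the control points are assumptions for illustration:

using Manifolds, ManoptExamples

M = Sphere(2)
b = ManoptExamples.BezierSegment([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]) # degree 2
β = ManoptExamples.de_Casteljau(M, b) # the curve as a function of t
β(0.5) # a point on the sphere
ManoptExamples.de_Casteljau(M, b, [0.0, 0.5, 1.0]) # evaluate at several times
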
ManoptExamples.differential_Bezier_control_pointsMethod
differential_Bezier_control_points(
    M::AbstractManifold,
    B::AbstractVector{<:BezierSegment},
    T::AbstractVector,
    Ξ::AbstractVector{<:BezierSegment},
)
differential_Bezier_control_points!(
    M::AbstractManifold,
    Θ::AbstractVector{<:BezierSegment},
    B::AbstractVector{<:BezierSegment},
    T::AbstractVector,
    Ξ::AbstractVector{<:BezierSegment},
)

evaluate the differential of the composite Bézier curve with respect to its control points B and tangent vectors Ξ in the tangent spaces of the control points. The result is the “change” of the curve at the points in T, which are elementwise in $[0,N]$ and each depend on the corresponding segment(s). Here, $N$ is the length of B. For the mutating variant, the result is computed in Θ.

See de_Casteljau for more details on the curve, and [BG18].

source
ManoptExamples.differential_Bezier_control_pointsMethod
differential_Bezier_control_points(
    M::AbstractManifold,
    B::AbstractVector{<:BezierSegment},
    t,
    X::AbstractVector{<:BezierSegment},
)
differential_Bezier_control_points!(
    M::AbstractManifold,
    Y::AbstractVector{<:BezierSegment},
    B::AbstractVector{<:BezierSegment},
    t,
    X::AbstractVector{<:BezierSegment},
)

evaluate the differential of the composite Bézier curve with respect to its control points B and tangent vectors X in the tangent spaces of the control points. The result is the “change” of the curve at t$∈[0,N]$, which depends only on the corresponding segment. Here, $N$ is the length of B. The computation can be done in place of Y.

See de_Casteljau for more details on the curve.

source
ManoptExamples.differential_Bezier_control_pointsMethod
differential_Bezier_control_points(
    M::AbstractManifold,
    b::BezierSegment,
    T::AbstractVector,
    X::BezierSegment,
)
differential_Bezier_control_points!(
    M::AbstractManifold,
    Y,
    b::BezierSegment,
    T::AbstractVector,
    X::BezierSegment,
)

evaluate the differential of the Bézier curve with respect to its control points b and tangent vectors X in the tangent spaces of the control points. The result is the “change” of the curve at the points T, elementwise in $t∈[0,1]$. The computation can be done in place of Y.

See de_Casteljau for more details on the curve.

source
ManoptExamples.differential_Bezier_control_pointsMethod
differential_Bezier_control_points(M::AbstractManifold, b::BezierSegment, t::Real, X::BezierSegment)
differential_Bezier_control_points!(
    M::AbstractManifold,
    Y,
    b::BezierSegment,
    t,
    X::BezierSegment
)

evaluate the differential of the Bézier curve with respect to its control points b and tangent vectors X given in the tangent spaces of the control points. The result is the “change” of the curve at t$∈[0,1]$. The computation can be done in place of Y.

See de_Casteljau for more details on the curve.

source
ManoptExamples.get_Bezier_degreeMethod
get_Bezier_degree(M::AbstractManifold, b::BezierSegment)

return the degree of the Bézier curve represented by the tuple b of control points on the manifold M, i.e. the number of points minus 1.

source
ManoptExamples.get_Bezier_degreesMethod
get_Bezier_degrees(M::AbstractManifold, B::AbstractVector{<:BezierSegment})

return the degrees of the components of a composite Bézier curve represented by tuples in B containing points on the manifold M.

source
ManoptExamples.get_Bezier_inner_pointsMethod
get_Bezier_inner_points(M::AbstractManifold, B::AbstractVector{<:BezierSegment})
get_Bezier_inner_points(M::AbstractManifold, b::BezierSegment)

returns the inner points (i.e. all except the start and end points) of the segments of the composite Bézier curve specified by the control points B. For a single segment b, its inner points are returned.

source
ManoptExamples.get_Bezier_junction_tangent_vectorsMethod
get_Bezier_junction_tangent_vectors(M::AbstractManifold, B::AbstractVector{<:BezierSegment})
get_Bezier_junction_tangent_vectors(M::AbstractManifold, b::BezierSegment)

returns the tangent vectors at the start and end points of each segment of the composite Bézier curve specified by the control points B, pointing from the junction points to the first and last inner control points of the segment, respectively.

source
ManoptExamples.get_Bezier_junctionsFunction
get_Bezier_junctions(M::AbstractManifold, B::AbstractVector{<:BezierSegment})
get_Bezier_junctions(M::AbstractManifold, b::BezierSegment)

returns the start and end point(s) of the segments of the composite Bézier curve specified by the control points B. For just one segment b, its start and end points are returned.

source
ManoptExamples.get_Bezier_pointsFunction
get_Bezier_points(
    M::AbstractManifold,
    B::AbstractVector{<:BezierSegment},
    reduce::Symbol=:default
)
get_Bezier_points(M::AbstractManifold, b::BezierSegment, reduce::Symbol=:default)

returns the control points of the segments of the composite Bézier curve specified by the control points B as a single vector of points.

This method reduces the points depending on the optional reduce symbol

  • :default: no reduction is performed
  • :continuous: for a continuous function, the junction points are doubled at $b_{0,i}=b_{n_{i-1},i-1}$, so only $b_{0,i}$ is in the vector.
  • :differentiable: for a differentiable function, additionally $\log_{b_{0,i}}b_{1,i} = -\log_{b_{n_{i-1},i-1}}b_{n_{i-1}-1,i-1}$ holds; hence $b_{n_{i-1}-1,i-1}$ is omitted.

If only one segment b is given, all of its points, b.pts, are returned.

source
ManoptExamples.get_Bezier_segmentsMethod
get_Bezier_segments(M::AbstractManifold, c::AbstractArray{P}, d[, s::Symbol=:default])

returns the array of BezierSegments B of a composite Bézier curve reconstructed from an array c of points on the manifold M and an array of degrees d.

There are a few (reduced) representations that can get extended; see also get_Bezier_points. For ease of the following, let $c=(c_1,…,c_k)$ and $d=(d_1,…,d_m)$, where $m$ denotes the number of components the composite Bézier curve consists of. Then

  • :default: $k = m + \sum_{i=1}^m d_i$ since each component requires one point more than its degree. The points are then ordered in tuples, i.e.

    \[B = \bigl[ [c_1,…,c_{d_1+1}], [c_{d_1+2},…,c_{d_1+d_2+2}],…, [c_{k-m+1+d_m},…,c_{k}] \bigr]\]

  • :continuous: $k = 1+ \sum_{i=1}^{m} d_i$, since for a continuous curve the start and end points of successive components coincide, so only the very first start point and the end point of each component are stored.

    \[B = \bigl[ [c_1,…,c_{d_1+1}], [c_{d_1+1},…,c_{d_1+d_2+1}],…, [c_{k-1+d_m},…,c_{k}] \bigr]\]

  • :differentiable: in addition to the previous reduction, the second point of every segment except the first is not stored. Hence $k = 2 - m + \sum_{i=1}^{m} d_i$, and at a junction point $c_n$ with its preceding point $c_{n-1}$ (the last inner point of a segment), the first inner point of the next segment is reconstructed as $b = \exp_{c_n}(-\log_{c_n} c_{n-1})$, so that the assumed differentiability holds.
source
ManoptExamples.grad_L2_acceleration_BezierMethod
grad_L2_acceleration_Bezier(
    M::AbstractManifold,
    B::AbstractVector{P},
    degrees::AbstractVector{<:Integer},
    T::AbstractVector,
    λ,
    d::AbstractVector{P}
) where {P}

compute the gradient of the discretized acceleration of a composite Bézier curve on the manifold M with respect to its control points B, together with a data term that relates the junction points $p_i$ to the data d with a weight $λ$ compared to the acceleration. The curve is evaluated at the points given in T (elementwise in $[0,N]$), where $N$ is the number of segments of the Bézier curve. The summands are grad_distance for the data term and grad_acceleration_Bezier for the acceleration with interpolation constraints. Here the get_Bezier_junctions are included in the optimization, i.e. setting $λ=0$ yields the unconstrained acceleration minimization. Note that this is ill-posed, since any Bézier curve identical to a geodesic is a minimizer.

Note that the Bézier curve is given in reduced form as a point on a PowerManifold; together with the degrees of the segments and assuming a differentiable curve, the segments can internally be reconstructed.

See also

grad_acceleration_Bezier, L2_acceleration_Bezier, acceleration_Bezier.

source
ManoptExamples.grad_acceleration_BezierMethod
grad_acceleration_Bezier(
    M::AbstractManifold,
    B::AbstractVector,
    degrees::AbstractVector{<:Integer},
    T::AbstractVector,
)

compute the gradient of the discretized acceleration of a (composite) Bézier curve $c_B(t)$ on the manifold M with respect to its control points B, given as a point on the PowerManifold, assuming $C^1$ conditions and known degrees. The curve is evaluated at the points given in T (elementwise in $[0,N]$, where $N$ is the number of segments of the Bézier curve). The get_Bezier_junctions are fixed for this gradient (interpolation constraint). For the unconstrained gradient, see grad_L2_acceleration_Bezier and set $λ=0$ therein. This gradient is computed using adjoint_Jacobi_fields. For details, see [BG18]. See de_Casteljau for more details on the curve.

See also

acceleration_Bezier, grad_L2_acceleration_Bezier, L2_acceleration_Bezier.

source

Riemannian Mean

See the Riemannian mean example to see these in use.

ManoptExamples.RiemannianMeanCostType
RiemannianMeanCost{P}

A functor representing the Riemannian center of mass (or Riemannian mean) cost function.

For a given set of points $d_1,\ldots,d_N$ this cost function is defined as

\[f(p) = \sum_{i=1}^N d_{\mathcal M}^2(d_i, p),\]

where $d_{\mathcal M}$ is the distance on a Riemannian manifold.

Constructor

RiemannianMeanCost(M::AbstractManifold, data::AbstractVector{<:P}) where {P}

Initialize the cost function to a data set data of points on a manifold M, where each point is of type P.

See also

RiemannianMeanGradient!!, Riemannian_mean_objective

source
ManoptExamples.RiemannianMeanGradient!!Type
RiemannianMeanGradient!!{P} where P

A functor representing the gradient of the Riemannian center of mass (or Riemannian mean) cost function.

For a given set of points $d_1,\ldots,d_N$ the gradient of the cost function above reads

\[\operatorname{grad} f(p) = -2\sum_{i=1}^N \log_p d_i,\]

where $d_{\mathcal M}$ is the distance on a Riemannian manifold and we employ grad_distance to compute the single summands.

This functor provides both the allocating variant grad_f(M,p) as well as the in-place variant grad_f!(M, X, p) which computes the gradient in-place of X.

Constructors

RiemannianMeanGradient!!(data::AbstractVector{P}, initial_vector::T=nothing) where {P,T}

Generate the Riemannian mean gradient based on some data points data and an initial tangent vector initial_vector. If you do not provide an initial tangent vector to allocate the intermediate storage of a tangent vector, you can only use the allocating variant.

RiemannianMeanGradient!!(
    M::AbstractManifold,
    data::AbstractVector{P};
    initial_vector::T=zero_vector(M, first(data)),
) where {P,T}

Initialize the Riemannian mean gradient, where the internal storage for tangent vectors can be created automatically, since the Riemannian manifold M is provided.

See also

RiemannianMeanCost, Riemannian_mean_objective

source
ManoptExamples.Riemannian_mean_objectiveMethod
Riemannian_mean_objective(data, initial_vector=nothing, evaluation=AllocatingEvaluation())
Riemannian_mean_objective(M, data;
    initial_vector=zero_vector(M, first(data)),
    evaluation=AllocatingEvaluation(),
)

Generate the objective for the Riemannian mean task for some given vector of data points on the Riemannian manifold M.

See also

RiemannianMeanCost, RiemannianMeanGradient!!

Note

The first constructor should only be used if an additional storage of the vector is not feasible, since initialising initial_vector to nothing disables the in-place variant. Hence evaluation is a positional argument here, since it can only be changed if a vector is provided.

source
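
A minimal sketch of these in action, assuming the f(M, p) / grad_f(M, p) functor conventions and that Manopt.jl with its gradient_descent solver is available; the data is random for illustration:

using Manifolds, Manopt, ManoptExamples

M = Sphere(2)
data = [rand(M) for _ in 1:10]
f = ManoptExamples.RiemannianMeanCost(M, data)
grad_f = ManoptExamples.RiemannianMeanGradient!!(M, data)
m = gradient_descent(M, f, grad_f, first(data)) # an approximate Riemannian mean
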

Robust PCA

See the Robust PCA example to see these in use.

ManoptExamples.RobustPCACostType
RobustPCACost{D,F}

A functor representing the Riemannian robust PCA function on the Grassmann manifold. For some given (column) data $D∈\mathbb R^{d\times n}$ the cost function is defined on some $\operatorname{Gr}(d,m)$, $m<n$, as the sum of the distances of the columns $D_i$ to the subspace spanned by $p∈\operatorname{Gr}(d,m)$ (represented as a point on the Stiefel manifold). The function reads

\[f(p) = \frac{1}{n}\sum_{i=1}^n \lVert pp^{\mathrm{T}}D_i - D_i\rVert.\]

This cost additionally provides a Huber regularisation of the cost, that is, for some $ε>0$ one uses $ℓ_ε(x) = \sqrt{x^2+ε^2} - ε$ in

\[f_{ε}(p) = \frac{1}{n}\sum_{i=1}^n ℓ_ε\bigl(\lVert pp^{\mathrm{T}}D_i - D_i\rVert\bigr)\]

Note that this is a mutable struct so you can adapt the $ε$ later on.

Constructor

RobustPCACost(data::AbstractMatrix, ε=1.0)
RobustPCACost(M::Grassmann, data::AbstractMatrix, ε=1.0)

Initialize the robust PCA cost to some data $D$, and some regularization $ε$. The manifold is optional to comply with all examples, but it is not needed here to construct the cost.

source
ManoptExamples.RobustPCAGrad!!Type
RobustPCAGrad!!{D,F}

A functor representing the Riemannian robust PCA gradient on the Grassmann manifold. For some given (column) data $D∈\mathbb R^{d\times n}$ the gradient of the RobustPCACost can be computed by projecting the Euclidean gradient onto the corresponding tangent space.

Note that this is a mutable struct so you can adapt the $ε$ later on.

Constructor

RobustPCAGrad!!(data, ε=1.0)
RobustPCAGrad!!(M::Grassmann, data, ε=1.0; evaluation=AllocatingEvaluation())

Initialize the robust PCA gradient to some data $D$, and some regularization $ε$. The manifold is optional to comply with all examples, but it is not needed here to construct the gradient. The evaluation= keyword is present only for unification of the interfaces; independent of that keyword, the functor always works in both variants.

source
ManoptExamples.robust_PCA_objectiveFunction
robust_PCA_objective(data::AbstractMatrix, ε=1.0; evaluation=AllocatingEvaluation())
robust_PCA_objective(M, data::AbstractMatrix, ε=1.0; evaluation=AllocatingEvaluation())

Generate the objective for the robust PCA task for some given data $D$ and Huber regularization parameter $ε$.

See also

RobustPCACost, RobustPCAGrad!!

Note

Since the construction is independent of the manifold, that argument is optional and mainly provided to comply with other objectives. Similarly, the gradient always allows for both the allocating and the in-place variant to be used independent of evaluation, though that keyword is used to set up the objective.

source
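
A minimal sketch, assuming the f(M, p) / grad_f(M, p) functor conventions; the data matrix and dimensions are assumptions for illustration:

using Manifolds, ManoptExamples

d, n, m = 10, 50, 3
D = randn(d, n)
M = Grassmann(d, m)
f = ManoptExamples.RobustPCACost(M, D, 0.5)
grad_f = ManoptExamples.RobustPCAGrad!!(M, D, 0.5)
p = rand(M) # an orthonormal d×m matrix representing a point on Gr(d,m)
f(M, p), grad_f(M, p)
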

Rosenbrock Function

See the Rosenbrock example and The Difference of Convex Rosenbrock Example to see these in use.

ManoptExamples.RosenbrockCostType
RosenbrockCost

Provide the Rosenbrock function in 2D, i.e. for some $a,b ∈ ℝ$

\[f(\mathcal M, p) = a(p_1^2-p_2)^2 + (p_1-b)^2\]

which means that for the 2D case, the manifold $\mathcal M$ is ignored.

See also 📖 Rosenbrock (with slightly different parameter naming).

Constructor

f = RosenbrockCost(a, b)

generates the struct/function of the Rosenbrock cost.

source
ManoptExamples.RosenbrockGradient!!Type
RosenbrockGradient

Provide the Euclidean gradient of the Rosenbrock function in 2D, i.e. for some $a,b ∈ ℝ$

\[\nabla f(\mathcal M, p) = \begin{pmatrix} 4a(p_1^2-p_2)p_1 + 2(p_1-b) \\ -2a(p_1^2-p_2) \end{pmatrix}\]

i.e. also here the manifold is ignored.

Constructor

RosenbrockGradient!!(a, b)

Functors

grad_f!!(M,p)
grad_f!!(M, X, p)

evaluate the gradient at $p$; the manifold $\mathcal M$ is ignored.

source
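
A minimal sketch of both functors, assuming the positional constructors from the lines above; for $a=100$, $b=1$ this is the classical Rosenbrock function with minimizer $(b, b^2)$:

using Manifolds, ManoptExamples

M = Euclidean(2)
a, b = 100.0, 1.0
f = ManoptExamples.RosenbrockCost(a, b)
grad_f = ManoptExamples.RosenbrockGradient!!(a, b)
p = [0.5, 0.25]
f(M, p)      # here (0.5 - 1)^2 = 0.25, since p_1^2 - p_2 = 0
grad_f(M, p) # here [-1.0, 0.0]
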
ManoptExamples.RosenbrockMetricType
RosenbrockMetric <: AbstractMetric

A metric related to the Rosenbrock problem, where the metric at a point $p∈\mathbb R^2$ is given by

\[⟨X,Y⟩_{\mathrm{Rb},p} = X^\mathrm{T}G_pY, \qquad G_p = \begin{pmatrix} 1+4p_{1}^2 & -2p_{1} \\ -2p_{1} & 1 \end{pmatrix},\]

where $\mathrm{Rb}$ stands for Rosenbrock.

source
Base.expMethod
q = exp(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, p, X)
exp!(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, q, p, X)

Compute the exponential map with respect to the RosenbrockMetric.

\[ q = \begin{pmatrix} p_1 + X_1 \\ p_2+X_2+X_1^2\end{pmatrix}\]

source
Base.logMethod
X = log(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, p, q)
log!(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, X, p, q)

Compute the logarithmic map with respect to the RosenbrockMetric. The formula reads

\[X = \begin{pmatrix} q_1 - p_1 \\ q_2 - p_2 - (q_1 - p_1)^2 \end{pmatrix}\]

source
Manifolds.inverse_local_metricMethod
inverse_local_metric(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, p)

Return the inverse of the local metric matrix of the RosenbrockMetric in the canonical unit vector basis of the tangent space $T_p\mathbb R^2$ given as

\[G^{-1}_p = \begin{pmatrix} 1 & 2p_1\\ 2p_1 & 1+4p_1^2 \\ \end{pmatrix}.\]

source
Manifolds.local_metricMethod
local_metric(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, p)

Return the local metric matrix of the RosenbrockMetric in the canonical unit vector basis of the tangent space $T_p\mathbb R^2$ given as

\[G_p = \begin{pmatrix} 1+4p_1^2 & -2p_1 \\ -2p_1 & 1 \end{pmatrix}\]

source
ManifoldsBase.change_representerMethod
Y = change_representer(M::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, ::EuclideanMetric, p, X)
change_representer!(M::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, Y, ::EuclideanMetric, p, X)

Given the Euclidean gradient X at p, this function computes the corresponding Riesz representer Y such that $⟨X,Z⟩ = ⟨Y,Z⟩_{\mathrm{Rb},p}$ holds for all Z, in other words $Y = G_p^{-1}X$.

This function is used in riemannian_gradient to convert a Euclidean gradient into a Riemannian one.

source
ManifoldsBase.innerMethod
inner(M::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, p, X, Y)

Compute the inner product on $\mathbb R^2$ with respect to the RosenbrockMetric, i.e. for $X,Y \in T_p\mathcal M$ we have

\[⟨X,Y⟩_{\mathrm{Rb},p} = X^\mathrm{T}G_pY, \qquad G_p = \begin{pmatrix} 1+4p_1^2 & -2p_1\\ -2p_1 & 1 \end{pmatrix},\]

source
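
A minimal sketch evaluating this inner product, assuming the metric manifold is assembled as follows:

using Manifolds, ManoptExamples

M = MetricManifold(Euclidean(2), ManoptExamples.RosenbrockMetric())
p = [1.0, 2.0]
X, Y = [1.0, 0.0], [0.0, 1.0]
inner(M, p, X, Y) # X' * G_p * Y, here -2 p_1 = -2.0
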

Total Variation

See the Total Variation example to see these in use.

ManoptExamples.Intrinsic_infimal_convolution_TV12Method
Intrinsic_infimal_convolution_TV12(M, f, u, v, α, β)

Compute the intrinsic infimal convolution model, where the addition is replaced by a mid point approach and the two functions involved are second_order_Total_Variation and Total_Variation. The model reads

\[E(u,v) = \frac{1}{2}\sum_{i ∈ \mathcal G} d_{\mathcal M}\bigl(g\bigl(\tfrac{1}{2};u_i,v_i\bigr),f_i\bigr) + \alpha\bigl( β\mathrm{TV}(u) + (1-β)\mathrm{TV}_2(v) \bigr).\]

For more details, see [BFPS17, BFPS18].

See also

Total_Variation, second_order_Total_Variation

source
ManoptExamples.Total_VariationFunction
Total_Variation(M,x [,p=2,q=1])

Compute the $\operatorname{TV}^p$ functional for data x on the PowerManifold manifold M, i.e. $\mathcal M = \mathcal N^n$, where $n ∈ \mathbb N^k$ denotes the dimensions of the data x. Let $\mathcal I_i$ denote the forward neighbors, i.e. with $\mathcal G$ as all indices from $\mathbf{1} ∈ \mathbb N^k$ to $n$ we have $\mathcal I_i = \{i+e_j, j=1,…,k\}\cap \mathcal G$. The formula reads

\[E^q(x) = \sum_{i ∈ \mathcal G} \bigl( \sum_{j ∈ \mathcal I_i} d^p_{\mathcal M}(x_i,x_j) \bigr)^{q/p},\]

see [WDS14] for more details. In long function names, this might be shortened to TV1, and the 1 might be omitted if only total variation is present.

See also

grad_Total_Variation, prox_Total_Variation, second_order_Total_Variation

source
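
A minimal sketch of the signal case on a power manifold of the sphere; the data is an assumption for illustration:

using Manifolds, ManoptExamples

M = Sphere(2)
N = PowerManifold(M, NestedPowerRepresentation(), 4)
x = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
ManoptExamples.Total_Variation(N, x) # accumulates distances between forward neighbors
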
ManoptExamples.adjoint_differential_forward_logsMethod
Y = adjoint_differential_forward_logs(M, p, X)
adjoint_differential_forward_logs!(M, Y, p, X)

Compute the adjoint differential of the forward logs $F$ on the power manifold array p, i.e. of the function

$F_i(p) = \sum_{j ∈ \mathcal I_i} \log_{p_i} p_j$

where $i$ runs over all indices of the PowerManifold manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$. Let $n$ be the number of dimensions of the PowerManifold manifold (i.e. length(size(x))). Then the input tangent vector lies on the manifold $\mathcal M' = \mathcal M^n$. The adjoint differential can be computed in place of Y.

Input

  • M – a PowerManifold manifold
  • p – an array of points on a manifold
  • X – a tangent vector on the n-fold power of p, where n is the ndims of p

Output

Y – resulting tangent vector in $T_p\mathcal M$ representing the adjoint differentials of the logs.

source
ManoptExamples.differential_forward_logsMethod
Y = differential_forward_logs(M, p, X)
differential_forward_logs!(M, Y, p, X)

compute the differential of the forward logs $F$ on the PowerManifold manifold M at p in direction X, i.e. of the function

\[F_i(x) = \sum_{j ∈ \mathcal I_i} \log_{p_i} p_j, \quad i ∈ \mathcal G,\]

where $\mathcal G$ is the set of indices of the PowerManifold manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$.

Input

  • M – a PowerManifold manifold
  • p – a point.
  • X – a tangent vector.

Output

  • Y – resulting tangent vector in $T_x\mathcal N$ representing the differentials of the logs, where $\mathcal N$ is the power manifold with the number of dimensions added to size(x). The computation can also be done in place.
source
ManoptExamples.forward_logsMethod
Y = forward_logs(M,x)
forward_logs!(M, Y, x)

compute the forward logs $F$ (generalizing forward differences) on the power manifold array, i.e. the function

\[F_i(x) = \sum_{j ∈ \mathcal I_i} \log_{x_i} x_j,\quad i ∈ \mathcal G,\]

where $\mathcal G$ is the set of indices of the PowerManifold manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$. This can also be done in place of Y.

Input

  • M – a PowerManifold manifold
  • x – a point.

Output

  • Y – resulting tangent vector in $T_x\mathcal N$ representing the logs, where $\mathcal N$ is the power manifold with the number of dimensions added to size(x). The computation can be done in place of Y.
source
ManoptExamples.grad_Total_VariationFunction
X = grad_Total_Variation(M, λ, x[, p=1])
grad_Total_Variation!(M, X, λ, x[, p=1])

Compute the (sub)gradient $∂f$ of all forward differences occurring in the power manifold array, i.e. of the function

\[f(x) = \sum_{i}\sum_{j ∈ \mathcal I_i} d^p_{\mathcal M}(x_i,x_j),\]

where $i$ runs over all indices of the PowerManifold manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$.

Input

  • M – a PowerManifold manifold
  • x – a point.

Output

  • X – resulting tangent vector in $T_x\mathcal M$. The computation can also be done in place.
source
ManoptExamples.grad_Total_VariationMethod
X = grad_Total_Variation(M, (x,y)[, p=1])
grad_Total_Variation!(M, X, (x,y)[, p=1])

compute the (sub)gradient of $\frac{1}{p}d^p_{\mathcal M}(x,y)$ with respect to both $x$ and $y$ (this can be computed in place of X).

source
ManoptExamples.grad_intrinsic_infimal_convolution_TV12Method
grad_u, grad_v = grad_intrinsic_infimal_convolution_TV12(M, f, u, v, α, β)

compute the (sub)gradient of the intrinsic infimal convolution model using the mid point model of second order differences, see second_order_Total_Variation, i.e. for some data $f ∈ \mathcal M$ on a PowerManifold manifold $\mathcal M$ this function computes the (sub)gradient of

\[E(u,v) = \frac{1}{2}\sum_{i ∈ \mathcal G} d_{\mathcal M}\bigl(g\bigl(\tfrac{1}{2};u_i,v_i\bigr),f_i\bigr) + \alpha \bigl( β\mathrm{TV}(u) + (1-β)\mathrm{TV}_2(v) \bigr),\]

where both total variations refer to the intrinsic ones, grad_Total_Variation and grad_second_order_Total_Variation, respectively.

source
ManoptExamples.grad_second_order_Total_VariationFunction
grad_second_order_Total_Variation(M::PowerManifold, q[, p=1])

computes the (sub) gradient of $\frac{1}{p}d_2^p(q_1,q_2,q_3)$ with respect to all $q_1,q_2,q_3$ occurring along any array dimension in the point q, where M is the corresponding PowerManifold.

source
ManoptExamples.grad_second_order_Total_VariationFunction
Y = grad_second_order_Total_Variation(M, q[, p=1])
grad_second_order_Total_Variation!(M, Y, q[, p=1])

computes the (sub) gradient of $\frac{1}{p}d_2^p(q_1, q_2, q_3)$ with respect to all three components of $q∈\mathcal M^3$, where $d_2$ denotes the second order absolute difference using the mid point model, i.e. let

\[\mathcal C = \bigl\{ c ∈ \mathcal M \ |\ c = g(\tfrac{1}{2};q_1,q_3) \text{ for some geodesic }g\bigr\}\]

denote the mid points between $q_1$ and $q_3$ on the manifold $\mathcal M$. Then the absolute second order difference is defined as

\[d_2(q_1,q_2,q_3) = \min_{c ∈ \mathcal C} d(c, q_2).\]

While the (sub)gradient with respect to $q_2$ is easy, the other two require the evaluation of an adjoint_Jacobi_field.

The derivation of this gradient can be found in [BBSW16].

source
ManoptExamples.project_collaborative_TVFunction
project_collaborative_TV(M, λ, x, Ξ[, p=2,q=1])
project_collaborative_TV!(M, Θ, λ, x, Ξ[, p=2,q=1])

compute the projection onto the unit ball (or α-ball) of the collaborative norm, i.e. of the function

\[F^q(x) = \sum_{i∈\mathcal G} \Bigl( \sum_{j∈\mathcal I_i} \sum_{k=1}^d \lVert X_{i,j}\rVert_x^p\Bigr)^\frac{q}{p},\]

where $\mathcal G$ is the set of indices for $x∈\mathcal M$ and $\mathcal I_i$ is the set of its forward neighbors. The computation can also be done in place of Θ.

This is adapted from [DMSC16], see their Example 3 for details.

source
ManoptExamples.prox_Total_VariationFunction
y = prox_Total_Variation(M, λ, x [,p=1])

compute the proximal maps $\operatorname{prox}_{λ\varphi}$ of all forward differences occurring in the power manifold array, i.e. $\varphi(x_i,x_j) = d_{\mathcal M}^p(x_i,x_j)$, where $x_i$ and $x_j$ are array elements of x and $j = i+e_k$, with $e_k$ being the $k$th unit vector. The parameter λ is the prox parameter.

Input

  • M – a manifold M
  • λ – a real value, parameter of the proximal map
  • x – a point.

Optional

(default is given in brackets)

  • p – (1) exponent of the distance of the TV term

Output

  • y – resulting point with all mentioned proximal points evaluated (in a cyclic order). The computation can also be done in place.
source
ManoptExamples.prox_Total_VariationMethod
[y1,y2] = prox_Total_Variation(M, λ, [x1,x2] [,p=1])
prox_Total_Variation!(M, [y1,y2], λ, [x1,x2] [,p=1])

Compute the proximal map $\operatorname{prox}_{λ\varphi}$ of $φ(x,y) = d_{\mathcal M}^p(x,y)$ with parameter λ. A derivation of this closed form solution is given in [WDS14].

Input

  • M – a manifold M
  • λ – a real value, parameter of the proximal map
  • (x1,x2) – a tuple of two points,

Optional

(default is given in brackets)

  • p – (1) exponent of the distance of the TV term

Output

  • (y1,y2) – resulting tuple of points of $\operatorname{prox}_{λφ}\bigl((x_1,x_2)\bigr)$. The result can also be computed in place.
source
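
A minimal sketch for a pair of points on the sphere; the points and the parameter λ are assumptions for illustration:

using Manifolds, ManoptExamples

M = Sphere(2)
x1, x2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
y1, y2 = ManoptExamples.prox_Total_Variation(M, 0.1, (x1, x2))
# y1 and y2 are moved towards each other along the connecting geodesic
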
ManoptExamples.prox_parallel_TVFunction
y = prox_parallel_TV(M, λ, x [,p=1])
prox_parallel_TV!(M, y, λ, x [,p=1])

compute the proximal maps $\operatorname{prox}_{λφ}$ of all forward differences occurring in the power manifold array, i.e. $φ(x_i,x_j) = d_{\mathcal M}^p(x_i,x_j)$, where $x_i$ and $x_j$ are array elements of x and $j = i+e_k$, with $e_k$ being the $k$th unit vector. The parameter λ is the prox parameter.

Input

  • M – a PowerManifold manifold
  • λ – a real value, parameter of the proximal map
  • x – a point

Optional

(default is given in brackets)

  • p – (1) exponent of the distance of the TV term

Output

  • y – resulting array of points with all mentioned proximal points evaluated (in parallel within the array's elements). The computation can also be done in place.

See also prox_Total_Variation

source
ManoptExamples.prox_second_order_Total_VariationMethod
(y1,y2,y3) = prox_second_order_Total_Variation(M, λ, (x1,x2,x3) [,p=1], kwargs...)
prox_second_order_Total_Variation!(M, y, λ, (x1,x2,x3) [,p=1], kwargs...)

Compute the proximal map $\operatorname{prox}_{λ\varphi}$ of $\varphi(x_1,x_2,x_3) = d_{\mathcal M}^p(c(x_1,x_3),x_2)$ with parameter λ>0, where $c(x_1,x_3)$ denotes the mid point of a shortest geodesic from $x_1$ to $x_3$ that is closest to $x_2$. The result can be computed in place of y.

Note that this function does not have a closed form solution in general but is solved by a few steps of the subgradient method from Manopt.jl by default. See [BBSW16] for a derivation.

Input

  • M – a manifold

  • λ – a real value, parameter of the proximal map

  • (x1,x2,x3) – a tuple of three points

  • p – (1) exponent of the distance of the TV term

Optional

kwargs... – parameters for the internal subgradient_method (if M is neither Euclidean nor Circle, since for these a closed form is given)

Output

  • (y1,y2,y3) – resulting tuple of points of the proximal map. The computation can also be done in place.
source
ManoptExamples.prox_second_order_Total_VariationMethod
y = prox_second_order_Total_Variation(M, λ, x[, p=1])
prox_second_order_Total_Variation!(M, y, λ, x[, p=1])

compute the proximal maps $\operatorname{prox}_{λ\varphi}$ of all centered second order differences occurring in the power manifold array, i.e. $\varphi(x_k,x_i,x_j) = d_2(x_k,x_i,x_j)$, where $k,j$ are backward and forward neighbors of $i$ (along any dimension in the array of x). The parameter λ is the prox parameter.

Input

  • M – a manifold M
  • λ – a real value, parameter of the proximal map
  • x – a point.

Optional

(default is given in brackets)

  • p – (1) exponent of the distance of the TV term

Output

  • y – resulting point with all mentioned proximal points evaluated (in a cyclic order). The computation can also be done in place.
source
ManoptExamples.second_order_Total_VariationFunction
second_order_Total_Variation(M,x [,p=1])

compute the $\operatorname{TV}_2^p$ functional for data x on the PowerManifold manifold M, i.e. $\mathcal M = \mathcal N^n$, where $n ∈ \mathbb N^k$ denotes the dimensions of the data x. Let $\mathcal I_i^{\pm}$ denote the forward and backward neighbors, respectively, i.e. with $\mathcal G$ as all indices from $\mathbf{1} ∈ \mathbb N^k$ to $n$ we have $\mathcal I^\pm_i = \{i\pm e_j, j=1,…,k\}\cap \mathcal G$. The formula then reads

\[E(x) = \sum_{i ∈ \mathcal G,\ j_1 ∈ \mathcal I^+_i,\ j_2 ∈ \mathcal I^-_i} d^p_{\mathcal M}(c_i(x_{j_1},x_{j_2}), x_i),\]

where $c_i(⋅,⋅)$ denotes the mid point between its two arguments that is nearest to $x_i$, see [BBSW16] for a derivation.

In long function names, this might be shortened to TV2.

See also

grad_second_order_Total_Variation, prox_second_order_Total_Variation

source
ManoptExamples.second_order_Total_VariationMethod
second_order_Total_Variation(M,(x1,x2,x3) [,p=1])

Compute the $\operatorname{TV}_2^p$ functional for the 3-tuple of points (x1,x2,x3) on the manifold M. Denote by

\[ \mathcal C = \bigl\{ c ∈ \mathcal M \ |\ c = g(\tfrac{1}{2};x_1,x_3) \text{ for some geodesic }g\bigr\}\]

the set of mid points between $x_1$ and $x_3$. Then the function reads

\[d_2^p(x_1,x_2,x_3) = \min_{c ∈ \mathcal C} d_{\mathcal M}(c,x_2),\]

see [BBSW16] for a derivation. In long function names, this might be shortened to TV2.

See also

grad_second_order_Total_Variation, prox_second_order_Total_Variation, Total_Variation

source
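
A minimal sketch on the sphere; the points are assumptions for illustration:

using Manifolds, ManoptExamples

M = Sphere(2)
x1, x2, x3 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]
ManoptExamples.second_order_Total_Variation(M, (x1, x2, x3))
# the distance of x2 to the mid point of the geodesic from x1 to x3
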

Literature

[BBSW16]
M. Bačák, R. Bergmann, G. Steidl and A. Weinmann. A second order non-smooth variational model for restoring manifold-valued images. SIAM Journal on Scientific Computing 38, A567–A597 (2016), arXiv:1506.02409.
[BFPS18]
R. Bergmann, J. H. Fitschen, J. Persch and G. Steidl. Priors with coupled first and second order differences for manifold-valued image processing. Journal of Mathematical Imaging and Vision 60, 1459–1481 (2018), arXiv:1709.01343.
[BFPS17]
R. Bergmann, J. H. Fitschen, J. Persch and G. Steidl. Infimal convolution coupling of first and second order differences on manifold-valued images. In: Scale Space and Variational Methods in Computer Vision: 6th International Conference, SSVM 2017, Kolding, Denmark, June 4–8, 2017, Proceedings, edited by F. Lauze, Y. Dong and A. B. Dahl (Springer International Publishing, 2017); pp. 447–459.
[BG18]
R. Bergmann and P.-Y. Gousenbourger. A variational model for data fitting on manifolds by minimizing the acceleration of a Bézier curve. Frontiers in Applied Mathematics and Statistics 4 (2018), arXiv:1807.10090.
[Bou23]
N. Boumal. An Introduction to Optimization on Smooth Manifolds (Cambridge University Press, 2023).
[Cas59]
P. de Casteljau. Outillage methodes calcul (Enveloppe Soleau 40.040, Institute National de la Propriété Industrielle, Paris., 1959).
[Cas63]
P. de Casteljau. Courbes et surfaces à pôles (Microfiche P 4147-1, Institute National de la Propriété Industrielle, Paris., 1963).
[DMSC16]
J. Duran, M. Moeller, C. Sbert and D. Cremers. Collaborative Total Variation: A General Framework for Vectorial TV Models. SIAM Journal on Imaging Sciences 9, 116–151 (2016), arXiv:1508.01308.
[PN07]
T. Popiel and L. Noakes. Bézier curves and $C^2$ interpolation in Riemannian manifolds. Journal of Approximation Theory 148, 111–127 (2007).
[WDS14]
A. Weinmann, L. Demaret and M. Storath. Total variation regularization for manifold-valued data. SIAM Journal on Imaging Sciences 7, 2226–2257 (2014).