List of Objectives defined for the Examples
Rayleigh Quotient on the Sphere
See the Rayleigh example (TODO) to see these in use.
ManoptExamples.RayleighQuotientCost — Type
A functor representing the Rayleigh Quotient cost function.
Let $A ∈ ℝ^{n×n}$ be a symmetric matrix. Then we can specify the Rayleigh Quotient in two forms. Either
\[f(p) = p^{\mathrm{T}}Ap,\qquad p ∈ 𝕊^{n-1},\]
or extended into the embedding as
\[f(x) = x^{\mathrm{T}}Ax, \qquad x ∈ ℝ^n,\]
which, for performance reasons, omits the division by $x^{\mathrm{T}}x$ of the original Rayleigh quotient, but is useful if you want to use this as the Euclidean cost in the embedding of $𝕊^{n-1}$.
Fields
A – the matrix stored internally
Constructor
RayleighQuotientCost(A)
Create the Rayleigh cost function.
See also RayleighQuotientGrad!!, RayleighQuotientHess!!
ManoptExamples.RayleighQuotientGrad!! — Type
A functor representing the Rayleigh Quotient gradient function.
Let $A ∈ ℝ^{n×n}$ be a symmetric matrix. Then we can specify the gradient of the Rayleigh Quotient in two forms. Either
\[\operatorname{grad} f(p) = 2 Ap - 2 (p^{\mathrm{T}}Ap)*p,\qquad p ∈ 𝕊^{n-1},\]
or taking the Euclidean gradient of the Rayleigh quotient on the sphere as
\[∇f(x) = 2Ax, \qquad x ∈ ℝ^n.\]
For details, see Example 3.62 of [Bou23].
Fields
A – the matrix stored internally
Constructor
RayleighQuotientGrad!!(A)
Create the Rayleigh quotient gradient function.
See also RayleighQuotientCost, RayleighQuotientHess!!
ManoptExamples.RayleighQuotientHess!! — Type
A functor representing the Rayleigh Quotient Hessian.
Let $A ∈ ℝ^{n×n}$ be a symmetric matrix. Then we can specify the Hessian of the Rayleigh Quotient in two forms. Either
\[\operatorname{Hess} f(p)[X] = 2 \bigl(AX - (p^{\mathrm{T}}AX)p - (p^{\mathrm{T}}Ap)X\bigr),\qquad p ∈ 𝕊^{n-1}, X ∈ T_p𝕊^{n-1},\]
or taking the Euclidean Hessian of the Rayleigh quotient on the sphere as
\[∇^2f(x)[V] = 2AV, \qquad x, V ∈ ℝ^n.\]
For details, see Example 5.27 of [Bou23].
Fields
A – the matrix stored internally
Constructor
RayleighQuotientHess!!(A)
Create the Rayleigh quotient Hessian function.
See also RayleighQuotientCost, RayleighQuotientGrad!!
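As a minimal usage sketch (assuming Manopt.jl, Manifolds.jl, and LinearAlgebra are loaded, and that the functors follow the usual f(M, p) calling convention of Manopt.jl), the cost and gradient can be combined to approximate an eigenvector to the smallest eigenvalue of $A$:

```julia
using LinearAlgebra, Manifolds, Manopt, ManoptExamples, Random

Random.seed!(42)
n = 5
A = Symmetric(randn(n, n))   # a symmetric test matrix
M = Sphere(n - 1)

f = ManoptExamples.RayleighQuotientCost(A)
grad_f = ManoptExamples.RayleighQuotientGrad!!(A)

p0 = rand(M)
p = gradient_descent(M, f, grad_f, p0)
# f(M, p) now approximates the smallest eigenvalue of A,
# i.e. minimum(eigvals(A))
```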
Bézier Curves
See the Bézier curves example to see these in use.
ManoptExamples.BezierSegment — Type
A type to capture a Bézier segment. With $n$ points, a Bézier segment of degree $n-1$ is stored. On the Euclidean manifold, this yields a polynomial of degree $n-1$.
This type is mainly used to encapsulate the points within a composite Bézier curve, which consists of an AbstractVector of BezierSegments, where each of the points might already be a nested array on a PowerManifold.
Note that this type can also be used to represent tangent vectors on the control points of a segment.
See also: de_Casteljau.
Constructor
BezierSegment(pts::AbstractVector)
Given an abstract vector pts, generate the corresponding Bézier segment.
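For instance, a small sketch (assuming Manifolds.jl is loaded) creating a quadratic segment on the sphere and evaluating it with de_Casteljau, which is documented below:

```julia
using Manifolds, ManoptExamples

M = Sphere(2)
# three control points on the 2-sphere yield a segment of degree 2
b = ManoptExamples.BezierSegment([
    [1.0, 0.0, 0.0],
    [1 / sqrt(2), 1 / sqrt(2), 0.0],
    [0.0, 1.0, 0.0],
])
# evaluate the segment at its mid point t = 1/2
p = ManoptExamples.de_Casteljau(M, b, 0.5)
```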
ManoptExamples.L2_acceleration_Bezier — Method
L2_acceleration_Bezier(M, B, pts, λ, d)
Compute the value of the discrete acceleration of the composite Bézier curve together with a data term, i.e.
\[\frac{λ}{2}\sum_{i=0}^{N} d_{\mathcal M}(d_i, c_B(i))^2+ \sum_{i=1}^{N-1}\frac{d^2_2 [ B(t_{i-1}), B(t_{i}), B(t_{i+1})]}{\Delta_t^3}\]
where for this formula the points pts along the curve are equispaced and denoted by $t_i$, and $d_2$ refers to the second order absolute difference second_order_Total_Variation (squared); the junction points are denoted by $p_i$, and to each $p_i$ corresponds one data item in the manifold points given in d. For details on the acceleration approximation, see acceleration_Bezier. Note that the Bézier curve is given in reduced form as a point on a PowerManifold, together with the degrees of the segments; assuming a differentiable curve, the segments can be reconstructed internally.
See also
grad_L2_acceleration_Bezier, acceleration_Bezier, grad_acceleration_Bezier
ManoptExamples.acceleration_Bezier — Method
acceleration_Bezier(
M::AbstractManifold,
B::AbstractVector{P},
degrees::AbstractVector{<:Integer},
T::AbstractVector{<:AbstractFloat},
) where {P}
Compute the value of the discrete acceleration of the composite Bézier curve
\[\sum_{i=1}^{N-1}\frac{d^2_2 [ B(t_{i-1}), B(t_{i}), B(t_{i+1})]}{\Delta_t^3}\]
where for this formula the points along the curve are equispaced and denoted by $t_i$, $i=1,…,N$, and $d_2$ refers to the second order absolute difference second_order_Total_Variation (squared). Note that the Bézier curve is given in reduced form as a point on a PowerManifold, together with the degrees of the segments; assuming a differentiable curve, the segments can be reconstructed internally.
This acceleration discretization was introduced in [BG18].
See also
grad_acceleration_Bezier, L2_acceleration_Bezier, grad_L2_acceleration_Bezier
ManoptExamples.adjoint_differential_Bezier_control_points — Method
adjoint_differential_Bezier_control_points(
M::AbstractManifold,
T::AbstractVector,
X::AbstractVector,
)
adjoint_differential_Bezier_control_points!(
M::AbstractManifold,
Y::AbstractVector{<:BezierSegment},
T::AbstractVector,
X::AbstractVector,
)
Evaluate the adjoint of the differential of a composite Bézier curve with respect to its control points at several times T. This can be computed in place of Y.
See de_Casteljau for more details on the curve.
ManoptExamples.adjoint_differential_Bezier_control_points — Method
adjoint_differential_Bezier_control_points(
M::AbstractManifold,
B::AbstractVector{<:BezierSegment},
t,
X
)
adjoint_differential_Bezier_control_points!(
M::AbstractManifold,
Y::AbstractVector{<:BezierSegment},
B::AbstractVector{<:BezierSegment},
t,
X
)
Evaluate the adjoint of the differential of a composite Bézier curve on the manifold M with respect to its control points B, based on points T $=(t_i)_{i=1}^n$ that are pointwise in $t_i∈[0,1]$ on the curve, and given corresponding tangent vectors $X = (η_i)_{i=1}^n$, $η_i∈T_{β(t_i)}\mathcal M$. This can be computed in place of Y.
See de_Casteljau for more details on the curve.
ManoptExamples.adjoint_differential_Bezier_control_points — Method
adjoint_differential_Bezier_control_points(
M::AbstractManifold,
b::BezierSegment,
t::AbstractVector,
X::AbstractVector,
)
adjoint_differential_Bezier_control_points!(
M::AbstractManifold,
Y::BezierSegment,
b::BezierSegment,
t::AbstractVector,
X::AbstractVector,
)
Evaluate the adjoint of the differential of a Bézier curve on the manifold M with respect to its control points b, based on points T $=(t_i)_{i=1}^n$ that are pointwise in $t_i∈[0,1]$ on the curve, and given corresponding tangent vectors $X = (η_i)_{i=1}^n$, $η_i∈T_{β(t_i)}\mathcal M$. This can be computed in place of Y.
See de_Casteljau for more details on the curve, and [BG18].
ManoptExamples.adjoint_differential_Bezier_control_points — Method
adjoint_differential_Bezier_control_points(M::AbstractManifold, b::BezierSegment, t, η)
adjoint_differential_Bezier_control_points!(
M::AbstractManifold,
Y::BezierSegment,
b::BezierSegment,
t,
η,
)
Evaluate the adjoint of the differential of a Bézier curve on the manifold M with respect to its control points b, based on a point t $∈[0,1]$ on the curve and a tangent vector $η∈T_{β(t)}\mathcal M$. This can be computed in place of Y.
See de_Casteljau for more details on the curve.
ManoptExamples.de_Casteljau — Method
de_Casteljau(M::AbstractManifold, b::BezierSegment) -> Function
Return the Bézier curve $β(⋅;b_0,…,b_n): [0,1] → \mathcal M$ defined by the control points $b_0,…,b_n∈\mathcal M$, $n∈\mathbb N$, given as a BezierSegment of points of type P on the manifold M. This function implements de Casteljau's algorithm [Cas59, Cas63], generalized to manifolds in [PN07]: let $γ_{a,b}(t)$ denote the shortest geodesic connecting $a,b∈\mathcal M$. Then the curve is defined by the recursion
\[\begin{aligned} β(t;b_0,b_1) &= \gamma_{b_0,b_1}(t)\\ β(t;b_0,…,b_n) &= \gamma_{β(t;b_0,…,b_{n-1}), β(t;b_1,…,b_n)}(t). \end{aligned}\]
de_Casteljau(M::AbstractManifold, B::AbstractVector{<:BezierSegment}) -> Function
Given a vector of Bézier segments, i.e. a vector of control points $B=\bigl( (b_{0,0},…,b_{n_0,0}),…,(b_{0,m},… b_{n_m,m}) \bigr)$, where the different segments might be of different degrees $n_0,…,n_m$, the resulting composite Bézier curve $c_B:[0,m] → \mathcal M$ consists of $m$ segments which are Bézier curves.
\[c_B(t) := \begin{cases} β(t; b_{0,0},…,b_{n_0,0}) & \text{ if } t ∈[0,1]\\ β(t-i; b_{0,i},…,b_{n_i,i}) & \text{ if } t∈(i,i+1], \quad i∈\{1,…,m-1\}. \end{cases}\]
de_Casteljau(M::AbstractManifold, b::BezierSegment, t::Real)
de_Casteljau(M::AbstractManifold, B::AbstractVector{<:BezierSegment}, t::Real)
de_Casteljau(M::AbstractManifold, b::BezierSegment, T::AbstractVector) -> AbstractVector
de_Casteljau(
M::AbstractManifold,
B::AbstractVector{<:BezierSegment},
T::AbstractVector
) -> AbstractVector
Evaluate the Bézier curve at time t, or at the times given in T.
ManoptExamples.differential_Bezier_control_points — Method
differential_Bezier_control_points(
M::AbstractManifold,
B::AbstractVector{<:BezierSegment},
T::AbstractVector,
Ξ::AbstractVector{<:BezierSegment},
)
differential_Bezier_control_points!(
M::AbstractManifold,
Θ::AbstractVector{<:BezierSegment},
B::AbstractVector{<:BezierSegment},
T::AbstractVector,
Ξ::AbstractVector{<:BezierSegment},
)
Evaluate the differential of the composite Bézier curve with respect to its control points B and tangent vectors Ξ in the tangent spaces of the control points. The result is the “change” of the curve at the points in T, which are elementwise in $[0,N]$, each depending on the corresponding segment(s). Here, $N$ is the length of B. For the mutating variant, the result is computed in Θ.
See de_Casteljau for more details on the curve, and [BG18].
ManoptExamples.differential_Bezier_control_points — Method
differential_Bezier_control_points(
M::AbstractManifold,
B::AbstractVector{<:BezierSegment},
t,
X::AbstractVector{<:BezierSegment},
)
differential_Bezier_control_points!(
M::AbstractManifold,
Y::AbstractVector{<:BezierSegment},
B::AbstractVector{<:BezierSegment},
t,
X::AbstractVector{<:BezierSegment},
)
Evaluate the differential of the composite Bézier curve with respect to its control points B and tangent vectors X in the tangent spaces of the control points. The result is the “change” of the curve at t $∈[0,N]$, which depends only on the corresponding segment. Here, $N$ is the length of B. The computation can be done in place of Y.
See de_Casteljau for more details on the curve.
ManoptExamples.differential_Bezier_control_points — Method
differential_Bezier_control_points(
M::AbstractManifold,
b::BezierSegment,
T::AbstractVector,
X::BezierSegment,
)
differential_Bezier_control_points!(
M::AbstractManifold,
Y,
b::BezierSegment,
T::AbstractVector,
X::BezierSegment,
)
Evaluate the differential of the Bézier curve with respect to its control points b and tangent vectors X in the tangent spaces of the control points. The result is the “change” of the curve at the points T, elementwise in $t∈[0,1]$. The computation can be done in place of Y.
See de_Casteljau for more details on the curve.
ManoptExamples.differential_Bezier_control_points — Method
differential_Bezier_control_points(M::AbstractManifold, b::BezierSegment, t, X::BezierSegment)
differential_Bezier_control_points!(
M::AbstractManifold,
Y,
b::BezierSegment,
t,
X::BezierSegment
)
Evaluate the differential of the Bézier curve with respect to its control points b and tangent vectors X given in the tangent spaces of the control points. The result is the “change” of the curve at t $∈[0,1]$. The computation can be done in place of Y.
See de_Casteljau for more details on the curve.
ManoptExamples.get_Bezier_degree — Method
get_Bezier_degree(M::AbstractManifold, b::BezierSegment)
Return the degree of the Bézier curve represented by the tuple b of control points on the manifold M, i.e. the number of points minus 1.
ManoptExamples.get_Bezier_degrees — Method
get_Bezier_degrees(M::AbstractManifold, B::AbstractVector{<:BezierSegment})
Return the degrees of the components of a composite Bézier curve represented by tuples in B containing points on the manifold M.
ManoptExamples.get_Bezier_inner_points — Method
get_Bezier_inner_points(M::AbstractManifold, B::AbstractVector{<:BezierSegment})
get_Bezier_inner_points(M::AbstractManifold, b::BezierSegment)
Return the inner points (i.e. all except the start and end points) of the segments of the composite Bézier curve specified by the control points B. For a single segment b, its inner points are returned.
ManoptExamples.get_Bezier_junction_tangent_vectors — Method
get_Bezier_junction_tangent_vectors(M::AbstractManifold, B::AbstractVector{<:BezierSegment})
get_Bezier_junction_tangent_vectors(M::AbstractManifold, b::BezierSegment)
Return the tangent vectors at the start and end points of the composite Bézier curve, pointing from a junction point to the first and last inner control points for each segment of the composite Bézier curve specified by the control points B, given either as a vector of segments or as a single segment of control points.
ManoptExamples.get_Bezier_junctions — Function
get_Bezier_junctions(M::AbstractManifold, B::AbstractVector{<:BezierSegment})
get_Bezier_junctions(M::AbstractManifold, b::BezierSegment)
Return the start and end point(s) of the segments of the composite Bézier curve specified by the control points B. For just one segment b, its start and end points are returned.
ManoptExamples.get_Bezier_points — Function
get_Bezier_points(
M::AbstractManifold,
B::AbstractVector{<:BezierSegment},
reduce::Symbol=:default
)
get_Bezier_points(M::AbstractManifold, b::BezierSegment, reduce::Symbol=:default)
Return the control points of the segments of the composite Bézier curve specified by the control points B, given either as a vector of segments or as a single segment of control points.
This method reduces the points depending on the optional reduce symbol:
:default: no reduction is performed.
:continuous: for a continuous function, the junction points are doubled at $b_{0,i}=b_{n_{i-1},i-1}$, so only $b_{0,i}$ is kept in the vector.
:differentiable: for a differentiable function, additionally $\log_{b_{0,i}}b_{1,i} = -\log_{b_{n_{i-1},i-1}}b_{n_{i-1}-1,i-1}$ holds, hence $b_{n_{i-1}-1,i-1}$ is omitted.
If only one segment is given, all points of b, i.e. b.pts, are returned.
ManoptExamples.get_Bezier_segments — Method
get_Bezier_segments(M::AbstractManifold, c::AbstractArray{P}, d[, s::Symbol=:default])
Return the array B of BezierSegments of a composite Bézier curve reconstructed from an array c of points on the manifold M and an array of degrees d.
There are a few (reduced) representations that can be expanded; see also get_Bezier_points. For ease of the following, let $c=(c_1,…,c_k)$ and $d=(d_1,…,d_m)$, where $m$ denotes the number of components the composite Bézier curve consists of. Then
:default: $k = m + \sum_{i=1}^m d_i$, since each component requires one point more than its degree. The points are then ordered in tuples, i.e.
\[B = \bigl[ [c_1,…,c_{d_1+1}], [c_{d_1+2},…,c_{d_1+d_2+2}],…, [c_{k-d_m},…,c_{k}] \bigr]\]
:continuous: $k = 1 + \sum_{i=1}^{m} d_i$, since for a continuous curve the start and end points of successive components coincide, so only the very first start point and all end points are stored.
\[B = \bigl[ [c_1,…,c_{d_1+1}], [c_{d_1+1},…,c_{d_1+d_2+1}],…, [c_{k-d_m},…,c_{k}] \bigr]\]
:differentiable: for a differentiable function, in addition to the previous case, the second point of every segment except the first one is not stored either. Hence $k = 2 - m + \sum_{i=1}^{m} d_i$, and at a junction point $c_n$ with its preceding point $c_{n-1}$ (the last inner point of a segment), the first inner point of the next segment is computed as $b = \exp_{c_n}(-\log_{c_n} c_{n-1})$, such that the assumed differentiability holds.
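The following sketch (with hypothetical control points, assuming Manifolds.jl is loaded) illustrates the round trip between the two representations for the :continuous reduction:

```julia
using Manifolds, ManoptExamples

M = Sphere(2)
# a composite Bézier curve of two quadratic segments sharing the junction [0, 1, 0]
B = [
    ManoptExamples.BezierSegment([[1.0, 0.0, 0.0], [1 / sqrt(2), 1 / sqrt(2), 0.0], [0.0, 1.0, 0.0]]),
    ManoptExamples.BezierSegment([[0.0, 1.0, 0.0], [0.0, 1 / sqrt(2), 1 / sqrt(2)], [0.0, 0.0, 1.0]]),
]
d = ManoptExamples.get_Bezier_degrees(M, B)              # [2, 2]
c = ManoptExamples.get_Bezier_points(M, B, :continuous)  # junction point stored only once
B2 = ManoptExamples.get_Bezier_segments(M, c, d, :continuous)  # reconstructs B
```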
ManoptExamples.grad_L2_acceleration_Bezier — Method
grad_L2_acceleration_Bezier(
M::AbstractManifold,
B::AbstractVector{P},
degrees::AbstractVector{<:Integer},
T::AbstractVector,
λ,
d::AbstractVector{P}
) where {P}
Compute the gradient of the discretized acceleration of a composite Bézier curve on the manifold M with respect to its control points B, together with a data term that relates the junction points $p_i$ to the data d with a weight $λ$ compared to the acceleration. The curve is evaluated at the points given in T (elementwise in $[0,N]$), where $N$ is the number of segments of the Bézier curve. The summands are grad_distance for the data term and grad_acceleration_Bezier for the acceleration with interpolation constraints. Here the get_Bezier_junctions are included in the optimization, i.e. setting $λ=0$ yields the unconstrained acceleration minimization. Note that this is ill-posed, since any Bézier curve identical to a geodesic is a minimizer.
Note that the Bézier curve is given in reduced form as a point on a PowerManifold, together with the degrees of the segments; assuming a differentiable curve, the segments can be reconstructed internally.
See also
grad_acceleration_Bezier, L2_acceleration_Bezier, acceleration_Bezier.
ManoptExamples.grad_acceleration_Bezier — Method
grad_acceleration_Bezier(
M::AbstractManifold,
B::AbstractVector,
degrees::AbstractVector{<:Integer},
T::AbstractVector,
)
Compute the gradient of the discretized acceleration of a (composite) Bézier curve $c_B(t)$ on the manifold M with respect to its control points B, given as a point on the PowerManifold, assuming C1 conditions and known degrees. The curve is evaluated at the points given in T (elementwise in $[0,N]$, where $N$ is the number of segments of the Bézier curve). The get_Bezier_junctions are fixed for this gradient (interpolation constraint). For the unconstrained gradient, see grad_L2_acceleration_Bezier and set $λ=0$ therein. This gradient is computed using adjoint_Jacobi_fields. For details, see [BG18]. See de_Casteljau for more details on the curve.
See also
acceleration_Bezier, grad_L2_acceleration_Bezier, L2_acceleration_Bezier.
Riemannian Mean
See the Riemannian mean example to see these in use.
ManoptExamples.RiemannianMeanCost — Type
RiemannianMeanCost{P}
A functor representing the Riemannian center of mass (or Riemannian mean) cost function.
For a given set of points $d_1,\ldots,d_N$ this cost function is defined as
\[f(p) = \sum_{i=1}^N d_{\mathcal M}^2(d_i, p),\]
where $d_{\mathcal M}$ is the distance on a Riemannian manifold.
Constructor
RiemannianMeanCost(M::AbstractManifold, data::AbstractVector{<:P}) where {P}
Initialize the cost function to a data set data of points on a manifold M, where each point is of type P.
See also RiemannianMeanGradient!!, Riemannian_mean_objective
ManoptExamples.RiemannianMeanGradient!! — Type
RiemannianMeanGradient!!{P} where P
A functor representing the gradient of the Riemannian center of mass (or Riemannian mean) cost function.
For a given set of points $d_1,\ldots,d_N$ and the cost function above, the gradient reads
\[\operatorname{grad}f(p) = -2\sum_{i=1}^N \log_p d_i,\]
where $d_{\mathcal M}$ is the distance on a Riemannian manifold, and grad_distance is employed to compute the single summands.
This functor provides both the allocating variant grad_f(M, p) as well as the in-place variant grad_f!(M, X, p), which computes the gradient in place of X.
Constructors
RiemannianMeanGradient!!(data::AbstractVector{P}, initial_vector::T=nothing) where {P,T}
Generate the Riemannian mean gradient based on some data points data and an initial tangent vector initial_vector. If you do not provide an initial tangent vector to allocate the intermediate storage of a tangent vector, you can only use the allocating variant.
RiemannianMeanGradient!!(
M::AbstractManifold,
data::AbstractVector{P};
initial_vector::T=zero_vector(M, first(data)),
) where {P,T}
Initialize the Riemannian mean gradient, where the internal storage for tangent vectors can be created automatically, since the Riemannian manifold M is provided.
See also RiemannianMeanCost, Riemannian_mean_objective
ManoptExamples.Riemannian_mean_objective — Function
Riemannian_mean_objective(data, initial_vector=nothing, evaluation=Manopt.AllocatingEvaluation())
Riemannian_mean_objective(M, data;
initial_vector=zero_vector(M, first(data)),
evaluation=AllocatingEvaluation()
)
Generate the objective for the Riemannian mean task for some given vector of data points on the Riemannian manifold M.
See also
RiemannianMeanCost, RiemannianMeanGradient!!
The first constructor should only be used if additional storage for the vector is not feasible, since initializing initial_vector to nothing disables the in-place variant. Hence the evaluation is a positional argument here, since it can only be changed if a vector is provided.
The objective is available when Manopt.jl is loaded.
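Since the objective bundles cost and gradient, it can be passed directly to a solver. A minimal sketch (assuming Manopt.jl and Manifolds.jl are loaded, and that the solver accepts an objective in place of separate cost and gradient) reads:

```julia
using Manifolds, Manopt, ManoptExamples, Random

Random.seed!(42)
M = Sphere(2)
data = [rand(M) for _ in 1:100]

# bundle RiemannianMeanCost and RiemannianMeanGradient!! into one objective
objective = ManoptExamples.Riemannian_mean_objective(M, data)

# run a gradient descent from the first data point
p_mean = gradient_descent(M, objective, first(data))
```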
Robust PCA
See the Robust PCA example to see these in use.
ManoptExamples.RobustPCACost — Type
RobustPCACost{D,F}
A functor representing the Riemannian robust PCA function on the Grassmann manifold. For some given (column) data $D∈\mathbb R^{d\times n}$ the cost function is defined on some $\operatorname{Gr}(d,m)$, $m<n$ as the sum of the distances of the columns $D_i$ to the subspace spanned by $p\in\operatorname{Gr}(d,m)$ (represented as a point on the Stiefel manifold). The function reads
\[f(p) = \frac{1}{n}\sum_{i=1}^n \lVert pp^{\mathrm{T}}D_i - D_i\rVert\]
This cost additionally provides a Huber regularisation of the cost, that is, for some $ε>0$ one uses $ℓ_ε(x) = \sqrt{x^2+ε^2} - ε$ in
\[f_{ε}(p) = \frac{1}{n}\sum_{i=1}^n ℓ_ε\bigl(\lVert pp^{\mathrm{T}}D_i - D_i\rVert\bigr)\]
Note that this is a mutable struct so you can adapt the $ε$ later on.
Constructor
RobustPCACost(data::AbstractMatrix, ε=1.0)
RobustPCACost(M::Grassmann, data::AbstractMatrix, ε=1.0)
Initialize the robust PCA cost to some data $D$ and some regularization constant $ε$. The manifold is optional, to comply with all other examples, but it is not needed here to construct the cost.
ManoptExamples.RobustPCAGrad!! — Type
RobustPCAGrad!!{D,F}
A functor representing the Riemannian robust PCA gradient on the Grassmann manifold. For some given (column) data $X∈\mathbb R^{p\times n}$, the gradient of the RobustPCACost can be computed by projecting the Euclidean gradient onto the corresponding tangent space.
Note that this is a mutable struct so you can adapt the $ε$ later on.
Constructor
RobustPCAGrad!!(data, ε=1.0)
RobustPCAGrad!!(M::Grassmann, data, ε=1.0; evaluation=AllocatingEvaluation())
Initialize the robust PCA gradient to some data $D$ and some regularization constant $ε$. The manifold is optional, to comply with all other examples, but it is not needed here to construct the gradient. The evaluation= keyword is present only for unification of the interfaces; indeed, independent of that keyword, the functor always works in both variants.
ManoptExamples.robust_PCA_objective — Function
robust_PCA_objective(data::AbstractMatrix, ε=1.0; evaluation=AllocatingEvaluation())
robust_PCA_objective(M, data::AbstractMatrix, ε=1.0; evaluation=AllocatingEvaluation())
Generate the objective for the robust PCA task for some given data $D$ and Huber regularization parameter $ε$.
See also
RobustPCACost, RobustPCAGrad!!
Since the construction is independent of the manifold, that argument is optional and mainly provided to comply with the other objectives. Similarly, independent of the evaluation keyword, the gradient always allows for both the allocating and the in-place variant to be used, though that keyword is used to set up the objective.
The objective is available when Manopt.jl is loaded.
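A minimal sketch of its use (with hypothetical random data, assuming Manopt.jl and Manifolds.jl are loaded, and that the solver accepts an objective directly) could read:

```julia
using Manifolds, Manopt, ManoptExamples, Random

Random.seed!(42)
d, n, m = 10, 50, 3          # ambient dimension, number of samples, subspace dimension
data = randn(d, n)
M = Grassmann(d, m)

objective = ManoptExamples.robust_PCA_objective(M, data, 1.0)

# minimize the Huber-regularized sum of distances to the subspace
p = gradient_descent(M, objective, rand(M))
```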
Rosenbrock Function
See the Rosenbrock example and The Difference of Convex Rosenbrock Example to see these in use.
ManoptExamples.RosenbrockCost — Type
RosenbrockCost
Provide the Rosenbrock function in 2D, i.e. for some $a,b ∈ ℝ$
\[f(\mathcal M, p) = a(p_1^2-p_2)^2 + (p_1-b)^2\]
which means that for the 2D case, the manifold $\mathcal M$ is ignored.
See also 📖 Rosenbrock (with slightly different parameter naming).
Constructor
f = RosenbrockCost(a, b)
generates the functor of the Rosenbrock cost.
ManoptExamples.RosenbrockGradient!! — Type
RosenbrockGradient!!
Provide the Euclidean gradient of the Rosenbrock function in 2D, i.e. for some $a,b ∈ ℝ$
\[\nabla f(\mathcal M, p) = \begin{pmatrix} 4a(p_1^2-p_2)p_1 + 2(p_1-b) \\ -2a(p_1^2-p_2) \end{pmatrix}\]
i.e. also here the manifold is ignored.
Constructor
RosenbrockGradient!!(a, b)
Functors
grad_f!!(M, p)
grad_f!!(M, X, p)
evaluate the gradient at $p$; the manifold $\mathcal M$ is ignored.
ManoptExamples.RosenbrockMetric — Type
RosenbrockMetric <: AbstractMetric
A metric related to the Rosenbrock problem, where the metric at a point $p∈\mathbb R^2$ is given by
\[⟨X,Y⟩_{\mathrm{Rb},p} = X^\mathrm{T}G_pY, \qquad G_p = \begin{pmatrix} 1+4p_{1}^2 & -2p_{1} \\ -2p_{1} & 1 \end{pmatrix},\]
where $\mathrm{Rb}$ stands for Rosenbrock.
Base.exp — Method
q = exp(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, p, X)
exp!(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, q, p, X)
Compute the exponential map with respect to the RosenbrockMetric.
\[q = \begin{pmatrix} p_1 + X_1 \\ p_2 + X_2 + X_1^2 \end{pmatrix}\]
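As a small sketch of how these methods interact (assuming the metric has a default constructor RosenbrockMetric()), one can equip $\mathbb R^2$ with this metric via a MetricManifold:

```julia
using Manifolds, ManoptExamples

M = MetricManifold(Euclidean(2), ManoptExamples.RosenbrockMetric())
p = [1.0, 2.0]
X = [1.0, 0.0]
Y = [0.0, 1.0]

inner(M, p, X, Y)  # = X' * G_p * Y, here -2 * p[1] = -2.0
q = exp(M, p, X)   # = [p[1] + X[1], p[2] + X[2] + X[1]^2] = [2.0, 3.0]
X2 = log(M, p, q)  # recovers X
```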
Base.log — Method
X = log(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, p, q)
log!(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, X, p, q)
Compute the logarithmic map with respect to the RosenbrockMetric. The formula reads
\[X = \begin{pmatrix} q_1 - p_1 \\ q_2 - p_2 - (q_1 - p_1)^2 \end{pmatrix}\]
Manifolds.inverse_local_metric — Method
inverse_local_metric(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, p)
Return the inverse of the local metric matrix of the RosenbrockMetric in the canonical unit vector basis of the tangent space $T_p\mathbb R^2$, given as
\[G^{-1}_p = \begin{pmatrix} 1 & 2p_1\\ 2p_1 & 1+4p_1^2 \end{pmatrix}.\]
Manifolds.local_metric — Method
local_metric(::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, p)
Return the local metric matrix of the RosenbrockMetric in the canonical unit vector basis of the tangent space $T_p\mathbb R^2$, given as
\[G_p = \begin{pmatrix} 1+4p_1^2 & -2p_1 \\ -2p_1 & 1 \end{pmatrix}\]
ManifoldsBase.change_representer — Method
Y = change_representer(M::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, ::EuclideanMetric, p, X)
change_representer!(M::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, Y, ::EuclideanMetric, p, X)
Given the Euclidean gradient X at p, this function computes the corresponding Riesz representer Y such that
\[⟨X,Z⟩ = ⟨Y, Z⟩_{\mathrm{Rb},p}\]
holds for all Z, in other words $Y = G_p^{-1}X$.
This function is used in riemannian_gradient to convert a Euclidean gradient into a Riemannian gradient.
ManifoldsBase.inner — Method
inner(M::MetricManifold{ℝ,Euclidean{Tuple{2},ℝ},RosenbrockMetric}, p, X, Y)
Compute the inner product on $\mathbb R^2$ with respect to the RosenbrockMetric, i.e. for $X,Y \in T_p\mathcal M$ we have
\[⟨X,Y⟩_{\mathrm{Rb},p} = X^\mathrm{T}G_pY, \qquad G_p = \begin{pmatrix} 1+4p_1^2 & -2p_1\\ -2p_1 & 1 \end{pmatrix}.\]
ManoptExamples.Rosenbrock_objective — Function
Rosenbrock_objective(M::AbstractManifold=DefaultManifold(), a=100.0, b=1.0, evaluation=AllocatingEvaluation())
Return the gradient objective of the Rosenbrock example.
See also RosenbrockCost, RosenbrockGradient!!
The objective is available when Manopt.jl is loaded.
ManoptExamples.minimizer — Method
minimizer(::RosenbrockCost)
Return the minimizer of the RosenbrockCost, which is given by
\[p^* = \begin{pmatrix} b\\b^2 \end{pmatrix}\]
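To illustrate (a minimal sketch with the default parameters $a=100$, $b=1$, assuming Manopt.jl and Manifolds.jl are loaded), one can minimize the Rosenbrock function and compare against this minimizer:

```julia
using Manifolds, Manopt, ManoptExamples

M = Euclidean(2)
objective = ManoptExamples.Rosenbrock_objective(M)  # defaults a=100.0, b=1.0

p0 = [0.1, 0.2]
p = gradient_descent(M, objective, p0)
# for b = 1.0 the minimizer is p* = (b, b^2) = (1.0, 1.0)
```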
Total Variation
See the Total Variation example to see these in use.
ManoptExamples.Intrinsic_infimal_convolution_TV12 — Method
Intrinsic_infimal_convolution_TV12(M, f, u, v, α, β)
Compute the intrinsic infimal convolution model, where the addition is replaced by a mid point approach and the two functions involved are second_order_Total_Variation and Total_Variation. The model reads
\[E(u,v) = \frac{1}{2}\sum_{i ∈ \mathcal G} d_{\mathcal M}\bigl(g(\frac{1}{2},v_i,w_i),f_i\bigr) +\alpha\bigl( β\mathrm{TV}(v) + (1-β)\mathrm{TV}_2(w) \bigr).\]
For more details, see [BFPS17, BFPS18].
See also Total_Variation, second_order_Total_Variation
ManoptExamples.L2_Total_Variation — Method
L2_Total_Variation(M, p_data, α, p)
Compute the $ℓ^2$-TV functional on the PowerManifold M for given (fixed) data p_data (on M), a nonnegative weight α, and evaluated at p (on M), i.e.
\[E(p) = d_{\mathcal M}^2(f,p) + \alpha \operatorname{TV}(p)\]
See also Total_Variation
ManoptExamples.L2_Total_Variation_1_2 — Method
L2_Total_Variation_1_2(M, f, α, β, x)
Compute the $ℓ^2$-TV-TV2 functional on the PowerManifold manifold M for given (fixed) data f (on M), nonnegative weights α and β, and evaluated at x (on M), i.e.
\[E(x) = d_{\mathcal M}^2(f,x) + \alpha\operatorname{TV}(x) + β\operatorname{TV}_2(x)\]
See also Total_Variation, second_order_Total_Variation
ManoptExamples.L2_second_order_Total_Variation — Method
L2_second_order_Total_Variation(M, f, β, x)
Compute the $ℓ^2$-TV2 functional on the PowerManifold manifold M for given data f, nonnegative parameter β, and evaluated at x, i.e.
\[E(x) = d_{\mathcal M}^2(f,x) + β\operatorname{TV}_2(x)\]
as used in [BBSW16].
See also second_order_Total_Variation
ManoptExamples.Total_Variation — Function
Total_Variation(M, x [,p=2,q=1])
Compute the $\operatorname{TV}^p$ functional for data x on the PowerManifold manifold M, i.e. $\mathcal M = \mathcal N^n$, where $n ∈ \mathbb N^k$ denotes the dimensions of the data x. Let $\mathcal I_i$ denote the forward neighbors, i.e. with $\mathcal G$ as all indices from $\mathbf{1} ∈ \mathbb N^k$ to $n$ we have $\mathcal I_i = \{i+e_j, j=1,…,k\}\cap \mathcal G$. The formula reads
\[E^q(x) = \sum_{i ∈ \mathcal G} \bigl( \sum_{j ∈ \mathcal I_i} d^p_{\mathcal M}(x_i,x_j) \bigr)^{q/p},\]
see [WDS14] for more details. In long function names, this might be shortened to TV1, and the 1 might be omitted if only first order total variation is present.
See also
grad_Total_Variation, prox_Total_Variation, second_order_Total_Variation
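For instance, a small sketch (assuming Manifolds.jl is loaded) computing the TV value of a short signal of points on the sphere via the corresponding power manifold:

```julia
using Manifolds, ManoptExamples

N = Sphere(2)
M = PowerManifold(N, NestedPowerRepresentation(), 4)
# a signal of four points on the sphere
x = [
    [1.0, 0.0, 0.0],
    [1 / sqrt(2), 1 / sqrt(2), 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]
# for a one-dimensional signal this is the sum of distances of neighboring points
v = ManoptExamples.Total_Variation(M, x)
```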
ManoptExamples.adjoint_differential_forward_logs — Method
Y = adjoint_differential_forward_logs(M, p, X)
adjoint_differential_forward_logs!(M, Y, p, X)
Compute the adjoint differential of forward_logs $F$ occurring in the power manifold array p, i.e. the adjoint of the differential of the function $F_i(p) = \sum_{j ∈ \mathcal I_i} \log_{p_i} p_j$, where $i$ runs over all indices of the PowerManifold manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$. Let $n$ be the number of dimensions of the PowerManifold manifold (i.e. length(size(x))). Then the input tangent vector lies on the manifold $\mathcal M' = \mathcal M^n$. The adjoint differential can be computed in place of Y.
Input
M – a PowerManifold manifold
p – an array of points on a manifold
X – a tangent vector from the n-fold power of p, where n is the ndims of p
Output
Y – resulting tangent vector in $T_p\mathcal M$ representing the adjoint differentials of the logs.
ManoptExamples.differential_forward_logs — Method
Y = differential_forward_logs(M, p, X)
differential_forward_logs!(M, Y, p, X)
Compute the differential of forward_logs $F$ on the PowerManifold manifold M at p and direction X, i.e. in the power manifold array, the differential of the function
\[F_i(x) = \sum_{j ∈ \mathcal I_i} \log_{p_i} p_j, \quad i ∈ \mathcal G,\]
where $\mathcal G$ is the set of indices of the PowerManifold manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$.
Input
M – a PowerManifold manifold
p – a point
X – a tangent vector
Output
Y – resulting tangent vector in $T_x\mathcal N$ representing the differentials of the logs, where $\mathcal N$ is the power manifold with the number of dimensions added to size(x). The computation can also be done in place.
ManoptExamples.forward_logs — Method
Y = forward_logs(M, x)
forward_logs!(M, Y, x)
Compute the forward logs $F$ (generalizing forward differences) occurring in the power manifold array, i.e. the function
\[F_i(x) = \sum_{j ∈ \mathcal I_i} \log_{x_i} x_j,\quad i ∈ \mathcal G,\]
where $\mathcal G$ is the set of indices of the PowerManifold manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$. This can also be done in place of Y.
Input
M – a PowerManifold manifold
x – a point
Output
Y – resulting tangent vector in $T_x\mathcal N$ representing the logs, where $\mathcal N$ is the power manifold with the number of dimensions added to size(x). The computation can be done in place of Y.
ManoptExamples.grad_Total_Variation — Function
X = grad_Total_Variation(M, λ, x[, p=1])
grad_Total_Variation!(M, X, λ, x[, p=1])
Compute the (sub)gradient $∂f$ of all forward differences occurring in the power manifold array, i.e. of the function
\[f(p) = \sum_{i}\sum_{j ∈ \mathcal I_i} d^p(x_i,x_j)\]
where $i$ runs over all indices of the PowerManifold manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$.
Input
M – a PowerManifold manifold
x – a point
Output
X – resulting tangent vector in $T_x\mathcal M$. The computation can also be done in place.
ManoptExamples.grad_Total_Variation — Method
X = grad_Total_Variation(M, (x,y)[, p=1])
grad_Total_Variation!(M, X, (x,y)[, p=1])
Compute the (deterministic) (sub)gradient of $\frac{1}{p}d^p_{\mathcal M}(x,y)$ with respect to both $x$ and $y$ (in place of X and Y).
ManoptExamples.grad_intrinsic_infimal_convolution_TV12 — Method
grad_u, grad_v = grad_intrinsic_infimal_convolution_TV12(M, f, u, v, α, β)
Compute the (sub)gradient of the intrinsic infimal convolution model using the mid point model of second order differences (see second_order_Total_Variation), i.e. for some $f ∈ \mathcal M$ on a PowerManifold manifold $\mathcal M$ this function computes the (sub)gradient of
\[E(u,v) = \frac{1}{2}\sum_{i ∈ \mathcal G} d_{\mathcal M}(g(\frac{1}{2},v_i,w_i),f_i) + \alpha \bigl( β\mathrm{TV}(v) + (1-β)\mathrm{TV}_2(w) \bigr),\]
where both total variations refer to the intrinsic ones, grad_Total_Variation and grad_second_order_Total_Variation, respectively.
ManoptExamples.grad_second_order_Total_Variation — Function
Y = grad_second_order_Total_Variation(M, q[, p=1])
grad_second_order_Total_Variation!(M, Y, q[, p=1])
Compute the (sub)gradient of $\frac{1}{p}d_2^p(q_1, q_2, q_3)$ with respect to all three components of $q∈\mathcal M^3$, where $d_2$ denotes the second order absolute difference using the mid point model, i.e. let
\[\mathcal C = \bigl\{ c ∈ \mathcal M \ |\ c = g(\tfrac{1}{2};q_1,q_3) \text{ for some geodesic }g\bigr\}\]
denote the mid points between $q_1$ and $q_3$ on the manifold $\mathcal M$. Then the absolute second order difference is defined as
\[d_2(q_1,q_2,q_3) = \min_{c ∈ \mathcal C_{q_1,q_3}} d(c, q_2).\]
While the (sub)gradient with respect to $q_2$ is easy, the other two require the evaluation of an adjoint_Jacobi_field.
The derivation of this gradient can be found in [BBSW16].
ManoptExamples.grad_second_order_Total_Variation — Function
grad_second_order_Total_Variation(M::PowerManifold, q[, p=1])
Compute the (sub)gradient of $\frac{1}{p}d_2^p(q_1,q_2,q_3)$ with respect to all $q_1,q_2,q_3$ occurring along any array dimension in the point q, where M is the corresponding PowerManifold.
ManoptExamples.project_collaborative_TV — Function
project_collaborative_TV(M, λ, x, Ξ[, p=2,q=1])
project_collaborative_TV!(M, Θ, λ, x, Ξ[, p=2,q=1])
Compute the projection onto the collaborative-norm unit (or α-) ball, i.e. of the function
\[F^q(x) = \sum_{i∈\mathcal G} \Bigl( \sum_{j∈\mathcal I_i} \sum_{k=1}^d \lVert X_{i,j}\rVert_x^p\Bigr)^\frac{q}{p},\]
where $\mathcal G$ is the set of indices for $x∈\mathcal M$ and $\mathcal I_i$ is the set of its forward neighbors. The computation can also be done in place of Θ.
This is adopted from [DMSC16]; see their Example 3 for details.
ManoptExamples.prox_Total_Variation — Function
y = prox_Total_Variation(M, λ, x [,p=1])
Compute the proximal maps $\operatorname{prox}_{λ\varphi}$ of all forward differences occurring in the power manifold array, i.e. $\varphi(x_i,x_j) = d_{\mathcal M}^p(x_i,x_j)$, where $x_i$ and $x_j$ are array elements of x and $j = i+e_k$, where $e_k$ is the $k$th unit vector. The parameter λ is the prox parameter.
Input
M – a manifold M
λ – a real value, parameter of the proximal map
x – a point
Optional
(default is given in brackets)
p – (1) exponent of the distance of the TV term
Output
y – resulting point containing all mentioned proximal points evaluated (in a cyclic order). The computation can also be done in place.
ManoptExamples.prox_Total_Variation — Method
(y1, y2) = prox_Total_Variation(M, λ, (x1, x2) [,p=1])
prox_Total_Variation!(M, (y1, y2), λ, (x1, x2) [,p=1])
Compute the proximal map $\operatorname{prox}_{λ\varphi}$ of $φ(x,y) = d_{\mathcal M}^p(x,y)$ with parameter λ. A derivation of this closed form solution is given in [WDS14].
Input
M – a manifold M
λ – a real value, parameter of the proximal map
(x1, x2) – a tuple of two points
Optional
(default is given in brackets)
p – (1) exponent of the distance of the TV term
Output
(y1, y2) – resulting tuple of points of $\operatorname{prox}_{λφ}($(x1, x2)$)$. The result can also be computed in place.
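A quick sketch of this two-point proximal map (assuming Manifolds.jl is loaded); both resulting points move towards each other along the connecting geodesic:

```julia
using Manifolds, ManoptExamples

M = Sphere(2)
x1 = [1.0, 0.0, 0.0]
x2 = [0.0, 1.0, 0.0]
λ = 0.2
y1, y2 = ManoptExamples.prox_Total_Variation(M, λ, (x1, x2))
# y1 and y2 lie on the geodesic from x1 to x2, closer together than x1 and x2
```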
ManoptExamples.prox_parallel_TV — Function
y = prox_parallel_TV(M, λ, x [,p=1])
prox_parallel_TV!(M, y, λ, x [,p=1])
Compute the proximal maps $\operatorname{prox}_{λφ}$ of all forward differences occurring in the power manifold array, i.e. $φ(x_i,x_j) = d_{\mathcal M}^p(x_i,x_j)$, where $x_i$ and $x_j$ are array elements of x and $j = i+e_k$, where $e_k$ is the $k$th unit vector. The parameter λ is the prox parameter.
Input
M – a PowerManifold manifold
λ – a real value, parameter of the proximal map
x – a point
Optional
(default is given in brackets)
p – (1) exponent of the distance of the TV term
Output
y – resulting array of points with all mentioned proximal points evaluated (in parallel within the array elements). The computation can also be done in place.
See also prox_Total_Variation
ManoptExamples.prox_second_order_Total_Variation — Function
(y1, y2, y3) = prox_second_order_Total_Variation(M, λ, (x1, x2, x3)[, p=1], kwargs...)
prox_second_order_Total_Variation!(M, y, λ, (x1, x2, x3)[, p=1], kwargs...)
Compute the proximal map $\operatorname{prox}_{λ\varphi}$ of $\varphi(x_1,x_2,x_3) = d_{\mathcal M}^p(c(x_1,x_3),x_2)$ with parameter $λ>0$, where $c(x_1,x_3)$ denotes the mid point of a shortest geodesic from x1 to x3 that is closest to x2. The result can be computed in place of y.
Note that this function does not have a closed form solution but is solved by a few steps of the subgradient method from Manopt.jl by default. See [BBSW16] for a derivation.
Input
M – a manifold
λ – a real value, parameter of the proximal map
(x1, x2, x3) – a tuple of three points
p – (1) exponent of the distance of the TV term
Optional
kwargs... – parameters for the internal subgradient_method (used if M is neither Euclidean nor Circle, since for these a closed form is given)
Output
(y1, y2, y3) – resulting tuple of points of the proximal map. The computation can also be done in place.
Note: this function requires Manopt.jl to be loaded.
ManoptExamples.prox_second_order_Total_Variation — Method
y = prox_second_order_Total_Variation(M, λ, x[, p=1])
prox_second_order_Total_Variation!(M, y, λ, x[, p=1])
Compute the proximal maps $\operatorname{prox}_{λ\varphi}$ of all centered second order differences occurring in the power manifold array, i.e. $\varphi(x_k,x_i,x_j) = d_2(x_k,x_i,x_j)$, where $k,j$ are backward and forward neighbors (along any dimension in the array of x). The parameter λ is the prox parameter.
Input
M – a manifold M
λ – a real value, parameter of the proximal map
x – a point
Optional
(default is given in brackets)
p – (1) exponent of the distance of the TV term
Output
y – resulting point with all mentioned proximal points evaluated (in a cyclic order). The computation can also be done in place.
This function requires Manopt.jl to be loaded.
ManoptExamples.second_order_Total_Variation — Function
second_order_Total_Variation(M, x [,p=1])
Compute the $\operatorname{TV}_2^p$ functional for data x on the PowerManifold manifold M, i.e. $\mathcal M = \mathcal N^n$, where $n ∈ \mathbb N^k$ denotes the dimensions of the data x. Let $\mathcal I_i^{\pm}$ denote the forward and backward neighbors, respectively, i.e. with $\mathcal G$ as all indices from $\mathbf{1} ∈ \mathbb N^k$ to $n$ we have $\mathcal I^\pm_i = \{i\pm e_j, j=1,…,k\}\cap \mathcal G$. The formula then reads
\[E(x) = \sum_{i ∈ \mathcal G,\ j_1 ∈ \mathcal I^+_i,\ j_2 ∈ \mathcal I^-_i} d^p_{\mathcal M}(c_i(x_{j_1},x_{j_2}), x_i),\]
where $c_i(⋅,⋅)$ denotes the mid point between its two arguments that is nearest to $x_i$, see [BBSW16] for a derivation.
In long function names, this might be shortened to TV2.
See also
grad_second_order_Total_Variation, prox_second_order_Total_Variation
ManoptExamples.second_order_Total_Variation — Method
second_order_Total_Variation(M, (x1, x2, x3) [,p=1])
Compute the $\operatorname{TV}_2^p$ functional for the 3-tuple of points (x1, x2, x3) on the manifold M. Denote by
\[ \mathcal C = \bigl\{ c ∈ \mathcal M \ |\ c = g(\tfrac{1}{2};x_1,x_3) \text{ for some geodesic }g\bigr\}\]
the set of mid points between $x_1$ and $x_3$. Then the function reads
\[d_2^p(x_1,x_2,x_3) = \min_{c ∈ \mathcal C} d_{\mathcal M}(c,x_2),\]
see [BBSW16] for a derivation. In long function names, this might be shortened to TV2.
See also
grad_second_order_Total_Variation, prox_second_order_Total_Variation, Total_Variation
ManoptExamples.subgrad_Total_Variation — Function
X = subgrad_TV(M, λ, p[, k=1; atol=0])
subgrad_TV!(M, X, λ, p[, k=1; atol=0])
Compute the (randomized) subgradient $\partial F$ of all forward differences occurring in the power manifold array, i.e. of the function
\[F(p) = \sum_{i}\sum_{j ∈ \mathcal I_i} d^k(p_i,p_j)\]
where $i$ runs over all indices of the PowerManifold manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$.
Input
M – a PowerManifold manifold
p – a point
Output
X – resulting tangent vector in $T_p\mathcal M$. The computation can also be done in place.
ManoptExamples.subgrad_Total_Variation — Method
X = subgrad_TV(M, (p,q)[, k=1; atol=0])
subgrad_TV!(M, X, (p,q)[, k=1; atol=0])
Compute the (randomized) subgradient of $\frac{1}{k}d^k_{\mathcal M}(p,q)$ with respect to both $p$ and $q$ (in place of X and Y).
Literature
- [BBSW16]
- M. Bačák, R. Bergmann, G. Steidl and A. Weinmann. A second order non-smooth variational model for restoring manifold-valued images. SIAM Journal on Scientific Computing 38, A567–A597 (2016), arXiv:1506.02409.
- [BFPS18]
- R. Bergmann, J. H. Fitschen, J. Persch and G. Steidl. Priors with coupled first and second order differences for manifold-valued image processing. Journal of Mathematical Imaging and Vision 60, 1459–1481 (2018), arXiv:1709.01343.
- [BFPS17]
- R. Bergmann, J. H. Fitschen, J. Persch and G. Steidl. Infimal convolution coupling of first and second order differences on manifold-valued images. In: Scale Space and Variational Methods in Computer Vision: 6th International Conference, SSVM 2017, Kolding, Denmark, June 4–8, 2017, Proceedings, edited by F. Lauze, Y. Dong and A. B. Dahl (Springer International Publishing, 2017); pp. 447–459.
- [BG18]
- R. Bergmann and P.-Y. Gousenbourger. A variational model for data fitting on manifolds by minimizing the acceleration of a Bézier curve. Frontiers in Applied Mathematics and Statistics 4 (2018), arXiv:1807.10090.
- [Bou23]
- N. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).
- [Cas59]
- P. de Casteljau. Outillage methodes calcul (Enveloppe Soleau 40.040, Institut National de la Propriété Industrielle, Paris, 1959).
- [Cas63]
- P. de Casteljau. Courbes et surfaces à pôles (Microfiche P 4147-1, Institut National de la Propriété Industrielle, Paris, 1963).
- [DMSC16]
- J. Duran, M. Moeller, C. Sbert and D. Cremers. Collaborative Total Variation: A General Framework for Vectorial TV Models. SIAM Journal on Imaging Sciences 9, 116–151 (2016), arXiv:1508.01308.
- [PN07]
- T. Popiel and L. Noakes. Bézier curves and $C^2$ interpolation in Riemannian manifolds. Journal of Approximation Theory 148, 111–127 (2007).
- [WDS14]
- A. Weinmann, L. Demaret and M. Storath. Total variation regularization for manifold-valued data. SIAM Journal on Imaging Sciences 7, 2226–2257 (2014).