Different library functions

Documentation for ManifoldDiff.jl's methods and types for finite differences and automatic differentiation.

Derivatives

ManifoldDiff.geodesic_derivativeMethod
Y = geodesic_derivative(M, p, X, t::Number; γt = geodesic(M, p, X, t))
geodesic_derivative!(M, Y, p, X, t::Number; γt = geodesic(M, p, X, t))

Evaluate the derivative of the geodesic $γ(t)$ with $γ_{p,X}(0) = p$ and $\dot γ_{p,X}(0) = X$ at $t$. The formula reads

\[\dot γ(t) = \mathcal P_{γ(t) \gets p} X\]

where $\mathcal P$ denotes the parallel transport. This computation can also be done in place of $Y$.

Optional Parameters

  • γt – (geodesic(M, p, X, t)) the point on the geodesic at $t$. This way, if the point was computed earlier, it can be reused here.
source
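
A minimal usage sketch of geodesic_derivative, assuming Manifolds.jl provides the Sphere and using LinearAlgebra for norms:

using LinearAlgebra, Manifolds, ManifoldDiff

M = Sphere(2)
p = [1.0, 0.0, 0.0]
X = [0.0, 0.5, 0.0]                       # initial velocity at p
Y = ManifoldDiff.geodesic_derivative(M, p, X, 0.5)
norm(Y) ≈ norm(X)                         # parallel transport preserves the norm
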
ManifoldDiff.shortest_geodesic_derivativeMethod
Y = shortest_geodesic_derivative(M, p, q, t::Number; γt = shortest_geodesic(M, p, q, t))
shortest_geodesic_derivative!(M, Y, p, q, t::Number; γt = shortest_geodesic(M, p, q, t))

Evaluate the derivative of the shortest geodesic $γ(t)$ with $γ_{p,q}(0) = p$ and $γ_{p,q}(1) = q$ at $t$. The formula reads

\[\dot γ(t) = \mathcal P_{γ(t) \gets p} \log_pq\]

where $\mathcal P$ denotes the parallel transport. This computation can also be done in place of $Y$.

Optional Parameters

  • γt – (shortest_geodesic(M, p, q, t)) the point on the geodesic at $t$. This way, if the point was computed earlier, it can be reused here.
source

Differentials and their adjoints

ManifoldDiff.differential_exp_argument_lie_approxMethod
differential_exp_argument_lie_approx(M::AbstractManifold, p, X, Y; n)

Approximate the differential of the exponential map based on the Lie group exponential. The formula reads (see Theorem 1.7 of [Helgason1978])

\[D_X \exp_{p}(X)[Y] = (\mathrm{d}L_{\exp_e(X)})_e\left(\sum_{k=0}^{n}\frac{(-1)^k}{(k+1)!}(\operatorname{ad}_X)^k(Y)\right)\]

where $(\operatorname{ad}_X)^k(Y)$ is defined recursively as $(\operatorname{ad}_X)^0(Y) = Y$ and $(\operatorname{ad}_X)^{k+1}(Y) = [X, (\operatorname{ad}_X)^k(Y)]$.

source
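
To make the truncated series concrete: for matrix Lie algebras, $\operatorname{ad}_X(Y) = XY - YX$, and the sum can be evaluated as in the following sketch; the helper name ad_series is hypothetical and not part of ManifoldDiff.jl.

# Hypothetical helper: evaluates Σ_{k=0}^{n} (-1)^k/(k+1)! (ad_X)^k(Y) for
# square matrices X, Y, with ad_X(Y) = XY - YX.
function ad_series(X::AbstractMatrix, Y::AbstractMatrix; n::Int=10)
    term = copy(Y)                   # (ad_X)^0(Y) = Y
    acc = float.(Y)                  # k = 0 summand: Y / 1!
    fact = 1.0                       # running value of (k+1)!
    for k in 1:n
        term = X * term - term * X   # apply ad_X once more
        fact *= k + 1
        acc += ((-1)^k / fact) .* term
    end
    return acc
end
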
ManifoldDiff.differential_inverse_retract_argument_fd_approxMethod
differential_inverse_retract_argument_fd_approx(
    M::AbstractManifold,
    p,
    q,
    X;
    retr::AbstractRetractionMethod = default_retraction_method(M),
    invretr::AbstractInverseRetractionMethod = default_inverse_retraction_method(M),
    h::Real=sqrt(eps(eltype(X))),
)

Approximate the differential of the inverse retraction invretr using a finite difference formula (see Eq. (16) in [Zim20])

\[\frac{\operatorname{retr}^{-1}_q(\operatorname{retr}_p(hX)) - \operatorname{retr}^{-1}_q(\operatorname{retr}_p(-hX))}{2h}\]

where $h$ is the finite difference step size, $\operatorname{retr}^{-1}$ is the inverse retraction invretr, and $\operatorname{retr}$ is the retraction retr.

source
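
Spelled out with the exponential and logarithmic maps as the retraction pair, the central difference reads as in the following sketch, assuming Manifolds.jl's Sphere:

using Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
q = [0.0, 1.0, 0.0]
X = [0.0, 0.0, 1.0]                  # tangent vector at p
h = sqrt(eps(Float64))
# (retr⁻¹_q(retr_p(hX)) - retr⁻¹_q(retr_p(-hX))) / 2h with retr = exp, retr⁻¹ = log
D = (log(M, q, exp(M, p, h * X)) - log(M, q, exp(M, p, -h * X))) / (2h)
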
ManifoldDiff.AbstractProjectorType
abstract type AbstractProjector end

An abstract type for projectors on a tangent space $T_pM$ for fixed values of p and M. Calling a projector on a tangent vector returns a new tangent vector:

(Π::AbstractProjector)(X) -> Y

Projectors assume that X is a valid vector from $T_pM$.

source
ManifoldDiff.CoprojectorOntoVectorType
CoprojectorOntoVector{TM<:AbstractManifold,TP,TX}

A structure that represents the projector onto the subspace of the tangent space at p on the manifold M orthogonal to the vector X of unit norm.

Constructor

CoprojectorOntoVector(M::AbstractManifold, p, X)

source
ManifoldDiff.ProjectorOntoVectorType
ProjectorOntoVector{TM<:AbstractManifold,TP,TX}

A structure that represents the projector onto the subspace of the tangent space at p on the manifold M spanned by the tangent vector X of unit norm.

Constructor

ProjectorOntoVector(M::AbstractManifold, p, X)

source
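
Both projectors are callable on tangent vectors (see AbstractProjector above) and decompose a tangent vector into its components along and orthogonal to X; a minimal sketch, assuming Manifolds.jl's Sphere:

using Manifolds, ManifoldDiff

M = Sphere(2)
p = [1.0, 0.0, 0.0]
X = [0.0, 1.0, 0.0]                  # unit-norm tangent vector at p
Π = ManifoldDiff.ProjectorOntoVector(M, p, X)
Πco = ManifoldDiff.CoprojectorOntoVector(M, p, X)
Z = [0.0, 2.0, 3.0]                  # another tangent vector at p
Π(Z) + Πco(Z) ≈ Z                    # the two projectors sum to the identity
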

Gradients

ManifoldDiff.grad_distanceFunction
grad_distance(M, q, p[, c=2])
grad_distance!(M, X, q, p[, c=2])

Compute the (sub)gradient of the distance (by default squared)

\[f(p) = \frac{1}{c} d^c_{\mathcal M}(p, q)\]

to the fixed point q on the manifold M, where c is an integer. The mutating variant computes the result in place of X. The (sub)gradient reads

\[\operatorname{grad}f(p) = -d_{\mathcal M}^{c-2}(p, q)\log_pq\]

for $c\neq 1$ or $p\neq q$. Note that for the remaining case $c=1$, $p=q$, the function is not differentiable. In this case, the function returns the corresponding zero tangent vector, since this is an element of the subdifferential.

Optional

  • c – (2) the exponent of the distance, i.e. the default is the squared distance
source
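
For the default c = 2 the gradient simplifies to $-\log_p q$; a minimal usage sketch, assuming Manifolds.jl's Sphere:

using Manifolds, ManifoldDiff

M = Sphere(2)
q = [1.0, 0.0, 0.0]                  # the fixed point
p = [0.0, 1.0, 0.0]
X = ManifoldDiff.grad_distance(M, q, p)   # default c = 2
X ≈ -log(M, p, q)                    # for c = 2 the gradient is -log_p q
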
ManifoldDiff.subgrad_distanceFunction
subgrad_distance(M, q, p[, c = 2; atol = 0])
subgrad_distance!(M, X, q, p[, c = 2; atol = 0])

Compute the subgradient of the distance (the mutating variant works in place of X)

\[f(p) = \frac{1}{c} d^c_{\mathcal M}(p, q)\]

to the fixed point q on the manifold M, where c is an integer. The subgradient reads

\[\partial f(p) = -d_{\mathcal M}^{c-2}(p, q)\log_pq\]

for $c\neq 1$ or $p\neq q$. Note that for the remaining case $c=1$, $p=q$, the function is not differentiable. In this case, the subgradient is given by a tangent vector at p with norm less than or equal to one.

Optional

  • c – (2) the exponent of the distance, i.e. the default is the squared distance
  • atol – (0) the tolerance to use when evaluating the distance between p and q.
source
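
At the nonsmooth point p = q with c = 1 a subgradient element is returned; a minimal sketch, assuming Manifolds.jl's Sphere:

using Manifolds, ManifoldDiff

M = Sphere(2)
p = [1.0, 0.0, 0.0]
X = ManifoldDiff.subgrad_distance(M, p, p, 1)   # a subgradient element with norm at most one
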

Jacobi fields

ManifoldDiff.adjoint_Jacobi_fieldMethod
Y = adjoint_Jacobi_field(M, p, q, t, X, β)
adjoint_Jacobi_field!(M, Y, p, q, t, X, β)

Compute the adjoint Jacobi field $J$ along the geodesic $γ_{p,q}$ on the manifold $\mathcal M$ with initial conditions (depending on the application) $X ∈ T_{γ_{p,q}(t)}\mathcal M$ and weights $β$. The result is a vector $Y ∈ T_p\mathcal M$. The main difference to jacobi_field is that the input X and the output Y have switched tangent spaces. The computation can be done in place of Y.

For details see jacobi_field.

source
ManifoldDiff.jacobi_fieldMethod
Y = jacobi_field(M, p, q, t, X, β)
jacobi_field!(M, Y, p, q, t, X, β)

Compute the Jacobi field $J$ along the geodesic $γ_{p,q}$ on the manifold $\mathcal M$ with initial conditions (depending on the application) $X ∈ T_p\mathcal M$ and weights $β$. The result is a tangent vector Y from $T_{γ_{p,q}(t)}\mathcal M$. The computation can be done in place of Y.

See also

adjoint_Jacobi_field

source
ManifoldDiff.βdifferential_log_argumentMethod
βdifferential_log_argument(κ,t,d)

Weights for the jacobi_field corresponding to the differential of the logarithmic map with respect to its argument $D_q \log_p q[X]$. They are

\[β(κ) = \begin{cases} \frac{ d\sqrt{-κ} }{\sinh(d\sqrt{-κ})}&\text{ if }κ < 0,\\ 1 & \text{ if } κ = 0,\\ \frac{ d\sqrt{κ} }{\sin(d\sqrt{κ})}&\text{ if }κ > 0. \end{cases}\]

See also

differential_log_basepoint, jacobi_field

source
ManifoldDiff.βdifferential_log_basepointMethod
βdifferential_log_basepoint(κ,t,d)

Weights for the jacobi_field corresponding to the differential of the logarithmic map with respect to its base point $D_p \log_p q[X]$. They are

\[β(κ) = \begin{cases} -\sqrt{-κ}d\frac{\cosh(d\sqrt{-κ})}{\sinh(d\sqrt{-κ})}&\text{ if }κ < 0,\\ -1 & \text{ if } κ = 0,\\ -\sqrt{κ}d\frac{\cos(d\sqrt{κ})}{\sin(d\sqrt{κ})}&\text{ if }κ > 0. \end{cases}\]

See also

differential_log_argument, jacobi_field

source
ManifoldDiff.βdifferential_shortest_geodesic_startpointMethod
βdifferential_shortest_geodesic_startpoint(κ,t,d)

Weights for the jacobi_field corresponding to the differential of the geodesic with respect to its start point $D_p g(t;p,q)[X]$. They are

\[β(κ) = \begin{cases} \frac{\sinh(d(1-t)\sqrt{-κ})}{\sinh(d\sqrt{-κ})} &\text{ if }κ < 0,\\ 1-t & \text{ if } κ = 0,\\ \frac{\sin((1-t)d\sqrt{κ})}{\sin(d\sqrt{κ})} &\text{ if }κ > 0. \end{cases}\]

Due to a symmetry argument, these are also used to compute $D_q g(t; p,q)[X]$.

See also

differential_shortest_geodesic_endpoint, differential_shortest_geodesic_startpoint, jacobi_field

source
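
These weight functions are passed as the β argument of jacobi_field. The following sketch, assuming Manifolds.jl's Sphere, evaluates $D_p g(t; p, q)[X]$:

using Manifolds, ManifoldDiff

M = Sphere(2)
p = [1.0, 0.0, 0.0]
q = [0.0, 1.0, 0.0]
X = [0.0, 0.0, 1.0]                  # tangent vector at p
t = 0.5
# the Jacobi field with the startpoint weights yields D_p g(t; p, q)[X]
Y = ManifoldDiff.jacobi_field(
    M, p, q, t, X, ManifoldDiff.βdifferential_shortest_geodesic_startpoint,
)
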

Riemannian differentials

ManifoldDiff.RiemannianProjectionBackendType
RiemannianProjectionBackend <: AbstractRiemannianDiffBackend

This backend computes the differentiation in the embedding, which is currently limited to the gradient. Let $\mathcal M$ denote a manifold embedded in some $ℝ^m$, where $m$ is usually (much) larger than the manifold dimension. Then we require three tools:

  • A function $f̃: ℝ^m → ℝ$ such that its restriction to the manifold yields the cost function $f$ of interest.
  • A project function to project tangent vectors from the embedding (at $T_pℝ^m$) back onto the tangent space $T_p\mathcal M$. This also includes possible changes of the representation of the tangent vector (e.g. in the Lie algebra or in a different data format).
  • A change_representer for non-isometrically embedded manifolds, i.e. where the inner product on the tangent space $T_p\mathcal M$ of the manifold is not the restriction of the inner product on the tangent space $T_pℝ^m$ of the embedding.

See also riemannian_gradient and [AMS08], Section 3.6.1, for a derivation on submanifolds.

source
ManifoldDiff.TangentDiffBackendType
TangentDiffBackend <: AbstractRiemannianDiffBackend

A backend that uses tangent spaces and bases therein to derive an intrinsic differentiation scheme.

Since it works in tangent spaces at argument and function value, methods might require a retraction and an inverse retraction as well as a basis.

In the tangent space itself, this backend then employs a (Euclidean) AbstractDiffBackend.

Constructor

TangentDiffBackend(diff_backend)

where diff_backend is an AbstractDiffBackend to be used on the tangent space.

Further keyword arguments allow specifying the retraction and inverse retraction to use as well as the tangent space bases, for example basis_arg for the basis of the tangent space at the argument (see gradient below).

source
ManifoldDiff.differentialMethod
differential(M::AbstractManifold, f, t::Real, backend::AbstractDiffBackend)
differential!(M::AbstractManifold, f, X, t::Real, backend::AbstractDiffBackend)

Compute the Riemannian differential of a curve $f: ℝ \to \mathcal M$ on a manifold M, represented by the function f, at time t using the given backend. It is calculated as the tangent vector $\mathrm{d}f_t(t)[1]$, i.e. the differential of $f$ at $t$ applied to the unit tangent vector $1 ∈ T_tℝ$.

The mutating variant computes the differential in place of X.

source
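
A usage sketch, assuming Manifolds.jl and that ManifoldDiff's FiniteDifferencesBackend is available (it requires FiniteDifferences.jl to be loaded):

using Manifolds, ManifoldDiff, FiniteDifferences

M = Sphere(2)
c(t) = [sin(t), 0.0, cos(t)]         # a curve ℝ → S² through the north pole
backend = ManifoldDiff.FiniteDifferencesBackend()
X = ManifoldDiff.differential(M, c, 0.0, backend)   # ≈ [1.0, 0.0, 0.0]
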
ManifoldDiff.gradientMethod
gradient(M::AbstractManifold, f, p, backend::AbstractRiemannianDiffBackend)
gradient!(M::AbstractManifold, f, X, p, backend::AbstractRiemannianDiffBackend)

Compute the Riemannian gradient $∇f(p)$ of a real-valued function $f:\mathcal M \to ℝ$ at point p on the manifold M using the specified AbstractRiemannianDiffBackend.

The mutating variant computes the gradient in place of X.

source
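
A usage sketch with a TangentDiffBackend wrapping a Euclidean finite-difference backend, again assuming Manifolds.jl and FiniteDifferences.jl:

using Manifolds, ManifoldDiff, FiniteDifferences

M = Sphere(2)
f(p) = p[3]                          # a cost defined via the embedding
p = [1.0, 0.0, 0.0]
backend = ManifoldDiff.TangentDiffBackend(ManifoldDiff.FiniteDifferencesBackend())
X = ManifoldDiff.gradient(M, f, p, backend)   # ≈ [0.0, 0.0, 1.0]
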
ManifoldDiff.gradientMethod
gradient(M, f, p, backend::TangentDiffBackend)

This method uses the internal backend.diff_backend (Euclidean) on the function

\[ f(\operatorname{retr}_p(\cdot))\]

which is given on the tangent space. In detail, the gradient can be written in terms of the backend.basis_arg. We illustrate it here for an AbstractOrthonormalBasis, since that simplifies the notation:

\[\operatorname{grad}f(p) = \sum_{i=1}^{d} g_p(\operatorname{grad}f(p),X_i)X_i = \sum_{i=1}^{d} Df(p)[X_i]X_i\]

where the last equality is due to the definition of the gradient as the Riesz representer of the differential.

If the backend is a forward (or backward) finite difference, these coefficients in the sum can be approximated as

\[Df(p)[Y] ≈ \frac{1}{h}\bigl( f(\exp_p(hY)) - f(p) \bigr)\]

Writing $p=\exp_p(0)$, we see that this is a finite difference of $f\circ\exp_p$, i.e. of a function on the tangent space, so we can also use other (Euclidean) backends.

source
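
The scheme can also be written out by hand. The following sketch, assuming Manifolds.jl, approximates the coefficients $Df(p)[X_i]$ by forward differences in an orthonormal basis:

using Manifolds

M = Sphere(2)
f(p) = p[3]
p = [1.0, 0.0, 0.0]
h = 1e-6

B = get_basis(M, p, DefaultOrthonormalBasis())
Xs = get_vectors(M, p, B)            # orthonormal tangent vectors X_1, …, X_d
# grad f(p) ≈ Σ_i (f(exp_p(h X_i)) - f(p)) / h ⋅ X_i
coeffs = [(f(exp(M, p, h * X)) - f(p)) / h for X in Xs]
X_grad = sum(c * X for (c, X) in zip(coeffs, Xs))   # ≈ [0.0, 0.0, 1.0]
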
ManifoldDiff.riemannian_HessianMethod
riemannian_Hessian(M, p, eG, eH, X)
riemannian_Hessian!(M, Y, p, eG, eH, X)

Convert the Euclidean Hessian eH = $\operatorname{Hess} \tilde f(p) [X]$ of a function $f \colon \mathcal M \to \mathbb R$, which is the restriction of $\tilde f$ to $\mathcal M$, into the Riemannian Hessian, given additionally the (Euclidean) gradient eG = $\operatorname{grad} \tilde f(p)$.

The Riemannian Hessian is then computed by

\[\operatorname{Hess} f(p)[X] = \operatorname{proj}_{T_p\mathcal M}\bigl(\operatorname{Hess} \tilde f(p)[X]\bigr) + \mathcal W_p\Bigl( X, \operatorname{proj}_{N_p\mathcal M}\bigl( \operatorname{grad} \tilde f (p) \bigr) \Bigr),\]

where $N_p\mathcal M$ denotes the normal space, i.e. the orthogonal complement of the tangent space in the embedding, and $\mathcal W_p$ denotes the Weingarten map. See [Bou23] for more details.

The function is inspired by ehess2rhess in the Matlab package Manopt.

source
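
A minimal usage sketch on the sphere, assuming Manifolds.jl: for the linear function $\tilde f(x) = x_3$ the Euclidean Hessian vanishes, so only the Weingarten term contributes.

using Manifolds, ManifoldDiff

M = Sphere(2)
p = [0.0, 0.0, 1.0]
X = [1.0, 0.0, 0.0]                  # tangent vector at p
eG = [0.0, 0.0, 1.0]                 # Euclidean gradient of f̃ at p
eH = [0.0, 0.0, 0.0]                 # Euclidean Hessian of f̃ applied to X (zero)
Y = ManifoldDiff.riemannian_Hessian(M, p, eG, eH, X)
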
ManifoldDiff.riemannian_gradientMethod
riemannian_gradient(M, p, Y; embedding_metric=EuclideanMetric())
riemannian_gradient!(M, X, p, Y; embedding_metric=EuclideanMetric())

For a given gradient $Y = \operatorname{grad} \tilde f(p)$ in the embedding of a manifold, this function computes the Riemannian gradient $\operatorname{grad} f(p)$ of the function $f$, the restriction of $\tilde f$ to the manifold $\mathcal M$. This can also be done in place of X.

By default it uses the following computation: let $Z = \operatorname{proj}_{T_p\mathcal M}(Y)$ denote the projection of $Y$ onto the tangent space at $p$, taken with respect to the inner product of the embedding. Then

\[⟨Z-Y, W⟩ = 0 \text{ for all } W \in T_p\mathcal M,\]

or rearranged $⟨Y,W⟩ = ⟨Z,W⟩$. We then use the definition of the Riemannian gradient

\[⟨\operatorname{grad} f(p), W⟩_p = Df(p)[W] = ⟨\operatorname{grad}\tilde f(p), W⟩ = ⟨\operatorname{proj}_{T_p\mathcal M}(\operatorname{grad}\tilde f(p)),W⟩ \quad\text{for all } W \in T_p\mathcal M.\]

Comparing the first and the last term, the remaining computation is the function change_representer.

This method can also be implemented directly, if a more efficient/stable version is known.

The function is inspired by egrad2rgrad in the Matlab package Manopt.

source
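
A minimal usage sketch, assuming Manifolds.jl's Sphere, where the conversion is simply the tangent-space projection:

using Manifolds, ManifoldDiff

M = Sphere(2)
p = [1.0, 0.0, 0.0]
Y = [0.0, 0.0, 1.0]                  # Euclidean gradient of f̃(x) = x₃
X = ManifoldDiff.riemannian_gradient(M, p, Y)   # ≈ [0.0, 0.0, 1.0] at this p
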

Proximal Maps

Given a convex, lower semi-continuous function $f\colon \mathcal M \to \mathbb R$, its proximal map is defined for some $λ>0$ as [Bac14]

\[\operatorname{prox}_{λf}(p) := \operatorname*{arg\,min}_{q\in\mathcal M} \frac{1}{2λ}d^2_{\mathcal M}(p,q) + f(q).\]

Another name for the proximal map is resolvent. Intuitively, this means to minimize the function $f$ while at the same time “staying close” to the argument $p$.

ManifoldDiff.prox_distanceFunction
y = prox_distance(M::AbstractManifold, λ::Real, p_data, p [, r=2])
prox_distance!(M::AbstractManifold, q, λ::Real, p_data, p [, r=2])

Compute the proximal map $\operatorname{prox}_{λf}$ with parameter λ of $f(p) = \frac{1}{r}d_{\mathcal M}^r(p_{\mathrm{data}},p)$. For the in-place variant the computation is done in place of q.

Input

  • M – a manifold
  • λ – the prox parameter, a positive real number
  • p_data – a point on M
  • p – the argument of the proximal map
  • r – (2) the exponent of the distance

Output

  • q – the result of the proximal map of $f$

For more details see [WDS14].

source
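
For r = 2 the proximal map moves p towards p_data along the shortest geodesic; a minimal sketch, assuming Manifolds.jl's Sphere:

using Manifolds, ManifoldDiff

M = Sphere(2)
p_data = [1.0, 0.0, 0.0]
p = [0.0, 1.0, 0.0]
λ = 0.5
q = ManifoldDiff.prox_distance(M, λ, p_data, p)   # lies between p and p_data
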
  • Helgason1978

    S. Helgason, Differential Geometry, Lie Groups, and Symmetric Spaces, First Edition. Academic Press, 1978.

  • SommerFletcherPennec2020

    S. Sommer, T. Fletcher, and X. Pennec, “1 - Introduction to differential and Riemannian geometry,” in Riemannian Geometric Statistics in Medical Image Analysis, X. Pennec, S. Sommer, and T. Fletcher, Eds. Academic Press, 2020, pp. 3–37. doi: 10.1016/B978-0-12-814725-2.00008-X.