Statistics

Statistics.covMethod
Statistics.cov(
    M::AbstractManifold,
    x::AbstractVector;
    basis::AbstractBasis=DefaultOrthonormalBasis(),
    tangent_space_covariance_estimator::CovarianceEstimator=SimpleCovariance(;
        corrected=true,
    ),
    mean_estimation_method::AbstractApproximationMethod=GradientDescentEstimation(),
    inverse_retraction_method::AbstractInverseRetractionMethod=default_inverse_retraction_method(
        M, eltype(x),
    ),
)

Estimate the covariance matrix of a set of points x on the manifold M. Since the covariance matrix on a manifold is a rank-2 tensor, the function returns its coefficients in the basis induced by the given tangent space basis. See Section 5 of [Pen06] for details.

The mean is calculated using the specified mean_estimation_method via mean, and tangent vectors at this mean are computed using the provided inverse_retraction_method. Finally, the covariance matrix in the tangent plane is estimated using the Euclidean space estimator tangent_space_covariance_estimator. The type CovarianceEstimator is defined in StatsBase.jl; examples of covariance estimation methods can be found in CovarianceEstimation.jl.
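A minimal usage sketch (not part of the original docstring); the manifold Sphere(2), the random data, and the use of normalize are illustrative assumptions only:

using Manifolds, Statistics, LinearAlgebra

M = Sphere(2)                                 # unit sphere in ℝ³
pts = [normalize(randn(3)) for _ in 1:100]    # illustrative random points on M

# Coefficients of the covariance in the default orthonormal basis at the estimated mean;
# for Sphere(2) this is a 2×2 matrix.
Σ = cov(M, pts)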

source
Statistics.mean!Method
mean!(M::AbstractManifold, y, x::AbstractVector[, w::AbstractWeights]; kwargs...)
mean!(
    M::AbstractManifold,
    y,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::AbstractApproximationMethod;
    kwargs...,
)

Compute the mean in-place in y.

source
Statistics.meanMethod
mean(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::ExtrinsicEstimation;
    kwargs...,
)

Estimate the Riemannian center of mass of x using ExtrinsicEstimation, i.e. by computing the mean in the embedding and projecting the result back.

See mean for a description of the remaining kwargs.

source
Statistics.meanMethod
mean(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::GeodesicInterpolation;
    shuffle_rng=nothing,
    retraction::AbstractRetractionMethod = default_retraction_method(M, eltype(x)),
    inverse_retraction::AbstractInverseRetractionMethod = default_inverse_retraction_method(M, eltype(x)),
    kwargs...,
)

Estimate the Riemannian center of mass of x in an online fashion using repeated weighted geodesic interpolation. See GeodesicInterpolation for details.

If shuffle_rng is provided, it is used to shuffle the order in which the points are considered for computing the mean.

Optionally, pass retraction and inverse_retraction method types to specify the (inverse) retraction.
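For illustration, a hedged usage sketch (the manifold Sphere(2), the random points, and the RNG seed are assumptions, not part of the docstring):

using Manifolds, Statistics, Random, LinearAlgebra

M = Sphere(2)
pts = [normalize(randn(3)) for _ in 1:50]

# Online mean via repeated geodesic interpolation; the RNG shuffles the visiting order.
m = mean(M, pts, GeodesicInterpolation(); shuffle_rng=MersenneTwister(42))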

source
Statistics.meanMethod
mean(M::AbstractManifold, x::AbstractVector[, w::AbstractWeights]; kwargs...)

Compute the (optionally weighted) Riemannian center of mass, also known as the Karcher mean, of the vector x of points on the AbstractManifold M, defined as the minimizer

\[\operatorname{argmin}_{y ∈ \mathcal M} \frac{1}{2 \sum_{i=1}^n w_i} \sum_{i=1}^n w_i\mathrm{d}_{\mathcal M}^2(y,x_i),\]

where $\mathrm{d}_{\mathcal M}$ denotes the Riemannian distance.
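A short usage sketch of the weighted and unweighted calls (the manifold, data, and weights below are illustrative assumptions):

using Manifolds, Statistics, StatsBase, LinearAlgebra

M = Sphere(2)
pts = [normalize(randn(3)) for _ in 1:20]
w = pweights(rand(20))      # illustrative weights

m  = mean(M, pts)           # unweighted Riemannian center of mass, default method for M
mw = mean(M, pts, w)        # weighted variant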

In the general case, GradientDescentEstimation is used to compute the mean.

mean(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::AbstractApproximationMethod=default_approximation_method(M, mean);
    kwargs...,
)

Compute the mean using the specified method.

mean(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::GradientDescentEstimation;
    p0=x[1],
    stop_iter=100,
    retraction::AbstractRetractionMethod = default_retraction_method(M, eltype(x)),
    inverse_retraction::AbstractInverseRetractionMethod = default_inverse_retraction_method(M, eltype(x)),
    kwargs...,
)

Compute the mean using the gradient descent scheme GradientDescentEstimation.

Optionally, provide p0, the starting point (by default the first data point). stop_iter denotes the maximal number of iterations to perform, and the kwargs... are passed to isapprox; the iteration stops when the change between two iterates is small. For further stopping criteria, check the Manopt.jl package and use a solver from there.

Optionally, pass retraction and inverse_retraction method types to specify the (inverse) retraction.

The theory stems from [Kar77] and is also described in [PA12] as the exponential barycenter. The algorithm is further described in [ATV13].
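To make the iteration concrete, here is a minimal sketch of the unweighted gradient-descent update, using exp/log in place of the configurable (inverse) retraction; it illustrates the scheme above and is not the library implementation:

using Manifolds

function karcher_mean_sketch(M, x; p0=x[1], stop_iter=100, atol=1e-9)
    q = copy(M, p0)
    for _ in 1:stop_iter
        # gradient step direction: average of the logarithmic maps of the data at q
        X = sum(log(M, q, xi) for xi in x) / length(x)
        q_next = exp(M, q, X)
        isapprox(M, q, q_next; atol=atol) && return q_next   # stop on small change
        q = q_next
    end
    return q
end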

source
Statistics.median!Method
median!(M::AbstractManifold, y, x::AbstractVector[, w::AbstractWeights]; kwargs...)
median!(
    M::AbstractManifold,
    y,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::AbstractApproximationMethod;
    kwargs...,
)

Compute the median in-place in y.

source
Statistics.medianMethod
median(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::CyclicProximalPointEstimation;
    p0=x[1],
    stop_iter=1000000,
    retraction::AbstractRetractionMethod = default_retraction_method(M, eltype(x)),
    inverse_retraction::AbstractInverseRetractionMethod = default_inverse_retraction_method(M, eltype(x)),
    kwargs...,
)

Compute the median using CyclicProximalPointEstimation.

Optionally, provide p0, the starting point (by default the first data point). stop_iter denotes the maximal number of iterations to perform, and the kwargs... are passed to isapprox; the iteration stops when the change between two iterates is small. For further stopping criteria, check the Manopt.jl package and use a solver from there.

Optionally, pass retraction and inverse_retraction method types to specify the (inverse) retraction.

The algorithm is further described in [Bac14].
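A hedged usage sketch (the manifold and data are illustrative; the keyword values simply override the defaults listed above):

using Manifolds, Statistics, LinearAlgebra

M = Sphere(2)
pts = [normalize(randn(3)) for _ in 1:50]

m = median(M, pts, CyclicProximalPointEstimation(); p0=pts[1], stop_iter=100_000)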

source
Statistics.medianMethod
median(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::ExtrinsicEstimation;
    kwargs...,
)

Estimate the median of x using ExtrinsicEstimation, i.e. by computing the median in the embedding and projecting the result back.

See median for a description of kwargs.

source
Statistics.medianMethod
median(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::WeiszfeldEstimation;
    α = 1.0,
    p0=x[1],
    stop_iter=2000,
    retraction::AbstractRetractionMethod = default_retraction_method(M, eltype(x)),
    inverse_retraction::AbstractInverseRetractionMethod = default_inverse_retraction_method(M, eltype(x)),
    kwargs...,
)

Compute the median using WeiszfeldEstimation.

Optionally, provide p0, the starting point (by default the first data point). stop_iter denotes the maximal number of iterations to perform, and the kwargs... are passed to isapprox; the iteration stops when the change between two iterates is small. For further stopping criteria, check the Manopt.jl package and use a solver from there.

The parameter $α\in (0,2]$ is a step size.

The algorithm is further described in [FVJ08], especially the update rule in Eq. (6): let $q_k$ denote the current iterate, $n$ the number of points $x_1,\ldots,x_n$, and

\[I_k = \bigl\{ i \in \{1,\ldots,n\} \big| x_i \neq q_k \bigr\}\]

all indices of points that are not equal to the current iterate. Then the update reads $q_{k+1} = \exp_{q_k}(αX)$, where

\[X = \frac{1}{s}\sum_{i\in I_k} \frac{w_i}{d_{\mathcal M}(q_k,x_i)}\log_{q_k}x_i \quad \text{ with } \quad s = \sum_{i\in I_k} \frac{w_i}{d_{\mathcal M}(q_k,x_i)},\]

and where $\mathrm{d}_{\mathcal M}$ denotes the Riemannian distance.

Optionally, pass retraction and inverse_retraction method types to specify the (inverse) retraction, which by default use the exponential and logarithmic map, respectively.
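The update rule can be sketched directly with exp/log for the unweighted case; this illustrates the formula above and is not the library implementation:

using Manifolds

function weiszfeld_step(M, x, q; α=1.0)
    I_k = [i for i in eachindex(x) if distance(M, q, x[i]) > 0]   # points not equal to q
    isempty(I_k) && return q
    s = sum(1 / distance(M, q, x[i]) for i in I_k)
    X = sum(log(M, q, x[i]) / distance(M, q, x[i]) for i in I_k) / s
    return exp(M, q, α * X)                                       # next iterate q_{k+1}
end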

source
Statistics.medianMethod
median(M::AbstractManifold, x::AbstractVector[, w::AbstractWeights]; kwargs...)
median(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::AbstractApproximationMethod;
    kwargs...,
)

Compute the (optionally weighted) Riemannian median of the vector x of points on the AbstractManifold M, defined as the minimizer

\[\operatorname{argmin}_{y ∈ \mathcal M} \frac{1}{\sum_{i=1}^n w_i} \sum_{i=1}^n w_i\mathrm{d}_{\mathcal M}(y,x_i),\]

where $\mathrm{d}_{\mathcal M}$ denotes the Riemannian distance. This objective is nonsmooth (i.e., nondifferentiable).

In the general case, CyclicProximalPointEstimation is used to compute the median. However, this default may be overloaded for specific manifolds.

If a method is specified, the median is computed using that method.

source
Statistics.stdMethod
std(M, x, m=mean(M, x); corrected=true, kwargs...)
std(M, x, w::AbstractWeights, m=mean(M, x, w); corrected=false, kwargs...)

Compute the (optionally weighted) standard deviation of a Vector x of n data points on the AbstractManifold M, i.e.

\[\sqrt{\frac{1}{c} \sum_{i=1}^n w_i d_{\mathcal M}^2 (x_i,m)},\]

where c is a correction term, see Statistics.std. The mean of x can be specified as m, and the corrected standard deviation can be activated by setting corrected=true.

source
Statistics.varMethod
var(M, x, m=mean(M, x); corrected=true)
var(M, x, w::AbstractWeights, m=mean(M, x, w); corrected=false)

Compute the (optionally weighted) variance of a Vector x of n data points on the AbstractManifold M, i.e.

\[\frac{1}{c} \sum_{i=1}^n w_i d_{\mathcal M}^2 (x_i,m),\]

where c is a correction term, see Statistics.var. The mean of x can be specified as m, and the corrected variance can be activated by setting corrected=true. All further kwargs... are passed to the computation of the mean (if m is not provided).
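For illustration (the manifold and data are assumptions), var and std are consistent with each other:

using Manifolds, Statistics, LinearAlgebra

M = Sphere(2)
pts = [normalize(randn(3)) for _ in 1:30]

v = var(M, pts)    # corrected by default
s = std(M, pts)    # ≈ sqrt(v)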

source
StatsBase.kurtosisMethod
kurtosis(M::AbstractManifold, x::AbstractVector[, w::AbstractWeights], m=mean(M, x[, w]))

Compute the excess kurtosis of points in x on the manifold M. Optionally provide weights w and/or a precomputed mean m.

source
StatsBase.mean_and_stdMethod
mean_and_std(M::AbstractManifold, x::AbstractVector[, w::AbstractWeights]; kwargs...) -> (mean, std)

Compute the mean and the standard deviation std simultaneously.

mean_and_std(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::AbstractApproximationMethod;
    kwargs...,
) -> (mean, std)

Use the method for simultaneously computing the mean and standard deviation. To use a mean-specific method, call mean and then std.
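A brief usage sketch (the manifold and data are illustrative assumptions):

using Manifolds, StatsBase, LinearAlgebra

M = Sphere(2)
pts = [normalize(randn(3)) for _ in 1:30]

μ, σ = mean_and_std(M, pts)                             # default method
μ2, v = mean_and_var(M, pts, GeodesicInterpolation())   # with an explicit method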

source
StatsBase.mean_and_varMethod
mean_and_var(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::GeodesicInterpolationWithinRadius;
    kwargs...,
) -> (mean, var)

Use repeated weighted geodesic interpolation to estimate the mean. Simultaneously, use a Welford-like recursion to estimate the variance.

See GeodesicInterpolationWithinRadius and mean_and_var for more information.

source
StatsBase.mean_and_varMethod
mean_and_var(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::GeodesicInterpolation;
    shuffle_rng::Union{AbstractRNG,Nothing} = nothing,
    retraction::AbstractRetractionMethod = default_retraction_method(M, eltype(x)),
    inverse_retraction::AbstractInverseRetractionMethod = default_inverse_retraction_method(M, eltype(x)),
    kwargs...,
) -> (mean, var)

Use repeated weighted geodesic interpolation to estimate the mean. Simultaneously, use a Welford-like recursion to estimate the variance.

If shuffle_rng is provided, it is used to shuffle the order in which the points are considered. Optionally, pass retraction and inverse_retraction method types to specify the (inverse) retraction.

See GeodesicInterpolation for details on the geodesic interpolation method.

Note

The Welford algorithm for the variance is experimental and is not guaranteed to give accurate results except on Euclidean.

source
StatsBase.mean_and_varMethod
mean_and_var(M::AbstractManifold, x::AbstractVector[, w::AbstractWeights]; kwargs...) -> (mean, var)

Compute the mean and the variance simultaneously. See those functions for a description of the arguments.

mean_and_var(
    M::AbstractManifold,
    x::AbstractVector,
    [w::AbstractWeights,]
    method::AbstractApproximationMethod;
    kwargs...,
) -> (mean, var)

Use the method for simultaneously computing the mean and variance. To use a mean-specific method, call mean and then var.

source
StatsBase.momentFunction
moment(M::AbstractManifold, x::AbstractVector, k::Int[, w::AbstractWeights], m=mean(M, x[, w]))

Compute the kth central moment of points in x on manifold M. Optionally provide weights w and/or a precomputed mean.

source
StatsBase.skewnessMethod
skewness(M::AbstractManifold, x::AbstractVector[, w::AbstractWeights], m=mean(M, x[, w]))

Compute the standardized skewness of points in x on the manifold M. Optionally provide weights w and/or a precomputed mean m.
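For illustration (the manifold and data are assumptions), the higher moments are called analogously:

using Manifolds, StatsBase, LinearAlgebra

M = Sphere(2)
pts = [normalize(randn(3)) for _ in 1:40]

m3 = moment(M, pts, 3)    # third central moment
sk = skewness(M, pts)     # standardized skewness
kt = kurtosis(M, pts)     # excess kurtosis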

source

Literature

[ATV13]
B. Afsari, R. Tron and R. Vidal. On the Convergence of Gradient Descent for Finding the Riemannian Center of Mass. SIAM Journal on Control and Optimization 51, 2230–2260 (2013), arXiv:1201.0925.
[Bac14]
M. Bačák. Computing medians and means in Hadamard spaces. SIAM Journal on Optimization 24, 1542–1566 (2014), arXiv:1210.2145.
[FVJ08]
P. T. Fletcher, S. Venkatasubramanian and S. Joshi. Robust statistics on Riemannian manifolds via the geometric median. In: 2008 IEEE Conference on Computer Vision and Pattern Recognition (2008).
[Kar77]
H. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics 30, 509–541 (1977).
[Pen06]
X. Pennec. Intrinsic Statistics on Riemannian Manifolds: Basic Tools for Geometric Measurements. Journal of Mathematical Imaging and Vision 25, 127–154 (2006).
[PA12]
X. Pennec and V. Arsigny. Exponential Barycenters of the Canonical Cartan Connection and Invariant Means on Lie Groups. In: Matrix Information Geometry (Springer, Berlin, Heidelberg, 2012); pp. 123–166, hal-00699361.