Frank-Wolfe method

Manopt.Frank_Wolfe_method - Function
Frank_Wolfe_method(M, f, grad_f, p)
Frank_Wolfe_method(M, gradient_objective, p; kwargs...)

Perform the Frank-Wolfe algorithm to compute for $\mathcal C \subset \mathcal M$

\[ \operatorname*{arg\,min}_{p∈\mathcal C} f(p),\]

where the main step is a constrained optimisation within the algorithm, that is, the sub problem (oracle)

\[ q_k = \operatorname*{arg\,min}_{q ∈ \mathcal C} ⟨\operatorname{grad} f(p_k), \log_{p_k}q⟩,\]

for every iterate $p_k$ together with a stepsize $s_k≤1$, by default $s_k = \frac{2}{k+2}$. This algorithm is inspired by but slightly more general than [WS22].

The next iterate is then given by $p_{k+1} = γ_{p_k,q_k}(s_k)$, where by default $γ$ is the shortest geodesic between the two points but can also be changed to use a retraction and its inverse.
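The following is a minimal sketch of a single such iteration, assuming a user-provided oracle for the sub problem and shortest_geodesic from Manifolds.jl; it illustrates the update rule, not the actual implementation.

using Manifolds # provides shortest_geodesic

# One illustrative Frank-Wolfe step: `oracle(M, p, X)` is a hypothetical user
# function returning argmin_{q ∈ C} ⟨X, log_p q⟩ over the constraint set C.
function frank_wolfe_step(M, grad_f, oracle, p, k)
    X = grad_f(M, p)                      # Riemannian gradient at the current iterate
    q = oracle(M, p, X)                   # solve the linearized sub problem
    s = 2 / (k + 2)                       # default step size s_k = 2/(k+2)
    return shortest_geodesic(M, p, q, s)  # next iterate γ_{p_k,q_k}(s_k)
end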

Input

  • M: a manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ℝ$ to find a minimizer $p^*$ for
  • grad_f: the gradient $\operatorname{grad}f: \mathcal M → T\mathcal M$ of f
  • p: an initial value $p ∈ \mathcal C$; note that it really has to be a feasible point

Alternatively to f and grad_f you can provide the AbstractManifoldGradientObjective gradient_objective directly.
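
For example, a minimal sketch of this variant, assuming M, f, grad_f, and a feasible starting point p0 are already defined:

using Manopt
obj = ManifoldGradientObjective(f, grad_f)  # wrap cost and gradient into one objective
p_star = Frank_Wolfe_method(M, obj, p0)     # call the solver with the objective directly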

Keyword arguments

  • evaluation: (AllocatingEvaluation) whether grad_f is an in-place or allocating (default) function
  • initial_vector: (zero_vector(M, p)) how to initialize the inner gradient tangent vector
  • stopping_criterion: (StopAfterIteration(500) | StopWhenGradientNormLess(1.0e-6)) a stopping criterion
  • retraction_method: (default_retraction_method(M, typeof(p))) a type of retraction
  • stepsize: (DecreasingStepsize(; length=2.0, shift=2)) a Stepsize to use; it always has to be less than 1. The default is the one proposed by Frank & Wolfe: $s_k = \frac{2}{k+2}$.
  • sub_cost: (FrankWolfeCost(p, initial_vector)) the cost of the Frank-Wolfe sub problem, which by default uses the current iterate and (sub)gradient of the current iteration to define a default cost; this is used to define the default sub_objective. It is ignored if you set that or the sub_problem directly.
  • sub_grad: (FrankWolfeGradient(p, initial_vector)) the gradient of the Frank-Wolfe sub problem, which by default uses the current iterate and (sub)gradient of the current iteration to define a default gradient; this is used to define the default sub_objective. It is ignored if you set that or the sub_problem directly.
  • sub_objective: (ManifoldGradientObjective(sub_cost, sub_grad)) the objective for the Frank-Wolfe sub problem; this is used to define the default sub_problem. It is ignored if you set the sub_problem manually.
  • sub_problem: (DefaultManoptProblem(M, sub_objective)) the Frank-Wolfe sub problem to solve. This can be given in three forms
    1. as an AbstractManoptProblem, then the sub_state specifies the solver to use
    2. as a closed form solution, as a function evaluating with new allocations (M, p, X) -> q that solves the sub problem on M given the current iterate p and (sub)gradient X.
    3. as a closed form solution, as a function (M, q, p, X) -> q working in place of q.
    For points 2 and 3 the sub_state has to be set to the corresponding AbstractEvaluationType, AllocatingEvaluation and InplaceEvaluation, respectively; a sketch of the in-place form is shown after this keyword list.
  • sub_state: (evaluation if sub_problem is a function, a decorated GradientDescentState otherwise) the state of the sub solver; for a function, the evaluation is inherited from the Frank-Wolfe evaluation keyword.
  • sub_kwargs: ((;)) keyword arguments to decorate the sub_state default state in case the sub_problem is not a function
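
As an illustration of the third form of the sub_problem, here is a minimal sketch of passing an in-place closed form solution; closed_form_oracle! is a hypothetical user function for a specific constraint set $\mathcal C$:

# In-place closed form solution of the oracle, form 3 from the list above.
function closed_form_oracle!(M, q, p, X)
    # write argmin_{q ∈ C} ⟨X, log_p q⟩ into q for your particular constraint set C
    return q
end

q_min = Frank_Wolfe_method(
    M, f, grad_f, p;
    sub_problem=closed_form_oracle!,
    sub_state=InplaceEvaluation(),  # matches the in-place signature (M, q, p, X) -> q
)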

All other keyword arguments are passed to decorate_state! for decorators or to decorate_objective!, respectively. If you provide the ManifoldGradientObjective directly, these decorations can still be specified.
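
For instance, assuming the usual state decoration keywords, one can request the full solver state instead of only the minimizer:

st = Frank_Wolfe_method(M, f, grad_f, p; return_state=true)  # decorate the state to be returned
p_star = get_solver_result(st)                               # extract the minimizer from the state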

Output

the obtained (approximate) minimizer $p^*$, see get_solver_return for details

source
Manopt.Frank_Wolfe_method! - Function
Frank_Wolfe_method!(M, f, grad_f, p; kwargs...)
Frank_Wolfe_method!(M, gradient_objective, p; kwargs...)

Perform the Frank-Wolfe method in place of p.

For all options and keyword arguments, see Frank_Wolfe_method.

source

State

Manopt.FrankWolfeState - Type
FrankWolfeState <: AbstractManoptSolverState

A struct to store the current state of the Frank_Wolfe_method.

It comes in two forms, depending on the realisation of the subproblem.

Fields

  • p: the current iterate, a point on the manifold
  • X: the current gradient $\operatorname{grad} f(p)$, a tangent vector to p.
  • inverse_retraction_method: (default_inverse_retraction_method(M, typeof(p))) an inverse retraction method to use within Frank-Wolfe.
  • sub_problem: an AbstractManoptProblem problem or a function (M, p, X) -> q or (M, q, p, X) -> q for a closed form solution of the sub problem
  • sub_state: an AbstractManoptSolverState for the subsolver or an AbstractEvaluationType in case the sub problem is provided as a function
  • stop: (StopAfterIteration(200) | StopWhenGradientNormLess(1.0e-6)) a StoppingCriterion
  • stepsize: (DecreasingStepsize(; length=2.0, shift=2)) $s_k$ which by default is set to $s_k = \frac{2}{k+2}$.
  • retraction_method: (default_retraction_method(M, typeof(p))) a retraction to use within Frank-Wolfe

The sub task requires a method to solve

\[ \operatorname*{argmin}_{q∈\mathcal C} ⟨X, \log_p q⟩,\qquad \text{ where } X=\operatorname{grad} f(p)\]

Constructor

FrankWolfeState(M, p, X, sub_problem, sub_state)

where the remaining fields listed above can be set as keyword arguments.

source
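
A minimal sketch of assembling this state manually with the constructor above, assuming M, a feasible p, and an in-place oracle oracle!(M, q, p, X) are given:

X0 = zero_vector(M, p)                                         # initial (zero) gradient vector
fws = FrankWolfeState(M, p, X0, oracle!, InplaceEvaluation())  # state with a closed form sub problem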

Helpers

For the inner sub-problem you can easily create the corresponding cost and gradient using

Manopt.FrankWolfeCost - Type
FrankWolfeCost{P,T}

A structure to represent the oracle sub problem in the Frank_Wolfe_method. The cost function reads

\[F(q) = ⟨X, \log_p q⟩\]

The values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.

source
Manopt.FrankWolfeGradient - Type
FrankWolfeGradient{P,T}

A structure to represent the gradient of the oracle sub problem in the Frank_Wolfe_method, that is, for a given point p and a tangent vector X the function reads

\[F(q) = ⟨X, \log_p q⟩\]

Its gradient can be computed easily using adjoint_differential_log_argument.

The values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.

source
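
A minimal sketch of how these helpers could be combined, assuming M, the current iterate p, and its gradient X = grad_f(M, p) are given; this mirrors what the default sub_objective keyword of Frank_Wolfe_method constructs:

sub_cost = FrankWolfeCost(p, X)                                # q ↦ ⟨X, log_p q⟩
sub_grad = FrankWolfeGradient(p, X)                            # its Riemannian gradient
sub_objective = ManifoldGradientObjective(sub_cost, sub_grad)  # objective for the sub problem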
[WS22]
M. Weber and S. Sra. Riemannian Optimization via Frank-Wolfe Methods. Mathematical Programming 199, 525–556 (2022).