Unsolved

Polynomial linearly-convergent method for g-convex optimization?

Posed by Christopher Criscitiello et al. (2023)

§ Problem Statement

Setup

Let $(\mathcal{M},g)$ be a $d$-dimensional Riemannian manifold with geodesic distance $\operatorname{dist}(\cdot,\cdot)$. A function $f:\mathcal{M}\to\mathbb{R}$ is $L$-Lipschitz if $|f(x)-f(y)|\le L\,\operatorname{dist}(x,y)$ for all $x,y\in\mathcal{M}$, and geodesically convex (g-convex) if for every constant-speed geodesic $\gamma:[0,1]\to\mathcal{M}$, the map $t\mapsto f(\gamma(t))$ is convex on $[0,1]$.

Assume $f^\star:=\inf_{y\in\mathcal{M}} f(y)$ is finite. Consider a deterministic first-order oracle model in which, given a query point $x\in\mathcal{M}$, the oracle returns a Riemannian subgradient $s\in T_x\mathcal{M}$ satisfying

$$f(\exp_x(v))\ge f(x)+\langle s, v\rangle \quad\text{for all tangent vectors } v \text{ for which } \exp_x(v) \text{ is defined,}$$

where $\exp_x:T_x\mathcal{M}\to\mathcal{M}$ is the exponential map and $\langle\cdot,\cdot\rangle$ is the inner product induced by $g$ on $T_x\mathcal{M}$.
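As a concrete, illustrative instance of this oracle model (not taken from the paper), the sketch below implements the exponential and logarithmic maps on the unit sphere $S^{d-1}$ and a subgradient oracle for $f(x)=\operatorname{dist}(x,p)$, which is g-convex and $1$-Lipschitz on the open hemisphere around $p$; all function names are chosen for illustration.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x along v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * (v / n)

def sphere_log(x, y):
    """Logarithmic map: tangent vector at x pointing toward y, |log_x(y)| = dist(x, y)."""
    c = np.clip(x @ y, -1.0, 1.0)
    theta = np.arccos(c)          # geodesic distance between x and y
    if theta < 1e-12:
        return np.zeros_like(x)
    u = y - c * x                 # component of y orthogonal to x
    return theta * u / np.linalg.norm(u)

def dist_oracle(x, p):
    """First-order oracle for f(x) = dist(x, p): returns (f(x), subgradient at x)."""
    v = sphere_log(x, p)
    theta = np.linalg.norm(v)
    s = np.zeros_like(x) if theta < 1e-12 else -v / theta
    return theta, s
```

Within the hemisphere around $p$, the returned $s=-\log_x(p)/\operatorname{dist}(x,p)$ is a unit-norm Riemannian gradient, so the subgradient inequality above can be checked numerically at random tangent vectors.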

Unsolved Problem

Does there exist a deterministic first-order algorithm that, for every dimension $d$, every $d$-dimensional Riemannian manifold $\mathcal{M}$, every Lipschitz g-convex $f:\mathcal{M}\to\mathbb{R}$, and every accuracy $\epsilon\in(0,1)$, outputs a point $x\in\mathcal{M}$ such that

$$f(x)-f^\star\le \epsilon,$$

using at most $O(\mathrm{poly}(d)\,\log(\epsilon^{-1}))$ oracle queries, and with at most $O(\mathrm{poly}(d))$ arithmetic operations per oracle query (i.e., per-query work polynomial in $d$ and not growing with $\epsilon^{-1}$ beyond constants)?


§ Significance & Implications

In Euclidean convex optimization, the ellipsoid method gives a deterministic, worst-case polynomial-time framework whose oracle complexity is logarithmic in $\epsilon^{-1}$ for achieving $\epsilon$-suboptimality. An affirmative answer here would show that g-convexity on arbitrary Riemannian manifolds admits an analogous uniform, dimension-polynomial first-order method with $\log(\epsilon^{-1})$ query dependence; a negative answer would indicate that manifold geometry (beyond g-convexity and Lipschitz continuity) creates a genuine barrier to extending Euclidean-style polynomial-time guarantees.
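The Euclidean baseline referred to above can be sketched as follows: a minimal central-cut ellipsoid method for minimizing a convex function from a subgradient oracle. This is a textbook illustration, not the paper's manifold algorithm, and the names and interface are chosen for illustration.

```python
import numpy as np

def ellipsoid_min(oracle, x0, R, num_iters):
    """Central-cut ellipsoid method for minimizing a convex f on R^n (n >= 2).

    `oracle(x)` returns (f(x), subgradient of f at x). The initial ellipsoid
    is the ball of radius R around x0, assumed to contain a minimizer. Each
    iteration makes one oracle query and shrinks the ellipsoid's volume by a
    factor of about exp(-1/(2(n+1))), which is the source of the
    log(1/eps) query dependence.
    """
    n = x0.size
    c = np.asarray(x0, dtype=float).copy()
    P = (R ** 2) * np.eye(n)       # ellipsoid {x : (x-c)^T P^{-1} (x-c) <= 1}
    best_f, best_x = oracle(c)[0], c.copy()
    for _ in range(num_iters):
        fc, g = oracle(c)
        if fc < best_f:
            best_f, best_x = fc, c.copy()
        Pg = P @ g
        gnorm = np.sqrt(g @ Pg)
        if gnorm < 1e-15:          # subgradient ~ 0: center is (near-)optimal
            break
        gt = Pg / gnorm            # P-normalized cut direction
        c = c - gt / (n + 1)       # move center into the kept half-ellipsoid
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(gt, gt))
    return best_x, best_f
```

Each update keeps the half-ellipsoid $\{x : \langle g, x - c\rangle \le 0\}$, which must contain every minimizer by convexity; for $n=1$ the update degenerates and plain bisection is used instead.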

§ Known Partial Results

  • Criscitiello et al. (2023): In the Euclidean special case $\mathcal{M}=\mathbb{R}^d$, the classical ellipsoid method achieves $O(\mathrm{poly}(d)\log(\epsilon^{-1}))$ first-order (separation/subgradient) oracle complexity for convex optimization.

  • Criscitiello, Martínez-Rubio, and Boumal (COLT 2023): They give an ellipsoid-like algorithm for certain constant-curvature manifolds (hemisphere or hyperbolic space) with subgradient query complexity $O(d^2\log^2(\epsilon^{-1}))$.

  • Criscitiello et al. (2023): In that constant-curvature setting, their method has a per-query computational cost of $O(d^2)$ arithmetic operations.

  • Criscitiello et al. (2023): For general Riemannian manifolds, the existence of a deterministic first-order method with $O(\mathrm{poly}(d)\log(\epsilon^{-1}))$ queries and $O(\mathrm{poly}(d))$ per-query work remains open; the COLT 2023 paper discusses candidate approaches and obstacles but does not establish such a general algorithm.

§ References

[1]

Open Problem: Polynomial linearly-convergent method for g-convex optimization?

Christopher Criscitiello, David Martínez-Rubio, Nicolas Boumal (2023)

Conference on Learning Theory (COLT), PMLR 195

📍 Open-problem note in COLT proceedings.

