Polynomial-time warm initialization at statistically optimal thresholds
Sourced from the work of Wanteng Ma and Dong Xia
§ Problem Statement
Setup
Fix an order $m$ and dimensions $p_1, \dots, p_m$. Let $\mathcal{T}^\star \in \mathbb{R}^{p_1 \times \cdots \times p_m}$ be an unknown tensor of low multilinear (Tucker) rank, with mode-$k$ rank $r_k$, and write $p = \max_k p_k$, $r = \max_k r_k$.
Observe $n$ i.i.d. samples generated from $\mathcal{T}^\star$, corrupted by mean-zero noise with scale $\sigma$ (e.g., Gaussian/sub-Gaussian). For the mode-$k$ matricization $\mathcal{M}_k(\mathcal{T}^\star)$, let $U_k \in \mathbb{R}^{p_k \times r_k}$ denote an orthonormal basis of its column (left singular) space.
The statistically optimal scaling regime considered in Ma & Xia (2024), up to absolute constants and polylogarithmic factors, is the minimal sample size at which consistent estimation is information-theoretically possible, i.e., $n$ on the order of the model's degrees of freedom.
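For concreteness, the setup can be written in generic notation. Everything below is a hedged sketch: a standard trace-regression observation model and the usual Tucker degrees-of-freedom count, not necessarily the paper's exact formulation.

```latex
% Hedged illustration; symbols are generic, not necessarily the paper's.
% Unknown tensor of multilinear rank (r_1, \dots, r_m):
%   \mathcal{T}^\star \in \mathbb{R}^{p_1 \times \cdots \times p_m}.
% Example observation model (trace regression with random designs):
\[
  y_i \;=\; \langle \mathcal{X}_i, \mathcal{T}^\star \rangle + \varepsilon_i,
  \qquad i = 1, \dots, n, \qquad \mathbb{E}\,\varepsilon_i = 0 .
\]
% Degrees of freedom of the Tucker model, which sets the
% information-theoretic sample-size scale up to polylog factors:
\[
  \mathrm{df} \;=\; \prod_{k=1}^{m} r_k \;+\; \sum_{k=1}^{m} r_k \,(p_k - r_k) .
\]
```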
Ma & Xia (2024) require a warm-initialization condition. Specifically, for an initializer $\hat{\mathcal{T}}^{(0)}$ built on an independent data split, there must exist an absolute constant $c_0 > 0$ such that the initialization error is at most a $c_0$-fraction of the signal strength, with high probability (the corresponding assumption in the paper states an explicit probability level).
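Warm-start conditions of this kind are typically quantified by the sin-theta distance between the initializer's mode-$k$ singular subspaces and the true ones. A minimal numpy sketch, where the dimensions, rank, perturbation level, and constant `c0` are all illustrative assumptions rather than values from the paper:

```python
import numpy as np

def sin_theta_dist(U, V):
    """Spectral sin-theta distance between the column spans of orthonormal
    bases U and V: || U U^T - V V^T ||_2, the sine of the largest
    principal angle between the two subspaces."""
    return np.linalg.norm(U @ U.T - V @ V.T, 2)

# Illustrative warm-start check; p, r, the perturbation 0.005, and c0
# are hypothetical, not values from Ma & Xia (2024).
rng = np.random.default_rng(0)
p, r, c0 = 50, 3, 0.1
U, _ = np.linalg.qr(rng.standard_normal((p, r)))          # "true" subspace
V, _ = np.linalg.qr(U + 0.005 * rng.standard_normal((p, r)))  # "initializer"
warm = sin_theta_dist(U, V) <= c0
```

A small perturbation of the true basis stays within the constant-level tolerance, which is the kind of guarantee the warm-start assumption demands of the initializer.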
Unsolved Problem
Under the statistical scaling above, does there exist a randomized algorithm, running in time polynomial in the problem size, that outputs, with high probability, an initializer satisfying this warm-start accuracy guarantee?
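The canonical polynomial-time candidate initializer is a truncated higher-order SVD (HOSVD) of a preliminary estimate: truncated SVDs of the matricizations followed by projection. The numpy sketch below is generic and illustrative, not the paper's procedure; it runs in polynomial time, and the open question is whether any such poly-time method reaches warm-start accuracy at the statistical threshold:

```python
import numpy as np

def mode_k_unfold(T, k):
    """Mode-k matricization: move axis k to the front and flatten."""
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def hosvd_init(T_hat, ranks):
    """Truncated HOSVD: take the top-r_k left singular subspace of each
    mode-k unfolding, then project T_hat onto those subspaces."""
    Us = []
    for k, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(mode_k_unfold(T_hat, k), full_matrices=False)
        Us.append(U[:, :r])
    approx = T_hat
    for k, U in enumerate(Us):
        # Multiply by the projector U U^T along mode k.
        approx = np.moveaxis(
            np.tensordot(U @ U.T, np.moveaxis(approx, k, 0), axes=1), 0, k)
    return approx, Us

# Demo on a noiseless rank-(1,1,1) tensor, where HOSVD is exact.
rng = np.random.default_rng(0)
T = np.einsum('i,j,k->ijk', rng.standard_normal(5),
              rng.standard_normal(6), rng.standard_normal(7))
approx, Us = hosvd_init(T, (1, 1, 1))
```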
§ Discussion
§ Significance & Implications
This is a central bottleneck for achieving statistically optimal inference without oracle/computationally intractable initialization. A positive result would shrink a key statistical-to-computational gap; a negative result would formalize a computational barrier in tensor inference. See Ma & Xia (2024).
§ Known Partial Results
Ma & Xia (2024): the paper proves that, given a sufficiently accurate initializer, debiasing followed by one-step power iteration yields asymptotic normality with optimal variance. It also exhibits a computationally intractable constrained least-squares initializer that achieves the warm start under a minimal sample size (with data splitting), and gives polynomial-time guarantees only under stronger computational assumptions.
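The refinement the paper builds on is, generically, a HOOI-style power-iteration sweep over the mode-$k$ subspaces. The sketch below shows only that generic sweep (the paper's debiasing correction is not reproduced; all names and dimensions are illustrative):

```python
import numpy as np

def one_step_power_iteration(T_hat, Us):
    """One HOOI-style sweep: for each mode k, re-estimate the subspace
    from the mode-k unfolding of T_hat after contracting every other
    mode with its current subspace estimate Us[j]."""
    m = T_hat.ndim
    Us_new = []
    for k in range(m):
        proj = T_hat
        for j in range(m):
            if j == k:
                continue
            # Contract mode j with Us[j]^T (reduces that mode to r_j dims).
            proj = np.moveaxis(
                np.tensordot(Us[j].T, np.moveaxis(proj, j, 0), axes=1), 0, j)
        Mk = np.moveaxis(proj, k, 0).reshape(proj.shape[k], -1)
        U, _, _ = np.linalg.svd(Mk, full_matrices=False)
        Us_new.append(U[:, :Us[k].shape[1]])
    return Us_new

# Demo: noiseless rank-1 tensor, slightly perturbed initial subspaces;
# one sweep then recovers each true direction exactly.
rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal(p) for p in (5, 6, 7))
T = np.einsum('i,j,k->ijk', a, b, c)
Us0 = [np.linalg.qr(v[:, None] + 0.1 * rng.standard_normal((v.size, 1)))[0]
       for v in (a, b, c)]
Us1 = one_step_power_iteration(T, Us0)
```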
§ References
Wanteng Ma & Dong Xia (2024)
Annals of Statistics (definitive in-press listing under Future Papers); preprint version is arXiv:2410.11225v2 (updated Nov 1, 2024).
📍 Section 7.1, discussion linking Condition (30) to initialization of (9).