Non-asymptotic AMP distributional theory beyond polynomially many iterations
Sourced from the work of Gen Li, Yuting Wei
§ Problem Statement
Setup
Fix integers $n, p \ge 1$ and let $A \in \mathbb{R}^{n \times p}$ have independent entries $\mathcal{N}(0, 1/n)$. Let $\delta := n/p$. Consider two high-dimensional regression models with unknown signal $\beta^\star \in \mathbb{R}^p$ and response $y \in \mathbb{R}^n$.
This setup follows Li & Wei (2024).
In the sparse linear model, $y = A\beta^\star + \varepsilon$, where $\beta^\star$ has independent mean-zero sub-Gaussian coordinates (often Gaussian). One AMP convention is
$$\beta^{t+1} = \eta_{t+1}\big(\beta^t + A^\top z^t\big), \qquad z^t = y - A\beta^t + \frac{1}{\delta}\,\big\langle \eta_t'\big(\beta^{t-1} + A^\top z^{t-1}\big)\big\rangle\, z^{t-1},$$
with $\delta = n/p$ (typically $\delta = \Theta(1)$), initialization $\beta^0 = 0$, $z^0 = y$, and coordinatewise denoiser $\eta_t \colon \mathbb{R} \to \mathbb{R}$ (e.g., soft thresholding). In this convention, the Onsager term uses the previous denoiser derivative,
$\langle \eta_t'(\beta^{t-1} + A^\top z^{t-1})\rangle$ (with $\langle v \rangle := p^{-1}\sum_{j=1}^{p} v_j$), while equivalent index-shifted conventions also appear in the literature.
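As a concrete illustration, the convention above can be sketched in a few lines of NumPy with a fixed soft-thresholding denoiser. The threshold `lam`, horizon `T`, and initialization are illustrative choices for this sketch, not the paper's construction:

```python
import numpy as np

def soft_threshold(x, lam):
    """Coordinatewise soft-thresholding denoiser: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def amp_sparse(A, y, lam, T):
    """AMP sketch for y = A beta* + eps:
        beta^{t+1} = eta(beta^t + A^T z^t),
        z^{t+1}    = y - A beta^{t+1} + (1/delta) <eta'> z^t,
    with delta = n/p, beta^0 = 0, z^0 = y, and <.> the average (over the p
    coordinates) of the denoiser derivative at the previous input."""
    n, p = A.shape
    delta = n / p
    beta = np.zeros(p)
    z = y.copy()                                 # z^0 = y since beta^0 = 0
    for _ in range(T):
        s = beta + A.T @ z                       # effective observation
        beta = soft_threshold(s, lam)            # denoising step
        onsager = (np.abs(s) > lam).mean() / delta   # (1/delta) <eta'(s)>
        z = y - A @ beta + onsager * z           # residual + Onsager correction
    return beta
```

Under the state-evolution heuristic, the effective observation $\beta^t + A^\top z^t$ behaves like $\beta^\star$ plus i.i.d. Gaussian noise, which is what makes a coordinatewise denoiser the natural update.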
In robust linear regression, $y = A\beta^\star + \varepsilon$ with possibly heavy-tailed noise, and one estimates $\beta^\star$ by an $M$-estimator based on a convex loss $\rho$ with score $\psi = \rho'$. A corresponding AMP scheme has the form
$$\beta^{t+1} = \beta^t + b_t\, A^\top \Psi_t(z^t), \qquad z^t = y - A\beta^t + \frac{b_{t-1}}{\delta}\, \Psi_{t-1}(z^{t-1}),$$
where $\Psi_t$ are iteration-dependent Lipschitz score maps (possibly regularized/proximal versions of $\psi$), $b_t$ is a deterministic scaling (calibrated through the average score derivative $\langle \Psi_t' \rangle$), and the last term in the $z^t$ update is the Onsager correction determined by the average divergence of the nonlinear update.
Existing non-asymptotic results in this sparse/robust regression AMP framework establish finite-sample Gaussian/state-evolution approximation guarantees up to polynomial-length horizons under stated assumptions (model class, moments, and regularity of nonlinearities), with explicit high-probability error bounds depending on the dimensions and iteration count $(n, p, t)$. These results should not be interpreted as uniform guarantees for arbitrary pseudo-Lipschitz observables at all polynomial horizons without the paper's explicit conditions.
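For orientation, the scalar state-evolution recursion that these finite-sample guarantees approximate takes, in the sparse linear model, the standard textbook form (stated here for Gaussian noise with variance $\sigma^2$; this is the usual normalization, not a claim about the paper's exact constants):

```latex
% Effective-noise recursion for sparse-regression AMP (standard form);
% beta is drawn from the prior of beta*, Z ~ N(0,1) independent of beta.
\tau_{t+1}^{2} \;=\; \sigma^{2} \;+\; \frac{1}{\delta}\,
  \mathbb{E}\!\left[\bigl(\eta_t(\beta + \tau_t Z) - \beta\bigr)^{2}\right],
\qquad \delta = \frac{n}{p}.
```

The distributional guarantee then quantifies, at finite $(n, p, t)$, how closely the effective observation $\beta^t + A^\top z^t$ resembles $\beta^\star + \tau_t g$ with $g \sim \mathcal{N}(0, I_p)$, as measured by pseudo-Lipschitz test functions.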
Unsolved Problem
Under explicit regularity assumptions on the signal/noise distributions and AMP nonlinearities (e.g., bounded moments, Lipschitz derivatives, well-posed Onsager terms), prove analogous finite-sample non-asymptotic distributional guarantees for horizons beyond polynomially many iterations, ideally up to the data-dependent algorithmic convergence time. The goal is explicit, non-vacuous error bounds and tail probabilities that remain controlled uniformly over such longer horizons, for both sparse-regression AMP and robust-regression AMP.
§ Discussion
§ Significance & Implications
Many inferential and optimization questions depend on late-iteration behavior, especially near convergence. Extending guarantees beyond currently proved polynomial-length regimes in this setting would narrow the gap between finite-time theory and practical AMP usage, and sharpen distributional understanding of downstream estimators. See Li & Wei (2024) for details.
§ Known Partial Results
Li & Wei (2024): In the sparse/robust regression AMP context, earlier analyses typically covered only relatively short iteration windows (e.g., $t = o(\log n / \log\log n)$ in certain settings). This paper provides a finite-sample non-asymptotic distributional theory for sparse- and robust-regression AMP over polynomially many iterations under its stated assumptions.
§ References
Gen Li, Yuting Wei (2024)
Annals of Statistics (accepted; to appear)
📍 Section 4 (Discussion), bullet point on guarantees for polynomially many iterations and the challenge of extending beyond that regime (p. 18 in arXiv version).
Source paper where this problem appears.