Log-factor-free adaptive contraction on general Minkowski-dimensional domains
Sourced from the work of Tao Tang, Nan Wu, Xiuyuan Cheng, and David Dunson
§ Problem Statement
Setup
Let $D \ge 1$, let $\mathcal{X} \subset \mathbb{R}^D$ be compact, and let $N(\mathcal{X}, \epsilon)$ be the covering number of $\mathcal{X}$ by Euclidean balls of radius $\epsilon$. Work under an intrinsic-dimension upper-complexity condition of the form
$$N(\mathcal{X}, \epsilon) \le C\, \epsilon^{-d}, \qquad 0 < \epsilon \le 1,$$
for some $d > 0$ and $C > 0$, together with the regression model
$$y_i = f_0(x_i) + \varepsilon_i, \qquad \varepsilon_i \overset{\mathrm{iid}}{\sim} N(0, \sigma^2), \qquad i = 1, \dots, n,$$
where the design $\{x_i\}$ is supported on $\mathcal{X}$.
This setup follows Tang et al. (2024).
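As a concrete instance of this setup (illustrative, not taken from the paper), the sketch below draws a design supported on a curve of intrinsic dimension $d = 1$ isometrically embedded in a higher-dimensional ambient space, then generates responses from the regression model; the specific embedding, $f_0$, and parameter values are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ambient dimension D is moderately large, but the support X is a circle
# (intrinsic dimension d = 1) isometrically embedded in R^D.
D, n, sigma = 10, 200, 0.1

t = rng.uniform(0.0, 2.0 * np.pi, size=n)          # intrinsic coordinate
circle = np.stack([np.cos(t), np.sin(t)], axis=1)  # points on S^1 in R^2

# Random orthonormal embedding R^2 -> R^D (Q has orthonormal columns), so
# Euclidean distances -- and hence covering numbers -- are preserved.
Q, _ = np.linalg.qr(rng.standard_normal((D, 2)))
X = circle @ Q.T                                   # design points in R^D

# Regression model y_i = f_0(x_i) + eps_i, with a smooth (hypothetical) f_0
# defined through the intrinsic coordinate.
f0 = np.sin(3.0 * t)
y = f0 + sigma * rng.standard_normal(n)
```

Because the embedding is isometric, any procedure that sees only the ambient points `X` faces a problem whose effective complexity is governed by $d = 1$, not by $D = 10$.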
What the source proves (self-contained, with notation aligned to the paper):
- Prior class (geometry-agnostic GP): use a centered GP on $\mathcal{X}$ with squared-exponential kernel of the form
$$K_\theta(x, x') = \exp\big(-\theta^2 \|x - x'\|^2\big), \qquad x, x' \in \mathcal{X},$$
and a hierarchical/empirical-Bayes prior on the bandwidth parameter $\theta$. "Geometry-agnostic" means the prior construction does not take the intrinsic dimension as input and does not use manifold charts/atlases or Laplace--Beltrami coordinates; it is built from the ambient Euclidean data geometry alone.
- General assumptions used for the adaptive-rate theorem: (A1) covering-number complexity: $N(\mathcal{X}, \epsilon) \le C_1\, \epsilon^{-d}$ for some $d > 0$, some $C_1 > 0$, and all $\epsilon \in (0, 1]$,
(A2) RKHS approximation: for some $s > 0$ and constants $C_2, C_3 > 0$, for every bandwidth $\theta$ large enough there exists an element $f_\theta$ of the RKHS $\mathbb{H}_\theta$ of $K_\theta$ such that
$$\|f_\theta - f_0\|_{L^\infty(\mathcal{X})} \le C_2\, \theta^{-s},$$
with $\|f_\theta\|_{\mathbb{H}_\theta}$ controlled by a polynomial in $\theta$ (Assumption (A2) of the paper makes the exact exponents explicit),
(A3) prior-on-bandwidth condition: the prior on $\theta$ places enough mass near the optimal bandwidth scale and sufficiently little mass at much smaller scales (formal two-sided exponential-tail inequalities in Assumption (A3) of the paper).
- Posterior contraction definition used in the source: for a semimetric $\rho$, a sequence $\epsilon_n \to 0$ is a posterior contraction rate if, for every sufficiently large constant $M > 0$,
$$\Pi_n\big(f : \rho(f, f_0) > M \epsilon_n \,\big|\, y_1, \dots, y_n\big) \longrightarrow 0$$
in $P_{f_0}$-probability.
- Main adaptive-rate conclusion under (A1)--(A3): in fixed design, the paper proves contraction at a rate of the form
$$\epsilon_n = n^{-s/(2s+d)} (\log n)^{\kappa} \qquad \text{for some constant } \kappa > 0,$$
and in random design (with bounded-signal truncation in the theorem statement) obtains the same polynomial exponent in $n$, again up to logarithmic factors.
- Meaning of "minimax power" and "adaptation": the polynomial exponent is $s/(2s+d)$ (equivalently $\beta/(2\beta+d)$ when the smoothness is written as $\beta$ and the intrinsic dimension as $d$), which is the standard nonparametric minimax exponent for $d$-dimensional smoothness-$s$ regression classes; "adaptive" means one prior construction attains this exponent across the paper's class of unknown intrinsic dimensions $d$ and smoothness levels $s$, with only logarithmic overhead.
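A geometry-agnostic GP fit of the kind described above can be sketched as follows. Only ambient Euclidean distances enter the kernel, and the bandwidth $\theta$ is chosen by empirical Bayes over a grid, a simplified stand-in for the paper's hierarchical prior; the grid, noise level, and helper names are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def se_kernel(X1, X2, theta):
    """Squared-exponential kernel K_theta(x, x') = exp(-theta^2 ||x - x'||^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-theta ** 2 * d2)

def log_marginal_likelihood(X, y, theta, sigma2):
    """Gaussian log marginal likelihood of y under the GP prior with kernel K_theta."""
    n = len(y)
    K = se_kernel(X, X, theta) + sigma2 * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2.0 * np.pi))

def fit_gp(X, y, sigma2=0.01, thetas=np.geomspace(0.1, 30.0, 40)):
    """Empirical-Bayes bandwidth selection over a grid; note that only ambient
    Euclidean distances are used -- the intrinsic dimension d never enters."""
    best = max(thetas, key=lambda th: log_marginal_likelihood(X, y, th, sigma2))
    K = se_kernel(X, X, best) + sigma2 * np.eye(len(y))
    weights = np.linalg.solve(K, y)
    posterior_mean = lambda Xnew: se_kernel(Xnew, X, best) @ weights
    return best, posterior_mean
```

The adaptation mechanism in the theory is of this flavor: the data drive the bandwidth toward the scale that balances the approximation error of (A2) against the complexity in (A1), without the intrinsic dimension ever being supplied.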
Unsolved Problem
Determine whether one can remove the logarithmic loss and prove, uniformly over broad classes of pairs $(\mathcal{X}, f_0)$ satisfying the source-type assumptions,
$$\Pi_n\big(f : \rho(f, f_0) > M\, n^{-s/(2s+d)} \,\big|\, y_1, \dots, y_n\big) \longrightarrow 0$$
(in expectation under $P_{f_0}$) for all sufficiently large $M$, with the rate simultaneously adaptive to the unknown intrinsic dimension $d$ and the unknown smoothness $s$.
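To make the size of the gap concrete, the sketch below (with illustrative parameter values, not taken from the paper) compares the proven rate $n^{-s/(2s+d)}(\log n)^{\kappa}$ against the conjectured log-free rate $n^{-s/(2s+d)}$:

```python
import math

def minimax_exponent(s, d):
    """Standard nonparametric exponent s / (2s + d) for smoothness s and
    intrinsic dimension d; the ambient dimension D does not enter."""
    return s / (2.0 * s + d)

def rate(n, s, d, kappa=0.0):
    """Contraction rate n^{-s/(2s+d)} (log n)^kappa.  kappa = 0 is the
    conjectured sharp rate; kappa > 0 is the proven adaptive rate."""
    return n ** (-minimax_exponent(s, d)) * math.log(n) ** kappa

# Example: s = 2, d = 2 gives exponent 1/3; at n = 10^6 the multiplicative
# overhead of a single log factor (kappa = 1) is log(10^6) ~ 13.8.
overhead = rate(10**6, 2, 2, kappa=1.0) / rate(10**6, 2, 2)
```

Because $(\log n)^{\kappa} \to \infty$, the overhead is asymptotically unbounded even though the polynomial exponent is already minimax, which is exactly why removing it is a meaningful sharpening rather than a cosmetic one.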
§ Discussion
§ Significance & Implications
Tang et al. (2024) identify logarithmic factors as the remaining gap in the general low-intrinsic-dimensional setting. Closing this gap would determine whether fully geometry-agnostic GP priors attain the sharp adaptive minimax rate on general supports satisfying covering-number upper bounds.
§ Known Partial Results
Tang et al. (2024): The paper proves adaptive posterior contraction for low-intrinsic-dimensional supports at minimax-power rates up to logarithmic factors under covering-number upper-complexity assumptions. For the manifold setting treated in the paper (with its regularity assumptions), results are also stated up to logarithmic factors.
§ References
Adaptive Bayesian regression on data with low intrinsic dimensionality
Tao Tang, Nan Wu, Xiuyuan Cheng, David Dunson (2024)
Annals of Statistics (to appear)
Location: arXiv v3, Section 6 (Discussion), first paragraph (RKHS approximation/covering-number dependence for general low-dimensional supports), p. 14.
Primary source. Author order follows arXiv v3 metadata; year follows the original arXiv posting year (2024), while arXiv_id points to revision v3.