Unsolved

Extension beyond convex differentiable-loss framework

Sourced from the work of Takuya Koriyama, Pratik Patil, Jin-Hong Du, Kai Tan, Pierre C. Bellec

§ Problem Statement

Setup

Let $(x_i,y_i)_{i=1}^n$ be training data with $x_i\in\mathbb R^p$ and $y_i\in\mathbb R$, and let $p=p_n$ and the subsample size $k=k_n$ satisfy the proportional asymptotics $p/n\to\delta\in(0,\infty)$ and $k/n\to\kappa\in(0,1]$. For each subsample index set $S\subseteq[n]$ with $|S|=k$, define the regularized empirical M-estimator

$$\widehat\beta_S\in\arg\min_{\beta\in\mathbb R^p}\left\{\frac1k\sum_{i\in S}\ell\big(y_i,x_i^\top\beta\big)+\lambda\,\rho(\beta)\right\},$$

where $\ell$ is a loss function, $\rho$ is a regularizer, and $\lambda>0$ is a tuning parameter. The infinite-subagged estimator is

$$\overline\beta_n:=\mathbb E_S\!\left[\widehat\beta_S\mid (x_i,y_i)_{i=1}^n\right],$$

with the expectation taken over index sets $S$ drawn uniformly at random from the size-$k$ subsets of $[n]$.

This setup follows Koriyama et al. (2025).
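The setup can be sketched numerically in a special case. Below, a minimal Monte Carlo approximation of the infinite-subagged estimator, assuming squared loss and a ridge penalty so that each $\widehat\beta_S$ has a closed form; all function names and parameter values are illustrative, not from the source.

```python
import numpy as np

def beta_hat_ridge(X, y, lam):
    # Ridge M-estimator on one subsample: squared loss + (lam/2)||beta||^2,
    # a differentiable, strongly convex special case of the general framework.
    k, p = X.shape
    return np.linalg.solve(X.T @ X / k + lam * np.eye(p), X.T @ y / k)

def subagged_estimator(X, y, k, lam, n_subsamples, rng):
    # Monte Carlo approximation of bar{beta}_n = E_S[beta_hat_S | data],
    # with S uniform over the size-k subsets of [n].
    n, p = X.shape
    total = np.zeros(p)
    for _ in range(n_subsamples):
        S = rng.choice(n, size=k, replace=False)
        total += beta_hat_ridge(X[S], y[S], lam)
    return total / n_subsamples

# Proportional regime: p/n = delta, k/n = kappa.
rng = np.random.default_rng(0)
n, delta, kappa = 600, 0.5, 0.5
p, k = int(delta * n), int(kappa * n)
beta_star = rng.standard_normal(p) / np.sqrt(p)
X = rng.standard_normal((n, p))
y = X @ beta_star + rng.standard_normal(n)
bar_beta = subagged_estimator(X, y, k, lam=0.1, n_subsamples=50, rng=rng)
print(bar_beta.shape)  # (300,)
```

Increasing `n_subsamples` drives the Monte Carlo average toward the conditional expectation defining $\overline\beta_n$.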

In the source paper's baseline regime (Assumption A in Koriyama et al. (2025)), the loss is convex and differentiable and the regularizer is convex, with separable structure assumed in the main development. Under the full assumption set, the main asymptotic results characterize deterministic limits for the overlap (order-parameter) quantities of independent subsample estimators, yield induced risk formulas for the subagged estimator, and provide a risk-estimation theorem under an additional, stronger regularity condition.

Unsolved Problem

Extend this asymptotic framework beyond the convex, differentiable-loss setting while keeping the same high-dimensional subagging regime. In particular, determine precise conditions under which analogues of the deterministic-limit and risk-estimation results remain valid when: (1) the loss is convex but non-differentiable (the paper notes Moreau smoothing as a possible route); (2) the strong-convexity-type assumption on the regularizer used for Theorem 5 is relaxed (the paper notes Gaussian smoothing as a possible route); and (3) the regularizer is non-separable.
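Item (1) can be made concrete on the absolute loss: its Moreau envelope with parameter $\mu$ is the Huber function, a convex and differentiable surrogate whose gradient is $1/\mu$-Lipschitz. The sketch below is the standard Moreau-envelope/proximal-operator construction (function names are illustrative, not code from the source).

```python
import numpy as np

def moreau_envelope_abs(x, mu):
    # Moreau envelope of f(u) = |u|:
    #   env_mu(x) = min_u { |u| + (x - u)^2 / (2*mu) },
    # which evaluates in closed form to the Huber function.
    return np.where(np.abs(x) <= mu, x**2 / (2 * mu), np.abs(x) - mu / 2)

def prox_abs(x, mu):
    # Proximal operator of |.| (soft-thresholding). The envelope's gradient
    # is (x - prox(x)) / mu, so the smoothed loss is C^1 with bounded slope.
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

mu = 0.5
x = np.linspace(-2, 2, 9)
env = moreau_envelope_abs(x, mu)          # quadratic near 0, linear in the tails
grad = (x - prox_abs(x, mu)) / mu         # gradient, clipped to [-1, 1]
```

As $\mu\to 0$ the envelope converges to the original non-differentiable loss, which is the mechanism behind the smoothing route suggested in the paper.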


§ Significance & Implications

This is a direct open-direction item in Section 6 of Koriyama et al., focused on broadening the applicability of their asymptotic subagging framework beyond the currently analyzed smooth-loss setting.

§ Known Partial Results

  • Koriyama et al. (2025): The cited source explicitly presents this as an open direction in Section 6 and does not provide a completed theorem for the extension there. Whether the problem has been resolved elsewhere was not fully verified.

§ References

[1] Takuya Koriyama, Pratik Patil, Jin-Hong Du, Kai Tan, and Pierre C. Bellec (2025). Precise Asymptotics of Bagging Regularized M-estimators. Annals of Statistics (to appear).

📍 Section 6 ("Extensions and open directions"), open-direction bullet "Assumptions on the loss and reg functions" (p. 28, arXiv v3)

Primary source where this open direction is explicitly listed.

§ Tags