Extension beyond convex differentiable-loss framework
Sourced from the work of Takuya Koriyama, Pratik Patil, Jin-Hong Du, Kai Tan, Pierre C. Bellec
§ Problem Statement
Setup
Let $(x_i, y_i)_{i=1}^n$ be training data with $x_i \in \mathbb{R}^p$ and $y_i \in \mathbb{R}$, and let $n$, $p$, and subsample size $k$ satisfy the proportional asymptotics $p/n \to \gamma \in (0, \infty)$ and $k/n \to c \in (0, 1]$. For each subsample index set $I \subseteq \{1, \dots, n\}$ with $|I| = k$, define the regularized empirical $M$-estimator
$$\hat\beta(I) \in \operatorname*{arg\,min}_{b \in \mathbb{R}^p} \ \frac{1}{|I|} \sum_{i \in I} \ell(y_i, x_i^\top b) + \lambda g(b),$$
where $\ell$ is a loss function, $g$ is a regularizer, and $\lambda > 0$ is a tuning parameter. The infinite-subagged estimator is
$$\bar\beta_k = \mathbb{E}_I\bigl[\hat\beta(I)\bigr],$$
with the expectation taken over index sets $I$ drawn uniformly from the subsets of $\{1, \dots, n\}$ of size $k$.
This setup follows Koriyama et al. (2025).
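As a purely illustrative instance of this setup (not a computation from the paper), the sketch below approximates the infinite-subagged estimator by averaging ridge estimators, i.e. squared loss with a ridge penalty, which sits inside the convex differentiable baseline regime, over finitely many uniformly drawn subsamples. All dimensions, the tuning parameter, and the number of subsamples are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, lam, B = 300, 150, 200, 1.0, 50  # proportional regime: p/n, k/n held fixed

# Synthetic linear-model data (an assumption for illustration only).
beta_star = rng.normal(size=p) / np.sqrt(p)
X = rng.normal(size=(n, p))
y = X @ beta_star + rng.normal(size=n)

def ridge_on_subsample(I):
    """Regularized M-estimator on index set I: squared loss + ridge penalty."""
    XI, yI = X[I], y[I]
    return np.linalg.solve(XI.T @ XI / len(I) + lam * np.eye(p),
                           XI.T @ yI / len(I))

# Average over B uniform size-k subsamples: a finite-B Monte Carlo proxy for
# the expectation defining the infinite-subagged estimator.
subsamples = [rng.choice(n, size=k, replace=False) for _ in range(B)]
beta_bag = np.mean([ridge_on_subsample(I) for I in subsamples], axis=0)

risk_single = np.sum((ridge_on_subsample(subsamples[0]) - beta_star) ** 2)
risk_bag = np.sum((beta_bag - beta_star) ** 2)
print(f"single-subsample risk {risk_single:.3f}, subagged risk {risk_bag:.3f}")
```

Averaging leaves the (common) bias of the subsample estimators unchanged but shrinks their variance, so the subagged risk typically sits below the single-subsample risk.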
In the baseline regime analyzed in the source (Assumption A of Koriyama et al. (2025)), the loss is convex and differentiable and the regularizer is convex, with separable structure assumed in the main development. Under the full assumption set, the main asymptotic results characterize deterministic limits for the overlap/order-parameter quantities of independent subsample estimators, derive the induced risk formulas for the subagged estimator, and establish a risk-estimation theorem under an additional, stronger regularity condition.
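To see what an overlap/order-parameter statement asserts, one can watch such a quantity concentrate empirically. The sketch below (an illustration assuming a ridge instance, not a result from the paper) measures the cosine similarity between estimators fitted on independently drawn size-$k$ subsamples and observes that it barely fluctuates across draws; the deterministic-limit results describe the limiting value of quantities of this kind.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k, lam = 400, 200, 240, 0.5  # arbitrary proportional-regime constants

beta_star = rng.normal(size=p) / np.sqrt(p)
X = rng.normal(size=(n, p))
y = X @ beta_star + rng.normal(size=n)

def ridge(I):
    """Ridge estimator on index set I (closed form, for determinism)."""
    XI, yI = X[I], y[I]
    return np.linalg.solve(XI.T @ XI / k + lam * np.eye(p), XI.T @ yI / k)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Overlap statistic for several independent pairs of uniform subsamples.
overlaps = np.array([
    cosine(ridge(rng.choice(n, size=k, replace=False)),
           ridge(rng.choice(n, size=k, replace=False)))
    for _ in range(20)
])
print(f"mean overlap {overlaps.mean():.4f}, std {overlaps.std():.4f}")
```

The small standard deviation across draws is the empirical face of concentration; the theorems pin down the deterministic value around which such statistics concentrate.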
Unsolved Problem
Extend this asymptotic framework beyond the convex differentiable-loss setting while keeping the same high-dimensional subagging regime. In particular, determine precise conditions under which analogues of the main deterministic-limit and risk-estimation results remain valid when: (1) the loss is convex but non-differentiable (the paper notes Moreau smoothing as a possible route), (2) the strong-convexity-type assumption on the regularizer used for Theorem 5 is relaxed (the paper notes Gaussian smoothing as a possible route), and (3) the regularizer is non-separable.
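The Moreau-smoothing route mentioned in item (1) replaces a non-differentiable convex loss $\ell$ by its Moreau envelope $\ell_\tau(u) = \min_v \{\ell(v) + (u - v)^2 / (2\tau)\}$, which is convex with Lipschitz gradient. For the absolute-value loss this envelope is exactly the Huber loss; the short check below verifies that identity numerically (the grid resolution and the value of $\tau$ are arbitrary choices).

```python
import numpy as np

tau = 0.7  # Moreau smoothing parameter (arbitrary)

def moreau_abs_numeric(u, grid=np.linspace(-10, 10, 400001)):
    """Brute-force Moreau envelope of ell(v) = |v| over a fine grid of v."""
    return np.min(np.abs(grid) + (u - grid) ** 2 / (2 * tau))

def huber(u):
    """Closed-form Moreau envelope of the absolute value (Huber loss)."""
    return u ** 2 / (2 * tau) if abs(u) <= tau else abs(u) - tau / 2

# Compare the two at a few points spanning both branches of the Huber formula.
errs = [abs(moreau_abs_numeric(u) - huber(u)) for u in (-3.0, -0.5, 0.0, 0.2, 1.5)]
print(f"max discrepancy {max(errs):.2e}")
```

The open question is whether the deterministic-limit and risk-estimation results survive the passage $\tau \to 0$, i.e. whether smoothing the loss (or, per item (2), Gaussian-smoothing the regularizer) yields analogues of the theorems for the original non-smooth objective.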
§ Discussion
§ Significance & Implications
This is a direct open-direction item in Section 6 of Koriyama et al., focused on broadening the applicability of their asymptotic subagging framework beyond the currently analyzed smooth-loss setting.
§ Known Partial Results
Koriyama et al. (2025): the cited source explicitly presents this as an open direction in Section 6 and does not provide a completed theorem for the extension there. Whether the problem has been resolved elsewhere was not fully verified.
§ References
Precise Asymptotics of Bagging Regularized M-estimators
Takuya Koriyama, Pratik Patil, Jin-Hong Du, Kai Tan, Pierre C. Bellec (2025)
Annals of Statistics (to appear)
📍 Section 6 ("Extensions and open directions"), open-direction bullet "Assumptions on the loss and reg functions" (p. 28, arXiv v3)
Primary source where this open direction is explicitly listed.