Full minimax characterization for tensor normal factor estimation beyond the largest factor
Sourced from the work of Rafael Mendes de Oliveira, William Cole Franks, Akshay Ramachandran, Michael Walter
§ Problem Statement
Setup
Let $k \ge 2$ and dimensions $d_1, \dots, d_k \ge 2$. For each sample, observe a random tensor $X \in \mathbb{R}^{d_1 \times \cdots \times d_k}$, and write $\mathrm{vec}(X) \in \mathbb{R}^d$ with $d = d_1 \cdots d_k$. Assume
$$\mathrm{vec}(X) \sim \mathcal{N}(0, \Sigma), \qquad \Sigma = \Sigma_1 \otimes \Sigma_2 \otimes \cdots \otimes \Sigma_k, \qquad \Sigma_i \in \mathrm{PD}(d_i)$$
for $i = 1, \dots, k$, where $\mathrm{PD}(d_i)$ is the cone of real symmetric positive-definite $d_i \times d_i$ matrices and $\otimes$ is the Kronecker product.
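As a concrete illustration of the model above, the following sketch draws one sample with $k = 3$ and small illustrative dimensions. It uses the identity that the Kronecker product of lower-triangular Cholesky factors is a Cholesky factor of the Kronecker product, so the full $d \times d$ covariance is assembled only through its triangular factor. The helper names and dimensions are not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(d, rng):
    """Draw a random symmetric positive-definite d x d matrix."""
    A = rng.standard_normal((d, 2 * d))
    return A @ A.T / (2 * d)

def sample_tensor_normal(factors, rng):
    """Sample X with vec(X) ~ N(0, Sigma_1 ⊗ ... ⊗ Sigma_k).

    Uses chol(A ⊗ B) = chol(A) ⊗ chol(B): the Kronecker product of the
    lower-triangular Cholesky factors is lower triangular and squares
    to the Kronecker product of the covariances.
    """
    L = np.array([[1.0]])
    for S in factors:
        L = np.kron(L, np.linalg.cholesky(S))
    z = rng.standard_normal(L.shape[0])
    dims = tuple(S.shape[0] for S in factors)
    # row-major (C-order) reshape matches the Kronecker index ordering
    return (L @ z).reshape(dims)

dims = (2, 3, 4)
factors = [random_spd(d_i, rng) for d_i in dims]
X = sample_tensor_normal(factors, rng)
print(X.shape)  # (2, 3, 4)
```

Sampling through the triangular factor costs $O(d^2)$ per sample instead of the $O(d^3)$ of a Cholesky decomposition of the assembled $d \times d$ covariance.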
Because the factorization is not unique under rescalings $\Sigma_i \mapsto c_i \Sigma_i$ with $c_i > 0$ and $\prod_{i=1}^k c_i = 1$, define identifiable factor parameters by the shape normalization
$$\bar\Sigma_i = \frac{\Sigma_i}{\det(\Sigma_i)^{1/d_i}}, \qquad i = 1, \dots, k.$$
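The unit-determinant normalization and its invariance under rescaling can be sanity-checked numerically; a minimal sketch (the helper name `shape_normalize` is illustrative):

```python
import numpy as np

def shape_normalize(S):
    """Rescale an SPD matrix to unit determinant: S / det(S)^(1/d)."""
    d = S.shape[0]
    # slogdet is numerically safer than det for larger matrices
    sign, logdet = np.linalg.slogdet(S)
    assert sign > 0, "matrix must be positive definite"
    return S * np.exp(-logdet / d)

S = np.array([[4.0, 1.0],
              [1.0, 3.0]])
S_bar = shape_normalize(S)
print(round(np.linalg.det(S_bar), 6))  # 1.0
```

Since `shape_normalize(c * S) == shape_normalize(S)` for any $c > 0$, the normalized factors $\bar\Sigma_i$ are well defined even though the $\Sigma_i$ themselves are not.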
For estimation error, use the affine-invariant Fisher-Rao distance
$$d_{\mathrm{FR}}(A, B) = \bigl\| \log\bigl(A^{-1/2} B A^{-1/2}\bigr) \bigr\|_F,$$
and optionally the Thompson/log-spectral form
$$d_{\infty}(A, B) = \bigl\| \log\bigl(A^{-1/2} B A^{-1/2}\bigr) \bigr\|_{\mathrm{op}}.$$
These two metrics are comparable but not dimension-free equivalent: for $m \times m$ SPD matrices,
$$d_{\infty}(A, B) \le d_{\mathrm{FR}}(A, B) \le \sqrt{m}\, d_{\infty}(A, B).$$
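Both distances are functions of the eigenvalues of $A^{-1/2} B A^{-1/2}$, so they can be computed from symmetric eigendecompositions alone. A sketch in numpy (function names illustrative), which also checks the comparability inequality on a random SPD pair:

```python
import numpy as np

def spd_log_eigs(A, B):
    """Log-eigenvalues of A^{-1/2} B A^{-1/2} for SPD A, B."""
    w, V = np.linalg.eigh(A)
    A_inv_half = (V * w ** -0.5) @ V.T   # V diag(w^{-1/2}) V^T
    C = A_inv_half @ B @ A_inv_half
    return np.log(np.linalg.eigvalsh(C))

def d_fr(A, B):
    """Affine-invariant Fisher-Rao distance: Frobenius norm of the log."""
    return np.linalg.norm(spd_log_eigs(A, B))

def d_inf(A, B):
    """Thompson / log-spectral distance: operator norm of the log."""
    return np.max(np.abs(spd_log_eigs(A, B)))

rng = np.random.default_rng(1)
m = 5
M1 = rng.standard_normal((m, m)); A = M1 @ M1.T + m * np.eye(m)
M2 = rng.standard_normal((m, m)); B = M2 @ M2.T + m * np.eye(m)

# comparability: d_inf <= d_FR <= sqrt(m) * d_inf
print(d_inf(A, B) <= d_fr(A, B) <= np.sqrt(m) * d_inf(A, B))  # True
```

The $\sqrt{m}$ gap between the two bounds is exactly why a minimax theory in one metric does not automatically transfer to the other at constant-factor precision.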
Define minimax risks
$$\mathcal{R}_i(n) = \inf_{\hat\Sigma_i} \sup_{\Sigma_1, \dots, \Sigma_k} \mathbb{E}\, d_{\mathrm{FR}}\bigl(\hat\Sigma_i, \bar\Sigma_i\bigr), \qquad \mathcal{R}(n) = \inf_{\hat\Sigma} \sup_{\Sigma_1, \dots, \Sigma_k} \mathbb{E}\, d_{\mathrm{FR}}\bigl(\hat\Sigma, \Sigma\bigr),$$
where infima are over all measurable estimators based on $n$ i.i.d. samples $X^{(1)}, \dots, X^{(n)}$, and the supremum is over all unrestricted positive-definite factors (no bounded condition number, sparsity, or structural constraints unless explicitly imposed).
Unsolved Problem
Determine the exact nonasymptotic minimax rates (up to universal constant factors) for $\mathcal{R}_i(n)$ for every mode $i \in \{1, \dots, k\}$ and for $\mathcal{R}(n)$, uniformly over all sample-size/dimension regimes $(n, d_1, \dots, d_k)$, including regimes where $n$ is too small to guarantee constant Frobenius error. Equivalently, provide matching upper and lower bounds that fully characterize the statistical difficulty of estimating each Kronecker factor (not only the largest one) and the full covariance in the tensor normal model.
§ Discussion
§ Significance & Implications
The tensor case is substantially harder than the matrix case and central in multiway data analysis. A complete minimax theory would specify the true statistical limits for every mode and clarify whether current guarantees are sharp only in restricted regimes or uniformly across regimes. The primary source is the 2021 arXiv preprint (revised in 2025), whose metadata lists it as accepted at the Annals of Statistics (to appear).
§ Known Partial Results
Oliveira et al. (2021): The paper proves nearly optimal guarantees for the tensor normal MLE and establishes constant-factor minimax optimality for the largest factor and for the overall covariance in regimes with enough samples to guarantee constant Frobenius error. It explicitly leaves the full minimax characterization for all tensor factors as an open direction in Section 8.
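For context, the MLE in these models is classically computed by the flip-flop iteration, which alternately updates each factor given the others. Below is a minimal sketch for the matrix case $k = 2$ (Dutilleul-style updates), offered as an illustration of the estimator being analyzed, not the authors' implementation; dimensions, sample size, and iteration count are illustrative.

```python
import numpy as np

def flip_flop(samples, iters=50):
    """Flip-flop iteration for the matrix normal MLE (k = 2 factors).

    samples: array of shape (n, p, q), modeling
    vec(X) ~ N(0, Sigma_1 ⊗ Sigma_2) in row-major vec convention,
    with Sigma_1 (p x p) acting on rows and Sigma_2 (q x q) on columns.
    """
    n, p, q = samples.shape
    S1, S2 = np.eye(p), np.eye(q)
    for _ in range(iters):
        S2_inv = np.linalg.inv(S2)
        S1 = sum(X @ S2_inv @ X.T for X in samples) / (n * q)
        S1_inv = np.linalg.inv(S1)
        S2 = sum(X.T @ S1_inv @ X for X in samples) / (n * p)
    return S1, S2

# smoke test on synthetic matrix normal data: X = L1 Z L2^T gives
# vec(X) ~ N(0, (L1 L1^T) ⊗ (L2 L2^T)) in row-major convention
rng = np.random.default_rng(2)
p, q, n = 3, 4, 200
L1 = np.tril(rng.standard_normal((p, p))) + 2 * np.eye(p)
L2 = np.tril(rng.standard_normal((q, q))) + 2 * np.eye(q)
samples = np.array([L1 @ rng.standard_normal((p, q)) @ L2.T
                    for _ in range(n)])
S1, S2 = flip_flop(samples)
print(S1.shape, S2.shape)  # (3, 3) (4, 4)
```

Because of the scale ambiguity discussed in the problem statement, the iterates recover $(\Sigma_1, \Sigma_2)$ only up to a reciprocal rescaling; comparing shape-normalized factors sidesteps this.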
§ References
Near Optimal Sample Complexity for Matrix and Tensor Normal Models via Geodesic Convexity
Rafael Mendes de Oliveira, William Cole Franks, Akshay Ramachandran, Michael Walter (2021)
arXiv preprint (2021); accepted in Annals of Statistics (to appear)
📍 arXiv:2110.07583v3, Section 8 "Conclusion and open problems," paragraph discussing tensor-normal minimax guarantees beyond the largest factor (the paragraph beginning with the tensor-normal limitation statement in that section).
Primary source containing the open-problem discussion; arXiv metadata (v3, 2025-10-23) states "accepted in Annals of Statistics."