Xiao-Li Meng, Harvard University

Pushing Large-p-Small-n to Infinite-p-Zero-n: A Multi-resolution Theory for Individualized Predictions and for a World Without Bias-Variance Trade-off
Date
Oct 11, 2022, 12:25 pm – 1:25 pm
Location
101 Sherrerd Hall

Event Description

Abstract: The arrival of Big Data has expanded the statistical asymptotic regime from fixed-p-growing-n to growing-p-&-n, where p is the number of model parameters and n is the sample size. But this expansion falls short of establishing a rigorous theoretical foundation for individualized predictions (e.g., for personalized medicine), because unique individuals, such as human beings, correspond to unbounded p. There is also no direct "training sample" because genuine guinea pigs do not exist, which entails zero n. The literature on wavelets and on sieve methods for non-parametric estimation suggests a principled approximation theory for individualized predictions via a multi-resolution (MR) perspective, where the resolution level indexes the degree of approximation to ultimate individuality (Meng, 2014, COPSS 50th Anniversary Volume). MR seeks a primary resolution indexing an indirect training sample, one that provides enough matched attributes to increase the relevance of the results to the target individuals and yet still accumulates a sufficient sample size for robust estimation. Theoretically, MR relies on an infinite-term ANOVA-type decomposition, providing an alternative way to model sparsity via the decay rate of the resolution bias as a function of the primary resolution level. Unexpectedly, this decomposition reveals a world without variance when the outcome is a deterministic function of potentially infinitely many predictors. In this world, the optimal resolution tends to prefer over-fitting in the traditional sense, yet this is not a violation of the bias-variance trade-off principle: without variance, the optimal trade-off has to put all its eggs in the basket of bias. Furthermore, the prediction error curve can exhibit many "descents" when the contributions of the predictors are inhomogeneous and the ordering of their importance does not align with the order of their inclusion in the prediction. These findings may hint at a deterministic approximation theory for understanding the apparently over-fitting-resistant behavior of some over-saturated models in machine learning (Li and Meng, 2021, JASA).
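
To make the decomposition concrete, here is a minimal sketch of the kind of infinite-term ANOVA-type expansion the abstract invokes, reconstructed from standard conditional-expectation arguments rather than quoted from Meng (2014) or Li and Meng (2021); the symbols Y, X_r, and mu_r are our own notation, introduced only for illustration.

```latex
% A sketch of the MR decomposition, assuming a deterministic outcome
% Y = f(X_1, X_2, ...) and conditioning on the first r predictors;
% the notation is ours, for illustration only.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $\mu_r = \mathbb{E}[\,Y \mid X_1,\ldots,X_r\,]$, with $\mu_0 = \mathbb{E}[Y]$.
The telescoping (martingale) expansion
\[
  Y \;=\; \mu_0 \;+\; \sum_{r=1}^{\infty} \bigl(\mu_r - \mu_{r-1}\bigr)
\]
has orthogonal increments, so
\[
  \operatorname{Var}(Y) \;=\; \sum_{r=1}^{\infty}
    \mathbb{E}\bigl[(\mu_r - \mu_{r-1})^2\bigr],
\]
an infinite-term ANOVA-type decomposition. Predicting at primary
resolution $r$ leaves the squared error
\[
  \mathbb{E}\bigl[(Y - \mu_r)^2\bigr]
  \;=\; \sum_{k=r+1}^{\infty} \mathbb{E}\bigl[(\mu_k - \mu_{k-1})^2\bigr],
\]
which is pure \emph{resolution bias}: when $Y$ is deterministic given all
predictors there is no residual variance term, and sparsity can be modeled
through how fast this tail sum decays in $r$.
\end{document}
\]
```

The "many descents" claim can likewise be illustrated with a toy simulation. The setup below is ours, not from the talk: a deterministic outcome whose predictors have inhomogeneous importance, deliberately ordered so that importance does not align with inclusion order.

```python
# Toy illustration (ours, not from the talk): the prediction error at
# resolution r is the tail sum of the skipped contributions, so the
# error curve flattens and drops repeatedly (multiple "descents")
# instead of declining at a single uniform rate.
import numpy as np

rng = np.random.default_rng(0)
P = 30                                 # total number of binary predictors

# Importance weights: a few large contributions scattered among many
# small ones (an assumed pattern, chosen only to make descents visible).
weights = np.full(P, 0.05)
weights[[7, 15, 23]] = 1.0             # big contributors arrive "late"

X = rng.integers(0, 2, size=(100_000, P)).astype(float)
y = X @ weights                        # deterministic outcome: no noise term

errors = []
for r in range(P + 1):
    # The best predictor using only the first r attributes is the
    # conditional mean; for independent Bernoulli(0.5) predictors that is
    # X[:, :r] @ weights[:r] plus the mean of the excluded terms.
    pred = X[:, :r] @ weights[:r] + 0.5 * weights[r:].sum()
    errors.append(np.mean((y - pred) ** 2))

# Steep drops occur at r = 8, 16, and 24, exactly where the big
# contributors (indices 7, 15, 23) enter the model: multiple "descents"
# in a world with resolution bias only and no variance.
for r, e in enumerate(errors):
    print(f"r={r:2d}  squared error={e:.4f}")
```

The drops land precisely where the few large contributors are finally included, which is the misalignment between importance and inclusion order that the abstract describes.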

Short bio: Xiao-Li Meng, the Founding Editor-in-Chief of Harvard Data Science Review and the Whipple V. N. Jones Professor of Statistics at Harvard University, is well known for his depth and breadth in research. Meng was named the best statistician under the age of 40 by the Committee of Presidents of Statistical Societies (COPSS) in 2001, and in 2020 he was elected to the American Academy of Arts and Sciences. Meng received his BS in mathematics from Fudan University in 1982 and his PhD in statistics from Harvard in 1990. He was on the faculty of the University of Chicago from 1991 to 2001 before returning to Harvard, where he served as Chair of the Department of Statistics (2004–2012) and Dean of the Graduate School of Arts and Sciences (2012–2017).

Event Category
S. S. Wilks Memorial Seminar in Statistics