The causal analysis of (observational) data plays a central role in essentially every scientific field. Many recent developments in causal inference, and in functional estimation problems more generally, have been motivated by the fact that classical one-step de-biasing methods, or their more recent sample-split double machine-learning avatars, can outperform plugin estimators under surprisingly weak conditions. However, from a theoretical perspective, our understanding of how to construct these estimators for non-standard functionals, how to assess their optimality, and how to improve them is still far from complete.
I will present two vignettes within this theme. The first part develops minimax theory for estimating the conditional average treatment effect (CATE). Many methods for estimating CATEs have been proposed, but important theoretical gaps remain in understanding if and when such methods are optimal. We close some of these gaps by providing sharp minimax rates for estimating the CATE when the nuisance functions are Hölder smooth, highlighting important differences between estimation of the CATE and its better-studied global counterpart, the average treatment effect.
In the second part, I will focus more broadly on functional estimation problems and develop minimax lower bounds for "structure-agnostic" functional estimation, in order to understand the strengths and limitations of the double machine learning perspective on functional estimation.
This talk will be based on joint work with Edward Kennedy and Larry Wasserman.
Bio: Sivaraman is an Associate Professor with a joint appointment in the Department of Statistics and Data Science and the Machine Learning Department at Carnegie Mellon. Prior to this, he was a postdoctoral researcher at UC Berkeley working with Martin Wainwright and Bin Yu, and before that he was a PhD student in Computer Science at Carnegie Mellon. His research interests are broadly in statistical machine learning and algorithmic statistics. Some particular areas that currently fascinate him most include robust statistics, minimax testing, optimal transport, and causal inference.