Details
In this talk, I discuss how approximate message passing (AMP) algorithms (a class of efficient, iterative algorithms that have been successfully employed in many statistical learning tasks, such as high-dimensional linear regression and low-rank matrix estimation) can be used to characterize the exact statistical properties of estimators in a high-dimensional asymptotic regime, where the sample size grows proportionally to the number of parameters. As a running example, we study sorted L1 penalization (SLOPE) for linear regression and show how AMP theory yields insight into the variable selection properties of this estimator by characterizing the optimal trade-off between measures of type I and type II error.
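To make the running example concrete, here is a minimal sketch of the SLOPE penalty: the estimator minimizes (1/2)||y - Xb||^2 plus a sorted-L1 penalty, which pairs a nonincreasing weight sequence with the sorted absolute values of the coefficients. The function name and the particular weight values below are illustrative, not from the talk.

```python
import numpy as np

def slope_penalty(beta, lam):
    """Sorted-L1 (SLOPE) penalty: sum_i lam[i] * |beta|_(i),
    where |beta|_(1) >= |beta|_(2) >= ... are the absolute values
    of beta sorted in decreasing order, and lam is nonincreasing."""
    abs_sorted = np.sort(np.abs(beta))[::-1]  # |beta| in decreasing order
    return float(np.dot(lam, abs_sorted))

# Illustrative weights: largest penalty on the largest coefficient.
beta = np.array([3.0, -1.0, 2.0])
lam = np.array([2.0, 1.0, 0.5])
print(slope_penalty(beta, lam))  # 2*3 + 1*2 + 0.5*1 = 8.5
```

When all weights in `lam` are equal, the penalty reduces to the ordinary lasso penalty; the nonincreasing weights are what give SLOPE its adaptive control of false discoveries.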
Collaborators on this work include Zhiqi Bu, Oliver Feng, Jason Klusowski, Richard Samworth, Weijie Su, Ramji Venkataramanan, and Ruijia Wu.