Details
In this talk, we revisit random feature ridge regression (RFRR), a model that shares a number of phenomena with deep learning—such as double descent, benign overfitting, and scaling laws. Our main contribution is the derivation of a general deterministic equivalent for the test error of RFRR. Specifically, under certain concentration properties, we show that the test error is well approximated by a closed-form expression that only depends on the feature map eigenvalues. Notably, our approximation guarantees are non-asymptotic, multiplicative, and independent of the feature map dimension.
These guarantees depart from the usual random matrix theory results, which largely focus on proportional asymptotics and additive error bounds. In contrast, our theory applies to features drawn from infinite-dimensional Hilbert spaces (the typical setting in machine learning) and to models where the test error itself vanishes polynomially.
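For orientation, here is a sketch of what such a closed-form deterministic equivalent looks like in the kernel limit of RFRR (infinitely many random features), written in a convention that is standard in the ridge regression literature; this is an illustrative special case, not a reproduction of the talk's general RFRR expression, which also tracks the number of random features. Writing \( (\lambda_j)_{j\ge 1} \) for the feature-map eigenvalues, \( (\theta_j)_{j\ge 1} \) for the target coefficients in the corresponding eigenbasis, \( \sigma^2 \) for the noise level, \( n \) for the sample size, and \( \lambda \) for the ridge penalty, one first defines an effective regularization \( \lambda_* \) and then a closed-form risk prediction:
\[
\lambda \;=\; \lambda_*\Big(1-\frac{1}{n}\sum_{j\ge 1}\frac{\lambda_j}{\lambda_j+\lambda_*}\Big),
\qquad
\mathcal{D} \;=\; \frac{1}{n}\sum_{j\ge 1}\frac{\lambda_j^2}{(\lambda_j+\lambda_*)^2},
\]
\[
R_{\mathrm{test}}(\lambda)\;\approx\;\frac{1}{1-\mathcal{D}}\left[\sum_{j\ge 1}\frac{\lambda_*^2\,\lambda_j\theta_j^2}{(\lambda_j+\lambda_*)^2}\;+\;\sigma^2\,\mathcal{D}\right].
\]
The right-hand side is deterministic: it depends only on population quantities (spectrum, target coefficients, noise level), not on the random sample or the random features.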
To illustrate this precise characterization, we derive tight benign overfitting guarantees and sharp decay rates for RFRR under standard power-law assumptions on the spectrum and target decay.
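As a purely illustrative numerical sketch (not code from the talk or the paper), the following Python snippet plugs assumed power-law exponents into the kernel-limit formula above, with eigenvalues decaying like j^(-alpha) and signal weights lambda_j * theta_j^2 like j^(-beta); the exponents, spectrum truncation, and ridge value are all hypothetical choices.

import numpy as np

# Illustrative power-law setup (hypothetical exponents, not the talk's exact conventions):
#   eigenvalues     lambda_j ~ j^(-alpha)
#   signal weights  lambda_j * theta_j^2 ~ j^(-beta)
alpha, beta = 1.5, 2.0
J = 200_000                        # truncation of the infinite spectrum
j = np.arange(1, J + 1)
lam = j ** (-alpha)                # feature-map eigenvalues
signal = j ** (-beta)              # lambda_j * theta_j^2
sigma2 = 0.0                       # noiseless, to expose a clean power-law decay

def effective_reg(n, ridge, iters=200):
    # Solve ridge = s * (1 - (1/n) * sum_j lam_j / (lam_j + s)) by fixed-point iteration,
    # starting from an upper bound so the iterates decrease monotonically to the root.
    s = ridge + lam.sum() / n
    for _ in range(iters):
        s = ridge + (s / n) * np.sum(lam / (lam + s))
    return s

def predicted_test_error(n, ridge):
    # Closed-form prediction from the kernel-limit deterministic equivalent sketched above.
    s = effective_reg(n, ridge)
    D = np.sum(lam ** 2 / (lam + s) ** 2) / n
    bias = np.sum(s ** 2 * signal / (lam + s) ** 2)
    return (bias + sigma2 * D) / (1.0 - D)

# Under these assumed exponents the prediction should decay roughly like n^-(beta-1) = 1/n.
for n in (100, 1_000, 10_000):
    print(f"n = {n:6d}   predicted test error ≈ {predicted_test_error(n, ridge=1e-6):.3e}")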
This is based on joint work with Basil Saeed (Stanford), Leonardo Defilippis (ENS), and Bruno Loureiro (ENS).