In many concrete optimization problems, the data are error-prone, computations have only finite accuracy, and the decision variables can be implemented only imprecisely. These inaccuracies, in conjunction with ill-conditioning, can have disastrous consequences for solution quality. Recent research on "robust" convex optimization attacks these issues using numerically tractable techniques. I will describe a simple notion of "robust regularization" and illustrate it for convex quadratics. Applied to the largest real part of the eigenvalues of a matrix (a highly nonconvex function), this regularization produces the "pseudospectral abscissa", an important tool for analyzing the robust stability of dynamical systems.
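To make the final idea concrete: the spectral abscissa of a matrix A is the largest real part of its eigenvalues, and the ε-pseudospectral abscissa is its robust regularization, the maximum spectral abscissa over all perturbations A + E with ‖E‖ ≤ ε. The sketch below (my own illustrative Monte-Carlo sampling, not one of the specialized criss-cross-type algorithms used in practice; the function names are hypothetical) estimates this quantity by sampling random perturbations of norm ε:

```python
import numpy as np

def spectral_abscissa(A):
    # Largest real part of the eigenvalues of A.
    return np.max(np.linalg.eigvals(A).real)

def pseudospectral_abscissa_estimate(A, eps, n_samples=2000, seed=0):
    # Crude Monte-Carlo lower bound on the eps-pseudospectral abscissa:
    # the max of spectral_abscissa(A + E) over perturbations ||E||_2 <= eps.
    # Random sampling only explores finitely many perturbations, so this
    # is a lower bound, not an exact value.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    best = spectral_abscissa(A)  # E = 0 is an admissible perturbation
    for _ in range(n_samples):
        E = rng.standard_normal((n, n))
        E *= eps / np.linalg.norm(E, 2)  # scale to spectral norm eps
        best = max(best, spectral_abscissa(A + E))
    return best
```

For a normal matrix the ε-pseudospectrum is the union of disks of radius ε around the eigenvalues, so the true ε-pseudospectral abscissa exceeds the spectral abscissa by exactly ε; for nonnormal matrices the gap can be far larger, which is precisely why the pseudospectral abscissa matters for robust stability.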