Recent Advances in Algorithmic High-Dimensional Robust Statistics
Seminar: Neyman Seminar | February 21 | 4-5 p.m. | 1011 Evans Hall
Ilias Diakonikolas, USC
Fitting a model to a collection of observations is one of the quintessential problems in machine learning. Since any model is only approximately valid, an estimator that is useful in practice must also be robust in the presence of model misspecification. It turns out that there is a striking tension between robustness and computational efficiency. Even for the most basic high-dimensional tasks, such as robustly computing the mean and covariance, until recently the only known estimators were either computationally intractable or could tolerate only a negligible fraction of errors.
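This tension can be seen in a toy numerical sketch (illustrative only, not from the talk): with an eps-fraction of adversarial outliers, the empirical mean can be moved arbitrarily far from the true mean, while a classical robust estimator such as the coordinate-wise median degrades far more gracefully. The data model, contamination pattern, and parameter choices below are assumptions made for the illustration; even the median's error grows with the dimension in the worst case, which is the regime the new algorithmic results address.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 50, 10_000, 0.1  # dimension, sample size, contamination fraction

# Inliers: standard Gaussian samples with true mean 0 in each coordinate.
X = rng.standard_normal((n, d))

# Adversarial contamination: replace an eps-fraction of points with a
# far-away cluster (every coordinate set to 100).
k = int(eps * n)
X[:k] = 100.0

true_mean = np.zeros(d)

# The empirical mean absorbs the outliers: each coordinate shifts by
# roughly eps * 100, so the l2 error is on the order of eps * 100 * sqrt(d).
err_mean = np.linalg.norm(X.mean(axis=0) - true_mean)

# The coordinate-wise median shifts only slightly per coordinate,
# but its error still scales with sqrt(d) in the worst case.
err_median = np.linalg.norm(np.median(X, axis=0) - true_mean)

print(f"empirical mean error:         {err_mean:.2f}")
print(f"coordinate-wise median error: {err_median:.2f}")
```

Running this shows the mean's error dominated by the outlier cluster while the median stays near the truth; the point of the recent algorithmic work surveyed in the talk is to achieve dimension-independent error guarantees like this efficiently, which neither estimator above provides.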
In this talk, I will survey the recent progress in algorithmic high-dimensional robust statistics. I will describe the first robust and efficiently computable estimators for several fundamental statistical tasks that were previously thought to be computationally intractable. These include robust estimation of mean and covariance in high dimensions, robust learning of various latent variable models, and robust supervised learning. The new robust estimators are scalable in practice and yield a number of applications in exploratory data analysis and adversarial machine learning.