We consider the following basic problem: Given corrupted samples from a high-dimensional Gaussian, can we efficiently learn its parameters? This is the prototypical question in robust statistics, a field that took shape in the 1960s with the pioneering works of Tukey and Huber. Unfortunately, all known robust estimators are hard to compute in high dimensions. This prompts the following question: Can we reconcile robustness and computational efficiency in high-dimensional learning?
In this work, we give the first efficient algorithms for robustly learning a high-dimensional Gaussian that are able to tolerate a constant fraction of corruptions. Our techniques also yield robust estimators for several other high-dimensional models, including Bayesian networks and various mixture models.
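To illustrate why robustness is nontrivial, the following minimal NumPy sketch (not the algorithms from the talk) shows that the empirical mean is not robust: an adversary controlling an ε-fraction of the samples can shift it arbitrarily far from the true mean. The dimension `d`, sample size `n`, corruption level `eps`, and outlier magnitude `R` are all illustrative choices.

```python
import numpy as np

# Clean samples from N(0, I_d); the true mean is the zero vector.
rng = np.random.default_rng(0)
d, n, eps = 100, 10_000, 0.1
X = rng.standard_normal((n, d))

# Adversary replaces an eps-fraction of the points with copies of
# R * e_1, i.e., points at distance R along the first coordinate.
R = 1_000.0
k = int(eps * n)
X[:k] = 0.0
X[:k, 0] = R

# The empirical mean is dragged by roughly eps * R in coordinate 0,
# and this shift grows without bound as R increases.
naive = X.mean(axis=0)

# The coordinate-wise median resists the outliers in each coordinate.
cw_median = np.median(X, axis=0)

print(naive[0])      # large: roughly eps * R
print(cw_median[0])  # small: close to the true mean, 0
```

Classical robust estimators like the coordinate-wise median do survive this attack per coordinate, but their overall ℓ₂ error grows with the dimension (roughly ε·√d here), which is precisely the gap between robustness and dimension-independent accuracy that efficient high-dimensional algorithms aim to close.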
The talk will be based on joint works with (different subsets of) G. Kamath, D. Kane, J. Li, A. Moitra, and A. Stewart.
Ilias Diakonikolas is an Assistant Professor at the University of Southern California (USC) where he holds the Andrew and Erna Viterbi Early Career Chair in Computer Science. He obtained a diploma in electrical and computer engineering from the National Technical University of Athens and a Ph.D. in computer science from Columbia University where he was advised by Mihalis Yannakakis. Before moving to USC, he was a faculty member at the University of Edinburgh, and prior to that he was the Simons postdoctoral fellow in theoretical computer science at the University of California, Berkeley. His research is on the algorithmic foundations of massive data sets, in particular on designing efficient algorithms for fundamental problems in machine learning.
Diakonikolas is the recipient of the NSF Early Career Development Award, the Sloan Research Fellowship in Computer Science, the Google Faculty Research Award, the Marie Curie Fellowship, the IBM Research Pat Goldberg Best Paper Award, a best thesis award from Columbia University, and an honorable mention in the George Nicholson competition from the INFORMS society.