We consider the learnability of geometric concepts and distributions given only positive examples. While learning from positive examples alone is generally impossible without additional assumptions, we propose two approaches that enable positive results. When samples are generated from a structured distribution such as a Gaussian, we show that any concept class with low Gaussian surface area can be learned using only a few positive samples. In addition, we show that when an oracle is available that can check the validity of generated examples during training, stronger results can be obtained even under arbitrary distributions and in the presence of "model errors". We show that, while proper learning often requires exponentially many queries to the invalidity oracle, improper distribution learning can be done efficiently using polynomially many queries.
The talk is based on my recent work with Hanneke, Kalai, and Kamath (COLT 2018), with Daskalakis, Gouleakis, and Zampetakis (FOCS 2018), and on more recent follow-up work.
Christos Tzamos is an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin-Madison and a member of the Theory of Computing group. His research interests lie at the interface of the Theory of Computation with Economics and Game Theory, Machine Learning, Statistics, and Probability Theory. He completed his PhD in the Theory of Computation group at MIT, advised by Costis Daskalakis, and worked as a postdoctoral researcher at Microsoft Research New England. Before that, he studied Electrical and Computer Engineering at NTUA, where he was a member of Corelab working with Dimitris Fotakis. Christos received the George M. Sprowls Award for the best Computer Science PhD thesis in the EECS Department at MIT and is a recipient of the Simons Graduate Award in Theoretical Computer Science. He is also a recipient of a Best Paper award at the 2013 ACM Conference on Economics and Computation.