Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Training these models often requires large, representative datasets, which may be crowd-sourced and contain sensitive information. The models should not expose private information in these datasets. Differential Privacy is a standard privacy definition that implies a strong and concrete guarantee on protecting such information.
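For reference, the standard (ε, δ)-differential privacy guarantee mentioned above can be stated as follows; the notation here is the conventional one and is not taken from the talk. A randomized mechanism \(\mathcal{M}\) is (ε, δ)-differentially private if, for all pairs of datasets \(D, D'\) differing in one record and all measurable sets of outputs \(S\),

\[
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta .
\]

Informally, the mechanism's output distribution changes very little when any single individual's data is added or removed.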
In this talk, I'll outline two recent approaches to training deep neural networks while providing a differential privacy guarantee, along with some new analysis tools we developed in the process. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
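One widely used approach of this kind (noisy SGD, as in the DP-SGD line of work) modifies ordinary gradient descent by clipping each example's gradient and adding calibrated Gaussian noise to the aggregate. The sketch below illustrates that aggregation step only; the function and parameter names are illustrative, not taken from the talk or any library, and a real implementation would also track the cumulative privacy loss across steps.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD gradient aggregation step (illustrative sketch).

    per_example_grads: array of shape (batch_size, dim), one gradient
    per training example. Each gradient is clipped to L2 norm
    `clip_norm`, the clipped gradients are summed, Gaussian noise
    scaled to the clip norm is added, and the result is averaged.
    """
    batch_size = per_example_grads.shape[0]
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale down any gradient whose L2 norm exceeds the threshold,
    # bounding each example's influence on the update.
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * factors
    # Gaussian noise calibrated to the clipping norm masks any single
    # example's contribution to the released average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / batch_size
```

With `noise_multiplier = 0` the step reduces to plain averaged, clipped SGD, which makes the clipping behavior easy to check in isolation.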
Based on joint works with Martin Abadi, Andy Chu, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot and Li Zhang.
Kunal Talwar has been a Research Scientist at Google since 2015, working in Privacy, Machine Learning and Algorithms. He graduated from UC Berkeley in 2004 and worked at Microsoft Research in Silicon Valley until 2014. He has made major contributions in Metric Embeddings, Algorithms, and Differential Privacy. He co-developed the Exponential Mechanism, the first private mechanism for non-numerical queries, and established connections between Game theory and Differential Privacy. His work on the Geometry of Differential Privacy led to instance-optimal private algorithms for a large class of queries. It also uncovered a surprising connection between Discrepancy theory and Functional analysis, leading to the first non-trivial approximation algorithms for Hereditary Discrepancy and to progress on the Tusnady problem.