Security and Privacy Guarantees in Machine Learning with Differential Privacy: talk by Roxana Geambasu, Columbia University

Lecture: Real-time Intelligent Secure Execution Laboratory (RISELab), CS | August 13 | 12-1 p.m. | 405 Soda Hall

Speaker: Roxana Geambasu, Columbia University

Sponsor: RISELab

Machine learning (ML) drives many of our applications and life-changing decisions. Yet it is often brittle and unstable, making decisions that are hard to understand or easy to exploit. Tiny changes to an input can cause dramatic changes in predictions; this results in decisions that surprise, appear unfair, or enable attack vectors such as adversarial examples. Moreover, models trained on users' data can encode not only general trends from large datasets but also very specific, personal information from those datasets, such as social security numbers and credit card numbers from emails; this threatens to expose users' secrets through ML models or their predictions.

This talk positions differential privacy (DP) -- a rigorous privacy theory -- as a versatile foundation for building into ML much-needed guarantees not only of privacy but also of security and stability. I first present PixelDP, a scalable certified defense against adversarial examples that leverages DP theory to guarantee a level of robustness against this attack. I then present Sage, a DP ML platform that bounds the cumulative leakage of secrets through models while addressing some of DP's most pressing challenges, such as the problem of running out of privacy budget. PixelDP and Sage are designed from a pragmatic systems perspective and illustrate that DP theory is powerful but requires adaptation to achieve practical guarantees for ML workloads.
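To make the PixelDP idea concrete, here is a minimal sketch of randomized prediction: average the model's class scores over many noisy copies of the input, so that a bounded change to the input can only shift the averaged scores by a bounded amount. This is not the authors' implementation; `model_fn`, `sigma`, and `n_draws` are illustrative names, and the actual system places a calibrated noise layer inside the network and derives a formal (epsilon, delta) robustness certificate from DP theory.

```python
import numpy as np

def pixeldp_predict(model_fn, x, num_classes, sigma=0.5, n_draws=100):
    """Monte Carlo estimate of the expected prediction under Gaussian
    noise added to the input (the simplest placement of a PixelDP-style
    noise layer). The smoothed scores are what a robustness certificate
    would be computed over."""
    scores = np.zeros(num_classes)
    for _ in range(n_draws):
        noisy_x = x + np.random.normal(0.0, sigma, size=x.shape)
        scores += model_fn(noisy_x)  # model_fn returns per-class scores
    return scores / n_draws

# Hypothetical usage, assuming model_fn maps an image array to softmax scores:
# avg = pixeldp_predict(model_fn, image, num_classes=10)
# label = int(np.argmax(avg))
```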
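Similarly, a toy sketch of the budget-accounting problem Sage addresses appears below. The class and method names here are hypothetical, and Sage's actual mechanism (block composition over growing data streams) is more sophisticated; the sketch only illustrates the core intuition that each data block carries its own privacy budget, so newly arriving data replenishes the platform's overall capacity rather than a single global budget running dry.

```python
class BlockBudgetAccountant:
    """Toy per-block privacy accountant. Each data block carries its own
    epsilon budget; a DP training run is admitted only if every block it
    touches can still pay for it."""

    def __init__(self, block_budget):
        self.block_budget = block_budget   # per-block epsilon cap
        self.spent = {}                    # block_id -> epsilon used so far

    def try_charge(self, block_ids, epsilon):
        # Reject if any requested block would exceed its budget.
        if any(self.spent.get(b, 0.0) + epsilon > self.block_budget
               for b in block_ids):
            return False
        for b in block_ids:
            self.spent[b] = self.spent.get(b, 0.0) + epsilon
        return True

# Hypothetical usage: a training run over two daily blocks, epsilon = 0.1.
acct = BlockBudgetAccountant(block_budget=1.0)
assert acct.try_charge(["2019-08-01", "2019-08-02"], 0.1)
```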

Event contact: bzar@berkeley.edu, 510-643-0264