Information-theoretic Privacy: Leakage measures, robust privacy guarantees, and generative adversarial mechanism design: BLISS Seminar

Seminar | March 11 | 3-4 p.m. | 540 Cory Hall

Lalitha Sankar, Arizona State University

Electrical Engineering and Computer Sciences (EECS)

Privacy is the problem of limiting the leakage of information about sensitive features while still sharing useful information (utility) about non-sensitive features with legitimate data users. Even as differential privacy has emerged as a strong desideratum, there is an equally strong need for context-aware, utility-guaranteeing approaches in most data-sharing settings. This talk addresses this dual requirement through an information-theoretic framework that includes operationally motivated leakage measures, the design of privacy mechanisms, and verifiable implementations using generative adversarial models.

Specifically, we introduce maximal alpha leakage as a new class of adversarially motivated, tunable leakage measures based on accurately guessing an arbitrary function of a dataset conditioned on a released dataset. The choice of alpha determines the adversarial action, ranging from refining a belief (alpha = 1) to making a maximum a posteriori guess (alpha = ∞); at these extremal values the measure reduces to mutual information (MI) and maximal leakage (MaxL), respectively. The problem of guaranteeing privacy can then be viewed as one of designing a randomizing mechanism that minimizes (maximal) alpha leakage subject to utility constraints. We then present bounds on the robustness of the privacy guarantees that can be made when mechanisms are designed from a finite number of samples.

Finally, we focus on a data-driven approach, generative adversarial privacy (GAP), for designing privacy mechanisms using neural networks. GAP is modeled as a constrained minimax game between a privatizer (intent on publishing a utility-guaranteeing learned representation that limits leakage of the sensitive features) and an adversary (intent on learning the sensitive features). We demonstrate the performance of GAP on multi-dimensional Gaussian mixture models and the GENKI dataset. Time permitting, we will briefly discuss the learning-theoretic underpinnings of GAP as well as connections to the problem of algorithmic fairness.
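For reference, the two extremal values of alpha mentioned in the abstract correspond to standard information-theoretic quantities; their textbook definitions (standard in the literature, not reproduced from the talk) are:

    \alpha = 1:\quad I(X;Y) = \sum_{x,y} P_{X,Y}(x,y)\,\log\frac{P_{X,Y}(x,y)}{P_X(x)\,P_Y(y)}

    \alpha = \infty:\quad \mathrm{MaxL}(X \to Y) = \log \sum_{y} \max_{x:\,P_X(x)>0} P_{Y|X}(y \mid x)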
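The minimax structure of GAP also suggests a straightforward alternating-gradient implementation. The following is a minimal PyTorch sketch, assuming a binary sensitive attribute and a mean-squared distortion penalty as a stand-in for the utility constraint; all names (Privatizer, Adversary, gap_step, lam) are illustrative and are not taken from the talk or the papers it describes.

import torch
import torch.nn as nn

class Privatizer(nn.Module):
    # Maps raw data X to a released representation Z via a learned, data-dependent perturbation.
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, x):
        return x + self.net(x)  # released representation Z

class Adversary(nn.Module):
    # Tries to infer the binary sensitive attribute S from the released Z.
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z)  # logit for P(S = 1 | Z)

def gap_step(priv, adv, opt_priv, opt_adv, x, s, lam=1.0):
    # One round of the alternating minimax game.
    # x: (N, d) float tensor of data; s: (N, 1) float tensor of sensitive labels in {0, 1}.
    bce = nn.BCEWithLogitsLoss()

    # Adversary update: minimize its inference loss on the current release.
    z = priv(x).detach()
    adv_loss = bce(adv(z), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Privatizer update: maximize the adversary's loss (limit leakage of S)
    # while penalizing distortion of X as a proxy for the utility constraint.
    z = priv(x)
    priv_loss = -bce(adv(z), s) + lam * ((z - x) ** 2).mean()
    opt_priv.zero_grad()
    priv_loss.backward()
    opt_priv.step()
    return adv_loss.item(), priv_loss.item()

In this sketch the weight lam acts as the multiplier on the utility (distortion) constraint; sweeping it traces out a privacy-utility trade-off.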

This work is a result of multiple collaborations: (a) maximal alpha leakage with J. Liao (ASU), O. Kosut (ASU), and F. P. Calmon (Harvard); (b) robust mechanism design with M. Diaz (ASU), H. Wang (Harvard), and F. P. Calmon (Harvard); and (c) GAP with C. Huang (ASU), P. Kairouz (Google), X. Chen (Stanford), and R. Rajagopal (Stanford).

ashwinpm@berkeley.edu