A Convex Duality Framework for GANs: BLISS Seminar

Seminar | April 24 | 2-3 p.m. | 531 Cory Hall

Farzan Farnia, Stanford

Electrical Engineering and Computer Sciences (EECS)

A generative adversarial network (GAN) is a minimax game between a generator that mimics the true model and a discriminator that distinguishes the samples produced by the generator from the real training samples. Given a discriminator trained over the entire space of functions, this game reduces to finding the generative model that minimizes a divergence, e.g., the Jensen-Shannon (JS) divergence, to the data distribution. In practice, however, the discriminator is trained over smaller function classes such as convolutional neural networks. A natural question, then, is how the divergence-minimization interpretation changes as we constrain the discriminator. In this talk, we address this question by developing a convex duality framework for analyzing GANs. We show that GANs in general can be interpreted as minimizing a divergence between two sets of probability distributions: generative models and discriminator moment-matching models. We prove that this interpretation applies to a wide class of existing GAN formulations, including vanilla GAN, f-GAN, Wasserstein GAN, Energy-based GAN, and MMD-GAN. We then use the convex duality framework to explain why regularizing the discriminator's Lipschitz constant can dramatically improve the models learned by GANs. Finally, we numerically demonstrate the power of different Lipschitz regularization methods for improving training performance in standard GAN settings.
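As a concrete reference for the two standard facts the abstract leans on, here is a minimal sketch of the usual formulations; the notation (e.g., $P_{\mathrm{data}}$ for the data distribution and $P_G$ for the generator's distribution) is my own shorthand and not taken from the talk. The vanilla GAN objective is

\[
\min_G \max_D \; \mathbb{E}_{x \sim P_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim P_Z}\big[\log\big(1 - D(G(z))\big)\big],
\]

and when $D$ ranges over all functions, the inner maximum equals $2\,\mathrm{JS}(P_{\mathrm{data}} \,\|\, P_G) - \log 4$, which is the unconstrained-discriminator reduction described above. Constraining the discriminator instead to 1-Lipschitz functions gives the Wasserstein GAN objective,

\[
\min_G \; \max_{\|D\|_{\mathrm{Lip}} \le 1} \; \mathbb{E}_{x \sim P_{\mathrm{data}}}[D(x)] - \mathbb{E}_{x \sim P_G}[D(x)] \;=\; \min_G \; W_1(P_{\mathrm{data}}, P_G),
\]

where the equality is the Kantorovich-Rubinstein duality; this is one instance of the role that the discriminator's Lipschitz constant plays in the convex duality framework.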

ashwinpm@berkeley.edu