BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//University of California\, Berkeley//UCB Events Calendar//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:19700308T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20190423T170434Z
DTSTART;TZID=America/Los_Angeles:20190424T140000
DTEND;TZID=America/Los_Angeles:20190424T150000
TRANSP:OPAQUE
SUMMARY:A Convex Duality Framework for GANs: BLISS Seminar
UID:125455-ucb-events-calendar@berkeley.edu
ORGANIZER;CN="UC Berkeley Calendar Network":
LOCATION:531 Cory Hall
DESCRIPTION:Farzan Farnia\, Stanford\n\nA generative adversarial network (GAN) is a minimax game between a generator mimicking the true model and a discriminator distinguishing the samples produced by the generator from the real training samples. Given a discriminator trained over the entire space of functions\, this game reduces to finding the generative model which minimizes a divergence score\, e.g. the Jensen-Shannon (JS) divergence\, to the data distribution. However\, in practice the discriminator is trained over smaller function classes such as convolutional neural networks. A natural question\, then\, is how the divergence minimization interpretation changes as we constrain the discriminator. In this talk\, we address this question by developing a convex duality framework for analyzing GANs. We show that GANs in general can be interpreted as minimizing a divergence between two sets of probability distributions: generative models and discriminator moment-matching models. We prove that this interpretation applies to a wide class of existing GAN formulations including vanilla GAN\, f-GAN\, Wasserstein GAN\, Energy-based GAN\, and MMD-GAN. We then use the convex duality framework to explain why regularizing the discriminator's Lipschitz constant can dramatically improve the models learned by GANs. We numerically demonstrate the power of different Lipschitz regularization methods for improving the training performance in standard GAN settings.
URL:http://events.berkeley.edu/index.php/calendar/sn/pubaff.html?event_ID=125455&view=preview
SEQUENCE:0
CLASS:PUBLIC
CREATED:20190423T170434Z
LAST-MODIFIED:20190423T171316Z
X-MICROSOFT-CDO-BUSYSTATUS:BUSY
X-MICROSOFT-CDO-INSTTYPE:0
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-OWNERAPPTID:-1
END:VEVENT
END:VCALENDAR