Representation Learning and Exploration in RL

Seminar | January 30 | 12-1:30 p.m. | 560 Evans Hall

John Co-Reyes, UC Berkeley

Helen Wills Neuroscience Institute

Sparse-reward and long-horizon tasks are among the most interesting yet challenging problems in reinforcement learning. I will discuss recent work that leverages representation learning to tackle this class of problems. We present a novel model that learns a latent representation of low-level skills by embedding trajectories with a variational autoencoder. Skills are learned in an unsupervised manner using a maximum entropy objective that encourages diversity. A key component is learning both a latent-conditioned policy and a latent-conditioned model that are consistent with each other. A built-in prediction mechanism allows planning in the learned space of skills to solve sparse-reward tasks that are otherwise not solvable with existing methods. I'll also discuss how representation learning can be used for better exploration. More specifically, I'll present a novelty detection algorithm based on discriminatively trained exemplar models.
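The core idea in the abstract, embedding trajectories into a latent skill space with a variational autoencoder, can be sketched minimally as follows. This is an illustrative sketch only, not the talk's actual model: the linear encoder/decoder, the dimensions, and all parameter names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the talk):
# trajectory length, per-step state size, latent skill size.
T, obs_dim, latent_dim = 10, 4, 2

def encode(traj, W_mu, W_logvar):
    """Map a flattened trajectory to the mean and log-variance of q(z | tau)."""
    x = traj.reshape(-1)
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    """Reconstruct the trajectory from the latent skill z."""
    return (W_dec @ z).reshape(T, obs_dim)

def elbo_terms(traj, recon, mu, logvar):
    """Squared reconstruction error and KL(q(z | tau) || N(0, I))."""
    recon_err = np.sum((traj - recon) ** 2)
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return recon_err, kl

# Random (untrained) parameters, just to exercise the shapes.
W_mu = rng.standard_normal((latent_dim, T * obs_dim)) * 0.1
W_logvar = rng.standard_normal((latent_dim, T * obs_dim)) * 0.1
W_dec = rng.standard_normal((T * obs_dim, latent_dim)) * 0.1

traj = rng.standard_normal((T, obs_dim))          # one trajectory of states
mu, logvar = encode(traj, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)               # a sampled "skill"
recon = decode(z, W_dec)
recon_err, kl = elbo_terms(traj, recon, mu, logvar)
```

Training would minimize `recon_err + kl` over many trajectories; a latent-conditioned policy and model, as described above, would then be trained to be consistent with this embedding.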

 clamata@berkeley.edu