Representation Learning

Workshop | March 27 – 31, 2017 | Calvin Laboratory (Simons Institute for the Theory of Computing)


This workshop will focus on dramatic advances in representation learning taking place in natural language processing, speech, and vision. For instance, deep learning can be thought of as a method that combines the task of finding a classifier (the top layer of the deep net) with the task of learning a representation (namely, the one computed at the last-but-one layer).
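As a toy illustration of this decomposition (not drawn from the workshop materials), the following PyTorch sketch separates a network into a feature stack, whose last-but-one-layer output is the learned representation, and a linear classifier on top. The class name `RepresentationNet` and all layer sizes are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

# A toy multilayer perceptron. Everything below the final layer computes a
# representation; the final linear layer is the classifier trained on it.
class RepresentationNet(nn.Module):
    def __init__(self, in_dim=784, rep_dim=128, n_classes=10):
        super().__init__()
        # Lower layers: learn the representation (last-but-one layer output).
        self.features = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, rep_dim), nn.ReLU(),
        )
        # Top layer: a linear classifier over the learned representation.
        self.classifier = nn.Linear(rep_dim, n_classes)

    def forward(self, x):
        rep = self.features(x)       # representation at the penultimate layer
        return self.classifier(rep)  # class scores from the top layer

net = RepresentationNet()
x = torch.randn(32, 784)   # a batch of 32 toy inputs (dimensions assumed)
logits = net(x)            # the combined "representation + classifier" output
rep = net.features(x)      # the representation alone, reusable for transfer
print(logits.shape, rep.shape)  # torch.Size([32, 10]) torch.Size([32, 128])
```

Training the whole network end to end learns both pieces at once; keeping `net.features` and retraining only `net.classifier` is the standard transfer-learning use of such a representation.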

Developing a theory for such empirical work is an exciting quest, especially since the empirical work relies on non-convex optimization. The workshop will bring together theorists and practitioners; sample issues to be discussed include:
(a) Which models for representation make more sense than others, and why? In other words, what patterns in data are they capturing, and how are those patterns useful?
(b) What is an analog of generalization theory for representation learning? Can it lead to a theory of transfer learning to new distributions of inputs?
(c) How can we design algorithms for representation learning with provable guarantees? What progress has already been made, and what lessons can we draw from it?
(d) How can we learn representations that combine probabilities and logic?

Organizers:
Sham Kakade (University of Washington; chair), Sanjeev Arora (Princeton University), Kristen Grauman (University of Texas at Austin), Ruslan Salakhutdinov (University of Toronto), Noah Smith (University of Washington).


Contact: simonsevents@berkeley.edu, 510-664-9856