
Thursday, February 7, 2013




Special EECS Seminar: How to make predictions when you're short on information

Seminar: Departmental | February 7 | 3-4 p.m. | Soda Hall, Wozniak Lounge (430)


Benjamin Recht, Assistant Professor, University of Wisconsin-Madison

Electrical Engineering and Computer Sciences (EECS)


With the advent of massive social networks, exascale computing, and high-throughput biology, researchers in every scientific department now face profound challenges in analyzing, manipulating, and identifying behavior from a deluge of noisy, incomplete data. In this talk, I will present a unifying optimization framework that makes such data analysis tasks less sensitive to corrupted and missing data by exploiting domain-specific knowledge and prior information about structure.

Specifically, I will show that when a signal or system of interest can be represented by a combination of a few simple building blocks---called atoms---it can be identified from dramatically fewer sensors and with accelerated acquisition times. For example, radar signals can be decomposed into a sum of elementary propagating waves, metabolic dynamics can be analyzed as sums of multi-index data arrays, and aggregate rankings of sports teams can be written as sums of a few permutations. In each application, the challenge lies not only in defining the appropriate set of atoms, but also in estimating the most parsimonious combination of atoms that agrees with a small set of measurements.
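The idea above can be made concrete in its simplest instance, where the atoms are standard basis vectors and the "parsimonious combination" is a sparse vector. The sketch below (an illustration under those assumptions, not code from the talk) recovers a 3-sparse, 60-dimensional signal from only 30 random linear measurements by running proximal gradient descent (ISTA) on the lasso objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a 60-dim signal built from only 3 atoms
# (here the atoms are standard basis vectors, i.e. the signal is sparse).
n, k, m = 60, 3, 30
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = 5.0 * rng.normal(size=k)

# Far fewer measurements than dimensions: y = A x with a 30 x 60 matrix.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: proximal gradient descent on the lasso objective
#   min_x  0.5 ||A x - y||^2 + lam ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):
    z = x - step * (A.T @ (A @ x - y))              # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With 30 Gaussian measurements of a 3-sparse signal, the l1 relaxation recovers the true signal to small relative error even though the linear system is underdetermined.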

This talk advances a framework for transforming notions of simplicity and latent low-dimensionality into convex optimization problems. My approach builds on the recent success of generalizing compressed sensing to matrix completion, creating a unified optimization framework that greatly extends the catalog of objects and structures recoverable from partial information. This framework provides a standardized methodology to sharply bound the number of observations required to robustly estimate a variety of structured models. It also enables focused algorithmic development that can be deployed in many different applications, a variety of which I will detail in this talk. I will close by demonstrating how this framework provides the abstractions necessary to scale these optimization algorithms to the massive data sets we now commonly acquire.
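One instance of this recipe, the extension of compressed sensing to matrix completion, swaps the l1 norm for the nuclear norm, whose proximal operator soft-thresholds singular values. A minimal sketch (my own illustration with arbitrary sizes and penalty, not the speaker's algorithm) fills in a rank-2 matrix from roughly half of its entries:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 30 x 30 rank-2 matrix, observed on roughly 50% of its entries.
n, r = 30, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))
mask = rng.random((n, n)) < 0.5

# Proximal gradient on  min_X  0.5 ||mask * (X - M)||_F^2 + lam ||X||_*
# The data-fit term has Lipschitz constant 1, so a unit step is valid.
lam = 0.1
X = np.zeros((n, n))
for _ in range(500):
    G = mask * (X - M)                         # gradient of the data-fit term
    U, s, Vt = np.linalg.svd(X - G, full_matrices=False)
    X = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt  # singular value soft-threshold

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Because a rank-2 matrix has far fewer degrees of freedom than entries, the nuclear-norm relaxation reconstructs the unobserved half of the matrix to small relative error.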

Bio: Benjamin Recht is an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin-Madison and holds courtesy appointments in Electrical and Computer Engineering, Mathematics, and Statistics. He is a PI in the Wisconsin Institute for Discovery (WID), a newly founded center for research at the convergence of information technology, biotechnology, and nanotechnology. Ben received his B.S. in Mathematics from the University of Chicago, and an M.S. and Ph.D. from the MIT Media Laboratory. After completing his doctoral work, he was a postdoctoral fellow in the Center for the Mathematics of Information at Caltech. He is the recipient of an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, and the 2012 SIAM/MOS Lagrange Prize in Continuous Optimization.


richter@eecs.berkeley.edu, 510-643-8208