BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//University of California\, Berkeley//UCB Events Calendar//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:STANDARD
DTSTART:19701101T020000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:19700308T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20180809T170841Z
DTSTART;TZID=America/Los_Angeles:20181009T110000
DTEND;TZID=America/Los_Angeles:20181009T123000
TRANSP:OPAQUE
SUMMARY:Seminar 217\, Risk Management: Robust Learning: Information Theory and Algorithms
UID:118749-ucb-events-calendar@berkeley.edu
ORGANIZER;CN="UC Berkeley Calendar Network":
LOCATION:1011 Evans Hall
DESCRIPTION:Speaker: Jacob Steinhardt\, Stanford\n\nThis talk will provide an overview of recent results in high-dimensional robust estimation. The key question is the following: given a dataset\, some fraction of which consists of arbitrary outliers\, what can be learned about the non-outlying points? This is a classical question going back at least to Tukey (1960). However\, this question has recently received renewed interest for a combination of reasons. First\, many of the older results do not give meaningful error bounds in high dimensions (for instance\, the error often includes an implicit sqrt(d)-factor in d dimensions). Second\, recent connections have been established between robust estimation and other problems such as clustering and learning of stochastic block models. Currently\, the best known results for clustering mixtures of Gaussians are via these robust estimation techniques. Finally\, high-dimensional biological datasets with structured outliers such as batch effects\, together with security concerns for machine learning systems\, motivate the study of robustness to worst-case outliers from an applied direction.\n\nThe talk will cover both information-theoretic and algorithmic techniques in robust estimation\, aiming to give an accessible introduction. We will start by reviewing the 1-dimensional case\, and show that many natural estimators break down in higher dimensions. Then we will give a simple argument that robust estimation is information-theoretically possible. Finally\, we will show that under stronger assumptions we can perform robust estimation efficiently\, via a "dual coupling" inequality that is reminiscent of matrix concentration inequalities.
URL:http://events.berkeley.edu/index.php/calendar/sn/pubaff.html?event_ID=118749&view=preview
SEQUENCE:0
CLASS:PUBLIC
CREATED:20180809T170841Z
LAST-MODIFIED:20180927T174035Z
X-MICROSOFT-CDO-BUSYSTATUS:BUSY
X-MICROSOFT-CDO-INSTTYPE:0
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-OWNERAPPTID:-1
END:VEVENT
END:VCALENDAR