BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//University of California\, Berkeley//UCB Events Calendar//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:19700308T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20191031T180101Z
X-ORACLE-EVENTTYPE:DAILY NOTE
DTSTART;VALUE=DATE:20191111
DTEND;VALUE=DATE:20191112
TRANSP:TRANSPARENT
SUMMARY:Dissertation Talk: Optimal Tradeoffs in Modern Hypothesis Testing
UID:129484-ucb-events-calendar@berkeley.edu
ORGANIZER;CN="UC Berkeley Calendar Network":
LOCATION:511 Soda Hall
DESCRIPTION:Maxim Rabinovich\, UC Berkeley\n\nMany applications of statistics both in science and in the technology industry come in the form of hypothesis testing\, whether executing an A/B test on a product or determining which genes are associated with disease risk. Unfortunately\, classical statistical algorithms do not address the requirements of modern applications\, where true effects may be very rare and the number of tests may run into the tens of thousands or more. The field of multiple hypothesis testing has developed to fill this gap\, introducing algorithms better suited to large-scale testing scenarios. But these algorithms are challenging to design\, and it is not clear whether the algorithms we know can be improved. In this work\, we place constraints on the performance of any multiple testing algorithm. The constraints come in the form of an optimal tradeoff between the two kinds of errors such an algorithm can make (i.e. false positives and false negatives). In more precise terms\, this work provides a framework for establishing the tradeoff between the False Discovery Rate (FDR)\, a measure of the fraction of effects discovered by the algorithm that turn out not to be real effects\, and the False Non-discovery Rate (FNR)\, a measure of the fraction of true effects the algorithm misses. This framework applies to a wide array of testing models\, yields predictions that can be numerically simulated in virtually any model\, and can be analytically instantiated in a number of models previously studied in the context of multiple hypothesis testing. In cases where the framework yields analytically tractable bounds\, they match the best previously known results established by the speaker and others. The work in this talk is joint with Michael I. Jordan and Martin J. Wainwright.
URL:http://events.berkeley.edu/index.php/calendar/sn/pubaff.html?event_ID=129484&view=preview
SEQUENCE:0
CLASS:PUBLIC
CREATED:20191031T180101Z
LAST-MODIFIED:20191031T181602Z
X-MICROSOFT-CDO-BUSYSTATUS:BUSY
X-MICROSOFT-CDO-INSTTYPE:0
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-OWNERAPPTID:-1
END:VEVENT
END:VCALENDAR