BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//University of California\, Berkeley//UCB Events Calendar//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:19700308T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20180320T204729Z
DTSTART;TZID=America/Los_Angeles:20180405T160000
DTEND;TZID=America/Los_Angeles:20180405T170000
TRANSP:OPAQUE
SUMMARY:Statistical inference of properties of distributions: theory\, algorithms\, and applications
UID:116476-ucb-events-calendar@berkeley.edu
ORGANIZER;CN="UC Berkeley Calendar Network":
LOCATION:HP Auditorium\, 306 Soda Hall
DESCRIPTION:Jiantao Jiao\, Ph.D. Candidate\, Stanford University\n\nModern data science applications—ranging from graphical model learning to image registration to inference of gene regulatory networks—frequently involve pipelines of exploratory analysis requiring accurate inference of a property of the distribution governing the data rather than the distribution itself. Notable examples of properties include mutual information\, Kullback-Leibler divergence\, total variation distance\, support size\, the "shape" of the distribution\, among others.\n\nThis talk will focus on recent progress in the performance\, structure\, and deployment of near-minimax-optimal estimators for a large variety of properties in high-dimensional and nonparametric settings. We present general methods for constructing information-theoretically near-optimal estimators\, and identify the corresponding limits in terms of the parameter dimension\, the mixing rate (for processes with memory)\, and the smoothness of the underlying density (in the nonparametric setting). We employ our schemes on the Google 1 Billion Word Dataset to estimate the fundamental limit of perplexity in language modeling\, and to improve graphical model and classification tree learning. The estimators are efficiently computable and exhibit a "sample size boosting" phenomenon\, i.e.\, they attain with n samples what prior methods would have needed n log(n) samples to achieve.\n\nBio:\n\nJiantao Jiao is a Ph.D. student in the Department of Electrical Engineering at Stanford University. He received the B.Eng. degree in Electronic Engineering from Tsinghua University\, Beijing\, China in 2012\, and the M.Eng. degree in Electrical Engineering from Stanford University in 2014. He is a recipient of the Presidential Award of Tsinghua University and the Stanford Graduate Fellowship. He was a semi-plenary speaker at ISIT 2015 and a co-recipient of the ISITA 2016 Student Paper Award. He co-designed and co-taught the graduate course EE378A (Statistical Signal Processing) at Stanford University in 2016 and 2017\, with his advisor Tsachy Weissman. His research interests are in statistical machine learning\, high-dimensional and nonparametric statistics\, information theory\, and their applications in social data analysis\, genomics\, and natural language processing. He is a co-founder of Qingfan (www.qingfan.com)\, an online platform that democratizes technical training and job opportunities for anyone with access to the internet.
URL:http://events.berkeley.edu/index.php/calendar/sn/pubaff.html?event_ID=116476&view=preview
SEQUENCE:0
CLASS:PUBLIC
CREATED:20180320T204729Z
LAST-MODIFIED:20180320T205127Z
X-MICROSOFT-CDO-BUSYSTATUS:BUSY
X-MICROSOFT-CDO-INSTTYPE:0
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-OWNERAPPTID:-1
END:VEVENT
END:VCALENDAR