BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//University of California\, Berkeley//UCB Events Calendar//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:19700308T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20181107T014524Z
DTSTART;TZID=America/Los_Angeles:20181114T160000
DTEND;TZID=America/Los_Angeles:20181114T170000
TRANSP:OPAQUE
SUMMARY:Condition Number Analysis of Logistic Regression\, and its Implications for Standard First-Order Solution Methods
UID:121429-ucb-events-calendar@berkeley.edu
ORGANIZER;CN="UC Berkeley Calendar Network":
LOCATION:1011 Evans Hall
DESCRIPTION:Paul Grigas\, UC Berkeley\n\nLogistic regression is one of the most popular methods in binary classification\, wherein estimation of model parameters is carried out by solving the maximum likelihood (ML) optimization problem\, and the ML estimator is defined to be the optimal solution of this problem. It is well known that the ML estimator exists when the data is non-separable\, but fails to exist when the data is separable. First-order methods are the algorithms of choice for solving large-scale instances of the logistic regression problem. We introduce a pair of condition numbers that measure the degree of non-separability or separability of a given dataset in the setting of binary classification\, and we study how these condition numbers relate to and inform the properties and the convergence guarantees of first-order methods. When the training data is non-separable\, we show that the degree of non-separability naturally enters the analysis and informs the properties and convergence guarantees of two standard first-order methods: steepest descent (for any given norm) and stochastic gradient descent. Expanding on the work of Bach\, we also show how the degree of non-separability enters into the analysis of linear convergence of steepest descent (without needing strong convexity)\, as well as the adaptive convergence of stochastic gradient descent. When the training data is separable\, first-order methods rather curiously have good empirical success\, which is not well understood in theory. In the case of separable data\, we demonstrate how the degree of separability enters into the analysis of l_2 steepest descent and stochastic gradient descent for delivering approximate-maximum-margin solutions with associated computational guarantees as well. This suggests that first-order methods can lead to statistically meaningful solutions in the separable case\, even though the ML solution does not exist.\n\nThis is joint work with Robert Freund and Rahul Mazumder.
URL:http://events.berkeley.edu/index.php/calendar/sn/pubaff.html?event_ID=121429&view=preview
SEQUENCE:0
CLASS:PUBLIC
CREATED:20181107T014524Z
LAST-MODIFIED:20181107T014524Z
X-MICROSOFT-CDO-BUSYSTATUS:BUSY
X-MICROSOFT-CDO-INSTTYPE:0
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-OWNERAPPTID:-1
END:VEVENT
END:VCALENDAR