Scientific Computing and Matrix Computations Seminar: Analysis of a Two-Layer Neural Network via Displacement Convexity

Seminar: Scientific Computing: CS | April 24 | 2-3 p.m. | 380 Soda Hall

 Marco Mondelli, Stanford U.

 Electrical Engineering and Computer Sciences (EECS)

Fitting a function using linear combinations of a large number N of
"simple" components is one of the most fruitful ideas in statistical
learning. This idea lies at the core of a variety of methods, from
two-layer neural networks to kernel regression to boosting. In
general, the resulting risk minimization problem is nonconvex and is
solved by gradient descent or its variants. Unfortunately, little is
known about the global convergence properties of these approaches.
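
Schematically (the notation here is chosen for illustration; the talk does
not commit to a specific form), one fits

    \hat f_N(x) = \sum_{i=1}^{N} a_i \, \sigma(x; \theta_i),

where each \sigma(\cdot; \theta_i) is a "simple" component, by minimizing an
empirical risk such as

    R_N(a, \theta) = \frac{1}{n} \sum_{j=1}^{n} \left( y_j - \hat f_N(x_j) \right)^2.

Even when this objective is convex in the coefficients a_i, it is in general
nonconvex in the component parameters \theta_i, which is the source of the
difficulty mentioned above.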

We consider the problem of learning a concave function f on a compact
convex domain, using linear combinations of "bump-like" components
(neurons). The parameters to be fitted are the centers of N bumps, and
the resulting empirical risk minimization problem is highly nonconvex.
We prove that, in the limit in which the number of neurons diverges, the
evolution of gradient descent converges to a Wasserstein gradient flow
in the space of probability distributions. Furthermore, when the bump
width \delta tends to 0, this gradient flow has a viscous porous medium
equation as its limit. Remarkably, the cost function optimized by this
gradient flow exhibits a special property known as displacement
convexity, which implies exponential convergence rates for large N and
vanishing \delta. Surprisingly, this asymptotic theory appears to
capture well the behavior for moderate values of \delta and N.
Explaining this phenomenon, and understanding the dependence on \delta and
N in a quantitative manner, remains an outstanding challenge.
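
As a rough illustration of this setup (a toy sketch, not the construction
analyzed in arXiv:1901.01375), the NumPy snippet below fits a concave target
on [-1, 1] with a mixture of N Gaussian bumps of width \delta, running plain
gradient descent on the bump centers only. The Gaussian kernel, quadratic
target, grid discretization, and step size are all illustrative assumptions.

    import numpy as np

    # Toy sketch of the setup above (illustrative assumptions throughout):
    # fit a concave target on [-1, 1] with a mixture of N Gaussian bumps of
    # width delta, training only the bump centers by gradient descent on the
    # empirical squared risk over a fixed grid of points.
    rng = np.random.default_rng(0)
    N, delta, lr, steps = 40, 0.1, 0.1, 2000
    x = np.linspace(-1.0, 1.0, 400)             # grid standing in for the data
    target = 0.75 * (1.0 - x**2)                # a concave function on [-1, 1]

    w = rng.uniform(-1.0, 1.0, size=N)          # bump centers: the fitted parameters
    amp = 1.0 / (delta * np.sqrt(2.0 * np.pi))  # each bump integrates to 1

    for _ in range(steps):
        diff = x[None, :] - w[:, None]                     # shape (N, len(x))
        bumps = amp * np.exp(-diff**2 / (2.0 * delta**2))  # bump-like neurons
        resid = bumps.mean(axis=0) - target                # (1/N) sum_i K_delta(x - w_i) - f(x)
        # Gradient of the mean squared risk with respect to each center w_i.
        grad_w = (2.0 / (N * x.size)) * np.sum(resid * bumps * diff, axis=1) / delta**2
        w -= lr * grad_w

    final = (amp * np.exp(-(x[None, :] - w[:, None])**2 / (2.0 * delta**2))).mean(axis=0)
    print("final mean squared risk:", np.mean((final - target)**2))

In this toy picture, the empirical distribution of the centers w_i plays the
role of the probability distribution whose evolution, for diverging N, the
abstract describes via a Wasserstein gradient flow.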

Based on joint work with Adel Javanmard and Andrea Montanari [arXiv:1901.01375]

 mgu@berkeley.edu, 510-642-3145