Laurent El Ghaoui - Lifted Neural Nets: Beyond the Grip of Stochastic Gradients in Deep Learning

Seminar | January 29 | 3:30-5 p.m. | 3108 Etcheverry Hall

Laurent El Ghaoui, UC Berkeley, Department of Industrial Engineering & Operations Research (IEOR)

Abstract: We describe a novel family of models of multi-layer feedforward neural networks in which the activation functions are encoded via penalties in the training problem. The new framework allows algorithms such as block-coordinate descent to be applied, where each step consists of simple (no hidden layer) supervised learning problems that are parallelizable across layers, across data points, or both. Although the training problem has many more variables than that of a standard network, preliminary experiments indicate that the proposed models provide excellent initial guesses for standard networks, and could become competitive with state-of-the-art neural networks in terms of both performance and speed. In addition, the lifted models open avenues for interesting extensions such as network topology optimization, input matrix completion, and robustness against noisy inputs.
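To make the idea concrete, here is a minimal sketch of one way such a "lifted" formulation can be trained by block-coordinate descent, for a single hidden layer with a ReLU-like activation. The hidden activations X1 are treated as optimization variables, the activation is encoded as a nonnegativity constraint plus a quadratic penalty, and each block update reduces to a simple least-squares problem. All names, the penalty weight `lam`, the ridge term `eps`, and the projected X1-step are illustrative assumptions, not the specific formulation used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: n samples, input dim d, hidden dim h, output dim k.
n, d, h, k = 200, 5, 8, 3
X0 = rng.standard_normal((d, n))   # inputs
Y = rng.standard_normal((k, n))    # targets

lam, eps = 1.0, 1e-6  # penalty weight and ridge regularizer (assumed values)

# Lifted variables: hidden activations X1 are free variables, and the
# ReLU activation is encoded as X1 >= 0 plus the quadratic penalty
# lam * ||X1 - W1 @ X0||^2 instead of the hard constraint X1 = relu(W1 @ X0).
W1 = rng.standard_normal((h, d))
X1 = np.maximum(W1 @ X0, 0.0)
W2 = rng.standard_normal((k, h))

def objective(W1, X1, W2):
    fit = np.sum((Y - W2 @ X1) ** 2)          # output fitting term
    pen = lam * np.sum((X1 - W1 @ X0) ** 2)   # activation penalty
    return fit + pen

obj0 = objective(W1, X1, W2)
for _ in range(20):
    # W1-step: ridge least squares mapping X0 -> X1 (no hidden layer).
    W1 = X1 @ X0.T @ np.linalg.inv(X0 @ X0.T + eps * np.eye(d))
    # W2-step: ridge least squares mapping X1 -> Y (no hidden layer).
    W2 = Y @ X1.T @ np.linalg.inv(X1 @ X1.T + eps * np.eye(h))
    # X1-step: unconstrained quadratic minimizer, then projection onto
    # X1 >= 0 (a simple heuristic; the exact step in the talk may differ).
    A = W2.T @ W2 + lam * np.eye(h)
    X1 = np.maximum(np.linalg.solve(A, W2.T @ Y + lam * W1 @ X0), 0.0)

obj = objective(W1, X1, W2)
print(f"objective: {obj0:.2f} -> {obj:.2f}")
```

Each weight update is an independent regression, which is what makes the steps parallelizable across layers and across data points as the abstract describes.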