Wednesday, September 6, 2017


A Capacity Scaling Law for Artificial Neural Networks

Seminar: Seminars of interest | September 6 | 12-1 p.m. | 560 Evans Hall


Gerald Friedland, UC Berkeley

Helen Wills Neuroscience Institute


In this talk, we derive two critical numbers that quantify the capabilities of artificial neural networks with gating functions, such as sign, sigmoid, or rectified linear units. First, we derive the upper limit of the Vapnik-Chervonenkis (VC) dimension of a network with a binary output layer, which is the theoretical limit for perfect fitting of the training data. Second, we derive what we call the MacKay dimension of the network. This is a theoretical limit indicating necessary catastrophic forgetting, i.e., the upper limit for most uses of the network. Our derivation of the capacity is embedded into a Shannon communication model, which allows the capacities of neural networks to be measured in bits. We then compare our theoretical derivations with experiments using different network configurations and depths, diverse neural network implementations, varying activation functions, and several learning algorithms to confirm our upper bound. The result is that the capacity of a fully connected perceptron network scales strictly linearly in the number of weights.

Paper: https://arxiv.org/abs/1708.06019
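
The experimental side of the talk, checking the derived upper bound by seeing how much randomly labeled data networks of increasing size can fit perfectly, can be illustrated with a small memorization test. The sketch below is an assumption-laden illustration, not the authors' protocol: it uses scikit-learn's MLPClassifier, a single hidden layer, and arbitrary point counts and optimizer settings, and simply records the largest random-label set each network memorizes so the trend can be compared against the weight count.

# Illustrative memorization test (assumed setup, not the paper's protocol):
# train fully connected networks of growing width on randomly labeled
# points and record the largest set each fits perfectly. If capacity is
# linear in the number of weights, that size should grow roughly in
# proportion to the weight count.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_inputs = 10  # illustrative input dimension

def num_weights(n_in, n_hidden):
    # weights and biases of a one-hidden-layer network with a binary output unit
    return (n_in * n_hidden + n_hidden) + (n_hidden + 1)

def max_memorized(n_hidden, max_points=400, step=20):
    # largest number of randomly labeled points fit with 100% training accuracy
    best = 0
    for n in range(step, max_points + 1, step):
        X = rng.standard_normal((n, n_inputs))
        y = rng.integers(0, 2, size=n)  # random labels force pure memorization
        clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), activation='relu',
                            max_iter=5000, tol=1e-6, random_state=0)
        clf.fit(X, y)
        if clf.score(X, y) == 1.0:
            best = n
        else:
            break
    return best

for h in (2, 4, 8, 16):
    print(f"hidden={h:3d}  weights={num_weights(n_inputs, h):4d}  "
          f"memorized={max_memorized(h)} points")

The loop stops at the first set size a network fails to fit, which keeps the sketch short; a more careful experiment would repeat each size over several random draws and vary depth and activation functions as the abstract describes.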


nrterranova@berkeley.edu