Computational experiments with two neuro-inspired abstractions: Hebbian learning and spike timing information

Seminar | June 14 | 12-1:30 p.m. | 560 Evans Hall

 Upamanyu Madhow, UCSB

 Helen Wills Neuroscience Institute

In this talk, we discuss early work on two different neuro-inspired computational abstractions. In the first, we investigate flavors of competitive Hebbian learning for bottom-up training of deep convolutional neural networks. The resulting sparse neural codes are competitive with layered autoencoders on standard image datasets. Unlike standard training, which optimizes a cost function, our approach directly recruits and prunes neurons to promote desirable properties such as sparsity and a distributed representation of information.

In the second, we consider a minimalistic model for exploring the information carried by spike timing: a reservoir model that encodes input patterns into sparse neural codes by exploiting variations in axonal delays. The model translates the polychronous groups identified by Izhikevich into codewords on which standard vector operations can be performed. For an appropriate choice of parameters, the distance properties of the code are similar to those of good random codes, which indicates that the approach may provide a robust memory for timing patterns.
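As a rough illustration of the first abstraction, the sketch below implements one common flavor of competitive Hebbian learning: a winner-take-all competition followed by an Oja-style Hebbian update, on a single fully connected layer with toy random inputs. The layer sizes, learning rate, and data are illustrative assumptions; the method described in the talk (recruiting and pruning neurons in deep convolutional networks) is more elaborate.

    # Minimal sketch of winner-take-all competitive Hebbian learning.
    # All parameters and the random "data" are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_neurons, lr = 64, 16, 0.05
    W = rng.normal(scale=0.1, size=(n_neurons, n_inputs))

    for _ in range(1000):
        x = rng.normal(size=n_inputs)        # stand-in for an image patch
        x /= np.linalg.norm(x) + 1e-12       # normalize the input
        winner = np.argmax(W @ x)            # competition: best-matching neuron
        y = W[winner] @ x                    # winner's response
        # Oja-style Hebbian update: move the winner toward the input,
        # with a decay term that keeps the weight vector bounded
        W[winner] += lr * y * (x - y * W[winner])

Because only the winning neuron updates on each input, different neurons specialize to different input clusters, yielding the kind of sparse, distributed code the abstract refers to.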
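For the second abstraction, here is a minimal sketch of how variable axonal delays can map spike-timing patterns to sparse binary codewords that support standard vector operations such as Hamming distance. The connectivity, delay range, and coincidence window are illustrative assumptions, not parameters from the talk, and the coincidence rule is only loosely in the spirit of polychronization.

    # Minimal sketch: encode spike-timing patterns into sparse binary
    # codewords via random axonal delays. Parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n_inputs, n_neurons, k, window = 8, 200, 3, 2.0
    lines = rng.integers(0, n_inputs, size=(n_neurons, k))  # each neuron taps k input lines
    delays = rng.uniform(0, 10, size=(n_neurons, k))        # axonal delays (ms)

    def encode(spike_times):
        # One spike time per input line -> binary codeword: a reservoir
        # neuron "fires" iff its delayed inputs arrive nearly together.
        arrivals = spike_times[lines] + delays
        spread = arrivals.max(axis=1) - arrivals.min(axis=1)
        return (spread < window).astype(int)

    a = encode(rng.uniform(0, 10, size=n_inputs))
    b = encode(rng.uniform(0, 10, size=n_inputs))
    print("sparsity:", a.mean(), "Hamming distance:", int(np.abs(a - b).sum()))

Distinct timing patterns activate largely disjoint sets of coincidence detectors, so codewords for different inputs tend to be far apart in Hamming distance, mirroring the random-code-like distance properties mentioned above.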

 Event contact: nrterranova@berkeley.edu