Inference and Efficient Coding in Natural Auditory Scenes

Seminar | May 30 | 12-1:30 p.m. | 560 Evans Hall

Speaker: Wiktor Mlynarski, MIT

Sponsor: Helen Wills Neuroscience Institute

Processing of natural stimuli in sensory systems has traditionally been studied within two theoretical frameworks: probabilistic inference and efficient coding. Probabilistic inference specifies optimal strategies for learning about relevant properties of the environment from local and ambiguous sensory signals. Efficient coding provides a normative approach to studying the encoding of natural stimuli in resource-constrained sensory systems. By emphasizing different aspects of information processing, the two frameworks offer complementary approaches to the study of sensory computations. Here, I will discuss applications of these two perspectives to the problem of auditory scene analysis (ASA) in natural environments. During ASA, the auditory system combines local spectrotemporal measurements of sound encoded by the sensory periphery into a coherent representation of objects and events in the environment. First, I will show that human auditory grouping can be understood as probabilistic inference constrained by natural sound statistics. We analyzed pairwise co-occurrence statistics of simple acoustic features learned from natural sounds and demonstrated that humans perceptually group only those stimulus pairs that co-occur frequently in natural sounds. Second, I will present a statistical model of natural sounds motivated by efficient coding principles. The model learns a mid-level auditory code by capturing higher-order statistical dependencies among spectrotemporal primitives. Features learned by the model constitute a hypothesis about not-yet-observed intermediate-level neural representations, which may underlie perceptual grouping. Throughout the talk I will discuss similarities and differences between these two approaches, and I will conclude by proposing a unifying perspective on probabilistic inference and efficient coding in sensory systems.

Event Contact: nrterranova@berkeley.edu