Dissertation Talk: Interpretable Machine Learning with Applications in Neuroscience

Seminar | April 20 | 1-2 p.m. | Cory Hall, 540AB DOP Center

Reza Abbasi Asl, UC Berkeley, Department of EECS

Berkeley Laboratory of Information and System Sciences

In the last decade, research in machine learning has focused largely on developing algorithms and models with remarkably high predictive capability. Models such as convolutional neural networks (CNNs) have achieved state-of-the-art predictive performance on many tasks in computer vision and autonomous driving, and, through transfer learning, in areas such as computational neuroscience. However, interpreting these models remains a challenge, primarily because of the large number of parameters involved.

In this talk, we propose and investigate two frameworks, based on (1) stability and (2) compression, for building more interpretable machine learning models. Both frameworks are demonstrated in the context of a computational neuroscience study. First, we introduce DeepTune, a stability-driven visualization framework for CNN-based models. We use DeepTune to characterize biological neurons in area V4 of the primate visual cortex, a region that is notoriously difficult to model. These visualizations uncover the diversity of stable patterns encoded by V4 neurons. Second, we introduce CAR, a framework for structural compression of CNNs based on pruning filters. CAR increases the interpretability of CNNs while retaining the diversity of filters in the convolutional layers. CAR-compressed CNNs yield a new set of accurate models of V4 neurons with much simpler structures. Our results lend support, to a certain extent, to the resemblance of these CNNs to the primate brain.
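To give a flavor of structural compression by filter pruning, the sketch below removes whole convolutional filters from a layer's weight tensor. This is only an illustrative stand-in, not the CAR algorithm itself: CAR ranks filters by their contribution to model accuracy, whereas this toy example uses a simple L1-norm importance score, and the function name `prune_filters` and the `keep_ratio` parameter are hypothetical.

```python
import numpy as np

def prune_filters(weights, keep_ratio=0.5):
    """Rank conv filters by L1 norm and keep the top fraction.

    weights: array of shape (n_filters, channels, h, w).
    Returns the retained filters and their original indices.
    """
    # Importance score per filter: sum of absolute weights (L1 norm).
    # (CAR instead scores filters by the accuracy drop when removed.)
    scores = np.abs(weights).sum(axis=(1, 2, 3))
    n_keep = max(1, int(len(scores) * keep_ratio))
    # Indices of the highest-scoring filters, kept in original order.
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return weights[keep], keep

rng = np.random.default_rng(0)
layer = rng.normal(size=(8, 3, 3, 3))  # toy conv layer: 8 filters
pruned, kept = prune_filters(layer, keep_ratio=0.25)
print(pruned.shape)  # (2, 3, 3, 3): a structurally smaller layer
```

Pruning entire filters, rather than individual weights, shrinks the layer's actual shape, which is what makes the compressed network simpler to inspect.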

 abbasi@berkeley.edu, 510-612-5624