Dissertation Talk: How the brain explores and consolidates activity patterns to learn Brain-Machine Interface control

Presentation | December 12, 2017 | 2-3:30 p.m. | Cory Hall, Hogan Room / 521


Vivek Ravindra Athalye, Electrical Engineering

Electrical Engineering and Computer Sciences (EECS)


The Brain-Machine Interface (BMI) is an emerging technology that directly translates neural activity into control signals for effectors such as computers, prosthetics, or even muscles. Work over the last decade has shown that high-performance BMIs depend not only on machine learning to adapt the parameters used to decode neural activity, but also on the brain learning to reliably produce desired neural activity patterns. How the brain learns neuroprosthetic skill de novo is not well understood, and understanding it could inform the design of next-generation BMIs. We view BMI learning from the brain’s perspective as a reinforcement learning problem: the brain must initially explore activity patterns, observe their consequences on the prosthetic, and finally consolidate the activity patterns that lead to desired outcomes (a toy sketch of this loop follows the questions below). In this talk, I will address three questions about how the brain learns neuroprosthetic skill:

1) How do task-relevant neural populations coordinate during activity exploration and consolidation?
2) How can the brain select activity patterns to consolidate? Does the pairing of neural activity patterns with neural reinforcement signals drive activity consolidation?
3) Do the mechanisms of neural activity pattern consolidation generalize across cortex, even to visual cortex?
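
To make the explore-observe-consolidate loop described above concrete, here is a toy reinforcement-learning sketch. It is not a model from this dissertation: an agent repeatedly samples from a hypothetical discrete set of candidate activity patterns, is reinforced when the sampled pattern happens to drive the prosthetic to the target, and gradually concentrates its choices on reinforced patterns. All names and numbers are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_patterns = 8                    # hypothetical candidate activity patterns
    values = np.zeros(n_patterns)     # learned value of each pattern
    target_pattern = 3                # the pattern that happens to reach the target
    alpha, temperature = 0.2, 0.5     # learning rate and exploration temperature

    for trial in range(500):
        # Explore: sample a pattern from a softmax over current values
        probs = np.exp(values / temperature)
        probs /= probs.sum()
        choice = rng.choice(n_patterns, p=probs)

        # Observe the consequence on the prosthetic (1 = reached target, 0 = missed)
        reward = 1.0 if choice == target_pattern else 0.0

        # Consolidate: move the chosen pattern's value toward the observed outcome
        values[choice] += alpha * (reward - values[choice])

    print(np.argmax(values))  # after learning, choices concentrate on the reinforced pattern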

I will present the use of Factor Analysis to analyze neural coordination during BMI control by partitioning neural activity variance into two sources: private inputs to each neuron, which drive independent, high-dimensional variance, and shared inputs, which drive multiple neurons simultaneously and produce low-dimensional covariance.
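As a rough illustration of this partition (not the analysis code from the dissertation), the sketch below fits Factor Analysis with scikit-learn to a hypothetical trials-by-neurons matrix of binned spike counts and computes, for each neuron, the fraction of variance attributable to shared, low-dimensional inputs versus private, independent inputs. The data and dimensions are placeholders.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical data: 500 trials x 30 neurons of binned spike counts
    rng = np.random.default_rng(0)
    spike_counts = rng.poisson(5.0, size=(500, 30)).astype(float)

    # Fit a low-dimensional shared-factor model
    fa = FactorAnalysis(n_components=5)
    fa.fit(spike_counts)

    # Shared variance per neuron: diagonal of W^T W, where W holds the factor loadings
    shared = np.sum(fa.components_ ** 2, axis=0)
    # Private variance per neuron: independent noise variance fitted by the model
    private = fa.noise_variance_

    # Fraction of each neuron's variance explained by shared inputs
    shared_fraction = shared / (shared + private)
    print(shared_fraction.mean())

In this decomposition, the shared component captures the low-dimensional covariance described above, while the diagonal noise term captures each neuron's private, high-dimensional variability.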

We found that initially, each neuron explores activity patterns independently. Over days of learning, the population’s covariance increases, and a manifold emerges that aligns with the decoder; this low-dimensional activity drives skillful control. Next, we found that cortical activity patterns that causally lead to midbrain dopaminergic neural reinforcement are consolidated. This provides evidence for a “neural law of effect,” following Thorndike’s behavioral law of effect, which states that behaviors leading to reinforcement are repeated. Finally, I will present results showing that basal ganglia-dependent mechanisms of neural exploration and consolidation generalize even to visual cortex, an area of the brain primarily thought to represent visual stimuli. These results contribute to our understanding of how the brain solves the reinforcement learning problem of learning neuroprosthetic skill.


510-320-2199