Oxyopia - Graduate Student Seminar

Seminar | May 7 | 12-1:30 p.m. | 489 Minor Hall

Paul Cullen, John Flanagan Lab; Brian Cheung, Bruno Olshausen Lab

Helen Wills Neuroscience Institute

Paul Cullen
John Flanagan Lab
Title: The Secret Lives of Retinal Astrocytes
Abstract: The study of glia – the support cells of the central nervous system – has come a long way since Rudolf Virchow described a connective tissue of the brain, which he termed ‘nervenkitt’, in 1856. Rather than a passive scaffolding for neurons (the word ‘glia’ means glue in Greek), these cells are responsible for a dizzying array of tasks in the central nervous system. Astrocytes, one particular type of glia, are a broad and heterogeneous group increasingly studied for their potential role in neurodegeneration. However, the
tools to study these essential cells lag far behind those developed for their neuronal partners. Recent developments in sequencing technology have led to the widespread adoption of RNA-seq, a massively parallel approach for measuring the relative expression of genes within a population of cells. Although populations of brain astrocytes have been studied using this technology, to our knowledge those in the retina never have been. I will present an overview of this exciting technology, how we intend to use it to study the response of retinal astrocytes in a powerful in vivo model of ocular hypertension, and the challenges this approach presents.
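For readers unfamiliar with what RNA-seq measures, a minimal sketch of how relative expression can be computed from raw read counts. All counts, gene rows, and sample groupings below are invented for illustration; they are not data from this work, and real pipelines use dedicated tools rather than this bare normalization:

```python
import numpy as np

# Hypothetical raw read counts (genes x samples):
# two control samples followed by two ocular-hypertension samples.
counts = np.array([
    [100, 120, 400, 380],   # gene up-regulated under hypertension
    [500, 520, 510, 490],   # unchanged gene
    [ 80,  90,  20,  25],   # down-regulated gene
    [ 10,  12,  11,   9],   # low-expression gene
], dtype=float)

# Counts-per-million normalization corrects for per-sample sequencing depth.
cpm = counts / counts.sum(axis=0) * 1e6

# Relative expression: log2 fold change of hypertension vs control means
# (a +1 pseudocount avoids log of zero).
ctrl = cpm[:, :2].mean(axis=1)
hyp = cpm[:, 2:].mean(axis=1)
log2fc = np.log2((hyp + 1) / (ctrl + 1))
print(np.round(log2fc, 2))
```

With these toy counts, the first gene shows a positive log2 fold change and the third a negative one, which is the kind of per-gene relative signal an RNA-seq comparison of astrocyte populations would yield at scale.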

Brian Cheung
Bruno Olshausen Lab
Title: Unsupervised Learning in Biological Neural Networks
Abstract: Supervised learning has proven extremely effective for many problems in machine learning where large amounts of labeled training data are available. However, its dependence on large labeled datasets and on non-local updates makes it unclear how similar algorithms might function in the brain. Conversely, biological neural networks are extremely effective at building rich, high-utility representations of sensory input with little or no labeled training data. However, unsupervised representation learning in artificial neural networks lags far behind both biological networks and supervised artificial networks. One explanation for our failure to
develop effective unsupervised learning rules is that the objective functions we propose are mismatched to the behaviorally relevant tasks for which we wish to use the learned representation. We optimize objectives such as log likelihood, sparsity, or reconstruction error, and then hope that a learned representation exposing high-level features of sensory data relevant to survival will result purely as a side effect. Rather than proposing a hand-designed update rule, in this work we use supervised training to play the role of evolution in discovering an update rule for biological neural networks. We meta-learn a local learning rule that only
depends on bottom-up input from the pre-synaptic neuron and top-down feedback from the post-synaptic neuron. By re-casting unsupervised learning as meta-learning, we directly optimize an unsupervised learning rule with respect to its utility. We argue that this is a natural approach to unsupervised learning in the context
of biology. Our work offers a preliminary investigation of unsupervised learning rules meta-learned using this novel perspective.
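As a loose illustration of this framing (not the lab's actual method or code), the sketch below treats a parameterized Hebbian-style local rule as the thing being meta-learned: an outer search over the rule's coefficients stands in for evolution, and each candidate rule is scored by the supervised utility of the representation it learns without labels. All functions, data, and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=200):
    # Toy labeled "sensory input": two Gaussian clusters in 2-D.
    y = rng.integers(0, 2, n)
    x = rng.normal(0, 0.5, (n, 2)) + np.where(y[:, None] == 1, 2.0, -2.0)
    return x, y

def apply_rule(theta, x, steps=50, lr=0.05):
    """Train one weight vector with a parameterized local rule:
    dw = theta0*pre*post + theta1*pre + theta2*post. No labels are used."""
    w = rng.normal(0, 0.1, 2)
    for t in range(steps):
        pre = x[t % len(x)]           # bottom-up pre-synaptic input
        post = np.tanh(pre @ w)       # post-synaptic activity (feedback proxy)
        w += lr * (theta[0] * pre * post + theta[1] * pre + theta[2] * post)
        w /= max(np.linalg.norm(w), 1e-8)   # keep the weights bounded
    return w

def readout_accuracy(w, x, y):
    """Supervised 'utility': how well the learned feature separates the classes."""
    pred = (x @ w > 0).astype(int)
    return max(np.mean(pred == y), np.mean(pred != y))  # sign-invariant

# Outer loop: random search over rule coefficients plays the role of evolution.
x, y = make_data()
best_theta, best_acc = None, 0.0
for _ in range(200):
    theta = rng.normal(0, 1, 3)
    acc = readout_accuracy(apply_rule(theta, x), x, y)
    if acc > best_acc:
        best_theta, best_acc = theta, acc
print("best rule coefficients:", np.round(best_theta, 2), "utility:", best_acc)
```

The key design point mirrored here is that the inner loop sees only pre- and post-synaptic quantities, while labels enter only the outer loop's evaluation; the actual work optimizes the rule by gradient-based meta-learning rather than random search.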