Flexibility, Interpretability, and Scalability in Time Series Modeling: Neyman Seminar

Seminar | October 16 | 4-5 p.m. | 1011 Evans Hall

 Emily Fox, University of Washington

 Department of Statistics

We are increasingly faced with the need to analyze complex data streams; for example, sensor measurements from wearable devices have the potential to transform healthcare. Machine learning, and deep learning in particular, has brought many recent success stories to the analysis of complex sequential data sources, including speech, text, and video. However, these success stories involve a clear prediction goal combined with a massive (benchmark) training dataset. Unfortunately, many real-world tasks go beyond simple predictions, especially when models are used as part of a human decision-making process or medical intervention. Such complex scenarios necessitate notions of interpretability and measures of uncertainty. Furthermore, the datasets might be large in aggregate, yet contain limited data about any given individual, requiring parsimonious modeling approaches.

In this talk, we first discuss how sparsity-inducing penalties can be deployed on the weights of deep neural networks to enable interpretable structure learning, in addition to yielding more parsimonious models that better handle limited-data scenarios. We then turn to Bayesian dynamical modeling of individually sparse data streams, flexibly sharing information across streams while accounting for uncertainty. Finally, we discuss our recent body of work on scaling computations to massive time series, mitigating the bias of stochastic-gradient-based algorithms applied to sequential data sources. Throughout the talk, we provide analyses of activity, neuroimaging, genomic, housing, and homelessness data sources.
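To make the first idea concrete, here is a minimal, purely illustrative sketch (not the speaker's implementation) of one common sparsity-inducing penalty: a group lasso on the first-layer weights of a small network in PyTorch. The model class, layer sizes, and penalty strength below are hypothetical choices for illustration; the point is that driving an entire input's column of weights toward zero effectively removes that input series from the model, which is one route to interpretable structure learning.

    # Illustrative sketch only: group-lasso penalty on first-layer weights,
    # one group per input series. Hypothetical model and hyperparameters.
    import torch
    import torch.nn as nn

    class SparseInputMLP(nn.Module):
        def __init__(self, n_inputs, hidden=32):
            super().__init__()
            self.first = nn.Linear(n_inputs, hidden)  # penalized layer
            self.rest = nn.Sequential(nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, x):
            return self.rest(self.first(x))

        def group_lasso_penalty(self):
            # Column j of first-layer weights is the group for input j;
            # summing the column norms encourages whole inputs to drop out.
            return self.first.weight.norm(dim=0).sum()

    model = SparseInputMLP(n_inputs=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    lam = 1e-2  # penalty strength (hypothetical value)

    x, y = torch.randn(64, 10), torch.randn(64, 1)  # toy data
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y) + lam * model.group_lasso_penalty()
        loss.backward()
        opt.step()

Note that a plain (sub)gradient optimizer such as Adam will typically drive group norms only near zero; obtaining exactly zero groups generally requires a proximal update or post-hoc thresholding.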

Bio: Emily Fox is an Associate Professor in the Paul G. Allen School of Computer Science & Engineering and the Department of Statistics at the University of Washington, and is the Amazon Professor of Machine Learning. Currently, she is also Director of Health AI at Apple. She received her Ph.D. in EECS from MIT; her dissertation was awarded the Leonard J. Savage Thesis Award in Applied Methodology and the MIT EECS Jin-Au Kong Outstanding Doctoral Thesis Prize. She has also received a Presidential Early Career Award for Scientists and Engineers (2017), a Sloan Research Fellowship (2015), an ONR Young Investigator Award (2015), and an NSF CAREER Award (2014). Her research interests are in large-scale dynamic modeling and computations, with a focus on Bayesian methods and applications in health and computational neuroscience.

Berkeley, CA 94720 | (510) 642-2781