Understand The Brain Using Interpretable Machine Learning Models
Anqi Wu, Ph.D.
Postdoctoral Research Scientist, Grossman Center for the Statistics of the Mind, Columbia University
Seminar abstract: Computational neuroscience is a burgeoning field embracing exciting scientific questions, a deluge of data, and an imperative demand for quantitative models. These opportunities promote the advancement of data-driven machine learning methods for understanding our brains deeply. My work lies in this interdisciplinary field and spans the development of neuroscience-motivated machine learning for neural and behavioral analysis in both animal and human studies. In this talk, I will show how to incorporate neuro-tailored assumptions into probabilistic modeling to discover interpretable structures. I will first present my work on Bayesian latent models for high-dimensional multi-neuron recordings across multiple cortical areas, which yield intriguing insights. Next, I will introduce a structured prior that integrates prior knowledge about fMRI BOLD signals, enhances probabilistic decoding for fMRI analysis, and discovers interpretable brain maps. Finally, I will discuss a novel probabilistic graphical model for animal pose tracking and interpretable downstream behavioral analyses. Together, these examples illustrate how probabilistic models guided by neuroscience assumptions can be applied to diverse neural and behavioral data.
Mechanisms underlying flexible information flow across the brain
Karel Svoboda, Ph.D.
Director, Allen Institute
Abstract: Neural computation and behavior are produced by shifting configurations of multi-regional neural networks, implemented by dynamic coupling between brain regions. We...