Discovering Structure in Neural and Behavioral Data
Scott Linderman, PhD
Department of Statistics
Columbia University

Abstract: New recording technologies are transforming neuroscience, allowing us to precisely quantify neural activity, sensory stimuli, and natural behavior. How can we discover simplifying structure in these high-dimensional data and relate these domains to one another? I will present my work on developing statistical tools and machine learning methods to answer this question. With two examples, I will show how we can leverage prior knowledge and theories to build models that are flexible enough to capture complex data yet interpretable enough to provide new insight. Alongside these examples, I will discuss the Bayesian inference algorithms I have developed to fit such models at the scales required by modern neuroscience.

First, I will develop models to study global brain states and recurrent dynamics in the neural activity of C. elegans. Then, I will show how similar ideas apply to data that, on the surface, seem very different: movies of freely behaving larval zebrafish. In both cases, these models reveal how complex patterns may arise by switching between simple states, and how state changes may be influenced by internal and external factors.

These examples illustrate a framework for harnessing recent advances in machine learning, statistics, and neuroscience. Prior knowledge and theory serve as the main ingredients for interpretable models, machine learning methods lend additional flexibility for complex data, and new statistical inference algorithms provide the means to fit these models and discover structure in neural and behavioral data.

Host: Stanley C. Froehner
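To illustrate the central idea of the abstract, that complex patterns can arise by switching between simple states, here is a minimal simulation sketch of a switching linear dynamical system. All parameters (two discrete states, 2-D rotational dynamics, a sticky transition matrix) are illustrative assumptions for this note, not details taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration: K discrete states, each with
# its own simple linear dynamics on a D-dimensional continuous latent.
K, D, T = 2, 2, 200
P = np.array([[0.95, 0.05],        # "sticky" transition matrix: states persist
              [0.05, 0.95]])
angles = [0.1, -0.1]               # each state rotates the latent differently
A = [0.99 * np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]]) for a in angles]

z = np.zeros(T, dtype=int)         # discrete state sequence
x = np.zeros((T, D))               # continuous latent trajectory
x[0] = rng.standard_normal(D)
for t in range(1, T):
    z[t] = rng.choice(K, p=P[z[t - 1]])                     # switch states
    x[t] = A[z[t]] @ x[t - 1] + 0.05 * rng.standard_normal(D)

print(x.shape, z.shape)
```

Each state's dynamics are trivially simple (a damped rotation), yet the switching between them yields a trajectory with rich, non-stationary structure; fitting such a model to data is what requires the Bayesian inference algorithms the abstract describes.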
Mechanisms underlying flexible information flow across the brain
Karel Svoboda, Ph.D.
Director, Allen Institute

Abstract: Neural computation and behavior are produced by shifting configurations of multi-regional neural networks, implemented by dynamic coupling between brain regions. We...