
Enabling big neuroscience through computational advances

Adam Charles, Johns Hopkins University
Monday, April 8, 2024 - 12:00pm
See description for connection details

Connection details (including Zoom room) here: https://indico.cern.ch/event/1365683/

The study of neural computation is entering an unprecedented era of large-scale recordings at the micron level. Making sense of these new data requires 1) methods to process and analyze these high-dimensional datasets and 2) interpretable models of the resulting high-dimensional time series. In this talk I will cover recent advances that leverage data models based on latent sparsity and low-dimensionality to tackle key challenges in both domains. First, I will discuss ongoing work in multi-photon data analysis. This work seeks to expand our capabilities to extract scientifically rich information from large-scale data of sub-micron targets that represent how circuits compute and how those computations adapt over time. Specifically, I will discuss recent machine learning-based image enhancement for tracking synaptic strength in vivo at scale, and a morphology-independent image segmentation algorithm for identifying geometrically complex fluorescing objects (e.g., in dendritic and wide-field imaging).

Finally, I will discuss the analysis challenges of inferring meaningful representations from the brain-wide activity made available by these imaging advances. Specifically, brain-wide data represent many parallel and distributed computations. I will discuss recent work building on the intuition of the "neural data manifold", and present a decomposed linear dynamical systems (dLDS) model that can capture the nonlinear and non-stationary properties of the neural trajectories along this manifold. dLDS learns a concise model of such dynamics by decomposing the system into several independent, overlapping components, each interpretable as a linear system. I will demonstrate how this model finds meaningful trajectories both in synthetic data and in "whole-brain" C. elegans imaging.
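
To make the dLDS construction concrete, below is a minimal simulation sketch of the kind of model the abstract describes: the latent state evolves under a time-varying, sparsely weighted combination of a few fixed linear operators, and the observed activity is a linear readout of that state. All names, dimensions, and operator choices here are illustrative assumptions, not the speaker's implementation.

    # Illustrative sketch of the decomposed linear dynamical systems (dLDS) idea:
    # the latent state x_t evolves as x_t = (sum_k c_{t,k} f_k) x_{t-1}, where the
    # f_k are fixed linear operators and the coefficients c_{t,k} are sparse in time.
    # Dimensions and operators below are toy assumptions for demonstration only.
    import numpy as np

    rng = np.random.default_rng(0)
    d, K, T = 3, 2, 500   # latent dimension, number of operators, time steps

    # Two fixed linear operators ("dynamics dictionary"): a slow rotation and a mild decay.
    theta = 0.05
    f1 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    f2 = 0.98 * np.eye(d)
    F = [f1, f2]

    # Sparse, slowly switching coefficients: one operator active at a time, so each
    # epoch of the trajectory is interpretable as a single linear system.
    c = np.zeros((T, K))
    c[:T // 2, 0] = 1.0
    c[T // 2:, 1] = 1.0

    # Simulate the latent trajectory.
    x = np.zeros((T, d))
    x[0] = rng.standard_normal(d)
    for t in range(1, T):
        A_t = sum(c[t, k] * F[k] for k in range(K))
        x[t] = A_t @ x[t - 1]

    # Observed "neural" activity as a noisy linear readout of the latent state.
    D = rng.standard_normal((20, d))                      # observation matrix, 20 channels
    y = x @ D.T + 0.01 * rng.standard_normal((T, 20))
    print(y.shape)  # (500, 20): a non-stationary series built from two linear systems

Fitting such a model to real recordings (learning F, c, and D jointly, with sparsity on c) is the harder inference problem the talk addresses; the sketch only shows the generative structure that makes each recovered component interpretable as a linear system.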

Biography:
Adam Charles is an Assistant Professor of Biomedical Engineering (BME) at The Johns Hopkins University, with affiliations in the Department of Neuroscience, the Center for Imaging Science (CIS), the Kavli Neuroscience Discovery Institute (NDI), and the Mathematical Institute for Data Science (MINDS). His lab focuses on machine learning and signal processing for neural imaging, data analysis, and other applications (including remote sensing and theoretical/computational neuroscience). They develop tools based on probabilistic and low-dimensional modeling to create the next generation of methods needed to acquire and interpret complex neural signals. Such methods are vital for advancing our understanding of the computations the brain performs and of how we as humans experience the world.
