Intelligence from Instability: A Dynamical-Systems View of Neural Computation

Dmitri Shklovskii, Flatiron Institute, Simons Foundation and Neuroscience Institute, NYU
PAA A102

Brains and artificial neural networks are often modeled as collections of static nonlinearities, yet biological neurons operate in a fundamentally dynamical world shaped by instability, whose control relies on continual prediction. In this colloquium, I present a physics-motivated framework in which neurons act as detectors and controllers of unstable directions in high-dimensional dynamics. Starting from local linearization near saddle points, I show that short-time past–future correlations identify the modes whose variance grows most rapidly, providing a data-driven notion of predictive structure. Projecting activity onto these unstable modes maximizes predictive information, connecting neural computation to information-theoretic principles while extending them to time-irreversible regimes.
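The idea that short-time past–future correlations pick out the fastest-growing modes can be sketched numerically. The toy system, parameters, and variable names below are illustrative assumptions, not taken from the talk: a two-dimensional linear map with one unstable eigendirection (eigenvalue 1.05) and one stable one (eigenvalue 0.5), where the top singular vector of the one-step past–future cross-covariance recovers the unstable direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized dynamics near a saddle: one unstable mode
# (eigenvalue 1.05) and one stable mode (eigenvalue 0.5).
A = np.array([[1.05, 0.0],
              [0.0,  0.5]])

# Simulate many short trajectories, restarting from noise so that the
# unstable mode's variance grows but stays finite.
T, n_traj = 20, 2000
past, future = [], []
for _ in range(n_traj):
    x = rng.standard_normal(2)
    traj = [x]
    for _ in range(T):
        x = A @ x + 0.1 * rng.standard_normal(2)
        traj.append(x)
    traj = np.array(traj)
    past.append(traj[:-1])   # states at time t
    future.append(traj[1:])  # states at time t + 1

past = np.concatenate(past)
future = np.concatenate(future)

# Past-future cross-covariance; its leading singular vector is the
# direction whose variance grows fastest under the dynamics.
C = future.T @ past / len(past)
U, s, Vt = np.linalg.svd(C)
unstable_dir = U[:, 0]
print(np.abs(unstable_dir))  # dominated by the first coordinate
```

Here the unstable mode is axis-aligned by construction, so the recovered direction should concentrate on the first coordinate; for a generic (non-diagonal) linearization the same singular-vector computation would return the rotated unstable direction.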

This perspective leads to a new computational primitive — the Rectified Spectral Unit (ReSU) — that replaces static nonlinearities with dynamical operators learned from data. ReSUs naturally unify prediction and control: when the objective shifts from stabilization to attract–repel or state-transfer goals, the same mechanism yields motor-like control policies. I will illustrate how these ideas link connectomics, population recordings, and theoretical neuroscience, and discuss implications for biologically grounded AI and backpropagation-free learning. More broadly, the work suggests a shift in viewpoint: intelligence may emerge not from stable representations, but from the selective amplification and regulation of instabilities in complex dynamical systems.

