Welcome to the Learning Mechanics DeCal!
Learning Mechanics is the emerging discipline that treats deep learning the way physics treats the natural world: seeking compact mathematical principles, tight connections between theory and experiment, and simple, intuitive explanations for complex phenomena. Pieces of a scientific theory for deep learning are beginning to fit together, and in this course, we will examine what has been assembled so far, what remains contested, and where the field is heading.
Deep learning is among the most powerful technologies humans have ever built, and understanding it promises to be one of the defining intellectual challenges of the early 21st century. As of 2026, the engineering success of deep learning has dramatically outpaced our scientific understanding of it. Closing that gap may amount to founding a genuinely new field of science—one whose implications for our understanding of intelligence, data, and learning extend well beyond the neural networks that motivated it.
Readings draw heavily from the whitepaper There Will Be a Scientific Theory of Deep Learning (Simon et al., 2026) and the primary literature it synthesizes. We will work through the theoretical tools, empirical regularities, and open questions that are laying the groundwork for a physics-like understanding of deep learning.
Lecture 8 Universality I: The Platonic Representation Hypothesis
Do deep learning models learn similar representations of data across diverse architectures?
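One standard way questions like this are probed empirically is to compare layer activations with a representational similarity metric such as linear centered kernel alignment (CKA). Here is a minimal illustrative sketch, not drawn from the course materials; the random matrices stand in for activations of the same inputs under two hypothetical models.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representations
    X (n, d1) and Y (n, d2) of the same n inputs.
    Returns a similarity in [0, 1]."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, "fro")
                   * np.linalg.norm(Y.T @ Y, "fro"))
    return numerator / denominator

rng = np.random.default_rng(0)
n = 500
base = rng.normal(size=(n, 64))                           # shared "signal"
X = base @ rng.normal(size=(64, 128))                     # "model A" activations
Y = base @ rng.normal(size=(64, 256)) + 0.1 * rng.normal(size=(n, 256))  # "model B"
Z = rng.normal(size=(n, 256))                             # unrelated representation

print(f"CKA(X, Y) = {linear_cka(X, Y):.3f}")  # high: both encode the shared signal
print(f"CKA(X, Z) = {linear_cka(X, Z):.3f}")  # low: no shared structure
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling of either representation, which is why it is a common yardstick in the universality literature.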
Lecture 11 Empirical Laws I: The Edge of Stability
Why do neural networks routinely train successfully while hovering on the very brink of optimization instability?
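The stability boundary behind that question is easiest to see on a quadratic: gradient descent with learning rate η is stable only while the curvature (sharpness) stays below 2/η, yet full-batch training of real networks is observed to hover right at that threshold. A minimal sketch of the quadratic analysis, illustrative rather than taken from the course materials:

```python
def gd_on_quadratic(sharpness, lr, steps=100, x0=1.0):
    """Run gradient descent on f(x) = (sharpness / 2) * x**2.
    The update x <- x - lr * sharpness * x contracts iff
    |1 - lr * sharpness| < 1, i.e. iff sharpness < 2 / lr."""
    x = x0
    for _ in range(steps):
        x -= lr * sharpness * x
    return x

lr = 0.1
threshold = 2.0 / lr  # the classical stability boundary
for s in [threshold - 1.0, threshold - 0.01, threshold + 0.01]:
    print(f"sharpness {s:6.2f}: |x_100| = {abs(gd_on_quadratic(s, lr)):.3e}")
```

Just below 2/η the iterates shrink; just above it they blow up, so classical theory predicts training should fail exactly where, empirically, it keeps working.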
Q&A
What is a DeCal?