
I'm an undergraduate at UC Berkeley double majoring in Applied Mathematics and Computer Science with a minor in Physics. My interests lie at the intersection of all of these fields, particularly in foundational theories of artificial intelligence. Currently, I work in a small lab at the Redwood Center for Theoretical Neuroscience, developing first-principles theories of neural networks.

I collect my thoughts in a digital garden—some polished essays, others messy learning notes. Feel free to explore.


Projects

Learning Mechanics DeCal
Fall 2026

I'm designing a course that introduces Learning Mechanics: the emerging discipline that treats deep learning the way physics treats the natural world, seeking compact mathematical principles, tight connections between theory and experiment, and simple, intuitive explanations for complex phenomena.

Deep linear networks are a surprisingly useful toy model of weight-space dynamics
April 2026

What, why, and how do deep neural networks learn? I co-authored this blog post for learningmechanics.pub explaining how a toy model, the deep linear network, strips away the complexity of full nonlinear neural networks to reveal the underlying physics of learning.
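The kind of simplification the post describes can be hinted at in a few lines: a deep linear network composes only linear maps, yet gradient descent on its weights is a genuinely nonlinear dynamical system. The following is my own toy sketch, not code from the post; the scalar setup, initialization, and learning rate are all illustrative assumptions.

```python
# Toy sketch (an assumption, not from the post): a two-layer deep linear
# network y = w2 * w1 * x fit to a scalar target. The end-to-end map is
# linear in the input, but the loss is nonconvex in the weights (w1, w2),
# so the training dynamics are already nontrivial.
target = 2.0        # we want the product w2 * w1 to converge to this
w1, w2 = 0.1, 0.1   # small, balanced initialization
lr = 0.05           # learning rate

for _ in range(2000):
    e = w2 * w1 - target                          # residual of the end-to-end map
    w1, w2 = w1 - lr * e * w2, w2 - lr * e * w1   # simultaneous gradient step

print(round(w2 * w1, 3))  # prints 2.0: the product reaches the target
```

Even in this one-dimensional case, each weight's update depends multiplicatively on the other layer, which is the source of the rich dynamics the blog post analyzes.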