There will be a scientific theory of deep learning

April 24
1h 33m

Episode Description

Deep learning works extraordinarily well. And we still largely don't know why.

A new paper from Jamie Simon, Daniel Kunin, and 12 co-authors argues that a scientific theory of deep learning is emerging, and coins a name for the field: learning mechanics.

We sat down with Jamie and Dan on Generally Intelligent to talk about what a physics of deep learning would actually look like, why it is emerging now, and what remains to be figured out.

00:03:05 Learning mechanics as the physics to mechanistic interpretability's biology

00:04:13 Why deep learning needs a theory

00:07:07 Why deep learning is uniquely hard to engineer

00:12:11 How a week in the woods became a paper

00:25:59 The barrier to theory isn't opacity, but complexity

00:36:26 Deep learning's first gas law

00:47:22 Why more particles makes the problem easier

00:56:22 The discretization hypothesis

01:01:50 The strongest signal that a compact theory exists

01:05:07 The Platonic Representation Hypothesis

01:15:41 Why learning mechanics and mech interp need each other

01:25:29 Theory as safety infrastructure

Read the paper

Transcript and links

Learning Mechanics website