The Mysterious Math Behind LLMs | Anil Ananthaswamy

January 23
1h 10m

Episode Description

WANTED: Developers and STEM experts! Get paid to create benchmarks and improve AI models. Sign up for Alignerr using our link: https://alignerr.com/?referral-source=briankeating

Some of the most powerful AI systems we’ve ever built are succeeding for reasons we still don’t understand. Worse, they may succeed in ways that lock us into the wrong future for humanity.

Today’s guest is Anil Ananthaswamy, an award-winning science writer and one of the clearest thinkers on the mathematical foundations of machine learning.

In this conversation, we’re not just covering new demos, incremental improvements, or the latest model releases. We’re asking harder questions: Why does the mathematics of machine learning work at all?

How do these models succeed when, by classical statistical reasoning, overparameterization and limited training data should make them overfit?

And are large language models revealing deep structure, or are they just producing very convincing illusions, pointing us toward an increasingly AI-slop-driven future?

KEY TAKEAWAYS

  • 00:00 — Book explores why ML works through math
  • 02:47 — Perceptron proof shows simple math guarantees learning
  • 05:11 — Early AI failed due to single-layer limits
  • 07:12 — Nonlinear limits caused the first AI winter
  • 09:04 — Backpropagation revived neural networks
  • 10:59 — GPUs + big data enabled deep learning
  • 15:25 — AI success risks technological lock-in
  • 17:30 — LLMs lack human-like learning and embodiment
  • 22:57 — High-dimensional spaces power ML behavior
  • 27:36 — Data saturation may slow future gains
  • 31:11 — Continual learning is still missing in AI
  • 33:46 — Neuromorphic chips promise energy efficiency
  • 41:49 — Overparameterized models still generalize well
  • 45:05 — SGD succeeds via randomness in complex landscapes
  • 48:27 — Perceptrons remain the core of modern neural nets

-

Additional resources:

Anil's NEW Book "Why Machines Learn: The Elegant Math Behind Modern AI": https://www.amazon.com/Why-Machines-Learn-Elegant-Behind/dp/0593185749

Get My NEW Book: Focus Like a Nobel Prize Winner: https://www.amazon.com/dp/B0FN8DH6SX?ref_=pe_93986420_775043100

Please join my mailing list here 👉 https://briankeating.com/yt to win a meteorite 💥

-

Join this channel to get access to perks like monthly Office Hours:

https://www.youtube.com/channel/UCmXH_moPhfkqCk6S3b9RWuw/join

📚 Get a copy of my books:

Think Like a Nobel Prize Winner, with life-changing interviews with 9 Nobel Prize winners: https://a.co/d/03ezQFu

My tell-all cosmic memoir Losing the Nobel Prize: http://amzn.to/2sa5UpA

The first-ever audiobook of Galileo’s Dialogue Concerning the Two Chief World Systems: Ptolemaic and Copernican: https://a.co/d/iZPi9Un


#universe #podcast #briankeating #intotheimpossible #science #astronomy #cosmology #cosmicmicrowavebackground #AnilAnanthaswamy

Learn more about your ad choices. Visit megaphone.fm/adchoices
