Episode Description
Our chat with Ari makes the case that data curation is the most impactful and underinvested area in AI. He argues that the prevailing focus on model architecture and compute scaling overlooks the "bitter lesson" that "models are what they eat." Effective data curation, a sophisticated process involving filtering, rebalancing, sequencing (curriculum), and synthetic data generation, allows for training models that are simultaneously faster, better, and smaller. Morcos recounts his personal journey from focusing on model-centric inductive biases to realizing that data quality is the primary lever for breaking the diminishing returns of naive scaling laws. Datology's mission is to automate this complex curation process, making state-of-the-art data accessible to any organization and enabling a new paradigm of AI development where data efficiency, not just raw scale, drives progress.
Timestamps
00:00 Introduction
00:46 What is Datology? The mission to train models faster, better, and smaller through data curation.
01:59 Ari's background: From neuroscience to realizing the "Bitter Lesson" of AI.
05:30 Key Insight: Inductive biases from architecture become less important and even harmful as data scale increases.
08:08 Thesis: Data is the most underinvested area of AI research relative to its impact.
10:15 Why data work is culturally undervalued in research and industry.
12:19 How self-supervised learning changed everything, moving from a data-scarce to a data-abundant regime.
17:05 Why automated curation is superior to human-in-the-loop, citing the DCLM study.
19:22 The "Elephants vs. Dogs" analogy for managing data redundancy and complexity.
22:46 A brief history and commentary on key datasets (Common Crawl, GitHub, Books3).
26:24 Breaking naive scaling laws by improving data quality to maintain high marginal information gain.
29:07 Datology's demonstrated impact: Achieving baseline performance 12x faster.
34:19 The business of data: Datology's moat and its relationship with open-source datasets.
39:12 Synthetic Data Explained: The difference between risky "net-new" creation and powerful "rephrasing."
49:02 The Resurgence of Curriculum Learning: Why ordering data matters in the underfitting regime.
52:55 The Future of Training: Optimizing pre-training data to make post-training more effective.
54:49 Who is training their own models and why (Sovereign AI, large enterprises).
57:24 "Train Smaller": Why inference cost makes smaller, specialized models the ultimate goal for enterprises.
01:00:19 The problem with model pruning and why data-side solutions are complementary.
01:03:03 On finding the smallest possible model for a given capability.
01:06:49 Key learnings from the RC foundation model collaboration, proving that data curation "stacks."
01:09:46 Lightning Round: What data everyone wants & who should work at Datology.
01:14:24 Commentary on Meta's superintelligence efforts and Yann LeCun's role.