Why the US Needs Open Models | Nathan Lambert on what matters in the AI and science world

March 11
47 mins

Episode Description

Open models are often discussed as if they’re competing head-to-head with frontier systems. Are they catching up? Falling behind? Are they “good enough” yet?

Nathan Lambert doesn’t believe open models will ever catch up with closed ones, and he clearly explains why. But he also argues that this is the wrong framing.

Nathan is a research scientist at the Allen Institute for AI, the author of the RLHF Book, and the writer behind the Interconnects newsletter. He’s also one of the clearest voices on what open models are for, and just as importantly, what they are not.

We talk about how academic AI research lost influence as training scaled up, why open models became the main place where experimentation still happens, and why that role matters even when open models trail frontier systems. We also discuss why China’s open model ecosystem developed so differently from the US one, and what that tells us about incentives, talent, and access to resources.

From there, the conversation moves into the mechanics: post-training and reinforcement learning complexity, data availability, coding agents, hybrid architectures, and the very practical reasons most people continue to rely on closed models, even when they support openness in principle.

This is a conversation about how AI research actually moves, where open models fit into that picture, and what it means to build systems when the frontier is expensive, fast-moving, and increasingly product-driven.


This conversation offers a realistic look at where the open ecosystem stands today. Watch it!

*Follow on*: https://www.turingpost.com/

*Did you like the episode? You know the drill:*

📌 Subscribe for more conversations with the builders shaping real-world AI.

💬 Leave a comment if this resonated.

👍 Like it if you liked it.

🫶 Thank you for watching and sharing!

*Guest:*

Nathan Lambert, Research Scientist at Allen Institute for AI (AI2)

https://x.com/natolambert

https://www.linkedin.com/in/natolambert/

https://www.interconnects.ai/ (his newsletter on open models + RL + everything important in AI)

The RLHF Book: https://rlhfbook.com/

https://allenai.org/

*Links:*

State of AI in 2026 (Lex Fridman interview): https://www.youtube.com/watch?v=EV7WhVT270Q&t=10206s

NVIDIA’s path to open models: https://www.youtube.com/watch?v=Y3Vb6ecvfpU

OLMo models: https://allenai.org/olmo

NVIDIA Nemotron: https://developer.nvidia.com/nemotron

SpaceX + xAI partnership: https://www.spacex.com/updates#xai-joins-spacex

Season of the Witch (book): https://www.simonandschuster.com/books/Season-of-the-Witch/David-Talbot/9781439108246

📰 Transcript: https://www.turingpost.com/nathanlambert

*Turing Post* – AI stories from labs the Valley doesn't cover.

https://x.com/TheTuringPost

https://www.linkedin.com/in/ksenia-se

See all episodes