When Did Common Sense AI Policy Become Radical?

February 24
37 mins

Episode Description

A couple of months ago, I joined the Canadian government’s AI strategy task force. Out of thirty members, I was one of only four focused on safety. Everyone else was there to talk growth. It reflects a pattern playing out all over the world: governments going all in on AI, convinced that regulation will only slow them down.

It’s hard to overstate how quickly this shift happened. Just a few years ago, even Elon Musk was calling for an industry-wide pause on AI development, and the Biden administration was developing an “AI Bill of Rights” – one of the most thoughtful and comprehensive frameworks for AI regulation I’ve ever seen.

The architect of that initiative was Dr. Alondra Nelson. Today, she leads the Science, Technology, and Social Values Lab at the Institute for Advanced Study and is fresh off a stint on Zohran Mamdani’s mayoral transition team in New York. I wanted to have her on to wrestle with an urgent question: how do you make a technology safe when nobody seems particularly interested in regulating it – and what might happen if we don’t?

Mentioned:

Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, by the White House Office of Science and Technology Policy

The mirage of AI deregulation, by Alondra Nelson (Science)

International AI Safety Report 2026, by Yoshua Bengio et al.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
