Episode Description
Will AI destroy the world, or transform it into one of abundance? Across two books and several papers, philosopher Nick Bostrom has envisioned a range of AI futures. He joins Nicholas Thompson to discuss the ethics of how we treat AI, whether AI has sentience, and why he believes we should keep building, even at the risk of annihilation.
Produced in collaboration with PwC.
(00:00) Introduction to Nick Bostrom and Superintelligence
(01:56) How AI development matched Bostrom's predictions
(04:48) Recursive self-improvement: Are we there yet?
(07:40) Physical limits of intelligence and computational ceilings
(09:40) Timeline predictions: Next year vs. next five years
(11:46) Embodied intelligence: Can AI replicate motor skills?
(14:32) Centralization vs. democratization of AI power
(16:52) The race dynamics: One leader vs. many competitors
(19:57) AI alignment: Making systems behave as intended
(21:37) The gap between model power and our understanding
(23:25) "Optimal Timing for Superintelligence" paper explained
(28:14) Swift to harbor, slow to berth: When to pause AI development
(35:20) Moral status of digital minds and sentience
(41:23) Building trust with potentially misaligned AI
(44:11) Where to invest unlimited AI research funding
(47:07) Closing: Should we say please and thank you to AI?