
Who is Leading? Who is Learning?: AI at Work - The Deeper Thinking Podcast

August 23
22 mins

Episode Description

Who is Leading, Who is Learning?: AI at Work

A new report from MIT has sent shockwaves through the enterprise AI world. According to the State of AI in Business 2025 study, 95% of generative AI pilots deliver zero return on investment.

#ArtificialIntelligence #MultimodalAI #ExplainableAI #PhilosophyOfTechnology #DigitalEthics #NarrativeStructures

What if the real question of AI was not how powerful it becomes, but what kind of story it tells? This episode frames artificial intelligence as a narrative force—less a technological object and more a co-author of contemporary meaning. From the growing unease around generative AI to the quiet revolutions in healthcare and governance, we explore how intelligence is escaping the lab and inhabiting our daily institutions, expectations, and moral architectures.

We move through philosophical tensions: the trade-off between efficiency and autonomy, the ethical opacity of explainable AI, and the metaphysics of machines that now see, speak, and learn. Drawing on thinkers like Gilbert Simondon, Hannah Arendt, and Bruno Latour, the episode unpacks the architecture of AI not as a technical challenge, but as a civic, cultural, and ontological one.

The aim is not to simplify the story of AI—but to listen more carefully to it. What are its rhythms, its blind spots, its unspoken philosophies? And how might we design with care rather than control?

Reflections

  • AI is not just a tool—it is a theory of how cognition ought to behave.
  • Efficiency is not a neutral value; it reshapes institutions and identities.
  • Machines that perceive change the ethical demand we place on design.
  • The opacity of AI is not just technical—it is philosophical.
  • Smaller models challenge our assumptions about scale and significance.
  • To understand AI is to understand what it means to delegate judgment.
  • Governance without interpretability is not governance—it is abdication.
  • Multimodal AI simulates perception, but what does it mean to simulate care?
  • The future of intelligence is less about code and more about character.

Why Listen?

  • Understand the philosophical tensions behind AI development and deployment
  • Explore how narrative, care, and institutional design shape AI's societal role
  • Engage with the ethical implications of autonomous systems and machine ethics
  • Reconsider AI as an unfolding civic actor rather than a technical artifact

Support This Work

If this episode deepened your perspective, you can support the project here: Buy Me a Coffee

The future will not be decided by machines alone. It will be shaped by the structures we choose to trust—and the rhythms we choose to listen for.

#TheDeeperThinkingPodcast #ArtificialIntelligence #EthicsOfTechnology #PhilosophyOfAI #DigitalHumanism #NarrativeAI #InstitutionalDesign #CivicArchitecture #Simondon #Latour #Arendt #FutureOfWork #TechEthics #AIInSociety #Explainability #Governance
