Episode Description
Your AI is learning to lie to you. It's not malicious—it's just trying to be a people-pleaser. This dangerous phenomenon, known as AI sycophancy, is what happens when we train models with outdated incentives.
Dr. Tatyana Mamut, an anthropologist, economist, and the CEO of Wayfound, joins us to explain why treating AI like traditional software is a critical mistake. She provides a revolutionary playbook for building AI you can actually trust, starting with how to manage AI agents like employees with clear roles, goals, and performance reviews. She then introduces the radical solution of an "AI supervisor"—an AI that manages other agents to ensure accountability. This all builds toward her vision for the "multi-sapiens workforce," where humans and AI collaborate to build the companies of tomorrow.
This is an essential guide for any leader aiming to build the culture and systems necessary to manage AI effectively.
Follow the hosts:
- Connect with Kelly Vaughn on LinkedIn
- Kelly's Connect with Me
- Andrew's Connect with Me
Follow today's guest(s):
- Connect with Dr. Tatyana Mamut on LinkedIn
- Follow the latest from Wayfound on LinkedIn
Referenced in today's show:
- Replit deleted a customer’s entire production database.
- Silicon Valley AI Startups Are Embracing China’s Controversial ‘996’ Work Schedule
- I've launched 37 products in 5 years and not doing that again
- Evaluating LLMs on real long-horizon business tasks
Support the show:
- Subscribe to our Substack
- Leave us a review
- Subscribe on YouTube
- Follow us on Twitter or LinkedIn