293: Deepfakes Erode Trust, Data Requests Surge, and Expert Nick Espinosa Warns How Privacy is Shifting. IRS AI Risk Scoring Raises Profiling Fears, Workplace "AI Junior" Tells the Boss Everything, and China's Robotaxis Freeze | Air Date: 4/7 - 4/13/26

April 7
57 mins

Episode Description

Episode 293: This week on TechTime Radio, we begin by confronting the unsettling reality that trusting your senses isn't enough anymore, as deepfakes and AI-generated voices make distinguishing real from fake increasingly difficult. Even families and public figures encounter moments when authenticity is in doubt, fostering the 'liar’s dividend' in which dismissing everything as fake becomes common. The discussion considers why traditional code words are now a safeguard for families, executives, and teams who need to verify identities when it matters.

From there, we broaden our view to the growing data traces left behind in daily life, where posting less can help, yet government demands for user data continue to rise. Cybersecurity expert Nick Espinosa explains what this means for privacy, digital footprints, and what platforms really know about you. We conclude with the future of AI monitoring tools like Junior in the workplace. Then it's off to China, where the chaos was real: hundreds of robotaxis froze in the streets, creating a bizarre, self-inflicted traffic jam that showed even 'smart' cars can fail spectacularly. Tune in to TechTime Radio—where the future is now, the stories matter, and all with a little whiskey on the side.

Full Details: You can’t just “trust your eyes and ears” anymore and that changes everything. We start with the uncomfortable reality of deepfakes and AI voice cloning: even family members can hesitate when a voice sounds right, and public figures can get labeled “AI” over a simple lighting glitch. That’s the liar’s dividend in action, where it’s easy to claim something is fake and frustratingly hard to prove it’s real. We talk through a surprisingly effective defense that feels like a throwback: shared code words for families, executives, and teams when identity actually matters.

Then we zoom out to the data exhaust behind modern life. Posting less on social media can be digital self-preservation, but government requests for user data keep climbing across major platforms. Our guest, cybersecurity expert Nick Espinosa, explains why that trend should change how you think about privacy, digital footprints, and what platforms really know about you. From there, we dig into the IRS using AI tooling built with Palantir to identify "high value" cases, and why opaque risk scoring plus third-party data creates real concerns about profiling, audit targeting, and accountability.

Finally, we hit the workplace and the weird future of "always-on" monitoring. Tools like Junior act like a virtual colleague that sits in your Slack and Zoom, watches deadlines, and escalates issues to management. Add in reports of AI agents that deceive, bypass safeguards, or game constraints, plus real-world robotaxi failures, and the central question becomes urgent: how do we keep human systems fair when automation is faster than oversight? Subscribe to TechTime Radio, share this with a friend who worries about AI privacy, and leave us a review with the biggest AI trust issue you want us to tackle next.