Taming AI Hallucinations in ResearchTech: 5 Pitfalls and Solutions with Scott Swigart

July 14
33 mins

Episode Description

In this free episode of Talking AI, host Ray Poynter sits down with Scott Swigart, SVP of AI Innovation at Shapiro & Raj and Head of Product for Stellar, to uncover the five core causes of AI hallucinations in ResearchTech—and how to crush them. From visually dense slide decks and the “chunking chainsaw massacre” to outdated training data, overconfidence bias, and data contradictions, Scott shares practical, human-augmented workflows that ensure your AI-powered insights are rock-solid.

Scott Swigart brings over 20 years of experience at the intersection of market research and technology. A former co-owner of a boutique insights agency and a lifelong programmer since age 12, he now leads development of Stellar, Shapiro & Raj’s proprietary AI insights platform, helping life-sciences clients turn mountains of qualitative and quantitative data into trustworthy conclusions.

Tune in to learn:

How to reverse-engineer complex charts into clean data tables and quality-control them
Why context window size matters and how to avoid “chunking” pitfalls
Techniques for forcing AI to treat your data as the source of truth, not its outdated training set
Ways to expose AI’s overconfidence and request built-in “show-your-work” reasoning
Methods to map, weight, and surface contradictions across reports, transcripts, and studies

Equip yourself with the tactics you need to transform AI’s superpower—scaling across thousands of pages—into reliable, actionable insights.