Episode Description
Your AI tool isn't broken. It's just full.
Hi, I'm Mike Fox, host of this podcast, "Lone Wolf Unleashed." I help solo founders systemise their businesses so they can switch off sooner and live larger. This week I'm pulling back the curtain on a real data project: 103,000 rows, a client locked into Microsoft Copilot, and a categorisation task that would've taken weeks to do manually.
Here's what I worked through — and what you can take straight into your own business.
The context window is the AI's working memory. Once it fills up, the quality of your outputs tanks, or the conversation simply stops. Understanding this constraint is the difference between AI that saves you hours and AI that wastes them.
Working within real-world limitations (not every client is on Claude), I built a strategy to break down a massive data set into token-efficient chunks, set up a structured workflow for Microsoft Copilot to process them in sequence, and then used a manager-agent review layer to QA the outputs before any human had to.
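To make the chunking idea concrete, here's a minimal sketch in Python. It is not the exact workflow from the episode, just an illustration of the principle: estimate each row's token cost and pack rows into batches that fit a budget, so each batch fits one AI session. The 4-characters-per-token ratio and the `token_budget` value are rough assumptions, not tool-specific figures.

```python
import csv
import io

# Assumption: roughly 4 characters per token. A real tokenizer gives exact counts.
CHARS_PER_TOKEN = 4

def chunk_rows(rows, token_budget=2000):
    """Pack rows into batches whose estimated token cost stays under the budget."""
    batches, batch, used = [], [], 0
    for row in rows:
        # Estimate this row's token cost from its total character length.
        cost = sum(len(str(cell)) for cell in row) // CHARS_PER_TOKEN + 1
        if batch and used + cost > token_budget:
            batches.append(batch)   # current batch is full; start a new one
            batch, used = [], 0
        batch.append(row)
        used += cost
    if batch:
        batches.append(batch)       # don't drop the final partial batch
    return batches

# Usage: split a CSV into prompt-sized chunks to feed sequential AI sessions.
sample = "id,description\n" + "\n".join(f"{i},item number {i}" for i in range(500))
rows = list(csv.reader(io.StringIO(sample)))
header, data = rows[0], rows[1:]
chunks = chunk_rows(data, token_budget=100)
```

Each chunk would then be sent with the same instructions (and the header repeated), so every session works from an identical brief regardless of which slice of the data it sees.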
The same principles apply whether you're running Claude, ChatGPT, or whatever tool your organisation has decided is the one. The constraints change. The framework doesn't.
What you'll learn:
- What a context window is and why it limits what your AI can do with large data sets
- How to make your data and your prompts token-efficient before you send them
- A practical chunking strategy for splitting large Excel or CSV files across multiple AI sessions
- How to use a manager-agent role to review and QA your AI outputs
- Which model settings to use for heavy analytical tasks
If you're using AI to make decisions — not just write emails — this episode is for you.
Resources, frameworks, and tools: lonewolfunleashed.com/resources
Mentioned in this episode:
This podcast is part of the Podknows Podcasting ICN Network
You might also like...
Check out the "Websites Made Simple" podcast with Holly Christie at https://websitesmadesimple.co.uk/