Sahaj Garg on Low Latency AI
January 14
54 mins


Episode Description

In this episode, Sahaj Garg, CTO of wispr.ai, joins SE Radio host Robert Blumen to talk about the challenges of building low-latency AI applications. They discuss latency's effect on consumer behavior and on interactive applications, how to measure latency, and how scale affects it. Sahaj and Robert then turn to themes around AI, including whether "AI" means LLMs or something broader, and examine the latency requirements and challenges of different subtypes of AI applications. The final part of the episode explores techniques for managing latency in AI: trading off speed against accuracy and against cost; choosing the right model; quantization; distillation; and guessing plus validating.

Brought to you by IEEE Computer Society and IEEE Software magazine.

