LLMs in 2026: What’s Real, What’s Hype, and What’s Coming Next

February 23
1h 14m

Episode Description

Is AI actually going to replace developers? Or is the hype getting ahead of reality?

On this episode of Digital Disruption, we’re joined by Sebastian Raschka, AI Research Engineer and author.

Sebastian Raschka sits down with Geoff Nielson to unpack the real state of Large Language Models (LLMs) in 2026. As an LLM research engineer, Sebastian bridges deep technical expertise with practical, real-world AI implementation. In this conversation, he cuts through the hype to focus on what’s actually achievable with modern LLMs, reasoning models, reinforcement learning, and inference scaling — and where the limitations still lie. He explains why most companies should not build a large language model from scratch, but also why understanding the fundamentals may be one of the most important investments technology leaders can make.

This conversation breaks down:

◼️Why coding is currently the strongest LLM use case

◼️Why “reasoning” models still fail simple tasks like counting letters in “strawberry”

◼️The reality behind Math Olympiad gold-level AI claims

◼️The true cost of training large models (millions in GPU compute)

◼️The privacy risks of uploading proprietary data into APIs

◼️How enterprises should think about fine-tuning vs API-based prompting

◼️Why benchmarks and leaderboards can be misleading


Sebastian Raschka has over a decade of experience in artificial intelligence and machine learning. His work bridges academia and industry, serving as a Senior Engineer at Lightning AI and as a faculty member at the University of Wisconsin–Madison. He is the author of Build a Large Language Model from Scratch and is widely recognized for his practical, code-driven approach to AI education and research. His expertise lies in LLM research, transformer architectures, reinforcement learning, and the development of high-performance AI systems, with a strong focus on real-world implementation.


In this video:

00:00 Intro

01:23 The rise of “reasoning” and thinking models

03:06 Inference scaling vs training scaling

06:17 What LLMs are actually good (and bad) at

07:09 The “strawberry” problem and reasoning limits

09:00 Tool use and why LLMs don’t need to count letters

10:20 Math Olympiads & self-refinement techniques

12:01 Why coding is the killer use case

13:28 Does AI make developers obsolete?

18:02 The reality of 10x developer productivity claims

21:43 Generalist vs specialized models

23:53 Build from scratch vs fine-tune vs API prompting

25:01 The true cost of training an LLM

27:33 API customization vs owning your model

29:12 Who should build an LLM from scratch?

33:16 Data requirements & why you need terabytes

34:28 Enterprise data challenges

35:40 Retrieval-Augmented Generation (RAG) explained

46:05 Multi-agent systems & tool calling

49:48 The problem with LLM benchmarks

55:43 Using LLMs as judges

58:00 Biggest misconceptions about LLMs

1:04:19 Reinforcement learning with verifiable rewards

1:06:32 Advice for technology leaders

1:11:48 Escaping AI hype through fundamentals


Connect with Sebastian:

LinkedIn: https://www.linkedin.com/in/sebastianraschka/

X: https://x.com/rasbt




Our links:

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast

Follow us on YouTube: https://www.youtube.com/@InfoTechRG
