2025 Python Year in Review

Episode Transcript

Michael Kennedy

Python in 2025 is a delightfully refreshing place.

The GIL's days are numbered, packaging is getting sharper tools, and the type checkers are multiplying like gremlins snacking after midnight.

On this episode, we have an amazing panel to give us a range of perspectives on what mattered in 2025 in Python.

We have Barry Warsaw, Brett Cannon, Gregory Kapfhammer, Jodie Burchell, Reuven Lerner, and Thomas Wouters on the show to give us their thoughts.

This is Talk Python To Me, episode 532, recorded December 9th, 2025.

Welcome to Talk Python To Me, the number one Python podcast for developers and data scientists.

This is your host, Michael Kennedy.

I'm a PSF fellow who's been coding for over 25 years.

Let's connect on social media.

You'll find me and Talk Python on Mastodon, BlueSky, and X.

The social links are all in your show notes.

You can find over 10 years of past episodes at talkpython.fm.

And if you want to be part of the show, you can join our recording live streams.

That's right.

We live stream the raw uncut version of each episode on YouTube.

Just visit talkpython.fm/youtube to see the schedule of upcoming events.

Be sure to subscribe there and press the bell so you'll get notified anytime we're recording.

Look into the future and see bugs before they make it to production.

Sentry's Seer AI Code Review uses historical error and performance information at Sentry to find and flag bugs in your PRs before you even start to review them.

Stop bugs before they enter your code base.

Get started at talkpython.fm/seer-code-review.

Hey, before we jump into the interview, I just want to send a little message to all the companies out there with products and services trying to reach developers.

That is the listeners of this show.

As we're rolling into 2026, I have a bunch of spots open.

So please reach out to me if you're looking to sponsor a podcast or just generally sponsor things in the community.

And if you haven't necessarily considered podcasts, you really should reach out to me, and I'll help you connect with the Talk Python audience.

Thanks, everyone, for listening all of 2025, and here we go into 2026. Cheers.

Hey everyone, it's so awesome to be here with you all. Thanks for taking the time out of your day to be part of Talk Python for this year in review, this Python year in review. So yeah, let's just jump right into it. Gregory, welcome, welcome to the show, welcome back to the show. How you doing?

Gregory Kapfhammer

Hi, I'm an associate professor of computer and information science, and I do research in software engineering and software testing.

I've built a bunch of Python tools, and one of the areas we're studying now is flaky test cases in Python projects.

I'm also really excited about teaching in a wide variety of areas.

In fact, I use Python for operating systems classes or theory of computation classes.

And one of the things I'm excited about is being a podcast host.

I'm also a host on the Software Engineering Radio podcast sponsored by the IEEE Computer Society, and I've had the cool opportunity to interview a whole bunch of people in the Python community.

So Michael, thanks for welcoming me to the show.

Michael Kennedy

Yeah, it's awesome to have you back.

And we talked about flaky tests last time.

I do have to say your AV setup is quite good.

I love the new mic and all that.

Thomas, welcome. Awesome to have you here.

Thomas Wouters

Thanks for having me.

I'm Thomas Wouters.

I'm a longtime Python core developer, although not as long as one of the other guests on this podcast.

I worked at Google for 17 years, and for the last year or so I've worked at Meta.

In both cases, I worked on Python itself within the company and on deploying it internally.

I've also been a board member of the PSF, although I'm not one right now.

And I've been a steering council member for five years, and currently not, because the elections are going on and I don't know what the result is going to be.

But I think there's like a five in six chance that I'll be on the steering council when this episode probably airs, since we only have six candidates for five positions.

Michael Kennedy

I don't know.

That's quite the contribution to the whole community.

Thomas Wouters

Thank you.

I always forget this.

I also got the, what is it, the Distinguished Service Award from the PSF this year.

I should probably mention that.

So yes, I have been recognized.

Michael Kennedy

No need to talk about it further.

Wonderful.

Wonderful.

Jodie, welcome back on the show.

Awesome to catch up with you.

Jodie Burchell

Yeah, thanks for having me back.

I am a data scientist and developer advocate at JetBrains working on PyCharm.

And I've been a data scientist for around 10 years.

And prior to that, I was actually a clinical psychologist.

So that was my training, my PhD, but I abandoned academia for greener pastures.

Let's put it that way.

Michael Kennedy

Brett, hello.

Good to see you.

Brett Cannon

Hello.

Yes.

Yeah, let's see here.

I've been at Microsoft for 10 years.

I started working on AI R&D for Python developers.

I also keep WASI running for Python here and do a lot of internal consulting for other teams.

I am actually the shortest-serving core developer on this call, amazingly, even though I've been doing it for 22 years.

I've also only gotten the Frank Willison Award, not the DSA.

So I feel very under accomplished here as a core developer.

Yeah, that's me in a nutshell.

Otherwise, I'm still trying to catch that.

Barry Warsaw

Most quoted.

Brett Cannon

Yeah, most quoted.

I will say actually at work, it is in my email footer that I'm a famous Python quotationist.

That was Anthony Shaw's suggestion, by the way.

That was not mine, but does link to the April Fool's joke from last year.

And I am still trying to catch Anthony Shaw, I think, on appearances on this podcast.

Michael Kennedy

Well, plus one.

Anthony Shaw should be here, honestly.

I mean, I put it out into Discord.

He could have been here, but probably at an odd time.

You used to work a bunch on the Python aspect of VS Code.

You recently changed roles, right?

Brett Cannon

Not recently.

That was, I used to be the dev manager.

Michael Kennedy

Every seven years, years ago.

Brett Cannon

Yeah, September of 2024.

So it's been over a year.

But yeah, I used to be the dev manager.

Michael Kennedy

That counts as recent for me.

Brett Cannon

Yes, I used to be the dev manager for the Python experience in VS Code.

Michael Kennedy

Okay, very cool.

That's quite a shift.

Brett Cannon

Yeah, it went back to being an IC basically.

Michael Kennedy

So you're good at your TPS reports again now?

Brett Cannon

Actually, I just did do my connect, so I kind of did.

Michael Kennedy

Awesome.

Reuven, I bet you haven't filed a TPS report in at least a year.

Reuven Lerner

So yeah, I'm Reuven.

I'm a freelance Python and Pandas trainer.

I just celebrated this past week 30 years since going freelance.

So I guess it's working out okay.

We'll know at some point if I need to get a real job.

I teach Python and Pandas, both at companies and on my online platform.

I have newsletters.

I've written books, spoken at conferences, and generally try to help people improve their Python and Pandas fluency and confidence, and have a lot of fun with this community as well as with the language.

Michael Kennedy

Oh, good to have you here.

Barry, it's great to have a musician on the show.

Barry Warsaw

Thanks.

Yeah, I've got my basses over here.

So, you know, if you need to be serenaded.

Michael Kennedy

Yeah, like a Zen of Python may break out at any moment.

You never know when it's going to happen.

Barry Warsaw

Thanks for having me here.

Yeah, I've been a core developer for a long time, since 1994.

And I've been, you know, in the early days, I did tons of stuff for Python.org.

I worked with Guido at CNRI, and we moved everything: the mailing lists, the postmaster stuff, the version control systems back in the day, websites, all that kind of stuff.

I try to not do any of those things anymore.

There's way more competent people doing that stuff now.

I have been a release manager.

I'm currently back on the steering council and running again.

Between Thomas and me, we'll see who makes it to six years, I guess.

And I'm currently working for NVIDIA, and I do all Python stuff.

It's roughly half and half internal things and external open source community work, both in packaging and in core Python.

I guess that's about it.

Michael Kennedy

Yeah, you all are living in exciting tech spaces, that's for sure.

That's for sure.

For sure.

Yeah.

Well, great to have you all back on the show.

Let's start with our first topic.

So the idea is we've each picked at least one thing that we think stood out in 2025 in the Python space that we can focus on.

And let's go with Jodie first.

I'm excited to hear what you thought was one of the bigger things.

Jodie Burchell

I'm going to mention AI.

Like, wow, what a big surprise.

So to kind of give context of where I'm coming from, I've been working in NLP for a long time.

I like to say I was working on LLMs before they were cool.

So sort of playing around with the very first releases from Google in like 2019, incorporating that into search.

So I've been very interested sort of seeing the unfolding of the GPT models as they've grown.

And let's say slightly disgusted by the discourse around the models as they become more mainstream, more sort of the talk about people's jobs being replaced, a lot of the hysteria, a lot of the doomsday stuff.

So I've been doing talks and other content for around two and a half years now, just trying to cut through the hype a bit, being like, you know, they're just language models, they're good for language tasks.

Let's think about realistically what they're about.

And what was very interesting for me this year, I've been incorrectly predicting the bubble bursting for about two and a half years.

So I was quite vindicated when, in August, GPT-5 came out, and all of a sudden everyone else started saying, maybe this is a bubble.

Michael Kennedy

Don't you think that was the first big release that was kind of a letdown compared to what the hype was?

Jodie Burchell

Yeah, and it was really interesting.

So I found this really nice Atlantic article, and I didn't save it, unfortunately, but essentially it told sort of the whole story of what was going on behind the scenes.

So GPT-4 came out in March of 2023, and that was the model that came out with this Microsoft research paper saying, you know, sparks of AGI, artificial general intelligence, blah, blah, blah.

And from that point, there was really this big expectation sort of fueled by OpenAI that GPT-5 was going to be the AGI model.

And it turns out what was happening internally is these scaling laws that were sort of considered, you know, this exponential growth thing that would sort of push the power of these models perhaps towards human-like performance.

They weren't laws at all.

And of course they started failing.

So the model that they had originally pitched as GPT-5 just didn't live up to performance.

They started this post-training stuff where they were going more into like specialized reasoning models.

And what we have now are good models that are good for specific tasks. I don't know exactly what happened, but eventually they had to put the GPT-5 label on something.

And yeah, let's say it didn't live up to expectations.

So I think the cracks are starting to show, because the underlying expectation always was, this will keep improving to the point where anything's possible, and you can't put a price on that.

But it turns out that maybe there's a...

Michael Kennedy

Limit on what's possible?

Yeah, then you can put a price on it.

And a lot of the valuations are on the first part.

Jodie Burchell

Yes.

And it's always been a bit interesting to me because I come from a scientific background and you need to know how to measure stuff, right?

And I'm like, what are you trying to achieve?

Like Gregory's nodding, like, please jump in.

I'm on my monologue, so please don't interrupt me.

You really need to understand what you're actually trying to get these models to do.

What is AGI?

No one knows this.

And what's going to be possible with this?

And it's more science fiction than fact.

So this for me has been the big news this year, and I'm feeling slightly smug, I'm going to be honest, even though my predictions were off by about a year and a half.

Michael Kennedy

Yeah, maybe it's not an exponential curve.

It's a titration S curve with an asymptote.

We'll see.

Jodie Burchell

Yeah, sigmoid.

Yeah.

Reuven Lerner

Yeah, yeah, yeah.

I mean, I think we have to sort of separate the technology from the business.

And the technology, even if it doesn't get any better, even if we stay with what we have today, I still think this is like one of the most amazing technologies I've ever seen.

It's not a god.

It's not a panacea.

But it's like a chainsaw that if you know how to use it, it's really effective.

But in the hands of amateurs, you can really get hurt.

And so, yes, it's great to see this sort of thing happening and improving, but who knows where it's going to go.

And I'm a little skeptical of the AGI thing.

What I'm a little more worried about is that these companies seem to have no possible way of ever making the money that they're promising to their investors.

And I do worry a lot that we're sort of in a year 2000 situation where, yeah, the technology is fantastic, but the businesses are unsustainable.

And out of the ashes of what will happen, we will get some amazing technology and even better than we had before.

But there are going to be ashes.

Jodie Burchell

For me, that also makes me worry.

And I don't know if anyone reads Ed Zitron here.

He's a journalist kind of digging into the state of the AI industry.

He's gotten a bit of a reputation as a crank now.

So I think he's leaned into that pretty hard, but he does take the time to also pull out numbers and point out things that don't make sense.

And he was one of the first ones to blow the whistle on this circular funding we've been seeing.

So the worry, of course, is when a lot of this becomes borrowings from banks and then that starts dragging in funding from everyday people.

And also the effect that this has had on particularly the U.S.

economy, like the stock market.

I think the investment in AI spending now exceeds consumer spending in the U.S., which is a really scary prospect.

Michael Kennedy

That is crazy.

Jodie Burchell

Mm-hmm.

But yeah, also as Reuven said, I love LLMs.

They are the most powerful tools we've ever had for natural language processing.

It's phenomenal the problems we can solve with them now.

I didn't think this sort of stuff would be possible when I started in data science.

I still think there's a use case for agents, although I do think they've been a bit overstated, especially now that I'm building them.

Let's say it's not very fun building non-deterministic software.

It's quite frustrating, actually.

But I hope we're going to see improvements in the frameworks; particularly, I've heard good things about Pydantic AI.

And yeah, hopefully we can control the input outputs and make them a bit more strict.

This will fix a lot of the problems.

Michael Kennedy

One thing I do want to put out in this conversation, I think is worth separating.

And Reuven, you touched on this some.

I want to suggest to you, I'll throw this out to you all and see what you think.

I think it's very possible that this AI bubble crashes the economy and causes bad things economically to happen, and a bunch of companies that are like wrappers over the OpenAI API go away. But I don't think things like the agentic coding tools will vanish. They might stop training, they might slow their advance, because that's super expensive. But even, as you said, if we just had Claude Sonnet 4 and the world never got something else, it would be so far beyond autocomplete and the other stuff that we had before, and Stack Overflow, that I don't think it's going to go.

The reason I'm throwing this out there is I was talking to somebody and they were like, well, I don't think it's worth learning because I think the bubble is going to pop.

And so I don't want to learn this agentic coding because it won't be around very long.

What do you all think?

Brett Cannon

It's here to stay.

I think it's just, where's the limit?

Where does it stop?

I think that's the big open question for everybody, right?

Like pragmatically, it's a tool.

It's useful in some scenarios and not in others.

And you just have to learn how to use the tool appropriately for your use case and to get what you need out of it.

And sometimes that's not using it because it's just going to take up more time than it will to be productive.

But other times it's fully juices up your productivity and you can get more done.

It's give and take.

But I don't think it's going to go anywhere because, as you said, Michael, there are even academics doing research now.

There's open weight models as well.

There's a lot of different ways to run this, whether you're at the scale of the frontier models that are doing these huge trainings or you're doing something local and more specialized.

So I think the general use of AI isn't going anywhere.

I think it's just the question of how far can this current trend go, and where will it... I want to say "stop," but that plays into the whole idea that it's going to completely go away.

I don't think it ever will.

I think it's just going to be, where are we going to start to potentially bump up against limits?

Gregory Kapfhammer

One thing that I'll say is that many of these systems are, to me, almost like a dream come true.

Now, admittedly, it's the case that the systems I'm building are maybe only tens of thousands or hundreds of thousands of lines, but I can remember thinking to myself, how cool would it be if I had a system that could automatically refactor and then add test cases and increase the code coverage and make sure all my checkers and linters pass, and do that automatically and continue the process until it achieved its goal.

And I remember thinking that five to seven years ago, I would never realize that goal in my entire lifetime.

And now, when I use Anthropic's models through OpenCode or Claude Code, it's incredible how much you can achieve so quickly, even for systems that are of medium to moderate scale.

So from my vantage point, it is a really exciting tool.

It's incredibly powerful.

And what I have found is that the LLMs are much better when I teach them how to use tools, and the tools they're using are actually really quick, fast ones that can give rapid feedback to the LLM and tell it whether it's moving in the right direction or not.

Michael Kennedy

Yeah, there's an engineering angle to this.

It's not just vibe coding if you take the time to learn it.

Jodie Burchell

There was actually a very interesting study.

I don't think the study itself has been released.

I haven't found it yet, but I saw a talk on it by some guys at Stanford.

So they call it the 10K developer study.

And basically what they were doing was studying real code bases, including, I think, 80% that were actually private code bases, and seeing the point where the team started adopting AI.

And so their findings are really interesting and nuanced.

And I think they probably intuitively align with what a lot of us have experienced with AI.

So basically, yes, there are productivity boosts, and it produces a lot of code, but the code tends to be worse than the code you would write and also introduces more bugs.

So when you account for the time that you spend refactoring and debugging, you're still more productive.

But then it also depends on the type of project, as Gregory was saying.

So it's better for greenfield projects, it's better for smaller code bases.

It's better for simpler problems and it's better for more popular languages because obviously there's more training data.

And so this was actually, I like this study so much.

I'll actually share it with you, Michael, if you want to put it in the show notes, but it shows that, yeah, the picture is not that simple and all this conflicting information and conflicting experiences people were having line up completely with this.

So again, like I work at an IDE company, it's tools for the job.

It's not like your IDE will replace you.

AI is not going to replace you.

It's just going to make you maybe more productive sometimes.

Michael Kennedy

Yeah.

Wait, IDE, you work for me.

Right.

It's not about you.

Jodie Burchell

But then I work for the IDE.

Michael Kennedy

This portion of Talk Python To Me is brought to you by Sentry.

Let me ask you a question.

What if you could see into the future?

We're talking about Sentry, of course.

So that means seeing potential errors, crashes, and bugs before they happen, before you even accept them into your code base.

That's what Sentry's Seer AI Code Review offers.

You get error prediction based on real production history.

Seer AI Code Review flags the most impactful errors your PR is likely to introduce before merge, using your app's error and performance context, not just generic LLM pattern matching.

Seer will then jump in on new PRs with feedback and warnings if it finds any potential issues.

Here's a real example.

On a new PR related to a search feature in a web app, we see a comment from the Seer by Sentry bot in the PR.

And it says: potential bug, the process search results function can enter an infinite recursion when a search query finds no matches, as the recursive call lacks a return statement and a proper termination condition.
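The bug pattern described here can be sketched in a few lines. This is a hypothetical reconstruction for illustration only; the function names and logic are invented, not the actual code from the PR.

```python
# Hypothetical reconstruction of the bug pattern described above:
# a recursive call with no return statement and no termination
# condition, so an empty result set recurses forever.
def process_search_results_buggy(query, results):
    if not results:
        # Bug: nothing stops the recursion, and its value is discarded.
        process_search_results_buggy(query, results)
    return [r for r in results if query in r]

# Fixed version: return early when there are no matches.
def process_search_results(query, results):
    if not results:
        return []  # proper termination condition
    return [r for r in results if query in r]

print(process_search_results("py", ["pycon", "rustconf"]))  # → ['pycon']
print(process_search_results("py", []))  # → []
```

Calling the buggy version with an empty list would raise a `RecursionError` once Python's recursion limit is hit, which is the kind of failure the review bot is flagging before merge.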

And Seer AI Code Review also provides additional details which you can expand for further information on the issue and suggested fixes.

And bam, just like that, Seer AI Code Review has stopped a bug in its tracks without any devs in the loop.

A nasty infinite recursion bug never made it into production.

Here's how you set it up.

You enable the GitHub Sentry integration on your Sentry account, enable Seer AI on your Sentry account, and on GitHub, you install the Seer by Sentry app and connect it to your repositories that you want it to validate.

So jump over to Sentry and set up Code Review for yourself.

Just visit talkpython.fm/seer-code-review.

The link is in your podcast player show notes and on the episode page.

Thank you to Sentry for supporting Talk Python and me.

Reuven Lerner

I mean, the other thing is, when a lot of people talk about AI and LLMs and so forth in the context of coding, it's the LLM writing code for us.

And maybe because I'm not doing a lot of serious coding, it's more instruction and so forth.

I use it as, like, a sparring or brainstorming partner. So it does, you know, checking of my newsletters for language and for tech edits, and just sort of exploring ideas.

And for that, maybe it's because I do everything in the last minute and I don't have other people around or I'm lazy or cheap and don't want to pay them.

But definitely the quality of my work has improved dramatically.

The quality of my understanding has improved, even if it never wrote a line of code for me.

Just getting that feedback on a regular automatic basis is really helpful.

Michael Kennedy

Yeah, I totally agree with you.

All right.

We don't want to spend too much time on this topic, even though I believe Jody has put her finger on what might be the biggest tidal wave of 2025.

But still, a quick parting thoughts.

Anyone else?

Brett Cannon

I'm glad I'll never have to write bash from scratch ever again.

Michael Kennedy

Tell me about it.

Yeah.

Barry Warsaw

I'll just say, anecdotally, the thing that I love about it is when I need to do something and I need to go through docs, online docs, for whatever it is, you know, it might be GitLab or some library that I want to use or something like that.

I never even search for the docs.

I never even try to read the docs anymore.

I just say, hey, you know, to whatever model: I need to set up this website.

Just tell me what to do, or just do it.

And it's an immense time and productivity saver.

And then it gets me bootstrapped to the place where now I can start to be creative.

I don't have to worry about just like digging through pages and pages and pages of docs to figure out one little setting here or there.

That's an amazing time saver.

Gregory Kapfhammer

Yeah, that's a really good point.

Another thing that I have noticed, there might be many things for which I had a really good mental model, but my brain can only store so much information.

So for example, I know lots about the abstract syntax tree for Python, but I forget that sometimes.

And so it's really nice for me to be able to bring that back into my mind quickly with an LLM.

And if it's generating code for me that's doing a type of AST parsing, I can tell whether that's good code or not because I can refresh that mental model.

So in those situations, it's not only the docs, but it's something that I used to know really well that I have forgotten some of.

And the LLM often is very powerful when it comes to refreshing my memory and helping me to get started and move more quickly.
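As a small concrete example of the kind of AST parsing Gregory mentions, here's a minimal sketch using the standard library's `ast` module; the sample source being parsed is invented for illustration.

```python
import ast

# A tiny invented code sample to parse.
source = """
def greet(name):
    return f"hello, {name}"

def add(a, b):
    return a + b
"""

tree = ast.parse(source)

# Walk every node in the tree and collect function definitions by name.
func_names = [node.name for node in ast.walk(tree)
              if isinstance(node, ast.FunctionDef)]
print(func_names)  # → ['greet', 'add']
```

This is exactly the kind of detail that's easy to forget and quick to refresh: `ast.walk` yields every node in the tree, and an `isinstance` check picks out the node types you care about.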

Michael Kennedy

All right.

Out of time, I think.

Let's move on to Brett.

What do you got, Brett?

Brett Cannon

Well, I actually originally said we should talk about AI, but Jodie had a way better pitch for it than I did, because my internal pitch was a little bit, "AI. Do I actually have to write a paragraph explaining why?"

Then Jodie actually did write the paragraph.

So she did a much better job than I did.

So the other topic I had was using tools to run your Python code.

And what I mean by that is traditionally, if you think about it, you install the Python interpreter, right?

Hopefully you create a virtual environment, install your dependencies, and you call the Python interpreter in your virtual environment to run your code.

Those are all the steps you went through to run stuff.

But now we've got tools that will compress all that into a run command, just do it all for you.

And it seems like the community has shown a level of comfort with that, that I'd say snuck up on me a little bit, but I would say that I think it's a good thing, right?

It's showing us, and I'm going to say "us" as the junior core developer here on this call, sorry to make you two feel old, but admittedly, Barry did write my letter of recommendation to my master's program.

So what happened was, like, yeah, we had Hatch and PDM, Poetry before that, and uv as of last year, all kind of come through and build on each other and take ideas from each other, and kind of just slowly build up this repertoire of tool approaches. They all kind of have a baseline, synergy isn't quite the right word, but a shared approach to certain things, with their own twists and added takes.

But in general, this whole like, you know what, you can just tell us to run this code and we will just run it, right?

Like inline script metadata coming in and helping make that more of a thing.

Disclaimer, I was the PEP delegate for getting that in.

But I just think that's been a really awesome trend, and I'm hoping we can kind of leverage that a bit.

Like I have personal plans that we don't need to go into here, but like I'm hoping as a Python core team, we can kind of like help boost this stuff up a bit and kind of help keep a good baseline for this for everyone.

Because I think it's shown that Python is still really good for beginners.

You just have to give them tools that hide some of the details so they don't shoot themselves in the foot, and it still leads to a great outcome.

Michael Kennedy

Yeah, 2025 might be the year that the Python tools stepped outside of Python.

Instead of you installing Python and then using the tools, you use the tool to get Python, right?

Like uv and PDM and others.

Brett Cannon

Yeah, and inverted the dependency graph in terms of just how you put yourself in, right?

I think the interesting thing is these tools treat Python as an implementation detail almost, right?

Like when you just say uv run or hatch run or pdm run something, these tools don't make you have to think about the interpreter.

It's just a thing that they pull in to make your code run, right?

It's not even necessarily something you have to care about if you choose not to.

And it's an interesting shift in that perspective, at least for me.

But I've also been doing this for a long time.

Barry Warsaw

I think you're really onto something.

And what I love at sort of a high level is this, I think there's a renewed focus on the user experience.

And like uv plus the PEP 723, the inline metadata, you know, you can put uv in the shebang line of your script.

And now you don't have to think about anything.

You get uv from somewhere, and then it takes care of everything.

And Hatch can work the same way, I think, for developers.
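A minimal sketch of what Barry describes: PEP 723 inline script metadata plus uv on the shebang line. The shebang and the `# /// script` block are real syntax; the script body itself is just an invented stand-in.

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
# With the metadata block above, `uv run script.py` (or executing the
# file directly via the shebang) fetches a suitable interpreter,
# resolves any listed dependencies, and runs the script, with no
# manual virtual environment setup.

total = sum(range(10))
print(total)  # → 45
```

Listing real dependencies (e.g. `dependencies = ["rich"]`) is where this shines: uv resolves and installs them on the fly the first time the script runs, so sharing the file really is sharing the whole setup.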

But with this renewed focus, installing your Python executable is something you don't really have to think about, because those things are very complicated, and people just want to hit the ground running.

And so if you think about the previous discussion about AI, I just want things to work.

I know what I want to do.

I can see it.

I can see the vision of it.

And I just don't want to.

An analogy is like when I first learned Python and I came from C++ and all those languages.

And I thought, oh my gosh, just to get like, hello world, I have to do a million little things that I shouldn't have to do.

Like create a main and get my braces right and get all my variables right and get my pound includes correct.

And now I don't have to think about any of that stuff.

And the thing that was eye-opening for me with Python was the distance between vision of what I wanted and working code just really narrowed.

And I think that as we are starting to think about tools and environments and how to bootstrap all this stuff, we're also now taking all that stuff away.

Because people honestly don't care.

I don't care about any of that stuff.

I just want to go from, like, I woke up this morning and had a cool idea, and I just want to get to work.

Michael Kennedy

Or you wanted to share it so you could just share the script and you don't have to say, here's your steps that you get started with.

Barry Warsaw

Exactly.

Exactly.

Reuven Lerner

I want to thank the two of you for, oh, sorry, sorry, go ahead.

I was just going to say, like, for years teaching Python, the first question is, how do we get it installed?

At first, it surprised me how difficult it was for people.

Because like, oh, come on, we just got Python.

Like, what's so hard about this?

But it turns out it's a really big barrier to entry for newcomers.

And I'm very happy that JupyterLite now has solved its problems with input.

And it's like huge.

But until now, I hadn't really thought about starting with uv because it's cross-platform.

And if I say to people in the first 10 minutes of class, install uv for your platform, and then say uv init your project, bam, you're done.

It just works.

And then it works cross-platform.

This is mind-blowing.

And I'm going to try this at some point.

Thank you.

Gregory Kapfhammer

I can comment on the mind-blowing part because now when I teach undergraduate students, we start with uv in the very first class.

And it is awesome.

There were things that would take students, even very strong students who've had lots of experience, it would still take them a week to set everything up on their new laptop and get everything ready and to understand all the key concepts and know where something is in their path.

And now we just say, install uv for your operating system and get running on your computer.

And then, hey, you're ready to go.

And I don't have to teach them about Docker containers, and I don't have to tell them how to install Python with some package manager. All of those things just work. And I think from a learning perspective, whether you're in a class, or you're in a company, or you're teaching yourself, uv is absolutely awesome.

Jodie Burchell

I'm actually wondering whether I am the one who is newest to Python here. I taught myself Python in 2011, so I was at, like, the Python 2.7 stage, but it was my first programming language. I was just procrastinating during my PhD.

And I was like, I should learn to program.

So I just taught myself Python.

And I can tell you, when you do not come from an engineering background, you're like, what is Python?

What is Python doing?

Why am I typing Python to execute this hello world?

And if you're kind of curious, you get down a rabbit hole before you even get to the point where you're just focusing on learning the basics.

And so it's exactly as I was going to say with Reuven, whether you've thought about it for teaching. Because we're now debating for Humble Data, which is a beginner's data science community that I'm part of, whether we switch to uv.

This was Chuck's idea because it does abstract away all these details.

The debate I have is, is it too magic?

This is kind of the problem, because I also remember learning about things like virtual environments, because again, this was my first programming language, and thinking, oh, this is a very good idea.

This is best practices.

And it's also a debate we have in PyCharm, right?

Like how much do you magic away the fundamentals versus making people think a little bit, but I'm not sure.

Michael Kennedy

All right.

Like, would you even let somebody run without a virtual environment?

That's a stance you could take.

Jodie Burchell

I used to when I first learned Python, because it was too complicated, but then I learned better.

But yes.

Thomas Wouters

The consideration here is that hiding the details and having all this magic just work is great, as long as it works.

And the question is, how is it going to break down and how are people going to know how to deal when it breaks down, if you hide all the magic?

And let's say before we had virtual envs, installing packages was very much in the you-had-to-know-all-the-details territory, because it was very likely going to break down in some way, because you would end up with weird conflicts or multiple copies of a package installed in different parts of the system.

When we got virtual envs, we sort of didn't have to worry about that anymore, because we were trained that you can just blow away the virtual env and it just works.

And with uv, we're back into, this looks like a single installation.

We don't know what's going to go on, but we've learned, we as a community and also the people working on uv, we have learned from those earlier mistakes or not, maybe not mistakes, but consequences of the design.

And they have created something that appears to be very stable, where it's unlikely the magic will break.

And when the magic does break, it's obvious what the problem is, or it automatically fixes itself.

So, like, it's not reusing broken installations and that kind of thing.

So the risk now, as it turns out, I think as is proven by the community adopting uv so fast and so willingly, I think it's acceptable.

Well, yeah, I think it's proven itself.

It's clear that it's worth the potential of discovering weird edge cases later, both because it's probably low likelihood, but also because the people behind uv, Astral, have proven that they would jump in and fix those issues, right?

They would do anything they need to keep uv workable the same way.

And they have a focus that Python as a whole cannot have, because they cater to fewer use cases than Python as a whole needs to.

Michael Kennedy

In the audience, Galano says: as an enterprise tech director and Python coder, I believe we should hide the magic, which empowers the regular employee to do simple things that make their job easier.

Yeah.

Barry Warsaw

This notion of abstractions, right, has always been there in computer science.

And, you know, we've used tools or languages or systems where we've tried to bring that abstraction layer up so that we don't have to think about all these details, as I mentioned before.

The question is, that's always the happy path.

And when I'm trying to teach somebody something like, here's how to use this library or here's how to use this tool, I try to be very opinionated to keep people on that happy path.

Like, assume everything's going to work just right.

Here's how you go down that path to get the thing done that you want.

The question really is when things go wrong, how narrow is that abstraction?

And are you able to see, even when you're just curious, what's really going on underneath the hood?

Of course, that's not a really good analogy today, because cars are basically computers on wheels that you can't really understand how they work.

Brett Cannon

But back in your day.

But back in my day, we were changing spark plugs, you know, and cranking that window down.

Exactly.

Barry Warsaw

So I think we always have to leave that room for the curious and the bad path, where, when things go wrong, or when you're just like, you know what, I understand how this works, but I'm kind of curious about what's really going on.

How easy is it for me to dive in and get a little bit more of that background, you know, a little bit more of that understanding of what's going on.

I want the magic to decompose,

Brett Cannon

right? Like, you should be able to explain the magic path via more decomposed steps using the tool, all the way down to what the tools do behind the scenes. Just to admit it, the reason I brought this up, and I've been thinking about this a lot, is I'm thinking of trying to get the Python Launcher to do a bit more. Because one interesting thing we haven't really brought up here is we're all saying uv, uv, uv. uv is a company. They might disappear, and we haven't de-risked ourselves from that.

Now we do have Hatch, we do have PDM, but as I said, there's kind of a baseline I think they all share that they would probably be okay with the Python Launcher just doing, because that's based on standards, right?

Because that's the other thing that there's been a lot of work that has led to this step, right?

Like we've gotten way more packaging standards, we've got PEP 723, like we mentioned.

There's a lot of work that has led to this point that all these tools can lean on to have them all produce an equivalent outcome, because that's how they're expected to behave.
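To make the PEP 723 mention concrete, here is a minimal sketch of what that inline script metadata looks like; the version bound and empty dependency list are illustrative. Because the block is just comments, the file still runs under any ordinary interpreter, while tools like uv read it to set up an environment first.

```python
# A minimal sketch of PEP 723 inline script metadata. The block is an
# ordinary comment, so any interpreter can still run the file; tools
# like `uv run` parse it to build an environment before executing.
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
import sys

# The script itself just proves it runs under a plain interpreter.
print(f"running under Python {sys.version_info.major}.{sys.version_info.minor}")
```

A tool-agnostic runner only sees comments; a PEP 723-aware runner sees a declared environment.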

And so I think it's something we need to consider, how we make sure. Like, by the way, uv, I know the people, they're great.

I'm not trying to disparage them or say they're going to go away, but it is something we have to consider.

And I will also say, Jody, I do think about this for teaching because I'm a dad now and I don't want my kid coming home when they get old enough to learn Python and go, hey, dad, why is getting Python code running so hard?

So I want to make sure that that never happens.

Jodie Burchell

But they fall in love with it from the start.

Michael Kennedy

I realized something for the 2026 year in review.

I have to bring a sign that says time for next topic because we got a bunch of topics and we're running low on time.

So, Thomas, let's jump over to yours.

Oh, and I had two topics as well.

Thomas Wouters

So I'm only going to have to pick my favorite child, right?

That's terrible.

My second favorite child is Lazy Imports, which is a relatively new development.

So we'll probably not get to that.

And just accepted.

Yes, it's been accepted and it's going to be awesome.

So I'll just give that a shout out and then move to my favorite child, which is free threaded Python.

For those who were not aware, the global interpreter lock is going away.

I am stating it as a fact.

It's not actually a fact yet, but it, you know, that's because the steering council hasn't realized the fact yet.

Brett Cannon

It is trending towards.

Thomas Wouters

Well, I was originally on the steering council that accepted the proposal to add free threading as an experimental feature. We had this idea of adding it as experimental, then making it supported but not the default, and then making it the default.

And it was all a little vague and, and up in the air.

And then I didn't get reelected for the steering council last year, which I was not sad about at all.

I sort of ran on a, well, if there's nobody better, I'll do it, but otherwise I have other things to do.

And it turns out those other things were making sure that free-threaded Python landed in a supported state.

So I lobbied the steering council quite hard, as Barry might remember at the start of the year, to get some movement on this, like get some decision going.

So for Python 3.14, it is officially supported.

The performance is great.

It's like between a couple of percent slower and 10% slower, depending on the hardware and the compiler that you use.

It's basically the same speed on macOS, which is really a combination of the ARM hardware and Clang specializing things, but it's basically the same speed, which, wow.

And then on recent GCCs on Linux, it's like a couple of percent slower.

The main problem is really community adoption: getting third-party packages to update their extension modules for the new APIs and the things that by necessity sort of broke, and also supporting free threading in a good way in packages. For Python code, it turns out there are very few changes that need to be made for things to work well under free threading.

They might not be entirely thread safe, but almost always in cases where it wasn't thread safe before either, because the GIL doesn't actually affect thread safety.

Just the likelihood of things breaking.
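For readers who want to check what Thomas is describing on their own machine, here is a small hedged probe. Note that sys._is_gil_enabled() is a private helper added in Python 3.13, so the sketch falls back to assuming the GIL is on where it does not exist.

```python
import sys
import sysconfig

# Py_GIL_DISABLED is set at build time for free-threaded ("t") builds;
# on standard or older builds the config var is None or 0.
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# sys._is_gil_enabled() is private and only exists on 3.13+; fall back
# to assuming the GIL is enabled on older interpreters.
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()

print("free-threaded build:", free_threaded_build)
print("GIL currently enabled:", gil_enabled)
```

On a 3.14t build the first line reports True, and the second can still be True if the GIL was re-enabled at runtime.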

Michael Kennedy

I do think the mindset of the Python community hasn't really been focused on creating thread-safe code, because the GIL is supposed to protect us.

But as soon as it takes multiple steps, then all of a sudden, it's just less likely.

It's not that it couldn't happen.

Thomas Wouters

Yeah, that's my point, right?

The GIL never gave you thread safety.

The GIL gave CPython's internals thread safety.

It never really affected Python code and it very rarely affected thread safety in extension modules as well.

So they already had to take care of making sure that the global interpreter lock couldn't be released by something that they ended up calling indirectly. So it's actually not that hard to port most things to support free threading. And the benefits: we've seen some experimental work, because, you know, it's still new, there are still a lot of things that don't quite support it, there are still places where thread contention slows things down a lot, but we've seen a lot of examples of really promising, very parallel problems that now speed up by 10x or more.

And it's going to be really exciting in the future.

And it's in 2025 that this all started.

I mean, Sam started it earlier, but he's been working on this for years, but it landed in 2025.

Michael Kennedy

It dropped its experimental stage in 3.14, basically.

Yeah.

I was going to say, were we all,

Brett Cannon

the three of us on the steering council at the same time when we decided to start the experiment

Barry Warsaw

for free threading?

I think Barry wasn't on it.

Yeah, I missed a couple of years there, but I'm not sure.

No, I totally agree.

I think free threading is one of the most transformative developments for Python, certainly since Python 3, but even maybe more impactful because of the size of the community today.

Personally, you know, not necessarily speaking as a current or potentially former steering council member.

We'll see how that shakes out.

But I think it's inevitable.

I think free threading is absolutely the future of Python, and I think it's going to unlock incredible potential and performance.

I think we just have to do it right.

And so I talked to lots of teams who are building various software all over the community.

And I actually think it's more of an educational and maybe an outreach problem than it is a technological problem.

I mean, yes, there are probably APIs that are missing that will make people's lives easier.

There's probably some libraries that will make other code a little easier to write or whatever or to understand.

But like all that's solvable.

And I think really reaching out to the teams that are, you know, like Thomas said, that are building the ecosystem, that are moving the ecosystem to a free threading world.

That's where we really need to spend our effort on.

And we'll get there.

It won't be that long.

It certainly won't be as long as it took us to get to Python 3.

Reuven Lerner

I'm sort of curious as someone who's not super experienced with threading or, you know, basic concurrency.

I mean, I've used it, but I feel like now we have threads, especially with free threading and sub interpreters and multiprocessing and asyncio.

And I feel like for many people now it's like, oh, my God, which one am I supposed to use?

And for someone who's experienced, you can sort of say, well, this seems like a better choice.

But are there any plans to sort of try to have a taxonomy of what problems are solved by which of these?

Thomas Wouters

The premise here is that everyone would be using one or more of these low-level techniques that you mentioned.

And I think that's not a good way of looking at it.

Like AsyncIO is a library that you want to use for the things that AsyncIO is good at.

And you can actually very nicely combine it with multiprocessing, with subprocesses, with subinterpreters (just to make it clear, those are two very separate things), and with multithreading, both with and without free threading.

And it solves different problems or it gives you different abilities within the AsyncIO framework.

And the same is true for like GUI frameworks.

I mean, GUI frameworks usually want threads for multiple reasons, but you can use these other things as well.

I don't think it's down to teaching end users when to use or avoid all these different things.

I think we need higher level abstractions for tasks that people want to solve.

And then those can decide on what for their particular use case is a better approach.

For instance, PyTorch has multiple.

So, for people who don't know, it's used in AI, not just to train, but for generating large matrices and LLMs and what have you.

Part of it is loading data and processing.

And the basic ideas of AsyncIO are, oh, you can do all these things in parallel because you're not waiting on the CPU, you're just waiting on IO.

Turns out it is still a good idea to use threads for massively parallel IO because otherwise you end up waiting longer than you need to.

So a problem where we thought AsyncIO would be the solution and we never needed threads is actually much improved if we tie in threads as well.

And we've seen massive, massive improvements in data loader.

There's even an article, a published article from some people at Meta showing how much they improve the PyTorch data loader by using multiple threads.

But at a very low level, we don't want end users to need to make that choice, right?
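Thomas's point about tying threads into asyncio can be sketched with asyncio.to_thread, which ships in the standard library (3.9+); blocking_read here is a hypothetical stand-in for real blocking IO such as a file or network read.

```python
import asyncio
import time

def blocking_read(i: int) -> int:
    # Stand-in for a blocking IO call (a file or network read).
    time.sleep(0.05)
    return i

async def main() -> list:
    # asyncio.to_thread hands each blocking call to a worker thread,
    # so the event loop overlaps them instead of serializing them.
    return await asyncio.gather(
        *(asyncio.to_thread(blocking_read, i) for i in range(5))
    )

results = asyncio.run(main())
print(results)  # [0, 1, 2, 3, 4]
```

Five serial calls would take about a quarter second; overlapped in threads, they finish in roughly the time of one.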

Brett Cannon

concurrent.futures is a good point, right?

Like all of these approaches are all supported there and it's a unified one.

So if you were to teach this, for instance, you could say use concurrent.futures.

These are all there.

This is the potential tradeoff.

Like basically use threads.

It's going to be the fastest, unless there's some module you have that's screwing up because of threads; then use subinterpreters.

And if for some reason sub interpreters don't work, you should move to the processing pool, the process pool.

But I mean, basically, you just kind of go for the fastest stuff.

And if for some reason it doesn't work, use the next fastest, and just kind of do it that way.

After that, then you start to the lower level.

Like, okay, why do I want to use subinterpreters instead of threads?

Those kinds of things.

But I think that's, as I think we're all sensing, a different level of abstraction, which is a term we keep bringing up today.

It's a level that a lot of people are not going to have to care about.

I think the libraries are the ones that are going to have to care about this and who are going to do a lot of this for you.
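Brett's "use the fastest, fall back to the next" advice maps onto concurrent.futures, where threads, processes, and (in 3.14) subinterpreter pools all share the same Executor interface; the sketch below uses the thread pool, and swapping backends is a one-line change.

```python
from concurrent.futures import ThreadPoolExecutor

def work(n: int) -> int:
    return n * n

# ThreadPoolExecutor, ProcessPoolExecutor, and 3.14's
# InterpreterPoolExecutor all implement the same Executor interface,
# so falling back to "the next fastest" backend is a one-line swap
# of the pool class.
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(work, range(5)))

print(squares)  # [0, 1, 4, 9, 16]
```

Because the interface is shared, teaching material can demonstrate the taxonomy without rewriting the worker code for each backend.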

Michael Kennedy

Let me throw this out on our way out the door to get to Reuven's topic.

I would love to see it solidify around async and await.

And you just await a thing, maybe put a decorator on something.

Say, this one, I want this to be threaded.

I want this to be IO.

I want this.

You just use async and await and don't have to think about it. But that's my dream.

Reuven, what's your dream?

Reuven Lerner

Wow.

How long do you have?

Michael Kennedy

No, what's your topic?

Reuven Lerner

So I want to talk about Python ecosystem and funding.

When I talk to people about Python and about how it's open source, they're like, oh, right, it's open source.

That means I can download it for free.

And from their perspective, that's sort of where it starts and ends.

And the notion that people work on it, the notion that it needs funding, the notion that there's a Python Software Foundation that supports a lot of these activities and the infrastructure, is completely unknown to them, and even quite shocking for them to hear.

But Python is in many ways, I think, starting to become a victim of its own success, that we've been dependent on companies for a number of years to support developers and development.

And we've been assuming that the PSF, which gives money out to lots of organizations to run conferences and workshops and so forth, can sort of keep scaling up and that they will have enough funding.

And we've seen a few sort of shocks that system in the last year.

The PSF sort of pared down, about a year ago, what it would give money for.

And then about five months ago, six months ago, I think it was in July or August, they said, actually, we're not going to be able to fund anything for about a year now.

And then there was the government grant, I think from the NSF that they turned down.

And I'm not disputing the reasons for that at all.

Basically, it said, well, we'll give you the money if you don't worry about diversity and inclusion.

And given that that's like a core part of what the PSF is supposed to do, they could not do that without shutting the doors, which would be kind of counterproductive.

And so I feel like we're not yet there, but we're sort of approaching what I'm going to term a crisis in funding Python.

The needs of the community keep growing and growing, whether it's workshops, whether it's PyPI, whether it's conferences.

And companies are getting squeezed.

And the number of people, it always shocks me every time there are PSF elections, the incredibly small number of people who vote.

Which means that, let's assume, maybe half the people who are members vote, or a third of the people.

Like for the millions and millions of people who program Python out there, an infinitesimally small proportion of them actually join and help to fund it.

So I'm not quite sure what to do with this other than express concern.

But I feel like we've got to figure out ways to fund Python and the PSF in new ways that will allow it to grow and scale as needed.

Thomas Wouters

I couldn't agree more.

Obviously, the PSF is close to my heart because I was on the board for, I think, a total of six or nine years or something over, you know, the last 25.

For six months I was also the interim general manager, because Ewa left and we hadn't hired Deb yet, while I was on the board.

I remember signing the sponsor contracts for the companies that came in wanting to sponsor Python.

And it is like, it's ridiculous how, and I can say this working for a company that is one of the biggest sponsors of the PSF and has done so for years.

It's ridiculous how small those sponsorships are and yet how grateful we were that they came in because every single one has such a big impact.

You can do so much good with the money that comes in.

We need more corporate sponsorships more than anything.

Like, I mean, obviously a million people giving the PSF a couple of bucks would be fantastic. And let's be clear, I'm not on the board anymore.

But I think the big players in the big corporate players where all the AI money is, for instance, having done basically no sponsorship of the PSF is mind-boggling.

It is a textbook tragedy of the commons right there, right?

They rely entirely on PyPI and PyPI is run entirely with community resources, mostly because of very generous and consistent sponsorship, basically by Fastly, but also the other sponsors of the PSF.

And yet very large players use those resources more than anyone else and don't actually contribute.

Jodie Burchell

Georgi Ker, she wrote this fantastic blog post saying pretty much this straight after EuroPython.

So EuroPython this year was really big, actually.

And she was wandering around looking at the sponsor booths and the usual players were there, but none of these AI companies were there.

And the relationship between AI, if you want to call it that (let's call it ML and neural networks), some of the really big companies, and Python is actually really complex.

Obviously, a lot of these companies and some of us are here, employ people to work on Python.

Companies like Meta and Google have contributed massively to frameworks like PyTorch, TensorFlow, Keras.

So it's not as simple a picture as saying cough up money all the time.

Like there's a more complex picture here, but definitely there are some notable absences.

And we talked about the volume of money going through.

I totally agree with the sentiment.

When the shortfall came and the grants program had to shut down, we were brainstorming at JetBrains, like maybe we can do some sort of, I don't know, donate some more money and call other companies to do it.

Or we can call on people in the community.

And I was like, I don't want to call on people in the community to do it because they're probably the same people who are also donating their time for Python.

Like it's just squeezing people who give so much of themselves to this community even more.

And it's not sustainable.

Like Reuven said, if we keep doing this, the whole community is going to collapse.

Like I'm sure we've all had our own forms of burnout from giving too much.

Brett Cannon

I'm going to pat ourselves on the back here.

Everyone on this call who works at a company, those companies are all sponsors of the PSF.

Thank goodness.

But there's obviously a lot of people not on this call who are not sponsors.

And I know personally, I wished every company that generated revenue from Python usage donated to the PSF. And I think part of the problem is some people think it has to be, like, a hundred thousand dollars. It does not have to be a hundred thousand dollars. Now, if you can afford that amount, please do, or more; there are many ways to donate more than the maximum amount for getting on the website. But it's one of these funny things where a lot of people just go, oh, it's not me, right? Like, even startups don't. Some do, to give those ones credit, but others don't, because, like, oh, we're burning through capital. And I was like, yeah, but we're asking for less than you'd pay a dev, by a lot, per year, right? Like, the amount we actually ask for to get to the highest tier is still less than a common developer in Silicon Valley, if we're going to price-point to a geographical location we can all kind of comprehend.

Thomas Wouters

I'm going to steal a Ned Batchelder observation here: what the PSF would be happy with is less than a medium company spends on the tips of expensed meals every

year.

Brett Cannon

Yeah.

Yeah.

And it's a long running problem, right?

Like, I mean, I've been on the PSF for a long time, too.

I've not served as many years as Thomas on the board, but I was like executive vice president because we had to have someone with that title at some point.

It's always been a struggle, right?

Like I and I also want to be clear, I'm totally appreciative of where we have gotten to, right?

Because for the longest time, I was just dying for paid staff on the core team.

And now we have three developers as residents.

Thank goodness.

Still not enough to be clear.

I want five.

And I've always said that, but I'll happily take three.

But it's one of these things where it's a constant struggle.

And it got a little bit better before the pandemic just because everyone was spending on conferences and PyCon US is a big driver for the Python Software Foundation.

And I know EuroPython is a driver for the EuroPython Society.

But then COVID hit and conferences haven't picked back up.

And then there's a whole new cohort of companies that have come in post-pandemic that have never had that experience of going to PyCon and sponsoring PyCon.

And so they don't think about, I think, sponsoring PyCon or the PSF, because that's also a big kind of in-your-face, you-should-help-sponsor-this.

And I think it's led to this kind of lull where overall spending has gone down, and new entrants into the community have not had that experience and thought about it.

And it's led to this kind of dearth where, yeah, that PSF had to stop giving out grant money.

And it sucks.

And I would love to see it not be that problem.

Reuven Lerner

I want to add one interesting data point that I discovered in short.

Keep it short.

Yes.

NumFocus has about twice the budget of the PSF.

I was shocked to discover this.

So basically it is possible to get money from companies to sponsor development of Python related projects.

And I don't know what they're doing that we aren't.

And I think it's worth talking and figuring it out.

Michael Kennedy

We need a fundraiser and marketer in residence, maybe.

Who knows?

Thomas Wouters

Loren does a great job, to be clear.

The PSF has Loren, and Loren is that.

But it's still hard.

Brett Cannon

We have someone doing it full time at the PSF, and it's just hard to get companies to give up cash.

Michael Kennedy

Yeah, and what do we get in return?

Well, we already get that.

So, yeah, I know.

All right, Barry.

Barry Warsaw

To just, you know, shift gears into a different area: something that I've been thinking about a lot over this past year on the steering council.

Thomas, I'm sure, is going to be very well aware, having been instrumental in the lazy imports PEP, PEP 810.

We have to sort of rethink how we evolve Python, how we propose changes to Python, and how we discuss those changes in the community.

Because I think one of the things that I have heard over and over and over again is that authoring PEPs is incredibly difficult and emotionally draining and it's a time sink.

And leading those discussions on discuss.python.org, which we typically call DPO, can be toxic at times and very difficult.

So one of the things that I realized as I was thinking about this is that PEPs are 25 years old now, right?

So we've had this, and not only are PEPs old, but we've gone through at least two, if not more, sort of complete revolutions in the way we discuss things.

You know, the community has grown incredibly.

The developer community is somewhat larger, but just the number of people who are using Python and who have an interest in it has grown exponentially.

So it has become really difficult to evolve the language in the standard library and the interpreter.

And we need to sort of think about how we can make this easier for people and not lose the voice of the user.

And the number of people who actually engage in topics on DPO is the tip of the iceberg.

You know, we've got millions and millions of users out there in the world whom, for example, lazy imports will affect, free threading will affect, and who don't even know that they have a voice.

And maybe we have to basically represent that, but we have to do it in a much more collaborative and positive way.

That's something that I've been thinking about a lot.

And whether or not I'm on the steering council next year, I think this is something that I'm going to spend some time on trying to think about, you know, talk to people about ways we can make this easier for everyone.

Michael Kennedy

The diversity of use cases for Python in the last couple of years.

So complex.

Yes, exactly.

Brett Cannon

It should also be prefaced that Barry created the PEP process.

He should have started that one.

Barry Warsaw

It is that old.

Yeah.

Brett Cannon

By the way, just so everyone knows, these are not age jokes to be mean to Barry.

We've all known Barry long enough that we know Barry's okay with us making these jokes.

To be very, very clear.

Thomas Wouters

Also, I am almost as old as Barry, although I don't look as old as Barry.

Brett Cannon

Yeah, we're all around the same age anyway.

Thomas Wouters

Yeah, Barry and I have known each other for 25 years, and I've always made these jokes about him.

So it is different when you know each other in person.

Let's put it that way.

For the PEP process, I think for a lot of people, it's not obvious how difficult the process is.

I mean, it wasn't even obvious to me.

I saw people avoiding writing PEPs multiple times, and I was upset, like, on the steering council, right?

I saw people making changes where I thought, this is definitely something that should have been discussed in a PEP and the discussion should be recorded in a PEP and all that.

And I didn't understand why they didn't until, basically, PEP 810.

So I did PEP 779, which was giving free threading supported status, at the start of the year.

And the discussion there was, you know, sort of as expected, and it was already an accepted PEP.

It was just the question of how does it become supported?

That one wasn't too exhausting.

And then we got to Lazy Imports, which was Pablo, who is another steering council member, as well as a bunch of other contributors, including me and two of my co-workers and one of my former co-workers, who had all had a lot of experience with Lazy Imports, but not necessarily as much experience with the PEP process.

And Pablo took the front seat because he knew the PEP process and he's done like five PEPs in the last year or something, some ridiculous number.

And he shared with us the vitriol he got offline for just the audacity of proposing something that people disagreed with or something.

And that was like, this is a technical suggestion.

This is not a code of conduct issue where I have received my fair share of vitriol around.

This is a technical discussion.

And yet he gets this, these ridiculous accusations in his mailbox.

And for some reason, only the primary author gets it as well, which is just weird to me.

Brett Cannon

But people are lazy, Thomas, is what I think you just said.

Barry Warsaw

Remember, the steering council exists because Guido got the brunt of this for PEP 572, which was the walrus operator.

Right.

Which is just like this minor little syntactic thing that is kind of cool when you need it.

But like just the amount of anger and negative energy and vitriol that he got over that was enough for him to just say, I'm out, you know, and you guys figure it out.

And that cannot be an acceptable way to discuss the evolution of the language.
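For reference, the PEP 572 walrus operator Barry mentions binds a name inside an expression. A minimal illustration of the "kind of cool when you need it" cases:

```python
data = [1, 2, 3, 4, 5]

# Bind and test in one expression instead of calling len() twice.
if (n := len(data)) > 3:
    print(f"data has {n} elements")

# It also tidies read-until-sentinel loops.
values = iter([10, 20, 0, 30])
collected = []
while (v := next(values)) != 0:
    collected.append(v)
print(collected)
```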

Thomas Wourters

Especially since apparently now every single PEP author of any contentious or semi-contentious PEP gets this.

Although I have to say, PEP 810 had such broad support.

It was hard to call it contentious.

It's just there's a couple of very loud opinions, I guess.

And I'm not saying we shouldn't listen to people.

We should definitely listen to especially contrary opinions.

But there has to be a limit.

There has to be an acceptable way of bringing things up.

There has to be an acceptable way of saying, hey, you didn't actually read the PEP.

Please go back and reconsider everything you said after you've fully digested it, because everything's already been addressed in the PEP.

It's just really hard to do this in a way that doesn't destroy the relationship with the person you're telling this, right?

It's hard to tell people, hey, I'm not going to listen to you because, you know, you've done a bad job.

Brett Cannon

You've chosen not to inform yourself.

Barry Warsaw

I think you make another really strong point, Thomas, which is that there have been changes made to Python that really should have been a PEP.

And they aren't, because people, core developers, don't want to go through this gauntlet.

And so they'll just create a PR instead.

But that's also not good, because then, you know, we don't have the right level of consideration.

And you think about the way that, you know, if you're in your job and you're making a change to something in your job, you have a very close relationship to your teammates.

And so you have that kind of respect and hopefully, right, like compassion and consideration.

And you can have a very productive discussion about a thing and you may win some arguments and you may lose some arguments, but the team moves forward as one.

And I think we've lost a bit of that in Python.

Michael Kennedy

So that's not great.

I think society in general could use a little more civility and kindness, especially to strangers that they haven't met in forums, social media, driving, you name it.

Okay, but we're not going to solve that here, I'm sure.

So instead, let's do Gregory's topic.

Gregory Kapfhammer

Hey, I'm going to change topics quite a bit, but I wanted to call 2025 the year of type checking and language server protocols.

So many of us probably have used tools like mypy to check to see if the types line up in our code or whether or not we happen to be overriding functions correctly.

And so I've used mypy for many years and loved the tool and had a great opportunity to chat with the creator of it.

And I integrate that into my CI and it's really been wonderful.

And I've also been using a lot of LSPs, like, for example, Pyright or Pylance.

But in this year, one of the things that we've seen is, number one, Pyrefly from the team at Meta.

We've also seen ty from the team at Astral.

And there's another one called Zuban.

And Zuban is from David Halter.

David was also the person who created Jedi, which is another system in Python that helped with a lot of LSP tasks.

What's interesting about all three of the tools that I just mentioned is that they're implemented in Rust, and they have taken the opportunity to make the type checker and/or the LSP significantly faster.

So for me, this has changed how I use the LSP or the type checker and how frequently I use it.

And in my experience, it has helped me to take things that might take tens of seconds or hundreds of seconds and cut them down often to less than a second.

And it's really changed the way in which I'm using a lot of the tools like ty or Pyrefly or Zuban.

So I can share some more details if I'm allowed, Michael, but I would say 2025 is the year of type checkers and LSPs.
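To make the workflow concrete: the kind of mistake these tools catch is one that is legal to write but wrong to run. A tiny sketch, with `mean` as a made-up example function (the exact diagnostic wording varies by checker):

```python
def mean(values: list[float]) -> float:
    """Average a non-empty list of floats."""
    return sum(values) / len(values)


print(mean([1.0, 2.0, 3.0]))

# A checker such as mypy, ty, or Pyrefly flags the call below before the
# code ever runs, with a message along the lines of:
#   Argument 1 has incompatible type "str"; expected "list[float]"
# mean("abc")
```

When the checker answers in well under a second, you can afford to run it on every save rather than only in CI.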

Michael Kennedy

I think given the timing, let's have people give some feedback.

I personally have been using Pyrefly a ton and am a big fan of it.

Thomas Wouters

I don't know if I'm allowed to have an opinion that isn't "Pyrefly is awesome."

I mean, I'm not on the Pyrefly team, but I do regularly chat with people from the Pyrefly team.

Michael Kennedy

Tell people real quick what it is, Thomas.

Thomas Wouters

So Pyrefly is Meta's attempt at a Rust-based type checker.

And so it's very similar to ty.

Started basically at the same time, a little later.

Meta originally had a type checker called Pyre, which was written in OCaml.

They basically decided to start a rewrite in Rust.

And then that really took off.

And that's where we're going now.

Brett Cannon

Yeah.

Yeah.

I don't know what I can say because I'm actually on the same team as the Pylance team.

So, but no, I mean, I think it's good.

I think this is one of those interesting scenarios where some people realized, you know what, we're going to pay the penalty of writing a tool in a way that makes the tool faster but makes us go slower, because the overall win for the community is going to be a good win.

So it's worth that headache, right?

Not that I want to scare people off from writing Rust, but let's be honest, it takes more work to write Rust code than it does to write Python code.

But some people chose to make that trade off and we're all benefiting from it.

The one thing I will say that's kind of interesting from this, which hasn't gotten a lot of play yet because it's still being developed, is that Pylance is actually working with the Pyrefly team to define a type server protocol, TSP, so that a lot of these type servers can just kind of feed type information to a higher-level LSP and let that LSP handle things like symbol renaming.

Right.

Because the key thing here, and the reason there are so many different type checkers, is that there is a spec.

Right.

And everyone's trying to implement it.

But there are differences in terms of type inferencing.

And if you actually go listen to Michael's Talk Python To Me interview with the Pyrefly team, they did a nice little explanation of the difference between Pyright's approach and Pyrefly's approach.

And so there's a bit of variance.

But for instance, I think there's some talk now of trying to figure out, like, how do we make it so everyone doesn't have to reimplement how to rename a symbol, right?

That's kind of boring.

That's not where the interesting work is.

And that's not performant from the perspective of, you want to get that squiggly red line instantaneously, whether it's in VS Code or in PyCharm or whatever your editor is, right?

You want to get it as fast as possible,

Thomas Wouters

but the rename-

Brett Cannon

Jupyter.

Thomas Wouters

Jupyter.

No, not Emacs.

Everything but Emacs.

Brett Cannon

No, not Emacs.

Barry Warsaw

Just to bring things full circle, it's that focus on user experience, right?

Which is, yes, you want that squiggly line, but when things go wrong, when your type checker says, oh, you've got a problem, you know, like I think about as an analogy, how Pablo has done an amazing amount of work on the error reporting, right?

When you get an exception, you know, now you have a lot more clues about what it is that you actually have to change to fix the problem, right?

Like so many times years ago, you know, when people were using mypy, for example, and they'd have some complex failure of their type annotations and have absolutely no idea what to do about it.

And so getting to a place where now we're not just telling people you've done it wrong, but also here's some ideas about how to fix it.

Michael Kennedy

I think this is a full circle here because honestly, using typing in your Python code gives a lot of context to the AI when you ask for help.

If you just give it a fragment, it can't work with it.

Gregory Kapfhammer

That's true.

And also, if you can teach your AI agent to use the type checkers and use the LSPs, it will also generate better code for you.

I think the one challenge I would add to what Barry said a moment ago is that if you're a developer and you're using, say, three or four type checkers at the same time, you also have to be careful about the fact that some of them won't flag an error that the other one will flag.

So I've recently written Python programs, and even built a tool with one of my students named Benedek, that will automatically generate Python programs that cause type checkers to disagree with each other.

One will flag something as an error, but none of the other tools will.

And there are also cases where the new tools will all agree with each other, but disagree with mypy.

So there is a type checker conformance test suite.

But I think as developers, even though it might be the year of LSP and type checker, we also have to be aware of the fact that these tools are maturing and there's still disagreement among them.

and also just different philosophies when it comes to how to type check and how to infer.

And so we have to think about all of those things as these tools mature and become part of our ecosystem.

Michael Kennedy

Yeah, Greg, that last point is important.

Thomas Wouters

Out of curiosity, how did the things where the type checkers disagree match up with the actual runtime behavior of Python?

Was it like false positives or false negatives?

Gregory Kapfhammer

That's a really good question.

I'll give you more details in the show notes because we actually have it in a GitHub repository and I can share it with people.

But I think some of it might simply be related to cases where mypy is more conformant to the spec, but the other new tools are not as conformant.

So you can import overload from typing and then have a very overloaded function.

And mypy will actually flag the fact that it's an overloaded function with multiple signatures, whereas Pyright and Pyrefly and Zuban will not actually flag that, even though they should.
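Gregory's exact differential test cases live in his repository; as a generic illustration of the `typing.overload` machinery the checkers are interpreting, here is a sketch with a hypothetical `first` helper (the `@overload` stubs are erased at runtime and exist only for the checker):

```python
from typing import overload


@overload
def first(items: list[int]) -> int: ...
@overload
def first(items: list[str]) -> str: ...


def first(items):
    # Single runtime implementation; the stubs above let the checker
    # pick the right return type at each call site.
    return items[0]


print(first([1, 2, 3]))   # a checker infers int here
print(first(["a", "b"]))  # ...and str here
```

How strictly a given tool validates that the overload set is consistent with the implementation is exactly the kind of spot where tools can diverge.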

Michael Kennedy

Another big area is optional versus not optional.

Yes.

Like, are you allowed to pass a thing that is an optional string when the thing accepts a string?

Some stuff's like, yeah, it's probably fine.

Others are like, no, no, no.

This is an error that you have to do a check.

And if you want to switch type checkers, you might end up with a thousand warnings that you didn't previously have, because of an intentional difference of opinion on how strict to be, I think.
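A minimal sketch of the optional-versus-not-optional case, with made-up function names; a lenient checker might let the commented-out call slide, while a strict one demands the narrowing check:

```python
from typing import Optional


def shout(message: str) -> str:
    return message.upper() + "!"


def lookup_greeting(found: bool) -> Optional[str]:
    # May legitimately return None, e.g. on a cache miss.
    return "hello" if found else None


greeting = lookup_greeting(True)

# Stricter checkers reject shout(greeting) as written:
# 'str | None' is not 'str'.
# Narrowing with an explicit None check satisfies them:
if greeting is not None:
    print(shout(greeting))
```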

Gregory Kapfhammer

Yeah.

So you have to think about false positives and false negatives when you're willing to break the build because of a type error.

All of those things are things you have to factor in.

But to go quickly to this connection to AI, I know it's only recently, but the Pyrefly team actually announced that they're making Pyrefly work directly with Pydantic AI.

So there's going to be an interoperability between those tools so that when you're building an AI agent using Pydantic AI, you can also then have better guarantees when you're using Pyrefly as your type checker.

Jodie Burchell

It makes total sense, though, because then the reasoning LLM that's at the core of the agent can actually have that information before it tries to execute the code and you don't get in that loop that they often get in.

You can correct it before it runs.

Yeah, really good point.

Reuven Lerner

I want to just sort of express my appreciation to all the people working on this typing stuff.

As someone who's come from many, many years in dynamic languages, I was always like, oh, typing.

B, I love seeing how easy it is for people to ease into it when they're in Python.

It's not all or nothing.

C, I love the huge number of tools.

The competition in this space is really exciting.

And D, guess what?

It really, really does help.

And I'll even add an E, which is that my students who come from Java, C++, C#, and so forth feel relief.

They find that without type checking, it's like doing a trapeze act without a safety net.

And so they're very happy to have that typing in there.

So kudos to everyone.

Michael Kennedy

All right, folks, we are out of time.

This could literally go for hours longer.

It was a big year.

It was a big year, but I think we need to just have a final word.

I'll start and we'll just go around.

So my final thought here is, we've talked about some things that are negatives or sort of downers here and there, but I still think it's an incredibly exciting time to be a developer or data scientist. There's so much opportunity out there.

There's so many things to learn and take advantage of and stay on top of.

And amazing.

Every day is slightly more amazing than the previous day.

So I love it.

Gregory, let's go to you next.

Let's go around the circle.

Gregory Kapfhammer

Yeah, I wanted to give a shout out to all of the local Python conferences.

I actually, on a regular basis, have attended the PyOhio conference.

And it is incredible.

The organizers do an absolutely amazing job.

And they have it hosted on a campus, oftentimes at Ohio State or Cleveland State University.

And incredibly, PyOhio is a free conference that anyone can attend with no registration fee.

So Michael, to end on a comment that I think is really positive: wow, I'm so excited about the regional Python conferences that I've been able to attend.

Thomas.

Thomas Wouters

Wow, I didn't expect this.

So I think I want to give a shout out to new people joining the community, and also joining the core developer team, as triagers or just drive-by commenters.

I know we harped a little bit on people, you know, giving strong opinions in discussions, but I always look to the far future as well as the near future.

And we always need new people.

We need new ideas.

We need new opinions.

So yeah, I'm excited that there are still people joining and signing up, even when it's thankless work.

So I guess I want to say thank you to people doing all the thankless work.

Jodie.

Jodie Burchell

Yeah, I want to say this is actually really only my third year or so really in the Python community.

So before that, I was just sort of on the fringes, right?

And after I started advocacy, I started going to the conferences and meeting people.

And I think I didn't kind of get how special the community was until I watched the Python documentary this year.

And I talked to Paul Everitt about this afterwards, and also made fun of him for his, like, early 2000s fashion.

But I think, yeah, like I'm a relative newcomer to this community and you've all made me feel so welcome.

And I guess I want to thank all the incumbents for everything you've done to make this such a special tech community for minorities and everyone, newbies, you know, Python,

Brett Cannon

Python is love.

Oh, geez.

How am I supposed to follow that?

I think one of the interesting things that we're kind of looping on here is I think the language evolution has slowed down, but it's obviously not stopped, right?

Like as Thomas pointed out, there's a lot more stuff happening behind the scenes.

Lazy imports are coming, and that was a syntactic change, which apparently brings out the mean side of some people.

And we've obviously got our challenges and stuff, but things are still going.

We're still moving along.

We're still trying to be an open, welcoming place for people like Jodie and everyone else who's new coming on over, and to continue to be a fun place for all of us slightly gray-bearded people who have been here for a long time, to make us want to stick around.

I think it's just more of the same, honestly.

It's all of us just continuing to do what we can to help out to keep this community being a great place.

And it all just keeps going forward.

And I'll just end with: if you work for a company that hasn't sponsored the PSF, please do so.

Reuven Lerner

It's rare to have, I mean, a programming language or any sort of tool where it is both really, really beneficial to your career and you get to hang out with really special, nice, interesting people.

And it's easy to take all that for granted if you've been steeped in the community.

I went to a conference about six months ago, a non-Python conference.

And that was shocking to me to discover that all the speakers were from advertisers and sponsors.

Everything was super commercialized.

People were not interested in just like hanging out and sharing with each other.

And it was a shock to me because I've been to basically only Python conferences for so many years.

I was like, oh, that's not the norm in the industry.

So we've got something really special going that not only is good for the people, but good for everyone's careers and mutually reinforcing and helping each other.

And that's really fantastic.

And we should appreciate that.

Michael Kennedy

Barry, final word.

Barry Warsaw

Thomas stole my thunder just a little bit, but just to tie a couple of these ideas together.

Python, and you know, Brett said this, right?

This is Python is the community or the community is Python.

There's no company that is telling anybody what Python should be.

Python is what we make it.

And, you know, as folks like myself get a little older, we have younger people coming into the community, developers and everything else, who are shaping Python into their vision.

I encourage you, if you've thought about becoming a core dev, find a mentor.

There are people out there that will help you.

If you want to be involved in the community, the PSF, you know, reach out.

There are people who will help guide you in this community.

You can be involved.

Do not let any self-imposed limitations stop you from becoming part of the Python community in the way that you want to.

And eventually run for the

Thomas Wouters

steering council because we need many, many, many more candidates next year.

And you don't need any qualifications either because I'm a high school dropout and I never went to college or anything.

And look at me.

Brett Cannon

And I have a PhD, and I will tell you, I did not need all that to become a Python developer, because I was a Python developer before I got the PhD.

Barry Warsaw

I'm a bass player.

So if I can do it, anybody can do it.

Michael Kennedy

Thank you everyone for being here.

This was an awesome look back on the year, and I really appreciate you all taking the time.

Gregory Kapfhammer

Thank you, Michael.

Thanks everybody.

Michael Kennedy

Bye everybody.

This has been another episode of Talk Python To Me.

Thank you to our sponsors.

Be sure to check out what they're offering.

It really helps support the show.

Look into the future and see bugs before they make it to production.

Sentry's Seer AI Code Review uses historical error and performance information at Sentry to find and flag bugs in your PRs before you even start to review them.

Stop bugs before they enter your code base.

Get started at talkpython.fm/seer-code-review.

If you or your team needs to learn Python, we have over 270 hours of beginner and advanced courses on topics ranging from complete beginners to async code, Flask, Django, HTMX, and even LLMs.

Best of all, there's no subscription in sight.

Browse the catalog at talkpython.fm.

And if you're not already subscribed to the show on your favorite podcast player, what are you waiting for?

Just search for Python in your podcast player.

We should be right at the top.

If you enjoy that geeky rap song, you can download the full track.

The link is actually in your podcast player show notes.

This is your host, Michael Kennedy.

Thank you so much for listening.

I really appreciate it.

I'll see you next time.
