
Web Frameworks in Prod by Their Creators
Episode Transcript
Today on Talk Python, the creators behind FastAPI, Flask, Django, Quart, and Litestar get practical about running apps based on their frameworks in production.
Deployment patterns, async gotchas, servers, scaling, and the stuff that you only learn at 2 a.m.
when the pager starts going off.
For Django, we have Carlton Gibson and Jeff Triplett.
For Flask, we have David Lord and Phil Jones.
And on Team Litestar, we have Janek Nouvertné and Cody Fincher.
And finally, Sebastian Ramirez from FastAPI is here as well.
Let's jump in.
This is Talk Python To Me, episode 533, recorded December 17th, 2025.
Welcome to Talk Python To Me, the number one Python podcast for developers and data scientists.
This is your host, Michael Kennedy.
I'm a PSF fellow who's been coding for over 25 years.
Let's connect on social media.
You'll find me and Talk Python on Mastodon, Bluesky, and X.
The social links are all in your show notes.
You can find over 10 years of past episodes at talkpython.fm.
And if you want to be part of the show, you can join our recording live streams.
That's right.
We live stream the raw uncut version of each episode on YouTube.
Just visit talkpython.fm/youtube to see the schedule of upcoming events.
Be sure to subscribe there and press the bell so you'll get notified anytime we're recording.
Hey, before we jump into the interview, I just want to send a little message to all the companies out there with products and services trying to reach developers.
That is the listeners of this show.
As we're rolling into 2026, I have a bunch of spots open.
So please reach out to me if you're looking to sponsor a podcast. Or if you generally sponsor things in the community and haven't necessarily considered podcasts, you really should.
Reach out to me and I'll help you connect with the Talk Python audience.
Thanks everyone for listening all of 2025.
And here we go into 2026.
Cheers.
Hello, hello, Carlton, Sebastian, David, Cody, Yannick, Phil, Jeff, welcome back to Talk Python, all of you.
Thanks for having us.
Thank you for having us.
Happy to be here again.
We're here for what may be my favorite topic for sure.
Something I spend most of my time on is web API stuff, which is awesome.
So excited to have you here to give your inside look at how people should be running your framework, at least the one that you significantly contribute to, depending on which framework we're talking about, right?
It's going to be a lot of fun, and I'm really excited to talk about it.
However, an interesting fact that I've been throwing out a lot lately is that fully half of the people doing professional Python development have only been doing it for two years or less.
And since some of you were last on the show, it's been maybe two years or longer than that.
Let's just do a quick round of introductions for people who don't necessarily know you.
We'll go around the squares here in the screen sharing.
So Carlton, you're up first.
Oh, I get to go first.
Brilliant.
Carlton Gibson: Well, I'm Carlton.
I work on the Django REST framework mostly.
I'm a former Django fellow.
I maintain a number of packages in the ecosystem.
And the last few years I've been back to building stuff with Django rather than working on it.
So I run a build startup that's, well, we're still going.
So I'm quite excited about that.
Michael Kennedy: Awesome.
How is it to be building with Django rather than building Django?
Carlton Gibson:
Oh, I'm literally having the time of my life.
Like I spent five years as a Django fellow working on Django, and I just built up this backlog of things I wanted to do, and I had no time and no capacity to work on them.
And it's just, it's just a delight.
And every day I sit down on my computer thinking, oh, what's today?
I look at the background.
Oh, yes.
And every day, a delight.
So I'm still just loving it.
That's awesome.
Michael Kennedy: So more often you're appreciating your former self than cursing your former self for the way you built.
Yeah, that's an interesting one.
I think we should move on before.
All right.
All right.
Speaking of building with and for Sebastian, FastAPI.
Hello.
Hello.
So, okay, intro for the ones that don't know me.
I'm Sebastian Ramirez.
I created FastAPI.
Yeah, that's pretty much it.
And now I have been building a company for the last two years, FastAPI Cloud, to deploy FastAPI.
So, I get to drink from funny cups, as you can see.
The world's best boss.
Amazing.
So, I think you deserve to give a bit of a shout out to FastAPI Cloud.
That's a big deal.
Thank you.
Thank you very much.
Yeah, it's super fun.
And the idea is to make it super simple to deploy FastAPI applications.
The idea with FastAPI was to make it very simple to build applications, build APIs, and get the idea from idea to product in record time.
That was the idea with FastAPI.
But then deploying that, in many cases, is just too cumbersome.
It's too complicated.
There are just so many things to it.
So I wanted to bring something for people to be able to say, like, hey, just one command FastAPI deploy, and we take care of the rest.
And the team, I have an amazing team that I've been able to work with.
We suffer all the cloud pains so that people don't have to deal with that.
And yeah, it's painful to build, but it's so cool to use it.
You know, like that's the part when I say like, yes, this was worth it.
When I get to use the thing myself, that is super cool.
Michael Kennedy: Yeah, I'm assuming you built FastAPI Cloud with FastAPI somewhat.
Sebastian Ramirez: Yes, yes, yes, exactly.
FastAPI Cloud runs on FastAPI Cloud.
And I get just like now random things in there and like, yes.
Congrats to that again.
That's super cool.
David Lord, welcome.
Welcome back.
David Lord: Yeah.
Hello.
I'm David Lord.
I'm the lead maintainer of Pallets, which is Flask, Jinja, Click, Werkzeug, ItsDangerous, MarkupSafe.
And now Pallets Eco, which is a bunch of the famous extensions for Flask that are getting community maintenance now.
I've been doing that since, I think I've been the lead maintainer since like 2019, but a maintainer since like 2017.
So it's been a while.
Michael Kennedy: That's been a while.
We're coming up on seven, eight years.
That's crazy.
Time flies.
David Lord: It's always funny because I always feel like I've been doing it for way, way longer.
And then I look at the actual date that I got added as a maintainer.
I'm like, well, it couldn't have been that late.
I was doing stuff before that, right?
Michael Kennedy: Well, I'm sure you were deep in Flask before you got added as a maintainer of it, right?
Yeah.
Phil Jones, since you are also on the same org, next.
Hey, welcome back.
Hello.
Phil Jones: Yeah, I'm Phil Jones.
I am the author of Quart, which is also part of Pallets.
I also work on Werkzeug and Flask and help out there.
And I've done a server called Hypercorn as well.
So a bit of interest in that part of the ecosystem.
What is Quart for people who don't know?
Quart is basically Flask with async await.
And that was the idea behind it really to make it possible to do async await.
So yeah, that's pretty much it.
If we, when we manage to merge them, we will.
David Lord: And the goal now with Quart as part of Pallets is to eventually have it be one code base with Flask.
But given that we both have small children now, we're moving a lot slower.
Michael Kennedy: Having kids is great.
I have three kids.
Productivity is not a thing that they are known to imbue on the parents, right?
Especially in the early days.
I want to say, Phil, thank you.
I've been running Quart for a couple of my websites lately, and it's been amazing.
Phil Jones: Nice.
Yeah, I also use it at work.
We've got all our stuff in Quart, which is, yeah, it's really good fun.
A bit like Carlton.
Michael Kennedy: So when people listen to the show, or they go to the website of the show and they're not on YouTube, that somehow involves Quart.
Michael Kennedy: Janek, welcome.
Janek Nouvertné: Hey.
Yeah, I'm Janek Nouvertné.
I work on Litestar.
I just looked up how long it's been because I was curious myself.
I also had the same feeling that it's been a lot longer, but it's actually only been three years.
Yeah.
And I also, I noticed something with all you guys here in the room.
I use almost all of the projects you maintain at work, which is quite nice.
We have a very big Django deployment.
We have some Flask deployments.
We have a few FastAPI deployments.
I think we have one Quart deployment and we also have two Litestar deployments, which obviously is a lot of fun to work with.
And I find it really, really nice actually to work with all these different things.
It's super interesting also because like everything has its own niche that it's really good at.
And even, you know, you think if you maintain a framework yourself, you tend to always recommend it for everything.
But I noticed it's not actually true.
There's actually quite a few cases where I don't recommend Litestar.
I recommend, you know, just, you know, use Django for this or, you know, use Flask for that or use FastAPI for this because, well, they are quite different after all.
And I find that really, really interesting and nice.
And I think it's a good sign of a healthy ecosystem if it's not just, you know, the same thing but different, but it actually brings something very unique and different to the table.
Michael Kennedy:
I think that's a great attitude.
And it's very interesting.
You know, I feel like there's a lot of people who feel like they've kind of got to pick their tech team for everything.
I'm going to build a static site.
Like, well, I've got to have a Python-based static site builder.
Like, well, it's a static site.
Who cares what technology makes it turn?
You're writing Markdown, and out comes HTML.
Who cares what's in the middle, for example, right?
And, you know, I feel like that's kind of a life lesson learned.
Absolutely, yeah.
Yeah, that's awesome.
Cody, hello, hello.
Cody Fincher: Yeah, hey guys, I'm Cody Fincher.
I'm also one of the maintainers of Litestar.
I've been there just a little bit longer than Yannick.
And so it's been about four years now.
And Yannick actually teed this up perfectly because I was going to say something very similar.
I currently work for Google.
I've been there for about three and a half years now.
And we literally have every one of the frameworks you guys just mentioned, and they're all in production.
And so one of the things that you'll see on the Litestar org, and part of the projects that we maintain, is that we have these optional batteries, and most of the batteries that we have work with the frameworks from you guys.
And so it's nice to be able to use that stuff, you know, regardless of what tooling you've got or what project it is.
And so, yeah, having that interoperability and the ability to kind of go between the frameworks that work the best for the right situation is crucial.
And so I'm glad you mentioned that, Yannick.
But yeah, nice to see you guys on the show.
Michael Kennedy:
Cody, tell people what Litestar is.
I know I had both you guys and Jacob on a while ago, but it's been a couple of years, I think.
Cody Fincher: Litestar at its core is really a web framework that kind of sits somewhere in between, I'd say, Flask and FastAPI and Django.
So whereas, you know, Flask doesn't really, you know, bundle a lot of batteries.
There's a huge amount of, you know, third-party libraries and ecosystem that's built around it that people can add into it, but there's not really, for instance, a database adapter or a database plugin, or plugins for Vite or something like that, right, for front-end development.
And so what we have been doing is building an API framework that is very similar in concept to FastAPI that is also extensible.
So if you want to use the batteries, they're there for you.
But if you don't want to use them, you don't have to, right?
And so a lot of the tooling that we built for Litestar was birthed out of a startup that I was in prior to joining Google.
And so having all this boilerplate, really, it needed somewhere to go.
And so a lot of this stuff ended up being plugins, which is what we bundled into Litestar so that you can kind of add in this extra functionality.
And so I know I'm getting long-winded.
It's somewhere between Django and Flask, if you think about it as a spectrum of what it gives you as a web framework.
But in short, it does everything that all the other guys do.
Michael Kennedy: Very neat.
It's definitely a framework I admire.
Jeff Triplett, so glad you could make it.
Jeff Triplett: Yeah, thanks for having me.
Yeah, I'm Jeff Triplett.
I'm out of Lawrence, Kansas.
I'm a consultant at a company called Revolution Systems.
I was on, some people know me from being on the Python Software Foundation board.
I've been off that for a few years.
As of last week, I'm the president of the Django Software Foundation.
So I've been on that board for a year.
I'm kind of a Django power user, I guess.
I've used it for about 20 years.
And I've kind of not really worked on, I don't even think I have a patch anymore in Django.
But I've done a lot with the community.
I've done a lot with contributing through conferences and using utilities.
I try to promote Carlton's applications like Neapolitan.
And if I like tools, Python tools in general, I try to advocate for it.
I've also used all of these applications.
Litestar, I haven't, but I have a friend who talks about it a lot.
And so I feel like I know a lot about it.
As a consultant, we tend to go with the best tool for the job.
So I've done a little bit of FastAPI.
I worked with Flask a lot over the years, even though we're primarily a Django shop.
It just depends on what the client needs.
Michael Kennedy: And you see a lot of different sizes of web app deployments.
So I think that's going to be an interesting angle for sure.
Jeff Triplett: Yeah, absolutely.
Small ones to hundreds of servers.
We don't see it as much anymore the last four or five years, especially with like CDNs and caching.
We just don't see load like we did, you know, 10 years ago or so.
And then I also do a lot of like small, I kind of call them some of them little dumb projects, but some are just fun.
Like I've got a FastAPI web ring that I wrote a year ago for April Fool's Day.
And for some reason that kind of took off and people liked it, even though it was kind of a joke.
So I started like peppering it on a bunch of sites and I maintain like Django packages.
I do a newsletter, Django News newsletter, just kind of lots of fun stuff.
Michael Kennedy: Definitely looking forward to hearing all of your opinions.
So I've got a bunch of different "your app in production" topics I thought we could just work around or talk over.
So I thought maybe the first one is what would you recommend, or if you don't really have a strong recommendation, what would you choose for yourself to put your app in your framework in production?
I'm thinking app servers, reverse proxies like Nginx or Caddy.
Do you go for threaded?
Try to scale out with threads.
you try to scale out with processes, Docker, no Docker, Kubernetes.
What are we doing here, folks?
Carlton.
I think we'll just keep going around the circle here.
So you may get the first round of everyone.
No, I'll try to mix it up, but let's do it this time.
Carlton Gibson: I do the oldest-school thing in the book.
I run Nginx as my front end.
I'll stick a WSGI server behind it with a pre-fork, a few workers, depending on CPU size, depending on the kind of requests I'm handling.
These days, in order to handle long-lived requests, like server-sent events or WebSocket-type things, I'll run an ASGI server as a kind of sidecar.
I've been thinking about this a lot, actually.
But yeah, this is interesting.
If you're running a small site and you want long-lived requests, just run ASGI.
Just use ASGI.
Because any of the servers, Hypercorn, Uvicorn, Daphne, Granian is the new hot kid on the block, right?
All of those will handle your traffic, no problem.
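For reference, the ASGI interface those servers all speak is just an async callable; a minimal, purely illustrative sketch:

```python
# A minimal ASGI application, the interface Hypercorn, Uvicorn, Daphne,
# and Granian all serve. Any of those servers could run this callable.
async def app(scope, receive, send):
    # scope describes the connection; a plain request has type "http"
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"hello"})
```

The same callable shape is what makes long-lived responses (server-sent events, WebSockets) natural: the handler can keep awaiting and sending for as long as the connection lasts.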
But for me, the scaling patterns in WSGI are so well known, and I can do the maths on the back of a pencil; I know exactly how to scale it up, having done it for so long. So for my core application, I would still rather use the WSGI server and then limit the async stuff to just the use cases where it's particularly suited. For a process manager, I deploy using systemd. If I want a container, I'll use Podman via systemd. It's as old school as it gets. I'll very often run a Redis instance on localhost for caching, and that will be it.
And that will get me an awful long way.
If I have to scale, I just get a bigger box.
And a bigger box.
Yeah, yeah, yeah.
If I really, really, really need multiple boxes, well, then we'll talk.
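As a concrete sketch of the pre-fork setup Carlton describes, here is what a minimal Gunicorn config file could look like (Gunicorn is one common WSGI server; the bind address and numbers are illustrative, not his):

```python
# gunicorn.conf.py - minimal pre-fork WSGI setup behind Nginx.
# Worker count scales with CPU size, as Carlton notes; the exact
# values here are illustrative rules of thumb, not a recommendation.
import multiprocessing

bind = "127.0.0.1:8000"                        # Nginx proxies to this upstream
workers = multiprocessing.cpu_count() * 2 + 1  # classic pre-fork heuristic
timeout = 30                                   # recycle a hung worker after 30s
```

Under systemd, the unit would simply exec `gunicorn -c gunicorn.conf.py myapp.wsgi`, and the "back of a pencil" maths is exactly this workers formula.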
Michael Kennedy: I feel like you and I are in a similar vibe.
But one thing I want to sort of throw out there to you, but also sort of the others is, what are we talking with databases?
Like, who is bold enough to go SQLite?
Anyone's going SQLite out there?
Cody Fincher: Yeah, it depends, right?
It just depends on what you're doing, right?
And how many concurrent users you're going to have.
It really is amazing there.
David Lord: The Pallets website is running on Flask, which I wasn't doing for a while.
I was doing a static site generator.
Then I got inspired by Andrew Godwin's static dynamic sites.
And so it loads up all these markdown files, static markdown files into a SQLite database at runtime and then serves off of that because you can query really fast.
Oh, that's awesome.
I love it.
So I am using SQLite for the Pallets website.
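A toy sketch of the "static dynamic" pattern David describes: load static content into an in-memory SQLite database at startup, then answer requests with fast queries against it. The table and column names here are made up for illustration, not from the Pallets site.

```python
# Load static markdown pages into in-memory SQLite at startup,
# then serve queries from it - fast reads, no database server.
import sqlite3

def build_db(pages):
    """pages: iterable of (slug, markdown_text) tuples."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE page (slug TEXT PRIMARY KEY, body TEXT)")
    db.executemany("INSERT INTO page VALUES (?, ?)", pages)
    return db

db = build_db([("about", "# About Pallets"), ("blog", "# Blog")])
body = db.execute(
    "SELECT body FROM page WHERE slug = ?", ("about",)
).fetchone()[0]
```

In a Flask app, `build_db` would run once at startup and each view would do a parameterized `SELECT`; since the data is rebuilt from files on deploy, there's nothing to migrate or back up.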
Janek Nouvertné: Yeah, I also do have a few small apps that use SQLite.
And one recently that's Cody's fault because he put me on that track where it's running a SQLite database in the browser because nowadays it's quite easy to do that.
And then you can do all sorts of stuff with it, like hook into it with DuckDB and perform some analysis.
So you don't actually need to run any sort of server at all.
You can just throw some files into Nginx and serve your data.
And as long as that's static, you have a super, super simple deployment.
So yeah, definitely SQLite.
Michael Kennedy: If you can, I like it.
I agree.
It's interesting.
The database probably won't go down with that, probably.
Let's do this by framework.
So we'll do vertical slices in the visual here.
So Jeff.
Jeff Triplett: Yeah, Django, Postgres, pretty old-school stack.
I think putting a CDN in front of anything is just a win.
So whether you like Fastly or Cloudflare, you get a lot of mileage out of it.
You learn a lot about caching because it's kind of hard to cache Django by default.
So you get to play with curl and kind of figure out why Vary headers are there.
And it's a good learning experience to get through that.
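A toy model of the Vary headers Jeff mentions: a CDN keys its cache on the URL plus the value of every request header named in `Vary`, so two users with different `Accept-Language` get separately cached responses. This is a conceptual sketch, not any real CDN's implementation.

```python
# Simplified cache-key logic: URL plus every header listed in Vary.
def cache_key(url, request_headers, vary):
    varied = tuple(sorted((h, request_headers.get(h, "")) for h in vary))
    return (url, varied)

# Same URL, different Accept-Language -> distinct cache entries.
k_en = cache_key("/", {"Accept-Language": "en"}, ["Accept-Language"])
k_de = cache_key("/", {"Accept-Language": "de"}, ["Accept-Language"])
```

This is also why an over-broad `Vary` (say, on `Cookie`) effectively disables CDN caching: nearly every visitor produces a different key.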
I also like Coolify, which is kind of new, at least new to me and new to Michael.
We talk about this in our spare time a lot.
It's just kind of a boring service that'll launch a bunch of containers for you.
There's a bunch of one-click installs, so Postgres is a one-click.
It also does backups for you, which is really nice to have for free.
I run a couple dozen sites through it and really like it.
You can either use the hosted version, I don't get any money from it, or you can run the open-source version.
I do both.
I've got like a home lab that I just run stuff using the open-source version.
And for five bucks a month, it's worth it to run a couple servers.
And like Carlton said, you can just scale up.
Michael Kennedy: Yeah, it's got a bunch of one-click deploys for self-hosted SaaS things as well.
Like I want an analytics stack of containers that run in its own isolated bit.
Just click here and go.
Jeff Triplett: Yeah, one-click, it's installed and you're up.
And then once you get one Django, Flask, FastAPI site working with it, and it uses like a Docker container.
Once you get that set up, it's really easy to just kind of duplicate that site, plug it in to GitHub or whatever your Git provider is.
And it's a nice experience for what normally is just our syncing files and life's too short for that.
Michael Kennedy: Sebastian, I want to have you go last on this one because I think you've got something pretty interesting with FastAPI Cloud to dive into.
But let's do Litestar next.
Cody.
Cody Fincher: I have actually bought all the way in on Granian.
So for the ASGI server, I've actually been running Granian now for, I'd say, a year in production.
It's worked pretty well.
There's a couple of new things that I'm actually kind of experimenting with.
I don't know how well they're going to work out.
So I'm going to go ahead and throw this out there.
But Granian is one of the few ASCII servers that supports HTTP2.
And it actually can do HTTP2 clear text.
And so this is part of the next thing I'm going to say.
Because I work for Google, I'm actively using lots of Kubernetes and Cloud Run mainly.
And so most of the things that I deploy are containerized on Cloud Run.
And I typically would suggest if you're not using something like SystemD and deploying it directly on bare metal, then you are going to want to let the container or whatever you're using to manage your processes, manage that and spin that up.
And so I typically try to allocate, you know, like one CPU for the container and let the actual framework scale it up and down as needed.
Cloud Run itself has a, like an ingress, like a load balancer that sits in front that it automatically configures.
And you're required to basically serve up cleartext traffic when you run Cloud Run.
And because now Granian supports HTTP/2 and Cloud Run supports HTTP/2 cleartext, you can now serve Granian as HTTP/2 traffic.
The good thing about that is that you get an unlimited upload size.
And so there are max thresholds to what you can upload into the various cloud environments.
HTTP2 usually circumvents that or gets around it because of the way the protocol works.
And so you get additional features and functionality because of that.
So anyway, that's what I typically do.
And most of my databases are usually Postgres, AlloyDB if it needs to be something that's on the analytical side.
Michael Kennedy: Yeah, I'm on Team Granian as well.
I think that's a super neat framework.
I had Giovanni on who's behind it a while ago.
It seems like it's not as popular, but it's based on Hyper from the Rust world, which has like 130,000 projects based on it or something.
So, you know, at its core, it's still pretty battle-tested.
This portion of Talk Python To Me is brought to you by our course, Just Enough Python for Data Scientists. If you live in notebooks but need your work to hold up in the real world, check out Just Enough Python for Data Scientists. It's a focused, code-first course that tightens the Python you actually use and adds the habits that make results repeatable. We refactor messy cells into functions and packages, use Git on easy mode, lock environments with uv, and even ship with Docker.
Keep your notebook speed, add engineering reliability.
Find it at Talk Python Training.
Just click courses in the navbar at talkpython.fm.
Yannick, how about you?
You've got a variety, it sounds like.
Yeah, definitely.
Janek Nouvertné: There's a pretty clear split between what I do at work and what I do outside of that.
So at work, it's Kubernetes deployments.
And we managed that pretty much the same way that Cody described.
So it's one or two processes per pod max.
So you can have Kubernetes scale it, or even manually easily scale that up.
You can just go into Kubernetes and say, OK, do me one to five more pods or whatever.
And don't have to worry.
Don't have to start calculating whatever.
Most of the stuff we run nowadays is with Uvicorn. Our Django deployment, up until I think three months ago or so, was running under Gunicorn, but we switched that, actually.
And it's been a really great experience.
I think we tried that a year ago and it didn't work out quite so well.
There was some things that didn't work as expected or didn't perform great or Django was throwing some errors or Uvicorn was throwing some errors.
And then apparently all of that got fixed, because now it runs without any issue in production.
Michael Kennedy: Yeah, for people who don't know, the vibe used to be: run Gunicorn, but with Uvicorn workers, if you're doing async stuff.
And then Uvicorn kind of stepped up its game and said, you can actually treat us as our own app server.
We'll manage lifecycle and stuff.
And so that's the path you took, right?
Janek Nouvertné: Yeah, exactly.
Before that.
Well, no, actually, before that, we didn't because our Django is fully synchronous.
It doesn't do any async.
So it was just bare-metal Gunicorn.
And it's still synchronous, just running it under Uvicorn.
But interestingly, still quite a bit faster in a few cases.
We tried that out and we load tested it in a couple of scenarios, and we found that it makes a lot of sense.
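The switch Michael and Janek are describing boils down to who manages the worker processes. A sketch under the assumption that Uvicorn is installed; `app:app` is a placeholder import string, and the worker count is illustrative:

```python
# Older pattern: Gunicorn as process manager, Uvicorn as the worker class:
#   gunicorn app:app -w 4 -k uvicorn.workers.UvicornWorker
#
# Newer pattern: Uvicorn managing its own worker processes directly.
# (workers > 1 requires passing the app as an import string.)
import uvicorn

if __name__ == "__main__":
    uvicorn.run("app:app", host="127.0.0.1", port=8000, workers=4)
```

Either way the app code is unchanged; only the supervising process differs, which is why the swap Janek describes could happen without touching the Django project itself.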
But outside of that, I do have a lot of, well, very simplistic deployments that are also just systemd and a couple of Docker Compose files and containers that are managed through some old, cobbled-together Ansible things.
But I think the oldest one that I have still running is from 2017.
And it's been running without a change for like four or five years.
That is awesome.
I don't see a reason to do anything about it because the app works.
It's being used productively.
So why change anything about that?
No need to introduce.
Just don't touch it.
Yeah, I was actually looking into Coolify that you two guys mentioned.
I was thinking about, you know, maybe upgrading it to that, but I played around with it and I thought, well, why?
You know, if I have to look into that deployment maybe once a year.
So there's really nothing to gain for me in making it more complicated.
David, Team Flask.
David Lord: I mentioned this before the show started, but I think I'm pretty sure I've said this the last time I was on Talk Python, but the projects I do for work typically have less than 100 users.
And so my deployment is usually really simple.
And usually they've chosen like Azure or AWS already.
So we just have a Docker container and we put it on the relevant Docker container host in that service and it just works for them.
We have a Postgres database and we have like Redis.
But I never really had to deal with like scaling or that sort of stuff.
But the funny thing is like, at least for my work, I'm always, we're often replacing older systems.
And so even a single Docker container running a Flask application is way more performant and responsive than anything they're used to from like some 20 year old or 30 year old Java system.
Right.
And it can just respond on a small container with like a little bit of CPU and a little bit of memory.
They're always shocked at like, how much do we need to pay for?
Oh, it'll just run on a potato.
You know, there's only 100 users and they're like, that's a lot of users.
So my recommendation is always start small and then scale up from there.
Don't try to overthink it ahead of time.
Yeah, for my personal stuff, I'm using like Docker containers now and fly.io.
I haven't gotten in.
So I do want to look into Granian and Coolify, but I haven't gotten there yet.
And for the Docker container, I can definitely recommend pythonspeed.com.
I don't remember off the top of my head who writes that, but it's somebody in the Python ecosystem.
And they have a whole series of articles on how to optimize your Docker container.
And that sounds really complicated, but you end up with a Docker file that's like 20 lines long or something.
So it's not like there's crazy things.
It's just you have to know how to structure it.
And then I just copy and paste that to the next project.
Nice.
Yeah.
Michael Kennedy: I resisted doing Docker for a long time because I'm like, I don't want that extra complexity.
But then I realized the stuff you put in the Docker file is really what you just type in the terminal once and then you forget.
David Lord: I mean, always using Postgres, Redis probably if I need some background tasks, and just a plain SMTP server for email.
For all three of those things, I wrote new extensions in the Flask ecosystem that I'm trying to get more people to know about now.
So Flask SQLAlchemy Lite, L-I-T-E, instead of Flask SQLAlchemy, takes a much more lightweight approach to integrating SQLAlchemy with Flask.
And then Flask Redis, I revived from like 10 years of non-maintenance.
And then I wrote this whole pluggable email system called Email Simplified, kind of inspired by Django's pluggable system, and so there's Flask Email Simplified to integrate that with Flask.
But unlike Django, you can use email simplified in any library you're writing, in any Python application you're writing.
It doesn't have to be a Flask web framework.
It's pluggable as the library itself.
And then you can also integrate it with Flask or something else.
So Flask email simplified.
I get like three downloads a month right now.
Michael Kennedy: So it needs some popularity.
Awesome.
I've been doing the non-simplified email lately.
So I'm happy to hear that there might be a better way.
Yeah.
I think people do underappreciate just how much performance you can get out of Python web apps.
You know, they're like, oh, we're going to need to rewrite this in something else because of the GIL or whatever.
Like, I decided just to make a point to pull up and tail my log running Quart, by the way.
And each one of these requests is doing multiple DB calls, and it's like 23 milliseconds, six milliseconds, three milliseconds, nine milliseconds. That's good enough. That's a lot of requests per second per worker; you've got to have a lot of traffic before it's a problem. Speaking of Quart, Phil, what's your take on this one?
Phil Jones: I think it's very similar. I also build Docker containers, with a Postgres database on the back end, and I run Hypercorn as the ASGI server and put them behind an AWS load balancer and just run them in ECS.
And I think it's pretty simple, but I guess it depends on your biases.
But yeah, that's all we do really.
And it goes a long way.
There are multiple ECS tasks, mostly in case one falls over, rather than for scaling; it's usually the database that you need to scale, I find.
But yeah, that's how we run it.
The nice thing for me about Hypercorn is that I can play with HTTP 3.
So that's what we're doing at times.
Michael Kennedy: Oh, HTTP/3, okay.
I've just been getting my HTTP 2 game down, so I'm already behind the game.
What's the deal with HTTP/3?
Phil Jones: It's obviously a totally new way of doing it, over UDP now rather than TCP.
Although at the application level, you can't tell any difference really.
But I mean, I just find it interesting.
I'm not really sure it will help too much.
And it's probably best if you've got users who have not that great a network connection.
But for most other cases, I don't think it matters too much.
Michael Kennedy: Just keep blasting packets until some of them get through.
OK, fine.
We'll give you a page eventually.
There's three pages, actually.
All right, Sebastian, you are running not just FastAPI from your experience, but you're running FastAPI for a ton of people through FastAPI Cloud at, I'm sure, many different levels.
This probably sounds like a shameless plug, and it kind of is, but it's sort of expected.
I will deploy FastAPI on FastAPI Cloud.
Just because, well, the idea is just to make it super simple to do that.
You know, like, if you are able to run the command FastAPI run.
So FastAPI run has like the production server that is using Uvicorn underneath.
And if you can run that, then you can run also FastAPI deploy.
And then like, you know, like it will most probably just work.
And, you know, we just wrap everything and like deploy, build, install, deploy, handle HTTPS, all the stuff without needing any Docker file or anything like that.
And I think for many use cases, it's just like simpler being able just to do that.
There are so many projects that I have been now building, like random stuff that is not really important, but now I can.
And before it was like, yeah, well, I know how to deploy this thing fully, with all the bells and whistles, but it's just so much work that, yeah, I'll do it later.
So for that, I would end up just like going with that.
Now if I didn't...
Michael KennedyWell, what I was going to ask is: how much are you willing to tell us about how things run inside FastAPI Cloud?
Oh, I can't, it's just so much stuff that is going on.
And it's also, it's funny that nowadays, like, we had Docker, and we had Docker Swarm, and there was Nomad and Kubernetes, and, oh, Kubernetes won.
And then we have the cloud providers and there's AWS and Google and Azure.
And you would expect that, with all this complexity, now that it's like, okay, these are the clear winners, it's a lot of complexity to take on, but once you do, it all works. But it doesn't.
And it's just so much work to get things to work together, to work correctly.
And the official resources from the different providers and things, in many cases, it's like, oh, the solution is hidden in this issue somewhere in GitHub because the previous version was obsolete, but now the new version of this package or whatever is like, it's just, it's crazy.
But like, yeah, so if I didn't have FastAPI Cloud, I will probably use containers.
I will probably use Docker.
If it's like something simple, I will deploy with Docker Compose, probably try to scale minimum replicas.
I don't remember if Docker Compose has that.
I remember that Docker Swarm had that, but then Docker Swarm sort of lost against Kubernetes.
I will put a Traefik load balancer in front to handle HTTPS and, yeah, regular load balancing.
And, yeah, just regular Uvicorn.
Like some of the folks were saying before, at some point we needed to have Gunicorn on top of Uvicorn because Uvicorn wouldn't be able to manage workers.
But now Uvicorn can manage its workers and everything; the main thing was handling zombie processes, reaping the processes, and handling all that stuff.
Now it can just do that.
So you can just run plain Uvicorn.
So if you're using FastAPI and you say FastAPI run, that already does that.
So if you're deploying on your own, you can just use the FastAPI run command.
Then, of course, you have to deal with the scaling and HTTPS and a lot of balancing and all the stuff, but the core server, you can just run it directly.
If going beyond that, then there would probably be some Kubernetes cluster, and trying to scale things, figuring out ways to scale things based on the load of the requests, like scaling automatically.
Having normally one container per process to be able to scale that more dynamically without depending on the local memory for each one of the servers and things like that, I'm probably saying too much.
But yeah, actually, you know, like if I didn't have FastAPI Cloud, I would probably use one of the providers that abstract those things a little bit away, you know, like Render, Railway, Fly, I don't know.
Like, I don't really think that a regular developer should be dealing with, you know, like the big hyperscalers and like Kubernetes and like all that complexity for a common app.
Most of the cases, I think it's just really too much complexity to deal with.
It's kind of eye-watering to open up the AWS console or Azure or something.
Whoa.
Oh, the other day, you know, like the other day I had to, in one of the AWS accounts, I had to change the account email.
I think I spent four hours.
I know.
Because I had to create the delegate account that has the right permissions and roles.
And they're like, oh, no, this is, you know, like, sometimes it's just overwhelming the amount of complexity that needs to be dealt with.
And, yeah, I mean, it's great to really have, like, you know, like the infra people that I have working with me at the company that I can deal with all that mess and, like, can make sure that everything is just running perfectly and it just works.
So it's like, you know, like, sort of SRE as a service, DevOps as a service for everyone.
It's like a cloud product that provides DevOps as a service.
Cody FincherI spent a number of years doing nothing but cloud migrations to these hyperscalers for enterprises.
And I can tell you that when you mentioned the eye-watering comment about the network and all that stuff, it's so incredibly complicated now, right?
There's literally every kind of concept that you need to know to deploy these enterprises now, move them from on-prem to the cloud.
So it does get incredibly complicated.
Having something simple like what Sebastian is talking about, I think, is super helpful when you're just trying to get started and get something up and running quickly.
Michael KennedyI've got a lot of questions and I realize that we will not be getting through all of them.
So I want to pick carefully.
So let's do this one next.
Performance, what's your best low effort tip?
Not like something super complicated, but I know there's a bunch of low hanging fruit that people maybe missed out on.
And this time let's start with Litestar.
Cody, back at you.
Cody FincherI'm going to stick to what I know, which is databases because I deal with that.
every single day.
There's a couple of things that I see as like gotchas that I constantly see over and over.
One, SQLAlchemy kind of obfuscates the way it's going to execute things and what kind of queries it's going to actually execute.
So it's really easy if you're not kind of fluent in how it works to create N plus one types of issues.
And so when people start talking about sync or async, in my mind it's less about that, because you're going to spend more time waiting on the network and database and those kinds of things than you're going to spend serializing, generally, right?
And or processing things on the web framework.
And so, one, making sure that you have your relationships dialed in correctly so that you don't have N plus one queries.
The other thing is oversized connection pooling into Postgres and just databases in general, because what people don't tend to know is that each of those connections takes up CPU cycles and RAM of the database.
And so when you slam the database with hundreds of connections, you're just taking away processing power that can be done for other things, right?
And so you end up ultimately slowing things down.
So I've seen databases that have had so many connections that all of the CPU and all the stuff is actually doing things, just managing connections and can't actually do any database work.
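The pool-sizing knobs Cody is describing live on the engine. A hedged SQLAlchemy sketch (the numbers are illustrative, not recommendations; the URL is a stand-in, and `QueuePool` is forced explicitly only so the SQLite demo mirrors what a Postgres engine uses by default):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# Keep the pool modest: every open Postgres connection costs the *server*
# RAM and CPU, so hundreds of app-side connections can starve the database.
engine = create_engine(
    "sqlite:///demo.db",   # stand-in URL; e.g. postgresql+psycopg://...
    poolclass=QueuePool,   # explicit for the demo; Postgres default anyway
    pool_size=5,           # steady-state connections per process
    max_overflow=5,        # short bursts beyond pool_size
    pool_timeout=30,       # seconds to wait for a free connection
    pool_recycle=1800,     # drop connections older than 30 minutes
)
```

Remember this is per process: four web workers with `pool_size=5` can already hold twenty connections open against the database.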
Michael KennedyAnd so what about this socket?
Is it busy?
What about this socket?
Is it busy?
It's just round robin that, right?
Cody FincherPaying attention to the database is kind of my first kind of rule of thumb.
Michael Kennedy100%.
I like that one a lot.
David LordI'll throw in identifying work that doesn't need to be done immediately for the user and putting it in a background task.
Having a background worker defer things till later.
Sending email is an example, although there are nuances there about knowing that it's sent and everything.
But yeah, if your user kicks off some process and you do that process in the web worker, you're holding that worker up, which is more relevant in WSGI than ASGI.
And you're making them wait for their page to load, versus record what they wanted to do, send it off to the background, let them see the status of it, but let the background worker handle it.
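The pattern David describes (record the request, hand it to a worker, return immediately) can be sketched with nothing but the standard library; a real app would use Celery, RQ, or a framework's task system, and the names here are invented:

```python
import queue
import threading
import time

jobs: queue.Queue = queue.Queue()

def worker() -> None:
    # Runs forever in the background, pulling jobs off the queue.
    while True:
        func, args = jobs.get()
        try:
            func(*args)
        finally:
            jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

sent: list[str] = []

def send_email(to: str) -> None:
    time.sleep(0.01)  # stand-in for slow SMTP work
    sent.append(to)

# The request handler just enqueues and returns immediately.
jobs.put((send_email, ("user@example.com",)))

# In a real app you would NOT block on this; it is here only so the
# sketch can observe the result before the process exits.
jobs.join()
```

The user's response goes out as soon as the job is enqueued; the slow work happens off the request path.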
Michael KennedyAll right. Yeah, like I said, as you guys go, go for it. I'm not sure if that's some sort of...
Janek NouvertnéIt's not really a trick or a tip, more like... I think the most common mistake I see, and this is ASGI specific, is when I look at ASGI apps written by people who are maybe not as familiar with ASGI or async Python at all: if you make something an async function, you should be absolutely sure that it's non-blocking.
Because if you're running an ASGI app and you're blocking anywhere, your whole application server is blocked completely.
It doesn't handle any other requests at the same time.
It's blocked.
I don't think I've seen any mistake more often when I've looked through apps that someone has written or that I've come across somewhere. So this is really, it's super, super common, and it has such a big impact on the overall performance, in every metric imaginable. So I would say, and that's what I tell people nowadays: unless you're 100% sure that you know what you're doing and you know it's non-blocking, don't make it async. Put it in a thread pool, execute it in a thread, whatever.
All of the ASGI frameworks and Django give you a lot of tools at hand to translate your stuff from sync to async so you can still run it.
Do that unless you're very sure that it actually fully supports async.
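Yannick's rule in code: if the body is blocking, either keep the function sync or explicitly push the blocking call to a thread. A minimal standard-library sketch, where the sleep stands in for a sync DB driver or a `requests` call:

```python
import asyncio
import time

def blocking_io() -> str:
    time.sleep(0.05)   # blocking: would freeze the event loop if
    return "done"      # called directly inside a coroutine

async def handler() -> str:
    # Wrong:  return blocking_io()   <- stalls every other request
    # Right:  run it on a worker thread and await the result.
    return await asyncio.to_thread(blocking_io)

result = asyncio.run(handler())
```

`asyncio.to_thread` is the stdlib version (Python 3.9+); Django's `sync_to_async` and AnyIO's `to_thread.run_sync` do the equivalent job in their ecosystems.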
Michael KennedyYeah, that's good advice.
Sebastian.
Sebastian RamirezHey, I'm actually going to second Yannick.
I think, yeah, and it's maybe counterintuitive that one of the tips for performance is to try to not optimize performance that much at the beginning.
You know, like, I think the idea with async is like, oh, you can get so much performance and throughput in terms of concurrency, whatever.
But the thing is, in most of the cases, you know, like, till apps grow so large, they actually don't need that much extra throughput, that much extra performance.
And in a framework like, you know, like, as Yannick was saying, well, in my case, I know FastAPI, but like, you know, like also many others.
If you define the function with async, it's going to be run async.
If you define it non-async and regular def, it's going to be run on a thread worker automatically.
So it's just going to do the smart thing automatically.
So it's like fair, you know, like it's going to be good enough.
And then you can just start with that and just keep blocking code everywhere.
You know, like just not use async until you actually know that you really need to use async.
And once you do, you have to be, as Yannick was saying, you know, like 100% sure that you are not running blocking code inside of it.
And if you need to run blocking code inside of Async code, then make sure that you are sending it to a thread worker.
Sending it to a thread worker sounds like its own thing, but yeah, you know, Django has tools, AnyIO has tools.
I also built something on top of AnyIO called Asyncer, that is just to simplify these things, to asyncify a blocking function, keeping all the type information so that you get autocompletion and inline errors and everything,
even though underneath it's actually doing all the stuff of sending the thing to the thread worker.
So the code is super simple.
You keep very simple code, but then underneath it's just like doing all the stuff that should be done.
But you know, like that's normally when you actually need to hyper-optimize things.
In most of the cases, you can just start with just not using async at first.
Also, now that you're going to have free-threaded, multi-threaded Python, suddenly you're going to have just so much more performance out of the blue without even having to do much more.
So, yeah, actually that's, you know, like, sorry, I kept speaking so much, but here's a tip for improving performance.
Upgrade your Python version.
I was just chatting today with Savannah.
She was adding the benchmarks to, you know, the official Python benchmarks that they run for CPython, the Faster CPython program.
And the change from Python 3.10 to Python 3.14 when running FastAPI is like almost double the performance or something like that.
It was like, it was crazy.
It was just crazy improvement in performance.
So you can just upgrade your Python version.
You're gonna get so much better performance just out of that.
- Yeah, that's an awesome piece of advice that I think is often overlooked.
And it's not only CPU speed; memory usage also gets a lot lower.
Whoever's gonna jump in, go ahead.
David LordLast year, I was looking at MarkupSafe, which is an HTML escaping library that we use and has a C extension for speedups.
And I almost convinced myself that I can stop maintaining the C extension because just Python itself got way faster.
But then it turned out that I could do something to the C extension to make it faster also.
So I'm still maintaining.
But just the fact that I almost convinced myself like, oh, I can drop a C extension for just a Python upgrade instead was pretty impressive.
They've done a lot, especially with like string handling and, you know, which you're going to use for templating for web apps.
Phil JonesPhil.
Yeah, well, I definitely echo looking at your DB queries, because by and large, that's always where our performance issues have been.
It's either badly written query or we're returning most of the database when the user just wants to know about one thing or something silly like that.
I was thinking about low-hanging ones, which I think you asked about.
So I'd say uvloop, which is still a noticeable improvement.
And also, because I think it's likely a lot of us are returning JSON, often changing the JSON serializer to one of the faster ones can be noticeable as well and obviously quite easy to do.
Michael KennedySo yeah, that's my key.
That's really good advice.
I didn't think about the JSON serializer.
What one do you recommend?
I think, is it ujson?
Or is it orjson?
Phil JonesI can't remember which one was deprecated.
But yeah, if you look at the Tech Empower benchmarks, everyone's changing the JSON serializer to get that bit extra speed.
Michael KennedyBut yeah, you're like, our framework looks bad because our JSON serializer has a third of the performance.
Phil JonesWe changed, well, David added a JSON provider to Flask.
And yeah, you could see it make a difference in the TechEmpower benchmarks.
So that was really good.
Yeah, cool.
David LordYeah, it's pluggable now.
But if you're installing Flask, orjson... I mean, I don't know what other JSON library you'd be using at this point, unless you're already using one.
But orjson is very, very fast.
Michael KennedyOkay, this is something I'm going to be looking into later.
So over to Django, Jeff, David talked about running stuff in the background and was it Django 5 or Django 6 that got the background task thing?
Jeff TriplettYeah, Django 6 just came out a couple of weeks ago.
And I'll hand that off to Carlton in a second because I think Carlton's had more to do with the actual plumbing being on the steering council.
My advice to people is the best way to scale something is just to not do it, avoid the process completely.
So like I mentioned with CDNs earlier, for content-heavy sites, cache the crap out of stuff.
It doesn't even have to hit your servers.
You can get a long way, as we mentioned earlier, too, just by doubling the amount of resources a project has.
Django is pretty efficient these days, especially with async views.
Like everybody else has said, too, any blocking code, move off to threads, move off to a background queue.
Django Q2 is my favorite one to use because you can use a database.
So for those little side projects where you just want to run one or two processes, you can use it.
It works great.
And Carlton, if you want to talk about Django internals.
Carlton GibsonYeah, OK.
So the new task framework I just mentioned, the main thing, the main sort of bit about it is that it's, again, this pluggable Django API.
So it gives a standard task API.
So if you're writing a third-party library and you, I know, you need to send an email.
It's the canonical example, right?
You need to send an email in your third-party library.
Before, you'd have had to tie yourself to a specific queue implementation, whereas now Django is providing a kind of like an ORM of tasks.
Right, right.
Michael KennedyYou got to do Redis, you got to do Celery, and you got to manage things and all that.
Carlton GibsonYou don't have to pick that now as the third-party package author.
You can just say, right, just use Django, wrap this as a Django task and queue it.
And then the developer, when they come to choose their backend, if they want to use Celery or they want to use Django Q2 or they want to use the Django task backend, which Jake Howard, who wrote this for Django provided as well, you can just plug that in.
So it's a pluggable interface for tasks, which is, I think, the really nice thing about it.
In terms of quick wins, everybody's mentioned almost all of mine.
I'm going to, Cody and Phil, they mentioned the database.
That's the big one.
Django, the ORM, because it does lazy related lookups, it's very easy to trigger an N plus one, where, you know, the book has multiple authors and suddenly you're iterating through the books and you're iterating through the authors and each one is a lookup.
So you need to do things like prefetch related, select related.
You need to just check that you've got those.
Django Debug Toolbar is a great thing to run in development, where you can see the queries and it'll tell you where you've got the duplicates.
And then the slightly bigger one is to just check your indexes.
The ORM will create the right indexes if you're querying through primary keys or unique fields.
But sometimes you're doing a filter on some field, and then there's not the right index there, and that can really slow you down.
So again, you can do the SQL explain on that and find that.
And then the thing I was going to say originally was caching: get a Redis instance, stick it next to your Django app, and, as Jeff said, don't do the work.
If you're continually rendering the same page and it never changes, cache it and pull it from the cache rather than rendering.
Because DB queries are one of your biggest costs.
The second one's always going to be serialization.
It's either serialization or template rendering.
Michael KennedySo if you can avoid that by caching, you can save an awful lot of time on your account.
Yeah.
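Carlton's "don't do the work" advice in miniature: in production this would be Redis plus the framework's cache layer with a TTL, but `functools.lru_cache` shows the shape of it (the render function below is invented, and the sleep stands in for template rendering plus DB queries):

```python
import time
from functools import lru_cache

def render_page(slug: str) -> str:
    time.sleep(0.05)   # pretend: DB queries + template rendering
    return f"<h1>{slug}</h1>"

@lru_cache(maxsize=256)
def cached_page(slug: str) -> str:
    # Real apps would use Redis / the framework cache with an expiry,
    # since lru_cache is per-process and never invalidates on its own.
    return render_page(slug)

t0 = time.perf_counter()
first = cached_page("home")           # cold: actually renders
cold = time.perf_counter() - t0

t0 = time.perf_counter()
second = cached_page("home")          # warm: served from the cache
warm = time.perf_counter() - t0
```

The warm hit skips the render entirely, which is the whole point: the fastest query is the one you never run.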
I was wondering if somebody would come back with database indexes, because that's like a 100x multiplier for free almost.
It's such a big deal.
Carlton GibsonIt really can be.
If you're making a particular query and it's doing a full table scan all of a sudden you put the index in, it's instant.
It's like, oh, wow.
You don't have to be a DBA or
Michael Kennedymaster information architect sort of thing.
I don't know about Postgres.
I'm sure it has it.
Somebody can tell me.
But with Mongo, you can turn on in the database, I want you to log all slow queries and slow for me means 20 millisecond or whatever.
Like you put a number in and then you run your app for a while and you go, look at what's slow by slowest.
And then you can see, well, maybe that needs an index, right?
Like just let your app tell you what you got to do.
Carlton GibsonYeah, there is a post.
I'm just trying to see if I can quickly look it up now.
There's a Postgres extension, which will automatically run explain on the slow queries and log them for you.
So it'll...
There you go.
Cody FincherSee if I can find...
It's pg_stat_statements, I think, is what you're thinking about.
Right, okay.
Michael KennedyIf you're unsure about your database indexes, do this, or at least go back and review your queries.
Yeah, I agree.
Very good.
All right, I can see we're blazing through these questions.
David LordI had one more.
If I can mention one.
No, please go ahead.
Michael KennedyYeah, go ahead, David.
David LordIf you want to make your website a little more responsive or interactive with the user: HTMX or Datastar, especially if you're using Quart or another ASGI framework where you can do SSE, server-sent events, or WebSockets. Streaming little bits of changes to the web front end and then rendering them with the same HTML you're already writing can make things a lot more responsive.
We had a talk about that from Chris May at FlaskCon last year, which you can find on YouTube.
Michael KennedyThis is not one of the questions, but let me just start out for a quick riff on this, folks.
Out in the audience, someone was asking, what about HTMX?
And I think more broadly, I am actually a huge fan of server-side-based, template-based apps.
I think it just keeps things simpler in a lot of ways, unless you need a lot of interactivity.
But things like HTMX or a little bit of JavaScript can reduce a lot of the traffic and stuff.
Where do people land on those kinds of things?
Janek NouvertnéI absolutely love HTMX, not just because you don't have to write a lot of JavaScript or whatever, but mostly because, if I'm just building a simple app that needs a bit more than just being a static HTML page, it needs some interactivity, a little bit of reactivity.
I feel like having the whole overhead of building an SPA or whatever tools you need for the whole JavaScript, TypeScript, whatever stack, it's just so much work to get a little bit to make a simple thing a little bit nicer, a little bit more reactive.
And I feel like HTMX just fits right in there.
It's super great.
I've built a couple of things with it now, a few of my own projects, a few things that work.
And it makes things so much easier; the work probably wouldn't have been done otherwise, because it's just too much if you're doing a whole front-end thing that you then have to deploy and build and whatever, or it would have been less nice.
So it's an amazing, really amazing thing.
Cody FincherAs a maintainer and author, though, one of the things that is not frustrating but is understandable is that HTMX is not for everybody, right?
There's not like you can't use HTMX in all occasions or Datastar, right?
And so there are people that are always going to want to use React and there's going to be people that want to use all these other frameworks.
And so having some cohesive way to make them all talk together, I think, is important.
I don't have that answer yet, but I just know that I can't always say HTMX is it, right?
And then you'll have a great time, because I'll inevitably meet somebody that says, I need to do this.
And they're right, and a single-page application or something is more appropriate for that.
And so it's obviously the right tool for the right job when you need it.
But, you know, I want to make something that is cohesive depending on whatever library you want to use.
Jeff TriplettI would throw one thing in there, though.
I would rather somebody start with HTMX than I would start with React if you don't need it.
Because React can be total overkill.
It can be great for some applications.
But oftentimes, as consultants, we see people having an about page and they throw React at it.
Like, why do you need that?
Like, especially for small things with partials.
Do you mean you don't want to start with Angular?
You know, it's fine if you need it, but I don't think you really need it.
Like, introduce tools as you need them.
Django 6.0 just added template partials, and I guess my job here is to hand off to Carlton because this is his feature.
Michael KennedyYeah, I was happy to see that come in there, Carlton.
Nice job.
Carlton GibsonNo, it's okay.
Plug the new feature.
So, I mean, I stepped down as a Fellow in 2023 into a new business, and I read the essay about template fragments on the htmx website, where it's these named, reusable bits in the templates, and I was like, I need that. So I built django-template-partials, released as a third-party package, and it's now just been merged into core for Django 6.0. And I have to say about HTMX, it's really changed the way I write websites. Before I was the Fellow, I used to write mobile applications, doing the front end of the mobile application, then the back end in Django using Django REST Framework. That's how I got into open source, you know, was via Django REST Framework. And since starting, we're three years in, and we've hardly got a JSON endpoint in sight. It's like two, three, four of them in the whole application, and oh, it's just a delight again. You know, you asked me at the beginning, Michael, am I having fun? Yeah, I really am having fun, and HTMX is the reason. I do grant...
Michael KennedyThere are, you know, these use cases. Awesome. All right, let's talk about our last topic, and we have five-ish minutes to do that, so we've got to stay on target, quick. But let's just go around real quick here. We talked about upgrading the Python version and getting better performance out of it, and I mentioned the lower memory side. But I think one of the underappreciated aspects of this (the Instagram team did a huge talk on it a while ago) is the memory cost that you run into when you start to scale out your stuff on the server. Because you're like, oh, I want to have four workers so I can have more concurrency because of the GIL, so now you've got four copies of everything that you cache in memory, and of the runtime, and now you need eight gigs instead of what would have been one, or who knows, right? But with free-threaded Python coming on, which I've seen a couple of comments in the chat about, like, hey, tell us about this: we could have true concurrency, and we wouldn't need to scale out as much on the process side, I think giving us both better performance and the ability to say, well, you actually need four times less memory, so you could run smaller servers or whatever. What's the free-threaded story for all the frameworks? Carlton, let's go back to you; we'll do it in reverse.
Carlton GibsonI'm really excited about it. I don't know how it's going to play out, but I'm really excited about it. All it can do is help Django. The async story in Django is nice and mature now, but still, most of it's sync. You know, you're still going to default to sync, you're still going to write your sync views, you've still got template rendering; Django's a template-based kind of framework, really. You're still going to want to run things synchronously, concurrently, and proper threads are going to be, yeah, they can't but help.
I don't know how it's going to roll out.
I'll let someone else go because I'm getting locked up.
Michael KennedyYeah, let me just elaborate on that for people out there before we move on: you could set up your worker process to say, I want you to actually run eight threads in this one worker process.
And when multiple requests come in, they could both be sent off to the same worker to be processed.
And that allows that worker to do more, unless the GIL comes along and says, stop, you only get to do one thing at a time in threads in Python.
And all of a sudden, a lot of that falls down.
This basically uncorks that and makes that easy all of a sudden.
Even if you yourself are not writing async, your server can be more async.
Carlton GibsonYeah.
And this is the thing that we found with ASGI: you dispatch it, you know, using sync_to_async, or you dispatch it to a thread pool executor, but Python doesn't run that concurrently, or in parallel.
So it's like, ah, it doesn't actually go as fast as you want it to.
And so you end up wanting multiple processes still.
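A standard-library sketch of why this matters: the threads below are CPU-bound, so on a GIL build they effectively run one at a time, while on a free-threaded (PEP 703) build the very same code runs in parallel, with no code changes required:

```python
import sys
from concurrent.futures import ThreadPoolExecutor

def busy(n: int) -> int:
    # Pure-Python CPU work: serialized by the GIL on a standard build,
    # truly parallel on a free-threaded build.
    total = 0
    for i in range(n):
        total += i * i
    return total

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(busy, [200_000] * 4))

# Python 3.13+ exposes which build is running; guard for older versions.
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
```

The results are identical either way; only the wall-clock time changes, which is exactly the "free extra performance" the panel is describing.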
All right, let's keep it with Django.
Jeff, what do you think?
Jeff TriplettI'm going to defer to the others on this.
I have the least thoughts.
Michael KennedyAll right, right down the stack, Sebastian; right down the web framework list.
I think it's going to be awesome.
This is going to help so much, so many things.
The challenge is going to be third-party libraries used by each individual application and if they are compatible or not.
That's where the challenge is going to be.
But other than that, it's just going to be free extra performance for everyone.
Just, you know, like just upgrading the version of Python.
So that's going to be us.
Cody.
Cody FincherYeah, I'm going to echo what Sebastian just said.
The third party libraries, I think, are going to be the big kind of sticky point here.
I'm looking forward to seeing what we can do.
I'm going to kind of hold my thoughts on that.
Yannick can speak a little bit on it, because I know that he's looked at msgspec specifically and some of the other things that might, you know, give some better context here.
But yes, the third party libraries are going to be the kind of the sticky issue.
but I'm looking forward to seeing what we can make happen.
Janek NouvertnéI'm super excited, actually, specifically about async stuff, because for most of the time it's like, you can't really saturate your CPU; async doesn't help you much with that.
Well, now, if you have proper threads, you can actually do that in async as well.
And I think it's going to speed up a lot of applications just by default, because almost all async applications out there use threads in some capacity because, well, most of things aren't async by nature.
So they will use a thread pool and it will run more concurrently.
And so that's going to be better.
But I'm also a bit scared about a few things, mainly, as a few others have said now, third-party library extensions, specifically those that are Python C extensions.
Just recently, I think like three weeks ago, msgspec got released for Python 3.14 with proper free-threading support.
And that took a lot of work.
Fortunately, a few of the Python core devs chimed in and contributed to PRs and helped out with that.
And all around the ecosystem, the last few years, there's been a lot of work going on.
But especially for more niche libraries that are still here and there, I think there's still a lot to do and possibly also quite a few bugs lurking here and there that haven't been found or are really hard to track down.
I'm curious, and maybe a bit scared too. Hopeful it works, but I'm cautious.
Michael KennedyIt's going to be a little bit of a bumpy ride as people turn that on and then the reality of what's happening.
However, I want to take Cody's warning about these third-party libraries and turn it on its head, because I think it's also an opportunity for regular Python developers who are not async fanatics to actually capture some of that capability.
Say some library says, hey, we realize that if we implement this lower-level thing, whose implementation you don't actually see, with true threading, then you use it, but you don't actually do threading yourself.
You just call even a blocking function.
You might get a huge performance boost, a little bit like David was talking about with MarkupSafe.
And all of a sudden, with you doing nothing to your code, it goes five times faster on an eight-core machine or something, in little places where it used to matter.
Janek NouvertnéWe're currently focused on the things that are out there right now and that might need to be updated.
But I'm super excited for what else might come of this: new things that will be developed, or stuff that we are currently not thinking about, that hadn't been considered for the past 30 years or so, because it just wasn't feasible, wasn't possible, or didn't make sense at all.
I think it will pay off, definitely.
All right.
Team Flask.
You guys got the final word.
Phil JonesI think it will probably be more advantageous for WSGI apps than it will be for ASGI apps.
And when I've been playing with it, it's mostly on the WSGI Flask side, where I'm quite excited about it.
At the same time, like the others, I'm a bit worried, because it's not clear to me, for example, that green threading is going to work that well with free-threading.
And that may have been fixed, but I don't think it has yet.
And that might then break a lot of WSGI apps.
So we'll see, I think.
But yeah, very excited for Flask in particular.
David Lord: Thanks for bringing up green threading.
I'm adding that to my notes to mention right now.
So Flask has emphasized for years and years and years: don't store stuff globally, don't have global state; bind things to the request/response cycle if you need to store them, and look them up from a cache otherwise.
And my impression is that that emphasis has been pretty successful.
I don't think there are any well-known extensions using global state or anything like that.
It's helped that the dev server we ship is threaded by default.
It's not going for performance, obviously; it's just running on your local machine. But by default it's already running your application in a threaded environment, not a process-based one.
I don't know if anybody even knows that you can run the dev server as process-based.
And for a decade or more we've also had gevent, enabling the exact same thing for Flask that free-threading is enabling: concurrent work and connections.
So plenty of applications are already deployed that way, using gevent to do what ASGI is kind of enabling.
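The gevent deployment pattern David mentions, in rough outline (assumes `pip install gevent`; the tiny WSGI callable here is a placeholder standing in for a real Flask app object):

```python
# Cooperative green threads: each request's blocking I/O yields to
# other requests, giving concurrent connections on a WSGI app.
from gevent import monkey
monkey.patch_all()  # must run before other imports that touch sockets/threads

from gevent.pywsgi import WSGIServer

def app(environ, start_response):
    # Placeholder WSGI app; in practice you'd pass your Flask app here.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

# Serve it (commented out so this sketch doesn't block):
# WSGIServer(("0.0.0.0", 8000), app).serve_forever()
```

The key design point is that the application code stays synchronous; gevent swaps the blocking primitives underneath, which is exactly the role free-threading could take over without the monkey-patching step.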
I've run all the test suites with pytest-freethreaded, which checks that your tests can run concurrently on the free-threaded builds.
Go check that out; it's by Anthony Shaw.
And I'm pretty sure Granian already supports free-threading.
Not sure though, I haven't looked into Granian enough.
Michael Kennedy: You know, I'm not sure either.
It does have a threaded runtime mode, but I don't know if that's truly free-threaded or not.
David Lord: All of those things combined make me pretty optimistic that Flask will be able to take advantage of this without much work from us.
I know that's a big statement and I haven't tested it, but the fact that we've emphasized all these parts for so long already makes me confident about it.
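For anyone who wants to verify what they're actually running on, a small check like this works on CPython 3.13+ (on older versions it simply reports a regular GIL build):

```python
import sys
import sysconfig

# Was this interpreter compiled with free-threading support (PEP 703)?
ft_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# Even on a free-threaded build the GIL can be re-enabled at runtime,
# so check the live state too (sys._is_gil_enabled exists on 3.13+ only).
check = getattr(sys, "_is_gil_enabled", None)
gil_enabled = check() if check is not None else True

print(f"free-threaded build: {ft_build}, GIL currently enabled: {gil_enabled}")
```

Both answers matter: a free-threaded build can still run with the GIL on, for example when an extension module that doesn't declare free-threading support gets imported.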
Michael Kennedy: I'm also super excited about it.
And just one final thought I'll throw out there before we call it a show, because we could go on for much longer, but we're out of time.
I think once this comes along, whatever framework from this group you're using, there are a bunch of interworking pieces.
Any one of them may have some kind of issue.
So I think it's worth doing some proper load testing on your app: point something like Locust (locust.io) at it and ask, what if we gave it 10,000 concurrent users for an hour?
Does it stop working?
Does it crash?
Or does it just keep going?
So that seems like a pretty good thing to do before you deploy your first free-threaded version.
Yeah.
All right, everyone.
I would love to keep talking; this is such a good conversation, but I also want to respect your time and all that.
So thank you for being here.
It's been an honor to get you all together and have this conversation.
Thank you very much for having us.
Janek Nouvertné: Thank you.
Yeah.
Thanks for having us all.
Thanks, everybody.
Yeah.
It's nice being here.
Yeah.
Thanks for having us.
Michael Kennedy: Thanks for having us all.
Bye.
Bye-bye.
This has been another episode of Talk Python To Me.
Thank you to our sponsors.
Be sure to check out what they're offering.
It really helps support the show.
If you or your team needs to learn Python, we have over 270 hours of beginner and advanced courses, on topics ranging from complete-beginner material to async code, Flask, Django, HTMX, and even LLMs.
Best of all, there's no subscription in sight.
Browse the catalog at talkpython.fm.
And if you're not already subscribed to the show on your favorite podcast player, what are you waiting for?
Just search for Python in your podcast player.
We should be right at the top.
If you enjoyed that geeky rap song, you can download the full track.
The link is in your podcast player's show notes.
This is your host, Michael Kennedy.
Thank you so much for listening.
I really appreciate it.
I'll see you next time.
I started to meet.
And we're ready to roll.
Upgrade the code.
No fear of getting old.
We tapped into that modern vibe over King Storm.
Talk Python To Me, async is the norm.