Brought to you by Michael and Brian - take a Talk Python course or get Brian's pytest book


Transcript #330: Your data, validated 5x-50x faster, coming soon

Recorded on Tuesday, Apr 4, 2023.

00:00 - Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds.

00:05 This is episode 330, recorded April 4th, 2023.

00:10 I'm Michael Kennedy.

00:11 - And I'm Brian Okken.

00:12 - And you can connect with us over on Fosstodon.

00:16 If you're on Mastodon, find us there.

00:18 I'm @mkennedy@fosstodon.org.

00:20 Brian's @brianokken over there.

00:22 And the show is @pythonbytes@fosstodon.org.

00:25 And if you're interested in the video version, check out pythonbytes.fm/live, click that, go over to the YouTube channel, subscribe, get notified, and you'll get a little pop-up when we start streaming live.

00:37 It's always fun to be part of it.

00:39 We encourage people to check the show out that way as well.

00:42 So Brian, let's start off with something that has been almost exactly one year in the works.

00:47 - I was just gonna ask you about that.

00:50 So was this about a year ago we talked about this Pydantic rewrite?

00:56 I think I saw something on Twitter of all places from Samuel Colvin saying it was April 4th, 2022 that he started working on Pydantic version two.

01:09 So that sounds like one year to the day.

01:11 Okay.

01:12 Well, it's not completely here yet, but it's here enough to try.

01:16 So I'm pretty excited about it.

01:18 So, Pydantic V2, the version two pre-release.

01:22 It's available now, so people can install it.

01:26 You have to do the pip install --pre pydantic, and then you can say pydantic greater than or equal to 2.0a1, I guess, if you want to get the alpha one or better.
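If you want to try it, something like this should do it (a minimal sketch; the exact alpha number is whatever is current):

    # pip install --pre "pydantic>=2.0a1"
    import pydantic

    print(pydantic.VERSION)  # should report a 2.0 pre-release, e.g. "2.0a1"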

01:39 So the big news is alpha one's available.

01:42 And it's pretty exciting.

01:45 There's a whole bunch of great changes.

01:47 I think we've talked about it before.

01:48 You talked about it on your show too, I think.

01:50 - Yeah, yeah.

01:51 But the headline here is, well, one, it's not complete yet.

01:57 This is the alpha, we're not even at beta yet.

01:59 So if you try it out and you see something, go create a GitHub issue.

02:06 And they want people to use the bug V2 label when creating issues around version two, 'cause they wanna hop on those right away.

02:15 Anyway, one of the big changes was to move a lot of Pydantic, the validation rules and everything, into a different package called pydantic-core.

02:28 That one's mostly written in Rust.

02:31 And so it's like five to 50 times faster overall for performance.

02:38 So that's pretty exciting.

02:39 'Cause when you're using Pydantic, it's hit on every interaction, right?

02:44 So as fast as possible is great.

02:47 And I do like the idea of separating the Rust part out into a different package, pydantic-core, so that they can kind of maintain it and have safety and maintenance around that a little separately.

03:02 I think that makes sense.

03:03 Yeah, and people used to have to create their derived classes and put a lot of their customization in what's called root-level validators and things like that, where it's like, I want to validate the whole class, not just a certain field,

03:16 or if this is set, then that has to be set that way.

03:19 A lot of those things had to be done in an OOP way.

03:22 And I think with pydantic-core, you have more direct access to a layer below.

03:27 So it's not just faster, which is fantastic, but it also opens up more ways to interact with Pydantic, which is cool.
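As a rough illustration of that older OOP style, here's a minimal sketch of a Pydantic v1-style root validator enforcing an "if this is set, that has to be set" rule across the whole model (the Order model and its fields are made up for the example):

    from typing import Optional
    from pydantic import BaseModel, root_validator

    class Order(BaseModel):
        coupon: Optional[str] = None
        discount: float = 0.0

        @root_validator
        def coupon_needs_discount(cls, values):
            # Whole-model rule, not a single-field check.
            if values.get("coupon") and values.get("discount", 0) <= 0:
                raise ValueError("a coupon requires a nonzero discount")
            return values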

03:34 - Yeah.

03:35 So they've got a lot of stuff working already.

03:38 They want people to be able to experiment and try out their base model changes, a lot of the same features for validation, but there's new method names.

03:48 And it is a change.

03:51 Data classes, serialization, strict mode, different schemas, lots of changes for V2.

03:56 So they'd like people to try it out.

03:58 There is a lot of stuff still under construction, mostly documentation.

04:05 And some of the settings stuff has changed: it was BaseSettings, and now it's gonna be in pydantic-settings.

04:12 That's not quite ready.

04:14 So there's still some work to do, even in the migration guide.

04:18 So they've gotten a start on the migration guide, but it's not all there yet.

04:21 As you see on some of the links, there's changes to data classes, changes to base model.

04:27 Some of the stuff's already there, but it's still under construction.

04:31 So pretty exciting.

04:33 - I'm definitely excited.

04:34 And the five to 50 times faster, that's no joke.

04:39 There's the idea, okay, well, what are you doing?

04:41 I'm like parsing a settings file, I got a single JSON document, whatever.

04:45 All of FastAPI is deeply, well, not all, but much of FastAPI is deeply based on exchanging rich data with Pydantic, right?

04:54 So your API layer could get much faster, right?

04:58 And you can also use this, people maybe don't realize it.

05:02 You can use this with other frameworks as well.

05:04 You could use it with Flask, you could use it with Pyramid.

05:07 FastAPI is cool 'cause you can just have an argument of type Pydantic model and it automatically fills it all in.

05:14 But all you've got to do is take the post dictionary and feed it to a Pydantic model, just inside the function as the first line, and you're in the same place.
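A minimal sketch of that pattern in Flask (the Item model and route are hypothetical):

    from flask import Flask, request
    from pydantic import BaseModel, ValidationError

    app = Flask(__name__)

    class Item(BaseModel):
        name: str
        price: float

    @app.post("/items")
    def create_item():
        try:
            # First line of the view: validate the posted JSON with Pydantic.
            item = Item(**request.get_json())
        except ValidationError as e:
            return {"errors": e.errors()}, 400
        return item.dict(), 201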

05:22 >> Yeah.

05:22 >> So you can use this across all these areas.

05:24 Then for example, pythonbytes.fm is powered by Beanie, which is Pydantic plus MongoDB plus async, which is awesome.

05:32 But every single database record comes back, goes through Pydantic.

05:36 If you're using something like Beanie or SQLModel and FastAPI, your data layer goes through Pydantic and your web layer does too, 'cause there's like multi-layered Pydantic operations on every interaction.

05:48 And so making that part five to 50 times faster is just huge, right?

05:53 That's a really big surface area to make a lot faster.
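A minimal sketch of what that layering looks like with Beanie (the Episode document is hypothetical, and the usual init_beanie setup is omitted):

    from beanie import Document

    class Episode(Document):  # a Beanie Document is a Pydantic model
        number: int
        title: str

    # Every record that comes back from MongoDB is validated by Pydantic:
    #     episode = await Episode.find_one(Episode.number == 330)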

05:56 I've got speedups as well to talk about later in the show.

05:59 But it's not nearly that much surface area, so that's awesome.

06:01 - One of the things you bring up, which is interesting, is that a lot of people, I mean there's tons of people that use Pydantic just by itself with their own code.

06:11 But the people that mostly touch it through FastAPI or Beanie or something, they may have to wait until those projects bring on the changes, unless those projects have branches for V2, which, who knows?

06:24 >> I hope so.

06:27 And maybe there's a way--

06:29 I think the breaking changes in, say, BaseModel, for example, are kind of deprecated already in 1.10.7.

06:38 They're still there, but I think they become breaking changes here.

06:42 Also, if you're doing model validation, a lot of the function names get changed.

06:50 Yeah, like things like from_orm go away.

06:53 There's a bunch of little ones. I don't think they're big changes that are gonna be a huge problem for people, but they are incompatibilities, as you point out.
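For example, a few of the renames as they stand in the V2 plans (names could still shift before the final release):

    from pydantic import BaseModel

    class User(BaseModel):
        id: int
        name: str

    user = User(id=1, name="Sam")

    data = user.model_dump()           # was user.dict() in v1
    text = user.model_dump_json()      # was user.json()
    again = User.model_validate(data)  # was User.parse_obj(data)
    # from_orm is replaced by model_validate with from_attributes=True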

07:00 Both Roman Wright and Sebastian Ramirez seem to be really on top of their projects, respectively Beanie and FastAPI.

07:06 So I feel like by the time this becomes fully released, they'll be there.

07:11 - Yeah, and that's one of the reasons why I covered it, to try to, like, nudge everybody.

07:14 - Nudge nudge, hey, exactly.

07:16 - Nudge nudge.

07:17 (laughing)

07:17 - Exactly.

07:18 - Please.

07:19 (laughing)

07:20 - So, yes, indeed.

07:22 Yeah, it would be awesome if, when this comes out, just boom, V2 is out, you could just update all the packages that depend upon it and adopt it right away. That would be great.

07:30 - But to be fair, like let's say I'm working on like a side project that's using FastAPI or Beanie or something, and I don't have it in production yet, I'd be like, yeah, let's use V2 right away.

07:42 But if I've got a production system, I don't want to switch right away.

07:46 So I get that there's projects at different levels, and projects like FastAPI and Beanie have to keep that in mind.

07:54 So yeah.

07:55 All right.

07:56 - All right, well, this is exciting to me.

07:58 I've known this has been coming for a long time.

08:00 So excellent work.

08:01 - What you got for us?

08:03 - Well, here's a quick real-time follow-up.

08:05 I interviewed Samuel Colvin.

08:08 He loves to do stuff on the fourth of the month, apparently: there was one on August 4th as well, last year, called Pydantic V2, The Plan.

08:15 So people can check that out if they're interested.

08:18 Let's talk about something really small, okay? From a friend of the show, Miguel Grinberg, and he created this thing called Microdot.

08:26 Microdot, it's very small.

08:28 It's smaller even than the semi-dot or the regular dot, very small.

08:35 - No, so this is, yeah, so back to web frameworks.

08:39 This is the impossibly small web framework for Python and MicroPython.

08:44 So I believe its reason for existence really is to basically bring something like Flask to MicroPython and CircuitPython, which is cool.

08:54 However, it also runs under standard CPython, which opens up some interesting possibilities as well.

09:01 So if we go down here.

09:04 So if you're familiar with the Flask API, you should be real familiar with Microdot.

09:10 So app equals, instead of Flask, you say Microdot.

09:13 Got a function, it says I want to do an app.route on it.

09:17 Or, I don't know if he supports direct HTTP verbs there, like app.get, app.post, like Flask adopted recently, but app.route for sure.

09:26 And then one of the differences is you have to pass a request object into the functions there, whereas Flask has this thread-local ambient version of this thing.

09:37 So you'll get like a 500 error if you try to, you know, just run this directly without adding that request.

09:43 So it's easy to overlook.

09:44 But other than the fact that there's a request parameter to the views, it's basically the same thing.
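Roughly, a hello world looks like this; a minimal sketch based on the Microdot README (note the explicit request parameter):

    from microdot import Microdot

    app = Microdot()

    @app.route("/")
    def index(request):  # Microdot passes the request in explicitly
        return "Hello, Microdot!"

    app.run(port=5000)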

09:51 Okay, so yeah, that's pretty interesting.

09:54 Now, there's a bunch of compromises that are made here because MicroPython doesn't support Jinja, it doesn't support Flask, all of these different things.

10:03 All right, so there is a template language, but it's not Jinja, right?

10:09 So there's a bit of a migration if you were going to take this on, right?

10:13 So you can run it under CPython, but you can also run it under MicroPython.

10:18 There's the HTTP methods.

10:20 I think you see the old style.

10:22 Yeah, no, no, there is an app.get.

10:24 You can do the old style where you pass the method names like get and post, or you can just do an app.get.

10:29 But yeah, again, if you're familiar with Flask, the way you do routes, the way you pass data into the functions, all those things are absolutely the same, which is pretty cool.

10:40 One thing you can do is you can return JSON responses, and you can even just return a dictionary, which I don't think you can do in Flask.

10:47 Maybe you can, but I think you have to jsonify it.

10:50 I think you gotta say flask.jsonify.

10:51 So this is kind of an upgrade, I would say.
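So returning JSON can be as simple as this sketch, with no jsonify step needed:

    from microdot import Microdot

    app = Microdot()

    @app.get("/api/status")
    def status(request):
        # A plain dict goes back to the client as a JSON response.
        return {"ok": True, "framework": "microdot"}

    app.run(port=5000)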

10:54 So if you have a little tiny thing, like, let's see, a little tiny thing like this, Brian.

11:00 >> Okay.

11:00 >> Right here. How big is that?

11:02 Not the size of my hand, maybe halfway across the palm of my hand.

11:06 Got one of these little tiny MicroPython, CircuitPython things.

11:09 You can now put APIs on here, and you can put even really interesting things like it has support for concurrency.

11:17 So Flask doesn't support directly having async and await, I don't believe.

11:22 Not fully anyway, you've got to switch over to Quart to do that.

11:24 I think they partially support it, but not full async and await.

11:28 But you can use the MicroPython asyncio extension here and get APIs running with full async and await, concurrency support, doing JSON or other things, maybe with ujson, not sure.

11:40 - That's pretty cool, 'cause that's just an easy way to, like, throw a REST API or some sort of API on something to throw back JSON. That would be really cool, and for applications like that, you don't need a lot of templating around it.

11:57 - Yeah, and so let's see, go to the core extensions here.

12:00 You can see there's actually a bunch of cool core extensions.

12:03 So got the async and await support.

12:04 So all you gotta do is just write async def endpoint, right?

12:11 Boom, off it goes.

12:12 - Yeah, 'cause you really wanna async that hello world.

12:15 - Yeah, for hello world, not so much.

12:16 But if you're talking to files or a database or something.
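Something like this sketch, for instance; note that depending on the Microdot version, the async-capable class may live in a separate microdot_asyncio module:

    import asyncio
    from microdot import Microdot  # or: from microdot_asyncio import Microdot

    app = Microdot()

    @app.get("/slow")
    async def slow(request):
        await asyncio.sleep(1)  # stand-in for a database or file call
        return {"done": True}

    app.run(port=5000)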

12:19 - Yeah, sure.

12:21 - You can use, what is that, micro template?

12:23 It says uTemplate, but I bet that u is a micro.

12:26 - I think it's pronounced template.

12:28 - A template, yeah, uTemplate.

12:30 It's a lightweight and memory-efficient template engine for Python, and it looks very, very Jinja-like as well.

12:40 So that's a pretty straightforward thing, but it's not identical, right?

12:43 What else we got?

12:44 - Ooh, this is Jinja, isn't it?

12:46 - So, oh, can you?

12:47 Oh, you, but no, hold on.

12:51 It's CPython only.

12:52 >> Okay. Got it.

12:54 >> Because it's not supported in MicroPython.

12:56 >> Got it. But if you're doing it on CPython, you can add those templates.

13:01 >> Yes. This is actually why it's pretty interesting to me.

13:03 So you can do TLS, HTTPS support, which is pretty cool.

13:09 You have WebSockets, you have asynchronous WebSockets, you have CORS, cross-origin resource sharing, settings, and you can even deploy it.

13:18 So it comes with its own little web server, which you can run on your MicroPython device.

13:24 But if you were going to put this into a big data center on a huge rack of servers, that might not be the best choice.

13:31 So you can run it on uWSGI, which is awesome.

13:33 You can run it on Gunicorn.

13:35 You can even run it on Gunicorn with Uvicorn workers to get the awesome libuv, high-performance async support.

13:46 So you can deploy it onto kind of the top tier Python stuff, put, you know, Nginx in front of it and all that.
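As a sketch of that deployment shape (the ASGI import path is an assumption; it has moved between Microdot versions):

    # app.py
    from microdot_asgi import Microdot  # module name varies by Microdot version

    app = Microdot()

    @app.get("/")
    async def index(request):
        return {"hello": "world"}

    # Then run it under Gunicorn with Uvicorn workers, e.g.:
    #     gunicorn -w 4 -k uvicorn.workers.UvicornWorker app:app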

13:53 It's pretty cool, right?

13:53 - Yeah.

13:54 - Now, one of the ways you run web apps on servers, especially in Python, because the threading support is not as friction-free, I guess, you know, like the GIL and all that, is you'll farm it out into multiple workers, right?

14:08 So I can't remember what we have for Python Bytes, it's not too many, like two or four different worker processes.

14:15 And each of the worker processes kind of gets round-robin brought in to handle requests.

14:20 So like if one of them is busy, it'll automatically send the request over to another one.

14:25 Now, usually there's maybe a couple of limits, but one of the limits you really wanna consider is, I don't wanna run the machine out of memory.

14:32 So I think the ones we use are like 150 megs per worker process.

14:36 So at some point, they'll only create so many, and then the server's like, okay, I've had it.

14:41 But with this little thing that runs on MicroPython, you could scale the heck out of it.

14:45 You could have a ton of worker processes under uWSGI.

14:48 - Oh yeah.

14:49 - Or under Uvicorn, right?

14:52 And I actually did a little super simple test, like I wrote the Flask equivalent of hello world, literally exactly the same code but the changes I talked about.

15:02 And then the MicroPython version, sorry, the Microdot version, both running on CPython: nine megs for this framework, 25 megs for the Flask framework.

15:15 So maybe you can have twice as much processing horizontal scale with this thing, but deployed to real servers.

15:23 So there might actually be some interesting advantages to having this like really tight framework.

15:28 Like I wanna run it in a bunch of Docker containers that are like using the smallest amount of memory.

15:32 I wanna farm it out and say, no, I'll have 30 worker processes on this server, not five.

15:37 I don't know, we'll see.
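Back-of-the-envelope, using the two numbers above and a hypothetical 1 GB memory budget for worker processes:

    budget_mb = 1024          # hypothetical budget for workers
    print(budget_mb // 25)    # ~40 Flask workers at 25 MB each
    print(budget_mb // 9)     # ~113 Microdot workers at 9 MB each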

15:38 Yeah, yeah.

15:39 So Pamphle out there is pretty psyched about the memory savings as well.

15:43 So I think it's good.

15:45 - Yeah, and I mean, for lots of jobs, if you don't need the power, don't use the power.

15:50 - Yeah, yeah, pretty neat.

15:51 So well done, Miguel, and yeah.

15:54 Now, Brian, let me take just a moment and tell everyone about our sponsor.

15:59 This episode of Python Bytes is brought to you by Influx Data, the makers of InfluxDB.

16:04 InfluxDB is a database purpose-built for handling time series data at a massive scale for real-time analytics.

16:12 Developers can ingest, store, and analyze all types of time series data, metrics, events, traces in a single platform.

16:20 So dear listener, let me ask you a question.

16:22 How would boundless cardinality and lightning fast SQL queries impact the way you develop real-time applications?

16:29 InfluxDB processes large time series data sets and provides low latency SQL queries, making it a go-to choice for developers building real-time applications and seeking crucial insights.

16:40 For developer efficiency, InfluxDB helps you create IoT, analytics, and cloud applications using time-stamped data rapidly and at scale. It's designed to ingest billions of data points in real time with boundless cardinality. InfluxDB streamlines building once and deploying across various products and environments, from the edge, on-premises, and to the cloud. Try it for free at pythonbytes.fm/influxdb.

17:08 The link's in your podcast show notes.

17:10 Thanks to InfluxData for supporting the show.

17:12 Over to you.

17:14 What's your next one?

17:14 I want to talk about GitHub Actions a bit.

17:17 A lot of my workflows have moved over to GitHub Actions.

17:20 And so there's actually three projects that I wanted to talk about that I thought were neat and worthwhile, and they're all kind of in the GitHub Actions genre.

17:29 - What you got, what you got, what you got first?

17:32 - What, what, what you got: watchgha.

17:37 It comes from Ned Batchelder.

17:39 This is just a simple tool to watch your GitHub Actions progress from a command line.

17:47 I think it's a command-line thing.

17:50 It looks like command line.

17:51 And it has like a little progress bar thing, progress dots that start out gray and then go white and green and stuff, to see the different things.

18:03 You've got like, we're running 3.7 on Ubuntu, 3.8, and so on.

18:08 So if you've got a big matrix that's doing a lot of stuff, it's kind of hard to keep up with what all is going on.

18:14 So this is kind of neat to watch, just a little tool from Ned, thanks Ned.

18:19 One of the other things I was thinking about: my talk at PyCascades was about how you can share packages without actually ever building them, because pip install will build your wheel for you if it's not built already.

18:38 But you probably should test that.

18:41 And one of the ways you can test some of that building is with Hynek's build-and-inspect-python-package.

18:48 So this is a GitHub Action that does a lot of stuff, but it does a build to make sure the build works.

18:58 It also has a linter to lint the wheel contents.

19:03 It also uploads the wheel and the source distribution as GitHub Actions artifacts.

19:08 So it actually does generate the wheel for you as an artifact, which is kind of neat.

19:14 One of the things that's always a mystery to me is making sure I have everything that I want in the sdist, the source distribution, and this will lint that. Well, actually, I guess it doesn't lint the contents of the sdist, but it does print them out.

19:32 So it prints a tree of the sdist and the wheel in the output, so that you don't have to download it to check it; you can just look at it in your GitHub output.

19:41 - Make sure all the files and resources you might need to send out come along.

19:45 - Yeah, I had recently made a change to a package and it took out the tests, and I had somebody say, oh, look, we want the tests back in.

19:57 So it's kind of nice.

19:58 So I guess that's it with that.

20:03 It's kind of a neat GitHub Action thing that you just put in.

20:08 It's one of those actions.

20:09 So you just, like, specify it and it just works.

20:12 It's nice.

20:12 The third thing I wanted to bring up was pytest-github-actions-annotate-failures.

20:21 So this is just a nice extra thing that I hadn't heard about before.

20:27 It's under the pytest-dev umbrella, but it's a pip-install sort of a thing.

20:33 And what it does is make sure that all the proper stuff gets output so that your asserts, if there are failures, are annotated nicely in GitHub Actions.

20:47 That's it.

20:48 Just some fun GitHub Actions stuff.

20:49 - Yeah, once you really start getting into CI/CD, it's fun and you're just like, "Oh, and now that it's automated, we could do this, "or we could do that." - Yeah, but when you automate all the things and then when things go wrong, you're like, "Oh God, then we have to pull it down and check it again." But having some of these debug stuff and things up in the cloud is good.

21:10 - Yeah, very handy.

21:11 Excellent.

21:12 All right, well, I have a PEP for us to discuss.

21:16 - Okay.

21:16 PEP 709, inlined comprehensions.

21:20 Now, this is a debate that I seem to have only on YouTube.

21:25 Like, I've done some videos about list comprehensions or other sorts of design patterns that might involve comprehensions, and people are like, oh, Michael, you said that a for loop is different than a list comprehension, but look, it says for thing in collection, and so they're the same, and so you just don't know what you're talking about.

21:45 Like, you know what, let's disassemble it.

21:47 Well, let's see what it does.

21:48 Is it the same disassembly?

21:49 No, it's completely different disassembly.

21:52 That means the implementation of those comprehensions are different.

21:55 I don't care if the word for appears in both of them.

21:57 They're not the same thing.
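You can check this yourself with the dis module; on Python 3.11 and earlier, the comprehension compiles to a separate <listcomp> code object with its own LIST_APPEND bytecode:

    import dis

    def with_loop():
        out = []
        for thing in range(3):
            out.append(thing)  # a real attribute lookup and method call
        return out

    def with_comp():
        return [thing for thing in range(3)]  # dedicated LIST_APPEND bytecode

    dis.dis(with_loop)
    dis.dis(with_comp)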

21:58 This PEP brings them together, though, and tries to take the best of both worlds.

22:04 And it says, there are some things we do to make comprehensions work but look like they're just right there inline in the same function, or at module level even if you don't have a function.

22:14 But in fact, there's kind of this thing behind the scenes that's happening, where we create a nested function that you never see, but the interpreter creates it and then calls it, and that's the comprehension, okay?

22:28 So this PEP by Carl Meyer is basically saying we could get really good performance increases if we just change that implementation a little.

22:39 And the reason it's created as a nested function and not just some inline code is, what if you have a variable x in your regular function, and then you have x as a loop variable, or as the item variable in your comprehension or things like that, you want them to still be isolated.

22:57 So that's basically the idea here.

22:59 It says, "Comprehensions are currently compiled as nested functions which provides isolation of the comprehension's iteration variable but is inefficient at runtime." So PEP 709 proposes to inline list dictionary and set comprehensions into the code where they are defined and provide the expected isolation by looking at all the variables, creating a copy of them, running this in place, and then if there was a variable for that loop variable, just put the old value back, right?

23:29 Kind of push and pop them.
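Concretely, the isolation that both the old and new implementations preserve looks like this:

    x = "outer"
    squares = [x * x for x in range(3)]  # this x is local to the comprehension
    print(x)        # still "outer", before and after PEP 709
    print(squares)  # [0, 1, 4]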

23:30 And the benefit here is comprehensions up to two times as fast.

23:37 So then they said, this is translating to an 11% speed up in one sample benchmark derived from real world code that makes heavy use of comprehensions in the context of doing actual work.

23:48 That's pretty cool, right?

23:49 - Yeah.

23:50 - I believe comprehensions were in general slightly faster than for loops that would just do something and put it in a list.

23:56 So making it two times faster still is even better.

24:00 So if this gets adopted, it's in draft form right now.

24:02 I can go back to my YouTube comments and have even further nuanced discussions about like, here's yet again, how they are not the same thing, but they look similar.

24:12 So yeah.

24:14 - I never would have thought that I should reuse a variable in a comprehension though.

24:20 I don't do that, but I guess.

24:22 - No, I think it's like, let's say you've got two list comprehensions, you know, [x**2 for x in first_set], then [2*x + 1 for x in other_set].

24:36 Those are two separate list comprehensions.

24:37 You don't want one of those variables leaking into the other.

24:41 You want to keep them separate; it wants to be like, okay, this x is only for this comprehension.

24:45 That's what it's like.

24:46 >> If you have an embedded comprehension, you might use X in both places.

24:50 >> Right. Or if you have an x and y equals something, and then you generate a comprehension and use x in there. There's a couple of, there's some weird cases.

24:57 >> Yeah. I guess I was just thinking my own style.

25:00 The second one I would never do.

25:02 If I was already using X, I probably wouldn't use X in the comprehension.

25:06 But I'll often use i or x in a comprehension, even in embedded ones, and don't even think about it.

25:13 So yeah, interesting.

25:15 - Cool. - Yeah.

25:16 David Poole says, "I'm sure there's good reasons for it, but I wonder why comprehensions don't use name-mangling strategies for their for-loop names.

25:23 Everyone's gotta be named underscore, underscore x." (David laughs)

25:27 That is a good question. - It reminds me of a joke.

25:29 - Well, so what it's doing now is it basically says, we're going to create a function.

25:37 And so that variable is basically a local variable of that function, so the outer scope has no influence on it.

25:43 Were you going to actually tell us the joke?

25:45 (laughing)

25:46 - No, I was going to wait.

25:47 - Okay, all right.

25:49 - Or should we do it now?

25:50 Oh, just, I think it was Ned Batchelder actually that mentioned that dunder, we often talk about dunder init instead of double underscore init, but it's really underscore, underscore, init, underscore, underscore.

26:05 So it's really quunder.

26:07 >> Quunder.

26:08 >> So I responded to him and said, "I don't think so.

26:12 I think dunder init dunder is what it should be." But that would be redundant.

26:19 >> It's pretty bad. That's pretty much on par with the joke we got at the end. So I'm preparing some people.

26:25 All right. So basically the way to understand this, you can't look at the code and tell, which is why people incorrectly try to correct me on YouTube.

26:34 So you look at the code and it looks like, oh, it's just a for loop, and we took out the line breaks and put brackets, so it's the same thing.

26:41 So if you look at it now, you can see if you create a function that creates a list comprehension, you'll see it creates what's called a code object of type list comprehension.

26:53 Then it calls make function, then it loads the list, and then it does a bunch of stuff on it.

26:58 And then it actually, you can see there's like the sub-function that gets disassembled, and it says, we're gonna build a list, load fast, iterate it, list append, and what's really interesting, this is the part that differs from for loops.

27:09 There's a bytecode called LIST_APPEND.

27:12 If you do this with a for loop, where you have a list and you call append, it loads the function append and then calls it on the operands, but there's no special bytecode at runtime in a for loop.

27:22 In a comprehension, there's a special bytecode that runs, and that's like the primary difference, okay?

27:27 So, but the drawback, right?

27:30 So the benefit is LIST_APPEND is a bytecode operation, not a function call.

27:34 But the drawback is there's this object created, there's a stack frame created, there's a function call over to this comprehension code.

27:41 Like there is overhead with all that stuff, right?

27:43 So the new one just says, what we're gonna do is create a new opcode called LOAD_FAST_AND_CLEAR, which is like, I'm gonna load the variable x, and if there was one of those before, we're going to hang on to it just in case, so we can put it back afterward.

28:00 And then it calls build list, and you notice there's no function calls, so no stack frame, no extra function call, no list comprehension object, all those things.

28:11 And so this is the new bytecode operation that manages that variable isolation, and then you just do it directly, which saves you a bunch.

28:19 We talked about the 2x speed there.

28:21 So that's the PEP.

28:23 People can check it out, see what you think.

28:25 - That's really interesting, Michael, but you had me at it's faster.

28:29 - I know, exactly.

28:30 I just want people to kind of know what's happening and why it might be faster and so on.

28:35 So pretty neat.

28:36 You can see it does have some possible, I guess, concerns, which is the reason they say, why is this even a PEP?

28:45 Why is this not just, hey, I made it faster?

28:46 Like, why do we need to discuss this?

28:48 - Yeah.

28:49 - Just like you said, right?

28:50 Like, it's faster, we're done, let's go.

28:52 is that there are user-observable changes, if the user doesn't like themselves, basically.

28:58 For example, why would a user return locals as the item you want put into a list during a list comprehension?

29:07 Well, if you did do that, you would see that it's not the same as before.

29:11 I have no idea why you would ever try to do that, but that would technically be a breaking change.
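As a sketch of that corner case:

    # Capturing locals() inside a comprehension: the snapshot's contents
    # differ once comprehensions are inlined, because there's no longer a
    # hidden nested function with its own local scope.
    def f():
        y = 42
        return [locals() for _ in range(1)]

    print(f()[0].keys())  # includes 'y' only where the comprehension is inlined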

29:18 The other one, slightly more valid perhaps, is that if there's an exception inside the list comprehension, because it used to be in a separate function call, it would show up in the actual traceback call stack.

29:33 But now you are not in another function, you're on a line in the OG function.

29:38 So you don't have that, basically it's missing from there.

29:43 So you would have a slightly different traceback exception.

29:47 Well, the exception would be the same, but the traceback call stack listing would be different.

29:52 That potentially affects somebody, but not a lot.

29:55 I don't know.

29:56 It's a 2x speedup; it's a trade-off I would totally think is worth taking.

29:59 - Yeah, but I see why that's an observable interface or an observable behavior change.

30:06 So, yeah. - Yeah, yeah.

30:08 - Although I learned from Brett Cannon just a couple weeks ago that locals often has weird stuff in it.

30:14 If you look at locals a lot, sometimes there's stuff in there that you don't recognize.

30:18 >> Interesting. Yeah, the local one seems like, you know what, don't do that.

30:23 But the traceback one, I can see, okay, we're always looking for this.

30:28 If I get an error, I try to look at the traceback to figure out what to tell people.

30:32 I don't know, theoretically, it still seems unlikely.

30:36 I feel like you shouldn't depend upon what's listed there, but I'm sure somebody does somewhere.

30:41 >> Well, like IDE makers and things like that.

30:43 >> Yeah. Cool. All right.

30:45 Well, that's it for all of our items.

30:48 You got any extras?

30:50 - I don't, do you?

30:51 - I thought I didn't have any extras, but I'm gonna try to predict the future a little bit, because I have some control over it.

30:58 So I do have an extra.

30:59 I'm gonna be releasing either today or tomorrow.

31:03 By the time this podcast comes out, this is going to be released.

31:06 But if you're watching it live, it's not yet released.

31:09 So I'm gonna be releasing a new course, Python Web Apps that Fly with CDNs.

31:14 It's just over a three hour course that's all about taking CDNs and applying them to like Flask web apps and also hosting video content and large files and how do you geo replicate that.

31:26 We use a lot of these techniques on Python Bytes to, like, make the website faster, as well as to deliver, you know, terabytes of MP4s and MP3s to people.

31:36 So check that out.

31:38 I will put a link in the show notes.

31:40 Again, if you're listening live, this is not out yet, but it will be out by the time the MP3 hits your podcast player.

31:46 So if you have the audio only version, go check it out, links in the show notes.

31:49 - Nice.

31:50 - Yeah, I think that's a really, really cool course.

31:51 I think there's so much people can get out of it in terms of like, it's really easy.

31:57 You know, 30 minutes and you're like, "Oh, our app is so much faster "and we can use smaller servers." That's really great.

32:03 - Well, three hours plus 30 minutes.

32:05 - Yes, well, once you know the thing, it's probably 30 minutes to apply it to your app, is what I mean.

32:10 - Yeah.

32:11 - Yeah.

32:12 David Poole says, "The traceback one could be worked around if the debug compile used the old function-style method, like aggressive optimizations in GCC with inline functions." Okay, possibly, interesting.

32:24 You would have to have people buy into that, but right.

32:27 I mean, I'm sure Brian, you're very well aware of the debug versus release builds and optimization levels and all that stuff in C, right?

32:36 - No, I mean, yes, but I don't use 'em.

32:39 - No, you don't need that.

32:40 - Well, I personally don't like to test in something my user isn't gonna see.

32:46 So I always test in optimized released.

32:49 - Got it, yeah, yeah.

32:51 But it can make a big difference.

32:52 But in the Python world, we don't really discuss that so much, right?

32:57 - Yeah, and while we're on it, one thing to be aware of that we probably haven't mentioned lately: asserts are awesome in your test code, but they're not that great in your... actually, they're pretty great in your function code also, but just don't depend on them, because assert lines can be completely removed if you have optimization turned on.
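For example, a quick sketch of why you shouldn't depend on them:

    # Run with `python -O` and the assert line disappears entirely.
    def withdraw(balance, amount):
        assert amount >= 0, "amount must be nonnegative"  # stripped under -O
        if amount < 0:  # an explicit check survives optimization
            raise ValueError("amount must be nonnegative")
        return balance - amount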

33:20 - Absolutely.

33:21 All right, you ready for a joke?

33:22 - Yes.

33:23 - You a fan of movies?

33:24 Like watching movies and stuff?

33:26 - Yeah, I just went to a great movie, yeah, so.

33:28 - Nice, well, as a software person, especially if you do a lot with Linux or macOS, you might not be able to watch the movies too much.

33:35 This one says, "I can't watch movies on my computer.

33:37 "All it does is bash scripts." bash on the scripts of the movie, not plot or run shelf scripts.

33:45 I'm not sure which.

33:47 >> Okay. That's funny. I think.

33:48 >> Sort of. Yeah.

33:49 >> [LAUGHTER]

33:50 >> Somewhat. Anyway, it's what I have for you.

33:53 I have an incredible one that I want to put up, but it's, like, a video-only thing with music and no spoken word.

34:01 I don't think it fits for this format, but you know what, I'll throw it in.

34:04 I'll throw it in. People can also check out the video.

34:06 It's about releasing stuff to production.

34:07 So it's pretty epic.

34:09 I'll put it in the list, but we won't play something that's like 30 seconds long and has nothing but music.

34:15 Cool.

34:17 Nice.

34:17 All right.

34:18 Is that all we got?

34:19 That's all we got.

34:20 It's a wrap.

34:21 It's a wrap.

34:22 Yeah.

34:22 Thanks as always.

34:23 Thank you.

34:24 Later.
