Transcript for Episode #275:
Airspeed velocity of an unladen astropy
00:00 Hello, and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 275, recorded March 15. And I am Brian Okken.
00:12 I'm Michael Kennedy.
00:13 And I'm Emily Morehouse.
00:14 Yay. Thanks for coming. I also want to give a shout out to Microsoft for Startups Founders Hub, and we'll learn more about them later in the show. So welcome, Emily. If people aren't familiar with who you are, who are you?
00:30 Yeah, of course. So I'm Emily. I am the Director of Engineering and one of the co-founders of Cuttlesoft. We are a creative product development agency focused on web, mobile, IoT, and the cloud. I'm also a Python core developer and the PyCon conference chair for this year. Awesome.
00:50 That is awesome. Said another way, you're quite busy.
00:53 Yeah. I also have a 10-month-old, so, you know, not a lot going on in my life.
00:57 Yeah, not a lot of time to binge-watch Netflix. Ten months. So are you pretty busy with PyCon already?
01:05 So interestingly enough, this is kind of the time when it goes into autopilot, in a way. Most things that need to be set in motion are already set in motion. So really, we're working on the fun stuff right now, like speaker and staff gifts and things like that. But otherwise it's pretty smooth sailing, just sitting back and watching COVID numbers and hoping that we don't get another spike before April/May.
01:29 Yeah, fingers crossed. This will be the first in-person PyCon after COVID hit, so hopefully everything goes great. I know people are excited. Yeah. Well, Brian, for the first one, do we have to wait? Or can I talk about this? Oh yes, we'll wait. Well, let's await it. I'm very excited to talk about this one. Actually, this one comes from Fredrik Averpil, who I believe listens to the show a lot. So hello, Fredrik, nice work on this. I was working on the Python Bytes website, of all things, and I wanted to do some stuff with uploading MP3s and having a bunch of automation happen behind the scenes. One of the things I did not too long ago was switch over to an async database. I think we talked about moving to Beanie and some of the cool stuff I'm doing to, like, halfway move to async, but not all the way yet, not till we're quite ready. But as part of that, I'm like, well, all this file stuff is kind of slow, there's a couple of seconds here, is there a way to set the web server free while we're doing this work? And some of that involved calling subprocesses. I thought, well, maybe there's some third-party package, like aiofiles, that would let me run subprocesses asynchronously instead of using the subprocess module. So I did a quick Google search, and I came across Fredrik's article here. And much to my surprise, I don't know if you're aware of this, but the built-in asyncio has async subprocess management already. That's pretty cool. Yeah. Emily, have you played with this any? Yeah,
03:00 no, I actually think I've used this exact blog post, which is super funny. I was recently writing, like, CLI regression tests in pytest, where you're basically testing running two different servers, and I was fighting with subprocess to get it to work. I don't think we were on a new enough version that I could use async await, but I definitely remember referencing this. So
03:23 yeah, yeah, very cool. So you can just say asyncio, the module, dot create_subprocess_exec for just running something. Or if you need to run shell commands, like a cd or an ls type of thing, there's a shell variant. And to follow along and see what's going on, you can set standard out to asyncio.subprocess.PIPE and so on, and you get the results out of it and everything. So you just await creating a subprocess, with shell or exec in it, and then you can await communicating with it, which I think is pretty cool. Not a whole lot more to say, other than: if you're doing stuff with subprocess and you're already moving into the other areas where async and await are very doable, think FastAPI, SQLModel, SQLAlchemy 1.4 or later, where you're already doing a bunch of other async stuff, go ahead and write this in the async way, so it just flows into the rest of what you're doing.
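For listeners who want to see the shape of this, here is a minimal runnable sketch of the built-in asyncio subprocess API being discussed; the child command is just an illustrative `python -c` call.

```python
# A tiny sketch of asyncio's built-in subprocess support.
# sys.executable is used so the child process is simply another Python.
import asyncio
import sys

async def main() -> str:
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('hi from a child process')",
        stdout=asyncio.subprocess.PIPE,  # capture stdout instead of inheriting it
    )
    stdout, _ = await proc.communicate()  # await the process and collect output
    return stdout.decode().strip()

print(asyncio.run(main()))  # hi from a child process
```

There is also `asyncio.create_subprocess_shell` for command strings that need a shell, as mentioned above.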
04:26 That's pretty cool. This is from like 2017? It's an older article.
04:32 yeah, it looks like it. I mean, it's news to me, maybe not news to the world. Like Emily said, she was already working with it previously. But yeah, I think it's great,
04:42 right? Well, with subprocess's communicate, people often shifted over to just run, so I'm hoping there's a run version of that too. So
04:51 yeah, there probably is. Anyway, cool. Indeed. All right. Well, you're gonna explain some stuff to us, Brian. And I see the author of what you're about to talk about out in the audience, actually.
05:02 Oh, that's awesome. That's very cool. Well, this
05:05 is definitely an exciting one. Yeah,
05:07 yeah. So this is typesplainer. And let me explain it to you. It's just this cool thing that popped up last week and we saw it. This is from Arian, sorry, Arian Mollik Wasi. It's a pretty cool name, by the way. So this is this neat little Heroku app, and it's pretty simple. I don't know how simple it is behind the scenes, but it's simple to use. You pop in any sort of Python code that has type hints in it. So for instance, the default example has a callable that takes a str and an int, and a generator. There's a bunch of type hints in here, even, like, more than most people use all the time. And then you hit typesplain, and it will show you what the different type hints mean and translate them into English for you. It's just pretty cool. I like that one of the things Wasi said is that while he was developing this, he was on his way to developing a Visual Studio Code plugin. So if you search for typesplainer as a VS Code plugin, that functionality is available right in your editor as well. This is pretty neat. Yeah, this is
06:37 really cool. This explanation you have there, like, dictionary of list of set of frozenset of... oh my gosh. The description is something like "a dictionary that maps a list of sets of frozensets of integers onto a string." That's way more legible and internalizable than, yeah, how many brackets deep? We were four brackets deep in that type information there.
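As a concrete illustration of a hint that deep, here's a runnable sketch; the names are invented for the example.

```python
# Four brackets deep: a dict of str to list of set of frozenset of int,
# the kind of hint typesplainer renders as plain English.
from typing import Dict, FrozenSet, List, Set

Payload = Dict[str, List[Set[FrozenSet[int]]]]

def count_groups(data: Payload) -> int:
    # Total number of sets across all the lists in the dictionary.
    return sum(len(groups) for groups in data.values())

example: Payload = {"a": [{frozenset({1, 2})}, {frozenset({3})}]}
print(count_groups(example))  # 2
```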
07:00 Yeah. It's interesting, on Twitter with the announcement of it, or we heard about it through Will McGugan, at least I did, some of the comments were like, not that this isn't cool, it was, oh yeah, this is cool, but maybe Python shouldn't be this complicated if you have to explain it. But let's
07:22 Have these people done C++? Let me just ask them.
07:25 I know. You don't have to use types if you don't want to. But there are a lot of places where types are helping out a lot. And if you're running into somebody else's code that has some types on there that you're not quite sure about, throw it in typesplainer and you'll be able to figure it out.
07:43 Absolutely. So yeah.
07:45 And I did actually take a look. I think this is awesome; I absolutely agree. Like, you know, mypy has allowed us to do gradual typing and all that. But a lot of times you do jump into somebody else's code base and you're like, whoa, these are more types than I've ever seen. So being able to convert it really easily is nice. And I did actually take a look at how it works under the hood. There's a really big if statement of, like, serialization, but also, I'm a nerd for anything AST related, and it uses mypy's parser under the hood, which is actually relatively complex for what it needs to handle, based on, you know, different Python versions and whatnot.
08:23 Wow, that's pretty awesome. Yeah, the very first time you were on Talk Python To Me was to talk about the AST, the abstract syntax tree, right?
08:30 Yeah, it was right around my first conference talk, come to think of it. And
08:35 yeah, awesome. Yeah, way back in the day. I think we met in Vancouver, so we met at PyCascades, right?
08:41 I generally think of myself as a smart person, but people that can handle doing AST work in Python, I'm like, oh my gosh, you know? It's over my head.
10:05 It's really good-looking for something right out of the gate. So, yeah, awesome.
10:12 Yeah, I think the architecture of it is really great too. I really like that he embraced sort of building out the tool itself, and then building a CLI and a web interface and a VS Code extension. So I think that is a really great example of how to structure a project like this. Yeah. Nice.
10:31 Yeah, that's awesome. Emily, we'd lost your screen, if you want to share back. While you're working on that, let me just follow up with Sam real quick, who pointed out: be super aware of the limitations of your hardware when you try to write files in async environments. He had a project that ground to a halt because too many workers were trying to run at once. Yes, absolutely. Good point, Sam. That is generally true for any limited resource that you point async things at, right? If you're going async against a database, and a couple of queries will max it out, then hitting it with 100 queries at the same time is not going to go faster. It's only going to get worse as it fights for contention and resources and stuff. And then on this one, on the type explainer, Brian Skinn says: agreed, very nice work. So awesome. All right.
11:21 Yeah, so this is another one of those new-to-me things. Marlene's article just came out, and that's how I actually found out about it. Marlene wrote this really excellent introduction to using Ibis for Python programmers. Ibis itself has been around for, like, seven years or so; it's a project I think was started by Wes McKinney. It's a productivity-centered Python data analysis framework for SQL engines, and you get a ton of different backends. It's going to compile to basically any flavor of SQL database, and then a bunch of more data-science-focused backends. This popped up on my Twitter feed from Marlene, and it's just a really great introduction and also a really interesting sort of application. She went through and wanted to pull some art information about a city she was going to visit, because she likes to experience the culture of a new city. So it just walks through how to get data into it, and then how to interact with it with Ibis. So let me actually switch over to the Ibis documentation. Oh, and this is now just different because it's small. But yeah, I was really interested in it because we have, like, a pseudo-legacy system where we're moving all the migrations out of Django, and we're actually managing it with a tool called Hasura. We're so used to having Django, which is going to use its ORM, and everything is kind of magic from there: you give it a YAML file and you get seed data, right? So we're trying to figure out how to manage the data in a wildly different environment where you have to load it in through the Hasura CLI tool, and you need SQL. And I don't want to write SQL, like, anything I can possibly do to avoid that. So this was a really neat way of getting around needing that modeling. So let's see if I can get... Yeah,
13:27 It also looks like it integrates with Hadoop and other things that maybe aren't directly SQL-compatible, that might need a slightly different query language anyway, right?
13:35 Yeah. And it's super interesting, they have a few different ways that it works. It will directly execute pandas, and then it compiles in a few different ways to, you know, those SQL databases, Dask, BigQuery, like a bunch of different stuff. So it's not necessarily just going to be straight SQL, but it's going to handle that for you, so you're really sort of future-proofing yourself away from that. But yeah, it's just got a ton of really intelligent ways to filter data and interact with data in a really performant way. I'm actually going to go back to Marlene's blog post real quick and do some quick scrolls. It's also one of the most Pythonic tools to integrate with SQL that I've seen. She gets to the point where she has this database table, so you just tell it your table name, and you set the variable, and then you can interact with it as if it's just a dictionary. So you've got your table, and you want to just pull these columns, and you've got it, and it's there. So I thought it was like...
14:39 So you would say something like db.table, quote, 'art', and then you say art equals art bracket, quote, 'artist', and display it, and then boom, you get those back, right? Yeah, that's awesome, as a dictionary I guess, or something like that. Yeah, that's cool.
14:55 So yeah, there's a ton of different things you can do with it. I highly recommend checking out their tutorials; they've got a ton of different options. My favorite one is the geospatial analysis. If you check out their example, they show you how to pull information out of a geospatial database, and then a really quick way of actually mapping out the data. So if you check out these examples, I know it's not going to come through necessarily on audio, but you can pull information out of these land plots and then tell it to graph it, and it gives you a really nice-looking graph with all the data there in, like, a whopping 10 lines of code.
15:37 Oh yeah, generating that picture in 10 lines of code, that's awesome. It makes me think I should be doing more with geospatial stuff. I don't do very much, because I'm always afraid, like, ah, how am I going to graph it? What am I going to do? But there are a lot of cool layers in that graph and everything. That's neat. Yeah, the API reminds me a little bit of PyMongo, actually, where you kind of just say dot and give it the name of things, and it's really kind of dynamic in that sense, and you get dictionaries back and stuff. But it's different databases, right?
16:09 Right, yeah. But I do like that perspective. It really is taking any database, but especially taking a relational database, and giving it more of a document-oriented interface, which is pretty cool.
16:24 Yeah, this is cool. I definitely want to check this out, especially for exploration. It feels like it's really got a lot of advantages for data scientists: they're gonna fire up a notebook and they're like, I just need to start looking at this and playing with it. They don't really want to write queries and then convert them, right?
16:39 Well, it also looks like, as far as I can tell, both in this article and in one of the tutorials on the main web page, there's a wonderful, almost one-to-one relationship between things you can do in SQL and things you can do here. So people already familiar with SQL can transfer over pretty easily. That's pretty neat.
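Ibis itself isn't in the standard library, so as a runnable stand-in here is the raw-SQL side of that near one-to-one mapping, using the stdlib sqlite3 module; the table name and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE art (artist TEXT, title TEXT, year INTEGER)")
conn.execute(
    "INSERT INTO art VALUES ('Hilma af Klint', 'The Ten Largest', 1907)"
)

# Roughly the SQL that an Ibis column-selection expression would compile to:
rows = conn.execute("SELECT artist, title FROM art").fetchall()
print(rows)  # [('Hilma af Klint', 'The Ten Largest')]
```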
17:05 Absolutely. Yeah, this is a nice find. All right, Brian, before we move on, can I take a second to tell you about our sponsor? Yeah, I'm very excited about this; I think it's a big opportunity for people. So let me tell you about Microsoft for Startups Founders Hub. This episode of Python Bytes is brought to you by Microsoft for Startups Founders Hub. Starting a business is hard. By some estimates, over 90% of startups will go out of business in just their first year. With that in mind, Microsoft for Startups set out to understand what startups need to be successful, and to create a digital platform to help them overcome those challenges. Microsoft for Startups Founders Hub was born. Founders Hub provides all founders, at any stage, with free resources to solve their startup challenges. The platform provides technology benefits, access to expert guidance and skilled resources, mentorship and networking connections, and much more. Unlike others in the industry, Microsoft for Startups Founders Hub doesn't require startups to be investor-backed or third-party validated to participate. Founders Hub is truly open to all. So what do you get if you join them? You speed up your development with free access to GitHub and Microsoft cloud computing resources, and the ability to unlock more credits over time. To help your startup innovate, Founders Hub is partnering with innovative companies like OpenAI, a global leader in AI research and development, to provide exclusive benefits and discounts. Through Microsoft for Startups Founders Hub, becoming a founder is no longer about who you know. You'll have access to their mentorship network, giving you a pool of hundreds of mentors across a range of disciplines and areas like idea validation, fundraising, management and coaching, and sales and marketing, as well as specific technical stress points. You'll be able to book a one-on-one meeting with the mentors, many of whom are former founders themselves.
Make your idea a reality today with the critical support you'll get from Founders Hub. To join the program, just visit pythonbytes.fm/foundershub, all one word; the link's in your show notes. Thank you to Microsoft for supporting the show. Yeah, and there's a $150,000 credit people can get. So if you're a new startup, you know, that would have been awesome when I was trying to start up. Now, this next thing I want to tell you about, I think it kind of lives in your wheelhouse as well, in keeping with the theme of the show. This one is recommended by Will McGugan, so thank you, Will, for all the good ideas. I know you're out there scouring the internet for all sorts of cool things to use on Textual and Rich and whatnot. This is one of the ones he said they're going to start testing on, and it has to do with performance. So the topic, or the library, is airspeed velocity, or asv in the pip nomenclature. The idea is basically setting up profiling and performance as a thing that you can measure over the lifetime of your project, rather than a thing you measure when you feel like, oh, it's slow, I need to go figure out why it's slow right now. You do it automatically as you do, you know, CI runs and stuff like that. Probably the best way to see this is to just pick an example. So if you go to the link in the show notes, airspeed velocity, there's a thing that says see examples for astropy, numpy, scipy. I'll pick astropy, and you get all these graphs. Each one of these graphs is the performance of some aspect of astropy over time. How cool is this? Look at that, it's pretty neat. If you hover over it, it shows you the code that runs that scenario.
And, you know, here's a sample, and then they did a huge improvement, like massive refactorings, on this SkyCoord benchmark or whatever this particular test is. It goes along pretty steady-state, and then there's a big drop, and a little spike up, and then another big drop, and then steady-state for a long time. So wouldn't it be cool to have these different views into your system, about how it's performing over time?
20:58 Yeah, so lower is better, right?
21:01 Yeah, I believe lower is better. Think lower is better. You can pull up regressions; you can say, okay, what got worse. For example, this table outputter timing got 35 times slower, so that might want some attention. And it lists the GitHub commit, or really, I suppose it just lists the git commit, which happens to be on GitHub, so you can actually say: this is the code change that made it go 35 times slower. That's neat, I think. One of the other challenges here is, what if you wanted this information, but you're only now learning about this project? You're only now realizing, wouldn't it be great to have these graphs? How do you get those graphs back in history? Will pointed out that you can actually connect it with your Git repository, and it will check out older versions and run them. It'll, like, roll back in time and go across different versions, and different versions of Python, and generate those graphs for you, even if you just pick it up now.
22:07 That's awesome. Any idea if it's restricted to packages, or if you can...
22:11 I think you can apply it to general projects. I don't remember where I saw it; I'd have to pull it back up here. Somehow I've escaped the main part. But yeah, if you look at "using airspeed velocity," you basically come up with a configuration file that says, you know, this particular project with these settings, and then here's the run command, and you come up with one of these test suites. I don't think it has any tie-in to packages per se, because it goes against Git, not against PyPI. So, pretty neat, people can check that out. You can specify which versions of Python to run against, so you can run it against all those old versions, and you can configure how it runs, and so on.
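That configuration file is `asv.conf.json` at the project root. A trimmed-down, hypothetical example might look like this (the project name and paths are made up; the asv docs cover the full set of options):

```json
{
    "version": 1,
    "project": "myproject",
    "repo": ".",
    "branches": ["main"],
    "pythons": ["3.9", "3.10"],
    "benchmark_dir": "benchmarks",
    "env_dir": ".asv/env",
    "results_dir": ".asv/results",
    "html_dir": ".asv/html"
}
```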
22:55 Okay, so since you're defining what's being timed, you can time large things, like a particular workflow through lots of bits of code, things like that.
23:10 Yeah, exactly. So you basically come up with a couple of scenarios of what you would want to do that you're going to run. And here you can see you can benchmark against tags and things like that in Git, for branches.
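A benchmark scenario is just a Python file in the benchmark directory; asv times any function or method whose name starts with `time_`. A minimal, made-up suite might look like:

```python
# benchmarks/benchmarks.py (illustrative): asv times each time_* method,
# running setup() before each measurement, outside the timed region.
class TimeSorting:
    def setup(self):
        # Worst case for sorting: a reverse-sorted list.
        self.data = list(range(10_000, 0, -1))

    def time_sorted(self):
        sorted(self.data)  # cost of sorting a fresh copy

    def time_min(self):
        min(self.data)  # cost of a full linear scan
```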
23:22 Yeah. So Will says he ran it against two years' worth of Rich releases. That's cool.
23:32 And found a performance regression. Nice. I love it: optimizations that made Rich slower. Isn't that the truth? Like, this is gonna make it better... no.
23:42 Yeah. So pretty cool.
23:45 And I have to give a nice shout out to the full embrace of the Monty Python reference. If you go back to that astropy version, in the top left corner it says "airspeed velocity of an unladen..." Oh, yeah, I
23:58 did notice that. That's awesome. It's nice. Yeah, very cool. Well, thanks for sending that over. Well,
24:05 yeah, I've got some projects I'd like to do that on. But speaking of testing things, this one comes from Anthony Shaw. This is perflint. It's a pylint extension to check for performance anti-patterns. And is it Tony, or Anthony?
24:27 Some guy named Anthony Shaw.
24:30 Tony Baloney. He says, oh, here it is: the project is in early beta, it will likely raise many false positives. I'm thinking that might be why he went with an extension to pylint instead of an extension to pyflakes or flake8, because pylint gives lots of false positives. At least in my experience with pylint, it takes some configuration to get happy with it, because it will show you things that maybe you're okay with. Like, I threw pylint against some demo code that I have for teaching people stuff, and I'm using short variable names, like x and y and things like that. One of the restrictions for pylint is that almost everything has to be three characters or longer. And for production code, that's probably fine, and if you have different rules, you can change that. But back to this: I really like the idea of having something look over my shoulder for performance problems. I'm an advocate for not solving performance problems unless you find that there is a performance problem, so don't do premature optimization. However, some things are just kind of slow, and you should get out of the habit of doing them. Like using list() in a for loop: if the thing you're wrapping in list is already an iterable, that's a big performance hit if it's a huge thing, because it turns an iterator or a generator into an entire list. It creates the list, and you don't need to do that. So that's a big one. Anyway, there's a whole bunch of different things it checks for. And I like the idea that, as you're writing code and running your tests, you can run this too, and you can kind of get out of the habit of doing some of these things. So yeah.
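That list-in-a-for-loop point fits in a few lines; the generator below never needs to be materialized into a list.

```python
# The anti-pattern perflint flags: `for x in list(gen)` builds the whole
# list in memory first. Iterating the generator directly streams the values.
gen = (x * x for x in range(5))

total = 0
for sq in gen:  # no list() wrapper needed; the iterable is consumed lazily
    total += sq

print(total)  # 30
```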
26:28 These are nice catches, just some of the things you might think you need to do when you're not super experienced, or whatever, right? Yeah,
26:36 like one of the things here is W8201, which is loop-invariant-statement. And this one is kind of interesting: there's an example of taking the length of something within a loop, and if that never changes within the loop, don't do the length in the loop, take it out of the loop. Yeah, there are a few examples like that you might not notice right away, especially if you've taken some linear code and added it inside of a loop and indented it over. Now it's in a loop, and you might forget that some of the stuff inside maybe shouldn't be in the loop. So yeah,
27:00 In this example here, you're doing a loop 10,000 times, and every time you're asking for the length of this thing that is defined outside the loop and is unchanging. So you're basically doing it 9,999 times more than necessary.
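A sketch of that pattern and the hoisted version (the numbers are invented for illustration):

```python
# Loop-invariant statement: len(items) never changes inside the loop,
# so it can be computed once instead of on every one of 10,000 iterations.
items = list(range(100))

# Before: the invariant len() call sits inside the loop.
hits = 0
for i in range(10_000):
    if i % len(items) == 0:
        hits += 1

# After: hoist the invariant out of the loop.
n = len(items)
hits_hoisted = 0
for i in range(10_000):
    if i % n == 0:
        hits_hoisted += 1

print(hits == hits_hoisted)  # True
```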
27:33 Yep. So, kind of fun. I'm gonna give it a shot, see what I think as I'm using it. So
27:39 yeah, definitely. Emily, do you use some of these linters or anything like this to give you warnings?
27:46 Yeah. I mean, I think we mostly use flake8. But I'm definitely curious to try this out, too. I can see how this would be tricky, to get really consistent errors for these things. So props to Tony Baloney.
28:01 Well done. Yeah. This is exciting. I'm glad to see this coming out. I knew he was talking about it, but I hadn't actually seen anything on GitHub yet or anything. So yeah, very well done. Yeah, it's
28:11 cool. I like stuff like this that really takes you to that next level. This is something that somebody would hopefully notice in, like, a code review, but if you can automate it... yeah,
28:20 I think that's a great thing. A lot of these things would have to be a discussion during a code review if they couldn't be automated. And then you could save the code review for meaningful stuff like, yeah, security, or, you know, how are we going to version this over time, it's going to be tricky, or, are you really storing pickles in the database? Stuff like that. Yeah. All right.
28:41 PEP 594 has been accepted, which is super exciting. So, PEP 594: if you don't know what that is, it's a Python Enhancement Proposal, a proposed change to the Python language itself. This one is removing dead batteries from the standard library. It was written by Christian Heimes and Brett Cannon, and I think I saw a tweet that it had been accepted. So this is just really exciting for anyone who's followed along with any of this discussion. It's been a long time coming. I think there was a major discussion about it at PyCon US 2019, and it must have been shortly after that there was a PEP, but since then it's been off and on in discussion, finally figuring out what is going to be the thing that really works for everyone and for the future of the language. So this is going to be targeting version 3.11. Just a quick recap of the release plan for that: development on 3.11 started last May, so May 2021, and the final release for 3.11 is not until October 2022. And even then, this is just going to be deprecating modules. So it'll be deprecations in 3.11 and 3.12, and it's not until 3.13 that they will actually be fully removed from the language itself. So you can kind of get a glimpse into how long a process this is, and how big a decision it was to get everyone on board. And it doesn't look
30:21 at all like anything was rushed. When I went through and read this, it was like: here are the things we think we can take out, and here's why. There's a table in there that shows third-party alternatives to certain things. Mostly... yeah, that's the one. So there are certain things where it's like, you know, that probably isn't needed, or it's really superseded. So there's pipes, but then we also have subprocess, which will take care of that, and that's a built-in one. And then asyncore, just use asyncio. But then there are other ones.
30:57 There's a bunch in here I've never even heard of. Yeah, that's the
31:00 thing, right? There's one called crypt, and it's like, look, just use passlib, or argon2, or hashlib, or anything that is better and modern. You know, this was from 1994. Cryptography is not exactly the same as it was then. So maybe it makes sense to take it out, right? Yeah.
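As one example of the kind of stdlib replacement PEP 594's table points at, hashlib covers modern digests (for password storage specifically you'd reach for something like `hashlib.pbkdf2_hmac` or a third-party library such as passlib):

```python
# A modern stdlib alternative to the deprecated crypt module's hashing.
import hashlib

digest = hashlib.sha256(b"hello").hexdigest()
print(digest)  # a 64-hex-character SHA-256 digest
```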
31:20 Yeah. Yeah, I think it's a thin line to walk, right? Some people are using these, and some of these modules maybe didn't have a lot of maintenance over time, but that also meant there wasn't somebody watching them for bugs or security vulnerabilities or anything like that. So there's the balance of: is it worth pulling it out if somebody was relying on it, versus the maintenance cost, or the lack of maintenance, that could be
31:47 a liability. There's a CGI
31:48 library. That's something else that takes you back, from '95. That's how I started, but not with Python. I was doing CGI with Perl way back in '95. So
32:00 yeah, that does go back. It also talks about whether each bit of code has a maintainer and whether that maintainer is active. For example, CGI has no maintainer; like, no one wants that. One of the things that's interesting here is you could take this code and still use it: you could vendor it into your code, right? Now you're the maintainer of it; it's all yours, you can have that. You could just go to CPython on GitHub, get that module, copy it over, and now you still have that functionality, you're just taking it on. I expect maybe one or two of these might end up in their own GitHub repository as a package that is maintained. They did talk about that, right, Emily? About that being one of the possible paths they decided against?
32:43 Yeah, yeah. That was the big conversation back at the Language Summit in 2019: you know, could we get libraries on a more independent release schedule and pull them out of the standard library entirely, and just have them be their own standalone thing. As I briefly outlined with the release schedule for 3.11, you can see that it's on a very long timeframe. So I definitely agree. I think for some of these that people are still using, people are either going to go in there and grab the code, and hopefully grab the license with it as well, or they're just going to become, you know, modules that enough people care about that they live on their own on PyPI.
33:28 I don't see anything here that I would miss. But that doesn't mean that there aren't people using them, you know. So on
33:33 the good side, I mean, it totally makes sense to remove things, especially stuff that's not getting maintained, where there's no maintainer, and that possibly has bugs in it now that nobody knows about. But what are some of the other good aspects? Is it going to make the Python standard install smaller? I mean, does anybody know the numbers on that?
34:00 Okay, I don't know the numbers on that. But that is
I would say the biggest change is maintenance. Just no one has to worry about whether there's a bug in CGI that someone discovers, because it's just not there. Yeah.
34:13 Yeah. Yeah. And especially with CPython, there's often a very high barrier to entry. So like, if a CGI bug was even filed by somebody, where would you start with that sort of thing? Right. And then the other thing, too, is maybe somebody else goes through the effort to fix it, but it always takes a core dev to review that PR and get it merged in. And so a lot of times, if you don't have an owner of a module, it's just not going to get a lot of attention. So as a whole, it should hopefully have an impact on how we use core developer time. Because right now, I think we're at over 1,000 PRs open on GitHub. So a lot of times, you know, it's not just core developers writing code; a lot of times you can have even more of an impact being that person that tries to review PRs and keep that number down.
35:07 Ryan in the audience points out that the comment threads on discuss.python.org and elsewhere are really interesting if you want to see examples of these old modules still in use. Yeah,
35:17 yeah, I've got a couple of them here. I think. I think I linked them in the show notes. But if they're not there, I'll make sure it's in there.
35:22 Yeah, you've got a link to Brett's discussion there. That's cool. I think this is good. I think this is good. And quick shout out to a new theme, right?
35:30 Yeah. So it's a brand new PEP site, at peps.python.org. And there's this really lovely theme on it. It's really clean and modern. You've got a nice dark theme here as well.
35:41 Yeah, I noticed the dark theme. That was cool. And I think it even auto-adapts to the time of day, which is great, right? Is that it for all of our main items? I think
it is. Do you have anything extra for us?
Would it surprise you if I said no? Yeah, well, usually I always have like 10 extra things, but no, I don't have anything extra this week.
35:58 Oh, really? Yeah. Nice. Nice. Okay. How about you, Emily?
36:02 Cool. Yeah, I've got a couple of extra things. So as I was prepping for this, I think it was just the most recent episode before this one, there was a blog post that I think Brian shared on like a better Git flow, that basically was saying: commit all your stuff, reset everything, and then re-commit everything once you're ready to make a clean PR. And so I wanted to share this as well. This is one of my favorite tools that I learned about probably a few months ago. Again, it's not a new thing, but it was new to me. So you can do auto-squashing of Git commits when you're interactive rebasing. So essentially, if you've got a ton of different commits, and you realize, oh, I had a commit for styling all my new stuff a few commits back, but I want to make one more change, instead of needing to, you know, rebase immediately or remember to stage it in a certain way in the future, you can actually go ahead and just commit one more time. And then you flag that commit that you're making with the fixup flag, so just dash dash fixup, and then you tell it the commit that you're wanting to amend. So you can just keep working like that, making your fixup commits. And then the only thing that you do right before your PR is tell it to rebase with auto-squashing. So once you do that interactive rebase with auto-squash, it's going to find all those fixup commits. And you know, when you do an interactive rebase, you often have to move commits around and tell it to squash into the previous commit; you've got to get it in the right order. This handles all that for you. Anything that's applied with a fixup, it finds that commit ID and auto-squashes it back in. So you get a really, really clean history without having to redo all of your commit work.
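A minimal sketch of the workflow Emily describes. The scratch `demo` repo, file names, and commit messages are made up for illustration; `GIT_SEQUENCE_EDITOR=true` just accepts the generated rebase todo list so the example runs non-interactively.

```shell
#!/bin/sh
set -e
# Scratch repo to demonstrate git commit --fixup + git rebase --autosquash.
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name Demo

echo "feature" > app.txt
git add app.txt && git commit -qm "Add feature"

echo "styled" >> app.txt
git add app.txt && git commit -qm "Fix styling"

# Later: one more tweak that really belongs in "Fix styling".
echo "more styling" >> app.txt
git add app.txt
# --fixup records a "fixup! Fix styling" commit targeting that commit
# (HEAD here; in a longer history you'd name any earlier commit).
git commit -q --fixup HEAD

# Right before the PR: one interactive rebase with --autosquash reorders
# and squashes every fixup commit into its target automatically.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root

git log --oneline   # clean history: "Fix styling", "Add feature" only
cd ..
```

The nice part is that you never have to hand-edit the rebase todo list: the `fixup!` prefix in the commit message is what `--autosquash` keys off.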
37:56 Yeah, that's really nice. And this looks built into Git. Yeah. I've never heard of auto squashing. I've definitely never used it, but it looks really useful. Yeah. Thank you. Yeah. All right. What's your next one?
38:07 Yeah. Yeah. And then a couple other cool ones. Um, there was a tweet from Dustin Ingram about an award that the Python Software Foundation actually received. It's from the Annie Awards, which is, you know, the animation version of the Academy Awards sort of thing. And it was for Python's use in animation. And so I think this is just super cool. It's one of those applications that you don't necessarily think about for Python all the time; I don't think it gets talked about enough. I actually tried to find the talk Paul Hildebrandt gave at PyCon Montreal, which I think was back before we were recording these. So if you ever see Paul at a conference, you've got to ask him about, you know, how Python is used in animation. And so yeah, that's
38:59 really neat. So exciting. I would have never expected that. But that's great. And congrats, Guido for getting the award.
39:06 And two more quick ones. The PSF spring fundraiser launched yesterday. They're having a ton of fun with it, and it launched, fittingly, on Pi Day in the United States. So if you donate some sort of contribution that is related to the number pi, you get like a free swag bag. So just a fun twist on
Yeah, you can donate $3.14 or $31.41 or $314.16, and yeah, pi goes pretty far out, if I remember; there are a lot of digits in there. So yeah, just keep going.
39:43 Yeah, whatever your bank account will allow.
39:47 Exactly. Alright, anything else you want to throw out?
Um, yeah, just one last quick one, a small plug for us. Cuddlesoft is hiring. We have a bunch of different positions open, but we're especially always looking for Python engineers. We're a small team, about eight people right now, with a predominantly female engineering team, and just, like, the pride of what I have done in the last three years is building this team. So if you're looking for someplace that is always innovating, always focused on really high-quality, tested code, but you want to work in a small team environment, get hands-on with clients, get hands-on with product. Yeah.
40:30 Cuddlesoft looks really cool. You seem to be doing a bunch of different small, fun projects, instead of just getting stuck in one huge legacy codebase. So if you're looking to kind of bounce around from project to project and learn a lot, I think that'd be a good place, right? Yeah. All right. Well, I have two jokes for us, even though I have no extras, so I'm making up for it there,
40:50 I guess. Nice.
40:52 So Aaron Patterson said: I heard Microsoft is trying to change the file separator in Windows, but it received tons of backslash from the community. I'm pumped. That's pretty funny, right?
41:04 The forward slash works fine in Windows, people. Just... it actually is, it actually
does. It totally does. And following along there, oh, Emily, I think this is the perfect follow-on for you as well. Do you ever look at people's GitHub profiles when they apply? Like, they say, yeah, right, of course. Not, right? So this person here, you know, if you go to your GitHub profile, it will show you your public activity over time. And it'll say, like, on this day in September, on Monday, you had this much work, and then on Tuesday that much, and it'll color it in different shades of green. Yeah. So if you check out the link here, we have a GitHub activity graph for a year that spells out "please hire me" with, like, the exact amount of commits on just the right days. And I think that's,
41:52 I think there's some history manipulation going on here. But
41:55 probably some auto squashing, I don't know.
I mean, hey, I would look at that and think that they had some decent... Yeah, exactly.
It does mean that you're probably not doing, like, normal git work, on one hand. But on the other, like, I'd have to think for a while to figure out how to get it to draw that out. So that's pretty cool, too.
That's one of the main reasons why I switched my blog to Hugo, so that blog posts count as git commits.
42:25 Exactly. Double Dip. Yeah. Yeah. Nice. Well, that's it. That's what I brought for the jokes. Nice.
42:33 Well, thanks, everybody, for showing up. Thanks, Emily, for showing up here and also for the walrus operator. Love it. Yeah. And we'll see everybody next week.