Transcript #351: A Python Empire (or MPIRE?)
00:00 Hello and welcome to Python Bytes where we deliver Python news and headlines directly to your earbuds.
00:05 This is episode 351, recorded September 5th, and I am Brian Okken.
00:10 And I'm Michael Kennedy.
00:11 And this episode is also sponsored by Sentry. Thank you, Sentry.
00:15 And if you want to reach any of us or the show, either of us or the show, we're @mkennedy, @brianokken and @pythonbytes, all at fosstodon.org.
00:25 And if you're listening later and you'd like to join the show live sometime, you can join us on YouTube at pythonbytes.fm/live.
00:36 And we'd love to have you here.
00:37 Indeed.
00:38 What do you got for us first, Michael?
00:40 Let's talk about multiprocessing.
00:42 MPIRE, not Pyre, but MPIRE, right?
00:46 Pyre is the type checker from Meta.
00:48 MPIRE is something entirely different, all about multiprocessing.
00:53 It's a Python package for easy multiprocessing that's faster than the built-in multiprocessing module, has a similar API, and has really good error reporting, plus a bunch of other kinds of reporting, like how well did that session go, how well did you utilize the multiprocessing capabilities of your machine, and so on.
01:13 So yeah, let's dig into it.
01:15 So the acronym behind the name is MultiProcessing Is Really Easy, which is not what most people say about multiprocessing, right?
01:22 - No.
01:23 - But it's a package that's faster than multiprocessing in most cases, has more features, and is generally more user-friendly than the default multiprocessing package.
01:34 It has APIs like multiprocessing.Pool, but it also has the benefits of things like copy-on-write shared objects.
01:42 We're gonna come back to that later as well.
01:45 But also the ability to have init and exit functions, setup and teardown for your workers, some extra worker state you can use, and so on.
01:54 So pretty cool.
01:55 It has a progress bar.
01:57 It has tqdm progress built right into it, across the multiple processes.
02:02 So you can say things like, here is some work I want you to do.
02:05 There are a hundred items; split that across five cores.
02:09 And as those different processes complete the work for individual elements, give me a unified progress bar, which is pretty awesome, right?
02:18 Yeah.
02:19 Yeah. Very cool.
02:20 It's got a progress dashboard.
02:22 Actually, I have no idea what that is.
02:23 It has worker insights that you can ask for when it's done, like, how efficient was that multiprocessing run? It has graceful and user-friendly exception handling, and it has timeouts.
02:34 So you can say, I would like the execution of the work to not take more than three seconds.
02:41 And actually, you can even handle the case where the worker process itself takes 10 seconds or more to exit.
02:48 Maybe there's something happening over there, like a hung connection on a database or some network thing, who knows. You can actually set a different timeout for the process, which is pretty cool. It has automatic chunking, so instead of saying, I have 100 things, let's hand them off to workers individually, one at a time, it can break them up into bigger blocks, including NumPy arrays, which is pretty cool. And you can set the maximum number of tasks that are allowed to run at any given moment.
03:18 I guess you can set the number of workers, but also, if it does this chunking, you can say how many things can run at once to avoid memory problems.
03:24 You could even say, I want to use five processes, but after 10 bits of work on any given process, give me a new worker and shut the old one down, in case there's leaky memory or other things along those lines.
03:39 You can run the pool's workers as daemons through a daemon option.
03:43 A whole bunch of stuff. It uses dill to serialize across the processes, which is cool because it gives you more exotic serialization options for, say, objects that are not picklable: lambdas, functions, and other things in IPython and Jupyter notebooks. So all that's pretty awesome, right?
04:03 >> Yeah.
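Pulling those knobs together, here's a rough sketch of what they look like as keyword arguments. The parameter names here (use_dill, daemon, chunk_size, max_tasks_active, worker_lifespan, task_timeout) are recalled from MPIRE's docs rather than taken from the episode, so treat them as assumptions and verify against the current API:

```python
from mpire import WorkerPool

def work(item):
    return item * item  # stand-in for real CPU-bound work

if __name__ == "__main__":
    # use_dill switches serialization to dill for non-picklable objects;
    # daemon controls whether workers run as daemon processes
    with WorkerPool(n_jobs=5, use_dill=True, daemon=True) as pool:
        results = pool.map(
            work,
            range(1000),
            chunk_size=50,         # hand out work in blocks, not one item at a time
            max_tasks_active=100,  # cap in-flight tasks to avoid memory problems
            worker_lifespan=10,    # restart a worker after 10 tasks (leaky-memory guard)
            task_timeout=3,        # a single task may not run longer than 3 seconds
        )
```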
04:04 >> Yeah. So the API is super, super simple: from mpire import WorkerPool; with WorkerPool, n_jobs equals 5, as pool; pool.map, some function, some data; go.
04:14 So this n_jobs here tells it how many processes to run, basically.
04:18 For the progress bar, you just set progress_bar equals true.
04:22 That's not too bad.
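A minimal sketch of that call, with made-up function and data names:

```python
from mpire import WorkerPool

def work(x):
    return x * x  # some function doing the actual work

if __name__ == "__main__":
    # n_jobs is the number of worker processes;
    # progress_bar=True gives the unified progress bar across processes
    with WorkerPool(n_jobs=5) as pool:
        results = pool.map(work, range(100), progress_bar=True)
```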
04:23 Another thing that's cool is you can have shared objects.
04:26 So you can have some shared data that's passed across without...
04:32 Basically using shared memory, I think, is how it works.
04:35 So it's more efficient, instead of trying to pickle the payload across.
04:38 I think they have to be read-only or something.
04:39 There's a whole bunch about it.
04:40 >> Oh, interesting. But you pass it into the worker pool.
04:43 >> Yeah. You say worker pool.
04:45 These things, I want you to set them up in a way to be shared.
04:49 I think, like I said, in a read-only way across all the processes instead of trying to copy them over.
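Roughly how that looks, if I'm reading MPIRE's docs right: you hand shared_objects to the pool, and it shows up as the first argument of the worker function. Treat the exact calling convention as an assumption:

```python
from mpire import WorkerPool

# A hypothetical large, read-only structure we don't want pickled per task
LOOKUP = {i: i * i for i in range(1_000_000)}

def work(shared, key):
    # shared is the shared_objects value, passed in by MPIRE
    return shared[key]

if __name__ == "__main__":
    with WorkerPool(n_jobs=4, shared_objects=LOOKUP) as pool:
        results = pool.map(work, range(100))
```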
04:56 You have a setup and a teardown thing that you can do to prepare the worker when it gets started and clean up when it's done.
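As a hedged sketch, those hooks and the insights look something like this; worker_init, worker_exit, enable_insights, and get_insights are the names I remember from MPIRE, so double-check them:

```python
from mpire import WorkerPool

def init():
    # Runs once per worker before it takes any tasks,
    # e.g. open a database connection
    pass

def teardown():
    # Runs once per worker as it shuts down
    pass

def work(x):
    return x + 1

if __name__ == "__main__":
    with WorkerPool(n_jobs=4, enable_insights=True) as pool:
        pool.map(work, range(100), worker_init=init, worker_exit=teardown)
        print(pool.get_insights())  # per-worker timing and efficiency stats
```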
05:04 You can ask it for the insights, like I said. And then the benchmarks: it shows it's significantly faster, not just compared against multiprocessing; they say, here's how you do it.
05:16 Here's what happens if you do it in a serial way.
05:17 Here's what multiprocessing and ProcessPoolExecutor look like.
05:21 But it also compares against Joblib, Dask, and Ray.
05:24 And it's pretty much hanging in there with the best of them, isn't it?
05:28 Yeah.
05:29 It is a titch faster than Ray everywhere.
05:32 Yeah.
05:32 Just, yeah.
05:33 Just, just a bit.
05:34 One other thing, I don't remember where it was in this huge, long list of things.
05:40 But you can also pin the workers to CPU cores, which can be really valuable when you're thinking about taking advantage of the L1 and L2 CPU caches.
05:52 So if your processes are bouncing around back and forth, one's working with some data, then it switches to another core.
05:59 And then it has to pull the data into that core's cache again from main memory, which is way slower than the cache.
06:07 And that slows it down, and then they switch back, and they keep running.
06:09 So you can say, pin these workers to these CPUs, and you've got a better chance of them not rebuilding their caches all the time.
06:17 So that's kind of cool.
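The pinning itself is, if I recall MPIRE's API correctly, just one more constructor argument; cpu_ids is an assumption to verify:

```python
from mpire import WorkerPool

def work(x):
    return x * 2

if __name__ == "__main__":
    # Pin each of the four workers to a specific core so its
    # working set stays warm in that core's cache
    with WorkerPool(n_jobs=4, cpu_ids=[0, 1, 2, 3]) as pool:
        results = pool.map(work, range(100))
```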
06:18 So there's just a bunch of neat little features in here.
06:20 If you're already using multiprocessing, you might check this out.
06:24 If you care about performance for real, you know.
06:27 - Why are you using multiprocessing if you don't care about performance, though?
06:32 - Well, I mean, you're looking to pull out the final little bits of performance, I suppose.
06:36 Yeah, yeah, yeah.
06:37 Right, like these benchmarks are cool, but they're doing computation on the workers, right?
06:42 Whereas a lot of what you're doing is like talking to queues and talking to networks and talking to databases.
06:48 Like, it doesn't matter what framework you used to do that, as long as you're doing something parallel, right?
06:52 - Yeah, well, yeah, well, I don't know.
06:54 That's why you have to do your own benchmarks.
06:56 - Yeah, for sure.
06:57 And then there's that article over on Medium by the creator as well, that gives you a whole lot of background on all this stuff.
07:03 Oh, neat.
07:04 Nice.
07:05 Yeah, this is quite a long article, and I think it's actually quite relevant.
07:09 Like for example, it's got screenshots where it shows, you know, if you use something like, let me read really quickly, Ray or Joblib, and you get some kind of exception, it just says exception occurred.
07:20 Yikes.
07:21 Whereas with this one, with Empire, you get things like, here's the call stack that goes back a little more correctly to the right function.
07:28 And here's the arguments that were passed over.
07:31 Oh, interesting.
07:32 So when it crashes, you know, 'cause you have like five processes, potentially all doing stuff.
07:37 One of them crashed, like what data did they have?
07:39 I don't know.
07:40 It's parallel.
07:41 It's hard.
07:41 Right.
07:42 So having the arguments, like, these are the ones that caused the error, is pretty excellent.
07:46 Yeah.
07:47 Cool.
07:47 Anyway, MPIRE.
07:49 That's what I got for number one.
07:50 All right, cool.
07:53 Well, I've got something else that starts with M.
07:56 MOPUp.
07:59 So MOPUp is something that I learned about from an article by Glyph.
08:05 So let me jump to the article first.
08:07 So Glyph wrote an article saying, get your Mac Python from python.org.
08:12 That's what I already do.
08:13 I've tried all the other stuff, and I just like just the downloader from python.org.
08:19 So this article talks about reasons why that's probably what you want.
08:24 And if you're writing a tutorial, it's probably what your users need to do too, if they're using a Mac.
08:32 I won't go through all the details, but he goes through reasons why you probably want this one and not the others. Homebrew? You can brew install your Python, but he doesn't recommend it; same with pyenv, and you can read why.
08:48 I've tried it, it like messes up other stuff for me.
08:52 So I like the downloader from python.org.
08:54 But one of the things that I don't like is that if, like if I had Python 3.11.4 installed and now Python 3.11.5 is out, how do I get that on my computer?
09:05 Do I just reinstall it?
09:06 Yes, you can.
09:07 But Glyph made a new thing called MOPUp.
09:12 So with MOPUp, you just pip install mopup, and it's like the only thing I install on my global Python versions.
09:21 Like for 3.11: I update pip, install this, and that's it.
09:26 Everything else goes into a virtual environment.
09:28 - Or pipx install this. - Exactly.
09:33 But MOPUp, what's the usage?
09:35 So I just tried it this morning.
09:36 I didn't pass it any flags. I just installed it and ran it.
09:40 And it updated me from Python 3.11.4 to Python 3.11.5 without me having to re-download anything other than this.
09:49 So I'm going to set up something that goes through.
09:52 I've got a lot of versions on my computer.
09:54 I've got, I think, well, I've got 3.7 through 3.12 installed.
09:58 And I want all of them to be on the latest bug fix release.
10:03 So I'm probably going to use Brett Cannon's Python Launcher, py, on my Mac to go through each of the versions and run MOPUp on all of them to update them.
10:17 So that's what I'd like to do.
10:19 Anyway, it's cool.
10:20 I'm really excited about this, because this was like the one hole in using the python.org installer: how do you update it?
10:29 So.
10:30 Nice.
10:31 Yep.
10:31 Interesting.
10:32 Interesting.
10:32 I gotta admit, I'm still a brew install python3 sort of person.
10:37 Okay.
10:38 And the drawback, the main drawback that Glyph makes an argument for, which is valid, is that you don't necessarily control the version of Python that you get.
10:46 Because if you brew install, I don't know, some YouTube downloader app or whatever rando thing, it might say, well, I need Python 3.12 and you only have 3.8.
10:59 Right.
10:59 And it'll auto-upgrade on you without you knowing. But I'm always running the absolute latest Python anyway.
11:05 And so, you know, when those other packages say greater than 3.10, I don't care, I already have greater than 3.10.
11:11 And so I don't know, that's the world I'm living in now, but that's, that's okay for me.
11:15 Oh, okay.
11:16 So yeah, I'm a package maintainer, so I have multiple versions on my box. A lot of people like pyenv for that reason, but I don't. Anyway, I've had trouble with pyenv too, especially around the Apple Silicon Rosetta compiler mismatch.
11:36 Like, it just wouldn't install for me.
11:38 And so, yeah, I think the python.org installer is a good recommendation.
11:44 - Okay, cool.
11:45 - Yep, yep.
11:45 All right, before we move on to our next topic, Brian.
11:50 - Well, I'd like to thank Sentry for sponsoring this episode of Python Bytes.
11:55 You know Sentry for their error tracking service, but did you know that you can take it all the way through your multi-tier and distributed app with their distributed tracing feature?
12:04 How cool is that?
12:05 Distributed tracing is a debugging technique that involves tracking a request through your system, starting from the very beginning, like the user action, all the way to the back-end database and third-party services. This can help you identify if the cause of an error in one project is due to an error in another project. That's very useful.
12:27 Every system can benefit from distributed tracing, but it is especially useful for microservices.
12:33 In a microservice architecture, logs won't give you the full picture.
12:37 So you can't debug every request in full by reading the logs, but distributed tracing with a platform like Sentry can give you a visual overview of which services were called during the execution of certain requests.
12:50 Aside from debugging and visualizing architecture, distributed tracing also helps you identify performance bottlenecks.
12:57 Through a visual like a Gantt chart, you can see if a particular span in your stack took longer than expected and how it could be causing slowdowns in other parts of your app.
13:07 Learn more and see examples in the tracing section of their docs at docs.sentry.io.
13:14 To take advantage of all the features of the Sentry platform, create your free account.
13:18 For Python Bytes listeners, be sure to use code pythonbytes, all one word, to activate a free month of their premium paid services.
13:28 Get started today at pythonbytes.fm/sentry.
13:31 Thank you Sentry for supporting Python Bytes.
13:33 Indeed, thank you Sentry.
13:35 And I want to bring it back to something I gave a shout-out to before: multithreading and Meta.
13:45 I talked about both of those, and I want to cover this article posted on Engineering at Meta, which is on the facebook.com domain, actually, not the meta domain, but whatever. Engineering at Meta, because it's really about Instagram anyway.
14:00 And it talks about this new thing called immortal objects.
14:04 And Brian, would you want to live forever?
14:05 Like a vampire?
14:06 No.
14:07 Me either.
14:08 Definitely, definitely not.
14:09 For a while.
14:10 I mean, I could, I could take a few more years, but not infinity.
14:14 But Python objects, they can benefit from this infinity.
14:17 And so, I want to go through this new pep, pep, six, eight, three, which is accepted, accepted in three 12.
14:27 So that's pretty exciting.
14:29 This is part of the Cinder performance work that's coming out of the Meta team.
14:33 And I want to look at it, not at the original, but over on omnivore.app.
14:39 This is my new favorite way for research because I can put highlights and notes.
14:43 So Instagram has introduced immortal objects in PEP 683.
14:48 Now, Python objects can bypass reference counting checks and live forever through the entire execution of the runtime, at least from when they're created to the end.
14:58 So, you know, traditionally, typically, I guess I should say, Python objects have a whole bunch of information about them on the object that is allocated on the heap.
15:08 This even includes numbers, and those things change over time.
15:13 So if I have x equals a string, and then I say y equals x, it goes up to that object and does a plus-equals-one on the reference count.
15:22 And when y goes away, then that decrements it, right?
15:26 When that number hits zero, it gets cleaned up.
15:28 There's also stuff on the object for cycles and garbage collection.
15:32 So there's a lot of stuff that's happening there, right?
15:34 And so what they're doing is they're running a lot of Django for Instagram, which is pretty awesome.
15:41 However, what they're trying to take advantage of is the fact that there's a lot of similar data, similar memory usage when I load up Python.
15:50 So if I type python in the terminal, and then I open up a new terminal and type python, it's gone through exactly the same startup process, right?
15:58 So it's loaded the same shared libraries or DLLs, it's created its flyweight cache of small integers, roughly -5 to 256, that it's gonna reuse, so when you say the number seven, it doesn't create a new seven, you always have the seven that was created at startup; exceptions, those kinds of things, right?
16:17 Well, if you have a web server that's got 10 or 20 or 100 worker processes that all went through the same startup for a Python app, you would want to have things like that number seven or some exception type or whatever modules, right? Core modules that are loaded. You would like to have one copy of those in memory on Linux and then have a copy on write thing for the stuff that actually changes. But those other pieces, you want them to stay the same.
16:43 Yeah. Yeah. Like there's no point in having a different representation of the number four for every process, if there's some way to share that memory that was created at startup.
16:53 - And we don't need reference counts updated and all that stuff, 'cause it's--
16:56 - Exactly, exactly.
16:57 So what they found was, while many of their Python objects are practically or effectively immutable, they didn't actually, over time, behave that way.
17:08 So they have graphs of private memory and shared memory.
17:11 And what you would hope is that the shared memory stays pretty stable over time, or maybe even grows.
17:16 Maybe you're doing new stuff that's like pulled in similar things, but that's not what happens in practice.
17:21 In current Python,
17:22 the shared memory goes down and down and down, because even though that object, or let's say that flyweight number, got created to be shared, its reference count still changes.
17:33 So throughout the life of one app, it might go, well, four was used 300 times here and 280 times over there.
17:40 So those are not the same four anymore,
17:41 'cause in the reference count, they have 281 and 301 or whatever it is.
17:46 Right.
17:46 And so that shared memory is falling because the garbage collector, and just interacting with the ref count, is in very small and meaningless ways changing pieces of the shared memory, making them fall out of the shared state.
18:02 So this whole pep, this whole idea is we're gonna make those types of things so that their reference count can't change, their GC structures can't change, they cannot be changed.
18:12 They're just always set to some magic number that says, this thing's reference count never changes, right?
18:19 So if you look at like the object header, it's got a GC header, reference count, object type, and then the actual data.
18:25 Well, for the ones that don't change, now these new ones can be set.
18:29 So even their GC header and the reference counts don't change.
18:32 Cool, right?
18:33 - Yeah.
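You can actually observe this from Python in 3.12. A small sketch using sys.getrefcount; the exact sentinel value is a CPython implementation detail, so don't rely on it:

```python
import sys

x = object()
y = x
# An ordinary object's refcount moves as references come and go
print(sys.getrefcount(x))     # small number, changes with y

# On CPython 3.12+, immortal objects report a fixed sentinel refcount
print(sys.getrefcount(None))  # huge constant, e.g. 4294967295 on 64-bit
print(sys.getrefcount(42))    # small ints are immortal too
```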
18:34 - And what that means is if you come down here, it says there's some challenges.
18:39 First, they had to make sure that applications wouldn't crash if some objects suddenly had different ref counts.
18:45 Second, it changes the core memory representation of a Python object, which, if you work at the C level, directly with the memory and pointers to the object, can be tricky.
18:56 And finally, the core implementation relies on adding explicit checks to the increment and decrement operations, incref and decref, which are two of the hottest bits of code in all of Python, as in the most performance-critical.
19:11 So if you make a change there that makes all of Python slower, that's bad.
19:16 And they did make Python slower, but only 2%.
19:18 And they believe the benefit they get is actually worth it, 'cause for heavy workloads you actually get better performance.
19:26 So it's a trade-off, but there it is.
19:28 - I was reading this article, and one of the things that confused me was: is this just something internal to Python that's gonna happen under the hood, or do I need to change my syntax in any way?
19:40 - Yes, I was looking for that as well.
19:43 I went and read the PEP, and maybe I missed something, but everything I got from reading it was at the C layer; you know, here's the make-this-immortal call that you make in the C API.
20:00 So what I would like to see is something where you set a decorator, kind of like a dataclass:
20:04 like, this thing is outside of garbage collection,
20:07 or, I don't know, some way to say in Python, this thing is immortal. For now, at least.
20:15 Yeah.
20:15 But I didn't see it either.
20:16 It also would be good, even if we could just do something like a constant; we could set up some constants in your system that are immortal or something.
20:28 Yep.
20:28 Okay.
20:31 Yeah.
20:31 Like the dictionary of a module that loads up: if you're not dynamically changing it, which you almost never do unless you're mocking something out, just let it be, you know, tell it it's the same.
20:42 I don't know if I referenced that.
20:42 - Yeah, I'd be curious to see in this, as they're implementing it, it does seem like parts of the system are gonna go a little bit slower, but also parts of it are gonna go faster 'cause you don't have to do all that work.
20:54 - Exactly, right, yeah, you don't have to do a lot of stuff.
20:57 Okay, like the garbage collection cycles that happen over time, right?
21:01 These things would just be excluded from garbage collection entirely, so that's cool.
21:05 So they have some graphs of the before and after. In the before, the shared memory went almost to zero.
21:14 Like, it went from really high on the y-axis to really low, with no numbers on the axis, so I don't know exactly what this is, maybe a percent.
21:24 But like I said, it doesn't really say. After processing as few as 300 requests, only about a tenth of the original shared memory was left.
21:34 And that was it.
21:35 Now, in the after, it's still 75, 80% shared, which is pretty excellent.
21:40 Okay.
21:40 Cool.
21:41 >> Neat.
21:41 >> But as you said, this is like one of the internal core things from what I can tell.
21:47 >> Yeah.
21:47 >> They do say that this is foundational work for the per-interpreter GIL, PEP 684, as well as making the global interpreter lock optional in CPython, PEP 703.
21:58 Because if you know an object is never going to change, not even its ref count, not even its GC state, and definitely not its data.
22:07 Well, you can have at it with multi-threading, right?
22:10 The problem with multithreading is that something might have changed in between two operations.
22:14 And if you know it's never going to change, you can just completely remove all the checks that you need to do and make it a lot faster.
22:20 So that's why it's there to support.
22:22 And that's why it's relevant for some of these parallelism peps.
22:26 So anyway, pretty cool.
22:28 This is coming in 3.12, I guess.
22:31 Nice. Cool.
22:32 Well, I'd like to talk about something that I don't really think about that much, and that is docstrings, or docstring formats.
22:41 And I just ran across this article and I'm covering it partly just as a question to the audience.
22:48 So the article is from Scott Robinson and it's called Common Docstring Formats in Python.
22:54 And docstrings, people forget what they are.
22:58 Let's say you have a function called addNumbers or something.
23:02 You can really use any kind of quote, but the first string in a function, if it's not assigned to a variable, is the docstring.
23:09 Or it's the first element, anyway.
23:12 The first line is a little...
23:14 It's usually one line and then maybe a space and then some other stuff.
23:20 And apparently there's several common formats of this.
23:25 You can also get access to it by the __doc__ attribute of something.
23:30 So if you have a reference to a function, you can say dot dunder doc and you can see the docstring.
23:38 A lot of IDEs use this to pop up hints and stuff.
23:43 That's one of the reasons why you want to have at least the first line be an explanation that is good for somebody to see if it pops up on them and stuff.
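For instance, a tiny made-up example of a docstring and reading it back:

```python
def add_numbers(a, b):
    """Return the sum of a and b."""
    return a + b

print(add_numbers.__doc__)  # -> Return the sum of a and b.
```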
23:51 Anyway, which format should this be?
23:55 It covers a handful of different formats.
23:58 There's the reStructuredText docstring format.
24:03 You've got all this like descriptions of parameters and the types and stuff.
24:08 This is scary looking to me.
24:11 Let's go through next.
24:14 The Google docstring format.
24:15 This one makes a little more sense, but again, I don't know.
24:19 It says int and it talks about the different parameters.
24:22 If you really have to describe them, this is probably one of my favorites.
24:26 This looks pretty good.
24:27 Returns: what is the information that it returns?
24:31 Args: what are the arguments and why, with some one-liner explanations. Not too bad.
24:36 There's the NumPy/SciPy docstring format.
24:39 This is also pretty clear.
24:41 Maybe a little, let's compare the two.
24:44 I guess it's got an extra line because you're underlining the section headers, which is, I guess, okay.
24:51 It looks, I don't know, this is a lot of space.
24:55 But I'm just curious if people are really using this.
24:58 Looking at this, I can see the benefit of describing things if they're not clear from the name of your function, and I also like type hints.
25:10 This seems like a great argument for type hints, because the types will be right there in the parameters.
25:19 Then if you don't have to describe the type, maybe just have variable names that are more clear.
25:26 My personal preference really is use type hints, and then also have a description.
25:34 If you're going to do a docstring and it's not obvious from the name of the function, then have a description of what the function does, and that's it.
25:41 Then if it's unclear about really what the stuff is, the behavior of different parameters, then add that.
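As a sketch of that preference, a short Google-style docstring paired with type hints; the function and parameter names are invented for illustration:

```python
def mean_latency(samples: list[float], trim: int = 0) -> float:
    """Compute the average latency of the given samples.

    Args:
        trim: Number of outliers to drop from each end before averaging.
    """
    trimmed = sorted(samples)[trim:len(samples) - trim or None]
    return sum(trimmed) / len(trimmed)
```

The types live in the signature, so the docstring only has to explain the one argument whose behavior isn't obvious from its name.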
25:49 But again, I'd love to hear back from people.
25:53 go ahead and send me a message on @brianokken@fosstodon.org.
25:58 This worked great last week.
26:00 I got some great feedback.
26:01 And so I'd love to hear what people are doing for their docstring formats.
26:05 Do you use docstring formats, Michael?
26:07 I'm familiar with docstring formats and I've played with them.
26:10 I like the Google one best, I think.
26:12 But I'm with you.
26:13 Like, if you have good variable names, do you need the parameter information?
26:17 If you use type hints, do you need the parameter information to say the type?
26:20 If you have a return declaration with a type, do you need to have the returns?
26:24 The function has a good name, like get user, you know, angle bracket, optional user.
26:29 Like, oh, well it returns a user or it returns none.
26:32 How much more do you need to say about what it returns?
26:34 You know?
26:34 Right?
26:35 Like there's a lot of, it's, it's a little bit of a case study and yes, you want to be very thorough, but also good naming goes a really long way to like limit the amount of comments and docs you got to put onto a thing.
26:49 There are times when it makes sense, though. Like if you're talking about a range or something like that: is it inclusive of both numbers? If I say one to 10, do I get one, two, three, four up to 10, or do I get up to nine, right? Those are situations where you might need to say, the non-inclusive upper bound of the range, or something like that, right?
27:14 - Yeah, yeah.
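A one-line note in the docstring settles that kind of ambiguity; a hypothetical sketch:

```python
def sample_range(low: int, high: int) -> list[int]:
    """Return the integers from low up to, but not including, high."""
    return list(range(low, high))
```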
27:16 I do like an explanation of what's returned, though.
27:19 Often it's not obvious.
27:22 And even if you are doing a type hint and you can get the type of what's returned, the meaning of what's returned, if that's not obvious, please put that in a docstring.
27:32 But yeah, anyway.
27:33 - Cool.
27:34 I wonder, I don't, this is an honest question.
27:36 I have no idea if you express it in any of this documentation or if the editors consume it.
27:42 But what would be really awesome is if there was a way to express all possible exception types and the entire call stack, right?
27:49 Like you could get a value error.
27:51 You could get a database connection error, or you could get a uniqueness constraint exception. With any of those three, then you could have editors where you just hit, like, alt-enter, right?
28:01 The error handling goes, bam, bam, bam.
28:03 Here's the three types of things you might catch, right?
28:06 That would be awesome.
28:06 But I don't know if you can express the possible range of exceptions in there.
>> Unless you've, yeah, and especially if you're calling any other functions within a function.
28:18 >> Yeah.
28:18 >> You don't know if it's going to raise an exception possibly.
28:21 >> Possibly.
28:22 >> Yeah.
28:22 >> Anyway, that's something I would find actually really useful there, something that you don't express in the type information or the name or any of those things.
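For what it's worth, the common formats do have a Raises section for documenting exceptions; whether editors generate handlers from it is another matter. A hypothetical sketch in the Google style:

```python
def get_user(user_id: int) -> dict | None:
    """Fetch a user record by primary key, or None if not found.

    Raises:
        ValueError: If user_id is not positive.
        ConnectionError: If the database is unreachable.
    """
    if user_id <= 0:
        raise ValueError("user_id must be positive")
    ...  # look up the user in the database
```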
28:29 >> Yeah. Cool.
28:30 >> Yeah.
28:30 >> Cool.
28:31 >> Well, those are our items.
28:33 Michael, do you have any extras for us?
28:34 >> I have an extra for you in particular. How about that?
28:38 >> Okay.
28:38 >> Let's start with that one then.
28:40 So last week you asked about GitHub releases.
28:44 Who uses these?
28:45 Should I be bothered?
28:46 There's this person that seems to be telling everyone on GitHub, they should use releases if they're not.
28:51 Do I care?
28:52 And Rhett Turnbull, who's been on Talk Python to talk about building Mac apps with Python, said: "GitHub releases question.
29:02 I use them and I like them so people can subscribe to be notified of new releases.
29:06 I use gh, the GitHub command line: gh release create, to create one on the command line every time I push to PyPI.
29:14 I'm sure this can be done as an action, but I don't push that often, so it's fine with me.
29:19 Anyway, there's some feedback for you.
29:21 >> Thanks. Yeah, I actually got quite a few people reaching out and I really appreciate it.
29:27 It did convince me that I'm going to start trying to figure out using GitHub releases, but I also want to make sure that it's automated as much as possible.
29:38 I don't want to add redundant work just for the heck of it.
29:40 So you're going to set up some automation to go around and tell everyone on GitHub who doesn't have releases going yet.
29:45 No, they should do releases.
29:47 Think of all the contributor badges you're going to get.
29:50 Yeah.
29:52 All right.
29:57 Let's talk about one more thing.
29:58 We've heard about PyPI issues where people are uploading malicious packages. A lot of times it's crypto kiddies and other idiots who are doing that, or researchers doing a proof of concept that it can be done. But the Lazarus hackers, who are, I'm pretty sure, a North Korean state-sponsored hacking group, uploaded a fake VMware VMConnect library targeting IT professionals.
30:26 So, it only had 237 downloads, but when you start to think about state actor hacking level of stuff getting installed onto your machine.
30:36 That's like, at minimum, format the OS; maybe just throw the machine in the trash.
30:41 I don't know.
30:41 It's like pretty bad level of being infected.
30:44 So I don't know.
30:45 I have no action or further thoughts.
30:48 Just a, hey, that's worth checking out.
30:49 Yeah.
30:51 And maybe we do need to care about our, you know, pipeline and whatever.
30:55 Yeah.
30:56 The supply chain. But we do have the new security person, Mike, that was hired.
31:02 Right.
31:02 So that's excellent.
31:03 Yes.
31:03 Yeah.
31:03 He was in the audience when we announced it.
31:05 That was great.
31:05 Believe it's Mike, right?
31:06 Hopefully I got the name right.
31:07 Yeah.
31:07 Over to you.
31:08 That's what I got.
31:09 Okay.
31:09 Well, I was having a conversation with, actually, I don't know how to pronounce it,
31:16 JNY, on the PyBytes Slack.
31:20 We were talking about using TRS-80 computers.
31:26 And I said, Hey, I, I remember typing in Lunar Lander on my TRS-80 way back when, copied it out of the back of a magazine.
31:35 And he said, "Oh, I've got a copy of Lunar Lander that works in Python." I'm like, "Oh, I want to try it." And I still can't get his to work; I'm going to get back to him.
31:49 And then I looked around, and there was this other cool one I found, Lunar Lander Python, that's four years old.
31:57 Apparently it was done as part of a fundamentals of computing course, which is pretty impressive.
32:04 I couldn't get it to work, but their website looks great.
32:07 They have a website attached to it with screenshots.
32:12 >> It shows good fonts too.
32:14 >> Yeah, and it looks exactly like the Lunar Lander that I typed into my TRS-80, so I'm pretty excited about that.
32:20 Anyway, but I can't get that to work either.
32:23 So if anybody's got like a Lunar Lander copy or something that works with modern Python, I would love to play with it.
32:32 I also wanna hack on it with my daughter and stuff.
32:35 So anyway, that's the only extra thing I got is bring on the Lunar Lander.
32:41 - I like it.
32:42 Yeah, Mike Fiedler's here, the security guy, and people are thanking him and stuff for all the security work.
32:50 So just getting started, but yeah, it's not an easy job, I'm sure.
32:54 - Yeah, and we're pretty excited.
32:56 I can't think of a better person to do this job, so.
32:59 - Indeed.
33:00 Shall we play some bingo?
33:01 - Sure.
33:02 - All right, this is our joke.
33:03 Programmer bingo.
33:04 I love it.
33:05 So, you know how bingo works.
33:07 Everybody gets a different card with different options.
33:10 Typically it's numbers, but in this case, it's programmer actions or statements.
33:14 They get called out or happen, and as they do, you mark them off. And then whoever completes a row or a column or, I don't know, something diagonal.
33:23 I don't play that much bingo, but you win, right?
33:25 And so this is a possible programmer bingo card.
33:30 We should come up with one, a whole bunch of them.
33:32 So I'll just read you some of the options out of this card.
33:34 Okay, Brian.
33:35 So we've got number one, written code without comments.
33:40 Everybody can check that one off.
33:41 For all of the C-inspired language people, forgot a semicolon at the end of a line.
33:46 That's good.
33:47 I relate with number three: closed 12 tabs after fixing an issue.
33:51 - Oh yeah.
33:52 - Oh yeah.
33:53 Also relatable, number four: 20 warnings, zero errors.
33:57 - Works on my machine, man.
33:58 - Yeah, exactly.
34:00 Then number five is: program didn't run on someone else's computer.
34:04 - Oh, yeah.
34:04 - An instantiation of the works-on-my-machine problem.
34:08 And then this number six, to-do list greater than completed tasks.
34:11 Number seven, copied code from Stack Overflow.
34:14 I'm pretty sure we can all check that one off.
34:15 Number eight: closed a program without saving it. Okay.
34:18 Number nine, asked to fix a laptop because you're a programmer.
34:21 I have a problem with my computer, like please don't, please don't.
34:24 Number 10, turned your bug into a feature.
34:26 Eleven, deleted a block of code and regretted it later.
34:30 Finally, learned a new programming language but never used it.
34:33 Hello, TypeScript.
34:34 >> We could come up with so many of these.
34:36 We should totally do more.
34:38 >> They're so good, aren't they?
34:39 You could just go on and on.
34:40 >> Yeah. Have a backup copy of your code repository, even though it's on GitHub.
34:47 - Yeah, zip is my source control.
34:50 - Yeah, and then there's usually a free one in the middle.
34:55 That could just be: need to update pip.
34:59 - That's exactly it: pip is out of date.
35:03 - Yeah, awesome.
35:06 Well, as usual, pleasure to talk with you, Michael, and thank you so much, Sentry, for sponsoring this episode.
35:12 Again, everybody, check out Sentry and go to, what was that link again?
35:17 pythonbytes.fm/sentry.
35:19 - /Sentry.
35:20 Thanks Brian.
35:21 - Thank you, bye.