Transcript for Episode #193:
Break out the Django testing toolbox
00:00 Hello, and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is Episode 193, recorded July 29, 2020. I'm Michael Kennedy. And I am Brian Okken. And we've got a bunch of great stuff to tell you about. This episode is brought to you by us; we'll share that information with you later. But for now, I want to talk about something I actually ran into today, and I think you're going to bring this up as well, Brian, so I'll let you talk about it. I was updating my servers and got a big warning in red when I pip installed some stuff, saying your dependencies are inconsistent and pip is going to change how it works soon, so be ready. And of course, that just results in frustration for me, because Dependabot tells me I need these versions, but some things don't agree. Anyway, long story, you tell us about it. Yeah, okay. So I was curious. I haven't actually seen this yet, so I'm glad you've seen it and have some experience. This was brought to us by Matthew Feickert. He says he was running pip and he got this warning, all in red, so I'm going to have to squint to read it. It says: after October 2020, you may experience errors when installing or updating packages. This is because pip will change the way it resolves dependency conflicts. We recommend you use --use-feature=2020-resolver to test your packages. It shows up looking like an error, and I think that's just so people actually read it. I don't know if it's a real error or not; everything still works fine. But it is going to be an error eventually. Okay, so this is not a problem, do not adjust your sets. Actually, do adjust your sets: what you need to be aware of is the changes. I think we've covered it before, but we've got a link in the show notes to the pip dependency resolver changes. And these are good things. But there's one thing Matthew pointed out, which is great.
And we're also going to link to an article where he discusses how his problem showed up. It's around how projects manage dependencies. Some people use Poetry, or Pipenv, tools that do lock files and such, but a lot of people handle it manually. What you often do is keep an original set of requirements, just the handful of things you immediately depend on, with no versions or with minimal version rules, and say, install this stuff. That actually ends up installing all of your immediate dependencies, all of their dependencies, and so on. So if you want to lock that down, so you're installing the exact same things again and again, you run pip freeze and pipe that to a lock file of sorts, and then install from that. It's a common pattern. It's not the same as a Pipenv lock file, but it can be similar. If you then pip install from that, everything should be fine; you'll install those same dependencies. The problem is, if you don't use the 2020 resolver feature to generate your lock file, then when you do use it to install from the lock file, there may be incompatibilities. What the resolver is trying to do is good; having pip resolve dependencies properly is a good thing. But the quick note we want to give is: don't panic when you see that red warning. Just try --use-feature=2020-resolver. And if you're using a lock file, use it for the whole process: use the new resolver to generate your lock file from your original requirements, and then use it again when installing from the lock file. There's also information on the PyPA website, and they want to know if there are issues. It's available now; there may still be kinks, but I think it's pretty solid. Not enforced yet, but these are the warning days. Yeah.
And I actually really like this way of rolling out a new feature and behavior change: have it available as a flag so you can test it in an actual release, not a pre-release, and then change the default behavior later. The reason we're bringing this up is that October is not that far away, and October is when this goes from opt-in flag to default behavior. So go make sure these things are working. And if you completely ignore us, when things break in October, the reason is probably that you need to regenerate your lock file. Yep. So in principle, I'm all for this. It's a great idea. It's going to make sure things are consistent by looking at the dependencies of your libraries. However, two things are driving me bonkers right now: systems like Dependabot or pyup, which are critically important for making sure your web apps get updated with, say, security patches. So you would do this
05:00 like, you know, pip freeze your dependencies, and then the file has pinned versions. What if, say, you're using Django and there's a security release for something in there? Unless you know to update that pin, it's always just going to install the version you started with. So you want a system like Dependabot or pyup that looks at your requirements and says, these are out of date, let's update them, here's a new one. However, those systems don't look at the entirety of what the versions could potentially be set to. It says, okay, you're using docopt, there's a docopt 0.16. Except something else in there actually requires docopt 0.14 or lower, so they're incompatible, and pip will no longer install that requirements.txt. But those systems still say, great, let's upgrade it. And you're in this battle where those tools keep upgrading one thing while the older libraries won't allow it. Or you get two libraries, one requires docopt 0.16 or above, one requires 0.14 or lower, and you just can no longer use those libraries together. Now, it probably doesn't actually matter; the feature you're using is probably compatible with both. But you won't be able to install them together anymore. My hope is that this means the people with these weird old dependencies will either loosen the requirements in their dependency metadata, like this thing needs this older version or newer, or update them, because there are going to be packages that become uninstallable together without being truly incompatible. Yeah, interesting. Yes, painful. I don't know what to do about it, but literally this morning I ran into this, and I had to go back and undo what Dependabot was trying to do for me, because certain things were no longer working. Interesting. Yeah.
So this Dependabot... Dependabot? Yeah, that's the thing GitHub acquired that basically looks at your various package listings and says, there's a new version of this, let's pin it to a higher version, and it comes through as a PR. Okay, because that was my question: it comes as a PR. So if you had testing in a CI-like environment, it could catch it before it went through. Yes, you'll still get the PR, it'll still be in your GitHub repo, but the CI presumably would fail because the pip install step would fail, and then it would know it couldn't auto-merge it. But still, it's like constantly trying to push the tide back, because you're like, stop doing this, it's driving me crazy. And there are ways to limit it, to force it to certain boundaries, but anyway, it's going to make some of these things a little more complicated. Hopefully Dependabot considers this and updates to only propose upgrades when the resolvers agree. Yep, that would be great. Well, speaking of packages, the way you use packages is you import them once you've installed them, right? Yes. So Brandon Braner was talking with me on Twitter, saying, I have some imports that are slow; how can I figure out what's going on here? And this led me over to something I may have covered a long time ago, but I don't think so, called import-profiler. You know this? No. This is cool. Yeah. So one of the things that can legitimately be slow about Python startup is the imports. For example, if you import requests, it might be importing a ton of different things: standard library modules, as well as external packages, which are themselves importing standard library modules, and so on. So you might want to know what's slow and what's not.
And it's also not just... it's not like a C include. Imports actually run code. Yes, exactly. It's not something happening at compile time; it's happening at runtime. So every time you start your app, it goes through and executes the code that defines the functions and the methods, and potentially other code as well; who knows what else is going on. So there's a non-trivial amount of time spent doing that kind of stuff. For example, I believe it takes something like half a second to import requests, just requests. I mean, obviously that depends on the system, right? Do it on MicroPython versus a supercomputer and the time is going to vary. But nonetheless, there's a non-trivial amount of time because of what's happening. So there's this cool thing called import-profiler, and all you've got to do is say, from import_profiler import profile_import.
09:42 Say that a bunch of times fast. Written, it's fine; spoken, it's funky. But then you just create a context manager around your import statements: you say with profile_import() as context, do all your imports inside, and then you call context.print_info(), and you get a profile status report. That's cool.
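The usage described above really is about three lines: import the profiler, wrap your imports in the with block, then print the report from the context. As a rough standard-library-only sketch of the underlying idea (no per-module tree like import-profiler gives you, just the total cost of one import), you can time importlib yourself; json here is just a stand-in for a slower module:

```python
import importlib
import sys
import time

def timed_import(name):
    # Drop any cached copy so we measure a genuine import, not a
    # dictionary lookup in sys.modules.
    sys.modules.pop(name, None)
    start = time.perf_counter()
    module = importlib.import_module(name)
    elapsed = time.perf_counter() - start
    return module, elapsed

module, seconds = timed_import("json")
print(f"import {module.__name__} took {seconds * 1000:.1f} ms")
```

Since Python 3.7 you can also get a per-module breakdown from the interpreter itself with the -X importtime flag, with no third-party tooling at all.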
10:00 I included a little tiny example of this for requests and what came out of it. If you look at the documentation, the output is actually much longer. Looking here, just eyeballing it, there are probably 30 different modules being imported when you say import requests. That's non-trivial. That's a lot of stuff. So this gives you that output. It'll say, this module imported this module, and it has a hierarchy, a tree type of thing: this module imported this module, which imported those other two. So you can see the chain, or tree, of everything an import drags along with it. Okay, yeah. And it gives you the overall time, maybe the time dedicated to just that operation, and then the inclusive time. Actually, it looks more like 83 milliseconds; sorry, I had my units wrong, instead of half a second. But nonetheless, if you have a bunch of imports and you're wondering where it's slow, you can run this. It takes basically three lines of code to figure out how much time each part of that entire import stack took. I want to say call stack, but it's the series of imports that happened. You time that whole thing and look at it. So yeah, it's pretty cool. That's neat. And there are times when you really want startup time to be as fast as possible, and part of that is the things you're importing at startup; it's sometimes non-trivial. Let's say you're spending half a second on startup because of the imports; you might be able to take the slowest of those and import them inside a function that gets called later. Right.
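That deferred-import trick looks like this in practice; json stands in for a genuinely slow import like requests, and the function name is made up:

```python
def export_report(data):
    # Deferred import: the cost is only paid the first time this
    # function actually runs, not at application startup.
    import json
    return json.dumps(data)

# Startup pays nothing; the import cost lands on the first call.
print(export_report({"episode": 193}))
```

If the code path is never taken, the module is never imported at all; if it is taken repeatedly, Python's module cache makes every call after the first essentially free.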
So yeah, import it later. Yes, then you only pay for it if you go down that branch, because maybe you're not going to call that part of the operation, or that part of the CLI or whatever. Yeah, and it's definitely one of those fine-tuning things you want to make sure you don't do too early. But for people working on packaging and supporting large projects, I think it's a good idea to pay attention to your import time. It'd be kind of fun to throw it into a test case in CI, to make sure your import time doesn't suddenly get slower because something you depend on got slower. Yeah, absolutely. And you don't necessarily know, because the thing it depends upon might be what changed, right? It's not even the thing you directly depend upon; it could be way down the line. Yeah. And maybe you're like, we're going to use this other library; we barely use it, but we already have dependencies, why not throw this one in? Oh wait, that's adding a quarter of a second. We could just vendor the one file we actually need and make it much, much faster. So there are a lot of interesting use cases here. A lot of the time you don't care. For my web apps, I don't care. For my CLI apps, I might. Yeah, definitely. So I've been on a bit of an exploration lately, Brian, and that's because I'm working on a new course. Yeah, we're actually working on a bunch of courses over at Talk Python, some data science ones, which are really awesome. But the one I'm working on is Python memory management and profiling: tips and tricks and data structures to make all those things go better. Nice. So I'm kind of on this profiling bent.
And anyway, if people are interested in that, or any of the other courses we're working on, they can check them out over at training.talkpython.fm; it helps bring you this podcast. Thanks for that transition. I'm excited about that, because profiling is one of those things that's often considered kind of a black art, something you just learn on the job. And how do you learn it? I don't know; you just have to know somebody who knows how to do it. So having some courses around that is a really great idea. Thanks. Yeah. Also things like: when does the GC run? What is the cost of reference counting? Can you turn off the GC? Which data structures are more efficient or less efficient? All that good stuff. It'll be a lot of fun. Well, yeah, so I've got a book, and I actually want to highlight a link: pytestbook.com. If you go to pytestbook.com, it goes to a landing page on my blog, which is not really that active, but there is a landing page. The reason I'm pointing this out is that people are transitioning: some people are finally starting to use 3.8 more, people are starting to test 3.9 a lot, which is great, and pytest 6 just got released (not one of our items). And I've gotten a lot of questions of, is the book still relevant? And yes, the pytest book is still relevant, but there are a couple of gotchas. I will list all of these on that landing page. They're not there yet, but they will be by the time this airs. Time travel. Yeah. There's an errata page on Pragmatic that I'll link to as well. But the main things: there's a database I use in the examples, TinyDB, and the API changed since I wrote the book. There's a little note
15:00 to update the setup to pin the database version. And markers: you used to be able to get away with just throwing markers in anywhere; now you get a warning if you don't declare them. There are a few minor things that changed that might be frustrating for new pytest users walking through the book, so I'm going to lay those out directly on that page to help people get started really quickly. pytestbook.com is what that is. Awesome. Yeah, it's a great book. And you might be on a testing bent as well, if I'm on my profiling one. Yeah, actually. So this is Django Testing Toolbox, an article by Matt Layman. I was actually thinking about having him on Test & Code to talk about some of this stuff, and I still might, but I wanted to cover it here because it's really great information he threw together: a quick walkthrough of how Matt tests Django projects. He goes through some of the packages he uses all the time and some interesting techniques. On the packages, there are a couple I was familiar with: pytest-django, which, of course, of course you should use that. And factory_boy: there are a lot of different projects to generate fake data, and factory_boy is the one Matt uses, so there's a highlight there. And then one I hadn't heard of before, django-test-plus, which is a beefed-up test case, and maybe has other stuff too, but it has a whole bunch of helper utilities to make it easier to check commonly tested things in Django. So that's pretty cool. And then some of the techniques. One thing that trips up some people coming to pytest for Django is that a lot of people think of pytest as test functions only, not test classes. But there are some uses.
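pytest will happily collect plain classes too: a class whose name starts with Test, holding test_ methods, needs no base class at all. A minimal sketch, with made-up names:

```python
# pytest collects classes named Test* and runs their test_* methods;
# no TestCase base class or special imports are required.
class TestCheckout:
    def test_total(self):
        assert 2 + 3 == 5

    def test_empty_cart(self):
        assert sum([]) == 0
```

Grouping related tests in a class like this is the style Matt likes, and django-test-plus simply gives you a richer base class to inherit from when you want Django-aware helpers on top of it.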
Matt says he really likes to use test classes, and pytest allows you to use test classes, including derived test cases like the django-test-plus test case. A couple of other techniques: using in-memory SQLite databases when you can get away with it, to speed things up, because in-memory databases are way faster than filesystem databases. Yeah, and you don't have to worry about dependencies or servers you've got to run. It's just ":memory:" when you connect, and off it goes. Yeah. One of the things I didn't quite get, I mean, I kind of get it: disabling migrations while testing. I don't know a lot about database migrations, or Django migrations, or whatever those are, but apparently disabling them is a good idea. Makes sense. Faster password hasher: I have no idea what this is talking about, but apparently you can speed up your testing by having a faster password hasher. Yeah, a lot of the time these hashes are generated to be explicitly slow, right? So over at Talk Python, I use passlib, not Django, but passlib is awesome. If you just use, say, MD5, it's super fast: you say, take this password and hash it, and it comes up with the hashed output. But because it's fast, people could look at that and say, well, let me try 100,000 words I know and see if any of them match, and then that's the password, right? So you use more complicated ones; MD5 is not what you want. You want something like bcrypt, which is slower and harder to guess. But what you should really do is insert little bits of salt, extra text, around it, so even if the password matches a known word, the hash isn't exactly the same, and you can't do those guesses.
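The fast-versus-slow hasher trade-off is easy to see with the standard library's hashlib; the password and iteration count here are just illustrative:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # random salt: same password, different hash each time

# Fast hash: fine for checksums, far too cheap for stored passwords.
fast = hashlib.md5(password).hexdigest()

# Deliberately slow hash: salted, then folded back through 100,000
# rounds so every guess costs an attacker real compute time.
slow = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
```

Swapping the deliberately slow hasher for a fast one during tests is exactly the speedup the article is after; you just never want that swap in production.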
But then you should fold it, which means take the output of the first hash, feed it back through, take that, feed it through a second time, and repeat 100,000, 200,000, 300,000 times, so that if they try to guess, it's computationally expensive. I'm sure that's what he's talking about. So you don't want that when you want your tests to run fast, because you don't care about hash security during tests. Oh yeah, that makes total sense. That's my guess; I don't know for sure, but I think that's probably what it means. The last tip, which is always a good tip, is figure out your editor so you can run your tests from your editor, because your cycle time flipping between code and tests is going to be a lot faster if you can run them from your editor. Yep, sure, good tips. If you're super intense, you have the auto-run.
19:25 Which I don't do. I don't have auto-run on. I do once in a while. Yeah, cool. Well, back to my rant: let's talk about profiling. Okay. Actually, this is not exactly the same type of profiling; it's more of a look inside of data than at performance. This was recommended to us by one of our listeners named Oz, first name only is what we got, so thank you. And he's a data scientist who spends a lot of time working in Python doing exploratory data analysis. The idea is: go grab some data, open it up, explore it, and just start
20:00 looking around. But it might be incomplete, it might be malformed; you don't necessarily know exactly what its structure is. And he used to do this by hand, but he found this project called pandas-profiling, which automates all of this. That's like missingno, which I mentioned before, missing with an o, as in missing numbers, the missing data explorer, which is super cool, and I still think that's awesome. But this is kind of in the same vein. The idea is, given a pandas DataFrame... you know pandas has a describe function that tells you a little bit of detail about it. This thing takes that and supercharges it. You can say df.profile_report(), and it gives you all sorts of stuff. It does type inference to say the things in this column are integers or numbers, strings, datetimes, whatever. It reports the unique values, the missing values, quantile statistics, descriptive statistics (that's mean, mode, standard deviation, a bunch of stuff), histograms, correlations, missing values (there's the missingno thing I spoke about), text analysis of categories and whatnot, and file and image analysis, like file sizes, creation dates, image dimensions, all sorts of stuff. The best way to see this is to look at an example. In our notes, Brian, do you see where it has nice examples? There are examples for US Census data, a NASA meteorites one, Dutch healthcare data, and so on. If you open one up, you see what you get out of it: pages of reports on what was in that DataFrame. This is great. It's tabbed and everything, it's got warnings, it's got pictures, it's got all kinds of analysis, tons of graphs, and you can hide and show details. This is a massive dive into what the heck is going on with this data. Correlations,
heat maps; I mean, this is the business right here. So this is like one line of code to get this output. This is great. This replaces a couple of interns, at least.
22:13 Sorry, interns. But yeah, this is really cool. So I totally recommend, if this sounds interesting and you do this kind of work, just pull up the NASA meteorites example and realize that all of it came from importing the thing and saying df.profile_report(), basically. You can also click and run it in Binder and Google Colab, so you can interact with it live if you want. Yeah, I love the warnings on these things. It'll say some of the variables are skewed, or have too many of one value, or there are missing values or zeros showing up. It does quite a bit of analysis for you about the data right away. That's pretty great. Yeah, and the type inference is great, because you can have hundreds or thousands of data points, and it's not trivial to just say, oh yeah, all of them are true or false, all of them are booleans. You'd have to look at everything first. So yeah, it's one of those things that's easy to adopt but looks really useful. And it's also beautiful. So check it out. It looks great. I want to talk about object-oriented programming a little bit. Ah, okay. Actually, all of Python really is object-oriented, because everything is an object deep, deep down; everything's a PyObject pointer. Yeah. There's an article by Redowan Delowar called Interfaces, Mixins and Building Powerful Custom Data Structures in Python, and I really liked it, because it's Python-focused. I've actually been disappointed with a lot of the object-oriented discussions around Python; a lot of them are basically lamenting that the system isn't the same as in other languages. But it's just not; get over it.
This is a Python-centric discussion, talking about interfaces and abstract base classes, both informal and formal abstract base classes, and using mixins. And it starts out with the base amount of knowledge people need to discuss this sort of thing: understanding why these are useful, and what some of the pitfalls and benefits are. It's not too deep a discussion, but it's an interesting one, and I think it's good background. Then he gets into, well, what's really different about an abstract base class and an interface, for instance? And he writes: interfaces can be thought of as a special case of an abstract base class. It's imperative that all methods of an interface are abstract methods, and that the classes don't store any data (any state or instance variables). However,
in the case of abstract base classes, the methods are generally abstract, but there can also be methods that provide implementation (concrete methods), and these classes can have instance variables. So that's a nice distinction. Yeah. Then mixins are where you have a parent class that provides some functionality to a subclass, but it's not intended to be instantiated itself. That's why it's sort of similar to abstract base classes and the other things. So having all this discussion from one person, in one good write-up, is a really great thing. I don't pull for diving into class hierarchies and base classes that much, but there are times when you need them, and they're very handy. So this is cool. Yeah, this is super cool. I really like this analysis, and I love that it's really Python-focused, because a lot of the time the mechanics of the language just don't support some of the object-oriented programming ideas in the same way, right? The interface keyword doesn't exist, so you have to make this distinction in a conventional sense: we come up with a convention that interfaces don't have concrete methods or state. But there's no interface keyword in Python. So I'm a big fan of object-oriented programming, and I'm very aware that in Python, a lot of what people use classes for is simply unneeded. I know where that comes from, and I want to make sure people don't overuse it. If you come from Java, or C#, or one of these OO-only languages, everything's a class, and you're just going to start creating classes. But if what you really want is to group functions and a couple of pieces of shared data, that's a module, right? You don't need a class. You can still say module_name dot whatever and get at them, like a static class or something like that.
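That interface-versus-ABC distinction is easy to see with the standard library's abc module; Serializer and its method names here are made up for illustration:

```python
from abc import ABC, abstractmethod
import json

class Serializer(ABC):
    # Abstract method: subclasses MUST override this (interface-style).
    @abstractmethod
    def dumps(self, obj):
        ...

    # Concrete method: shared behavior a pure interface would not carry.
    # This is what makes it an abstract base class rather than an interface.
    def dumps_upper(self, obj):
        return self.dumps(obj).upper()

class JsonSerializer(Serializer):
    def dumps(self, obj):
        return json.dumps(obj)

serializer = JsonSerializer()   # fine: all abstract methods are implemented
result = serializer.dumps_upper({"ok": True})
# Serializer() itself would raise TypeError: you can't instantiate a
# class that still has abstract methods.
```

A mixin would look much the same as the dumps_upper half of this: a parent class contributing behavior, never meant to be instantiated on its own.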
But sometimes you do want to model stuff with object-oriented programming, and understanding the right way to do it in Python is really cool. This looks like a good one. Yeah. And also, there's a built-in module called abc, for abstract base classes, within Python. For a lot of people it seems like a mystery thing that only advanced people use, but it's really not that complicated, and this article uses it as well and talks about it. So it's good. One of my favorite things about abstract base classes and abstract methods: in PyCharm, if I have a class that derives from an abstract class, all I have to write is class, the name of the thing I'm creating, parentheses, the abstract class name, colon, and then just hit Alt+Enter, and it'll pull up all the abstract methods. You can highlight them, say implement, and boom, it'll write the whole skeleton of the class for you. But if it's not abstract, it obviously won't do that, right? So the abstractness tells the editor to write the stubs of all the functions for you. Oh, that's a cool reason to use them. That's almost reason enough to have them in the first place. Yeah, almost. We've pickled before, haven't we? Yeah, we have talked about pickle a few times. Have we talked about this article? I don't remember. I don't think so. Apologies if we have, but it's short and interesting. So Ned Batchelder wrote this article called Pickle's Nine Flaws, and I want to talk about that. This comes to us via PyCoder's Weekly, which is very cool. We've talked about the drawbacks, we've talked about the benefits, but what I like about this article is it's concise, and it shows you all the trade-offs you're making. So quickly, I'll just go through the nine. One, it's insecure. And the reason it's insecure is not because pickles contain code, but because they create objects by calling the constructors named in the pickle.
So any callable can be used in place of your class name to construct objects. Basically, loading a pickle can run potentially arbitrary code, depending on where you got it from. Old pickles look like old code; that's number two. If your code changes between the time you pickled the object and when you load it, you get the old structure brought back to life. If you added fields or other capabilities, those are not going to be there; and if you took away fields, they're still going to be there. Yeah. It's also implicit: it will serialize whatever your object structure is. And it often over-serializes; it serializes everything. So if you have cached data or precomputed data that you wouldn't ever normally save, that's getting saved too. Yeah. One of the weird ones, and this has caught me out before, is that the dunder init method, the constructor, is not called. Your objects are recreated, but __init__ is not called; the attributes just get the values. So that might set an object up in some kind of weird state.
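That dunder init flaw is easy to demonstrate in a few lines; Account is a made-up class for illustration:

```python
import pickle

class Account:
    init_calls = 0

    def __init__(self, owner):
        # Runs on normal construction, but NOT when unpickling.
        Account.init_calls += 1
        self.owner = owner

original = Account("ada")
restored = pickle.loads(pickle.dumps(original))

# The state comes back on `restored`, but __init__ only ever ran once,
# for the original object; any validation in the constructor was skipped.
```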
Like maybe it would fail some validation or something. It's also Python-only: you can't share pickles with other programs because it's a Python-only format. They're not readable; they're binary. And it will seem like it will pickle code, but it doesn't. So if you have a function you're hanging on to, some kind of lambda function or whatever, or a class that's been passed to you,
and you have a list of them, or you're holding on to them, and you think it's going to save that, all it really saves is basically the name of the function. So those are gone. And I think one of the real big challenges: it's actually slower than things like JSON. So if you were willing to accept those trade-offs because it was super fast, that would be one thing, but it's not. And are you telling me we covered this before? We did, in Episode 189. But I'd forgotten; that was a couple of months ago, so it was a while back. Anyway, it's good to go over it again. Definitely be careful with your pickling. All right, how about anything extra? That was our top six items. What else have we got? I don't have anything extra. Do you have anything extra? pathlib. Speaking of stuff we covered before, we've talked about pathlib a couple of times. You talked about Chris May's article around pathlib, which is cool. And I said, basically, I've still just got to get my mind around not using os.path, to get into this. Right? Yeah. And people sent me feedback, like, Michael, you should get your mind into this; of course you should do this, right? And I'm like, yeah, I know. However, Brett Abel sent over a one-line tweet that may just seal the deal for me. This is sweet. He said, how about this: text equals Path of 'file.txt' dot read_text. Done. No context managers, no open, none of that. Okay, that's pretty awesome.
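That one-liner really is the whole trick; here it is round-tripped through a temp directory so it runs end to end:

```python
from pathlib import Path
import tempfile

# write_text and read_text open and close the file for you:
# no open(), no context manager, no file handles to clean up.
path = Path(tempfile.mkdtemp()) / "file.txt"
path.write_text("hello, pathlib")
text = path.read_text()
```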
Okay, all right. How about another joke? I'd love another joke. This one is by Caitlin Hudson, but it was pointed out to us by Aaron Brown. She tweeted this, and he said, hey, you should think about this one. So, are you ready? Yeah. Caitlin says: I have a Python joke, but I don't think this is the right environment.
Yeah, there are a ton of these "I have a joke, but..." type of jokes. It's a new thing, and it'll probably be over by the time this airs, but I'm really amused by these kinds of gems. Yeah, I love it. This one touches on the whole virtual environment, packaging, management, isolation chaos. I mean, there was that XKCD about that as well. Yeah. Okay, so while we're here, I'm going to read some from Luciano Ramalho. He's a Python author, and he's an awesome guy. He has a couple of related ones: I have a Haskell joke, but it's not popular.
I have a Scala joke, but nobody understands it. I have a Ruby joke, but it's funnier in Elixir. And I have a Rust joke, but I can't compile it. Yeah, those are all good. Nice. All right. Well, Brian, thanks for being here, as always. Thank you. Bye. Thank you for listening to Python Bytes. Follow the show on Twitter via @pythonbytes, that's Python Bytes as in b-y-t-e-s, and get the full show notes at pythonbytes.fm. If you have a news item you want featured, just visit pythonbytes.fm and send it our way. We're always on the lookout for sharing something cool. On behalf of myself and Brian Okken, this is Michael Kennedy. Thank you for listening and sharing this podcast with your friends and colleagues.