Brought to you by Michael and Brian - take a Talk Python course or get Brian's pytest book


Transcript #167: Cheating at Kaggle and uWSGI in prod

Recorded on Wednesday, Jan 29, 2020.

00:00 Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds.

00:05 This is episode 167, recorded January 29th, 2020.

00:10 I'm Michael Kennedy, and Brian Okken is away.

00:13 We miss you, Brian, but I have a very special guest to join me, Vicki Boykis. Welcome, Vicki.

00:18 Thanks for having me.

00:19 Yeah, it's great to have you here.

00:20 I'm excited to get your take on this week's Python news.

00:23 I know you found some really interesting and controversial ones that we're going to jump into, and that will be great.

00:28 Also great is Datadog, they're sponsoring this episode.

00:32 So check them out at pythonbytes.fm/datadog.

00:35 I'll tell you more about them later.

00:36 Vicki, let me kick us off with command line interface libraries, for lack of a better word.

00:43 So back on episode 164, so three episodes ago, I talked about this thing called typer.

00:49 Have you heard of typer?

00:50 T-Y-P-E-R?

00:51 I have not, but I've heard of click.

00:53 So I'm curious to see how this differs from that even.

00:56 Yeah, so this is sort of a competitor to click.

01:00 Typer is super cool because what it does is it uses native Python concepts to build out your CLI rather than attributes where you describe everything.

01:11 So, for example, you can have a function and you just say this function takes a name colon str to give it a type or an int or whatever.

01:19 And then Typer can automatically use the type and the name of the parameters and stuff to generate like your help and the inbound arguments and so on.

01:29 So that's pretty cool, right?

01:30 Yeah, seems like a great excuse to start using type annotations if you haven't yet.

01:34 Yeah, exactly. Very, very nice.

01:35 That it leverages type annotations, hence the name typer, right?
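
For example, a minimal Typer script along the lines described above might look like this; the hello function, its parameters, and the file name are made up for illustration, but typer.run and the type-hint-driven help are the real mechanism:

    # hello.py - minimal sketch of the Typer pattern discussed above
    import typer

    def hello(name: str, count: int = 1):
        """Greet NAME, COUNT times."""
        for _ in range(count):
            typer.echo(f"Hello {name}")

    if __name__ == "__main__":
        # Typer reads the signature: name becomes a required argument,
        # count becomes an option, and --help is generated automatically.
        typer.run(hello)

Running "python hello.py --help" should then show the argument and option derived from the annotations.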

01:38 So our listeners are great. They always send in stuff that we haven't heard about or you know, like, I can't believe you didn't talk about this other thing.

01:46 So Marcelo sent in a message and says, hey, you should talk about Clize, C-L-I-Z-E, which turns functions into command-line interfaces.

01:55 So Clize is really cool, and it's very similar to Typer in regard to how it works.

02:02 So what you do is you create functions, you give them variables, you don't have to use the types in the sense that Typer does, but you have positional arguments, and you have keyword only arguments.

02:13 And you know, Python has that syntax that very few people use, but it's cool if you want to enforce it, where you can say, here are some parameters to a function, comma, star, comma, here are some more, and the stuff after the star has to be addressed as a keyword argument, right?

02:30 Yeah, so it leverages that kind of stuff. So you can say, like their example says, here's a hello world function, and it takes a name, which has a default of none, and then star, comma, no capitalize is false, and it gives it a default value.

02:43 So all you got to do to run it is basically import run from clize and call run on your function.

02:49 And then what it does is it verifies those arguments about whether or not they're required, and then it'll convert the keyword arguments to like, dash dash, this or that. So like, dash dash no capitalize will pass true to no capitalize.

03:05 If you omit it, it'll pass, you know, whatever the default is, I guess, so false.

03:09 So there's like positional ones where you don't say the name, but then also this cool way of adding on these dash dash capitalize and so on.

03:16 So it seems like a really cool and pretty extensive library for building command line interfaces.

03:21 Yeah, so this seems like it would be good if you have a lot of parameters that you have to pass in. I'm thinking specifically of some of the work that you would do in the cloud, like in the AWS command line.

03:30 Yeah, yeah.

03:31 Or similar?

03:31 Yeah, for sure.

03:32 Another thing that's cool is it will take your doc strings and use those as help messages.

03:37 Oh, that's neat.

03:38 Yeah, so you know, in like some editors, you can type triple quote, enter, and it'll generate, you know, here's the summary of the method, and then here's the arguments, and you can fill those in, or you can just write them out, of course.

03:49 And then here's the descriptions about each parameter.

03:53 Those become help messages about each command in there.

03:57 So it's really nice, and I like how it uses just pure Python, sort of similar to Typer in that regard that you don't put like three or four levels of decorators on top of things and then reference other parts of that.

04:09 You just say, here's some Python code.

04:11 I want to treat it as a command line interface. - That is pretty cool. - Yeah.
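
For reference, this is roughly the hello-world example from the Clize docs (check the project for the exact current version); the bare * makes no_capitalize keyword-only, so it becomes a --no-capitalize flag, and the docstring's summary and parameter lines become the --help text:

    from clize import run

    def hello_world(name=None, *, no_capitalize=False):
        """Greets the world or the given name.

        :param name: If specified, only greet this person.
        :param no_capitalize: Don't capitalize the given name.
        """
        if name:
            if not no_capitalize:
                name = name.title()
            return f"Hello {name}!"
        return "Hello world!"

    if __name__ == "__main__":
        run(hello_world)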

04:16 So there's now a lot of choices if you want to do command line interfaces.

04:20 Yeah, yeah. Definitely. And Click is good, and it's very popular, and argparse as well.

04:25 But I'm kind of a fan of these pure Python ones that don't require me to go do a whole bunch of extra stuff.

04:30 So, yeah, definitely loving that.

04:32 You know what? I bet that Kaggle's not loving what you're talking about next.

04:36 - Before we get into this. - Well, I think they might be, but...

04:39 Yeah, we'll see. Okay. Tell us about Kaggle and what the big news here is.

04:43 - Yeah, so there was a dust up at Kaggle a couple weeks ago.

04:47 So just as a little bit of background, Kaggle's a platform that's now owned by Google that allows data scientists to find data sets, to learn data science, and most importantly, it's probably known for letting people participate in machine learning competitions.

05:01 That's kind of how it gained its popularity and notoriety.

05:04 - Yeah, that's how I know it.

05:04 - Yep, and so people can sharpen their data science and modeling skills on it.

05:09 So they recently, I wanna say last fall, hosted a competition that was about analyzing pet shelter data.

05:16 And this resulted in enormous controversy.

05:19 So what happened is there's this website that's called petfinder.my that helps people find pets to rescue in Malaysia from shelters.

05:28 And in 2019, they announced a collaboration with Kaggle to create a machine learning algorithm to predict which pets would be most likely to be adopted based on the metadata descriptions on the site.

05:40 If you go to petfinder.my, you'd see that they'll have a picture of the pet and then a description, how old they are, and some other attributes about them.

05:49 - Right, were they vaccinated or things like that, right?

05:53 Sort of, you might think, well, if they're vaccinated or they're neutered or spayed, they may be more likely to be adopted, but you don't necessarily know, right?

06:00 So that was kind of some, like, what are the important factors was this whole competition, right?

06:05 - Yeah, the goal was to help the shelters write better descriptions so that pets would be adopted more quickly. So they held the competition for several months, and there was a contestant who won, and he was what's called a Kaggle Grandmaster. So he'd won a lot of different stuff on Kaggle before, and he won $10,000 in prize money. But then what happened is they started to validate all of his data.

06:30 Because when you do a Kaggle competition, you then submit all of your data and all of your results and your notebooks and your code.

06:38 Like how you trained your models and stuff like that, right?

06:40 Yeah, all of that stuff.

06:42 And then what happened was Petfinder wanted to put this model into production.

06:47 So you initially have something like a Jupyter or a Colab notebook in this case.

06:52 And the idea is that now you want to be able to integrate it into the Petfinder website.

06:56 So they can actually use these predictors to fine tune how they post the content.

07:03 And so a volunteer, Benjamin Minixhofer, offered to put the algorithm into production, and when he started looking at it, he found that there was a huge discrepancy between the first and second place entrants in the contest.

07:16 And so what happened was, to get a little more into the technical aspect, the data they gave to the contestants asked them to predict the speed at which a pet would be adopted, from one to five, and included some of the features you talked about, like animal type, breed, coloration, all that stuff.

07:31 The initial training set had 15,000 animals and then after a couple months the contestants were given 4,000 animals that had not been seen before as a test of how accurate they were.

07:43 So what the winner did was he actually scraped basically most of the website so that he got that 4,000 set, the validation set also.

07:53 And he had the validation set in his notebook.

07:57 So basically what he did was he used the MD5 library to create a hash for each unique pet and then he looked up the adoption score for each of those pets, basically when they were adopted from that external data set.

08:12 And there were about 3,500 that had overlaps with the validation set.

08:16 And then he did a column manipulation in pandas to get at the hidden prediction variable for every 10th pet.

08:23 Not every single pet, but every 10th pet, so it didn't look too obvious.

08:26 So he gave himself like a 10% head start or advantage or something like that.

08:30 Exactly.

08:31 And he replaced the prediction that should have been generated by the algorithm with the actual value.

08:38 And then he did a dictionary lookup between the initial MD5 hash and the value of the hash.

08:43 And this was all obfuscated in a separate function that happened in his data.
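
Just to picture the mechanics, here's a rough, purely hypothetical sketch of that kind of leak; the column names, data layout, and function are invented for illustration and are not from the actual notebook:

    # Hypothetical reconstruction of the kind of leak described above.
    import hashlib
    import pandas as pd

    def sneak_in_true_labels(test_df: pd.DataFrame, scraped_scores: dict) -> pd.Series:
        # Hash each pet's descriptive text to match it against the
        # externally scraped data without an obvious join.
        keys = test_df["description"].apply(
            lambda text: hashlib.md5(text.encode("utf-8")).hexdigest()
        )
        preds = test_df["model_prediction"].copy()
        # Overwrite only every 10th prediction with the true adoption
        # speed so the boost doesn't look too obvious.
        for i, key in enumerate(keys):
            if i % 10 == 0 and key in scraped_scores:
                preds.iloc[i] = scraped_scores[key]
        return preds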

08:48 And so they must have been looking at this going, "What does the MD5 hash of the pet attributes have to do with anything?"

08:55 You know what I mean?

08:56 Right?

08:56 It's the, the hashes are meant to obscure stuff.

08:58 Right?

08:59 Right.

08:59 Yeah.

09:00 So what was the fallout?

09:01 So the fallout was this guy worked at h2o.ai.

09:04 And so he was fired from there.

09:06 And Kaggle also issued an apology where they explained exactly what happened.

09:11 And they expressed the hope that this didn't mean that every contest going forward would be viewed with suspicion, and they called for more openness and collaboration going forward.

09:21 Wow.

09:21 It was an amazing catch.

09:22 Yeah, that's such a good catch.

09:24 I'm so, so glad that Benjamin did that.

09:26 I've got the whole deal here.

09:28 Now, did Kaggle actually end up paying him the $10,000 before they caught it?

09:33 Is there some sort of waiting period?

09:35 Unfortunately, I think the money had already been disbursed by that point.

09:40 I can easily see something, well, the prize money will be sent out after a very deep--

09:48 it may change the timing of that for sure in the future.

09:51 Who knows, but wow, that's crazy. Do you know why he was fired? I mean, H2O.ai, they're kind of a "we'll help you with your AI" story, so I guess, you know, they're probably just like, we don't want any of the negativity of that on our product. Yeah, I think that's essentially it, and it was a pretty big competition in the data science community. And I think also, once they'd started to look into it in other places, previously he'd talked about basically scraping data to game competitions as well.

10:25 So all of that stuff started to come out as well.

10:27 I think they wanted to distance themselves.

10:29 Yeah, I can imagine.

10:31 Yikes. Okay, well, thank you for sharing that.

10:33 Now, before we get to the next one, let me tell you about this week's sponsor, Datadog.

10:36 They're a cloud-scale monitoring platform that unifies metrics, logs, and traces.

10:41 Monitor your Python applications in real time, find bottlenecks with detailed flame graphs, trace requests as they travel across service boundaries, and their tracing client auto-instruments popular frameworks like Django, asyncio, and Flask, so you can quickly get started monitoring the health and performance of your Python apps.

11:00 Do that with a 14-day free trial, and Datadog will send you a complimentary t-shirt, cool little Datadog t-shirt.

11:06 So check them out at pythonbytes.fm/datadog.

11:10 This next one kind of hits home for me because I have a ton of services and a lot of servers and websites and all these things working together, running on uWSGI, U-W-S-G-I.

11:21 And I've had it running for quite a few years.

11:24 It's got a lot of traffic.

11:26 You know, we do like, I don't know, 14 terabytes of traffic a month or maybe even more than that.

11:32 So quite a bit of traffic going around these services and whatnot.

11:35 So it's been working fine.

11:37 But I ran across this article by the engineers at Bloomberg.

11:41 So they talked about this thing called "Configuring uWSGI for production deployment."

11:46 And I actually learned a lot from this article.

11:49 So I don't feel like I was doing too many things wrong, but there was a couple of things I'm like, "Oh, yeah, I should probably do that." And other stuff just that is really nice.

11:57 So I just want to run you through a couple of things that I learned.

12:00 And if you want to hear more about how we're using uWSGI, you can check that out on Talk Python 215.

12:07 Dan Bader and I swap stories about how we're running our various things, you know, Talk Python Training and realpython.com and whatnot.

12:16 So this is guidance from Bloomberg's engineering structured products application group.

12:22 Oh, that's quite the title.

12:23 And they decided to use microWSGI because it's really, you know, good for performance, easy to work with.

12:29 However, they said, as uWSGI has matured, some of the defaults that made sense when it was new, like in 2008, don't make sense anymore.

12:40 The reason is partly just that the way people use these servers is different now, for example, putting a proxy like Nginx in front of uWSGI, which used to not be so popular.

12:54 So there are these defaults built into the system that maybe don't make sense anymore. And so what they did is go through and talk about all the things they override the defaults for and why. Unbit, the developer of uWSGI, is going to fix all these bad defaults in the 2.1 release, but right now it's 2.0 as of this recording. So you're going to have to just, you know, hang in there or apply some of these changes.

13:20 Now I do want to point out one thing.

13:23 When I switched on a lot of these, I did them one at a time.

13:25 And the way you get it to reload its config is you relaunch the process, restart the process with systemctl, the daemon management tool on Linux.

13:35 And one of their recommendations is to use this flag, die-on-term, which makes it shut down, rather than reload, when it receives a TERM signal.

13:44 And for whatever reason, maybe I'm doing it wrong, but whenever I turned that on, it would just lock up, and it would take about two minutes to restart the server, because it would just hang until it eventually timed out and was forcefully killed.

13:56 So that seems bad, so I'm not using that.

13:58 But I'll go quickly over the settings that I use that I thought were cool here.

14:02 So you've got these complicated config files.

14:05 If you want to make sure everything's validated, you can say strict equals true.

14:08 That's cool, that will verify that everything that's typed in the file is valid, because it's pretty forgiving at the moment.

14:15 Master = true is a good setting because this allows it to create worker processes and recycle them based on number of requests and so on.

14:23 Something that's interesting, I didn't even realize you could do, maybe tell me if you knew this was possible in Python apps, you can disable the GIL, the global interpreter lock. You can say, you know what, for this Python interpreter, let's not have a GIL.

14:36 Wow, how does that work?

14:37 Yeah, well, I mean, people talk about having no GIL as like, oh, you can do all this cool concurrency and whatnot.

14:42 But what it really means is, you're basically guaranteeing you can only have one thread.

14:46 So if you try to launch a background job on a uWSGI server, and you don't pass enable-threads = true,

14:53 It's just going to not run.

14:54 Because there's no GIL, there's no way to start it.

14:57 So that's something you want to have on. Vacuum = true: this one I had off and I turned it on.

15:03 Apparently, it cleans up like temporary files and so on.

15:05 Also single-interpreter. It used to be that uWSGI was more of an app server that might host different versions of Python and maybe Ruby as well, and this just says, no, no, it's just the one version. A couple other ones: you can specify the name that shows up in, like, top or glances. So you can give it, say, your website name, and it'll say things like, that thing worker process one, or that thing master process, or whatnot.

15:30 And so there's just a bunch of cool things in here with nice descriptions of why you want these features.

15:35 So if you are out there and you're running uWSGI, give this a quick scan, it's really cool.
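
For reference, here's a minimal sketch of a uwsgi.ini built from the recommendations above; the option names are real uWSGI settings discussed in the article, while the procname value is just an illustrative placeholder:

    [uwsgi]
    # fail on unknown or misspelled options instead of ignoring them
    strict = true
    # master process manages and recycles the workers
    master = true
    # keep the GIL so background threads can actually run
    enable-threads = true
    # clean up temp files and sockets on exit
    vacuum = true
    # plain Python app server, not a multi-language one
    single-interpreter = true
    # shut down instead of reloading on SIGTERM; see the restart caveat above
    die-on-term = true
    # illustrative: label the master/worker processes in top or glances
    procname-prefix-spaced = mysite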

15:40 Now, this next one also is pretty neat.

15:43 So this one comes from the people who did spaCy, right?

15:46 What do they got going on?

15:47 - Yep, that's right.

15:48 So this was just released a couple days ago, and it's called Thinc, and they bill it as a functional take on deep learning.

15:56 And so basically, if you're familiar with deep learning, there are kind of two big competing frameworks right now, TensorFlow and PyTorch, and MXNet is also in there.

16:07 So the idea of this library is that it abstracts away some of the boilerplate that you have to write for both TensorFlow and PyTorch.

16:14 PyTorch has a little bit less; TensorFlow with Keras on top also has a little bit less. But you end up writing a lot of the same kind of stuff, and there's also some stuff that's hidden away from you, specifically some of the matrix operations that go on under the hood.

16:30 And so Thinc already runs under the covers of spaCy, which is an NLP library.

16:38 So what the team did was they surfaced it so that other people could use it more generically in their projects.

16:45 And so it has that favorite thing that we love, it has type checking, which is particularly helpful for tensors.

16:52 When you're trying to get stuff and you're not sure why it's not returning things.

16:56 It has classes for PyTorch wrappers and for TensorFlow, and you can intermingle the two if you want to, if you have two libraries that bridge things.

17:06 It has deep support for NumPy structures, which are the kind of the underlying structures for deep learning.

17:12 It operates in batches, which is also a common feature of deep learning projects, so they process features and data in batches.

17:22 And then also, a problem that you sometimes have with deep learning is you're constantly tuning hyperparameters, the variables that you put into your model to figure out how long you're going to run it for, how many training epochs you're going to have, what size your images are going to be. Usually those are clustered at the beginning of your file as kind of a dict or whatever. It has a special structure to handle those as well. So it basically hopes to make it easier and more flexible to do deep learning, especially if you're working with two different libraries, and it offers a nice higher-level abstraction on top of that.
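
As a taste of the style, the model definition looks roughly like this, adapted from the Thinc README (check the repo for the exact, current API):

    from thinc.api import chain, Relu, Softmax

    n_hidden = 32
    dropout = 0.2

    # Layers are composed with plain function calls; missing shapes can
    # be inferred later from sample data when the model is initialized.
    model = chain(
        Relu(nO=n_hidden, dropout=dropout),
        Relu(nO=n_hidden, dropout=dropout),
        Softmax(),
    )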

17:57 And the other cool thing is, is they have already released all the examples and code that are available in Jupyter Notebooks on their GitHub repo.

18:05 So I'm definitely going to be taking a closer look at that.

18:08 Yeah, that's really cool.

18:09 They have all these nice examples there, and even buttons to open them in Colab, which is, yeah, that's pretty awesome.

18:15 This looks great.

18:16 And it looks like it's doing some work with FastAPI as well.

18:21 I know they hired a person who's maintaining FastAPI, which is cool.

18:25 Also their Prodigy project.

18:27 So yeah, this looks like a really nice library that they put together.

18:30 Cool, and Ines has been on the show before.

18:33 Ines from Explosion AI, here as a guest co-host as well.

18:37 Super cool.

18:37 That's awesome.

18:38 And this next one I want to talk about.

18:40 I'd love to get your opinion because you're more on the data science side of things, right?

18:43 Yeah.

18:44 So this next one I want to tell folks about.

18:46 This is another one from listeners, you know, we talked about something that validates pandas, and they're like, oh, you should also check out this thing.

18:54 So this comes from Jacob Deppin, thank you, Jacob, for sending this in.

18:58 And so it's pandas-vet.

19:00 And what it is, is a plugin for Flake8 that checks pandas code.

19:06 And it's this opinionated take on how you should use pandas.

19:10 They say one of the challenges is that if you go and search on Stack Overflow or other tutorials, or even maybe video courses, they might show you how to do something with pandas, but maybe that's a deprecated way of working with pandas or some sort of old API, and there's a better way.

19:27 So the idea is to make pandas more friendly for newcomers by trying to focus on best practices and saying, don't do it that way, do it this way. You know, read_csv has so many parameters, what are you doing? Here's how you use it, things like that.

19:38 Here's how you use it, things like that.

19:41 So this linter was created, or the idea was sparked, by a talk from Ania Kapuścińska.

19:50 Sorry, I'm sure I blew that name bad, but it was at PyCascades 2019 in Seattle, "Lint your code responsibly," so I'll link to that as well.

19:57 So it's kind of cool to see the evolution like, Ania gave a talk at PyCascades and then this person's like, oh, this is awesome, I'm going to actually turn this into a Flake 8 plugin and so on.

20:08 What are your thoughts on this? Do you like this idea?

20:09 Yeah, I'm a huge fan of it. I think in general there's been kind of, I want to say, a culture war about whether notebooks are good or bad. There was recently, I want to say not a paper but a blog post, a couple days ago about how you should never use notebooks, and there was a talk by Joel Grus last year about all the things that notebooks are bad at. I think they have their place, and I think this is one of the ways you can have, I want to say, guardrails around them and help people do things.

20:40 I like the very opinionated warning that they have here, which is that df is a bad variable name.

20:45 Be kinder to yourself, because that's always true. You always start with the default of df, and then you end up with 34 or 35 of them. I joke about this on Twitter all the time.

20:54 But it's true. So that's a good one. And .loc, .iloc, and .ix are always a point of confusion, so it's good that they have that.

21:02 And then there's the one where pivot_table is preferred to pivot or unstack.

21:06 So there's a lot of places, so pandas is fantastic, but there's a lot of these places where you have old APIs, you have new APIs, you have people who usually are both new to Python and programming at the same time coming in and using these.

21:20 So this is a good set of guardrails to help write better code if you're writing it in a notebook.

21:24 Oh yeah.

21:25 That's super cool.
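
If you want to try it, it's the normal Flake8 workflow: pip install pandas-vet, then run flake8 over your file. Here's a made-up snippet of the sort of thing it complains about (file name and data are invented):

    # analysis.py - made-up example of code pandas-vet would flag
    import pandas as pd

    df = pd.read_csv("pets.csv")   # flagged: "df" is a bad variable name
    df.fillna(0, inplace=True)     # flagged: inplace=True is discouraged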

21:25 Do you know, is there a way to make Flake 8 run in the notebook automatically?

21:29 I don't know.

21:30 - You probably can, yeah.

21:31 - It probably wouldn't be--

21:32 - Yeah, yeah, but I don't know.

21:33 - Yeah, but I'm thinking, it's interesting that you ask that, because that's not generally something you would do with notebooks, but maybe this kind of stuff will push them in the direction of being more like what we consider quote-unquote mainstream, or just web dev or backend programming.

21:49 - Yeah, yeah, cool.

21:50 Well, I definitely think it's nice.

21:51 If you're getting started with pandas, give this a check.

21:54 You also, if you're getting started with Pandas, you may also be getting started with NumPy, right?

21:58 - Yep, so NumPy is the backbone of numerical computing in Python.

22:03 So I talked about TensorFlow, PyTorch, machine learning in the previous stories.

22:08 All of that kind of rests on the work and the data structures that NumPy created.

22:13 So pandas, scikit-learn, TensorFlow, PyTorch, they all lean heavily on, if not directly depend on, the core concepts, which include matrix operations, through the NumPy array, also known as an ndarray.

22:25 The problem with ndarrays is that, while they're fantastic, the documentation was a little bit hard for newcomers.

22:31 So Anne Bonner wrote a whole new set of documentation for people that are both new to Python and scientific programming, and that's included in the NumPy docs themselves.

22:42 Before, if you wanted to find out what arrays were, how they worked, you could go to the section and you could find out the parameters and attributes and all the methods of that class, but you wouldn't find out how or why you would use it.

22:54 And so this documentation is fantastic because it has an explanation of what they are, it has visuals of what happens when you perform certain operations on arrays, and it has a lot of really great resources if you're just getting started with NumPy.

23:07 My strong recommendation, if you're doing any type of data work in Python, especially with pandas, is that you become familiar with NumPy arrays, and this makes it really easy to do so.

23:16 - Yeah, nice.

23:17 It has things like, how do I convert a 1D array to a 2D array, or what's the difference between a Python list and a numpy array and whatnot.
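
For example, the 1D-to-2D question comes down to something like this (a quick sketch, not the docs' exact wording):

    import numpy as np

    a = np.arange(6)        # a 1D array: [0, 1, 2, 3, 4, 5]
    b = a.reshape(2, 3)     # reshape into 2 rows x 3 columns
    c = a[np.newaxis, :]    # or add an axis, giving shape (1, 6)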

23:27 Yeah, it looks really helpful.

23:28 I like the why, that's often missing.

23:31 You'll see like, you do this, use this function for this, and here are the parameters.

23:36 Sometimes they'll describe them, sometimes not.

23:38 And then it's just like, well, maybe this is what I want.

23:42 Stack Overflow seemed to indicate this is what I want.

23:44 I'm not sure, I'll give it a try, right?

23:45 So I like the little extra guidance behind it, that's great.

23:47 - Yeah, it does a really good job of orienting you.

23:49 - Cool, all right, well, Vicki, those are our main topics for the week, but we got a few extra quick items just to throw in here at the end.

23:58 I'll let you go first with yours.

23:59 Sure.

23:59 This is just a bit of blatant self-promotion about who I am.

24:03 So I am a data scientist, and on the side

24:06 I write a newsletter that's called Normcore Tech.

24:09 And it's about all the things that I'm not seeing covered in the mainstream media.

24:12 And it's just a random hodgepodge of stuff.

24:15 It ranges from anything like machine learning, how the data sets got created initially for NLP.

24:21 I've written about Elon Musk memes.

24:23 I wrote about the recent raid of the Nginx office in great detail and what happened there.

24:29 So there's a free version that goes up once a week and paid subscribers get access to one more paid newsletter per week.

24:36 But really it's more about the idea of supporting in-depth writing.

24:38 So it's just vicki.substack.com.

24:40 Cool.

24:41 Well, that's a neat newsletter and I'm a subscriber.

24:45 So very, very nice.

24:46 I have a quick one for you all out there.

24:49 And maybe two actually.

24:51 pip 20.0 was released.

24:54 So not a huge change, obviously, pip is compatible with the stuff that it did before and whatnot.

25:01 But it does a couple of nice things, and I think this is going to be extra nice for beginners because it's so challenging.

25:09 You go to a tutorial and it says, all right, the first thing you've got to do to run whatever, I want to run Flask, or I want to run Jupyter, is you say "pip install flask" or "pip install jupyter", and it says you do not have permission to, you know, write to wherever those are going to install, right?

25:25 Depending on your system.

25:26 And so if that happens now in pip 20, it will install as if --user was passed, into the user profile.

25:35 That's cool, huh?

25:36 That's really neat.

25:37 Yeah, so that's great.

25:38 And it caches wheels built from Git requirements, and a couple of other things.

25:43 So yeah, nothing major, but nice to have that there.

25:46 And then also, I'd previously gone on a bit of a rant saying I was bugged that Homebrew, which is how I put Python on my Mac, was great for installing Python 3, but only up to 3.7.

25:58 So if you just...

25:59 It's even better because if you just say brew install python, that means Python 3, not legacy Python, which is great.

26:07 But that sort of stopped working.

26:08 It still works, but it installs Python 3.7.

26:11 So that was kind of like, oh, sad face.

26:15 But I'm sorry, I forget the person who sent this over on Twitter, but one of the listeners sent in a message that said, "You can brew install python@3.8 and that works." - Is it safe to brew again? I've just started downloading directly from python.org. - I know, exactly.

26:30 Exactly. So I'm trying it today and so far it's going well.

26:35 So I'm really excited that on macOS we can probably get the latest python.

26:39 Even if you've got to say the version, I just have an alias that re-aliases what python means in my .zshrc file, and it'll just say, you know, if you type python, that means Python 3.8 for now. Anyway, I'm pretty...

26:51 - Fingers crossed. - Yeah, fingers crossed.

26:52 So it looks like it's good, and that's nice.

26:54 Hopefully, it just keeps updating itself.

26:56 I suspect it will, at least within the 3.8 branch.

26:59 All right, you ready to close this out with a joke?

27:00 - Yeah. - Yeah, so...

27:02 I'm sure you've heard the type of joke, you know, a mathematician and a physicist walk into a bar and...

27:08 Right? Well, some weird thing about numbers and space ensues.

27:12 So this one is kind of like that one. It's about search engine optimization.

27:17 So an SEO expert walks into a bar, bars, pub, public house, Irish pub, tavern, bartender, beer, liquor, wine, alcohol, spirits, and so on.

27:27 It's bad, huh?

27:30 I like that. That's nice.

27:32 Yeah, it's so true.

27:33 Like, you remember how blatant websites used to be like 10 years ago? They would just have a massive bunch of just random keywords at the bottom.

27:41 Just, you know, it seems like...

27:43 Yeah, and sometimes they would be in white-on-white text.

27:45 Yes, exactly, white on white.

27:47 But then if you highlighted it, it would be like three whole paragraphs.

27:49 Here's where the SEO hacker went.

27:52 I don't think that works so well anymore.

27:54 But yeah, it's a good joke nonetheless.

27:56 And Vicki, it's been great to have you here.

27:59 Thanks so much for filling in for Brian and sharing the data science view of the world with us.

28:04 - Thanks for having me. - You bet. Bye.

28:05 Bye.

28:06 Thank you for listening to Python Bytes.

28:08 Follow the show on Twitter via @PythonBytes.

28:10 That's Python Bytes as in B-Y-T-E-S.

28:13 And get the full show notes at pythonbytes.fm.

28:16 If you have a news item you want featured, just visit pythonbytes.fm and send it our way.

28:20 We're always on the lookout for sharing something cool.

28:23 On behalf of myself and Brian Okken, this is Michael Kennedy.

28:26 Thank you for listening and sharing this podcast with your friends and colleagues.
