Brought to you by Michael and Brian - take a Talk Python course or get Brian's pytest book


Transcript #342: Don't Believe Those Old Blogging Myths

Recorded on Monday, Jun 26, 2023.

00:00 Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to

00:04 your earbuds. This is episode 342, recorded June 25th, 2023. I'm Michael Kennedy.

00:12 And I am Brian Okken.

00:13 And I am Brian Okken.

00:15 And this episode is brought to you by Brian and me, us, our work. So support us,

00:19 support the show, keep us doing what we're doing by checking out our courses over at

00:23 Talk Python Training. We have a bunch, including a really nice pytest course written by Brian.

00:27 Check out the Test & Code podcast and the Patreon supporters. Brian's got a book as well on pytest.

00:32 You may have heard of this. So please, if you check those things out, share them with your friends,

00:36 share, recommend them to your co-workers. It really makes a difference. You can also connect with us

00:40 on Mastodon. You'll see that over on the show notes for every episode. And finally, you can join us

00:47 over at pythonbytes.fm/live if you want to be part of the live recording, usually, usually

00:53 Tuesdays at 11 a.m. Pacific time. But not today. No, Brian, we're starting nice and early because,

00:58 well, it's vacation time. And well, Plumbum, I think we should just get right into it.

01:04 Sure. Plumbum, let's do it.

01:09 It's a new saying. It's an expression. Plumbum, let's just do it.

01:13 Let's just do it. Yeah, I have no idea where this comes from. But the, well, I do know where it comes

01:19 from. It was last week. Last week, we talked about shells and Henry Schreiner said, hey, you should

01:28 check out Plumbum. It's kind of like what you're talking about, but also neat. So I did.

01:33 We were talking about sh.

01:35 Oh, right.

01:35 We were talking about sh.

01:37 Don't tell anyone.

01:38 So Plumbum, it's a little easier to search for, actually, than sh. So what is it? It's a Python

01:46 library. And it's got, it's shell combinators. It's for interacting with your environment. And

01:53 there we go. Henry Schreiner, one of the maintainers. So it's a tool that you can install so that you can

02:01 interact with your, your operating system and file system and stuff like that and all sorts of other

02:07 things. And it's got a little bit, a little bit different style than shh. But it, so I was taking a look at,

02:15 it's kind of like a local command for one. The basics are like, from plumbum import local, and then you

02:22 can run commands as if you were just running a shell, but you do this within your Python code. And there's also

02:29 some convenience ones like sh has, like ls and grep and things like that. But, but you, it generally

02:38 looks like there's more stuff around how you operate with a shell normally, things like piping. So

02:46 you can, you know, you can pipe one like ls to grep to word count or something like that to count files. You

02:53 can, I mean, there's other ways to do it within Python, but if you're used to doing it in, in the shell,

02:58 just wrapping, wrapping the same work in a Python script, why not? Things like re yeah, redirection work,

03:04 manipulating your working directory, just all sorts of fun stuff to do with your shell, but through Python,

03:11 you know, overriding the, you know, the pipe operator in Python so it sort of actually, in

03:18 the language, works the same as in the shell, is a little bit like pathlib doing the divide aspect,

03:24 right? Like we're going to grab some operator and make it that it probably was never really imagined to

03:29 be used for, but we're going to make it use it to, so it looks like what you would actually, you know,

03:33 the abstraction you're representing, which is pretty interesting.
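
A minimal sketch of the kind of Plumbum usage being described here and in the README example they get into next, assuming Plumbum is installed (pip install plumbum) and ls, grep and wc are on your PATH:

```python
from plumbum import local
from plumbum.cmd import ls, grep, wc  # each import wraps the real command found on PATH

# Run a command directly from Python, much like typing it in a shell.
print(local["echo"]("hello from plumbum"))

# The | operator is overloaded to build a pipeline; nothing runs yet.
chain = ls["-l"] | grep["py"] | wc["-l"]
print(chain)    # printing shows the whole pipeline, which is handy for debugging

# Calling the chain like a function actually executes it.
# Note: grep exits non-zero when nothing matches, which Plumbum raises as an error.
print(chain())

# Manipulate the working directory for a block of commands.
with local.cwd("/tmp"):
    print(ls())
```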

03:35 Yeah. And they, like this example, they have an example in the, the README of piping ls to grep to

03:43 word count. And they, they like define that as a chain and if, and it didn't even, it doesn't even run it. I

03:50 I don't think, it just defines this new sequence. So you, so you can chain together,

03:55 script commands and if you print it, so it has a, probably a, a __str__ or a __repr__ implementation

04:05 that shows you exactly what the, the, all the pipe and the chaining was. So that's kind of a neat thing

04:09 for debugging. And then when you actually run it, then it, you call that thing like a function and it

04:15 runs it. That's pretty neat. Yeah, it is. You can even do them in line, just put parentheses

04:19 around them and kind of execute at the end. Yeah. Pretty interesting. Yeah. Anyway, just a fun,

04:25 little quick shout out to Plumbum. Yeah. If you thought SH was cool last time, you might also check

04:30 this out, right? They kind of play in similar spaces. Yeah. Just one of the things I like about

04:34 Python and the Python community is, this variety of different, different libraries that might

04:40 solve the same space, but, have a different flavor. you know, some people like chocolate,

04:45 some people like vanilla. Well, I'm a big fan of caramel. So how about we talk about faster CPython?

04:51 Okay. I'm not sure.

04:55 So the faster CPython is, they're really starting to show some results, right? Python 3.11 was 40% faster,

05:04 I believe is, you know, roughly speaking, working with averages and all those things.

05:09 And we've got 3.12 coming with more optimizations. And ultimately the faster CPython plan was,

05:17 you know, put together and laid out by Mark Shannon. And the idea was if we could make, you know, improvements

05:24 like 40% faster, but over and over again, because of, you know, compounding sort of numbers there,

05:32 we'll end up with a really fast CPython, a faster one, you might say in five releases,

05:37 five times faster in five releases. And so, you know, that started really with 3.10 and then 3.11,

05:43 3.12, not the one that's coming, but the one that's coming in a year and a few months, 3.13,

05:48 they're laying out their work for that. And it's looking pretty ambitious. So in 3.12,

05:54 they're coming up with ways to optimize blocks of code. So in 3.11, stepping a little bit back,

06:00 we've got the adaptive specializing interpreter or specializing adaptive interpreter. I don't have

06:06 it pulled up in front of me, which order those words go in, but that will allow CPython to replace the

06:12 byte codes with more specific ones. So if it sees that you're doing a float plus a float operation,

06:20 instead of just doing, you know, an abstract plus, is that, is that a list plus a string?

06:26 Is that an integer and a float? Is that actually a float and a float? And if it's a float and a float,

06:31 then we can specialize that to do more specific, more efficient types of math and that kind of stuff.
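
A small way to watch the specializing adaptive interpreter do this, assuming CPython 3.11 or newer (the adaptive flag on dis.dis was added in 3.11):

```python
import dis

def add(a, b):
    return a + b

# Warm the function up with floats so the interpreter can specialize
# the generic BINARY_OP into a float-specific instruction.
for _ in range(1000):
    add(1.5, 2.5)

# adaptive=True shows the quickened bytecode; on 3.11+ you should see
# something like BINARY_OP_ADD_FLOAT in place of the plain BINARY_OP.
dis.dis(add, adaptive=True)
```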

06:37 Right. 3.12 is supposed to have what they're calling the tier one optimizer. And so, which

06:44 optimizes little blocks of code, but they're pretty small. And so one of the big things coming here in

06:49 3.13 is a tier two optimizer. So bigger blocks of code, something they're calling super blocks,

06:59 which I'll talk about in just a second. The other one that sounds really amazing is enabling sub

07:04 interpreters from Python code. So we know about PEP 554. This has been quite the journey and massive

07:12 amount of work done by Eric Snow. And the idea is if we have a GIL, then we have serious limits on

07:19 concurrency, right? From a computational perspective, not from an IO one potentially. And you know,

07:24 I'm sitting here on my M2 Pro with 10 cores and no matter how much multi-threaded Python I write,

07:31 if it's all computational, all running Python bytecode, I get, you know, one 10th of the capability of this

07:37 machine, right? Because of the GIL. So the idea is, well, what if we could have each thread have its

07:43 own GIL? So there's still, sure, a limit to how much work that can be done in that particular thread

07:50 concurrently, but it's one thread dedicated to one core and the other core gets its own other

07:54 sub interpreter, right? That doesn't share objects in the same way, but they can like pass them around

07:59 through certain mechanisms. Anyway. So this thing has, has been a journey, like I said, created 2017.

08:06 And it has like all this history, up until now. And, the status still says draft. And now the

08:15 Python version, I think the PEP is approved, but, and work has been done, but it's still in like pretty

08:20 early stages. So that's a pretty big deal is to add that that's supposed to show up in,

08:25 3.13, and in Python code. And this is a big deal. I think that in 3.12, the work has been

08:36 done so that it's internally possible. It's internally done, if I remember correctly, but there's no way to

08:42 use it from Python, right? Like it's, if you're a creator of interpreters, basically you can use it.

08:48 So now the idea is like, let's make this possible for you to do things like start a thread and give

08:53 it its own sub interpreter, you know, copy its objects over, let it create its own and really do

08:59 computational parallelism, I'm guessing interaction with async and await and those kinds of things.

09:04 And also, more, improved memory management. Let's see what else.

09:07 Well, so I guess along, along with that, we're going to have to have some tutorials or something on how

09:12 to, how to, how do they, the two sub interpreters share information.

09:16 Yeah, exactly. Yeah, we will. We will. I'm, what I would love to see is just, you know, on the thread

09:22 object, give the thread object, you know, an isolated sub interpreter or new

09:28 sub interpreter equals true. And off it goes, that would be excellent. And then maybe pickles the object.
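
There is no blessed Python-level API for that yet, and the thread flag above is just speculation. If you want to poke at it today, CPython 3.12 exposes sub-interpreters through an internal, unstable module, roughly like this sketch (names may change or disappear before the real PEP 554-style API ships):

```python
# CPython 3.12, internal and unstable; not the eventual public interface.
import _xxsubinterpreters as interpreters

interp = interpreters.create()  # a fresh interpreter with its own state (and, per PEP 684, its own GIL)
interpreters.run_string(interp, "print('hello from a sub-interpreter')")
interpreters.destroy(interp)
```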

09:33 I don't know. We, we can see how, how they come up with that, but this is, this is good news. I think

09:38 it's the kind of thing that's not that important necessarily for a lot of people, but for those who it is,

09:44 it's like, you know, really, we want this to go a lot faster. What can we do here? Right?

09:48 Yeah. Yeah. That sounds complicated. Does it make it go faster? Yay. Then do it.

09:53 Well, and you know, compared to a lot of the other alternatives that we've had for,

09:59 I have 10 cores. Why can I only use one of them on my Python code without multiprocessing?

10:05 This is one of those, that doesn't affect single threaded performance. It's one of those

10:11 things that there's not a, a cost to people who don't use it. Right. Whereas a lot of the other

10:16 types of options are like, well, sure, your code gets 5% slower, but you could make it a lot faster

10:22 if you did a bunch more work. Yeah. Yeah. And that's been a hard sell and also a hard line that,

10:27 you know, put in the sand saying like, look, we can't make regular, non-concurrent Python slower for the sake of, you know, this more rare, but sometimes

10:37 specialized, right, concurrent stuff. So they've done a bunch of foundational work.

10:41 And then the three main things are the tier two optimizer, sub interpreters for Python and memory

10:46 management. So the tier two optimizer, there's a lot of stuff that you kind of got to look around. So

10:51 check out the detailed plan. They have this thing called copy and patch. So you can generate like

10:59 roughly these things called super blocks, and then you can implement their planning to implement basic

11:04 super block management. And Brian, you may be thinking, what are the words you're saying, Michael?

11:09 Duplo. They're not those little Legos. No, they're big, big Duplos. But it's kind of true.

11:15 So they were optimizing smaller pieces, like little tiny bits, but you can only have so much of an effect

11:20 if you're working on, small blocks of code that you're optimizing. So a super block is a linear piece

11:26 of code with one entry and multiple exits. It differs from a basic block in that it,

11:33 it may duplicate some code. So they just talk about, considering different types of things you might

11:38 optimize. So I'll link over to it, but there's a big, long discussion, lots of, lots of graphics.

11:45 People could go check out. So yeah, they're going to add support to de-opt, as in support for de-optimization

11:52 of super blocks, enhance the code creation, implement the specializer and use this

11:59 algorithm called copy and patch. So implement the copy and patch machine code generator. You don't

12:09 normally hear about a machine code generator, do you? But, you know, that sounds like a JIT compiler

12:09 or something along those lines. Yeah. Anyway, so that's the goal and reduce the time spent in the

12:15 interpreter by 50%. If they make that happen, that sounds all right to me just for this one feature.

12:19 That's pretty neat. Yeah. Wow. Pretty good. And I talked a whole bunch about the sub-interpreters. Final thing:

12:25 The profiling data shows that a large amount of time is actually spent in memory management and the cycle GC.

12:32 All right. And, well, when Python, I guess if you do, you know, 40% a bunch of times, it was maybe half this fast

12:40 before. Like, because remember, we're like a few years back working on this plan, in 3.9, 3.8,

12:45 maybe it didn't matter as much, because as a percentage of where CPython is spending its time,

12:52 It was not that much time of memory management, but as all this other stuff gets faster and faster,

12:57 if they don't do stuff to make the memory management faster, it's going to be like, well,

13:00 half the time is memory manager. What are we doing? So they say, as we get the, the VM faster,

13:05 this is only going to be a larger percent of our time. So what can we do? So do fewer allocations to

13:10 improve data structures, for example, partial evaluation to reduce the number of temporary

13:15 objects, which is part of the other section of their work and spend less time doing cycle GCs.

13:21 This could be as simple as doing fewer collections or as complex as implementing a new incremental

13:26 cycle finder either way. And it sounds pretty cool. So that's the plan for a year and a couple of months.

13:32 Pretty exciting. I'm really happy that these people are working on it.

13:37 I am too. It's a team of, I think last time I counted five or six people, there's a big group of them around

13:43 Guido at Microsoft, but then also outside. Yeah. So for example, this was written by Mark Shannon,

13:49 who's there, but also Michael Droettboom, who was at Mozilla, but I'm not, I don't remember where he is

13:54 right now. Cool last name. Yes, indeed. All right. Over to you, Brian.

14:00 Brian. Well, that was pretty heavy. I'm going to do kind of a light topic, which is we need more people to write

14:07 blogs about Python. It would help us out a lot really. And one of the ways you could do that is

14:13 to just head over and check out one of the recent articles from Julia Evans about some blogging myths.

14:20 And I guess this is a pretty lighthearted topic, but, but also serious, but we have some more fun,

14:28 fun stuff in the extras. So don't worry about it.

14:33 Anyway, so there's a few blogging myths and I just wanted to highlight these because I think it's good

14:38 to remember that, you know, these are just wrong. So I'll just run through them quickly. You don't need

14:44 to be original. You can write content that other people have covered before. That's fine. You don't

14:49 need to be an expert. Posts don't need to be a hundred percent correct. Writing boring posts is bad. So these are,

14:58 Oh, wait, the myths are the myth is you need to be original. That's not true. Myth. You need to be an

15:05 expert. Posts need to be a hundred percent correct. Also myth. All these are myths. Writing boring posts

15:10 is bad. Boring posts are fine. If they're informational, you need to explain every concept. Actually, that will

15:18 just kill your audience. If you explain every little detail page views matter. More material is always better.

15:26 Everyone should blog. These are all myths, according to Julia. And then she goes through a lot of the in

15:33 detail into each one of them. And I kind of want to like hover on the first two a little bit of you

15:39 need to be original and you need to be an expert. I think it's we when we're learning, we're learning

15:47 about the software, a new library or new technique or something. Often I'm like, I'm reading stack

15:53 overflow. I'm reading blog posts. I'm reading maybe books, who knows, reading a lot of stuff on it. And

15:59 you, you'll get all that stuff in, in your own perspective of how it really is. And then you can

16:06 sort of like, like the cheating book report you did in junior high where you just like rewrote some of

16:12 the encyclopedia, but changed it. Don't do that. But it doesn't, you don't have to come up with a

16:18 completely new technique or something. You can just say, oh, all the stuff I learned, I'm going to put

16:24 it together and like write like my, my workflow now or the process or just a little tiny bit. It doesn't

16:30 have to be long. It can be a short thing of like, oh, I finally got this. It's way easier than I

16:36 thought it was. And writing little, little aha moments are great times to just write that down

16:41 as a little blog post. The other thing of you don't need to be an expert is a lot of us got started

16:47 blogging while we were learning stuff as a way to write that down. So I'm, you're definitely not an

16:53 expert as you're learning stuff. So go ahead and write about it then. And it's a great way to, and that

16:58 ties into, it doesn't need to be a hundred percent correct. As you get more traction in your blog,

17:03 people will like, let you know if you made a mistake and in the Python community, usually it's nice.

17:08 they'll, they'll like mention, Hey, this isn't quite right anymore. and I kind of love that about

17:14 our community. So, I want to go back to the original part, which is you don't even have to be

17:19 original from your own perspective. If you wrote about something like last year, go ahead and write

17:24 about it again. If you think it's important and it needs it and you sort of have a different way

17:29 to explain it. You can write another blog post about a similar topic. So yeah, I'm, I totally agree. I

17:34 also want to add a couple of things. Okay. I would like to add that your posts, the myth, your posts have

17:41 to be long or like an article, or you need to spend a lot of time on it. Right. You know, the biggest example

17:47 of this in terms of like successful in the face of just really short stuff is, John Gruber's

17:54 Daring Fireball, right? Like this is an incredibly popular site and the entire articles are, it starts

18:01 out with him quoting often someone else. And that's like two paragraphs, which is half the article and

18:05 say, here's my thoughts on this. And, or here's something interesting. Let's, let's highlight it or

18:09 something. Right. And my last blog post was four paragraphs and a picture, maybe five if you count the

18:15 bonus. Right. Not too many people paid attention to mine because the title is, you can

18:20 ignore this post. So I, I don't know why I'm having a hard time getting traction with it, but

18:24 I actually, I like that you highlighted the, the good John Gruber style. There's a lot of

18:31 different styles of blog posts. And one of them is reacting to something instead of, because a lot of

18:37 people have actually turned, you can either comment on somebody's blog or talk about it on Reddit or

18:42 something, or you can react to it on your own blog. and link to it. So link to it on Reddit or

18:48 something. Yeah. Yeah. Not anymore. Cause Reddit went private out of protest, but you know, somewhere

18:52 else if you find another place, or maybe post on Twitter. No, don't do that. Post it on Mastodon.

18:57 It's getting more. Yeah.

18:58 Funny. I had another one as well, but, oh yeah. So this is not a myth, but just another thing, you know,

19:07 another, source of inspiration is if you come across something that it really surprised you,

19:11 like if you're learning, right. It kind of to add on, like, I'm not an expert is if you come across

19:15 something like, wow, I thought really, it broke my expectations. I thought this was going to work

19:19 this way. And it, gosh, it's weird here. People, if it seems like a lot of people think it works this

19:24 way, but it works in some completely other way, you know, that could be a cool little write-up.

19:28 also, you know, people might be searching like, why does Python do this? You know, they're,

19:33 they might find your quote, boring article and go, that was really helpful. Right. So yeah.

19:37 I, I still remember way back, when I started writing about, pytest and unit tests and stuff,

19:43 there was a, a feature, a behavior of teardown functionality that, behaved different.

19:51 It was like, sort of the same in nose and unit test and then different in pytest. And I,

19:58 I wrote a post that said, maybe unit test is broken because I kind of like this pytest behavior.

20:03 And I got a reaction from some of the pytest contributors that said, oh no, we just broke,

20:09 we just forgot, didn't test that part. So that's wrong. We we'll fix it.

20:13 Yeah.

20:15 What a, what a meta problem that, pytest didn't test a thing.

20:20 Yeah. Well, I mean, it was, it was really corner case, but I'm kind of a fastidious person when I'm

20:27 looking at how things work. but the other thing I want to say is a lot of, a lot of

20:38 things written by, you know, other people are old enough that they don't work anymore. If you're,

20:38 if you're following along with like a little tutorial and it doesn't work anymore because,

20:42 you know, the language changed or the library they're using is not supported anymore or something.

20:47 That's a great opportunity to go, well, I'll just kind of write it in my own language, but

20:52 or in my own style, but also make it current and make it work this time. So that's good.

21:00 Indeed.

21:00 Well, anyway. Okay. Well, let's, let's go back to something more meaty.

21:05 Yeah. Something, like AI. So I want to tell you about Jupyter AI, Brian, Jupyter AI is a pretty

21:13 interesting, pretty interesting project here. It's a generative AI extension for JupyterLab. I

21:21 believe it also works in Jupyter Notebook and the plain IPython prompt as well. And so here's the idea.

21:28 There's, there's a couple of things that you can do. So Jupyter has this thing called a magic,

21:33 right? Where you put, two percents in front of a command and it, it applies it to an extension to

21:40 Jupyter and not, not trying to run Python code, but it says, let me find this thing. In this case,

21:44 you say percent, percent AI and then you type stuff. So that stuff you type afterwards,

21:49 then, you know, turns on a certain behavior for that particular cell. And so this AI magic,

21:56 literally it's percent, percent AI, and then they call it a magic or it is a magic.

22:00 So AMI, AI magic turns Jupyter notebooks into reproducible. It's the interesting aspect,

22:07 generative AI. So think if you could have ChatGPT or open AI type stuff clicked right into your

22:15 notebook. So instead of going out to one of these AI chat systems and say, I'm trying to do this,

22:20 tell me how to do this. Or could you explain that data? You just say, Hey, that cell above,

22:25 what happened here? Or I'm trying, I have this data frame. Do you see it above? Okay, good.

22:30 How do I visualize that in a pie chart or, you know, one of those donut graphs using Plotly,

22:37 and it can just write it for you as the next cell. Interesting. Okay. Interesting. Right. Yeah.

22:43 Yeah. It runs anywhere the Python kernel works. So JupyterLab, Jupyter notebooks, Google Colab,

22:49 VS Code, probably PyCharm, although they don't call it out. And it has a native UI chat. So in

22:56 JupyterLab, not Jupyter, there's like a left pane that has stuff. It has like your files and it has

23:03 other things that you can do. And it will plug in another window on the left there. That is like a chat

23:08 GPT. So that's pretty cool. Another really interesting difference is this thing supports

23:14 it's model or platform agnostic. So if you like AI21 or Anthropic or OpenAI or SageMaker or Hugging Face,

23:25 et cetera, et cetera, you just say, please use this model. And they have these integrations across these

23:30 different things. So you, for example, you could be going along saying, I'm using OpenAI, I'm using OpenAI.

23:35 That's a terrible answer. Let's see, let's ask Anthropic the same thing. And then right there

23:41 below it, it'll, you could use these different models and different AI platforms and go, actually,

23:46 it did really good on this one. I'm just going to keep using that one now for this, this part of my data.

23:50 Okay.

23:51 Okay. So how do you install it? You pip install jupyter_ai and that's it. It's good to go. And then you plug in,

24:00 then you plug in, like your various API keys or whatever you need to as environment variables.
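
A rough sketch of that setup in a notebook, following the jupyter-ai docs; the OpenAI key and the chatgpt model alias are just the documented examples, so treat the details as approximate:

```python
# Cell 1: after pip install jupyter_ai, load the magics and provide an API key.
%load_ext jupyter_ai_magics
%env OPENAI_API_KEY=sk-...

# Cell 2: the %%ai cell magic sends the prompt to the chosen model and renders
# the reply (code, markdown, HTML, even SVG or LaTeX) as the cell's output.
%%ai chatgpt
Please generate the Python code to plot y = x**2 with matplotlib,
and briefly explain what it does.
```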

24:08 They give you an example here. So you would say percent percent AI space ChatGPT. And then you type

24:13 something like, please generate the Python code to solve the 2d Laplace equation in the Cartesian coordinates,

24:19 solve the equation on the square, such and such with vanishing boundary conditions, et cetera. Plot

24:24 the solution with matplotlib. Also, please provide an explanation. And then look at this, it goes,

24:29 and down it goes. And you know, you can see off it, off it shows you how to implement it. And that's

24:33 only part of that's shown. You can also have it do graphics. Anything that it, those models will

24:38 generate is HTML just show up. So you could say, create a square using SVG with a black border and

24:43 white fill. And then what shows up is not SVG commands or like little definition. You just

24:49 get a square because it put it in HTML as a response. And so that showed up. You can even do LaTeX, like

24:55 -f math, generate a 2D heat equation. And you get this, partial differential equation

25:02 thing in, in LaTeX. You can even ask it to write a poem, whatever you do. But that's one of the,

25:10 go back to the poem one. Yeah. It says, write a poem in the style of variable names. So you can have

25:16 commands with variables, insert variable stuff. So that's interesting.

25:21 So you can also Jupyter has inputs and outputs, like along the left side, there's like a nine and

25:29 a 10. And those are like the order they were executed. You can say, using input of nine,

25:35 which might be the previous cell or something, or output of nine, go do, you know, take that and go

25:41 do other things, right? Like kind of, that's how I opened this conversation.
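
That In/Out interpolation looks roughly like this, as I understand the jupyter-ai magics; the cell number 9 is made up for the example and the exact syntax may differ:

```python
%%ai chatgpt
Please explain what the code in {In[9]} does,
and why it produced this output: {Out[9]}
```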

25:44 One of the really interesting examples that David Qiu pointed out, there's a nice talk that

25:49 he gave, and I link to in the show notes, at PyData, like a week ago, was he had written some code,

25:57 two examples. One, he'd written some code, a bunch of calculations and pandas, and then he created a

26:02 plot, but the plot wasn't showing because he forgot to call plot.show. And, he asks one of the AIs,

26:09 it depends, you know, you can ask a bunch depending on which model you tell it to target.

26:14 He said, why isn't, Hey, in that previous cell, why isn't my plot showing? It said,

26:18 because you forgot to pull, call show. So here's an example of your code above, but that

26:24 works and shows the plot. That's pretty cool for help, right?

26:27 Yeah.

26:27 Geez.

26:28 Instead of going to stack overflow or even trying to copy that into one of these AIs, you just go,

26:33 Hey, that thing I just did, it didn't do what I expected. Why? Here's your answer. Not in a general

26:38 sense, but like literally grabbing your data and your code. Two final things that are interesting

26:43 here. The other, maybe three, the other one is he had some code that was crashing and I can't

26:49 remember what it was doing, but it was throwing some kind of exception and it wasn't working out. And so he

26:55 said, why is this code crashing? And it explained what the problem was with the code and how to fix

27:00 it. Right. So super, super interesting here. I'll check that out. Yeah. We have that link.

27:08 Yeah. Yeah. Yeah. The talk is really, really interesting. I'm trying to think there's one

27:13 other thing that that was in that talk. It's like a 40 minute talk. So I don't remember all. Anyway,

27:18 there's, there's more to it that goes on. also beyond this, it's, it looks pretty interesting. If you

27:24 live in Jupyter and you think that these, these AI models have something to offer you, then this is

27:30 definitely worth checking out. Alvaro says, you know, as long as it doesn't hallucinate a non-existing

27:36 package. Yeah. That's, I mean, that is the thing. What's kind of cool about this is

27:42 like, it puts it right into code, right? You just, you could run it and see if it's pretty

27:47 cool. If it does indeed work and do what it says. So anyway, that's, that's our last. Yeah. Go ahead.

27:53 Oh, before we move away too much, I was listening to an NPR show,

27:59 talking about AI, and somebody did research, I think that was for the Times, the New York Times,

28:05 a research project and found out that like there were, there were some, sometimes they would ask

28:10 like, when, what's the first instance of this phrase showing up in the newspaper or something.

28:15 And it would make up stuff. and even, and they'd say, well, you know, can you, what are those,

28:22 you know, show those examples. And it would show snippets of fake articles that actually never were there.

28:29 It did that for, that's crazy. It did that for, legal proceedings as well. And a lawyer

28:35 cited those cases and got sanctioned or whatever lawyers get when they do it wrong.

28:40 Those are wrong. Yeah. Don't, don't do that. But I also, the final thing that was interesting that I

28:47 now remember that, you made me pause the thing, Brian, is you can point it at a directory of files,

28:54 like, HTML files, markdown files, CSV file, just like a bunch of files that happen to be part of

29:01 your project and you wish it had knowledge of. So you can say slash learn and pointed at a subdirectory

29:09 of your project. It will go learn that stuff in those, in those documents. And then you can say,

29:16 okay, now I have questions, right? Like, you know, if it learned some statistics about a CSV,

29:22 the example that David gave was he had copied all the documentation for Jupyter AI over into there,

29:28 and it told it to go learn about itself. And then it did. And you could talk to it about it

29:32 based on the documentation. Oh, that's so if you got a whole bunch of research papers, for example,

29:38 like I learned those. Now I need to ask you questions about this astronomy study. Okay.

29:43 who, who, who studied this and what did, who found what, you know, whatever, right? Like these

29:47 kinds of questions are pretty amazing. Yeah. And actually some of this stuff would be super powerful,

29:51 especially if you could make it, like, keep all the information local, like, like,

29:57 like, you know, internal company stuff. They don't want to like upload all of their source code into

30:02 the cloud just so that they can ask it questions about it. Yeah. Yeah, exactly. The other one,

30:08 was to generate starter projects and code based on ideas. So you can say, generate me a Jupyter

30:14 notebook that explains how to use matplotlib. Okay. Okay. And it'll come up with a notebook and it'll do,

30:22 so here's a bunch of different examples and here's how you might apply a theme and it'll create things.

30:27 And one of the things that they actually have to do is they use LangChain and AI agents to, in parallel,

30:33 go break that into smaller things that are actually going to be able to handle and send them off to all

30:38 be done separately and then compose them. So it'll say, Oh, well, what's that problem? Instead of saying,

30:42 what's the notebook, it'll say, give me an outline of how somebody might learn this. And then for each

30:47 each step in the outline, that's a section in the document that it'll go have the AIs generate

30:53 those sections. And it's like a smaller problem that seemed to get better results. Anyway, this is a,

30:57 this is a way bigger project than just like, maybe I can pipe some information to ChatGPT. There's like,

31:04 there's a lot of crazy stuff going on here that people who live in Jupyter might want to check out.

31:10 It is pretty neat. I, I was not around the Jupyter stuff, but I was thinking, that a lot

31:17 of software work is the maintenance, not the writing it in the first place. So, what we've done is

31:23 like taking the fun part of making something new and giving it to a computer and we'll all be just

31:28 like software maintainers at the, afterwards. Exactly. Let's be plumbers.

31:36 Sewer overflow again, call the plumber. No, I don't want to go in there.

31:40 And also I'm just imagining like a whole bunch of new web apps showing up that are generated by like

31:45 ideas and they kind of work, but nobody knows how to fix them. but yeah, sure. I mean,

31:51 I think that you're right and that that's going to be what's going to happen a lot, but you technically

31:55 could come to an existing notebook and add a, a cell below it and go, I don't really understand.

32:01 Could you try to explain what is happening in the line in the cell above? Yeah. And it, you know,

32:06 it also has the possibility for making legacy code better. And if that's the reality, we'll see.

32:11 Yeah. Hopefully it's a good thing. So cool.

32:13 All right. Well, those are all of our items. That's the last one I brought. Any extras?

32:17 I got a couple extras. Will McGugan and gang at, Textualize, have started a YouTube

32:26 channel. and so far there's, and some of these, I think it's a neat idea. Some of the

32:32 tutorials that they already have, they're just walking through some of the tutorials, in video form at

32:37 this point. and there's three up so far of, stopwatch intro and, how to get set up and use

32:43 Textualize and yeah, well, I like what they're doing over there and it's kind of fun. Another

32:49 fun thing from, I like it too, because it's, you know, Textualize's Rich is a visual thing,

32:55 but Textual is like a higher level UI framework where you've got docking sections and all kinds of

33:00 really interesting UI things. And so sometimes learning that in an interactive, animated

33:07 video form is really maybe better than reading the docs. Yep. And then, something else that they've

33:12 done. So maybe, watch that if you want to try to build your own, command line,

33:18 text user interface, a TUI as it were, or you could take your

33:25 command line interface and just, use Trogon, or Trogon, I don't know how you say that.

33:32 T-R-O-G-O-N. It's by, Textualize also, it's a new project. And the idea is you just,

33:40 I think you use it to wrap your own, your own command line interface, tool, and it makes a graphic or

33:47 text-based user interface out of it. There's a little video showing an example of, Trogon applied to

33:54 sqlite-utils, which, well, sqlite-utils has a bunch of great stuff. And now you can

34:00 interact with it with a GUI instead. And that's kind of fun. It works around Click,

34:08 but apparently they will support other libraries and languages in the future. So interesting.
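
A minimal sketch of wrapping a Click app with Trogon, following the pattern in the project's README; the @tui() decorator adds a tui subcommand that opens the Textual interface:

```python
import click
from trogon import tui  # pip install trogon

@tui()                  # adds a "tui" command that launches the TUI for this CLI
@click.group()
def cli():
    """A tiny example CLI that Trogon can turn into a browsable interface."""

@cli.command()
@click.option("--name", default="world", help="Who to greet.")
def hello(name):
    """Print a greeting."""
    click.echo(f"Hello, {name}!")

if __name__ == "__main__":
    cli()  # run "python app.py tui" to browse the commands and options
```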

34:13 Okay. So yeah, it's like you can pop up the documentation for a parameter while you're working

34:19 on it and a little modal window or something looks, looks interesting.

34:22 Yeah. Well, I'm, I was thinking along the lines of even, like in a internal stuff, it's,

34:27 fairly common that you're going to write like a make script or a build script or some different

34:33 utilitarian thing for your, your work group. if you use it all the time, command line is fine.

34:39 But if you only use it like every, you know, once a month or every couple of weeks or something,

34:43 it might be that you forget about some of the features and yeah, there's help, but having it as a

34:48 GUI, if you could easily write a GUI for it, that's kind of fun. So why not? the other

34:53 thing I wanted to bring up, a completely different topic, is the June 2023 release of, Visual

35:00 Studio Code, came out recently. And I hadn't taken a look at it. I'm still, I've installed it,

35:07 but I haven't played with it yet. And the reason why I want to play with it is, they've revamped the,

35:13 test discovery and execution. So, apparently you can, there were some glitches with finding

35:19 tests sometimes. so I'll, I'm looking forward to trying this out. You have to turn it on though.

35:24 You have to, there's, so these, this new test discovery stuff, you have to, go, you have to like

35:31 set an opt-in flag. And I just put the little snippet in our show notes so you can, just copy

35:38 that into your settings file to try it out. So, yeah, I guess that's all I got. Do you have

35:44 any extras? I do. I do. I have a report, a report from the field, Brian. So I had my 16 inch

35:51 MacBook Pro M1 Max as my laptop. And I decided I just, it's, it's not really necessarily the thing

36:01 for me. So I traded it in and got a new MacBook Air 15 inch, one of those big, really light ones. And

36:08 just want to sort of, compare the two if people are considering this. You know, I have my

36:12 mini that we're talking on now with my big screen and all that, which is an M2 Pro, is super fast.

36:19 And I found like that thing was way faster than my, my, much heavier, more expensive laptop.

36:25 Like, well, why am I dragging this thing around? If it's, if it's not really faster, if it's heavy,

36:30 has all these, you know, all these cores and stuff that are just burning through the battery.

36:34 even though it says it lasts a long time, it's like four or five hours was a good day for

36:39 that thing. I'm like, you know what, I'm going to trade it in for, the, the new little bit bigger

36:44 air. And yeah, so far that thing is incredible. It's excellent for doing software development thing.

36:50 The only thing is the screen's not quite as nice, but for me that like, I don't live on my laptop,

36:55 right? I've got like a big dedicated screen. I'm normally at then I'm like out somewhere. So

37:00 small is better. And it lasts like twice as long and the battery. So, and I got the black one,

37:05 which is weird for an Apple device, but very cool. People say it's a fingerprint magnet and

37:10 absolutely, but it's also a super, super cool machine. So if people are thinking about it,

37:15 I'll give it a pretty, I'll give it like a 90% thumbs up. the screen's not quite as nice.

37:21 It's super clear, but it kind of is like washed out a little harder to see in light. But other than that,

37:25 it's excellent. So there's my report: traded in my expensive MacBook for an incredibly light,

37:31 thin and often faster one, right? When I'm doing stuff in Adobe Audition for audio or video work

37:38 or a lot of other places, like those things that I got to do, like noise reduction and other sorts of

37:43 stuff, it's all single threaded. And so it's, it's like 20% faster than my $3,500 MacBook Pro max thing.

37:50 Wow. And lighter and smaller, you know, all, all the good things.

37:53 But you're still using, your, your mini for some, for some of your workload.

37:59 I use my mini for almost all my work. Yeah. If I'm not out, then I usually, or sitting on the couch,

38:04 then it's all mini, mini, mini all the time. Okay. Yeah. It's a black on the outside also then.

38:10 Yeah. Yeah. It's, it's cool looking.

38:12 And you can throw a sticker on that to hide that it's Apple and people might think you just

38:17 have a Dell. They wouldn't know. That's right. Run Parallels. You can run, run Linux on it.

38:23 They're like, okay, Linux, got it. What is that thing? It's weird. Yeah. You could disguise it

38:28 pretty easy if you want, or just make your sticker stand out better. You never know. All right.

38:31 So people are thinking about that and pretty, pretty cool device. but Brian, if somebody

38:36 were to send you a message and like tricky, like, Hey, you won a MacBook, you want to get your MacBook

38:41 for free. You don't, you don't want that. Right. No. So, you know, companies they'll do tests.

38:47 They'll like test their, their people just to make sure like, Hey, we told you not to click on

38:52 weird looking links, but let's send out a test and see if they'll click on a link. And there's this guy,

38:58 there's this picture, this guy getting congratulated by the CEO.

39:03 The IT group congratulated me for not failing the phishing test. And the guy's like, deer in the headlights,

39:10 like, oh no. Me, who doesn't open emails, is what the picture says.

39:15 So you just ignore all your work email. You know, you won't get caught in the phishing test. How about

39:21 that? Yeah. Those are, you, you've been out of the corporate world for a while. That, that happens.

39:28 We've got, I've, I've had some phishing tests come through this. Yeah. Yeah. Well, like the,

39:34 the email like looks like it came from. So that's one of the problems is it looks like it's legit. And,

39:41 and it has like, you know, the, the right third party company that we're using for some,

39:46 some service or something. And, and you're like, wait, what is this? and, and then the,

39:52 the link doesn't match up with the, whatever it says it's going to and things like that. But,

39:57 it actually is harder now. I think that to, to, to verify what's real and what's not when more

40:04 companies do use third party services for lots of stuff. So yeah. Yeah. Yeah. It's,

40:11 you know, it's a joke, but it's, it is serious. I worked for a company where somebody got a, got a

40:15 message. I think it might've been through a hacked email account or, or it was spoofed in a

40:22 way that it, it looked like it came from like a higher up to say, there's something really big

40:27 emergency. This vendor is super upset. We didn't pay them. They're going to sue us if we don't,

40:32 you know, could you quick transfer this money over to this bank account? And because it, it came from,

40:39 you know, somebody who looked like they should be asking that, right. It, it almost happened. So

40:44 not good. That's not good. Yeah. I get texts too, like the latest one was just this

40:50 weekend. I had a text or something that said, said, Hey, we need information about your shipping for,

40:55 Amazon shipment or something. And it's like copy and paste this link into your browser. And it's just

41:02 like bizarre link. And I'm like, no, it would be amazon.com something. there's no way it's going to be

41:09 Bob's Bob's burgers or whatever. yeah. Amazon. Yeah. Let's go to amazon.com.

41:16 Oh, anyway. Oh, well, well may, may everybody get through their day without clicking on phishing

41:24 emails. So that's right. Yeah. May you, may you pass the test or don't read the email. Just stop

41:29 reading email. Yeah. Think about how productive you'll be. Well, this was very productive, Brian.

41:35 Yes, it was. Yeah. Well, thanks for, for hanging out with me this morning. So it was fun.

41:40 Yeah, absolutely. Thanks for being here as always. And everyone, thank you for listening. It's been a lot

41:45 of fun. See you next time. Bye.
