Brought to you by Michael and Brian - take a Talk Python course or get Brian's pytest book


Transcript #342: Don't Believe Those Old Blogging Myths

Recorded on Monday, Jun 26, 2023.

00:00 - Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds.

00:05 This is episode 342, recorded June 25th, 2023.

00:10 I'm Michael Kennedy.

00:12 - And I am Brian Okken.

00:13 - And this episode is brought to you by Brian and me.

00:17 Us, our work.

00:18 So support us, support the show, keep us doing what we're doing by checking out our courses over at Talk Python Training.

00:24 We have a bunch, including a really nice pytest course written by Brian.

00:27 Check out the Test & Code podcast, become one of the Patreon supporters.

00:30 Brian's got a book as well on pytest.

00:32 You may have heard of this.

00:34 So please, if you check those things out, share them with your friends, recommend them to your coworkers.

00:37 It really makes a difference.

00:39 You can also connect with us on Mastodon.

00:41 You'll see that over on the show notes for every episode.

00:45 And finally, you can join us over at pythonbytes.fm/live if you want to be part of the live recording, usually Tuesdays at 11 a.m. Pacific time, but not today.

00:56 No, Brian, we're starting nice and early because, well, it's vacation time.

01:01 And, well, plumb bum, I think we should just get right into it.

01:04 - Sure, plumb bum, let's do it.

01:09 - It's a new saying, it's a new expression.

01:12 Plumb bum, let's just do it.

01:14 - Let's just do it.

01:15 Yeah, I have no idea where this comes from.

01:18 But the, well, I do know where it comes from.

01:20 It was last week.

01:21 Last week we talked about shells, and Henry Schreiner said, "Hey, you should check out Plumbum.

01:29 It's like what you're talking about, but also neat." I did.

01:33 >> We were talking about sh.

01:35 >> Oh, right. We were talking about sh.

01:37 >> Don't tell anyone.

01:39 >> Plumbum, it's a little easier to search for actually than sh.

01:44 What is it? It's a Python library and it's got shell combinators.

01:51 It's for interacting with your environment.

01:53 And there we go, Henry Schreiner, one of the maintainers.

01:56 So it's a tool that you can install so that you can interact with your operating system and file system and stuff like that and all sorts of other things.

02:07 And it's got a little bit different style. So I was taking a look at this, kind of like the local command, for one. The basics are like, from plumbum import local, and then you can run commands as if you were just running a shell, but you do this within your Python code.

02:28 And there's also some convenience ones like sh has, like ls and grep and things like that.

02:35 But it generally looks like there's more stuff around how you operate with a shell normally, things like piping.

02:45 So you can, you know, you can pipe one like LS to grep to word count or something like that to count files.

02:53 You can, I mean, there's other ways to do it within Python, but if you're used to doing it in the shell, just wrapping the same work in a Python script, why not?

03:02 Things like, yeah, redirection, manipulating your working directory.

03:07 Just all sorts of fun stuff to do with your shell, but through Python.

03:11 - Overriding the pipe operator in Python so it actually reads the same as in the shell, it's a little bit like pathlib overloading the divide operator, right?

03:24 Like, we're going to grab some operator it probably was never imagined to be used for, and use it so it looks like the abstraction you're representing, which is pretty interesting.
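As a toy illustration of that operator-borrowing trick (this Cmd class is made up for the example, not plumbum's actual implementation):

```python
from pathlib import Path

# pathlib reuses "/" for joining paths:
p = Path("src") / "pkg" / "module.py"

# A made-up pipeline class reusing "|" the same way a shell library can:
class Cmd:
    def __init__(self, *parts):
        self.parts = list(parts)

    def __or__(self, other):
        # "a | b" builds a bigger pipeline instead of doing bitwise OR
        return Cmd(*self.parts, *other.parts)

    def __str__(self):
        return " | ".join(self.parts)

chain = Cmd("ls") | Cmd("grep py") | Cmd("wc -l")
print(chain)  # ls | grep py | wc -l
```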

03:36 - Yeah, and they have an example in the readme of piping ls to grep to word count.

03:44 And they define that as a chain.

03:47 And it doesn't even run it, I don't think.

03:51 It just defines this new sequence so you can chain together script commands.

03:57 If you print it, it probably has a __str__ or a __repr__ implementation that shows you exactly what all the piping and chaining was.

04:09 That's a neat thing for debugging.

04:11 Then when you actually run it, then you call that thing like a function and it runs it. That's pretty neat.

04:16 Yeah, it is. You can even do them in line, just put parentheses around them and kind of execute at the end.

04:21 Yeah, pretty interesting.

04:22 Yeah, anyway, just a fun little quick shout out to Plumbum.

04:27 Yeah, if you thought sh was cool last time, you might also check this out, right? They kind of play in similar spaces.

04:33 Yeah, just one of the things I like about Python and the Python community is this variety of different libraries that might solve the same space, but have a different flavor.

04:44 You know, some people like chocolate, some people like vanilla.

04:46 Well, I'm a big fan of caramel.

04:48 So how about we talk about faster CPython?

04:51 [laughs]

04:53 Okay, I'm not--

04:55 [laughs]

04:56 Faster CPython is--

04:58 They're really starting to show some results, right?

05:01 Python 3.11 was 40% faster, I believe, is--

05:05 you know, roughly speaking, working with averages and all those things.

05:09 And we've got 3.12 coming with more optimizations.

05:14 And ultimately the faster CPython plan was put together and laid out by Mark Shannon.

05:21 And the idea was if we could make improvements like 40% faster, but over and over again, because of compounding sort of numbers there, we'll end up with a really fast CPython, a faster one you might say, in five releases, five times faster in five releases.

05:39 And so that started really with 3.10. We got 3.11, 3.12, and now 3.13, not the one that's coming next, but the one that's coming in a year and a few months.

05:48 They're laying out their work for that.

05:50 And it's looking pretty ambitious.

05:52 So in 3.12, they're coming up with ways to optimize blocks of code.

05:57 So in 3.11, stepping a little bit back, we've got the adaptive specializing interpreter or specializing adaptive interpreter.

06:05 I don't have it pulled up in front of me.

06:07 Which order do those words go in?

06:08 But that will allow CPython to replace the byte codes with more specific ones.

06:15 So if it sees that you're doing a float plus a float operation, instead of just doing an abstract plus, you know, is that a list plus a string?

06:26 Is that an integer and a float?

06:28 Is that actually a float and a float?

06:30 And if it's a float and a float, then we can specialize that to do more specific, more efficient types of math and that kind of stuff, right?

06:37 3.12 is supposed to have what they're calling the Tier 1 optimizer, which optimizes little blocks of code, but they're pretty small.

06:47 One of the big things coming here in 3.13 is a Tier 2 optimizer.

06:54 So bigger blocks of code, something they're calling super blocks, which I'll talk about in just a second.

07:01 The other one that sounds really amazing is enabling sub-interpreters from Python code.

07:07 So we know about PEP 554, this has been quite the journey and massive amount of work done by Eric Snow.

07:14 And the idea is, if we have a GIL, then we have serious limits on concurrency, right?

07:20 From a computational perspective, not from an I/O one, potentially.

07:23 And, you know, I'm sitting here on my M2 Pro with 10 cores, and no matter how much multi-threaded Python I write, if it's all computational, all running Python bytecode, I get one-tenth of the capability of this machine, because of the GIL. So the idea is, what if we could have each thread have its own GIL?

07:44 So there's still sure a limit to how much work that can be done in that particular thread concurrently, but it's one thread dedicated to one core, and the other core gets its own other sub-interpreter that doesn't share objects in the same way, but they can pass them around through certain mechanism.
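Here's a quick way to see the limit Michael is describing on a standard GIL build; exact timings vary by machine, so the point is only that four threads of pure bytecode don't come out roughly 4x faster:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def busy(n: int) -> int:
    """Pure-Python, CPU-bound work (no I/O, no C code releasing the GIL)."""
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 500_000

start = time.perf_counter()
serial_results = [busy(N) for _ in range(4)]
serial = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded_results = list(pool.map(busy, [N] * 4))
threaded = time.perf_counter() - start

# On a standard GIL build the threaded run is not ~4x faster --
# only one thread executes Python bytecode at a time.
print(f"serial: {serial:.2f}s  threaded: {threaded:.2f}s")
```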

08:00 Anyway, so this thing has been a journey, like I said, created in 2017, and it has all this history up until now.

08:11 The status still says draft, and now the Python version says 3.13. I think the PEP is approved and work has been done, but it's still in pretty early stages.

08:21 That's a pretty big deal is to add that.

08:23 That's supposed to show up in 3.13 in Python code.

08:30 This is a big deal. I think that in 3.12, the work has been done so that it's internally possible, it's internally done, if I remember correctly, but there's no way to use it from Python, right? Like, if you're a creator of interpreters, basically, you can use it. So now the idea is, let's make this possible for you to do things like start a thread and give it its own sub-interpreter, you know, copy its objects over, let it create its own, and really do computational parallelism, and I'm guessing interaction with async and await and those kinds of things.

09:04 And also more improved memory management. Let's see what else.

09:07 Well, so I guess along with that, we're going to have to have some tutorials or something on how the two sub-interpreters share information.

09:16 And yeah, exactly. Yeah, we will. We will. What I would love to see is just, you know, on the thread object, give the thread object an isolated sub-interpreter, you know, new subinterpreter equals true, and off it goes.

09:31 That would be excellent.

09:32 And then maybe pickles the object.

09:34 I don't know, we can see how they come up with that.

09:36 But this is good news.

09:38 I think it's the kind of thing that's not that important necessarily for a lot of people, but for those who it is, it's like, you know, really we want this to go a lot faster.

09:47 What can we do here, right?

09:48 - Yeah, that sounds complicated.

09:51 Does it make it go faster?

09:52 Yay, then do it.

09:53 - Well, and you know, compared to a lot of the other alternatives that we've had for, I have 10 cores, why can I only use one of them?

10:02 With my Python code, without multi-processing.

10:05 This is one of those that doesn't affect single-threaded performance.

10:10 It's one of those things that there's not a cost to people who don't use it, right?

10:15 Whereas a lot of the other types of options are like, well, sure, your code gets 5% slower, but you could make it a lot faster if you did a bunch more work.

10:24 - Yeah.

10:25 - Yeah, and that's been a hard sell, and also a hard line that Guido put in the sand saying like, look, we can't make regular non-concurrent Python slower for the sake of this more rare, but sometimes specialized, right, concurrent stuff.

10:40 So they've done a bunch of foundational work, and then the three main things are the Tier 2 Optimizer, Subinterpreters for Python, and Memory Management.

10:47 So the Tier 2 Optimizer, there's a lot of stuff that you kinda gotta look around.

10:51 So check out the detailed plan.

10:54 they have this thing called copy and patch.

10:56 So you can generate like roughly these things called super blocks, and then you can implement, they're planning to implement basic super block management.

11:06 And Brian, you may be thinking, what are the words you're saying, Michael?

11:09 Duplo, they're not those little Legos, no, they're big, big Duplos.

11:14 Well, that's kind of true.

11:15 So they were optimizing smaller pieces, like little tiny bits, but you can only have so much of an effect if you're working on small blocks of code that you're optimizing.

11:23 So a superblock is a linear piece of code with one entry and multiple exits.

11:29 It differs from a basic block in that it may duplicate some code.

11:34 So they just talk about considering different types of things you might optimize.

11:39 So I'll link over to--

11:41 but there's a big, long discussion, lots of graphics people could go check out.

11:46 So yeah, they're going to add support for deoptimization of super blocks, enhance the code creation, implement the specializer, and use this algorithm called copy and patch.

12:00 So implement the copy and patch machine code generator.

12:04 You don't normally hear about a machine code generator, do you?

12:07 >> No.

12:07 >> But either way, that sounds like a JIT compiler or something along those lines.

12:11 Yeah. Anyway, so that's the goal.

12:13 Reduce the time spent in the interpreter by 50 percent.

12:17 If they make that happen, that sounds all right to me, just for this one feature.

12:20 >> That's pretty neat.

12:21 Yeah, wow.

12:22 Pretty good. And I talked a whole bunch about sub-interpreters, so the final thing.

12:25 The profiling data shows that a large amount of time is actually spent in memory management and the cycle GC.

12:32 All right. And, you know, if you do 40% faster a bunch of times, Python was maybe half this fast before. Because remember, we're a few years out from working on this plan, back in 3.9, 3.8.

12:45 Maybe it didn't matter as much because as a percentage of where is CPython spending its time, it was not that much time on memory management.

12:54 But as all this other stuff gets faster and faster, if they don't do stuff to make the memory management faster, it's going to be like, well, half the time is memory management.

13:01 What are we doing?

13:02 So they say as we get the VM faster, this is only going to be a larger percent of our time.

13:07 So what can we do?

13:08 So do fewer allocations to improve data structures.

13:11 For example, partial evaluation to reduce the number of temporary objects, which is part of the other section of their work, and spend less time doing cycle GCs.

13:21 This could be as simple as doing fewer calculations or as complex as implementing a new incremental cycle finder.

13:27 Either way, it sounds pretty cool.
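The cycle GC they're targeting is easy to poke at from Python. This little demo (the object count is arbitrary) builds reference cycles that refcounting alone can't reclaim and then runs one full collection:

```python
import gc

class Node:
    def __init__(self):
        self.ref = self  # a self-cycle: the refcount never drops to zero

gc.disable()  # pause automatic collection so the garbage piles up
for _ in range(10_000):
    Node()    # each instance is immediately unreachable, but cyclic

unreachable = gc.collect()  # one full pass of the cycle collector
gc.enable()
print(unreachable)  # number of unreachable objects the collector found
```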

13:29 So that's the plan for a year and a couple of months.

13:33 - Pretty exciting.

13:34 I'm really happy that these people are working on it.

13:37 - I am too.

13:38 It's a team of, I think last time I counted, five or six people.

13:41 There's a big group of them around Guido at Microsoft, but then also outside.

13:46 So for example, this was written by Mark Shannon, who's there, but also Michael Droettboom, who was at Mozilla, but I don't remember where he is right now.

13:55 - Cool last name, Droettboom.

13:56 - Yes, indeed.

13:59 All right, over to you, Brian.

14:01 - Well, that was pretty heavy.

14:02 I'm gonna do kind of a light topic, which is we need more people to write blogs about Python.

14:09 It would help us out a lot.

14:11 And one of the ways you could do that is to just head over and check out one of the recent articles from Julia Evans about some blogging myths.

14:21 And I guess this is a pretty light-hearted topic, but also serious.

14:26 But we have some more fun stuff in the extras, so don't worry about it.

14:33 Anyway, so there's a few blogging myths, and I just wanted to highlight these 'cause I think it's good to remember that these are just wrong.

14:41 So I'll just run through them quickly.

14:43 You don't need to be original.

14:45 You can write content that other people have covered before.

14:48 That's fine.

14:49 You don't need to be an expert.

14:52 Posts don't need to be 100% correct.

14:54 Writing boring posts is bad.

14:58 So these are, oh wait, the myths are, the myth is you need to be original.

15:02 That's not true.

15:04 Myth, you need to be an expert.

15:06 Posts need to be 100% correct.

15:08 Also myth, all these are myths.

15:09 Writing boring posts is bad.

15:12 Boring posts are fine if they're informational.

15:15 You need to explain every concept.

15:17 Actually, that will just kill your audience if you explain every little detail.

15:21 Page views matter.

15:24 More material is always better.

15:27 Everyone should blog.

15:28 These are all myths, according to Julia.

15:30 And then she goes through a lot of them, in detail, each one of them.

15:35 And I kind of want to hover on the first two a little bit: you need to be original, and you need to be an expert.

15:42 I think, when we're learning about software, a new library or a new technique or something, often I'm reading Stack Overflow, I'm reading blog posts, I'm reading maybe books, who knows, reading a lot of stuff on it.

15:59 And you'll gather all that stuff into your own perspective of how it really is. And then, you know, not like the cheating book report you did in junior high where you just rewrote some of the encyclopedia but changed it, don't do that.

16:16 But you don't have to come up with a completely new technique or something, you can just say, oh, all the stuff I learned, I'm gonna put it together and write my workflow now or the process, or just a little tiny bit.

16:30 It doesn't have to be long, it can be a short thing of like, oh, I finally got this.

16:35 It's way easier than I thought it was.

16:36 And writing little aha moments are great times to just write that down as a little blog post.

16:43 The other thing of you don't need to be an expert is a lot of us got started blogging while we were learning stuff as a way to write that down.

16:51 So you're definitely not an expert as you're learning stuff.

16:54 So go ahead and write about it then.

16:56 And it's a great way to, and that ties into, it doesn't need to be 100% correct.

17:01 As you get more traction in your blog, people will let you know if you made a mistake.

17:06 And in the Python community, usually it's nice.

17:09 They'll mention, "Hey, this isn't quite right anymore." And I kind of love that about our community.

17:15 I wanna go back to the original part, is you don't even have to be original from your own perspective.

17:21 If you wrote about something like last year, go ahead and write about it again.

17:25 If you think it's important, and you sort of have a different way to explain it, you can write another blog post about a similar topic.

17:32 - Yeah, I totally agree.

17:34 I also want to add a couple of things.

17:38 I would like to add the myth that your posts have to be long, or like an article, or that you need to spend a lot of time on them.

17:46 The biggest example of success in the face of just really short stuff is John Gruber's Daring Fireball.

17:55 This is an incredibly popular site, and the entire article often starts out with him quoting someone else, and that's like two paragraphs, which is half the article.

18:05 And say, here's my thoughts on this, or here's something interesting, let's highlight it or something, right?

18:10 And my last blog post was four paragraphs and a picture.

18:14 Maybe five if you count the bonus, right?

18:17 Not too many people paid attention to mine 'cause the title's You Can Ignore This Post, so I don't know, I'm having a hard time getting traction with it, but.

18:25 (laughing)

18:26 - I actually, I like that you highlighted that John Gruber style.

18:31 There's a lot of different styles of blog posts and one of them is reacting to something.

18:36 Instead of, because a lot of people have actually turned off comments, you can either comment on somebody's blog or talk about it on Reddit or something, or you can react to it on your own blog.

18:45 - And still link to it on Reddit or something, yeah.

18:49 - Yeah.

18:50 - Not anymore 'cause Reddit went private out of protest but somewhere else if you find another place.

18:54 - Or maybe post on Twitter.

18:55 - No, don't do that, let's, Mastodon.

18:57 - It's getting hard.

18:58 - Yeah.

18:59 - Funny.

19:02 I had another one as well, but, oh yeah, so this is not a myth, just another thing, another source of inspiration: if you come across something that really surprised you. Like if you're learning, right, kind of to add on to "I'm not an expert," if you come across something like, wow, Python really broke my expectations.

19:19 I thought this was gonna work this way, and it, gosh, it's weird here.

19:21 People, it seems like a lot of people think it works this way, but it works in some completely other way.

19:26 You know, that could be a cool little write up.

19:29 Also, you know, people might be searching, like, why does Python do this?

19:32 You know, they might find your quote, boring article and go, that was really helpful, right?

19:37 So, yeah.

19:51 I still remember way back when I started writing about pytest and unittest and stuff, there was a feature, a behavior of teardown functionality that behaved differently.

19:57 It was sort of the same in nose and unittest, and then different in pytest.

19:57 And I wrote a post that said, maybe UnitTest is broken because I kind of like this pytest behavior.

20:03 And I got a reaction from some of the pytest contributors that said, oh no, we just forgot, didn't test that part.

20:11 So that's wrong.

20:12 We'll fix it.

20:14 (laughing)

20:17 >> What a meta problem, that they didn't test the testing thing.

20:21 >> Yeah. Well, I mean, it was really corner case, but I'm a fastidious person when I'm looking at how things work.

20:29 But the other thing I want to say is a lot of things written by other people are old enough that they don't work anymore.

20:37 If you're following along with a little tutorial and it doesn't work anymore because the language changed, or the library they're using is not supported anymore or something, that's a great opportunity to go, well, I'll just write it in my own language, or in my own style, but also make it current and make it work this time.

20:58 So that's good.

21:00 - Indeed.

21:01 - Anyway, okay.

21:02 Well, let's go back to something more meaty.

21:05 - Yeah, something like AI.

21:07 So I want to tell you about Jupyter AI, Brian.

21:11 Jupiter AI is a pretty interesting project here.

21:16 It's a generative AI extension for JupyterLab.

21:21 I believe it also works in Jupyter and IPython as I just IPython prompt as well.

21:27 And so here's the idea.

21:28 There's a couple of things that you can do.

21:30 So Jupyter has this thing called a magic, right?

21:34 Where you put two percent signs in front of a command, and it applies it to an extension to Jupyter, not trying to run Python code, but it says, let me find this thing. In this case, you say percent percent AI, and then you type some stuff.

21:47 That stuff you type afterwards, then turns on a certain behavior for that particular cell.

21:54 This AI magic, literally, it's percent percent AI and then they call it a magic or it is a magic.

22:00 So the AI magic turns Jupyter notebooks into a reproducible, that's the interesting aspect, generative AI playground.

22:09 So think if you could have ChatGPT or open AI type stuff clicked right into your notebook.

22:16 So instead of going out to one of these AI chat systems and say, I'm trying to do this, tell me how to do this, or could you explain that data?

22:22 You just say, hey, that cell above, what happened here?

22:26 Or I'm trying, I have this data frame, do you see it above?

22:30 Okay, good.

22:31 How do I visualize that in a pie chart or some, you know, in those donut graphs using Plotly?

22:38 it can just write it for you as the next cell.

22:40 >> Interesting. Okay.

22:42 >> Interesting, right?

22:43 >> Yeah.

22:43 >> It runs anywhere that the IPython kernel works.

22:46 So JupyterLab, Jupyter Notebooks, Google Colab, VS Code, probably PyCharm, although they don't call it out, and it has a native UI chat.

22:55 So in JupyterLab, not Jupyter, there's a left pane that has stuff.

23:00 It has your files and it has other things that you can do, and it will plug in another window on the left there that is like a ChatGPT.

23:09 So that's pretty cool.

23:11 Another really interesting difference is this thing is model- and platform-agnostic.

23:17 So if you like AI21 or Anthropic or OpenAI or SageMaker or HuggingFace, et cetera, et cetera, you just say, please use this model.

23:28 And they have these integrations across these different things.

23:31 So you, for example, you could be going along saying, I'm using OpenAI, I'm using OpenAI.

23:35 That's a terrible answer.

23:37 Let's ask Anthropic the same thing.

23:40 Then right there below, you can use these different models and different AI platforms.

23:45 Actually, it did really good on this one.

23:47 I'm just going to keep using that one now for this part of my data.

23:51 >> Okay.

23:52 >> Okay. How do you install it?

23:55 You pip install jupyter_ai, and that's it.

23:58 It's good to go. Then you plug in like your various API keys or whatever you need to as environment variables.

24:08 So they give you an example here.

24:09 So you would say percent percent AI space ChatGPT, and then you type something like, please generate the Python code to solve the 2D Laplace equation in the Cartesian coordinates.

24:19 Solve the equation on the square such and such with vanishing boundary conditions, et cetera.

24:23 Plot the solution to matplotlib.

24:25 Also, please provide an explanation.
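Roughly, the setup and the cell Michael is reading look like this; the chatgpt model alias and the -f format flag come from the Jupyter AI docs, while the prompt text and API key are placeholders:

```
pip install jupyter_ai

# then, in a notebook, with e.g. OPENAI_API_KEY set in your environment:
%load_ext jupyter_ai_magics

%%ai chatgpt
Please generate the Python code to solve the 2D Laplace equation in
Cartesian coordinates with vanishing boundary conditions, plot the
solution with matplotlib, and provide an explanation.
```

The -f flag picks the output format, so a cell starting with %%ai chatgpt -f math renders the response as LaTeX instead of code.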

24:27 And then look at this, it goes da-da-da-da-da-da, and down it goes.

24:30 and you can see it shows you how to implement it.

24:33 And that's only part of what's shown.

24:35 You can also have it do graphics.

24:36 Anything that those models will generate as HTML will just show up.

24:40 So you could say, create a square using SVG with a black border and white fill.

24:44 And then what shows up is not SVG commands or like little definition.

24:49 You just get a square because it put it in HTML as a response and so that showed up.

24:53 You can even do LaTeX, like dash F is math, generate a 2D heat equation and you get this partial differential equation thing.

25:03 - Wow.

25:04 - In LaTeX.

25:06 You can even ask it to write a poem, whatever you do.

25:09 So that's one of the--

25:10 - Go back to the poem one.

25:12 Yeah, it says write a poem in the style of variable names.

25:15 So you can have commands with variable, insert variable stuff.

25:20 So that's interesting.

25:22 - Mm-hmm, mm-hmm.

25:23 So you can also, Jupyter has inputs and outputs, like along the left side.

25:28 there's like a nine and a 10, and those are like the order they were executed.

25:32 You can say using input of nine, which might be the previous cell or something, or output of nine, take that and go do other things.

25:42 That's how I open this conversation.

25:44 One of the really interesting examples that David Qiu pointed out, there's a nice talk that he gave at PyData like a week ago, linked in the show notes. Two examples: one, he had written some code, a bunch of calculations in pandas, and then he created a plot, but the plot wasn't showing because he forgot to call plt.show.

26:07 He asked one of the AIs, it depends, you can ask a bunch depending which model you tell it to target.

26:14 He said, "Hey, in that previous cell, why isn't my plot showing?" It said, "Because you forgot to call show." Here's an example of your code above, but that works and shows the plot.

26:26 >> That's pretty cool for help, right?

26:27 >> Yeah.

26:28 >> Instead of going to Stack Overflow or even trying to copy that into one of these AIs, you just go, "Hey, that thing I just did, it didn't do what I expected." Why? Here's your answer.

26:37 Not in a general sense, but literally grabbing your data and your code.

26:41 >> Interesting.

26:42 >> Two final things that are interesting here.

26:43 The other one is he had some code that was crashing.

26:48 I can't remember what it was doing, but it was throwing some exception and it wasn't working out.

26:54 And so he said, "Why is this code crashing?" And it explained what the problem was with the code and how to fix it, right?

27:02 So super, super interesting here.

27:05 - I'll have to check that out.

27:07 Yeah, we have that link in the show notes.

27:10 - Yeah, the talk is really, really interesting.

27:12 I'm trying to think, there's one other thing that was in that talk.

27:14 It's like a 40-minute talk, so I don't remember all of it.

27:18 Anyway, there's more to it that goes on also beyond this.

27:23 It looks pretty interesting.

27:24 If you live in Jupyter and you think that these AI models have something to offer you, then this is definitely worth checking out.

27:32 Alvaro says, "You know, as long as it doesn't hallucinate "a non-existing package." Yeah, I mean, that is the thing.

27:41 What's kind of cool about this is like it puts it right into code, right?

27:46 You can run it and see if it does indeed work and do what it says.

27:49 So anyway, that's our last.

27:52 Yeah, go ahead.

27:53 Before we move away too much, I was listening to an NPR show talking about AI, and somebody did research, I think it was for the New York Times, a research project, and found out that sometimes they would ask, what's the first instance of this phrase showing up in the newspaper or something?

28:16 And it would make up stuff.

28:19 And even, and they'd say, well, can you, what are those, show those examples and it would show snippets of fake articles that actually never were there.

28:28 (laughing)

28:29 - It did that for, that's crazy, it did that for legal proceedings as well and a lawyer cited those cases and got sanctioned or whatever lawyers get when they do it wrong.

28:41 - Those are wrong, yeah, don't do that.

28:44 - Also, the final thing that was interesting that I now remember that made me pause to think, Brian, is you can point it at a directory of files, like HTML files, Markdown files, CSV files, just like a bunch of files that happen to be part of your project and you wish it had knowledge of.

29:05 So you can say /learn and point it at a sub-directory of your project.

29:11 It will go learn that stuff in those documents.

29:15 - Oh, interesting. - And then you can say, okay, now I have questions, right?

29:18 Like, you know, if it learned some statistics about a CSV. The example that David gave was he had copied all the documentation for Jupyter AI over into there and told it to go learn about itself.

29:30 And then it did, and he could talk to it about it based on the documentation.
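In the chat pane, that flow is just two slash-commands; the directory name here is a hypothetical placeholder:

```
/learn docs/
/ask What does the Jupyter AI documentation say about switching model providers?
```

/learn indexes the files locally, and /ask answers questions grounded in that index.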

29:33 - Oh.

29:35 - So if you got a whole bunch of research papers, for example, and it learned those, now: I need to ask you questions about this astronomy study, okay?

29:44 Who studied this and who found what, you know, whatever, right, like these kinds of questions are pretty amazing.

29:49 - Yeah, and actually, some of this stuff could be super powerful, especially if you can make it keep all the information local, like, again, you know, internal company stuff.

29:59 They don't want to like upload all of their source code into the cloud just so they can ask it questions about it.

30:05 Yeah, yeah, exactly.

30:06 The other one was to generate starter projects and code based on ideas.

30:12 So you can say generate me a Jupyter notebook that explains how to use matplotlib.

30:18 Okay.

30:19 - Okay, and it'll come up with the notebook, so here's a bunch of different examples and here's how you might apply a theme, and it'll create things.

30:27 And one of the things that they actually do is they use LangChain and AI agents to, in parallel, break that into smaller things that it's actually gonna be able to handle, and send them off to all be done separately, and then compose them.

30:40 So it'll break it into smaller problems.

30:41 Instead of saying, what's the notebook, it'll say, give me an outline of how somebody might learn this.

30:47 And then for each step in the outline, that becomes a section in the document, and it'll have the AI generate those sections.

30:54 And solving a smaller problem seems to get better results.
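That outline-first decomposition is easy to sketch without any AI library at all. In the toy version below, `generate` is a hypothetical stand-in for a real LLM call (which in Jupyter AI's case goes through LangChain); none of this is their actual code:

```python
def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real version would hit a model API."""
    if prompt.startswith("Outline:"):
        topic = prompt.removeprefix("Outline:").strip()
        return f"1. Intro to {topic}\n2. Basic plots\n3. Applying a theme"
    return f"[generated section for: {prompt}]"

def build_notebook(topic: str) -> list[str]:
    """Ask for an outline first, then generate each section separately."""
    outline = generate(f"Outline: {topic}").splitlines()
    # Each outline step is a smaller, independent prompt; a real system
    # could fan these out to agents in parallel and compose the results.
    return [generate(step) for step in outline]
```

The payoff is what's described above: each sub-prompt is small enough for the model to handle well, and the composed result reads like one document.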

30:57 Anyway, this is a way bigger project than just like, maybe I can pipe some information to ChatGPT.

31:04 There's a lot of crazy stuff going on here.

31:07 People who live in Jupyter might wanna check it out.

31:10 - It is pretty neat.

31:11 I'm not really around the Jupyter stuff, but I was thinking that a lot of software work is the maintenance, not the writing it in the first place.

31:21 So what we've done is taken the fun part of making something new and given it to a computer, and we'll all just be software maintainers afterwards.

31:31 - Exactly, we'll just be plumbers.

31:33 (laughing)

31:36 Sewer overflowed again, call the plumber.

31:38 No, I don't wanna go in there.

31:40 - And also, I'm just imagining a whole bunch of new web apps showing up that are generated by ideas, and they kind of work, but nobody knows how to fix them.

31:49 - Sure, I think you're right that that's what's gonna happen a lot, but you technically could come to an existing notebook, add a cell below, and go, I don't really understand, could you try to explain what is happening in the cell above?

32:06 And it also has the potential to make legacy code better. Whether that's the reality, we'll see.

32:11 - Yeah, hopefully it's a good thing, so cool.

32:13 - All right, well, those are all of our items.

32:15 That's the last one I brought.

32:16 Any extras?

32:17 I got a couple of extras.

32:21 Will McGugan and gang at Textualize have started a YouTube channel.

32:27 And so far, there's--

32:29 and some of these--

32:30 I think it's a neat idea.

32:31 Some of the tutorials that they already have, they're just walking through some of the tutorials in video form at this point.

32:38 And there's three up so far of stopwatch intro and how to get set up and use Textualize.

32:44 and I like what they're doing over there and it's kind of fun.

32:48 Another fun thing from--

- I like it too, because it's Textualize.

32:53 Rich is a visual thing, but Textual is like a higher-level UI framework where you've got docking sections and all kinds of really interesting UI things, and so sometimes learning that in an animated, active video form is maybe better than reading the docs.

33:10 - Yep, and then there's something else that they've done. So maybe watch those if you want to try to build your own text user interface, a TUI, as it were.

33:22 - TUI.

33:24 - Or you could take your command line interface and just use Trogon. Trogon, I don't know how you say that, T-R-O-G-O-N.

33:34 It's by Textualize also, it's a new project.

33:37 And the idea is you just, I think you use it to wrap your own command line interface tool and it makes a text-based user interface out of it.

33:49 There's a little video showing an example of a Trogon app applied to SQLite Utils, which has a bunch of great stuff, and now you can interact with it through a TUI instead, and that's kind of fun.

34:07 It's built around Click, but apparently they will support other libraries and languages in the future.

34:12 So, interesting.

34:14 >> Yeah, it's like you can pop up the documentation for a parameter while you're working on it in a little modal window or something.

34:21 It looks interesting.

>> Yeah. Well, I was thinking along the lines of even internal stuff. It's fairly common that you're going to write like a make script or a build script or some other utilitarian thing for your work group.

34:36 If you use it all the time, the command line is fine, but if you only use it like every, you know, once a month or every couple weeks or something, it might be that you forget about some of the features. And yeah, there's help, but having it as a TUI, if you can easily get a TUI for it, that's kind of fun, so why not?

34:53 The other thing I wanted to bring up, completely different topic is the June 2023 release of Visual Studio Code came out recently and I hadn't taken a look at it.

35:06 I'm still, I've installed it, but I haven't played with it yet.

35:09 And the reason why I want to play with it is they've revamped the test discovery and execution.

35:15 So apparently you can, there were some glitches with finding tests sometimes.

35:20 So I'm looking forward to trying this out.

35:23 You have to turn it on though.

35:24 So for this new test discovery stuff, you have to set an opt-in flag.

35:34 And I just put the little snippet in our show notes so you can just copy that into your settings file to try it out.

35:41 Yeah.

35:43 - Excellent.

35:44 - Guess that's all I got.

35:45 Do you have any extras?

35:45 - I do, I do.

35:47 I have a report, a report from the field, Brian.

35:50 So I had my 16-inch MacBook Pro M1 Max as my laptop, and I decided it's not really necessarily the thing for me, so I traded it in and got a new 15-inch MacBook Air, one of those big, really light ones.

36:07 And I just wanna sort of compare the two if people are considering this.

36:11 I have my mini that we're talking on now with my big screen and all that, which is M2 Pro, it's super fast.

36:19 And I found that thing was way faster than my much heavier, more expensive laptop.

36:26 Well, why am I dragging this thing around if it's not really faster, if it's heavy, has all these cores and stuff that are just burning through the battery, even though it says it lasts a long time.

36:37 It's like four or five hours was a good day for that thing.

36:40 I'm like, you know what?

36:41 I'm gonna trade it in for the new, little bit bigger Air.

36:45 And yeah, so far that thing is incredible.

36:48 It's excellent for doing software development work.

36:50 The only thing is the screen's not quite as nice.

36:52 But for me, I don't live on my laptop, right?

36:55 I've got like a big dedicated screen.

36:57 If I'm on my laptop, then I'm like out somewhere.

36:59 So small is better, and it lasts like twice as long on the battery.

37:04 So, and I got the black one, which is weird for an Apple device, but very cool.

37:08 People say it's a fingerprint magnet and absolutely, but it's also a super, super cool machine.

37:14 So if people are thinking about it, I'll give it a pretty, I'll give it like a 90% thumbs up.

37:18 Screen's not quite as nice.

37:21 It's super clear, but it's kind of washed out, a little harder to see in the light.

37:24 But other than that, it's excellent.

37:26 So here's my report.

37:28 I traded in my expensive MacBook for one that's incredibly light, thin, and often faster, right?

37:34 When I'm doing stuff in Adobe Audition for audio or video work or a lot of other places, like those things that I gotta do, like noise reduction and other sorts of stuff, it's all single threaded.

37:45 And so it's like 20% faster than my $3,500 MacBook Pro Max thing.

37:51 Anyway, and lighter and smaller.

37:52 You know, all the good things.

37:54 >> But you're still using your mini for some of your workload.

37:59 >> I use my mini for almost all my work.

38:01 Yeah, if I'm not out, or sitting on the couch, then it's all mini, mini, mini all the time.

38:06 >> Okay.

38:07 >> Yeah.

38:08 >> Is it black on the outside also then?

38:10 >> Yeah, yeah, it's cool looking.

>> You can throw a sticker on that to hide that it's Apple, and people might think you just have a Dell.

38:17 >> They wouldn't know, that's right.

38:20 Run Parallels, you can run Linux on it, and they're like, okay.

38:24 - Linux, got it. What is that thing? That's weird.

38:27 Yeah, you could disguise it pretty easy if you want.

38:29 Or maybe your sticker just stands out better, you never know.

38:31 All right, so if people are thinking about that, it's a pretty cool device.

38:35 But Brian, if somebody were to send you a message and like trick you, like hey, you want a MacBook?

38:41 You wanna get your MacBook for free?

38:42 You don't want that, right?

38:44 - No.

38:44 - So, you know, companies, they'll do tests.

38:47 They'll like test their people just to make sure, like hey, we told you not to click on weird looking links.

38:54 But let's send out a test and see if they'll click on a link.

38:57 And there's this picture of this guy getting congratulated by the CEO.

39:02 IT congratulating me for not failing the phishing test.

39:08 And the guy's like a deer in the headlights, like, oh no.

39:12 Me, who doesn't open emails, is what the picture says.

39:15 [LAUGHTER]

39:17 So you just ignore all your work email.

39:19 You won't get caught in the phishing test.

39:21 How about that?

>> Yeah. You've been out of the corporate world for a while.

39:28 That happens. I've had some phishing tests come through.

39:31 >> You've gone through this? Yeah.

>> Yeah. Well, the email looks like it came from, so that's one of the problems, it looks like it's legit, and it has the right third-party company that we're using for some service or something.

39:48 You're like, "Wait, what is this?" And then the link doesn't match up with whatever it says it's going to, and things like that. But it actually is harder now, I think, to verify what's real and what's not when more companies do use third-party services for lots of stuff. Yeah. Yeah, it's, you know, it's a joke, but it is serious. I worked for a company where somebody got a message, I think it might have been through a hacked email account, or it was spoofed in a way that it looked like it came from a higher-up, that says, "Hey, there's a really big emergency.

40:28 This vendor is super upset. We didn't pay them.

40:30 They're going to sue us if we don't.

40:32 Could you quick transfer this money over to this bank account?" Because it came from somebody who looked like they should be asking that.

40:42 It almost happened. It's not good.

40:45 >> That's not good. I get texts now.

40:49 The latest one was just this weekend. I got a text that said, "Hey, we need information about your shipping for an Amazon shipment," or something. It's like, "Copy and paste this link into your browser." It's this bizarre link, and I'm like, "No, it would be amazon.com something.

41:06 There's no way it's going to be Bob's Burgers or whatever." >> Yeah. Amazon.

41:15 Yeah. Let's go to amazon.com.
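That gut check, a real Amazon link should resolve to amazon.com or a subdomain of it, is easy to mechanize with the standard library. A small sketch (the suspicious URL here is made up):

```python
from urllib.parse import urlparse

def claims_domain(url: str, domain: str) -> bool:
    """True if the URL's host is exactly the domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == domain or host.endswith("." + domain)

# The real thing passes; the look-alike (the brand buried in a subdomain) fails.
print(claims_domain("https://www.amazon.com/gp/css/order-history", "amazon.com"))      # True
print(claims_domain("https://amazon.com.track-shipment.example/claim", "amazon.com"))  # False
```

The second URL is exactly the trick phishing texts use: the familiar name appears at the front of the host, but the registrable domain at the end is something else entirely.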

41:17 - Oh, anyway.

41:19 - Oh well.

41:20 - Well, may everybody get through their day without clicking on phishing emails.

41:25 - That's right.

41:26 Yeah, may you pass the test.

41:28 - Or don't read email, just stop reading email.

41:31 - Yeah, think about how productive you'll be.

41:33 Well, this was very productive, Brian.

41:35 - Yes, it was.

41:35 - Yeah.

41:36 - Well, thanks for hanging out with me this morning.

41:39 So, it was fun.

41:40 - Yeah, absolutely.

41:41 Thanks for being here, as always.

41:42 And everyone, thank you for listening.

41:44 It's been a lot of fun.

41:45 See you next time.
