
Vibe Coding: Building Faster with AI-Powered Development - JSJ 687
Episode Transcript
The CLI command kind of became the test, with a front end and the back end.
When you need to actually write some tests, lean towards end-to-end and integration-type tests that test the whole flow instead of testing small parts.
Speaker 2Of your code.
Speaker 1Your code will change very quickly, so you have to rewrite those tests a lot.
Speaker 3But does that mean that?
But that means, if I understand your description correctly, that you're effectively manually testing it.
I mean, you're running a certain script, but then you are the one who's actually checking to see that you get the expected results.
Speaker 1Well, I mean, it's still automated because you wrote a test.
The test itself is going through steps that you would manually do yourself.
Speaker 2But like I said, it's like it's like an end to end test.
Speaker 3Yeah, it's not an end to end test.
Yeah, that's what I was aiming for.
So yeah, so with vibe coding, I guess, because the system is so dynamic and everything is changing so rapidly, you're effectively also kind of acting, to an extent, as the manual QA for what you're developing.
Speaker 2Yes, yeah, I think that's a good way to look at it.
Speaker 1And as much as you can, build in ways that a system can QA itself. But there's always going to be at least some points in time where you have to step outside and actually just try it and see if it works or not.
So I think there's a lot of the same kind of issues that we get with testing in general, like what's the efficient way to test whether it works or not?
Speaker 4You know, in a lot of cases when I've done it, I just load it in the browser and just click around, and then I'll vibe code it and prompt it for unit tests.
But it really does depend on what I'm building, because sometimes that's not the right way to do it if it's, you know, completely command-line based or something.
Speaker 3By the way, when vibe coding, what do you usually generate, Anthony?
Speaker 2What do I... in terms of, like, what LLM am I using?
Speaker 3No, what are you having the LLM generate?
Is it React code, straight DOM code, jQuery code, HTMX code?
Yeah, I know you're generating a lot of stuff.
Speaker 1Like I said, I first started by building a CLI and then I extended that functionality to a back end and front end which have gone through many different tech stacks.
You know, I was a big framework guy, so I'm always kind of trying out different tools.
My current stack, in terms of what I'm deploying the Autoshow app with, which is the app I vibe coded, is Astro with Solid as the templating language and Tailwind for the styling.
And yeah, that's basically the back end and the front end.
I've done stuff with, like, a Node.js Fastify server, I've done stuff with React.
Really, any kind of popular technology that has a lot of usage and a lot of docs will probably do pretty well.
Some people say you should always use Next.js and Supabase, but I think, as long as you're not trying to use something really obscure and really new, it tends to work pretty well.
If it is very obscure or very new, you should create just a single file that kind of has all the docs in it and drop that into the context of the LLM.
Speaker 2That will kind of help short-circuit that issue.
Speaker 3I think these days you also have various MCPs that make it very easy to get content off of GitHub, for example.
Speaker 1Yes, yeah, MCPs add a whole other level of things, for sure.
Speaker 4Yeah, but when you're using the LLM, I'm just curious, because you were asking about the tools: are you doing this with, like, Copilot connected to Gemini or GPT or Claude, or using something like Claude Code? How are you doing that?
I'll also point out that Copilot, if you put it into agent mode instead of, I can't remember, the other mode, it'll actually put the code into your IDE for you.
Speaker 2And so I'm a little curious, like what level are you working at there?
Speaker 1Yeah, so I've tried Copilot and Cursor, I've tried Claude Code, I've tried, you know, things like Bolt and v0.
I think that's what the Vercel thing is called.
And then when you have the agent loop, like you said, that's actually changing your code for you.
That is the full vibe coding experience.
Like that is Steve Sworzneimer for sure.
So I tend to... it's kind of nice.
Speaker 2It can be very, very nice.
Yeah, totally.
Speaker 1I think for like personal projects, that's definitely what I would do.
I have a flow that some people will consider kind of strange and inefficient.
I use this tool called Repomix, which is a way to kind of intelligently take your code base and smush it into something that is small enough to fit in the context of these LLMs.
And then it also includes instructions of how I want my code to be styled and written.
And then I just write what I want to do as a line at the bottom of that whole thing, and then I plop that into either ChatGPT, Claude, or Gemini.
I use all three, kind of depending on which is going to work best, and then it gives you the output, and then I put it into my file.
And as I'm doing that, I'll quickly kind of look over the code and I'll look at the diffs.
So I'm not doing a pure vibe coding experience, in that I am paying attention to my code.
I'm building a production app that has, you know, payments and login and stuff.
So that gives me a step to kind of slow down, look at the code that's being written, look at what is changing in my project, get a sense of what's happening, and then I can then test it and see if it works.
Speaker 3So do you generally, when the LLM is suggesting stuff, do you usually approve it one by one, or do you just approve it all and then look at, I don't know, the git diffs or whatever?
Speaker 1So I plop in all the changes that it gives me, and I tell it to write code files in full, so I just click copy-paste for each of them, and the way my projects are structured, it usually only has to change a handful of files to make the change if it's atomic enough.
And then after I've put in all the files, I look at the diffs on each of those files.
So if there's three files, I'll also go click over to the diff view in VS Code, kind of look at all those, and then I'll test the application with either a test script or just clicking through.
Speaker 3Now, you mentioned that you use the project itself as the context or put it in the context in order to obviously create consistency within the code base.
Yep.
But consistency is only worthwhile once you have some code, and also assuming the code is structured the way that you like it.
So do you also manually create, or have created, some sort of rule files or something along these lines as kind of a starting point, or a way to keep aligning your code with a certain desired outcome?
Speaker 2Yeah.
Speaker 1So with Repomix, it includes an instructions file that gets appended to each one, and in there, I have things like how I want the logging to be done, JavaScript styling, so always using imports, not, you know...
Speaker 3Require. Kind of like Cursor rules.
Speaker 2Yeah, they're exactly like Cursor rules. Yeah, totally.
Speaker 1So yeah, I think everyone should have those, and they should make sure to get down in writing how they actually want their code to look and behave, and the coding styles that they want.
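As an illustration, a rules file along the lines described (imports over require, logging conventions) might look like this; the exact wording is invented, not taken from any real project:

```markdown
# Coding rules

- Use ES module `import` syntax, never CommonJS `require`.
- Log through the shared logger, not `console.log`.
- Keep functions small; prefer early returns over nested conditionals.
- Match the styling of existing files in `src/` when adding code.
```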
Speaker 3So for me, it's interesting.
So I'm recently working on several legacy projects at work that are definitely legacy, but are still very much undergoing development by several developers, and I introduced rule files into all of them.
And it was a very interesting experience for me in a lot of ways.
It was my first real encounter with prompt engineering, you might say.
So, first of all, it was really odd for me, being an old-timer in tech, to effectively write configuration as English in a markdown file rather than in JSON or YAML or anything that you normally would expect configuration to be in.
It's a very strange experience the first time you do it. Like, instead of saying, you know, "eslint: on," you say "I want you to use ESLint."
It's a very odd experience.
But the other thing that obviously was very significant for me was seeing how seemingly very small changes that I would make in the rule files could have very significant impact on the generated code.
And it wasn't always trivial to totally understand why.
Speaker 1That was the experience, yeah. For me, the rules came about very iteratively, where I was working a lot with the LLMs, so they were generating a lot of code, and a lot of the time they would do something consistently that I didn't want, and then every time that would happen, I would write a rule, and then I would make sure that rule actually changed the thing I wanted to change and didn't change anything else.
Speaker 2So you kind of need to do this.
It takes some time.
Speaker 1There's really no shortcut to doing it, and you have to just kind of get familiar with the llms and how they generate code, and when you're changing your rules trying it's another kind of like iterative process, where you get the outputs you see, are they what you want?
Tweak the prompt if it's not what you want, and once you get it where you want, try not to mess too much with your rules file.
Speaker 3Yeah.
So, just to finish that point, Chuck: for me, what I literally did was I started without any rule files.
I would pick a file, let's say, that was in a style that I didn't like, and literally gave just the one command, "refactor," saw what got generated, then put in some rules, did it again, saw the difference, tweaked them, did it again. And like you said, it was an iterative process.
What I also discovered was sometimes, even though I gave certain instructions, and even though the LLM very explicitly stated that it was following those instructions, it still didn't, or didn't fully, right?
Speaker 1And this is where having examples helps. If the rules match what your code already does, then it'll be better at doing that, because it'll have both the English language to explain it and an example of what it actually looks like.
So you're right, sometimes when you're just writing in English, it won't always get it.
You actually need a file that's written in that style that you can show it, to be like, no, do it like this.
Speaker 3So for example, for me, it was I wanted it to not introduce any ESLint errors or...
Speaker 1Warnings. Your ESLint configuration is...
Speaker 3Yeah, and I obviously gave it the path to the ESLint configuration, and it would literally say "I found some ESLint issues and I'm fixing them," and then "I found some more and I'm fixing them," and even when it was all done, it still had some issues.
Speaker 1So yeah, and that's because you're putting a tool in the middle that it has to understand well enough to be able to do that, instead of just knowing enough about code in general to know, like, what import versus require is.
Anytime you're adding these extra tools and additional layers of indirection, there are more chances for the LLM to...
Speaker 2Get confused and do stuff wrong.
Speaker 4Yeah, because it has to know about ESLint in its latent space, because you're not teaching it.
Speaker 1So if you had a doc for ESLint, you know, a file that actually has ESLint context and explains what the ESLint stuff means, and your configuration, how that maps to ESLint, how it works, that would be an additional piece of context you could give it that would help it figure that out.
Speaker 2Right, then you are teaching it how to do that.
Speaker 4So I have a question because you guys are talking about rule files and this is something that I haven't really used typically.
I'm getting in and I'm saying, look, I want you to do this, I want you to use these tools.
I want you to... and yeah, I have to keep reminding it when I'm doing the code.
Speaker 2So where do you put those into your tools?
Speaker 1For me, everything goes through Repomix, so this is a huge part of my tool base.
And if you're using Cursor, you won't do this, you won't have this workflow at all.
But for me, the way Repomix works, you just literally have a separate markdown file with your rules written in it.
Speaker 2It just grabs it and appends it.
I have a custom script that does a whole bunch of crap, but really, it's just a hunk of markdown is all it is.
And that gets added to the project context that Repomix creates for me.
Speaker 3And that's essentially the same way it works in Cursor; it just takes it from a different location.
And yeah, the whole point is exactly what you said, Chuck: to avoid needing to manually put the same stuff in the prompt again and again, each and every time.
Speaker 4Right, Yeah, and it's usually the ongoing stuff.
Right. So it's, oh, just a reminder, I'm using Tailwind 4, not Tailwind 3, right, or...
Speaker 3First of all, it's "I'm using Tailwind" rather than, let's say, CSS.
It might figure it out from your existing content, or it might not.
Right? If you're starting a new project, then how would it know?
So you would need to remember to tell it to use Tailwind, but if you put it in your rules file, you don't.
Speaker 2Yeah.
Speaker 1This also gets to something about conversation length.
The longer you talk to an LLM in a single conversation, the dumber it will get.
This is something that's highly unintuitive that a lot of people do not understand about LLMs.
That's why I like Repomix, because every time I'm making a change, I'm kind of one-shotting it and getting code back.
If it doesn't work, I'll have a very quick back and forth to fix the bug, and then I'm immediately going to go to a new context, a new conversation, every time.
Speaker 4So that was the thing I wanted to bring up with Repomix, and I was going to ask earlier, but you kind of already answered it by saying Repomix. And that is, you know, when I'm starting out, it can kind of keep most of the app in the context, but as I build more things in, it obviously gets to the point where it can't.
Speaker 1Yeah, so it excludes things like test files and things like that.
You can configure it to say, I just want my front-end files to be in this configuration and my back-end files to be in this one.
So if you have an app that is very, very large, you just start to scope it to different sections that will have the context it needs to make the change you want to make.
Speaker 4So do you have to tell it, I'm working in this section of code, and so only care about that? Because that's what I'm wondering.
Speaker 1I have a script where I create a bunch of different configurations that allow Repomix to generate different types of context, and I'll just run a command like "repo-back" for my back end or something.
There's a lot of ways in which you can configure it.
How many different ways you need to configure it will kind of depend on how large your project is, how many different sections there are, and how easily it will get confused if you give it the whole project versus just the parts of the project it needs.
But since I'm vibe coding it, every time I want a new Repomix command, I just give it my code and say, hey, I need a Repomix command to cover this part of the project.
Speaker 2It gives me the new command.
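As an illustration of that scoping idea, a per-section config could look something like this. Treat the field names as guesses and check Repomix's own docs for the real schema:

```json
{
  "output": { "filePath": "repomix-backend.md" },
  "include": ["src/server/**", "src/db/**"],
  "ignore": { "customPatterns": ["**/*.test.js"] }
}
```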
Speaker 3Okay, you mentioned before that the LLM gets dumber the longer the conversation continues.
Yeah, why is that so?
Speaker 1It's just because of what context length itself is.
Context length basically means that there's a certain number of tokens that the LLM is able to kind of keep in its, like, working memory, and at a certain point, as the conversation gets longer and longer, it gets full.
So it'll have to start bumping off text from the beginning, and that means it will lose important context. If it made some changes at the very beginning and then you get to a very long conversation, it won't be aware of those anymore.
That's why you have to keep it within the LLM's context window, not just for the first message, so it doesn't say "this is too long, I can't respond." Claude actually does something smart: Claude will tell you at a certain point in time when your conversation is done, and they will not let you keep using it. They will tell you you have to create a new chat.
I'm not sure if any of the other LLMs do that right now.
Most of them just let you keep going forever, and that's where you get the hallucinations.
Speaker 2That's where you get a lot of these errors that people think about when they think of LLMs, once you exceed the context window in your conversation.
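The bumping-off behavior described above can be modeled as a toy function, under the simplifying assumption that whole messages are dropped oldest-first once a token budget is exceeded (real providers vary in how they handle this):

```javascript
// Toy model of context-window truncation: keep the newest messages
// that fit in the token budget; older ones silently fall off, which
// is why instructions from early in a long chat get "forgotten."
// The budget and token counter are stand-ins, not any real model.
function fitToContext(messages, maxTokens, countTokens) {
  const kept = [];
  let total = 0;
  for (let i = messages.length - 1; i >= 0; i--) { // newest first
    const cost = countTokens(messages[i]);
    if (total + cost > maxTokens) break;
    kept.unshift(messages[i]); // preserve original order
    total += cost;
  }
  return kept;
}

module.exports = { fitToContext };
```

Note that the oldest message, often the one stating the overall goal, is the first to go, which matches the failure mode being discussed.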
Speaker 3So basically, at a certain point in time, you're kind of getting lost in the weeds, as it were, with all the noise that accumulated throughout the entire conversation. All the dead ends and wrong paths remain in the context and actually bump out the more important stuff that you probably gave it at the very beginning of the conversation, as you said, like the goal that you're working towards, et cetera.
By the way, we have a question from the audience.
Do you always try all three models, Claude, Gemini, ChatGPT? Which one tends to work best for which types of code?
Speaker 2Sure?
Speaker 1So I'll only switch to a different model if the first one I tried isn't working, like if it gives me broken code and then I tell it to fix it and it gives me code that's still broken in the same way or broken in a different way.
I treat most of these things pretty pragmatically.
Because if you do all three, you'd have to then check all three, and if they all work, then you just wasted, you know, a ton of time.
Speaker 2The bigger question is which one do you start with?
Speaker 1Which one is most likely to give the correct answer the first time, so you don't have to use another model.
Speaker 2And that has changed a lot for me over time.
Speaker 1I'm constantly kind of switching back and forth as new models come out because the space is so competitive right now that almost no one is able to hold onto the best model for more than a couple of months.
Speaker 2So my go-to right now is Claude 4 Sonnet, not Claude 4 Opus.
Speaker 1This is a rare case where the quote-unquote best, biggest, newest model is not really the best. Sonnet is better than Opus because, one, it has a longer context window. I don't know why they gave Opus a smaller context window.
It also is slightly faster, and it just doesn't seem to have as much downtime.
There was one time where Opus was broken and then I realized I could just switch to Sonnet, and Sonnet wasn't. So sometimes a Claude model is down, not the whole of Claude.
Speaker 2So Claude 4 Sonnet is my go-to right now.
Speaker 1Then if that doesn't work, I'll try Gemini 2.5, and then if that doesn't work, I'll try ChatGPT.
Speaker 4Yeah, I just kind of start with whatever I'm sitting on, and then I'll just change it when I need to.
Speaker 2I don't start with any one of them.
I just rotate.
Speaker 4It's like, this isn't doing as well. And I also don't follow along so much with the "there's a new model out, I want to try it."
I'll just wind up switching when, for whatever reason, this one isn't doing what I want.
Speaker 2My question though, is how much are you paying for this?
Speaker 4Because as you use the different models, you typically have to pay some of them.
It's per usage, right? So it's like, I put in so many tokens, or, you know, whatever.
They're usually only usage based.
Speaker 1If you're using the APIs, it's usage based. With the subscriptions, it's usage based only in the sense that you need to buy more expensive monthly plans to get higher usage caps.
So I'm not literally paying, like, by the token. And a lot of this will have to do with just how much you use it.
How much code do you write?
And is the code that you're writing, like, work related or not?
So you can get away with a cheaper plan, depending.
I think most people should start by getting the twenty-dollar subscription to whatever LLM they're using, whether it's ChatGPT or Claude or whatever.
And then if you find it is useful for you but you're hitting the usage limit, then look at some of the more expensive ones.
Claude has a hundred-dollar one and then a two-hundred-dollar one, where you can get 5x more usage or 20x more usage.
I think ChatGPT just has the two-hundred-dollar plan where you get essentially unlimited usage.
Speaker 2I've never hit the usage cap on ChatGPT.
So I pay two hundred dollars for both of those, because I'm building an app that is hopefully going to be my full-time income one day.
Speaker 2So I can kind of justify that cost.
Speaker 1If you're someone who's just learning these things or just starting out, I would recommend just starting with a twenty-dollar-a-month one.
I definitely recommend not using the free plan if the free plan does not give you the best model.
You want to have the best model, even if it's twenty dollars a month.
Speaker 3And there's a reason that Nvidia is now worth four trillion. Seriously.
Speaker 4So for new folks, there are a couple of things we've kind of thrown in here that I want to explain.
One of them is tokens.
Tokens are essentially words or parts of words that give meaning to the context, and the context is what the LLM remembers about your conversation, so it breaks it up into tokens and then it figures out what it means.
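As a rough illustration, you can think of tokenization as splitting text into word-ish pieces and counting those instead of characters. Real LLM tokenizers are learned subword models (e.g. BPE), so this naive split is only a stand-in:

```javascript
// Very rough stand-in for tokenization: split text into word-ish
// pieces (words, numbers, punctuation) and count them. Real LLM
// tokenizers use learned subword vocabularies such as BPE, so their
// counts differ from this toy, but it is the count of these pieces,
// not characters, that fills the context window.
function naiveTokens(text) {
  return text.toLowerCase().match(/[a-z0-9]+|[^\sa-z0-9]/g) || [];
}

function naiveTokenCount(text) {
  return naiveTokens(text).length;
}

module.exports = { naiveTokens, naiveTokenCount };
```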
The other one is, you know, Dan's reference to Nvidia's stock going up.
A lot of these models are trained using GPUs, and Nvidia is the largest maker of the best GPUs for training them. And so as you get bigger and bigger models, with more and more information crammed into them, they need more machines with more GPUs to feed the data in so that they can build these models.
You can run them on your own machine, and a lot of times it will take advantage of your own GPU in your computer, which is probably also made by Nvidia.
But at the end of the day, that's why: because they're buying the GPUs like candy, because in order to get a bigger, stronger model, they need more hardware.
Speaker 3Or put another way, all that money that's coming from both us as users and from the VCs, it's all flowing downstream into Nvidia's pockets.
Speaker 4Right, yeah, because they're basically the biggest, or only, game in town for the hardware you need.
Speaker 1Well, they always say, in a gold rush, you want to be, you know, selling shovels.
Speaker 2This is the best example of that.
Speaker 4What are you building and what's kind of your workflow as you build it out?
And then, related to that, what do you find is working or not working in that workflow?
Speaker 1Yeah, so I'm building something called Autoshow.
It's autoshow.app.
If you want the latest, go to dev.autoshow.app.
If you're someone watching right now who just can't wait, that will be upstream by the end of today, hopefully.
But it's something that I first built for myself, just kind of as a personal tool that I thought would be useful.
I explained it very briefly at the beginning where I wanted to take my podcast.
So I'm a podcaster like you guys, I do lots of live streams as well, and I wanted to be able to get an LLM to write the chapter titles for me.
You know, for most podcasts, like your Lex Fridmans or your Joe Rogans or whatever, you'll get, actually Joe Rogan doesn't do this, but Lex does...
You get chapter titles and timestamps for each, so you can click a certain point and it'll jump to that point in the conversation, and you can kind of read them over to guess what the guest is going to talk about on the show.
But for a three-hour podcast, it takes a lot of time to listen to the whole thing and find those times.
So I found that if I used Whisper, which is OpenAI's open-source transcription model, which gives you the transcription and the timestamp for each line, I could give that to ChatGPT and say, hey, I want chapter titles: read this, chunk it up into topics, and then give me where each topic starts.
Speaker 2And that was the first thing I did, and.
Speaker 1I was like wow, Like just that alone saves me a lot of time and it is very useful.
So then I created a scripting workflow to do all that myself.
Instead of running Whisper, copy-pasting the transcription into ChatGPT, writing the prompt or, you know, saving the prompt somewhere on my computer and copy-pasting it, then giving it to the LLM and getting the output back...
I created just a command, a CLI, to do all those steps for me.
So you would write "npm run autoshow," give it the URL, and then you would have the full show notes without doing any other effort.
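The core step being described, turning Whisper-style timestamped transcript lines into a chapter-title prompt, could be sketched like this. The data shape and prompt wording are assumptions, not Autoshow's actual code:

```javascript
// Sketch of the Autoshow-style step: format timestamped transcript
// lines (as a Whisper-like transcriber emits) and wrap them in a
// chapter-title prompt. Data shape and prompt text are invented.
function formatTime(seconds) {
  const m = Math.floor(seconds / 60);
  const s = Math.floor(seconds % 60);
  return `${m}:${String(s).padStart(2, "0")}`;
}

function buildChapterPrompt(lines) {
  const transcript = lines
    .map(l => `[${formatTime(l.start)}] ${l.text}`)
    .join("\n");
  return (
    "Read this transcript, chunk it up into topics, and give me a " +
    "chapter title plus the timestamp where each topic starts.\n\n" +
    transcript
  );
}

module.exports = { formatTime, buildChapterPrompt };
```

The LLM's reply then only needs light parsing to become clickable show-notes chapters.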
So I was like, wow, that's pretty cool, and I started expanding it out.
I started adding more prompts, you know, things like summarize the whole episode, or give me the key takeaways, or write frequently asked questions based on this, or write a rap song based on this, or write a blog post based on this.
Speaker 2I just kept adding more and more prompts, and more and more things you could do.
And then I was like, okay. So then I started showing it to people.
You know, I'd go on streams with my friends and I'd be like, hey, look at this thing I built. And after showing it to people, I just kept being told over and over again...
Speaker 2People would be like, dude, you should charge for this.
Speaker 1And I'm like, okay, well, if people really think it's useful, if they think it could be a useful app, then I should try and productize this.
So I've been working on building the front end and the back end, so there'd be a nice user interface that people could use.
And so yeah, now you have, you know, a clickable interface.
You can just drop in a link to the YouTube video you want or upload a local file from your computer, you select the transcription service you want, you select the prompts you want, and then you select the LLM you want, and then it gives you the show notes back.
So yeah, that's the goal, like a whole product.
Speaker 5So, before we go too much further, I want to say, comparing us to Joe Rogan was a very appropriate comparison, I think.
Speaker 4Of course, right. I mean, he's almost as pretty as we are.
Speaker 2Yeah, yes, I also want to time.
Speaker 4Yeah, so with these tools, I mean, I've used some tools that do some of the things you talked about, not all of them.
Speaker 4And then the other thing I was going to point out is I'm actually working on kind of the other end of things where I want kind of a podcast assistant.
Of course, I plan to monetize the platform that we host the shows on, but I want to build an LLM-based system in there where it's like, hey, this is how we do the scheduling.
Can you schedule an episode with so and so?
Can you invite them to the podcast?
And, you know, they said they want to talk about vibe coding.
Can you give me three or four resources that I can go check out before the episode?
Speaker 2I can help you build that if you want. Let's talk afterwards.
Yeah, definitely.
Speaker 4And so it's funny, because the two kind of, you know, blend together in a way, I think.
Speaker 2Yeah.
Speaker 4And the other thing that'd be interesting is, then, you know, can I license Autoshow?
Speaker 2Right?
Speaker 4And so it's like, hey, for all of the process stuff on the other end, instead of building it myself, you know, say, hey, we're going to send your episode over to Autoshow and we're going to get all the metadata back that we need in order to publish it. Anyway, it'd be really interesting to kind of compare notes and see where this is going.
Speaker 2I'm doing most of mine in Ruby, but.
Speaker 1Yeah, it would just be a back-end endpoint that you could hit and access if you want to just use the Autoshow part.
Speaker 2Yeah.
Speaker 1That's the reason why I like having so many friends who are content creators and why I like going on your shows and explaining it to them, because most people are like, hey, I could use that.
Speaker 4Yeah, I've thought seriously about doing that stuff.
But I want to focus on the other stuff because that's where I spend most of my time.
Speaker 2And so yeah, if you'll do other stuff, cool.
Speaker 3Yeah, And so a question about that.
So you're building this whole thing, as you said, using vibe coding.
If you were to have done it three years ago, you would have probably written it manually by hand.
Yeah, how much of a difference has the vibe coding aspect of it made?
Speaker 1It just completely accelerates it. Like, it makes such a huge difference in terms of the speed at which I can build new features, new functionality, fix bugs.
It's just a huge, huge accelerator, because you think about all the things you had to do when you didn't have this.
Speaker 1You would have to forfare out what you want to do, what tech you're going to use, You have to go there.
Speaker 2You have to then try it out.
Speaker 1I mean, if you get to the point where you know all your tools, you're already very experienced with it, then you're just you know, writing the code.
But even then, you know, you have to figure out what the feature you want is, how you're going to implement it.
You then have to write all the code to implement it, you have to test it, you have to write new code if it's broken.
Speaker 2So for me, it's just a huge, huge accelerator.
Speaker 1And as I've done it more and more, I've learned how to do it more efficiently, how to, you know, anticipate the weaknesses it has and how to mitigate those, and I'll continue to accelerate my development speed even more as I keep doing this.
So yeah, I just think it doesn't necessarily make you a better dev, but it makes you a much, much, much faster dev.
Speaker 4So the thing that I can see people thinking is are people going to lose their jobs because of this?
Speaker 2Well, no, because there still has to be me to think of the app, to then prompt it to build the app, and then to test and use the app.
Yeah, I mean, but I don't know, my employer could hire me to use tools like this and not have to hire two or three other people.
Speaker 2Maybe, sure, if they get to that point.
Speaker 1Yeah, or it's a question of, you know, if they then have success and they're making money and they want to do more things, then they'll have to hire more devs, even if they're AI-enabled.
I think that is kind of like an economics question.
Will there be a point where people would be like, well, I'm making enough money, so instead of trying to expand and make more money, I'm just gonna keep making the money I make and then cut, you know, workers?
I just don't think that's really how it happens in practice.
I think in individual cases companies may make that decision, but on, like, the scale of the whole economy, that just isn't how it has happened throughout history. So this could be different.
I'm definitely not saying that that's not possible, but if you look at technological advances throughout history, that has never happened before.
Speaker 3There's another aspect here, and it goes to the whole testing aspect of it. Think about it.
I'll give a different analogy.
Now, like you said, think about pilots with the autopilot.
I mean, if you think about pilots flying, for example, a commercial jet, the autopilot does like ninety-something percent of the actual flying. People don't know, but even the takeoffs and landings these days are pretty much automated.
But that does not mean you don't want a pilot in the plane's cockpit because occasionally you have situations that the autopilot can't properly handle and you want a person in the loop.
Speaker 1I've never heard that example before.
I'm gonna start using that.
Speaker 3Well, I mean, I'm going to charge you for that.
Speaker 5This past weekend, I got a chance to drive a Tesla, and the guy who owned it was a friend of mine, and he's like, dude, check this out.
And he set it on autopilot, and it was driving and steering and stuff, but it says right there on the screen, be ready in case something happens.
Right, you don't just sit back and take a nap and say, okay, wake me up when we get to where we're going, because something could obviously happen.
So you know, as you were talking, that's how I'm thinking.
It's doing a lot for you, but you're still there sort of keeping your eye on it.
Speaker 2Make sure it's okay.
Speaker 3Be there, be ready.
Sorry, I don't want it, because I'd be much more stressed out having my hands hanging over the wheel rather than simply just holding the wheel.
Speaker 4I don't know, but, well, my thinking on this is much more in line with what Anthony explained. And you know, I asked the question because I want to hear what he thinks.
I don't want to tell him what I think and then go, you agree.
Speaker 2You want him to confirm your ideology, right.
Speaker 4But I look at it like a lot of the other technological advances that we've seen, where people were like, you're going to put people out of work because you automated a factory, right, with an assembly line, or with robots or tools or things.
I mean, in some of those cases with physical products, yeah, you're only going to sell so many widgets, so you kind of see that there.
But with a lot of the other technological advances, and especially in software, my experience has been that our backlog is longer than we can do in fifty years.
Right, We've got a zillion things that we want to put in there, try out, run with whatever.
And so now, if you've got developers that cost you more or less what they cost you anyway, and you can give them these tools to accelerate, you just wind up doing more things rather than laying people off.
You know, that may not always happen in every case, right, There may be people that go, you know what, we've cornered this market.
We're pretty comfortable where we're at, and so yeah, they wind up doing the other thing.
But my experience is that the cost of making software that does what people need is going to wind up going down, and that's going to carry over to all of the other areas of the economy, and the companies that fail to innovate with this stuff are going to wind up getting left behind. And yeah, you're going to accelerate all the people you have and make them way more efficient so that you can stay competitive.
Speaker 3Also, look, do I think that we might eventually get to a point, I don't know when it will be, you know, people talk about AGI and stuff like that, where you don't need a person in the loop?
Maybe. But when that happens, it won't stop at software.
So if you're saying, you know, your job is at risk, well, I could argue that eventually every job is.
Speaker 1Who made the LLM, then?
Speaker 3At a certain time.
Speaker 4If you're talking about AGI, that's actually a different animal than what we're dealing with here, where it's more capable of actually making decisions. Right now, the LLMs kind of do need a human running them, and so the idea of an AGI or a superhuman intelligence, the idea behind those is that you don't need the human behind them, and
Speaker 2We're just not there yet.
It's a different thing.
Speaker 3I do think that it is changing what it means to be a developer and the skill set associated with it.
Oh absolutely, although again I'm not one hundred percent sure exactly how, because like you said, at the end of the day, for example, Anthony, you said you're still going into the code and fixing various things by hand.
Now maybe with better models you would do it less frequently, but you still kind of need to be able to do it.
It's also interesting for me that at the end of the day we are generating code in React or generating code in Solid, and you have to kind of stop and ask, like, if at a certain point in time it's not a person interacting with the code, why does it even need to be human-readable code?
Speaker 1At least something the browser can interpret.
That's going to be the thing.
You'll not be able to go beyond that.
If you're building for the web, the platform you're building on will kind of define that, unless the platforms change and start accepting English language and then spin up code on the spot.
Speaker 2Yeah, probably, but that has a lot of issues.
Speaker 3That's probably not that energy efficient.
Once you have the task worked out, you probably want something more automated.
Speaker 1I'm just glad we all agree now that React is the only thing we'll ever write for the rest of our lives.
Speaker 3No, because we did have an interesting conversation recently about the fact that eventually you might get to a point in time, if you're talking about an agentic web, kind of like, you know, think about Google now, where you put a query into Google, and instead of the search results, you're looking at what Gemini generates for you.
And currently what Gemini generates is mostly text, but you could theoretically think about the future where Gemini effectively generates a web page for you based on what you requested.
Speaker 4Hang on, I've got to take off, so I'll let Anthony answer, but I've got to go.
Speaker 2So yeah, really interesting to see how this wraps up.
Speaker 3Bye, guys.
So Anthony, now it's just it's going to be just the two of us, I guess.
Speaker 2Yeah, now we get to do all the talking.
Dan, same question to you: where are you working these days?
Speaker 3If we're taking a detour, we still have an audience, so just so you know, this is not a private conversation.
So, what I've actually been working on for the past year: I left the previous company that I worked at, Next Insurance, about a year ago.
By the way, they just recently got sold, which is nice, because I kept my stock.
Hell yeah. Yeah. And now I'm working at a company called Sisense, which does analytics, and we are very much impacted by AI.
Both in the development process, where we are kind of doing the AI revolution inside.
Basically, management came and gave a mandate that all development, I won't say it's vibe coded, we're not there yet, but all development is now AI assisted and even AI driven.
Speaker 2That's super interesting.
So this is relevant to the conversation then.
Speaker 3Yeah, for sure.
So for example, all the TypeScript and JavaScript development, because we still have some legacy stuff written in plain JavaScript, all of that is now being done with Cursor.
Interestingly, a lot of our back end is implemented in Java, and Java people are having a hard time with it.
They've tried to use Cursor, and, I don't know exactly why, I haven't looked into it personally, but they're having a hard time.
Speaker 1I can tell you exactly why: because the models weren't trained on Java.
Speaker 2They were trained on Python and JavaScript code.
Speaker 3But I think it's even beyond that.
You know, Cursor is essentially VS Code with stuff on top, and they're facing all sorts of challenges getting it to work with the Java tooling, stuff like that.
Speaker 2That tooling is all designed for IntelliJ, yeah, stuff like that.
Speaker 3So some of the developers have literally tried to work by having both IntelliJ and Cursor open at the same time, working in Cursor but doing all the builds from IntelliJ and, you know, having them sync, and they're not having fun.
Speaker 1Yeah, well, that's how I always felt when I had to write Java, so now you know how the rest of us felt.
Speaker 3Well, look, I've gone through all the programming languages, just so you know, and I started with the
Speaker 2Worst, right? No comment. I didn't hear that Java is the best.
Speaker 3Well, let's put it this way.
Well, let's put it this way: I had a certain stint using Visual Basic, and moving from Visual Basic to Java was actually a pleasurable move, because I prefer the C-style curly bracket syntax over, you know, If and End If and stuff like that.
But going back to our original topic, so you're saying that, let me put it bluntly, would you have been able to do this project without this whole AI assistant development?
Speaker 1Probably not, I would say, because you know, it's kind of a question of I could, but it would have taken so long that almost wouldn't have been worth it for me for me to do it, you know.
So I do think it has been a step change in enabling me to build stuff that would have been very challenging for me to have built in the past in a reasonable amount of time.
Speaker 2You know.
Speaker 1And I fully admit this is partly to do with my level of experience.
You know, I didn't start coding until my late twenties.
I've been a professional dev for about five years now.
For someone who has been a professional dev for fifteen years, it'll be very different in terms of what an LLM enables them to do.
I think it will speed them up if they use it correctly, but it will be less of a step change in enabling them to build brand new stuff.
And there's actually academic research to support this, showing that people who are, you know, junior to intermediate gain the most from LLMs.
That's not just in software development, that's across fields, like legal and stuff like that.
Speaker 3So I'm having an interesting experience right now.
As I mentioned, we're using it with a lot of existing legacy projects.
So for example, we have a legacy project which is implemented in JavaScript with React and class components, and manually transitioning from class components to functional components and hooks is, well, very labor intensive and fairly boring.
Not a lot of creativity involved in that, let's put it this way, and it's definitely something that you can tell an LLM to do.
You'll probably need to fix the results, but maybe you can use tests to check them.
Well, obviously, whenever you're doing refactoring, let's put it this way, when you're doing refactoring, it doesn't really matter whether you're doing it manually or with an LLM or whatever.
You need to have robust tests as a baseline.
I would never start a major, significant refactor without good test coverage.
But it's really effort intensive to do this kind of thing, labor intensive, and it's not very interesting work, it's not very rewarding work.
So if you can jumpstart this process with an LLM, that's a significant upside.
So it's not just for let's vibe code a new project.
From my perspective, it's also very very useful when it's let's refactor an existing project and get it to where we would like it to be, something that otherwise might be so labor intensive that we might never actually do it.
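To illustrate the class-to-hooks conversion being discussed, here is a minimal before-and-after sketch. The component is hypothetical, and to keep the snippet runnable without installing React, a tiny one-slot `useState` stand-in replaces the real hook; the shape of the refactor is the point, not the stand-in.

```typescript
type Props = { label: string };

// Before: a class component, with state living on the instance.
class CounterClass {
  state = { count: 0 };
  constructor(public props: Props) {}
  increment(): void {
    this.state = { ...this.state, count: this.state.count + 1 };
  }
  render(): string {
    return `${this.props.label}: ${this.state.count}`;
  }
}

// Stand-in for React's useState: one slot that persists across render calls.
// (Real React keys state to the component instance and render order.)
let slot: number | undefined;
function useState(initial: number): [number, (n: number) => void] {
  if (slot === undefined) slot = initial;
  return [slot, (n: number) => { slot = n; }];
}

// After: the same component as a function, with state held by the hook.
function CounterFunction(props: Props): { view: string; increment: () => void } {
  const [count, setCount] = useState(0);
  return {
    view: `${props.label}: ${count}`,
    increment: () => setCount(count + 1),
  };
}
```

The conversion is mechanical in exactly the way described: lifecycle and instance state map onto hooks, which is why it is tedious for a person but a reasonable task to hand an LLM, with tests guarding the behavior.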
Speaker 2No, I totally agree.
Speaker 1I tell people the same thing, and I think this may finally be the point in time where we, like, rewrite all those COBOL apps, you know, or get off of Java so that you could then have a better dev experience with LLMs.
Speaker 2So I agree, you can now do a lot larger refactors a lot faster with LLMs, for sure.
Speaker 3It'll be interesting telling an LLM, here's the Java application, rewrite it in Node or something like that, and seeing what happens.
Speaker 2Yeah, that'll be hard.
Speaker 1The React class component to functional component conversion would be a little easier, because you can do, you know, pieces at a time.
But if you have to switch the entire language, you almost have to refactor the entire thing at once.
Speaker 3Yeah, that is true. But about that interesting point you raised, being an experienced rather than a relatively less experienced developer:
I can see it working both in favor and against both types of developers.
Like let's consider the junior developer.
On the one hand, let's say in the past you might have had a team with one senior developer and two juniors. Well, now you might just give the senior an LLM and tell them you don't need the juniors anymore. You can effectively get the LLM to do the junior stuff for you, and then, going back to Chuck's question before, you're potentially undercutting the work of junior developers.
Do you think there's a risk of that?
Speaker 1Potentially.
I think it'll be an economic question for the company: do they get more value out of having their senior dev have an LLM do the junior dev work, or would it be better to have the two junior devs use an LLM and get closer to senior work? Because then you could think of it as having a senior and two half-seniors.
Speaker 3But you would still need the senior, or could you?
So it works both ways.
It could be, maybe I just keep the senior and have the LLM instead of the two juniors, or maybe I could have the two juniors working as semi-seniors and do without the senior, or maybe with fewer seniors.
Speaker 1That gets back to what Chuck said though, is that you do have a finite amount of work for them to do, or is it once they've done their work you have more work for them to do, in which case having more means you do more work.
Speaker 3My opinion, by the way, is that you would still need the senior, but for a slightly different thing. From my perspective, at the core of software development is the ability to take complex problems and break them down into several simpler problems, solve those, then take the results and push them together, and get the solution for the original complex problem.
And you do this recursively or iteratively until you get to such simple problems that are relatively straightforward to implement, and even if you're using an LLM, you'd still need to have a person kind of driving this process.
I mean, think about you developing your app.
You couldn't tell the LLM, here's my idea, now go develop an app for it, do the whole development process, and just send me a text when you're done.
Speaker 2You know.
Speaker 3Again, maybe we'll get there one day, but we're totally not there yet.
So that's what I think you need the seniors for, and in a lot of ways, that's what you need the seniors for with juniors anyway.
Speaker 2Yeah, no, I agree.
Speaker 1So I think the argument you're making, and which I agree with, is that seniors are probably going to be sticking around. If people are being replaced because of LLMs, it's probably going to be harder to break in if you're
Speaker 3A junior. Which raises an issue for the industry, because if you don't have juniors, well, where will the seniors come from?
Speaker 2Yeah?
Speaker 3So, you know, when you're vibe coding, how do you know when you're done?
Speaker 1Well, for me, it's just when I've built the feature that I wanted to build.
You know, like you were talking about how the point of the seniors is to kind of have this vision: what is the problem you're trying to solve, and how do you get there?
So for me, it's like I first had this initial vision of a workflow I wanted where I could generate show notes without any manual steps along the way, and then from there I've just been adding on more features, like oh, I wanted to be able to work with not just audio files but also video files so it can extract the audio.
Or, I want additional prompts so it can do more. Or, oh, I want to be able to run it twice on different LLMs. Or, oh, once it gives me the output, I then want to run text-to-speech so I can listen to the show notes. Or, oh, I want to now generate a cover image, so I need text-to-image.
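The feature-by-feature workflow described here can be sketched as a pipeline where each new capability is one more step. All the function names and their stub bodies below are hypothetical stand-ins (the real app presumably calls ffmpeg, a transcription API, and an LLM API):

```typescript
type ShowNotesJob = {
  audio: string;
  transcript: string;
  showNotes: string;
};

// Stand-in steps; each one marks a place where a real service would be called.
const extractAudio = async (input: string): Promise<string> =>
  `audio(${input})`; // real version: ffmpeg pulls the audio track from video
const transcribe = async (audio: string): Promise<string> =>
  `transcript of ${audio}`; // real version: chosen transcription service
const runPrompt = async (prompt: string, transcript: string): Promise<string> =>
  `${prompt}: notes for [${transcript}]`; // real version: chosen LLM

async function generateShowNotes(
  input: string,
  prompt: string
): Promise<ShowNotesJob> {
  const audio = await extractAudio(input); // video or audio in, audio out
  const transcript = await transcribe(audio);
  const showNotes = await runPrompt(prompt, transcript);
  return { audio, transcript, showNotes };
}
```

Each "oh, I also want..." feature from the description (a second LLM pass, text-to-speech, cover image) would slot in as one more awaited step in the same chain.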
Speaker 3You should just release it.
Speaker 2Well, it is released, That's what I'm saying.
Speaker 3It is released.
Yeah, people can use it.
Cool. So, obviously, I think you're right.
You gave the links both to the production version and to the development version that people can try out.
Speaker 2Yeah, people should go to, I could screen share, and I can show you if you want.
Speaker 3Yeah, well maybe we'll do it quickly.
Although again for people listening on the podcast.
Speaker 2Yeah, I can.
Speaker 1I'll talk through what's happening, so it'll be quicker.
It won't take me very long to go through it.
So you have an interface where you just start
Speaker 3Before you continue, to our listeners:
Again, Anthony will be describing what he's showing.
But if you really want to see it, you'll be able to find the video on YouTube, because we always release the episodes on YouTube as actual videos as well.
Speaker 2Yep.
Speaker 1And then, how do I share, how do I get this off screen?
Are you able to see the riverside thing right now?
Speaker 3Yeah?
Yes, I am. Okay, I'll just drag it a little bit.
Yeah, like that.
Speaker 2Okay, great.
Speaker 1So you start with selecting either a file from your computer, or you give it a video URL link.
Speaker 2It doesn't just have to be YouTube.
Speaker 1It could be something like Twitch or Vimeo; any kind of service works.
And then after you select the thing you want to process.
Speaker 2Oh wait, that's good.
Speaker 1Yeah, then it gives you the available transcription services and you select which one you want.
There's a credit system, so depending on whether you use a more expensive transcription model versus cheaper, you'll have different credit options, and you just it's just per usage, so there's no subscription.
It's it's pretty simple.
After you select your transcription service, you select the prompt you want to use.
There's a ton of prompts right now.
Things based around content, so different length summaries, and chapters.
You can pull out quotes.
You can create social media posts, blog posts.
You can do business related stuff, create email campaign series, press releases.
You can do creative stuff like songs and short stories.
You can create educational material, and then stuff like personal professional development.
So something I'm going to add: when you hover over them, it will show you a quick preview of each, so you can kind of tell what it's actually going to give you.
That's just something I haven't done quite yet.
Then you'll select your LLM.
Speaker 3Not a question, no, just a thought or suggestion: eventually you may want to implement some sort of a plug-in mechanism, which would make it both easier for you to add new features without having to release new versions, and also make it possible for third parties to add their own plugins to your system.
Speaker 2So what would be a plugin there?
Speaker 3Like, all those services that you showed before would be plugins.
Speaker 1Okay, but somebody would still have to deploy that, and then
Speaker 3Yeah, they would need to be somehow deployed securely into your system.
I'm not saying that it's trivial, but it would make the system a lot more extensible.
Generally speaking, whenever something is a service that you invoke from your system, you should always think about whether it can or should be a plug-in, because, A, it kind of decouples the infrastructure itself from that particular type of processing, and it also makes it possible for somebody else to effectively extend your system.
Now, then you get into other interesting questions, like you said: how do I do something like that securely?
How do I prevent data leaks?
How do I monetize it?
Stuff like that.
But still, it opens up a lot of possibilities.
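One minimal way the plug-in idea from this exchange could look, assuming a registry keyed by plugin name; the interface and names are hypothetical, not the app's actual design:

```typescript
interface ProcessorPlugin {
  name: string;
  // Takes the current artifact (say, a transcript) and returns a transformed one.
  run(input: string): Promise<string>;
}

const registry = new Map<string, ProcessorPlugin>();

function registerPlugin(plugin: ProcessorPlugin): void {
  registry.set(plugin.name, plugin);
}

async function runPlugin(name: string, input: string): Promise<string> {
  const plugin = registry.get(name);
  if (!plugin) throw new Error(`No plugin registered as "${name}"`);
  return plugin.run(input);
}

// A third party could ship a step like this without touching the core app:
registerPlugin({ name: "shout", run: async (input) => input.toUpperCase() });
```

As noted in the conversation, the registry is the easy part; securely deploying and sandboxing third-party plugins is where the real questions are.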
Speaker 1Yeah, and I am planning on having the kind of back end API of this be exposed to other people, so, like, Chuck, if he wants to use this, he wouldn't have to go through the front end.
Speaker 3Yeah.
Yeah, but that's using you as a service.
I'm talking about you using other services to perform particular operations.
Speaker 2Yeah, it's possible for sure.
Speaker 3Now, so one of the issues that I recall people bringing up: when people who had no experience in software development at all vibe coded services that they actually sold or provided, they ran into all sorts of security issues, silly things like putting their AWS keys in the front end code or stuff like that, because obviously the LLM doesn't care.
Given that you effectively are not writing a lot of the code that you're generating, how are you preventing these sorts of things from happening?
Speaker 2Yeah, so let me actually finish the flow first, and then I'll answer.
Speaker 1Yeah, sorry, sorry. Yeah, that's all good.
So then you select the model you want.
We have ChatGPT, Claude and Gemini, and then it gives you the show notes. This styling needs to be fixed; it'll be a little nicer later.
Speaker 1I actually ran this on JavaScript Jabber's first episode, so we can kind of see what the output is.
So we see here we have an episode summary.
I'll just read, like, the first sentence: this inaugural episode of JavaScript Jabber introduces the podcast and panel members, featuring AJ O'Neal, Peter Cooper, Jamison Dance and Charles Max Wood.
Speaker 3The old school guys.
Speaker 1Yeah, and then it gives you the chapters, and then if you keep scrolling, you'll see the prompt that was used.
When I first shared this with my friend Scott, he thought that was a mistake.
He's like, you know, you're showing people the prompt, right, And I'm like, well, yeah, I want him to see the prompt that was used.
Speaker 2To generate He's like, but that's the secret, sauce.
Speaker 1Why would they use your service and they can just use the prompt And I kind of get they're coming from.
But there's all this other stuff beyond the prompt that the app does. Like, it takes and processes the video, and connects to a transcription service, and connects to an LLM service.
It does the whole thing.
So I think there's still value to the app even if you do include the prompt.
Speaker 3That might be. I agree, because also, people use these sorts of things in order to save time and effort.
If, you know, they copied the prompt, they could get some of the functionality, but at a significant expense of time and effort; the process would certainly not be as streamlined, so, for sure.
Speaker 2And then, this is a bug, the transcript should be here. That's something for me to fix.
Speaker 1And so the transcript will be included along with the show notes, and then you can see some configuration like what model did you use?
How many credits did it cost you?
And then metadata, including, like, the title, the YouTube link it came from, the YouTube channel it's connected to; you can see the cover art from the show as well.
So a lot of this you could kind of take, you could turn straight into, like, a markdown file with front matter that can be a web page for each of your episodes. That's kind of what I do on my website for all the videos I make.
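The markdown-with-front-matter output described here could look something like this sketch; the field names are hypothetical, not the app's actual schema:

```typescript
type Episode = {
  title: string;
  youtubeUrl: string;
  model: string;
  showNotes: string;
};

// Emit a markdown document with YAML front matter, the shape static site
// generators typically consume to turn each episode into its own page.
function toMarkdown(ep: Episode): string {
  return [
    "---",
    `title: "${ep.title}"`,
    `youtube: ${ep.youtubeUrl}`,
    `model: ${ep.model}`,
    "---",
    "",
    ep.showNotes,
  ].join("\n");
}
```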
Speaker 3It seems that LLMs have really been revolutionary for content creators, and that's yet another great example. Like, at the end of the day, it's a person creating the content, but it's the LLM doing a lot of the chores around the actual content, stuff that otherwise would have taken away time that could have been spent creating even more content.
Speaker 2Mm, yeah.
Speaker 1So going back to your security question. For me, I do have enough experience to know basic security things, like don't put API keys in your front end, stuff like that.
So that is where, you know, just having some sort of dev experience comes in handy.
If you're not a dev at all and you're trying to launch a whole app, that's very risky, and I would recommend trying to find someone you could pay some small amount of money to do at least a very, very baseline security check for you.
If you can't do that, spend a lot of time prompting your LLM to have it give you instructions for security testing your app, for higher level stuff like cross-site scripting and things like that.
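As a toy version of that baseline security check, a script can scan anything that ships to the browser for strings shaped like secret keys. The two patterns below are illustrative only, and a real project would use a dedicated secret-scanning tool rather than this sketch:

```typescript
// Patterns for common secret shapes (assumptions, not an exhaustive list):
// an AWS access key ID, and a generic "sk-..." style API key.
const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,
  /sk-[A-Za-z0-9]{20,}/,
];

// Return the first match for each pattern found in source that will be
// shipped to the client, e.g. a built front end bundle.
function findLikelySecrets(bundleSource: string): string[] {
  const hits: string[] = [];
  for (const pattern of SECRET_PATTERNS) {
    const match = bundleSource.match(pattern);
    if (match) hits.push(match[0]);
  }
  return hits;
}
```

Running a check like this against the built bundle in CI is one cheap way to catch exactly the "API keys in the front end" mistake mentioned above before it ships.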
I am in the process right now of kind of hardening the application.
I didn't roll my own auth, I didn't roll my own payments, so there I'm also kind of leaning on those services to do some of the security for me.
I'm banking on Clerk and Stripe handling, you know, their end of the security, and then I just have to make sure that the app itself can't get hacked, so that someone can't get access to, like, your credits and the credit card that's attached to Stripe and things like that.
Speaker 3And the main advantage, obviously, aside from, you know, reducing that effort, is the fact that there are a lot of examples out there of people integrating with Clerk and with Stripe, so the LLM is not short of proper usage examples.
Speaker 2Yeah, totally.
Speaker 1And this raises the value of these third party services and leans even further into the don't-roll-your-own-auth kind of argument, which I see both sides of.
Speaker 2For sure.
Speaker 1I'm not saying no one should ever roll their own auth, but for me and what I'm doing, what I'm building, it definitely makes more sense for me not to.
Speaker 3Okay. We are running towards the end of the show, so before we wrap up, is there anything else you would like to say, either about the service you've created, or about vibe coding in general, or anything that we might have missed?
Yeah.
Speaker 1I mean, I just encourage devs, or even non devs, to try and work with LLMs as much as they can, and try and really understand how they work, what you can do with them to get the most out of them, and what their weaknesses are. And a lot of this you're only really going to get through experience.
I personally find it really fun.
I'm having more fun developing now than I ever have.
One of my friends was like, it's so boring to code with an LLM, you just sit there and wait for the LLM to write the code.
And it's like, well, you know, do some dishes in the middle.
Speaker 2That's what I do.
Speaker 1I literally do chores while I code.
I'll do a prompt, and while it's writing code, I'll do, like, a minute or two of chores and then come back to my computer.
Speaker 2That's kind of it.
Speaker 3It kind of reminds me of the old days when we would be running compilers, you know, doing a lot of coding in C++ and stuff like that on the old computers way back when, before we had the current ones. Builds would take a while to run, and we would sit there and, you know, surf the web or whatever while the compiler was running.
Speaker 1There's that XKCD comic with the two dudes, like, sword fighting in the office, and it's like, hey, what are you doing? We're waiting for the code to compile.
Speaker 3Okay, yeah, exactly. So now we're waiting for the LLM to finish.
Speaker 1Yeah, I've actually made that same comparison, for sure.
So yeah, I would also say try out different models, you know, see which ones you feel like work best for you, and don't feel like you have to, like, dive headfirst into the entire AI space.
Really, the most important thing is just, you know, finding a model that works for you and can be useful for you, and trying to work it into, like, your day to day stuff, even if it's not coding.
If you have some sort of other task that you feel like could be automated or done faster, just throw it into an LLM and see what happens.
And they're getting better all the time.
They're connecting to more external services all the time.
I would say, don't worry about jumping into MCP yet.
Like MCP is cool and it's going to be revolutionary, but it's super new and it's probably not something that people should really jump into unless they like working with new tech when it's kind of in changing breaking states, you know.
Speaker 3Yeah.
Cool.
So we usually have picks, but to be honest, I don't have any special pick today.
Do you have anything that you would like to shout out as a pick before we finish?
Speaker 1I mean, the things I mentioned throughout the episode.
Repomix is cool.
Check out Claude, definitely.
If you're looking at it for code, I think that's probably the best one right now in terms of, you know, price and features and speed and all that.
Speaker 2And then check out Autoshow.
Let me know what you think.
Speaker 1I just launched it like a week ago, so it's still pretty new, and there may be some bugs and things like that.
Just hit me up online if any of that happens.
I'll give you some free credits to test it out.
Speaker 2If you want.
I'm ajcwebdev everywhere on the internet: X, YouTube, LinkedIn, GitHub. So yeah, check it out.
Speaker 2Let me know if it's useful for you.
Speaker 1Let me know if there's features you want me to build or prompts you want me to add.
You'll have the ability to write your own custom prompts pretty soon if you don't want to use any of the regular prompts.
Speaker 2And, uh, yeah, that's pretty much it.
Speaker 3Excellent.
Thank you very much for coming on the show.
I think you shared a lot of super useful information.
I think we're literally watching a revolution, and you're a case in point.
So thank you again, and to all our listeners, thank you for listening in, and see you next time.
Bye.