Radio Better Offline: Edward Ongweso Jr. & Allison Morrow

Episode Transcript

Speaker 2

Hello and welcome to Better Offline.

I'm your host Ed Zitron, and we are recording here in beautiful New York City, and I have a wonderful pair of guests today.

I have, of course, Allison Morrow from CNN's Nightcap newsletter, and Edward Ongweso Jr., the hater himself, from the Tech Bubble newsletter.

Thank you so much for joining me today.

Speaker 3

Great to be here, as always.

Speaker 2

So I think we should start with exactly what we were just talking about.

OpenAI claims that they have worked out what causes hallucinations.

Allison, do you want to go over this?

Speaker 3

I should have read the paper a bit more carefully, but I can, you know.

The highlights were getting digested yesterday on X and Bluesky, and it seems like it's kind of the test-taking problem: when students are taking standardized tests and don't know the answer, they're encouraged to guess. And that's exactly how these models are trained.

You don't get a point if you say I don't know, so you come up with something, and the models are meant to keep guessing until they get something close to right.

Right. So that's why you get a lot of nonsense and hallucinations, and OpenAI, at least in their reading of it, says, oh, this is a simple solution.

We'll just encourage the models to understand better when it's a binary question and when they can say, I don't know the answer to that. So we'll see.

Speaker 2

I read the paper, albeit not today, because my brain just immediately read it as marketing copy and replaced it with another anime theme song.

I went through it and I was like, okay, so it's going to encourage them to say I don't know.

This feels like a very flat view of what hallucinations are, though, because hallucinations as people know them are authoritatively stating something that isn't true. But hallucinations in a coding model are different: it will just say, yeah, I did that, when it didn't.

This is very common. Going through the Cursor and Claude Code forums, or the subreddits at least, you can see this. And it's not just that they say something that isn't true.

They don't know that something's not true, either, so they might say they don't know. And it's just very silly, because they claim that they're going to fix this problem with this solution, but they've done years and hundreds of millions of dollars of reinforcement learning.

Do they un-reinforce it? Do they reinforce it more? I don't know.

I'm fucking tired of these companies, to be completely honest.

I realize this is kind of cliche for the show, but it's the fact that these things get written up as very serious things versus saying, you guys still haven't worked this out.

It's kind of frustrating to me.

And they don't seem to be able to fix the models. They're getting diminishing returns and all that, but this is the best they've got.

Speaker 4

I mean, does this kind of thrust feel like, you know, do you guys feel like it's downstream of the attempts by some of these firms to say, oh, actually, if we dial back the sycophancy, you know, then we'll be able to have a much more engaging consumer product?

We'll be able to have it hallucinate less, you know, and do psychosis less.

Speaker 2

You know.

Speaker 4

But do they feel linked in that way?

Or does it just feel like, you know, maybe another dead end?

Speaker 2

I feel like it's just them trying to work shit out. This also feels like a very rudimentary answer that they probably already had.

Anytime someone comes up with a technical idea that even I can follow, I'm like, okay, mate, you cannot be cooking with gas here. This cannot be that good an idea.

And the thing is, the sycophancy problem, I don't think, is solvable through solving hallucinations.

The problem is it should stop. It should not just say "I don't understand," it should say no.

Actually, you sound like you think I'm God King, and that you are God King, and that as God Kings together we will destroy the world.

Yeah.

I mean, in my case it's true.

Speaker 4

But do you think the emphasis, or the attempt to overcorrect on that, is leading them to go down solutions where they think, oh, if we just tweak this and that, then we...

Speaker 2

Yeah, yes, yes, actually, I think that's right. I don't know if you saw it, but the attorneys general from Delaware and California sent a letter to OpenAI last week saying, hey, look, you need to fix these safety protocols, you need to actually have them, because what you have right now doesn't work, and we will block your nonprofit conversion otherwise.

I was really happy to read that, right up until the bit where they say, we wish you well with your quest for AI dominance.

I'm just like, these are the fucking people protecting us from them? They're like, no, it's great, you want to dominate everyone with AI.

It's just that you drove some people to a murder-suicide situation.

Speaker 3

And wasn't that part of the problem with GPT-5? They tried to dial back the sycophancy, and then it took away the character and the humanness that people had gotten attached to in GPT-4o and in earlier models. And it all seems to come back to OpenAI not realizing that what it's selling often is a companion and a therapist.

It reminds me of Q-tips. You're not supposed to put them in your ear, right, but that's all anyone uses Q-tips for.

Speaker 2

I was gonna say, that's what they're not meant to do.

Speaker 3

Course not, but like that's the consumer, that's how the consumer has chosen to use this product.

And they're saying, well, we don't condone that, we don't think it's the best use of our product, and, you know, we know better than the consumer, of course.

Speaker 2

I actually think it's one abstraction higher, which is, I don't think they know what ChatGPT is.

My AI booster piece came out last week, and I had this whole thing in it where, so, they don't describe what ChatGPT does.

If you're on the website, it's like, yeah, it analyzes data, and it's brainstorming and charts and stuff. The agent does things too, please buy it. And then you try and actually look for any use cases, and there's nothing.

I think that they're just guessing.

But my favorite thing I'm seeing is that now people are like, GPT-4o isn't the same, because they brought it back and now people are just freaking out. People are like, no, it's not the same, it's different somehow. I honestly don't know if it's true.

I just think that they've entered gamer mode. This is just how gamers react. It's like, the gun isn't the same. It's literally the same code. No, you've changed something, I know you have. And it's what happens when you release an imprecise glazing bot onto the world.

It's also really funny how literally any of these companies could have been open AI.

There doesn't seem to be anything special about this company at all anymore or ever.

Speaker 4

Well, you know, I mean, not everybody can lose as much money, that feels very special, or have the connection to...

Speaker 2

Oh my god.

Man, I haven't heard from Masayoshi Son in a minute.

We haven't had an announcement of what tomorrow's gonna be.

I like that he bought an old Foxconn lab to turn into an AI server building place, in Ohio, I think it is.

It's like, mate, what are you doing? Man? Are you okay?

Someone needs to check in on him. We need to hold space for Masayoshi Son.

Speaker 4

Yeah, bring him on.

Speaker 2

I have asked him.

I have genuinely emailed SoftBank's PR, and I've emailed Anthropic's PR for Dario, and I haven't heard back from either.

I assume there's an email issue, because there's no other reason they wouldn't come on this show.

What's fun to watch, though, is that considering how many times we've had AI bubble conversations on here, everyone seems to be kind of waking up to it.

It's kind of fun.

To be clear, everyone here was early on this. Ed was actually very early. You've been early on everything, as early as anyone. No, I give you credit.

You're one of, like, the three people who, when the metaverse was happening, were actually calling it out.

So it's good.

It's good to see here, but also insane that it's still going. That's what I don't like.

This OpenAI "one hundred and fifteen billion dollars burned" story, by twenty twenty eight, I think it is. I don't understand how people keep publishing this and being like, and that will happen.

Speaker 3

I will say, to my relief, the metaverse went away quickly, because when it first was announced, I wrote a piece just being like, what?

Speaker 2

Yeah, what is what?

Speaker 3

And then everything about it seemed really dumb in every iteration coming after, and so I was like, oh, okay, I wasn't crazy. But with AI, and I know you can relate to this, I feel crazy, because the lack of utility is still there, and the absurdity of the investment is still there.

That's why I wrote about the vibe shift. It has been going in Ed Zitron's favor in the last few weeks.

Speaker 2

Fucking vindication, finally.

But it's funny as well, because even with that, this OpenAI story comes out and people are still like, yeah, they're gonna burn one hundred and fifteen billion dollars. People reported about a month ago that they're going to spend thirty billion dollars a year on Oracle starting twenty twenty eight.

How? How? How?

That's what I don't understand. How these articles keep coming out.

And I understand that reporters have to report the news and things they have discovered. I get it. But can no one just be like: no one has any idea where this money's coming from? No one. Thirty billion dollars a fucking year to Oracle for servers that are yet to be built.

Do they not need to worry about the fact that the servers aren't actually built? Abilene, Texas is not finished, the money doesn't exist, and it isn't obvious how Oracle affords it. They will have to take on debt to build this, and Crusoe and Primary Digital Infrastructure have already done that.

And, I mean, other than that, it could happen any day.

I just wonder if the media is actually not prepared for this.

And I don't mean in a conspiratorial way. I mean just: is the media actually set up for companies just lying, or just projecting?

Speaker 4

I mean, I feel like it reminds me of kind of the relationship that maybe art critics, or, you know, our coverage, might have with the medium, where there's not an inherent antagonism or skepticism toward the claims being offered; there's an assumption of good faith that continually gets betrayed or, you know, punished, but gets carried on with over and over and over again.

With the AI bubble discussions that we're seeing, part of me feels like they are going to disappear the second we start to see some of these firms announce some favorable metrics, even though, as we've been talking about for a long time, the revenues are not there, the profits are not there, the burn is only increasing, and there's no way forward in the short term that I can think of where these companies start to actually do the things they're claiming they're going to be doing with transforming the world.

But I can see a scenario where, you know, someone has a favorable quarter for adoption, even though I just saw yesterday that Apollo Global Management was talking about how large firms are actually scaling back AI adoption, which already wasn't even providing returns and was hurting productivity in the first place.

Speaker 1

Right.

Speaker 2

It's really weird as well, because there was the MIT study where they said ninety-five percent of generative AI integrations don't have any return on investment.

There are some people critiquing that number, but something that comes out of that study that I really liked was that enterprise adoption is high, but the actual transformation is low, because this shit doesn't work.

And it's so strange. I think the only reason things won't immediately unravel with a good quarter is because the media has chosen to follow a direction.

Now you've got the Atlantic, the goddamn Atlantic, publishing a story saying, yeah, it turns out that AI isn't really doing much, but it's holding up our economy.

It's like, holy shit, the Atlantic's willing to admit something that happened happened.

I didn't even know this was in them. I thought that they just wrote up whatever was emailed to them.

Speaker 4

To be fair, they just published Mike Solana.

Speaker 2

Yeah, actually, I retract all my statements. But I think with the media...

Speaker 3

I see it in political reporting and business reporting and tech.

There's a deference to authority that I think American media, but all media, have an issue with. And I think that speaks to the underlying economics of being in media right now, where there's a general chill, both economically and politically.

Reporters are worried about their bylines being out there and getting stuff wrong. And I'm not saying that's an excuse, but I do think that's an institutional mindset that has taken root, especially in the last ten years. It's just become really hard to be a journalist and to do it right.

But you are starting to see it change. That MIT report was so important because it caused people on Wall Street, authority figures, to say, hmm, I don't know about this. And then that got a lot of mainstream financial media to do the kind of questioning headline about AI that they maybe wouldn't have done six months ago.

Speaker 2

I do like that there's a rumor about the next DeepSeek model doing agents. They're not going to, but even if that comes out and they even claim they can, I think that might cause a market panic, just because they'll be like, ah, China.

Speaker 4

I mean, yeah, I actually think there's something to that, right? Because, you know, we did get agents here, and they didn't do shit for Salesforce, really. Did the market even notice?

Speaker 2

Really? Actually, in a wonderful story from The Information: OpenAI, in their whole projections through twenty twenty nine, have reduced the amount of money they expect to make from agents by twenty-six billion dollars. Why? Anyway.

But yeah, this DeepSeek thing could inspire people to get scared, horribly, because no one actually believes that agents exist. Look, they don't, but they think they do, but they will, but they won't.

I don't think I've ever actually seen anything like this in tech, I'm gonna be honest. It's worse than crypto, even worse than general generative AI, I think, this concept of agents.

Because I saw some fucking thing on some political blog saying that Donald Trump would do some sort of act, I forget the exact thing, but it would be an act that would make copyright holders just hand shit over to AI companies due to us needing to beat China.

And it mentioned, you know, the growing agentic capabilities of AI. It's just, what the fuck is that?

I've never seen a tech thing in my life like this, one that has not existed, and people talk about it like it's real.

Speaker 4

And I think it's also interesting to see that the more it doesn't manifest, the more some of the recommendations to make it happen just sound like, these are also things that might somehow bend the cost curve in our direction. Maybe we should make the Internet unusable to anything other than some of these programs.

Yeah, right. And, you know, I'm curious which one is going to give out first: the really savvy ability a lot of these firms have had in spinning a crisis that might drain the markets or deter investors into "oh, actually, we just need even more capital," similar to what they did with DeepSeek, where the solution is even more compute-intensive models. Whether they're going to be able to do that faster than people wising up and saying, you know, maybe we shouldn't misallocate trillions of dollars of capital over the next few years toward this.

Speaker 2

But this is the thing, though: I don't think there's anything stopping this, because the suggested thing was, okay, so all of these model companies can steal from everyone, which is what happened already, even with this Anthropic settlement.

It's great people are getting three thousand dollars. I love that, but...

Speaker 4

It's also that what the companies are offering is similar to what your payout would have been if you'd said yes, right? I didn't know that. I think some publishers were offering, or, you know, trying to ask authors: hey, if we pay you this, would you allow your book to be trained on, would you allow it to be put into a data set?

The payment that you're getting from the settlement reminds me of the amount of money that people were being offered in those sorts of deals.

Speaker 2

I think the thing is: okay, they already steal everything. Well, okay, we need to give them as much money as possible. We've already done that. Are we just going to do this forever? Because even if we do this forever, nothing's going to change, even if I'm completely wrong and OpenAI keeps going another five years.

Okay, so we're just going to annihilate one hundred and fifteen billion dollars. There are no more things here. Like, their projections, OpenAI's projections from The Information, their chart. I don't even need to show you this, because it's just fan fiction at this point.

Starting in twenty twenty six, there is this growth of this orange thing that is "other revenue." Who knows what it is? I don't know. OpenAI doesn't seem to. And that's really important, because they're going to make what looks like several billion dollars from this next year.

What the fuck is going on? Every time I look at this company, I feel a little more insane, because they've now lowered their expectations of selling access to their models by five billion dollars over the next few years.

What even is OpenAI at this point? Is it just a wrapper? Have they become a wrapper company of their own models? They're no better than Cursor.

It's just so weird, and I realize I'm kind of going in circles at this point, but even the metaverse, even crypto... even crypto functioned. It was bad. It's still bad. It is bad cloud software, but it still did the thing.

AI doesn't even seem to be doing it, and they need more money to prove that it can't do it. And actually they don't have enough right now, but they're going to need even more.

I don't even know how people are still taking this seriously, because on top of that, did you hear about the Microsoft negotiations over the nonprofit?

Speaker 4

I've been hearing.

Speaker 2

Well, they're delaying it to next year. They have to. They need to convert by the end of the year, otherwise SoftBank cuts their round in half. And everyone's just like, yeah, it'll be fine, mate, they'll work it out. What the fuck?

Have either of you ever seen anything like this? I know, Ed, you covered Uber a lot. I don't even think the economics match with that either.

Speaker 4

No.

I mean, you know, it's interesting, because I think Uber's central strategy from the beginning was: we have a few existing playbooks that we need to reference, you know, the Koch deregulation of the taxi industry in the nineties, as well as the deregulatory campaigns they led in Seattle and historic campaigns in San Francisco. There's a lot that we can reference. And if we can figure out a way to bootstrap ourselves onto that model and onto those previous histories of deregulation, while delaying scrutiny of our economics long enough to actually get to a profitable place, we'll get there. Which is what they did, right?

But even then, as much as I hated a lot of the coverage of Uber for years, the people who were always correct about it were the labor reporters. If you actually spend time talking to the drivers, that will lead you to be a little bit more interested in, you know, what can justify the suffering behind this, and then you almost always will see that there's no answer. At that point, there was no way the unit economics worked unless you subsidized everything.

Speaker 1

Yeah.

Speaker 2

I feel like.

Speaker 4

Similarly, there's something going on with artificial intelligence firms and the global AI value chain, where you start with a labor analysis and you look at, you know, invisible workers or ghost workers that are integral.

Speaker 2

The Kenyan people training the models. Mm, yeah.

Speaker 4

Or labeling, or, you know, any laborer that's out of sight, out of mind.

Starting there and going up, it becomes hard to answer: okay, how are we supposed to allocate all this capital toward a model that, as it is right now, is cutting all these corners for costs, and is still burning tens and tens and tens of billions of dollars, and is asking for trillions of dollars more?

Speaker 2

But I don't know.

Speaker 4

I mean, part of my fear is that they are successful in the way that Uber was. In AI's case, if you get enough buy-in from the military-industrial complex, if you get enough buy-in from, you know, social programs, interfacing with them and helping cut them or redirect traffic through them, if you get enough buy-in from other tech firms, if you get rents from other startups that need access to your product, and also if you graft yourself onto everybody's daily interactions, daily lives, the way they interface with the Internet, can you actually make it work?

Which is also another way to say: what if you just become a massive parasite?

Speaker 2

The funny and grim thing is that AI is a terrible parasite. It's not good at it. Because Uber's success came from being able to graft itself on through utility and subsidized pricing. That meant that everyone used it, and also cabs kind of fucking sucked.

Yeah, I mean, yeah, they sucked.

Speaker 4

And also transit in most cities sucks, and I mean it has only gotten worse.

Speaker 2

The inherent colonialism of most technology applies. Very good stuff from Karen Hao, of course: Empire of AI. It's a great book. But in this one, it's kind of shit colonialism as well, because they haven't even found a way to actually exploit in a way that's profitable. They haven't found a way to use human beings for the fundamental thing they want to do.

It's kind of like if, with Uber, sometimes you got in the car and you got out at the wrong place, and I don't mean in a different country, or you got into the car and it just exploded sometimes. And I sound like I'm joking, but it really is that bad.

And on top of it, it's not replacing labor. And it's also not the kind of tech that can replace labor.

So it's my grand theory that they're just playing the hits. In the same way that, as you just eloquently put it, Uber played the hits of this is how we did deregulation, this is how we did growth, this is how software grew in the past, I think that AI is trying to do the same thing, and it's bad at it.

It's like watching a new class of dipshit try and do what the more evil dipshits of the past did, and fail.

In fact, these lobbying groups lobbying for AI: I hear a lot of people saying, oh, they're lobbying, they're lobbying. It's like, what for?

Oh no, they're gonna build data centers everywhere. They already do. They're gonna steal from everyone. They already do. They're trying to replace labor. They're already trying. Everything that everyone's scared of, they already can do, other than the doing part. They can't do any of that.

I'm sorry, I just remembered as well, yesterday, I went on the ChatGPT Pro subreddit, because I hate myself, and I was trying to find someone who'd used the agent, and every post was someone saying, has anyone ever used the agent, got any tips? And everyone's just: it doesn't fucking work. It's broken.

Actually, here's a good question, Allison and Ed: can you think of a company that's ever released something just completely broken before? Because the metaverse kind of worked. It wasn't what they were promising, but it worked. It was a virtual world, ish.

Speaker 3

I mean, the pharmaceutical industry has a nice long history of putting out quasi-effective drugs that have all kinds of consequences.

And I can't remember who skeeted about this a few weeks ago. It was after one of those, you know, it's become such a genre of journalism right now about AI: this man became delusional and had a psychotic episode because of his ChatGPT relationship. It was one of those going around, and someone skeeted about how, if this were coming from a pharmaceutical company, it would be recalled immediately.

There are real regulations in place that could actually claw that back and help save people's lives. But there are no regulations around AI, so we get ChatGPT gods and spiritual awakenings and all these psychotic episodes.

Speaker 2

I do think that that stuff is genuinely going to be its downfall, though, because right now it's burning more money than anyone's ever burned before. And the most common use case people can talk about is, yeah, it drove that guy insane, that guy went crazy. There are children, which is horrifying, killing themselves because of this thing. That's what it's getting known for.

And otherwise it's like, yeah, your most annoying friend loves this. Because really, it's them. You love them, but they're like, I learned about, well, listen, ChatGPT says... It's like, you didn't, well, you know.

Speaker 4

Part of my fear is, I think, similar to how, when firms were rolling out facial recognition surveillance and insisting that we needed biometric surveillance to help keep cities safe, communities safe, products safe, one angle that people used to attack it was: well, you know, the racial bias of these things will make them misidentify Black or brown people more often than not, and they might get arrested, they might get targeted by the police, one way or another, and that is why we should get rid of the technology. Versus: we should get rid of the technology.

And I'm curious how it's going to go with the concern about it inducing psychosis or inducing suicides, because I could easily see a scenario where they patch together something like a fix, and it's not until later, a year, two, or three, after people are much more dependent, that other harms come to the foreground, and we lose something. God, and not to say or marginalize the fact that it has immense social costs and harms here, but it does in some ways remind me of the way the debate over facial recognition went. They "solved," with quotation marks, the racial bias problem, and now people have more or less accepted that facial recognition is okay, actually, as long as it's not racist.

Speaker 2

And that's the thing. People will say, this is a white bloke talking, but people really underplay how endemic that racism is within all algorithms. You know, COMPAS, which is this very, very, very old algorithm, is basically Minority Report, both in the reference to the thing and in that it reports on minorities, in that it says, yeah, this person will likely offend again. And it isn't unilateral, the judge doesn't have to take it, but, what a surprise, it's often used to send Black people into the jail system, because it's heavily biased against them.

And yeah, I somewhat fear LLMs doing similar things. They're probably already doing them. And I think that every algorithmic system is inherently racist. There are not enough people running them who actually fucking try. It's inherently biased against women, too. I wish I had this in front of me, but there's also something about how there were more fans of generative AI who were male than female.

Speaker 4

Do you think it's possible that you know, they'll try to say, oh, we can solve for the psychosist problem, and then that will undermine a large The problem is of the criticism.

Speaker 2

How do you solve it? Because it is probably a small-scale problem. We actually don't know, and it's not like these companies know, or will tell us. But nevertheless, each one is so horrifying.

Like the story in the Wall Street Journal, by Julie Jargon and another reporter there, where it was a murder-suicide, an actual Son of Sam-type situation, which is fucking terrifying. That this is happening.

I don't know if you can completely solve that, because all it takes is one popping up again for them to go, fuck.

And it's also not just a ChatGPT problem. There's this woman on TikTok who has been sharing what Claude has been telling her, and it's like, ah, this is giving me psychic visions.

I think it's also the ultimate grifter tool. That's why I think it's taken off so much on social media as well. It's a tool that naturally fits into the grifter's toolbox.

I actually have similar fears that they will try and find ways to handwave this away, if it were the only problem, but they have so many problems at this point.

But I do also think that people need to remember how endemic racism in algorithms is. It's in all of them. I mean, you remember Microsoft Kinect, which literally couldn't see Black people, which was a joke in the show Better Off Ted, if anyone watched that. A great show.

It's insane. I mean, sadly, it's very obvious why this keeps happening. It's because the people building these things are predominantly white and male. And also, you can't really fix this stuff without intentionally building the data, which would require them to spend money on something they don't care about.

Speaker 3

And they don't really understand what they're doing when they go in to tweak these models. They don't know how much they're overcorrecting or undercorrecting, so they kind of have to just try, and then put it out in the world, and then wait for something bad to happen.

It's funny.

It's not funny, it's extremely sad.

In that journal story that you referenced, which I read twice because I was like horrified, and also the reporting was incredible, it was really and they said it as this appears to be the first instance of a murder resulting like we've seen sue sides, but like this is a murder suicide.

And when OpenAI responded to the question about did the bot ever respond to this guy who is clearly having a delusional episode, Hey, you need to talk to a real life therapist, you need to go to the hospital, you need to seek help.

And I think they declined to comment.

It was like a very evasive maneuver.

But ultimately, from what the Journal had seen.

The one time the bot said please go to the emergency room was when the guy, the paranoid guy who was having delusions, said, I think my mom is trying to poison me, and the bot said, if you think you've been poisoned, you should go to the hospital and get your stomach pumped.

Speaker 2

I also, I agree they don't know how to tweak these things, but I must be clear, I've worked in tech for a long time, sixteen, seventeen years now, and that's not even including my games journalism work.

It is not hard for them to just have a unilateral thing of, oh, you're talking like this, I'm going to stop.

I mean, Anthropic just announced that they have a thing that will cut off a conversation, which is good.

All of them should do this. And the whole thing is, if you start talking like, I'm going to do this, I am becoming this, it should say, hey, you sound like you're having a paranoid episode, I'm worried about you.

You should go and speak with someone, and it should just stop working with them.

People will say, well, the way they get around that is by telling the ChatGPT window, oh yeah, I'm writing a story.

I don't know.

Do we need them to write a story about that?

Do we need?

Of course?

What is the answer? And the answer is they don't give a fuck, mate. I genuinely think it's the same thing as with social networks as well.

You don't ban every slur the moment someone says it.

But, I don't know, you have a thing that says, hey, someone said a slur.

Maybe take a quick look at the slur, and you could probably just ban that person, because I'm guessing that most uses of the N word on social media are not used in culturally sensitive ways.

They're probably insanely racist.

Speaker 5

You've just caught them.

It's like, well, we can't.

It's an issue of free speech.

Fuck you, no, it's not.

It's an issue of free speech when a person can't exist online without racism happening to them, right, and these models, they could stop them.

But I do think there's a compelling argument of they really don't know what to do that every time they touch it, something else breaks.

Speaker 2

Honestly, it's kind of the most egregious version of the most common software problem, which is that coding is really fucking annoying, and we don't know how these work.

Speaker 3

Also, generative AI is not going to fix your coding problems, no matter how many times you tell us, Sam Altman, that AGI is just going to fix everything for us.

Speaker 2

That's actually my favorite thing to do right now.

It's going on r slash Cursor, r slash ChatGPT Pro, r slash Claude AI, and just looking at people complaining. And what they're complaining about is, hey, I keep hitting rate limits, hey, it keeps breaking things.

You get one guy every so often who says this has changed my life, and then you see the responses being like, yeah, but it fucked up all my stuff really badly, it doesn't really work.

And we have an upcoming episode with Colton Voege about this, where it's like, the average software engineer is not just writing code anyway. And this is also, I actually think, a good one to laugh about.

So their only real growth market right now is writing code.

The problem is writing code requires you to use reasoning models.

Reasoning models inherently burn more tokens.

And the way they burn tokens is because they're thinking.

They don't really think.

They look over what a prompt asks for and go, okay, what would be the steps to solve this?

With code, that becomes so, so complex. And the more models reason, the more they hallucinate.

So the very product that they are building that is going to save them is also the one that is going to burn more compute. And this is a rumor I've heard from a source, that it can take like four to twelve GPUs for one person's particularly rough coding task, like a

Speaker 4

Refactor. That's sustainable.

Speaker 2

And that's for one of the smaller models as well.

That's for like o4 mini, which is a reasoning model.

It's like, what do you think the big ones are?

Speaker 4

Like in The Information, they talk about OpenAI having a new eighty billion dollars in costs that they'll spend, yes, over the next three years. It's like.

Speaker 2

One hundred and fifteen billion by twenty twenty nine as well.

Speaker 4

A good chunk of this comes out of, oh, it turns out that compute is incredibly expensive and we want to center our business model around it.

Speaker 2

I think it's that, and I think it's just they don't know what else to do.

It's kind of like we were saying with the Uber model, they're playing the hits.

It's like, fuck, what did we do in the past?

We spent a lot of money. Shit, what do we do? We buy GPUs, I guess. What, maybe we train more?

They're going to spend so much money on training, and it's like, to what end? Your last model was a joke.

This is why it was.

Speaker 4

It was really interesting to see that op-ed that came from Eric Schmidt and his researchers. You know, Eric Schmidt, formerly of Google, is someone who was an architect of the idea, you know, chairman of the National Security Commission on Artificial Intelligence, a group that was trying to figure out how to merge artificial intelligence into defense contracting, and how to create, you know, a foreign policy that would allow America to dominate, really to win an arms race, an AI arms race, with China.

And he comes away saying the strategy I basically helped craft, which was that we need to prioritize AGI so that we can get like a permanent lead to deter any potential rivals.

It's scaring everyone, you know, and it doesn't work.

It's a waste of capital, it's misallocating capital, it's imposing all these harms.

And if we look at the, you know, competitor that we're going up against, China, by abandoning the AGI pursuit and instead prioritizing ways to figure out how to experiment with it, integrate it, build up, you know, practical use applications.

You know, there's a much more general public acceptance of it, willingness to try it out, adopt it.

And because they're not trying to scale out these massive monopolies or one size fits all models, you see wider, you know, adoption and something that looks like a more sustainable model.

Are we going to follow it?

Probably not, of course not.

Speaker 2

What I love about chasing China as well is China has had stories for like a year where it's like, yeah, we have a bunch of unused GPU compute.

Yes, well we're massively overbuilt.

Joe Tsai, I think it was, the Chinese billionaire, said, yeah, it's a bubble.

We have a GPU bubble. And America is just like, we need to fucking copy it, we need to beat them.

We're great, We're gonna run our economy into the ground.

Speaker 4

China, it's like, we're saying we're gonna confront them. And what is it that we're actually doing?

We're prioritizing developing artificial intelligence that has, like, a question mark consumer use.

That's going to be used in, you know, killing machines and drones maybe, and for surveillance purposes.

Yeah, and that's not even generative AI.

But that's where the actual excitement for any sort of artificial intelligence future is.

And this is you know, the generative AI stuff.

It's talked about as if it is the future, the transformative future of artificial intelligence.

In reality, the actual interest, excitement, and capital is gonna, I think, go back to the center of gravity, which is, how do we just figure out the shiniest, most fearsome weaponry?

Speaker 2

But I think what's weird about this is I don't think we've had a bubble that spread so far into consumers' hearts.

I'm not saying it's as bad as the housing bubble, but for consumer software.

If we go back to the dot com boom, I think it's like forty five percent of Americans had access to the Internet.

It was relatively small in comparison, though the massive other investment in fiber happened.

But I don't think people realize that what they see as ChatGPT may not exist in a year or two, at least not in the same way.

So you're already seeing week long rate limits on Anthropic's Claude.

Like, do people not realize that this could happen? I guess they don't realize.

I think that there's going to be a big mask-off moment.

There are so many people who have fallen behind this.

I mean, not to bridge too aggressively into this, but there is a story in the Wall Street Journal that I shared with you, of course, about this movie called Critterz, with a Z, or a zed for my Canadian and UK listeners, where OpenAI will be providing the compute and the tech to make a movie with a budget of less than thirty million dollars.

Though it's not obvious whether OpenAI's compute is part of that.

But it's the weirdest shit in the world.

Allison, you were bringing this up, that they're still using a bunch of humans.

Speaker 1

Yeah.

Speaker 3

So I was reading the same story, and this came out this morning, so I haven't done my own reporting on it. But I will say, from the story I read, it seems like they're hiring two different animation studios, with artists and writers working on the script.

They're hiring human actors to voice the characters, and then some mystery X amount of the movie will be put together with AI.

And I honestly don't know how different that is from a regular Pixar or DreamWorks animation process. When I first saw the teaser image, it's very cute, and I was like, oh God, this is gonna be some AI propaganda, and it's gonna be very cute and hard for me to refute.

But actually it's just a human-made movie, it seems, with some extra computer help.

Speaker 2

And this picture I'm holding up.

Of course, you're listening to a podcast, so you can't all see this.

It's just this generic blue furry creature.

Speaker 3

It looks like an extra from Monsters, Inc.

Speaker 2

It really does.

And somehow it's not a copyright issue.

Yeah, it's the same thing, but different.

But what's funny with that as well, as I mentioned as a lead-in, is that thirty million dollar thing.

If that doesn't include OpenAI's compute, it probably costs the same as a Pixar movie.

Because, actually, three D animation is one of the few other GPU use cases, so really it's just a different thing running.

It'll be funny.

Speaker 4

Also, if they save money because they don't do any marketing, and they're like, see how cheap it is if you don't market the movie at all.

Speaker 3

I think they might be getting around some Hollywood unions.

Speaker 2

Yeah, oh really, they're going completely overseas too.

Speaker 3

I'd have to check.

Don't quote me on it, but I think they were using at least one overseas animation studio, so they're probably saving a lot on the animation process by not paying animators.

I would guess.

Speaker 2

It's so cool.

And also, another fact from the story is we don't know how long the piece will be. And if it's like five minutes long, I'm so sorry, come on. Feature length, feature length.

Speaker 4

They should make it as long as the silent Napoleon film.

Speaker 2

Which one? Six hours?

Yeah, I actually love that. They should be forced to.

I don't know how they're going to do a feature length movie, because I don't know if you've cursed yourself by looking at the AI generated movies that people try every so often. One pops up on Twitter where it'll be, yeah, I made this entire thing in AI, and you look at it and it's a different fucking thing each frame. That balloon boy one, different size balloon, different color balloon.

You read the stories about the balloon boy one, and it's like, yeah, it kept putting a face in the balloon. We don't know why.

We don't know why.

And I know I have a good amount of film and TV people who listen who are quite anxious about this.

This doesn't scare me because they're very vague about the details.

With every other big tech innovation, even other than the metaverse, I guess, they usually like to show you behind the curtain a little bit and talk it up.

There'd be a big, splashy story in, like, MIT Technology Review or the New York Times, being like, oh, look at this, look at that, look at all these things. And here it's like, yeah, we're just using some people somewhere in a place and they will make it.

And in the Wall Street Journal story as well, they showed sketches that would then be turned into AI.

This feels like a death rattle far more than something terribly scary. And I understand film and TV people are likely a bit scared, but they're using out-of-the-country studios.

Of course, I just assume they're skipping union stuff, because this is all they do.

It's like, this is the best they can squeak out, years in? Fucking how is this it?

And it's a boring-looking children's thing, I guess, with a name from two thousand and one.

Speaker 3

Oh, it does have producers or writers who worked on Paddington in Peru, apparently. So what a.

Speaker 2

A movie about a criminal, a sequel to a movie about a criminal who was unfairly attacked by Hugh Grant? No, sorry. I mean, then this is the question of.

Speaker 4

You know, where in the Uber analogy is this? Is this, you know, Uber's failed expansions, where they tried different models overseas? Or is this Uber returning home, where they take the lessons from overseas, or use those overseas things to buy them a bit more time to then subsidize operations?

Speaker 2

Here is my comparison, and it's the drone deliveries.

It's the Amazon drone deliveries.

Great job, Casey and Kevin, talking several years ago about the Amazon drone deliveries, which never fucking happened, mate.

It's hilarious as well because it is the same thing.

It's like, we cobbled together this.

It sucked.

It took so much money.

It's horribly inefficient.

It sucks.

We hate it, you hate it, the customers hate it.

We hate doing this, but we did it.

Ta-da. And it's, okay, well, you sure proved that, didn't you? As you were saying, Allison.

It's like, yeah, we use the power of AI to hire a bunch of people to do all the real work, because you can't trust this to work.

It does not work.

Speaker 3

When I saw the headline for an AI movie, I was like, it's gonna be awful.

Yeah, because writing a movie is hard.

Speaker 2

Well wait a minute.

Also, this is the other thing: oh my god, how are they gonna lip sync this shit?

How do you lip sync this?

You can't generate the same frame twice.

How are they gonna... are they gonna go in and post-edit it with humans?

I assume. At this point, how much are you actually relying on AI for?

Speaker 3

It's very unclear.

Speaker 2

That's just it. It all fits.

It feels like being at a party where everyone's pissed themselves.

Speaker 3

It does feel like some next-level, like, youth propaganda.

Yeah, like if they can get kids to enjoy whatever this monstrous movie is going to be, then maybe there's a longer-term brand play for OpenAI as warm and cuddly and safe for children.

Speaker 2

The thing is, French and Korean companies have already been doing slop-based three D shows.

I don't mean the famous Korean one, K-pop Demon Hunters, which is apparently very good.

I've not watched it, and please don't kill me.

I'm not attacking that.

I'm talking about how there is a glut of very cheap three D kids shows, and they've been around for decades because you can do this on the cheap.

Now, here's another thing. The Uber model made sense as long as you didn't count the costs, which is, yeah, this is a way of getting people around that people become dependent on because it's useful.

This is, we have found an extremely expensive and annoying way to do something that we already have a cheap alternative for.

It's not like there was a cheap, reliable cab service that Uber replaced. There was a slow, shit cab service that Uber replaced everywhere. And it's like, is it a good company?

Is it horrible to work for?

Yes?

But does it work?

Yes?

This is we're going to automate everything with the power of AI, other than labor, other than stuff.

Speaker 3

That's where the AI story starts to overlap again with crypto, where at least with Uber you understood what you were getting as a consumer, and with AI you're kind of like, I don't really know what this is.

I don't know what problem it's solving.

It's like a solution in search of a problem.

And that was Crypto's same bag.

It's just like, oh, we invented this cool new alternative money system.

Speaker 2

Well, the thing with crypto is they always had a plan, which sucks.

I really should have seen it coming.

I was not smart enough at the time.

It was they always wanted to just get embedded in the financial system and then just turn the funny money into real money.

AI doesn't have that.

There is no way to turn this into... you can't just generate new money.

That's what Crypto did and it fucking sucks.

And by the way, the next crypto crash is going to wash out some real... like, it's gonna really fuck people up.

I don't think people realize that. SBF two point zero, who at this point might just be SBF, like if he just gets pardoned and comes back... honestly, if he comes back and does it again, no one can complain.

Speaker 4

I'm going to the last clove.

I've got to go in the hyperbolic time chamber.

I joined the fight just so I can put him in cuffs.

Speaker 2

You're going to put Sam Bankman-Fried back in cuffs? Yes. Sam Altman free would be good.

It's just, I don't see an end point for this.

I don't see... everyone, even the boosters at this point, they're like, and then it will be powerful.

When? How? What are you seeing that even tells you this?

I don't even want to fight.

Just tell me.

Speaker 3

I do think there's just so much money behind it, and there's so many people who've invested.

I was listening to a VC guy get interviewed on the Odd Lots podcast, and I can't remember his name, so I apologize, But he was talking about how all these founders, like all these smaller startups that are getting in on the AI game.

All these founders have kind of been raised with this idea of Silicon Valley and what it will bring you, and it's life-changing amounts of wealth. And the VCs are part of that, the actual tech startups are part of that.

Stanford and, like, kind of the whole ethos of the Valley is, if you just keep going and work hard enough, you can have generational wealth, and that is a very powerful force.

And I feel like we're going to be seeing the AI hype last longer than it has in other previous bubbles and tech cycles, in part because the potential for the wealth is outstanding, and it's like nothing we've ever seen.

Speaker 2

But that's the thing.

You're completely right, except AI has one problem, which is all the companies lose a shit ton of money, and no one's selling, no one is buying.

There's been like three acquisitions.

There's one to AMD, one to Nvidia, one to a public company called NICE, which bought the customer... Cognigy, I think they were called.

It was like an AI customer service thing.

They never really seemed that good anyway. But that whole thing is true.

And I think the myth that you can just use AI to spin up a startup quickly has kind of fueled that as well.

But the problem is, this is so different, because the whole point of Silicon Valley, the whole thing where you can just move there and start a startup, is that it didn't cost ruinous amounts of money to start one.

You didn't get three million dollars from a VC and expect to spend two and a half million of that on compute.

You were like, okay, we're gonna have to bootstrap a little bit further.

We've just got a little bit of venture capital.

We're gonna go this far.

Here, every step of the way, the cost increases massively.

It used to be it was sales and marketing and just people.

AI is people plus compute plus marketing plus this plus that. I think, you know, Perplexity,

the AI search engine, spent one hundred and sixty four percent of their revenue in twenty twenty four just on compute and AWS. This whole generational wealth thing,

I fully agree it's what they're using to sell it.

I just don't think it's gonna work.

And it's scary because of the wider thing, and I really haven't talked about this enough.

The wider problem as well is, all of these people who went to Silicon Valley and raised all this money have pretty much raised to sell companies that will never sell, that they can never take public because they've burned too much money, and they don't really have great user bases. So they're just gonna sit there.

And then you've got a bunch of VC money tied up in that that will never exit, and a bunch of limited partner money that will never exit.

I think that there is an entirely separate bubble building, and when that busts, the depression within Silicon Valley is going to be insane.

It's already pretty gnarly, but I think it was like thirty three percent of venture capital went to AI last year.

It's like, eventually people are going to realize there's no exit for anyone.

And I don't know what that does.

I mean, it will piss off limited partners.

The money that comes to VCs is just not going to be there.

Speaker 4

Well, so then that's the question, right? Because venture capital encourages, on one level, overvaluing, because you need to figure out a way to make more money than what you put in on the exit, with an acquisition or some merger.

Yeah, but on another level, you're also working within a network trying to enrich yourself and your friends, or trying to build the infrastructure for future startups, portfolio options that you and your friends make to come in and make money, whether.

Speaker 2

You're building a platform that other people can invest in bits of.

Speaker 4

And so, you know, on one level, I really do.

I agree that there's not really much of an exit ramp if there's actually no revenue and no profits.

But then also I'd be curious, like, do you think they're going to try to ram these things through, similar to what we saw with CoreWeave, right? Where, you know, you talked, I think extensively, about ways in which the financials there do not actually make sense if you're interested in a company that actually has the capital to do what it says it's going to do, which is provide GPU compute to everybody, even though it has such a central role in this ecosystem.

It can't make profits that, you know, justify the capital it's getting. It has odious and burdensome debt that should be a massive red flag, and it might be, you know, round-tripping, right? Yeah. But this is supposed to be, like, the darling of the sector, and it got pushed through.

Part of me feels like because of.

Speaker 2

Because of, I mean, an investment from Nvidia, and Magnetar Capital, of course famous for the CDOs.

Yeah, right, they're back.

But with core Weave, they pushed it through onto the markets.

But that doesn't mean it can't die.

Speaker 4

Right, Well, so that's the thing.

Speaker 2

Do you think...

Speaker 4

That it's possible they'd be successful in pushing it onto markets, but it dies? Because I do. Yes, I feel like there will definitely be a lot of investment incineration, but I also do think we're gonna have bags dumped on everybody.

Speaker 2

I think you could do it with something like CoreWeave and Lambda, which is another situation where Nvidia is the customer, invested, and also sells them the GPUs, which they then use as collateral to buy more GPUs using debt, which is so good.

You'll notice that there are no software companies going public.

There are no AI software companies going public.

Everyone thinks that OpenAI goes public here, and oh, if they can even convert, the markets are gonna eat them for dinner.

Oh yeah, we're gonna burn bazillions of dollars forever.

No, the markets didn't like CoreWeave. CoreWeave wouldn't have gone public had Nvidia not put more money in.

Lambda is probably gonna be exactly the same, if they even make it.

You won't see software companies, because that's the other thing. CoreWeave had, albeit with bunches of debt, assets. They have data centers, kind of, through Core Scientific. God, I hate these fucking companies. But they have things that they can point to, and relationships. Even OpenAI.

That's the thing with them, they don't.

They barely have assets.

Oracle is building their data center in Abilene with Crusoe.

They don't own any of the GPUs.

They have a few GPUs, I think for research, I've heard, but Microsoft owns most of their infrastructure.

They own their R and D, well, they do, but Microsoft also has access to that, their intellectual property.

Same deal.

So it's like, what actual value does an AI startup have?

People always say oh, they're getting the data.

They get the data so that the data will tell them... it's like, what? It's all these horrible stories about, like, oh, DOGE has got an LLM, they're doing this one thing.

What's the end point?

It's scary, don't get me wrong. But then what? And there never is one.

And I hope an AI software company goes public.

I want to see this so bad.

I want to see it, you have no idea.

If you give me the OpenAI books, the Anthropic books, you become the official homie of Better Offline.

I'll mention you on every episode.

Get me these books.

Because I think all of them are going to be like a dog's dinner. I've actually looked at the numbers, and Uber, by comparison...

They did burn a shit ton of money.

It's like twenty five billion between twenty nineteen and twenty twenty two.

A lot of that was on sales and R and D.

It's pretty much group on.

I think also the R and D with the autonomous cars, but that's a separate problem. It's like, I can't find an example of someone that just annihilated money like fuel, unless it's like planes.

And I think we've established the use case for planes by now, clearly. God.

It's just it's all very frustrating.

But you know what, I think I'm gonna call it there.

I think we've had a good conversation.

Allison.

Where can people find you?

Speaker 3

You can find me on Blue Sky at amorrow, or on CNN dot com slash nightcap. Ed?

Speaker 4

You can find me on Twitter at Big Black Jacobin.

You can find me on Blue Sky at Edward Ongweso Junior, and on Substack at The Tech Bubble.

Speaker 2

And you can find me of course at Google dot com.

Just type in Prabhakar Raghavan, you'll find me.

I pop right up.

That's all me.

Thank you so much for listening to everyone.

My episodes are coming out in a weird order, because I'm recording this knowing there's a three parter this week, but this will come out with a monologue of some sort.

Thank you so much for listening, everyone, and of course, thank you for producing here out in New York City.

And yeah, thanks everyone.

Speaker 6

Thank you for listening to Better Offline.

Speaker 2

The editor and composer of the Better Offline theme song is Matt Osowski.

You can check out more of his music and audio projects at mattosowski dot com.

Speaker 6

M A T T O S O W S K I.

Speaker 2

You can email me at easy at better offline dot com or visit better offline dot com to find more podcast links and of course, my newsletter.

I also really recommend you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash

Speaker 6

Better Offline to check out our reddit.

Speaker 2

Thank you so much for listening.

Speaker 1

Better Offline is a production of cool Zone Media.

For more from cool Zone Media, visit our website cool Zonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever

Speaker 3

You get your podcasts.
