Episode Transcript
Cool Zone Media.
Speaker 2: Hi, I'm Ed Zitron, and welcome back to Better Offline. This is the third and final part of our Better Offline Nvidia special, where we're talking about, well, the shakiness behind its growth, and how the company, despite being on incredibly infirm ground, is definitely not Enron or Nortel or WorldCom or Lucent or any other dot-com-bubble-era firm that imploded under its own weight and, well, quite dodgy accounting.
The thing is, even if Nvidia is nothing like them, there are still quite a few causes for concern, and that's largely driven by the fact that Nvidia makes the majority of its money selling GPUs to a handful of customers, and, well, some of those also look to be on some incredibly shaky ground of their own.
And yeah, I'm talking about Oracle.
Now, Nvidia's health, to say nothing of its growth, isn't just tied to these customers.
It's also tied to whether these customers can actually turn a profit from their capex spending.
And even that isn't certain. Because so much money has been piled into building AI infrastructure, and because big tech has promised to spend hundreds of billions of dollars more in the next year, big tech has found itself in a bit of a hole.
How big of a hole?
Well, by the end of the year, Microsoft, Amazon, Google, and Meta will have spent over four hundred billion dollars in capital expenditures, much of it focused on building AI infrastructure, on top of two hundred and twenty eight point four billion dollars in capex in twenty twenty four and around one hundred and forty eight billion in capital expenditures in twenty twenty three, for a total of seven hundred and seventy six billion in the space of three years, and they expect to spend more than four hundred billion dollars more in twenty twenty six.
Every time I read these numbers, I feel a little crazy.
As a result, based on my own analysis, big tech needs to make two trillion dollars in brand new, brand spanking new revenue, specifically from AI by twenty thirty.
All of this was effectively for nothing.
Now, I go into detail about this in the premium newsletter I did on October thirty first, but I'm going to give you a short explanation here.
First, though, we have to talk about depreciation, and because I'm lazy, I'm going to quote myself from that newsletter I just mentioned a couple of seconds ago. Ahem.
So when Microsoft buys, say, one hundred million dollars' worth of GPUs, it immediately comes out of its capital expenditures, which is when a company uses money to invest in either buying or upgrading something.
It then adds to its property, plant, and equipment assets, PP&E for short, although some companies list this on their annual and quarterly financials as property and equipment. PP&E sits on the balance sheet; it's an asset, as it's stuff the company owns or leases. GPUs depreciate, meaning they lose value over time, and this depreciation is represented on both the balance sheet and the income statement.
Essentially, the goal is to represent the value of an asset that a company has on the income statement, where we see how much the assets have declined during the reporting period, whether that be a year or a quarter or something else, whereas the balance sheet shows the cumulative depreciation of every asset currently in play.
Depreciation does two things.
And I know this sounds like a lot, but I'll break it down for you. First, it allows a company to accurately, to an extent, represent the value of the things it owns over their useful life.
Secondly, it allows a company to deduct the value of an asset across said useful life right up until its eventual removal, versus having to take a big hit up front.
The way this depreciation is actually calculated can vary.
There are several different methods available, with some allowing for greater deductions at the start of the term, which is useful for those items that will experience the biggest drop in value right after buying them and their initial use.
An example you're probably familiar with is a new car, which loses a significant chunk of its value the moment it's driven off a dealership lot.
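If you want to see that in numbers, here's a minimal sketch of two common depreciation methods; the thirty-thousand-dollar price and five-year life are made-up figures for illustration, not from the episode.

```python
# Hypothetical sketch: straight-line vs. a front-loaded method.
# The $30,000 "car" and 5-year useful life are assumptions for demo only.

def straight_line(cost, life):
    """Equal expense every year of the useful life."""
    return [cost / life] * life

def double_declining(cost, life):
    """Front-loaded: expense twice the straight-line rate on the
    remaining book value each year. (Real schedules usually switch
    back to straight-line near the end; omitted for brevity.)"""
    rate = 2 / life
    book, schedule = cost, []
    for _ in range(life):
        expense = book * rate
        schedule.append(expense)
        book -= expense
    return schedule

car = 30_000
print([round(x) for x in straight_line(car, 5)])     # [6000, 6000, 6000, 6000, 6000]
print([round(x) for x in double_declining(car, 5)])  # [12000, 7200, 4320, 2592, 1555]
```

The front-loaded schedule mirrors the car analogy: the biggest value drop lands in year one.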
Depreciation has become a big, ugly problem with GPUs specifically because of that useful life, defined either as how long the thing is able to run before it dies, or how long before it becomes obsolete, and nobody seems to be able to come up with a consensus on how long this should be.
In Microsoft's case, the depreciation for its servers is spread over six years, a convenient change it made in August twenty twenty two, a few months before the launch of ChatGPT and before it bought a bunch of fucking GPUs.
This means that Microsoft can spread the cost of the tens of thousands of A100 GPUs it bought in twenty twenty, or the four hundred and fifty thousand H100 GPUs it bought in twenty twenty four, across six years, regardless of whether those are the years they'll be generating revenue or actually functioning.
CoreWeave, for what it's worth, does the same thing, but largely because it's betting that it'll still be able to find users for older silicon after its initial contracts with companies like OpenAI expire.
The problem is that AI GPUs are a fairly new concept, and thus all of this is pretty much untested ground.
Whereas we know how long, say a truck or a piece of heavy machinery can last and how long it can deliver value to an organization, we don't know the same thing about the kind of data center GPUs that hyperscalers are spending tens of billions of dollars on each year.
Any kind of depreciation schedule is based on at best assumptions and at worst hope.
Now this is important.
The concept of an AI data center is super new.
We maybe saw the first ones in twenty nineteen.
Honestly, it's kind of hard to say, but the scale we're seeing today, a gigawatt data center, is pretty much brand new, maybe a couple of years old.
I don't even think they've built any, but we'll get to that in a bit.
There are a lot of assumptions at play.
There's the assumption that the cards won't degrade with heavy usage, or the assumption that future generations of GPUs won't be so powerful and impressive that they'll render the previous ones more obsolete than expected, kind of like what the first jet-powered planes of the nineteen fifties did to those manufactured just a decade prior.
Then there's the assumption that there will, in fact, be a market for older cards, and that there'll be a way to lease them profitably.
What if those assumptions are, I don't know, wrong?
What if that hope is ultimately irrational?
So there's a quote from the Center for Information Technology Policy framing this problem well, which I'll link to in the notes.
Here is the puzzle.
The chips at the heart of the infrastructure build-out have a useful lifespan of one to three years due to rapid technological obsolescence and physical wear, but companies depreciate them over five or six years.
In other words, they spread out the cost of their massive capital expenditures over a longer period than the facts warrant, what The Economist has referred to as the four-trillion-dollar accounting puzzle at the heart of the AI cloud.
This is why Michael Burry brought it up recently, because spreading out these costs allowed big tech to make their net income, i.e. their profits, look better, in simple terms.
By spreading out the costs over six years rather than three, hyperscalers are able to reduce the line item that eats into their earnings, which makes their companies look better to the markets.
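Here's the six-versus-three-year trick in miniature; the sixty-billion-dollar figure is a hypothetical, not one of the numbers above.

```python
# A minimal sketch of why schedule length matters. All figures are
# hypothetical: $60B of GPUs, straight-line, no salvage value.

def annual_depreciation(cost: float, life_years: int) -> float:
    """Straight-line: spread the cost evenly across the useful life."""
    return cost / life_years

gpu_capex = 60_000_000_000  # hypothetical $60B GPU purchase

three_year = annual_depreciation(gpu_capex, 3)  # $20B/year hits earnings
six_year = annual_depreciation(gpu_capex, 6)    # $10B/year hits earnings

# Same cash out the door either way, but the longer schedule halves the
# annual expense line, flattering reported net income in the early years.
print(f"Earnings flattered by: ${three_year - six_year:,.0f} per year")
# → Earnings flattered by: $10,000,000,000 per year
```

If the chips actually die or go obsolete at year three, the remaining book value eventually has to be written off anyway; the flattery is a timing trade, not free money.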
So why does this create an artificial time limit?
Well, let's start with a horrible fact.
It takes two and a half years of construction time and about fifty billion dollars per gigawatt of data center capacity.
No matter when the GPUs for a gigawatt data center are bought, one way or another these GPUs are depreciating in value, either through death, or reduced efficacy through wear and tear, or becoming obsolete, which is very likely, as Nvidia is committed to releasing a new GPU every single year.
Newer generation GPUs, like Nvidia's Blackwell and Vera Rubin, require entirely new data center architecture, meaning that one has to either build a brand new data center or retrofit an old one.
Essentially, we have facilities that are being built around a GPU design or product that may change in a year or two.
Now, I hear that the Oberon racks that they use for the Blackwells will be used with some Vera Rubins.
But even then, there's going to be an even bigger, huger Vera Rubin that comes after that. I read somewhere that there might even be, like, kilowatt-level ones, as in one-hundred-kilowatt ones. This company's insane.
Nevertheless, at some point Wall Street is going to need to see some sort of return on this investment, and right now that return is negative dollars.
I break it down on my October thirty first premium piece, but for your sake, I'll just say it.
I estimate that big tech needs to make two dollars for every dollar of capex they've spent, and this revenue must be new, brand new, as this capex is only for AI.
This CAPEX is useless for everything else.
It does not help it.
And no, it doesn't help that they bolted Copilot onto fucking everything. That is not working, and in fact, the Australian Competition Commission is suing them.
Maybe I'll mention that later, but whatever. Meta, Amazon, Google, and Microsoft are already years and hundreds of billions of dollars in and have yet to see a dollar of profit, creating a one point two to one trillion dollar hole just to justify the expenses.
So around six hundred and five billion dollars of CAPEX all told at the time I calculated it.
Much of this CAPEX has been committed or spent before they've even turned on a single goddamn GPU.
You might argue that there's a scenario here where, say, an A100 GPU is useful past the three- or six-year shelf life.
Even if that were the case, the average rental price of an A100 is ninety nine cents an hour.
This is a four or five year old GPU, and customers are paying for it like they would a five-year-old piece of hardware.
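The rental math works out like this; the ninety-nine-cents-an-hour rate is from the episode, while the full-utilization assumption and the purchase-price range are mine.

```python
# Back-of-envelope sketch of A100 rental economics. The $0.99/hour rate
# comes from the text; utilization and purchase price are assumptions.

HOURS_PER_YEAR = 24 * 365

rate_per_hour = 0.99   # quoted A100 rental price
utilization = 1.0      # generous assumption: rented every single hour

annual_revenue = rate_per_hour * HOURS_PER_YEAR * utilization
print(f"Max yearly rental revenue per A100: ${annual_revenue:,.2f}")
# → Max yearly rental revenue per A100: $8,672.40

# Against a hypothetical ~$15,000-25,000 original purchase price (an
# assumption, not a figure from the episode), and before power, cooling,
# networking, and staff, payback at today's rates stretches for years.
```

And that ceiling assumes perfect utilization; real fleets sit idle some of the time, which only makes the payback period longer.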
The same fate awaits the H100, which was released in twenty twenty two but was still sold in great volume through twenty twenty four, and I hear the H200, of the same generation, is still selling to this day.
Every year Nvidia releases a new GPU, lowering the value of all the other GPUs in the process, making it harder to fill in the holes created by all those other GPUs' capex and costs.
This whole time, nobody appears to have found a way to make a profit, meaning that the hole created by these GPUs remains unfilled, all while big tech firms buy more GPUs, creating more holes to fill.
So now that you know this, there's a fairly obvious question to ask, why in the hell are they still buying GPUs?
Well, so, where the fuck are these GPUs going?
So a few weeks ago I wrote a piece, a premium one, called the Hater's Guide to Nvidia, and I asked a basic question in there: where have all the GPUs that Nvidia has sold actually gone?
In particular, the six million Blackwell GPUs that Jensen Huang keeps banging on about. Now, there's little evidence that these are being used in the volume in which they're sold, suggesting that they're either languishing in the supply chain or being warehoused by hyperscalers, or even by Nvidia themselves.
Now, there's the argument, and this is wanky Nvidia bullshit, that this could actually be two GPUs per GPU sold, because there are two chips on each GPU.
Even if that were the case, three million GPUs, Blackwells specifically, the brand new ones, are not in service.
Speaker 1: Now.
Speaker 2: While I'm not going to go and copy-paste an entire premium piece into this script, I am, however, going to go into detail about what I found.
And the truth is, and this includes looking over bunches of data center maps and reading hundreds of press releases, documents, and earnings statements, I've only been able to find maybe a couple hundred thousand Blackwell GPUs in existence, maybe half a million to seven hundred and fifty thousand if you include the stuff that hasn't even been built yet.
Let's go into it.
So, Stargate Abilene: allegedly four hundred thousand Blackwell GPUs are going there. Now, Oracle's CEO, co-CEO I should say, Clay Magouyrk, that's probably not how you say that, claimed very recently there were ninety six thousand of them installed, so not great.
There's theoretically a one hundred and thirty one thousand Blackwell GPU cluster owned by Oracle, which they announced in March twenty twenty five, so that should be online.
Then there are five thousand Blackwell GPUs at the University of Texas at Austin, which sound like they're online, and more than fifteen hundred in a Lambda data center in Columbus, Ohio.
Those are online.
There's the Department of Energy's still-in-development one hundred thousand GPU supercluster, as well as ten thousand Nvidia Blackwell GPUs that are expected to be available in twenty twenty six in its Equinix cluster.
I really can't establish how many of those are actually in operation.
There are fifty thousand of these Blackwell GPUs going into the still-unbuilt, Musk-run Colossus 2 supercluster; CoreWeave's largest GB200 Blackwell cluster of two thousand four hundred and ninety six Blackwell GPUs; tens of thousands of them deployed globally by Microsoft, including forty six hundred Blackwell Ultra GPUs; and two hundred and sixty thousand of these Blackwell GPUs going into five AI data centers for the South Korean government. And yeah, I just want to be clear that that last one is also fairly recently announced, so probably not even built, let alone powered on.
I'm going to be honest: I'm genuinely unable to find one million Blackwell GPUs, like, in existence.
Now, some of you might say, oh, there's a bunch of secret ones.
There's a bunch of them.
They don't announce every single one.
Here's the thing.
Three million of these fucking things have allegedly been shipped.
I can't find a million of them.
And considering everybody always talks about their GPU purchases, I'm kind of shocked at the calm.
Now.
I do not know where these six million Blackwell GPUs have gone, but they certainly haven't gone into data centers that are powered and turned on.
In fact, power has become one of the biggest issues in building these things, given that it's really difficult, and maybe impossible, to get the amount of power these things need to the goddamn data centers.
In really simple terms, there isn't enough power or enough built data centers for those Blackwell GPUs to run, in part because the data centers aren't built, and in part because there isn't enough power for the ones that are.
Microsoft CEO Satya Nadella recently said in a podcast that his company, and I quote, didn't have the warm shells to plug into, meaning buildings with sufficient power, and heavily suggested that Microsoft may actually have a bunch of chips sitting in inventory that they couldn't plug in right now.
Just to give you an estimate here, even if we say three million GPUs, even if we're going with Nvidia's moon math, if we're going into the make-believe world, the twisted mind of Jensen Huang, still, three million GPUs.
That's still like five or six gigawatts of capacity.
It's not being built.
I don't even think two gigawatts of data center capacity have been built.
And I swear to fucking God, if one of you emails me saying, ah, there's twenty gigawatts built or something: yes, power can get built.
You can build power. But getting it to the data center and actually powering the data center correctly, as in things turn on, everything works, nothing overloads, nothing blacks out, and the power is consistently delivered, takes months of surveys and engineering work and then years to just get it done.
Stargate Abilene only has two hundred megawatts.
They're gonna need over one point four gigawatts just to turn the fucking thing on.
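The back-of-envelope power math goes roughly like this; the per-GPU wattage is my assumption, since all-in draw varies by rack and cooling design, while the GPU count and Abilene figures come from the episode.

```python
# Rough power arithmetic. The per-GPU figure is an assumption (roughly
# 1.5-2 kW all-in per modern data center GPU once cooling and networking
# overhead are included); the three million count is from the text.

gpus = 3_000_000               # the "three million GPUs" figure
watts_per_gpu_all_in = 1_800   # assumed all-in draw per GPU, in watts

total_gw = gpus * watts_per_gpu_all_in / 1e9
print(f"Estimated demand: {total_gw:.1f} GW")
# → Estimated demand: 5.4 GW

# Stargate Abilene, per the text: 200 MW available vs. ~1.4 GW needed.
available_mw, needed_gw = 200, 1.4
print(f"Abilene has {available_mw / (needed_gw * 1000):.0%} of required power")
# → Abilene has 14% of required power
```

Even with a conservative wattage assumption, three million running GPUs lands squarely in the five-to-six gigawatt range cited above.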
I'm so tired of these goddamn GPUs, bro.
With all this said, why, pray tell, is Jensen Huang of Nvidia saying that he has twenty million Blackwell and Vera Rubin GPUs ordered through the end of twenty twenty six?
Where are they fucking going?
Jensen?
Now, I think that number also includes the six million.
And also, to be clear, I know a lot of you aren't technical, which is awesome.
I love that.
I want you all to know about this.
You need to know that this is par for the course with Nvidia. Nvidia loves smushing accountancy things together and coming up with random numbers.
Credit where it's due to our friend of the show for telling me this story.
It was during the early twenty twenties, twenty twenty two that is, during the big crypto rush.
Nvidia classified gaming GPUs that were sold to bitcoin miners as gaming revenue.
They got dinged by the SEC.
It wasn't fraud, but just so you know, Nvidia will move shit around.
And I truly do not know where these GPUs are.
I do not know even why anyone is still buying GPUs.
Speaker 1: Now.
Speaker 2: AI bulls will tell you that there's this insatiable demand for AI and these massive amounts of orders are proof of something or other, and you know what, I'll give them that: it's proof that people are buying a lot of GPUs.
I just don't know why nobody has made a profit from AI, and those making revenue aren't really making that much.
Let me give you an example.
My reporting on OpenAI from November twelfth suggests that the company only made four point three two nine billion dollars in revenue in the quarter ending September, extrapolated from the twenty percent revenue share that Microsoft receives from the company.
And now, some people who write really shitty Substacks have argued with the figures, claiming that they're either delayed or not inclusive of the revenue that OpenAI is paid by Microsoft as part of Bing's AI integration and sales of OpenAI's models through Microsoft Azure.
So I want to be clear about two things.
One, I'm a deeply bitter person.
Two, this is accrual accounting, meaning that these numbers are revenue booked in the quarter I reported them.
Any comments about quarter-long delays or naive approaches, and you know who I'm fucking talking about, if you're listening, are incorrect and ridiculous.
Also, Microsoft's revenue share payments to OpenAI are kind of pathetic, totaling, based on documents reviewed by this newsletter, publication, whatever you call me, media entity, floating blob in the podcast, sixty nine point one million dollars and counting in the year through Q3 twenty twenty five.
And by the way, the actual number for that three month period, including all royalties, is about four point five two seven billion dollars of revenue.
I just want to be clear about something with open Ai.
I'm not saying they're misrepresenting their numbers to anyone.
I hope that open ai is being honest with their revenues.
But if it comes out, I'm right, if it comes out that it turns out that they've been telling investors completely different numbers, I'm gonna be absolutely fucking insufferable.
I'm going to bring in I'm going to be playing Tommy Trumpets a walk around cheering that you can get five minutes of monologue about that.
Also, in the same period, OpenAI spent eight point six seven billion dollars on inference, which is the process by which an LLM creates its output.
This is the biggest company in the generative AI space, with eight hundred million weekly active users and the Mandate of Heaven in the eyes of the media. Anthropic, its largest competitor, will allegedly make eight hundred and thirty three million dollars in revenue in December twenty twenty five, and based on my estimates, will end up having about four and a half to five billion dollars of revenue by the end of the year.
Based on my reporting from October, Anthropic spent two point six six billion dollars on Amazon Web Services through the end of September, meaning that it, based on my own analysis of reported revenues, spent one hundred and four percent of its revenue up to that point just on AWS, and likely spent as much on Google Cloud.
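Back-solving those two figures gives a sense of scale; the implied revenue here is derived purely from the episode's own stats, not independently reported.

```python
# A quick check on the "one hundred and four percent" claim using the
# figures from the text. The implied revenue is back-solved, not reported.

aws_spend = 2.66e9        # Anthropic's AWS bill through end of September
spend_to_revenue = 1.04   # 104% of revenue, per the analysis above

implied_revenue = aws_spend / spend_to_revenue
print(f"Implied revenue through September: ${implied_revenue / 1e9:.2f}B")
# → Implied revenue through September: $2.56B

# In other words: for every dollar Anthropic booked, roughly $1.04 went
# straight back out the door to AWS alone, before any other costs.
```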
Now, the reason I'm bringing up these numbers is these are the champions, the champions of the AI boom, yet their revenues kind of fucking stink.
Wow.
Even if open ai made thirteen billion dollars this year, even if Anthropic made five billion dollars, okay, wow, so that's not even twenty billion dollars.
That's like nineteen billion dollars less than Microsoft spent on GPUs and other capex in the last quarter.
That's dogshit.
I'm sorry, I'm just tired. I am tired of humoring this.
I'm sure all of you are too.
I find it loathsome that we have to pretend these people are gifted somehow. They have shit-ass businesses that burn billions of dollars. And you know what?
Another thing I'm tired of is everybody telling this story about Anthropic being more efficient and only burning two point eight billion dollars this year.
Now, one has to ask why this company that's allegedly reducing costs had to raise thirteen billion dollars in September twenty twenty five, after raising three point five billion dollars in March twenty twenty five, after raising four billion dollars in November twenty twenty four.
Am I really meant to read stories about Anthropic hitting break-even in twenty twenty eight with a straight face, especially as other stories say it'll be cash flow positive as soon as twenty twenty seven?
This company's as big a pile of shit as OpenAI.
OpenAI raised eighteen point three billion dollars this year.
That's less than two billion dollars more than Anthropic, which makes a bunch less revenue.
I can't believe I'm defending OpenAI.
But these companies are the two largest ones in the generative AI space, and by extension, the two largest consumers of GPU compute. Both companies burn billions of dollars and require an infinite amount of venture capital to keep them alive.
At a time when the Saudi Public Investment Fund is struggling and the US venture capital system is set to run out of cash in the next year and a half, The two largest sources of actual revenue for selling AI compute are subsidized by venture capital and debt.
What happens if these sources dry up.
They're not paying out of cash flow, and in all seriousness, who else is buying AI compute?
What are they doing with them?
Hyperscalers other than Microsoft, which chose to stop reporting its AI revenue back in January, when it claimed it made about a billion dollars a month in revenue, don't disclose anything about their AI revenue, which in turn means that we have no real idea of how much real, actual money is coming in to justify these GPUs. CoreWeave made one point three six billion dollars in revenue and lost one hundred and ten million dollars doing so in the last quarter.
And if that's indicative of the kind of actual, real demand for AI compute, I think it's time to start panicking about whether all of this was for nothing.
CoreWeave has a backlog of over fifty billion dollars in compute, and twenty two billion dollars of that is OpenAI, a company that burns billions of dollars a year and lives on venture subsidies.
Fourteen billion dollars of that is Meta, which is yet to work out how to make any kind of real money from generative AI.
And no, its generative AI ads are not the future. 404 Media, I love you, but that story was bunk. And the rest of it is likely a mixture of Microsoft and Nvidia, which agreed to buy six point three billion dollars of any unused compute from CoreWeave through twenty thirty two.
I should also be clear, I do pay for and subscribe to 404. I love it.
It's just that the AI ads story was wank. I love you, Joe, I love the publication.
Sorry.
I also forgot Google, by the way, which is renting capacity from CoreWeave to rent to OpenAI, and I'm not shitting you.
Oh fuck Sorry.
I also forgot to mention that CoreWeave's backlog problem stems from data center construction delays.
That, and CoreWeave has fourteen billion dollars in debt, mostly from buying GPUs, which it was able to raise by using GPUs as collateral, along with the contracts and customers willing to pay for them, such as Nvidia, which is also selling it the GPUs.
I also left something out of this script, which is that just last week, CoreWeave raised another two billion dollars of debt.
When this all ends, I am going to be a little insufferable.
But let's just be abundantly clear.
CoreWeave has bought all those GPUs to rent to OpenAI; to Microsoft, for OpenAI; to Meta; to Google, for OpenAI; and to Nvidia, which is the company that benefits from CoreWeave's continued ability to buy GPUs.
Otherwise, where's the fucking business?
Exactly?
Who are the customers, who are the people renting the GPUs?
And what is the purpose for which they're being rented?
How much money is renting those GPUs making?
Can you?
Can you tell me?
Can anyone tell me?
Can anyone tell me anything?
You can sit and wank and waffle on about the supposed glorious AI revolution all you want, but where's the goddamn money?
And why exactly are we still buying GPUs?
What are they doing, who are they being rented to, and for what purpose?
And why isn't it creating the kind of revenue that's actually worth sharing or products that are actually worth using.
Is it because the products suck?
Is it because the revenue sucks?
Is it because it's unprofitable to make the revenue?
And why, at this point in history, hundreds of billions of dollars into what has made Nvidia the biggest company on the stock market, do we still not know why people buy these fucking things, nor what they fucking cost to run? Imagine if we sold cars and we didn't have a miles-per-gallon rating.
I'm serious.
That's effectively where we are.
Oh God. Nvidia is currently making hundreds of billions of dollars in revenue selling GPUs to companies that either plug them in and start losing money, or, I assume, put them in a warehouse for safety.
And those companies are increasingly racking up mountains of debt to do so, and billions more in long-term lease payments.
And this brings me to my core anxiety.
Why exactly are companies pre-ordering GPUs?
What benefit is there in doing so?
Blackwell does not appear to be more efficient in a way that actually makes anybody a profit.
And we're potentially years from seeing these GPUs in operation in data centers at the scale they're being shipped, So why is anyone buying more?
I just want to be really specific about something, because I don't feel like I nailed this down: two and a half years and fifty billion dollars per gigawatt of data center.
You may be thinking, what, Blackwells? You'll just shove them in the old data centers, right? No. They use these Oberon racks, specific new racks.
They take a bunch more power, and they need a bunch of liquid cooling.
You can't just retrofit easily.
You have to bulldoze shit and rebuild, or at least remove all the housing and then add HVAC stuff.
It's very expensive and takes a long time.
And look, I just don't know what's happening with these GPUs, and I'm a little bit concerned, And I doubt these are new customers.
They're likely hyperscalers, neoclouds like CoreWeave, and resellers like Dell and Supermicro, who also both sell to CoreWeave.
Because the only companies that can actually afford to buy GPUs are those with massive amounts of cash or debt, to the point that even Google, Amazon, Meta, and Oracle are taking on massive amounts of new debt, all without a plan to make a profit.
Oracle is looking potentially at fifty six billion dollars of debt.
It's completely bonkers.
Nvidia's largest customers are increasingly unable to afford its GPUs, which appear to be increasing in price with every subsequent generation.
Nvidia's GPUs are so expensive that the only way you can buy them is by already having billions of dollars or being able to raise billions of dollars, which means, in a very real sense, that Nvidia is dependent not on its customers, but on its customers' credit ratings and financial backers, and the larger private credit institutions, which I'm eventually going to have to do a newsletter and a podcast on, because honestly, every time I read about the private credit situation I bug out again. I start hearing the siren bit from Kill Bill. It's not good.
And to make matters worse, the key reason that one would buy a GPU is to either run AI services using it, or rent it to somebody else to run AI services, and the two largest parties spending money on these services are OpenAI and Anthropic, both of whom lose billions of dollars and thus are, much like the people buying the GPUs, dependent on venture capital and debt.
Now remember, OpenAI and Anthropic both have lines of credit: four billion dollars for OpenAI and two and a half billion for Anthropic.
In simple terms, Nvidia's customers rely on debt to buy its GPUs, and Nvidia's customers' customers rely on debt to pay to rent them.
Yeah, it's not great. Yet it actually gets worse from there.
Who, after all, are the biggest customers paying the companies renting GPUs to sell their AI models?
That's right, AI startups, all of which are deeply unprofitable.
Cursor, Anthropic's largest customer and now its biggest competitor in the AI coding sphere, raised two point three billion dollars in November after raising nine hundred million dollars in June.
Perplexity, one of the most popular, put that in air quotes, AI startups, raised two hundred million dollars in September after raising one hundred million dollars in July, after seeming to fail to raise half a billion dollars in May, after raising five hundred million dollars in December twenty twenty four.
Cognition raised four hundred million dollars in September after raising three hundred million dollars in March, and Cohere raised one hundred million dollars in September, a month after it raised five hundred million dollars.
None of these companies are profitable, not even close.
I read a story in Newcomer by Tom Dotan that said that Cursor sends one hundred percent of its revenue to Anthropic to pay for its models.
Very cool.
So I really want to lay this out for you, because it's very bad when you think about it. Venture capital feeds money to startups, AI startups, and then they pay either or both of OpenAI or Anthropic to use their models.
Now, OpenAI and Anthropic need to serve those models, right? So they then raise venture capital or debt to pay hyperscalers or neoclouds to rent Nvidia GPUs.
At that point, hyperscalers and neoclouds then use either debt or existing cash flow, in the case of hyperscalers, though not for long, to buy more Nvidia GPUs.
Only one company appears to make a profit here, and it's Nvidia.
Well, Nvidia and its resellers like Dell and Supermicro, which buy Nvidia GPUs, put them in servers, and sell them to neoclouds like Lambda or CoreWeave.
At some point, a link in this debt-backed chain breaks, because very little cash flow exists to prop it up.
At some point, venture capitalists will be forced to stop funneling money into unprofitable, unsustainable AI companies, which will make those companies unable to funnel money into the pockets of Anthropic and OpenAI, who rent the GPUs and will then not be able to funnel money into the pockets of those buying GPUs, which will make it harder for those companies to justify buying GPUs.
And at that point, some of this comes back to Nvidia, and Nvidia doesn't make so much money.
If I'm honest, none of Nvidia's success really makes any sense.
Who's buying so many GPUs and where are they going?
Why are Nvidia's inventories increasing?
Is it really just pre buying parts for future orders?
Why are their accounts receivable climbing?
And how much product is Nvidia shipping before it gets paid?
While these are both explainable, as this is a big company and this is how companies do business, and that's true, why do receivables not seem to be coming down?
And how long, realistically, can the largest company on the stock market continue to grow revenues selling assets that only seem to lose its customers money, and don't even seem to be in use for years?
I worry about Nvidia, not because I think there's a massive scandal, but because so much rides on its success, and its success rides on the back of dwindling amounts of venture capital, and because nobody is actually making money to pay for these GPUs, let alone running them.
In fact, I'm not even saying Nvidia goes tits up.
I want to be clear about that.
I think they may even have another good quarter or two in them.
It really just comes down to how long people are willing to be stupid, and how long Jensen Huang is able to call up Satya Nadella and co. at three in the morning and say, buy one billion dollars of GPUs, you pig. Phenom style, baby.
But really, I think much of the US stock market's growth is held up by how long everybody is willing to be gaslit by Jensen Huang into believing that they need more GPUs.
At this point, it's barely about AI anymore, as AI revenue, real cash made from selling services run on those GPUs, doesn't even cover the costs, let alone create the cash flow necessary to buy more seventy-thousand-dollar GPUs, thousands at a time.
It's not like any actual innovational progress is driving this bullshit.
In any case, the markets crave a healthy Nvidia. So many hundreds of billions of dollars of Nvidia stock sit in the hands of retail investors and people's 401(k)s, and its endless growth has helped paper over the pallid growth of the US stock market and, by extension, the decay of the tech industry's ability to innovate.
Once this pops, and it will pop because there's simply not enough money to do this forever, there must be a referendum on those that chose to ignore the naked instability of this era and the endless lies that inflate the AI bubble.
I will be walking around with a gavel.
I am going to be taking heads.
I am fucking sick of this era.
And what I'm most sick of is that so few people are still to this day willing to admit how bad this is.
And I know in the next few months we're going to get articles from some major media outlets that say, how could we have seen this coming?
And like I said in the previous episode, they could have fucking looked.
All of them could have looked, and they could have looked a year ago.
The incredible support I get from all of you truly makes this show a joy to make, even though I've done way too many retakes on this, and apologies to Matt Osowski for the noises I make.
But I think in the next few months we're all going to be validated.
It's going to be the great vindication.
But until then, everybody is betting billions on the idea that Wile E. Coyote won't look down.
He's gonna have to at some point, isn't he?
Thank you for listening to Better Offline.
The editor and composer of the Better Offline theme song is Matt Osowski.
You can check out more of his music and audio projects at mattosowski dot com, M-A-T-T-O-S-O-W-S-K-I dot com.
You can email me at ez at betteroffline dot com, or visit betteroffline dot com to find more podcast links and, of course, my newsletter.
I also really recommend you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash BetterOffline to check out our subreddit.
Thank you so much for listening.
Speaker 1: Better Offline is a production of Cool Zone Media.
For more from Cool Zone Media, visit our website.
Coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
