Episode Transcript
Cool Zone Media.
You know what time it is.
It's your Better Offline monologue for the week, and I'm Ed Zitron.
Now.
Today's monologue is an attempt to bring you further into my work, which I've kind of already been doing with multiple listeners materially contributing to the reporting behind this show.
So many of you are eager to help, and if you want to help, well, reach out to me with information on Signal at ezitron.76.
That's E-Z-I-T-R-O-N, or E-Zed-I-T-R-O-N if you're Canadian or British, dot seven six on Signal.
Also, before I go any further, I've had a lot of emails and Reddit messages, and as such, if you're wondering about my thoughts on the OpenAI Disney deal, they are as follows.
Wow, Disney invested a billion dollars in OpenAI.
That's about a month's worth of inference.
This only delays the inevitable.
No one cares.
They're threatening Google with legal action.
Google will settle, and they'll do exactly the same crap with Veo.
Nothing will happen, a bunch of money will get lost.
Similarly, if you've heard about Time naming AI executives as its Person of the Year.
Remember that Marc Benioff of Salesforce, huge AI booster, owns the publication and literally ran advertorial for Salesforce's Agentforce.
Wank.
It's nothing, nothing's happening.
It's just more money being passed around, so we can all get blown on nothing.
But back to the rest of the monologue.
I'm going to tell you about the things I need to do a better job.
No, I don't literally mean the research.
I mean, if you know this stuff, I need your help.
Let's start with a big one.
If you have firsthand knowledge of GPUs being warehoused, and this means Nvidia AI GPUs.
Please, God, if you reach out with this about gaming GPUs, I'll be genuinely pissed off at you.
But I need to know about this.
I want pictures if you can get them.
I want numbers.
I want to know whose GPUs they are and ideally what models. A100s, H100s? It's probably not going to be those, it's going to be Blackwell GPUs if you see them.
Similarly, if you've heard anything about Nvidia's AI GPUs that might not be well known, reach out, especially if it pertains to how much they cost to run.
Similarly, if you know anything about Nvidia's Blackwell GPUs that you suspect I don't, please do reach out.
I had somebody recently tell me about remarkable failure rates and I want to know more.
Similarly, costs.
I cannot ask for costs enough.
Help me.
Please, if you have any information pertaining to the revenues behind OpenAI, Anthropic, or any major AI company, please get in touch.
Similarly, I want to hear from you if you work at any of the large cloud companies and know anything about the AI revenues, or indeed the associated costs of running AI on Google Cloud, Amazon Web Services, and Microsoft Azure.
And yeah, if you have anything from OpenAI or Anthropic about what they might be paying Microsoft Azure, Google Cloud, or Amazon Web Services, please, please, please reach out to me.
Similarly, if you have anything of the same from CoreWeave or Lambda or Nebius or any of those, please reach out.
Also, if you work at an AI startup, one that pays OpenAI or Anthropic to use their services or models, please tell me about those costs.
We don't know very much about them at all, and your thoughts and your help, well, they'd be very much respected and loved, and you'll be the Better Offline legend of the week if you can help me out here, maybe even the legend of the year.
I'm also looking at any and all information related to data center costs.
I want to know the total cost of ownership of any AI GPU, and that includes the A100, the H100, H200, B100, B200, B300, and so on.
Blackwell is obviously my biggest priority.
I want to know how many GPUs hyperscalers have too, So if you work at Amazon, Microsoft or Google and you know how many of the goddamn GPUs they've got, please reach out.
And I want to know the hourly cost of running these GPUs, ideally on a per GPU basis.
I'm trying to work out how much it actually costs to run these goddamn things, and it's insane that we don't know. Now, my grail data.
The things that would materially change my reporting, other than everything I just mentioned, would be the underlying cost of running large language models, which means understanding both how many GPUs are used for inference and training and the actual costs of running said GPUs.
The costs of running GPUs are at the center of the bubble, and I believe that truth will be what bursts it.
Though I'm repeating myself a little.
My other grail data is the underlying cost of running ChatGPT, Google Gemini, or any other major popular LLM service such as Cursor or Replit. The same goes for Microsoft, Meta, Amazon, Google, Oracle, or really any other major cloud provider.
Look together, I believe we can make the world better by providing the public with the truth.
And this era is one steeped in outright lies and a selfishness that deprives good companies of funding and shoves dysfunctional software on millions of people who just wish it would go the fuck away.
Now, so many of you have already been so helpful, and I hope I can hear from more of you.
The show continues to grow.
It's been a crazy year.
I'm insanely grateful to have all of you.
And next week we have a three-part Nvidia podcast that tells you all about the largest, we'd, just weirdest.
I'm not fixing it.
I'm not fixing that, you want the raw stuff anyway.
Nvidia, it's the weirdest company in stock market history.
Oh God, I'm gonna hear from you all about that one.
I really do love hearing from you though it's been really crazy.
Twenty twenty five has been crazy and bad and good in many different ways.
And I'm sure I'm gonna hear fun stories from you all, because a bunch of you reach out no matter what I say to reach out about or not reach out about, and I kind of love it.
I love how chatty you all are.
So yeah, look forward to hearing from you, and I hope you enjoy the podcast about the Weirdest company on the stock market.
