Basil Halperin: Leading Indicators for TAI, Conditions for the Singularity, and Tax Policy at the End of History
Episode Description
In this week’s episode of Justified Posteriors, we interview TAI expert and friend of the show Basil Halperin of the University of Virginia, where he is doing some of the most fascinating work on the economics of TAI with Anton Korinek and other leading researchers.
The first section of our conversation covers Basil’s early career, including jobs at Uber and AQR, how he got interested in AI as a research topic, and his role in managing the Stripe Economics of AI Fellowship.
We then discuss a paper we’ve already covered on the show: his work on whether the real interest rate can be interpreted as a leading indicator of the probability of TAI (or ‘doom’). Listen to our previous conversation on his paper, and view show notes, including links to that paper and blog post, here: If the Robots Are Coming, Why Aren't Interest Rates Higher? Seth was previously convinced by Basil’s arguments, but Andrey was a holdout; we hear Basil’s responses to Andrey’s reservations.
Our third subject is Basil’s new paper with Anton on the elasticities relevant for a singularity in research progress, “When Does Automating Research Lead to Explosive Growth?” Basil explains how the key issues are the degree of fishing out and of spillovers within and across different industries, as well as the extent to which research can be automated. We also take a step back to ask what theoretical research like this teaches us.

Finally, we cover Basil’s back-and-forth over friend of the show Phil Trammell’s new blog post with Dwarkesh about Piketty and optimal taxation in the age of TAI (link below), and ask him to explain the meme he posted summarizing his arguments:
Additional references:
Does carbon taxation yield a double dividend (environmental plus fiscal)?
We hope you enjoy the conversation! Transcript follows:
[00:00] Seth Benzell: Welcome to the Justified Posteriors podcast, the podcast that updates its beliefs about the economics of AI and technology. I’m Seth Benzell, looking forward to the Basil exposition we’ll get today, coming to you from Chapman University in sunny Southern California.
[00:35] Andrey Fradkin: And I’m Andrey Fradkin, looking forward to creating a new accord with Basil, coming to you from San Francisco, California. And today we’re very excited to welcome Basil Halperin to our show. Welcome to the show.
[00:49] Basil Halperin: Thanks Andrey. Thanks Seth. Super excited to be here.
[00:53] Andrey Fradkin: So as background, Basil is an expert on the economics of transformative AI and he’s currently...
[01:00] Seth Benzell: Expert is underselling. He is one of the most interesting thinkers around on... Alright, continue.
[01:07] Andrey Fradkin: Yes, he’s great. And he’s a professor at the University of Virginia. We have an exciting show for you today touching on many topics, but we first wanted to get a start with some of the biographical tidbits. In particular, Basil, how did you get interested in this topic? And it seems like you were a lot earlier than other economists. So I’m curious what drew you in before everyone else to this interesting set of topics?
[01:38] Basil Halperin: I mean, not as early as you two, I don’t think. Uh, I don’t know. I was just a nerd growing up. I read a lot of sci-fi. I read Ray Kurzweil in high school when his The Singularity is Near book came out in the 2000s, just because it was popular. The idea got in my head. I was kind of like, “Well, this is interesting, but eventually...” I was like, “I have a few decades to work on other things before any of this becomes relevant.” And then GPT-3 came out in that long hot summer of 2020. I freaked out a little bit for a week or two. This is crazy. How is this happening so fast? So that sort of woke me up a bit. I started thinking about these issues and gradually more and more have gotten sucked into working on it.
[02:20] Seth Benzell: What were your favorite sci-fi growing up?
[02:23] Basil Halperin: Ender’s Game was always the classic.
[02:26] Andrey Fradkin: Now I saw on your resume that you spent a stint at AQR, which is a large capital management firm. I’m curious, what did you learn working there?
[02:37] Basil Halperin: Yeah. So I didn’t expect to go into finance out of college, but basically the opportunity came along. I found out that this firm seemed pretty interesting. So the background is, this firm was founded by two PhD students of Eugene Fama, the Nobel Laureate in finance. Basically taking his ideas seriously and other ideas from the asset pricing literature seriously and applying them to earn a bunch of money. So I didn’t know anything about finance going into that job. So I learned a whole bunch and some of that has been applied in my research that I think we’ll talk about today.
[03:13] Seth Benzell: Ooh, wait, yeah. Pricing assets in the age of AI. Fascinating.
[03:17] Basil Halperin: Yeah, yeah. Talk about it.
[03:19] Andrey Fradkin: So I do think this is an interesting background because a lot of people in our field don’t have a finance background. That’s not where they’re coming from in terms of thinking about technology. So it maybe gave you this strong, prepared mind to be thinking about the asset pricing implications of transformative AI. Did you get to interact with Cliff Asness or were you too much of a, like, intern, low-level employee?
[03:45] Basil Halperin: No, I was there for a year and a half or two years, but too junior. I think one time I made a bad joke to him in the elevator and he like, pretended to laugh. That was pretty much the highlight.
[03:56] Andrey Fradkin: Well, he also likes to make a lot of bad jokes, so you have that in common. Some of them are good too.
[04:05] Basil Halperin: [Laughs] These bad jokes are funny.
[04:06] Andrey Fradkin: What about at Uber? You also spent some time there working with John List, is that right?
[04:11] Basil Halperin: Yeah, yeah. John taught my first ever Econ class when I was an undergrad at Chicago, Intro Micro. And he plausibly helped inspire me to become an economist. And then yeah, I worked for him when he was Chief Economist at Uber. Which, Andrey, as you well know, being an economist in tech is an interesting experience. And Uber in 2017 was a particularly interesting time because it was a controversial firm. Sort of like OpenAI is today, the firm that’s always in the headlines.
[04:42] Andrey Fradkin: Were there specific perspectives that you gained there that have informed your subsequent economics career? Or was it more of just like you learned some useful skills in data science or something else?
[04:55] Basil Halperin: Yeah, I don’t know how much super tangible I have to say, but it definitely was informative in general to work in the private sector before going into academia, just to see how different things are. You know, like in the private sector you’re being paid to tell your boss that he or she is wrong. And then in academia that’s not so much a recommended strategy.
[05:19] Seth Benzell: Wait, wait, okay. So tell us about... so you’re there, it’s in 2017. Uber is one of the most evil, fast-growing companies on the planet. So you said it was interesting. So what was interesting about that? Were you pressured to write an economics report you didn’t agree with? Did you feel like you had to like wear, you know, a hoodie going into the office as people were throwing trash at you? What was it like?
[05:43] Basil Halperin: No, it was just... I mean, I certainly didn’t have a negative experience or negative view of the company, though I’m sure there were negative things the company did, like any large organization. But the team I was on, this Chief Economist team, was like five people. So it was pretty small. So we just had a lot of leverage to go around the company, be sort of an internal consultancy and do a lot of crazy things, varied things that I otherwise never would have had the chance to do. Like I was sort of a software engineer for one month that I was there, which was otherwise something that never would have happened to me. Or running large scale experiments on a million riders or whatever, which... I would love to do macro experiments if any central bank wants to volunteer for some coin flips. But otherwise, as a macroeconomist now, I don’t really have that opportunity.
[06:35] Andrey Fradkin: So this kind of is a, you know, is a nice segue into our next topic, which is... like a lot of people are worried about their careers these days, obviously because of AI.
[06:49] Seth Benzell: Not me! Podcasting is never gonna go out of style, Andrey!
[06:53] Andrey Fradkin: Fair enough. But I think that’s a very broad question and perhaps too broad to answer. But I think for people with an interest in economics—you know, you were in tech, you decided to go into academia. I’ve made the same decision in my life. But I’m curious like what advice would you have? And maybe this is a good opportunity to also speak about the efforts you’ve been doing with the Stripe Economics of AI Fellowship.
[07:23] Basil Halperin: Yeah, okay. So two points here. One point is that I feel like on every good AI podcast, there’s a question of, “What do you tell young people? What they should be studying today?” And like there’s zero good answer to that question. So yeah, I don’t have any good answer to that question.
[07:38] Seth Benzell: Study the Justified Posteriors podcast. Listen to every episode every day. Three times a day.
[07:45] Basil Halperin: But besides that, it’s not clear. The other thing I guess I can say is that if you’re an economist, working on the economics of AI is like a really cool thing to do. There’s just like so much low hanging fruit. There’s so many insights that can be arbitraged from other fields, which is always a good place to be. You can... instead of going to have to pick the fruit yourself, you can just take the fruit out of other people’s hands, maybe translate it to the language of economics.
[08:12] Seth Benzell: Yeah, I understand later we’ll be talking about the economics of fruit picking. But so hold those fruit picking thoughts.
[08:20] Basil Halperin: All of my economic metaphors are about fruit. So we’re going to get pretty fruity or something today. Um, I don’t know, Andrey, maybe you were suggesting that I talk about this fellowship that I help run.
[08:31] Andrey Fradkin: Yeah, tell us about the Stripe Fellowship. What fruit is the Stripe Fellowship? Tell us what you learned running it and what it is, you know, give a brief description.
[08:41] Basil Halperin: Yeah, this is a fellowship for early-career economists that I help run with Stripe, the financial technology company. They decided that they want to support more research on the economics of AI, thinking that economists are not working on the issue enough. Which is an empirical claim that you can debate. And so we had the first cohort this past year, 24-25 fellows, mostly grad students, a few APs [Assistant Professors]. And this is in part giving people money to do research, but in large part building a community of people to speak together and share ideas and maybe work together. Folks that probably are listening to your podcast and that maybe you all should consider interviewing. So that’s been super fun. Very interesting to be on the side of reviewing applications as opposed to being on the other side of applying. I mean, first of all, frankly, I can’t complain. It’s a very cool opportunity to be running this thing. But it’s terrible to reject people. It’s absolutely no fun. All these extremely well-qualified people who are definitely smarter and more accomplished than me. That’s not a fun part of it. On the other hand, very cool to get to support all these cool people doing very cool research and seeing them decide to co-author together and things like that.
[10:15] Seth Benzell: Oh, can you point... that’s particularly exciting. Can you point towards any papers that you think you may have generated that we should maybe discuss on our podcast?
[10:25] Basil Halperin: So two... so it’s been like six months or something since the fellowship launched and you guys know how long these timelines are. So no counterfactual papers yet.
[10:35] Seth Benzell: Oh, well I know how short my AGI timelines are.
[10:38] Basil Halperin: Well, you’ll have to tell us that later. No counterfactual papers yet, but a bunch of people have amazing stuff out. Phil Chen at Harvard just put out a very cool paper using GitHub data to look at how software engineer labor has changed. Parker Whitfill’s been putting out like a paper every few months on compute and labor, complements versus substitutes, with Cheryl Wu. And yeah, there’s a whole bunch of stuff. We have this website, you can Google “Stripe Econ Fellowship of AI” and see folks’ websites. There’s a ton of very cool stuff. I don’t have time even to read all the papers, at least yet.
[11:18] Andrey Fradkin: Well, that’s yeah, super awesome initiative. I guess, you know, one follow-up question on there. What do you think most of these people are going to be doing three, five years from now? Do you think they’re going to become assistant professors? Are they going to work at AI labs? Are they going to do something else? Like what is the career trajectory for a young person?
[11:39] Seth Benzell: Are they going to be podcasters?
[11:41] Andrey Fradkin: Yeah, are they going to be podcasters? Like... and maybe, what do they think they’re going to be doing is an interesting question, right? Because it’s a time of great uncertainty.
[11:51] Basil Halperin: Yeah, I don’t know. So one way of answering that is that I think kind of any question about speculating about the future comes down to: how fast do you think AI capabilities are going to progress? But sort of setting that to the side or something... I don’t know. We’re trying to encourage research. So we’re selecting for people who are like stubbornly pursuing research. So there’s that. But if you’re like asking about the future for econ PhDs... econ grad students...
[12:58] Seth Benzell: We’re not talking about the future of econ PhDs generally. We’re talking about this elite cohort you’ve gathered. You think that there’s a chance that this elite cohort of the best young thinkers on Econ of AI are going to be obsoleted in three years?
[13:13] Basil Halperin: Uh, I mean, I think there’s a non-zero chance that we’re all living in some communist utopia in a few years. Not a high one, as my research would indicate, but non-zero. Which is like crazy to think about. We could get unhinged and talk about that, but maybe we can save it for later.
[13:30] Andrey Fradkin: Yeah, I guess I was trying to actually push you in a different direction, which is more like... you know, Tyler Cowen famously gave Leopold Aschenbrenner the advice of not going into economics academia, right? You know, he was someone who was, and still is I think, working on some economics research.
[13:46] Seth Benzell: Yes, including with friend of the show Phil.
[13:49] Andrey Fradkin: Yeah. Exactly. So I was kind of more thinking like, is it really the best place if you’re really AI-pilled to be sitting at a university? Why did you choose to do that? I’m sure you had... you could have had other options that you pursued.
[14:04] Basil Halperin: Yeah. I mean, so what is best for any individual varies a lot. And I don’t know, like don’t you guys think that people who go into academia are kind of stubborn? Like they want the independence of not having a boss. They’re willing to accept the ginormous pay cuts relative to the outside option.
[14:24] Seth Benzell: I wanted the wizard robes.
[14:26] Basil Halperin: You wear wizard robes to lecture or what?
[14:29] Seth Benzell: I do. I have it hanging on my wall right now. I would point my camera, but my lighting is so beautiful right now.
[14:34] Basil Halperin: We should have worn them for the video. So I don’t know, like really that idiosyncratic taste shock is I think driving a lot of people. But yeah, I totally agree that there’s a lot of amazing research to be done in the private sector and like the new Anthropic economic team seems to be doing amazing stuff, for example.
[14:52] Seth Benzell: Basil, I don’t want to answer this question for you, but if I may offer kind of a riff on that idea of it being idiosyncratic taste... I think it’s a, you could call this a taste thing, but you might call it also an idiosyncratic valuation of certain virtues, right? You might find yourself associating with the virtues of being an economist or being a professor and having open inquiry, etc., etc., etc., that are not necessarily as associated as firmly with other professions. You could call that taste or you could call that something else.
[15:28] Basil Halperin: Yeah, let’s bring virtue ethics back into economics.
[15:32] Seth Benzell: Bringing the virtue ethics back to economics, exactly.
[15:35] Andrey Fradkin: Yeah. Well, cool. You know, very interesting to think about these career implications, but I think it’s maybe a natural place to transition to discussing some of your really interesting thoughts that you’ve had recently. And I think Seth has some questions.

Basil Justifies His Research: Transformative AI, Existential Risk, and Real Interest Rates
[15:53] Seth Benzell: [Grabbing microphone] Give me the mic, Andrey. I’m grabbing the mic from Andrey now. Basil, if I recall correctly, the way we e-met was because I got very frustrated with you over one of your papers. And this was your paper, “Transformative AI, Existential Risk, and Real Interest Rates.” So I guess before kind of I explain my strong emotional reaction to this paper and how you eventually won me over, maybe you can refresh our podcast listeners. We did an episode on this podcast as one of our very first episodes. I encourage our listeners to go back and listen to it. But for those who don’t have the time, can you give us maybe a two-minute gloss on that paper before we start putting you to the test on it?
[16:45] Basil Halperin: Yes. So I second that listeners should go back and relisten to that old episode because I did before this and that was a really nice summary that I really appreciated. Obviously the critiques were wrong, which we’ll get to. That’s a joke. There were some good points. But yeah, so the motivation here is like, everyone wants to know how quickly is AI going to progress? AI technology going to develop? And there’s various ways people try to forecast how quickly the technology will develop. Like one way is just go and survey machine learning engineers and trust that they know something about how the future is going to go and take an average of their opinions. So that’s one method. Another method is something that’s gone back to like Hans Moravec at the very least of: think that computers are like human brains and try and estimate how much computing power the human brain does and try and forecast Moore’s Law and algorithmic progress to see...
[17:33] Seth Benzell: Ray Kurzweilian, yeah.
[17:35] Basil Halperin: Exactly, like Ray Kurzweil. To see how long until we have enough computing power to match the human brain and say that’s when we’ll develop AGI. We in this paper want to present sort of an indirect way of thinking about this, which is using one of the most powerful supercomputers humanity has, and that is the calculation power of financial markets. Where in economics, you know, we like to think that prices are good at aggregating dispersed wisdom across the economy. And financial market prices in particular, by being forward looking, by being particularly liquid and having this strong incentivizing power through the magic of no arbitrage—or arbitrage incentives—are a particularly good way of collecting humanity’s dispersed wisdom about how the future could proceed. So in particular, we suggest in this paper that...
[18:31] Seth Benzell: But Basil, there’s no... at least when you were writing this paper, I’m not aware of a high liquidity market that just says “when does AGI happen?” or “when does TAI happen?” So what price should we look at?
[18:43] Basil Halperin: Indeed. And if you’ll allow me to rant on that for a second before summarizing the argument... even today, despite the rise of prediction markets, there is still no long-horizon prediction market on when advanced AI could be developed. There are these forecasting platforms that just allow people to submit their own forecasts and take the average of them: Metaculus, Manifold Markets. People sometimes refer to these as betting markets or prediction markets... they are not prediction markets. They do not have the financial incentive to ensure forecasters pay attention, update their forecasts, and so on. So those are great websites, but they’re limited. Kalshi, Polymarket, these new prediction markets... it’s shocking how limited the good opportunities to forecast AI are. There are some things, but they’re not very good.
[19:35] Seth Benzell: Do you speculate that it’s like a defining AGI problem? It’s the Oracle problem? It’s like, “how would you know it when you see it?” Or did you speculate on why that is?
[19:43] Basil Halperin: Yeah. So part of it is that. So for example, the very best question that I’m aware of is Kalshi has a market on: will this fancy version of the Turing test be passed by 2030? Where it’s some like souped up version of the Turing test based on a bet that Ray Kurzweil actually—we keep mentioning his name—made. So that’s like the best existing thing...
[20:00] Basil Halperin: ...but it’s this limited definition.
[20:04] Andrey Fradkin: So I actually have a different question which is related to your paper. But let’s say we had a prediction market on GDP growth. And you know, it was like: will we have, I don’t know, 5% GDP growth or 10% GDP growth at least once by year X? You know, it’s hard to imagine that that would happen without transformative AI.
[20:31] Seth Benzell: Ah, Andrey, I could tell a story.
[20:33] Andrey Fradkin: Yeah. No, I could tell a story. I could tell a story, but it would be highly correlated. Are there markets like that that are very close analogs to this?
[20:42] Basil Halperin: If there are, I would love to know. And like, I do a periodic search and there’s... it’s like there’s really not. It’s infuriating. Hence the origin of this paper.
[20:51] Seth Benzell: But you can bet... you can bet super out of the money calls on like the stock market. You can bet on the stock market growing 500%, right?
[20:59] Basil Halperin: Yes. Well, I don’t know about 500%. Out of the money calls, like the range is not that large. But betting on GDP growth in particular is difficult. And like, does higher GDP growth raise equity valuations? It’s actually not obvious. Like, we can really dive into that, but for a whole bunch of reasons... for a whole bunch of reasons I think equities are just kind of a very confusing asset class in general to interpret. Which is why...
[21:27] Andrey Fradkin: Yes, so tell us why you picked interest rates. Yeah, and then we’ll go back to why equities may or may not be good.
[21:33] Seth Benzell: Because equities are a bad asset, what I’ll do is measure equities over time. [Laughter]
[21:40] Basil Halperin: Yeah, so the best price in the economy—that’s kind of a joke—the price we recommend looking at in this paper is real interest rates. So that is to say the inflation-adjusted risk-free rate of return you would earn on a bond, particularly at long horizons. Like say the 10-year real interest rate or the 30-year real interest rate. And the argument for why that’s a useful price to look at is the following: If you knew you were going to be super rich next year, no reason to save today. You’re going to be super rich next year anyway. If no one’s saving, then that pushes up interest rates. Interest rates clear the market, the supply and demand for savings.
So that would be the case where we expect AI to rapidly raise economic growth, rapidly raise our incomes, in particular rapidly raise our consumption. And so if we saw really high real interest rates, that would be indicative of this case of aligned AI raising human incomes. Alternatively, another case with AI that people talk about is that, you know, AI is going to wipe us all out. And you’ve done podcasts on this topic. Similarly, if we’re all going to be dead next year because AI was going to wipe us all out, then there’d be no reason to save today. You’re going to be dead next year. No reason to hold on to assets for next year. Likewise, that pushes up interest rates.
So, you know, we could go and look at interest rates. Are they much higher than they have been? And like, no, they’re well within the range of normal variation. And when I started thinking about this back in fall of 2021, it was particularly salient because at that time long-term real interest rates in the US, and indeed around the world, were at all-time lows, like negative. So you know, you’d give $100 to the US government, they give you back $99 inflation adjusted at the end of the year. Interest rates have gone up a non-trivial amount since then actually, but really not that much. Really, it’s probably not because of AI. Maybe a bit. So that’s the core argument. That if markets were expecting aligned or unaligned transformative AI, then we’d see high real interest rates today.
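[Editor's note: the mechanism Basil describes is the standard consumption Euler equation, whose steady-state (Ramsey) form implies a real rate of roughly r ≈ ρ + γg + δ, where ρ is pure time preference, γ the inverse elasticity of intertemporal substitution, g expected consumption growth, and δ annual mortality (extinction) risk. A minimal back-of-the-envelope sketch in Python; the parameter values are illustrative, not from the paper:]

```python
# Ramsey-rule sketch of the interest-rate argument: r ~ rho + gamma * g + delta.
# rho   = pure time preference
# gamma = inverse elasticity of intertemporal substitution
# g     = expected annual consumption growth
# delta = annual extinction/mortality risk
# All parameter values below are illustrative, not taken from the paper.

def real_rate(rho, gamma, g, delta):
    """Implied long-run real interest rate under the Ramsey rule."""
    return rho + gamma * g + delta

baseline    = real_rate(rho=0.01, gamma=1.0, g=0.02, delta=0.00)  # ordinary 2% growth
aligned_tai = real_rate(rho=0.01, gamma=1.0, g=0.30, delta=0.00)  # 30% growth scenario
doom        = real_rate(rho=0.01, gamma=1.0, g=0.02, delta=0.20)  # 20%/yr extinction risk

print(f"baseline:    {baseline:.0%}")     # 3%
print(f"aligned TAI: {aligned_tai:.0%}")  # 31%
print(f"doom:        {doom:.0%}")         # 23%
```

Both the aligned-TAI and doom scenarios push the implied rate far above observed long-term real rates, which is the sense in which low rates suggest markets are not pricing in very short timelines.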
[23:51] Seth Benzell: All right, great arguments. And now I’m going to explain why this was so frustrating for me in 2021 to read this argument. I had been working on transformative AI topics and had been thinking about, you know, kinds of economic downsides of AI. And one of the mechanisms that I had become worried about was the anticipation of AI leads to dissaving and that dissaving is large enough that interest rates skyrocket and actually you don’t get enough reinvestment in the economy to have significant economic growth, right? Set aside for a second whether or not the dissaving you have in mind is so extreme that you would literally like cancel out the gains from AI. But I had been kind of pushing on this idea that, you know, AI is going to lead to dissaving... as the world’s interest rates were plummeting. And so I had kind of pivoted into trying to think about, okay, well, if we do get really good AI, how could you get to a world where there are very low interest rates, right? And so one version of this idea I worked on with our friend and co-author Erik Brynjolfsson is the idea that, well, maybe there will be a kind of labor that will be infinitely reproduced, but there will be still some scarce human factor. And then actually that scarce human factor will make all of the gains and then interest rates can remain low.
Another story would be: well maybe we don’t have transformative AI, we have an AI that takes over, you know, 50, 60, 70% of jobs. We see the labor share of national income go down from, you know, 60% to 20%. But if you actually play that out in a big macroeconomic model where you try to realistically model national savings rates... well, you’re kind of pushing against the tide. Like we talked about, in 2021 we had this huge—it was called by some an international saving glut—that was maybe driven by the rise of an Asian middle class that all of a sudden had all of this money, needed to save for retirement. There was a scarcity of safe assets. And so even if you automated a lot of jobs, there might be still a lot of absorptive capacity for that savings before you would significantly bid up interest rates.
And so kind of for both this sort of a theoretical reason and a sort of a kind of a macro simulation reason, I fired off to you this angry email saying, “Don’t you realize blah, blah, blah, blah, blah?”
[26:28] Basil Halperin: Yeah, the audience wants your original comment. They want you to read it.
[26:32] Andrey Fradkin: Oh, that email will be in the post, don’t worry.
[26:36] Basil Halperin: I have it on hand. I have it on hand.
[26:38] Seth Benzell: Oh wait, let’s hear it. Let’s hear it, Basil. How bad was it?
[26:41] Basil Halperin: This is going to be the unhinged portion of the episode. So Tyler Cowen kindly reposted the essay.
[26:49] Seth Benzell: [Laughs] It was like, “A crazy guy emailed me.”
[26:51] Basil Halperin: Well, so initially it was an email. Initially it was a comment on the Marginal Revolution post sharing the essay. And so, like, you know, I...
[26:59] Seth Benzell: And everyone knows that that is where the sanest people hang out.
[27:03] Basil Halperin: I, like some neurotic person or whatever, skim through these comments and there’s this one guy Seth Benzell: “Hey, I’ve read a few of his papers, including that one you mentioned with Eric. This is so dumb.” That’s my first introduction to Seth. Of course, since then things have changed. But welcome to the internet.
[27:26] Seth Benzell: Wow, “so dumb.” I came out of the gate swinging. You have to remember it was the pandemic. We were all cooped up. Some people went to BLM protests. I commented on Marginal Rev. But now I’ll tell you how you won me over, Basil. Which is, you sat me down and you said, “Seth, those scenarios that you’re thinking about, the one where there’s still, you know, a scarce human factor that’s making the wins, or the one where we automate 60% of jobs, those are ‘AI is a big deal’ scenarios, but those aren’t the transformative AI, AGI scenarios that I’m actually writing about.” And then I apologize for not having read the paper.
[28:06] Andrey Fradkin: You’re a true Marginal Revolution commenter, Seth. Who I don’t think any of them have ever read a paper.
[28:15] Basil Halperin: This is worth noting. So like, the paper and the argument really is zoomed in on this particular scenario, which I think was much more top of mind for the people thinking about this a few years ago. Our initial essay was posted a month after ChatGPT came out. Before ChatGPT, there weren’t that many people in the world thinking about AI, right? And the people that were, a lot of them were focused on these fast takeoff “foom” scenarios. Things would happen fast, things would happen big. More likely than not, we’re going to die. P(doom) is high, as they say, right? So we were really focused on these kind of extreme possibilities: either we’re all going to die or we’re going to have what we operationalized as 30% annual GDP growth. An order of magnitude increase in annual GDP growth. Which would be crazy. It would be as if the whole economy were growing as fast as Moore’s Law, more or less. So yes, it’s an extreme scenario for sure.
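[Editor's note: a quick check of the Moore’s Law comparison (my arithmetic, not a figure from the episode): 30% annual growth doubles the economy roughly every two and a half years, close to Moore’s Law’s canonical doubling every two years, versus roughly every 23 years at ordinary 3% growth.]

```python
import math

# Doubling time (in years) at annual growth rate g: t = ln(2) / ln(1 + g)
def doubling_time(g):
    return math.log(2) / math.log(1 + g)

print(f"3% growth:  doubles every {doubling_time(0.03):.1f} years")  # ~23.4 years
print(f"30% growth: doubles every {doubling_time(0.30):.1f} years")  # ~2.6 years
# Conversely, a doubling every 2 years corresponds to an annual growth rate of:
print(f"Moore's Law-equivalent growth: {2 ** 0.5 - 1:.0%} per year")  # ~41%
```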
[29:13] Seth Benzell: And but yes, but so given that extreme scenario, you won me over. And I said, “Andrey, when we start our podcast, I want to talk about this paper because nothing has moved my priors so much as this paper.” Maybe it was just moving my definitions around. Maybe it gave me like a stronger understanding of what people really mean by transformative AI versus just AI that is so good that it automates 70% of jobs. But I talked to Andrey about it and Andrey, remind me, were... did I fully convince you of Basil’s arguments or remind me?
[29:47] Andrey Fradkin: No, I don’t think so. Andrey wasn’t convinced at all. I just... I mean... I just feel like the people being so certain that this transformative AI is coming in this particular way seems unlikely to me. It’s not like how humans tend to think or behave about most things in life. And then it’s hard for me to imagine a world where they essentially like, it’s a coin flip: either we all die or we have amazing transformative AI. And we don’t have any intermediate types of outcomes where, for example, you might want to engage in precautionary saving. I know you talk about certain precautionary savings in your paper, but like, that’s just a very natural response to a lot of uncertainty. There are of course also scenarios where there is tremendous economic growth, but it’s held by very few people. It’s ex-ante not obvious who those people are going to be. Or maybe it is obvious, I don’t know. Maybe they already have all the capital, right? There are just a lot of things, a lot of details to think through and I’m sure you’ve thought through a lot more of those than we have in our podcast.
[31:04] Basil Halperin: Yeah. So one thing I should say is that like this transformative AI 30% GDP growth scenario, that’s not something we made up or pulled out of thin air. Like this really was and is a paper dedicated to a specific conversation, just like any academic paper, right? It’s a conversation among a particular group. So that’s one thing. Another thing to say is like, to me... so one thing Andrey that you spoke about in the last podcast on this that I totally agree with is skepticism of quantitative macro predictions. So I think you went beyond what I would say in terms of skepticism, but I so strongly share the belief or the view that macro does not have an amazing track record in terms of precise predictions. And that’s why... like that’s like a strong motivation for the approach in this paper. Where instead of like, we’re going to write down an optimizing model, a model of optimizing agents where in equilibrium we determine the structural forces determining the real interest rate and we’re going to calibrate all these different forces and feed in the simulation. Instead, it’s just this like dead simple thing where we have this very robust, strong prediction from any intertemporal macroeconomic model: that higher growth or higher mortality risk raise real interest rates. And people are predicting, people are moving tens, hundreds of billions of dollars, literally in San Francisco, under the belief that these things are going to happen. One of these two things is going to happen. It’s going to happen in the next 10, 5, 1 year. And this provides some sanity check on like, most of all, like the very shortest timeline predictions.
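Basil’s “dead simple,” robust prediction here is the steady-state consumption Euler equation, sometimes called the Ramsey rule. A minimal sketch, with purely illustrative parameter values (the time-preference and curvature numbers below are assumptions, not from the paper):

```python
# Stylized Ramsey rule: r = rho + gamma * g (+ a mortality-risk term).
# rho (time preference) and gamma (inverse elasticity of intertemporal
# substitution) are illustrative assumptions, not calibrated values.
def real_rate(g, rho=0.01, gamma=1.0, mortality=0.0):
    """Steady-state real interest rate given expected consumption growth g."""
    return rho + gamma * g + mortality

print(f"{real_rate(0.02):.0%}")  # ~2% growth expectations: r around 3%
print(f"{real_rate(0.30):.0%}")  # 30% TAI growth expectations: r around 31%
```

The point of the sanity check is the gap between those two numbers: if the marginal unit of capital really expected 30% growth soon, rates nowhere near today’s levels would be hard to rationalize.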
[32:51] Seth Benzell: Yeah, so maybe I can pay...
[32:52] Andrey Fradkin: But I guess does everyone need to believe in those predictions? I mean...
[32:56] Seth Benzell: It has to be like the median investor, right? Who has... who’s the guy that we’re talking about the beliefs of?
[33:01] Basil Halperin: The marginal unit of capital. So, you know, markets don’t reflect average beliefs. They reflect the belief of the marginal unit of capital, the marginal trader, just like any price reflects the marginal buyer/seller. And like a priori and lots of theory and so forth to back this up, like you would think that the marginal trader is the one who has the most knowledge or the most incentive to buy/sell. You can think about deviations from that, but like that’s...
[33:26] Seth Benzell: Isn’t the marginal trader a noise trader?
[33:28] Andrey Fradkin: Or like if we have a distribution of beliefs, isn’t the marginal trader someone who has an intermediate belief?
[33:35] Basil Halperin: Um, so one thing I will say is that... one thing I’ve learned from this whole project is it’s confusing to me how underdeveloped the literature on asset pricing under heterogeneous beliefs is. I think it’s in part because like you get these no trade results where if people don’t... anyway, the theory is hard. But the way I think about it is that the sort of robust prediction of theory is that asset prices are like a wealth-weighted average of beliefs. Maybe wealth-weighted risk tolerance weighted average of the distribution of beliefs.
[34:13] Seth Benzell: That right? You think if I’m super out of the money, can I still move the middle somehow? In other words, if I’m the guy... if I’m a 99% “AI never happens” or “AI always happens,” in what sense am I being included in that weighted average?
[34:27] Basil Halperin: Just directly. So like this is about consumption-savings decisions rather. Like what, how fast will the growth rate be? That average.
[34:39] Seth Benzell: Okay. Oh, you’re talking more about the national saving rate. That part of it.
[34:43] Basil Halperin: I’m thinking like the g, the growth rate that goes into the real interest rate determination, that’s the average belief over that.
[34:54] Seth Benzell: Right. And the reason that that matters is that is going to drive the saving rate, which drives the interest rate? Or through a different mechanism?
[35:01] Basil Halperin: Yes, yes, yes.
[35:02] Seth Benzell: Okay.
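The wealth-weighted-beliefs idea from the exchange above can be made concrete with a toy example (all numbers invented for illustration):

```python
# Toy illustration (numbers invented): the growth expectation priced into
# rates as a wealth-weighted average of heterogeneous beliefs about g.
beliefs = [0.02, 0.02, 0.30]   # two conventional investors, one TAI believer
wealth  = [0.45, 0.45, 0.10]   # the believer controls 10% of capital
g_implied = sum(b * w for b, w in zip(beliefs, wealth)) / sum(wealth)
print(f"market-implied g = {g_implied:.1%}")  # 4.8%: rates move, but modestly
```

So even Seth’s 99% “AI always happens” trader is included “just directly”: his belief enters the average, scaled by his share of capital.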
[35:04] Andrey Fradkin: I have a... so I have a question related to, you know, we touched upon this when we did the podcast, but I’m curious what you think about it is: It seems hard for me to imagine a scenario where we get to your scenario without a lot of hints in advance, right? Like... like your scenario is literally like most people agree that we’re going to have 30% growth next year. What... what does the path to that look like? Does that mean that we first have 20% growth, 10% growth? Uh, like... are there other assets that we expect to be leading indicators there? Because I do think in some sense, if we get to your scenario, then you’ve already told us what happens.
[35:48] Basil Halperin: It’s not my scenario. I want to emphasize.
[35:51] Andrey Fradkin: No no, sorry. To your analysis. If we get to the point in your analysis—I know it’s not your scenario—then...
[35:56] Seth Benzell: Is your warning light a leading indicator or a late indicator?
[36:01] Andrey Fradkin: Yeah. We thought it was a late indicator. But I’m curious if you have ideas for leading indicators. Yeah.
[36:07] Basil Halperin: Ah, so I really think this is a leading indicator because like interest rates reflect expectations about future growth, not current growth. So like wages would be a lagging indicator where those are only going to fall once the technology has developed. Interest rates will rise once people expect the technology to be developed.
[36:25] Andrey Fradkin: So no, so I think we both agree with that. I’m just saying that like it’s hard for me to imagine that enough percent of capital believes that we’re going to have 30% growth without it being apparent in other economic statistics long in advance of that.
[36:38] Seth Benzell: Like will we be... I guess... the people who read your paper will be convinced that AGI is coming before interest rates go up.
[36:48] Basil Halperin: So that’s sort of a question of like how efficient do you think markets are plausibly, right? Is that what you’re saying?
[36:58] Seth Benzell: I think that’s fair, right? Andrey is saying that the sophisticated... I mean that’s how I read it.
[37:01] Andrey Fradkin: Well, one is efficiency. The other is like... let’s say for... if we thought that for AGI to happen, we needed to have substantial data center and energy build outs...
[37:13] Seth Benzell: Elon’s robot factory.
[37:15] Andrey Fradkin: Yeah, but to the extent of like 5% of GDP, 10% of GDP, right? Like these things will be happening. There... you know, there’ll still be uncertainty. So it’s not necessarily that it’s an efficient markets failure, but um... like what are the... you know, those are kind of the things that I’m curious about if you have any thoughts. Like what are the precursors to this moment?
[37:41] Basil Halperin: So I mean, I still think interest rates can go up before... like capital takes time to build. But if the discussion is like what things will happen on the way to transformative AI, like yeah, the... what’s the line from the bard of our times, our dear leader: “everything is compute”? Like we’re going to tile the planet with computers. So like 1% of US GDP last year was hyperscaler capital expenditure.
[38:15] Seth Benzell: And let me... yeah. Let me try to ask this a slightly different way, which is, I guess maybe try to make you be a little bit quantitative about how sensitive your personal predictions about TAI are based on different interest rate scenarios. So I’m going to give you a conditional expectation here. Feel free to use it or to give me a different one, but I want you to try to be quantitative if you can. What is your conditional probability of TAI within five years if the interest rate is less than 6% versus TAI in less than five years if the interest rate is above 15%? Real interest rates.
[38:53] Basil Halperin: If the real interest rate is above 15%, then like if this is the real risk-free interest rate, then I think TAI is here and growth is going bananas. I think plausibly even if real interest rates are above 6%... so like the 30-year right now is like 2.6. The 10-year is like 1.8. And so like the 2.6...
[39:13] Andrey Fradkin: Just to be clear to the listeners, once again, we’re talking about inflation-adjusted interest rates.
[39:16] Basil Halperin: That’s important. So the 1.8% number for the 10-year real interest rate is like really in line with where things have been over the last 25 years. The 2.6 for the 30-year is like a little bit elevated. So even 6...
[39:31] Seth Benzell: The numbers I was using were kind of risky equity market rates. So feel free to substitute whatever numbers you like.
[39:35] Andrey Fradkin: Well that’s just a totally different object, right?
[39:39] Basil Halperin: So...
[39:40] Seth Benzell: Oh god. Right. Alright. So okay, risk-free rate. So right now you’re telling me we’re at what? 3%?
[39:44] Basil Halperin: 2.6 for the 30-year.
[39:46] Seth Benzell: 2.6. All right. So what’s your conditional expectation on TAI in five years in the future given that next year the risk-free rate is under 3%? And then what is it if the risk-free rate goes above 10%?
[40:02] Basil Halperin: Again, if it goes above 10%, I think growth is going bananas. That’s a huge jump.
[40:07] Seth Benzell: Anticipated growth. So you don’t even think... you think we’d see the growth before we’d see the interest rate?
[40:12] Basil Halperin: Sorry, it depends on what horizon interest rate we’re talking about here.
[40:15] Seth Benzell: 30-year.
[40:17] Basil Halperin: If the 30-year goes up to 15? Or above 10?
[40:20] Seth Benzell: 10 or 15. You choose numbers. I want you to try to be quantitative at me.
[40:24] Basil Halperin: Well, so here’s the thing, here’s the thing. The interest rate at a particular horizon tells you among other things about growth expectations at that horizon. So you can look at the entire yield curve, interest rate at 1 year, 5 year, 10 year, 30 year, and get the expectations sort of with lots of other things going on at those different horizons. So like I wouldn’t want to just look at just the 30 year. I’d want to look at the 1, 10, 5, 30.
[40:48] Seth Benzell: All right. So choose whatever... the curve is the same. Move the level up, not down.
[40:53] Basil Halperin: I guess if it does it for you.
[40:57] Seth Benzell: Gimme. Feed me.
[41:01] Basil Halperin: Real interest rates rose two percentage points from the... two or three percentage points from the COVID depths to where they are now. And again, now they’re like sort of more or less in where they were 20 years ago. If they went up another percentage point, I’d be... pretty surprised and interested. How much does that raise like my probability of transformative AI in the next five years if the...
[41:26] Seth Benzell: That’s the question. That’s the question. This is what your paper is about.
[41:31] Basil Halperin: But again, like I’m not here to make quantitative forecasts, especially going from market prices back to probabilities. I’m here to say that there’s this...
[41:43] Seth Benzell: I know, you’re making a directional argument, but give me... does it double your odds of TAI? Or I can let this go if you’re going to really refuse.
[41:50] Basil Halperin: I mean, so what I can do... I can tell you what my AI timelines are and like what feeds into that and how...
[41:55] Seth Benzell: Yes.
[41:56] Andrey Fradkin: Let’s just do that. Yeah.
[41:58] Seth Benzell: And then tell us how they would change if interest rates got up.
[42:02] Basil Halperin: Okay, well, like... again, like I really emphasize that to me the right way to read this paper is this interest rate argument is like an outside view, here’s a sanity check. So like my view is much more informed by like all these other things now that I’ve spent like a whole bunch of years reading the AI literature, the AI economics literature. So for example... if you just extrapolate forward the “METR time horizon” trend that you guys have spoken about...
[42:30] Andrey Fradkin: What’s the... what’s the...
[42:32] Basil Halperin: ...the length of a task that... of a software engineering task, a machine learning research task that these large language models can do with 50% accuracy. If you extrapolate that trend forward... this is currently doubling every seven months, or that’s what it’s been for the last six years. If you extrapolate that forward, take into account very importantly the fact that by like 2030... capital expenditures by hyperscalers can be like a trillion dollars and that scaling can’t continue. So like take into account the fact we’re going to hit the compute wall and then investment’s going to slow down. We’ll have models that can do one-month tasks with 50% accuracy by, I think it’s 2033. And one-year tasks by 2039. This is Whitfill, Snowden, Parker’s new paper. So that’s on this narrow range of tasks done in these METR benchmarks at 50% accuracy: 2039, one-year horizon. If you then adjust for the fact that like these are particular kinds of tasks... like I don’t know, say that adds another six years, so that’s like another six doublings or something like that. And then take into account that rather than 50% accuracy, we want 99% accuracy. That takes you like to the late 2040s. I think... just this particular stylized fact about time horizons already gets you to... at least the possibility of potentially long time horizons for AI. So that’s like...
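The extrapolation Basil walks through can be roughed out with a constant-doubling calculation. This naive sketch assumes a roughly two-hour current horizon as of mid-2025 (an assumption, not a quoted figure) and no slowdown, so it lands a few years earlier than the 2033/2039 dates he cites, which build in the compute wall:

```python
import math

# Naive constant-doubling extrapolation of the METR task-horizon trend.
# Assumes ~2-hour horizon in mid-2025, doubling every 7 months, and no
# compute-wall slowdown, so it is earlier than Basil's 2033/2039 figures.
def year_reached(target_minutes, now_minutes=120, now_year=2025.5,
                 doubling_months=7):
    doublings = math.log2(target_minutes / now_minutes)
    return now_year + doublings * doubling_months / 12

one_month = 60 * 8 * 21        # one month of 8-hour workdays, in minutes
print(round(year_reached(one_month)))       # one-month tasks, naive trend
print(round(year_reached(one_month * 12)))  # one-year tasks, naive trend
```

Each further adjustment Basil mentions (task breadth, 99% rather than 50% accuracy) just adds more doublings, which is why the dates slide into the 2040s.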
[44:12] Seth Benzell: I guess we’ll come back to this... and maybe we’ll talk about this a little bit more with your new paper where we talk about to the extent that algorithmic progress can substitute for compute progress, right? Because that’s going to be a key factor here.
[44:22] Andrey Fradkin: But to be clear, let’s dwell on this a tiny bit more.
[44:26] Basil Halperin: Yeah, there was a lot of sub-points in there that I went through very fast.
[44:29] Andrey Fradkin: Yeah, but yeah, so I think... I think one thing, you know just Seth to your point very briefly is like the METR graph takes into account algorithmic progress. So that’s why it goes as fast as it does.
[44:43] Seth Benzell: Right. But then he said he was also going to take into account... okay, anyway.
[44:47] Basil Halperin: So that’s like I think one... that’s like a median view. But I think like you really have to think of terms of different scenarios. So like the “AI 2027” guys... like that report seems a little crazy, this idea that things are not just going to grow at a constant rate but are going to go hyperbolic. Like that seems a little crazy and maybe even... yeah, a little crazy. But like there is enough flesh on that argument, including this new paper Seth that you mentioned, could point towards that, that I think like you have to have some non-zero probability on like... maybe not literally AI 2027 but like AI before 2030.
[45:27] Seth Benzell: Do you have to put non-zero probability on anything that isn’t conceptually impossible?
[45:31] Basil Halperin: Yes, okay. I mean like non-1% probability. So like I put like 10 or 15% probability on like things getting really crazy before 2030. And then I put like 50 to 80% probability on something between 2035 and 2050. And then like whatever is left, 10, 20% on like some factor X... Moore’s Law slows down, energy runs out and like things take longer than 2060 or whatever. Or including never being able to develop such technology.
[46:00] Seth Benzell: So did I get you right? So the median forecast is the mid 2040s for AGI? Is that what you’ve given me?
[46:05] Basil Halperin: The quantitative numbers here are really hard, but yes, something like 2035 to 2050.
[46:10] Andrey Fradkin: It’s not AGI to Seth. I mean... I mean it’s very different concept...
[46:17] Seth Benzell: TAI, TAI. TAI is what we want to talk about. Okay. TAI, excuse me.
[46:21] Andrey Fradkin: But Basil, I’m going to give you a counterpoint. I think the METR graph drastically understates the time horizon of tasks that can be done.
[46:30] Basil Halperin: Understates?
[46:31] Andrey Fradkin: Yes.
[46:33] Seth Benzell: Because Ralph OODA Loop.
[46:36] Andrey Fradkin: I mean, yeah, but broadly, right? Like a lot of these evals are doing dumb things. They’re taking a model out of the box and just asking it to do it. And that is not how you would do any task if you had to do it, right? Like you... you know, a big theme of I think our show and worldview is we believe in a multitude of models interacting in an ecosystem to produce outcomes. And the scaffolding really matters.
[47:08] Seth Benzell: How we were epi-ing the Lessin-Kuld show.
[47:10] Andrey Fradkin: Uh, the scaffolding matters, right? The... you can have different models from different providers interacting with each other and calling other tools. And so to evaluate the ability of just like an out of the box LLM to do a specific task... that’s never how you would actually do it in real life.
[47:31] Seth Benzell: Yeah, we see this in Andrey’s data where there are, you know, very clear people use a mix of models. It’s there in the data.
[47:39] Basil Halperin: Yeah, I mean I think unhobbling is like one possible reason that like there’s 15% chance that we’re colonizing the stars before 2030. That unhobbling could be enough. Leopold had it right. Maybe.
[47:53] Andrey Fradkin: Yeah, yeah. I mean, for what it’s worth, I think the bigger, you know... I think the thing I agree with you more is that some of these METR tasks are really unrepresentative of most tasks in the economy. And in particular I don’t think they teach us much about robotics. And I think like robotics has to be an ingredient of any TAI scenario eventually. And so...
[48:18] Seth Benzell: Only a computer scientist would think that computer science is the final task.
[48:23] Basil Halperin: The strawman obviously being that, you know, a brain in a vat—the brain of the computer—can solve robotics just by doing better software on the computer. That’s the strawman.
[48:32] Andrey Fradkin: Yeah, no, no. I understand, but we’re still talking about human tasks being done, you know.
[48:38] Basil Halperin: Totally, totally.
[48:40] Seth Benzell: A brain in the vat still needs faith in God in order to believe in the exterior world, dude. Haven’t you read your dualism?
[48:49] Andrey Fradkin: Um, all right, so...
[48:51] Seth Benzell: Wait, let me wrap up... I want to finish up this topic. Last question on this topic and then we can move on. Which is: okay, you’ve shot me down on asking a quantitative question about the macro. Will you give me an answer about: are you changing your environment... your portfolio? I mean, you said 10% chance of s**t gets crazy. Sorry, that’s my one curse per episode. 10% chance. How do you allocate your assets based on that? Are you dissaving?
[49:19] Basil Halperin: So like the first thing I’d say is, for someone at my stage of the life cycle, like my most important asset is my human capital. And I’ve reallocated that heavily from studying monetary policy, which was the thing I was obsessed with for years and years, to now being focused a lot on the economics of AI. So like that asset of my portfolio I’ve shifted a lot. Have I changed what my savings are...
[49:44] Seth Benzell: Are you dissaving your social capital through drugs and alcohol?
[49:49] Basil Halperin: Well, there’s a different consideration there where like I want to stay healthy until the singularity so I can live forever. So I think actually the consideration might go the other way in terms of intertemporal substitution. But, do I try hard to consumption smooth? Absolutely. It would bother me when people in grad school were like, “Yeah, I’m putting money into my 401k.” I’m like...
[50:08] Seth Benzell: Are you putting money into your 401k?
[50:11] Basil Halperin: I put the minimum amount to get the matching funds.
[50:14] Seth Benzell: The minimum, dude. The minimum. I thought this was a guy who believed in his own papers.
[50:17] Basil Halperin: There’s no other reason to do it.
[50:21] Seth Benzell: All right, you have him, Andrey.
[50:23] Andrey Fradkin: All right, all right. I think Seth has given up on life at this point. So cool. Let’s talk a little bit about your new paper with Tom Davidson, Thomas Holden, and Anton Korinek. Why don’t you tell us a little bit about the premise?

Basil Justifies His Research: When Does Automating AI Research Produce Explosive Growth?
[50:44] Basil Halperin: Yeah. So this is a paper that in some ways is about that 15% probability that things could get crazy soon. And in some ways is about some like deep or some standard economic growth theory. So the idea here is to like take seriously the structure of modern machine learning and put that, embed that into the canonical model of economic growth. Where, by that I mean like: how does AI get trained? How does it develop? Well there’s two key ingredients: software progress, hardware progress. So Moore’s Law and other trends mean that we’re able to produce more chips, better chips at lower prices over time. And algorithmic progress means that even for a fixed quantity of computer hardware, you can get more output from a computer program because we are able to write better computer programs. We are able to train better AI models.
So taking into account the fact, maybe most concretely, that OpenAI uses Nvidia chips to train better AI. And then Nvidia increasingly uses AI to design better chips. This is like Google’s AlphaChip has been put to use designing better TPUs, Google’s version of the GPU chip. So that’s like the motivation, sticking this into a canonical economic growth model, seeing what changes. What that cashes out as...
[52:20] Andrey Fradkin: Yeah, so before we get deeper into the paper... isn’t the idea that research helps do... like, you know, creating new ideas accelerates economic growth through subsequent acceleration of research and development efforts already embedded in the Romer growth model? How is this different?
[52:46] Basil Halperin: 100%. So what this does differently is that it says that there’s different kinds of research. So there’s like software research and there’s hardware research. And those are heterogeneous in interesting ways compared to each other, compared to you know, biomedical research or whatever. And taking seriously that heterogeneity and seeing what that heterogeneity implies.
So like in particular... one of the key lessons—so what we do in the paper is we write down a general networked semi-endogenous growth model, a Romer-Jones-style networked model. And draw out a couple of key insights, I think. And so the core insights are around this idea of diminishing returns, where we stand on the shoulders of giants to like... you know, we’re picking fruit from the tree of knowledge. We stand on the shoulders of giants to reach higher and higher fruit, but eventually the fruit gets harder and harder to pick because we pick all the low-hanging fruit first. This idea of diminishing returns. And I think this idea of diminishing returns is like kind of obvious to economists, but it’s not always obvious in these conversations. Like the idea of an intelligence explosion, the idea of the singularity, a lot of times can fail to recognize the importance of diminishing returns, where there’s this idea that if you have a self-improving AI, like doing surgery on its brain to get smarter and smarter, that naturally has to lead to a singularity. But it doesn’t, if the diminishing returns are strong enough.
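That last point—self-improvement alone doesn’t guarantee a singularity—can be illustrated with a toy law of motion (purely illustrative, not the paper’s model):

```python
# Toy illustration (not the paper's model): self-improving capability A with
# dA/dt = A**phi. Under diminishing returns (phi < 1) growth keeps slowing;
# under strong enough increasing returns (phi > 1) there is finite-time blowup.
def simulate(phi, steps=400, dt=0.01):
    A = 1.0
    for _ in range(steps):
        A += dt * A**phi
        if A > 1e12:            # treat crossing this as an "explosion"
            return float("inf")
    return A

print(simulate(0.5))  # diminishing returns: ends around 9, no explosion
print(simulate(1.5))  # increasing returns: explodes in finite time
```

Same self-improvement feedback loop in both runs; only the returns parameter differs, which is exactly the elasticity fight the paper is about.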
[54:17] Seth Benzell: Okay, so now we gotta go back to the fruit. So okay, so now earlier you were talking about there were fruits, we were going for them... Explain this concept of diminishing returns through fruit because I’m really hungry.
[54:30] Basil Halperin: Yeah. So you’re hungry and so you’re picking fruit from the tree of knowledge. You pick the low hanging fruit first. And you know, that makes you stronger and gives you more energy to pick more fruit. But like eventually you pick all the low hanging fruit. And now you have to reach up and pick higher hanging fruit that’s harder to pick. And because fruit gets harder to pick—ideas get harder to find over time—you’re not just going to grow to become 100 feet tall, a thousand pounds because you’re running into diminishing returns in terms of fruit on the tree of ideas.
[55:10] Seth Benzell: So it’s like I grab one fruit and that gives me the energy to eat 0.9 more fruit, which gives me the energy to have 0.9 more fruit and it kind of peters out. I’m just riffing here, but is this like... is the Garden of Eden story... is that actually about diminishing returns somehow? It’s like we’re not in Eden because we have diminishing returns from apples?
[55:28] Basil Halperin: Yeah, I guess... I don’t want to say that the snake is Chad Jones because he’s the one who taught us this stuff.
[55:34] Seth Benzell: No, the snake is obviously Bloom and Reenen and all...
[55:38] Basil Halperin: Right, right. And Jones. Yeah, yeah. I guess so. But so exactly as Andrey said, like this is well known in the literature, this idea of diminishing returns. What we do is have this networked model where you have the software research sector and the hardware research sector interacting. There’s spillovers across sectors. And that teaches you a few things that I can talk about.
[56:02] Andrey Fradkin: But so at a high level... you know, if I’m understanding the idea in the paper correctly, is that you can undo diminishing returns with a networked production function for research, if you will. Here’s a question for you: What if we took an old growth model and just did away with diminishing returns, you know, all together and we had to have increasing returns? Wouldn’t we also get an explosion? Like... am I interpreting things correctly there? You’re kind of trying to microfound why increasing returns would happen.
[56:54] Basil Halperin: Yes. Yes. So to say that another way... like the original Romer model in this literature implied that there were no diminishing returns. Chad Jones comes along and points out empirically there must be diminishing returns. That’s because like we’ve had this constant 2% growth rate of ideas, that is a 2% growth rate of total factor productivity, or 1.5%. Meanwhile the growth rate of researchers has been 4% for like the last hundred years. So we have an increasing number of scientists—like the two of you, thinking great thoughts—but we’re only producing the same growth rate of ideas of 1.5%.
[57:40] Andrey Fradkin: That’s because we’re podcasting too much.
[57:43] Basil Halperin: Seems plausible.
[57:44] Seth Benzell: It’s for the AI. We’re improving the AI, Andrey.
[57:48] Basil Halperin: Patrick Collison has this tweet that I think about a lot where he pointed out that when... when did growth in the US fall off a cliff a bit? It was like 2003 or TFP growth. And that’s you know, right when Facebook came out. Social media became the great distraction. Anyway, so yes, ideas get harder to find. That explains why growth slows down. And Andrey you point out that if you just get rid of that idea, then yeah indeed you could have a growth explosion. And indeed we are saying that spillovers across sectors can counteract those diminishing returns. And additionally, importantly, automation can also counteract the diminishing returns.
[58:27] Andrey Fradkin: Another thing to say is actually, and I think this is super interesting—not something I thought about going into the paper—is that you can estimate this diminishing returns parameter, this critical diminishing returns parameter by sector. And I can explain what these numbers mean, but that number for the economy as a whole is -3. So zero would be no diminishing returns. For the economy as a whole, it’s -3. For the software sector it’s -1. For hardware, like Moore’s Law, it’s -0.2. So the hardware sector has the least degree of diminishing returns of any sector that’s been estimated. So you know, if compute becomes a larger share of the economy, becomes more important, then this diminishing returns just inherently will become less of a thing. And then on top of that you have this spillover issue and this automation issue I’ve hinted at.
[59:17] Seth Benzell: So I know the natural question... and now I’m going to put my applied microeconomist hat on: where are you getting these numbers from, man? Yeah, you gotta parameterize this model.
[59:33] Basil Halperin: Yeah, so this is just looking at the time series. I can spell that out and I think I have an intuitive way of doing it, but yeah this is just looking at...
[59:40] Andrey Fradkin: Yeah, well let’s like walk through the hardware example. Let’s just like give us some intuition for where that number comes from. Because in my mind that seems like a really hard number to come up with even though we do have Moore’s Law, right? Yeah.
[59:53] Basil Halperin: No, so the ideal here would be to run an experiment. And you know, maybe METR has enough money to do that or something and maybe they should. But the way...
[1:00:00] Basil Halperin: ...the way that Bloom et al, the same paper that Seth mentioned, does this... the literature does this is the following: So say, you know, there’s like a hundred guys and gals thinking about how to improve semiconductors, how to improve hardware in the world. Fix that population. If ideas were not getting harder to find, that same hundred people would produce Moore’s Law. So Moore’s Law says that hardware productivity grows like 40% per year. That gets you the doubling every two years of Moore’s Law. So something like 40%. A hundred people get 40% growth.
But we’ve had this constant 40% growth for 50 years, 60 years in hardware. But that’s required more than just like the original hundred. It’s required that that population of hardware researchers has grown by, say, 8%, call it, per year since the 1960s. So you’ve needed an increasing number of people to get the same progress in hardware. And so that 0.2 diminishing returns number comes from the ratio of 8% to 40%. That’s the point two.
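In the notation of the Bloom et al. / Jones literature, that back-of-the-envelope can be written out (the symbols below are the literature’s standard ones, not necessarily the paper’s):

```latex
% Idea production with a "fishing out" parameter \beta > 0,
% where S_t = researchers and A_t = productivity:
\frac{\dot{A}_t}{A_t} = \alpha \, S_t \, A_t^{-\beta}
% On a balanced growth path g_A = \dot{A}/A is constant, so taking logs
% and differentiating gives
0 = g_S - \beta\, g_A \quad \Longrightarrow \quad \beta = \frac{g_S}{g_A}
% Hardware: g_S \approx 8\%,\ g_A \approx 40\% \;\Rightarrow\; \beta \approx 0.2
% Economy-wide: g_S \approx 4\%,\ g_A \approx 1.5\% \;\Rightarrow\; \beta \approx 2.7,\ \text{call it } 3
```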
[1:01:17] Andrey Fradkin: Okay. So now I’m going to tell you... now I’m going to use your paper to tell you why that number is wrong. So why is that wrong? It’s because it’s not just those hardware engineers that are producing that Moore’s Law. That Moore’s Law is being produced by everyone else in the economy that... who is producing let’s say like design software or even, you know, like I don’t know, cell phones... like all sorts of things contribute to Moore’s Law.
[1:01:47] Basil Halperin: Yes, exactly.
[1:01:48] Andrey Fradkin: And then there’s also just like physical returns to scale, right? So we’re producing more and more chips so that’s a production function parameter rather than a research parameter. So I don’t... so to me it seems a little strange to like lean so heavily on that number which ignores the entire point of your paper.
[1:02:10] Basil Halperin: So, so, so... a few things to say. One is...
[1:02:17] Seth Benzell: I mean I... yeah, give it a shot. You can also just crawl into your closet and we can hang up now. Your choice.
[1:02:22] Basil Halperin: No, no, this is basically the next paper that my co-authors and I should write. Maybe, Andrey, you can co-author with us. Which is: indeed, these prior estimates of these coefficients ignore exactly the factors that we discuss. So yeah, I don’t need to repeat what you said, because that argument was well put and totally correct. But what that means, or as you said, I think, is that the degree of diminishing returns is underestimated, because the progress is benefiting from spillovers which are not captured. So if you re-did the estimation with spillovers, you would find that diminishing returns are even stronger and that the singularity is less likely. Totally agree.
[1:03:07] Seth Benzell: I have a separate concern about these parameters. So, alright, you want to tell us about the parameters we need in order to get this hyperbolic growth, right? But it really seems like once you start the hyperbolic growth, once you get on that curve, stuff’s going to get super weird super fast. And wouldn’t the parameters change pretty fast? So how can you even extrapolate from today’s parameters to the parameters of that crazy regime?
[1:03:38] Basil Halperin: Yeah. I again am going to be in total agreement with you. I am not someone who wants to take macroeconomic models seriously as quantitative forecasts, but instead see them as mathematically formalized fables from which we can draw out particular insights and intuitions that we’re able to check are internally consistent, because they’re written in the language of mathematics. So that’s why the takeaway I have from writing this paper with Tom, Tom, and Anton is these ideas: diminishing returns are important; spillovers can mitigate diminishing returns; automation can mitigate diminishing returns. And I feel pretty comfortable saying, with the caveats that Andrey just emphasized, that hardware and software have less diminishing returns than other sectors. Though we should re-estimate those, and hopefully will in a future paper. And that on its own is interesting. But not take super seriously questions like: which side of zero are we on? Are we on the side of increasing returns or decreasing returns? I don’t have any reason to think those parameters are stable as we go through 10 orders of magnitude of growth or something like that. Some people on the internet do take them that seriously, and yeah, I completely agree with your concern.
[1:05:01] Seth Benzell: Okay, so we talked about the spillovers. Maybe you want to talk for a little bit about how automation might overcome “fishing out.” If I may suggest a motto for this: “If you fish fast enough, you can outrun fishing out.”
[1:05:15] Andrey Fradkin: Well maybe actually like maybe before you get to that we can just... one of the nice things about this paper is there’s like a concise message which is this Equation Number 1 in the paper.
[1:05:28] Seth Benzell: Yeah the one you... the equation you just told us to not care about. Tell us about it.
[1:05:32] Basil Halperin: Yeah. So I said that for the hardware sector this diminishing returns parameter is 0.2, and for the economy as a whole it’s about 3. And again, the intuition was that ratio: the 8% researcher population growth versus the 40% productivity growth. Whereas if there were 0% researcher population growth, then that diminishing returns parameter would be zero, because you’d have zero divided by 40. Meanwhile, if that number were negative, then you’d have increasing returns and the hyperbolic growth, the singularity.
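A toy simulation, with made-up parameter values, makes the knife edge concrete. In the canonical one-sector setup the growth rate of knowledge is proportional to A^(-beta), so beta > 0 means growth slows as knowledge accumulates, beta = 0 means constant exponential growth, and beta < 0 means accelerating, hyperbolic-style growth:

```python
# Toy discrete-time version of the one-sector knowledge law of motion:
# the per-period growth rate of knowledge A is proportional to A^(-beta).
# beta > 0: diminishing returns, growth slows
# beta = 0: knife edge, constant exponential growth
# beta < 0: increasing returns, accelerating (super-exponential) growth

def simulate_growth_rates(beta: float, steps: int = 50, a0: float = 1.0,
                          c: float = 0.05) -> list[float]:
    """Return the per-step growth rate of knowledge A at each step."""
    a, rates = a0, []
    for _ in range(steps):
        rate = c * a ** (-beta)
        rates.append(rate)
        a *= 1.0 + rate
    return rates

slowing = simulate_growth_rates(beta=0.2)    # like the hardware estimate
constant = simulate_growth_rates(beta=0.0)   # the knife edge
exploding = simulate_growth_rates(beta=-0.2) # increasing returns

print(slowing[-1] < slowing[0])      # True: growth rate falls over time
print(constant[-1] == constant[0])   # True: growth rate stays constant
print(exploding[-1] > exploding[0])  # True: growth rate rises over time
```

The constants here (0.05, 50 steps) are arbitrary; only the sign of beta matters for which of the three regimes you land in.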
So the reason I mention that is that zero there is the focal point, but really the condition is a one: you have this critical condition of, are feedback effects greater than or less than one? In the canonical one-sector model, that comes down to this one diminishing returns parameter. In a networked growth model, instead of having one parameter that tells you whether you have diminishing returns or not, you have a spillover matrix. And the largest eigenvalue, the spectral radius of that matrix... I know you had Ben Golub on recently, so...
[1:06:58] Seth Benzell: Just say, say the magic word. Give the audience the Eigenvalue.
[1:07:00] Basil Halperin: This is becoming the eigenvalue podcast, I guess. If that largest eigenvalue is greater than one, then you have explosive growth. So “is that largest eigenvalue greater than one” can be summarized in this somewhat simple condition we have in the introduction. Very loosely speaking, it’s like a weighted average of the inverses of the diminishing returns parameters, where the weights are determined by how automated each sector is. I don’t know how much sense that’s going to make out loud. In a lot of ways, this is one of those papers where looking at the math is actually a lot easier than saying it in words. But hopefully some of the insights have come across.
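The spectral-radius condition Basil describes can be checked numerically. The sketch below uses generic power iteration on made-up two-sector spillover matrices; the numbers are purely illustrative, not the paper’s calibration:

```python
# Spectral-radius check for a nonnegative spillover matrix:
# explosive growth in the networked model iff the largest eigenvalue > 1.
# Power iteration converges to the dominant eigenvalue for nonnegative
# matrices (Perron-Frobenius).

def spectral_radius(m: list[list[float]], iters: int = 1000) -> float:
    n = len(m)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]  # renormalize to max-norm 1
    return norm  # converges to the largest eigenvalue in magnitude

# Hypothetical 2-sector spillover matrices (illustrative values only):
subcritical = [[0.5, 0.3],
               [0.2, 0.4]]   # eigenvalues 0.7 and 0.2
supercritical = [[0.9, 0.4],
                 [0.5, 0.8]] # eigenvalues 1.3 and 0.4

print(spectral_radius(subcritical) > 1.0)    # False: no explosive growth
print(spectral_radius(supercritical) > 1.0)  # True: explosive growth
```

The same check scales to any number of sectors; only the comparison of the dominant eigenvalue to one matters.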
[1:07:45] Andrey Fradkin: So there are these F terms, which are the fractions of tasks that are automated by AI. Now, the first term of your equation is F_Y, which is the share of consumption-good production that is automated. Am I interpreting that correctly?
[1:08:07] Basil Halperin: Yes.
[1:08:08] Andrey Fradkin: Okay. Now what if that’s one just by itself?
[1:08:14] Basil Halperin: Right.
[1:08:15] Andrey Fradkin: That means that the entirety of the economy that we would actually care about in terms of consumption is automated already. So in that case we don’t have explosive growth; it’s kind of on the boundary condition. Am I interpreting that correctly? Because things aren’t getting better, it’s just that everything we want is being produced automatically.
[1:08:38] Basil Halperin: Right. If there’s nothing else going on, it’s right on the boundary. If you have epsilon of any other productivity growth going on, you get pushed above exponential into super-exponential growth.
[1:08:48] Seth Benzell: It would be like unstable in some sense if you were like exactly at one.
[1:08:52] Basil Halperin: Yeah, to perturbation.
[1:08:56] Seth Benzell: So Basil, I guess the last question I want to ask about this paper before we move on is... you’ve explained how there’s a bunch of different things going on in the research process in the economy that are either going to accelerate research so it gets stronger and stronger, or slow down research so we get diminishing returns. Two of the most important factors here are this idea of spillovers across sectors, but also this idea that you might be able to automate some research, right? As you get better AIs, you might be able to get faster algorithmic improvements. When I read the LessWrongers, the latter kind of seems like the whole show, right? If you can get the AI to write better AI algorithms, there you are. In your model, is that the important factor, or are they all equally important? How do you think about that?
[1:09:47] Basil Halperin: Yeah, okay, so let me say this. The way I’d frame it is that the diminishing returns limit the effects of AI progress. Spillovers are static in some sense: we don’t think of spillovers as changing much over time; the innovation network doesn’t change much. But we do think that as the economy grows, more and more tasks are getting automated. So spillovers provide a static offset to the diminishing returns, whereas as automation increases, it’s continually offsetting diminishing returns. So in a dynamic sense, perhaps automation is more important. In the comparative static, either one is equally powerful in offsetting diminishing returns. But in the sense that automation is the thing that actually changes over time, that’s the more important one.
[1:10:47] Seth Benzell: Okay. Stands to reason.
[1:10:49] Basil Halperin: If I can add one more thing about the paper, actually. I didn’t mention one critically important limitation. If you talk to economists about what will prevent AI from leading to explosive growth, I think we say one of two things. One is the diminishing returns. That’s what this whole discussion has been focused on. But the other one is this idea of bottlenecks: even if you have really fast progress in software engineering, if you don’t have progress on the robotics side of the economy, the physical side, then that will bottleneck the growth if these sectors are complements.
[1:11:24] Seth Benzell: Yeah, and the essential thing is going to be the elasticity of substitution across sectors. Yeah.
[1:11:28] Basil Halperin: Right. And so we completely ignore the bottlenecks issue. We’re just focused on this diminishing returns idea, which to my mind is not a claim that there’s not bottlenecks. I think bottlenecks are super important. I think like there’s a 5 or 10% chance bottlenecks aren’t important—hence my earlier timelines forecast—but like...
[1:11:47] Seth Benzell: We all get uploaded. I mean yeah, there’s a universe where we all just get uploaded and like who cares that we don’t have robots for a while.
[1:11:53] Basil Halperin: Yeah or something like that. But yeah, the focus... the paper is meant to just like zoom in on the diminishing returns logic and to turn off the bottlenecks. But that’s important when thinking about how to quantitatively interpret the paper.
[1:12:08] Seth Benzell: There you go. Basil admits to one possible drawback to his paper. All right.
[1:12:13] Basil Halperin: That’s all you’ll get from me.
[1:12:15] Andrey Fradkin: Now I wanted to ask one more question, actually, because we’re at a natural break here and then we can go to the next topic. Which is: how have you found the profession’s reaction to these sorts of exercises? I can tell you the various opinions I’ve heard, but I’m curious, since you’re an author of these types of papers: what has been your experience? What has been the feedback you’ve gotten?
[1:12:43] Basil Halperin: I’m so curious about your experience. I still have limited experience submitting these things through the publication process, because publishing takes so long. I’ve only started submitting recently. I guess what I would say is that views on this are kind of polarized, where some people are like, “This is super interesting and I’m glad to see economists taking this seriously, as opposed to wordcel mumbo jumbo from Silicon Valley or something like that.” Which is not to say I endorse that criticism, but some people have it. And other people are like, “This is...”
[1:13:16] Seth Benzell: This is a pro-wordcel podcast. You’re safe here.
[1:13:19] Basil Halperin: Yeah. Or are you calling yourself a shape rotator? Whatever.
[1:13:24] Seth Benzell: I’ll leave that up to you two. This podcast cannot rotate very many shapes. But that’s a topic for another episode.
[1:13:32] Basil Halperin: So that’s like really all to say that like to me it’s like too soon for me to say. And that’s why I would love to know what your experience is.
[1:13:42] Seth Benzell: My experience is that I found it completely impossible to publish and ended up having to publish a book.
Andrey Fradkin: Yeah, I think Seth has been trying to publish this style of work for a very long time, and the profession is not very interested, right?
[1:13:58] Andrey Fradkin: I would say opinions are changing, but people have been battered for so long into being obsessed with very micro identification... and granted, I’m not a macroeconomist... but at least on the micro side, a lot of microeconomists just don’t consider it scientific unless there’s a tight identification argument. Or there’s an inherent skepticism of theory, which I do share to a large extent: you can kind of get anything to happen if you’re a good theorist, and then it’s pretty hard to adjudicate between theories. And to the extent that transformative AI is a mostly theoretical field at this point, it’s hard to adjudicate between transformative AI theories. So I’ve grown a lot more favorable to this type of work over time, because I think we might as well be working on the most important topics even if we can’t answer them as precisely. But I think a lot of people...
[1:15:09] Seth Benzell: Yeah, rather than just looking under the street light. Yeah.
[1:15:12] Andrey Fradkin: Exactly. Yeah. A lot of people are just not comfortable with that level of speculation. Yeah.
[1:15:18] Basil Halperin: “This is so dumb,” some might even say. No, yeah. Getting untethered from reality is like such a real risk on these big questions. In macro in general it’s so hard and you definitely see that happening. So it’s fair, it’s tough.
[1:15:48] Andrey Fradkin: I mean, I think one of the interesting things that you did, right, is you posted it on LessWrong. And in some sense that has been more influential than any published economics version of this paper could ever have been. For sure. Which says something.
[1:16:03] Basil Halperin: So to clarify for listeners, originally this was just some shitpost. This was a blog post that I put out because I was getting in fights with some friends in group chats, and I was like, “Well, the market doesn’t believe what you guys have to say.” And it wasn’t going to be a paper; it just got such positive feedback that it seemed like the demand was there for it to be developed a bit further into a paper. And in some ways I think that maybe, instead of spending thousands and thousands of hours polishing papers before putting them out, I should be putting more out as blog posts first to...
[1:16:40] Seth Benzell: Dude, honestly yes. Because if you’re asking like my honest advice, I think when it comes to this TAI stuff there’s so much taste at the evaluation level that like spending another thousand hours polishing the same idea, the marginal returns are pretty low. At least as a practical careerist observation. If you feel like you’re learning, keep going.
[1:16:59] Andrey Fradkin: Well I do think that you know, if you get it... you know, for the profession, if you get into a top five journal there are obviously enormous rewards. But I think like there’s a risk of like polishing it for like some you know specialist field journal and still spending two years on it. I mean it almost makes one think that like you know there should be a new journal of Transformative AI Economics. I’m sure Anton has suggested something like that.
[1:17:27] Seth Benzell: Yeah, okay that’s what I was... maybe can we talk for a minute about your department? Which sounds so cool. You’ve got Anton Korinek who I remember back when he was doing macroprudential policy. I was like, “This is one smart cookie. I want to see where... let this guy cook.” What’s it like working with him? What’s this TAI department you guys are setting up?
[1:17:44] Basil Halperin: Yeah. So Anton has, yeah, been interested in the economics of transformative AI for longer than almost anyone, right? Like somehow back in 2016 he was thinking about this stuff. I’m still a little confused how he got into this so early. I think he did like a master’s in computer science maybe and had this in the back of his head. But yeah, so he’s managed to get a bunch of money to start this Economics of Transformative AI Institute here at the University of Virginia. Which is very cool. So me, Anton, and Lee Lockwood, who is a public finance economist, are sort of the three folks here who have written papers at least on the topic. And yeah I don’t know, trying to get folks to think more about the issue and write some research.
[1:18:28] Seth Benzell: What is it like working with Anton? Do you just like sit down with him and he’s like, “I already have solved all of the problems” and you just like you take notes on him as he dictates to you? What is it like collaborating with a guy like that?
[1:18:39] Basil Halperin: What can I say? I mean yeah, Anton’s been thinking about these issues for a long time. I can recommend his Coursera on the topic. In fact I went through that during the depths of the pandemic where he talks about the macroeconomics of AI and some models, Shannon information theory and interesting things. Yeah.
[1:19:00] Andrey Fradkin: Shannon information theory gets you to scaling laws? How does that come in?
[1:19:04] Basil Halperin: I don’t remember why he was teaching that but I was you know interested in the topic.
[1:19:08] Seth Benzell: This is neat. I’m Anton Korinek and this is what smart people think is fun.
Basil Justifies His Blog Posts: Optimal Taxation in the Age of AI
[1:19:16] Seth Benzell: You recently got in a Twitter back and forth with other friend of the show Phil Trammell about optimal tax policy. You posted this really spicy meme of the two astronauts on the moon...
[1:20:00] Seth Benzell: ...and there’s the Puerto Rican astronaut with the gun to the American astronaut, and the American astronaut says, “So, even in the age of TAI, Pigouvian and Georgist taxation is the right way to go?” And then the Puerto Rican astronaut says, “Always has been.” Would you explain the context of you posting that meme, the Phil and Dwarkesh post, and how people should understand it?
[1:20:27] Basil Halperin: So yeah, Phil Trammell, Dwarkesh Patel... two guys that anyone interested in this stuff should be reading or following, listening to. Admittedly, Dwarkesh is a competitor of you two...
[1:20:39] Andrey Fradkin: No, no, no. We believe in coopetition.
[1:20:41] Seth Benzell: We’re cooperating... everyone should listen to both of our podcasts. We’re complements.
[1:20:46] Basil Halperin: Nice.
[1:20:47] Andrey Fradkin: We are actually complements, to be clear.
[1:20:54] Basil Halperin: So yeah, they wrote this great post, “Capital in the 21st Century,” playing on Piketty, saying Piketty was wrong about the past but will be right about the future. And they made this argument that as more of the economy gets automated, labor income will no longer be a sufficient tax base, and that power will be unequally distributed because capital income is so highly concentrated.
[1:21:24] Seth Benzell: Feels like these are three separate arguments already.
[1:21:27] Basil Halperin: There’s a couple different arguments in this piece, yes. And yeah, calling for capital taxation in the future, both for redistribution purposes of financial resources and to prevent sort of power concentration, is how I interpreted the piece.
[1:21:44] Seth Benzell: But I was taught in public finance class that capital taxation is bad.
[1:21:48] Basil Halperin: Yeah, I think there’s a lot of logic to that argument. So yeah, I wrote this thread just making a couple points. One of which is based on—we were just talking about my colleagues Anton and Lee, Anton Korinek and Lee Lockwood—so they had a recent paper summarizing sort of how should we think about public finance in a transformative AI world. So like take an AK economy, so an economy where all production is done by capital, no labor involved. What is optimal taxation in that world? And they point out or they show that consumption taxation is still optimal rather than introducing capital taxes. As long as you can raise enough revenue from that consumption taxation to fund whatever you need to fund. So that was like a first point I was making, that consumption taxation is going to dominate capital taxation.
[1:22:42] Seth Benzell: Let’s pause there for a second, because I feel like all of my normie friends don’t understand this point. In fact, my advisor once (he tells me this story, and I assume it’s true) had a half-hour meeting with Bernie Sanders where he was trying to explain why consumption taxation is better for poor people than capital taxation. And Bernie Sanders’ brain was like, “But, but poor people no have capital.” Explain to a normie: why is consumption taxation considered preferable to capital taxation? Because only rich people have capital, right?
[1:23:14] Basil Halperin: So let’s see if I can do this, with the caveat that I’m not a public finance economist, I just play one on Twitter. The intuition I always come back to is that capital taxation is equivalent to an explosive consumption tax: a consumption tax whose rate grows the further in the future you consume. What do I mean by that? If I save... so, you know, the University of Virginia pays me one dollar. I can either use that to go buy a candy bar today, or I can save it until tomorrow.
[1:23:41] Seth Benzell: But you don’t save it because of TAI.
[1:23:43] Basil Halperin: But I won’t save it because of TAI, indeed. I’ve got to go party. Consumption taxation would be taxing that purchase of the candy bar; capital taxation would be taxing the savings. And if I save the dollar until tomorrow and try to buy a candy bar tomorrow, the capital tax would just be taxing consumption tomorrow differently than consumption today. And if we’re trying to equalize consumption across people, does it make sense to tax people who consume in the future more heavily than people who consume today? What’s the difference there? That’s one intuition pump. Honestly, again, I’m not a public finance economist; I’m not sure on the spot I’m going to give the clearest exposition.
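Basil’s intuition pump can be put in two-period arithmetic. The rates below are made up purely for illustration: the point is that a capital-income tax shows up as an extra wedge on tomorrow’s consumption relative to today’s, and that wedge compounds the longer you wait:

```python
# Two-period illustration: a capital-income tax acts as an implicit tax
# on *future* consumption relative to today's consumption.
# Earn $1 today; interest rate r; capital-income tax rate tau.

r, tau = 0.05, 0.30  # hypothetical interest rate and capital-tax rate

# Consume today: $1 buys one unit of the candy bar (price 1).
consume_today = 1.0

# Save and consume tomorrow with NO capital tax:
consume_tomorrow_untaxed = 1.0 * (1 + r)

# Save and consume tomorrow WITH the capital-income tax:
consume_tomorrow_taxed = 1.0 * (1 + r * (1 - tau))

# Implicit extra tax on tomorrow's consumption, relative to today's:
implicit_wedge = 1 - consume_tomorrow_taxed / consume_tomorrow_untaxed
print(round(implicit_wedge, 4))  # ~0.0143: later consumption taxed more
```

Compounded over many periods, the wedge on ever-later consumption keeps growing, which is the sense in which capital taxation is an “explosive” consumption tax.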
[1:24:38] Seth Benzell: No, I think that was pretty good. I think that was pretty clear. Okay, but then the memes about Pigouvian and Georgist taxation.
[1:24:45] Basil Halperin: Right, right. So, first point: consumption taxation dominates capital taxation anyway. A bigger-picture point that isn’t AI-specific but does apply to the AI world is that we have these other taxes that are not only less distortionary than consumption taxation, they might even be efficiency-enhancing. Those are taxes on externalities, Pigouvian taxes: should we tax carbon? Should we tax pollution? And Georgist-style taxes, where you tax owners of unimproved land or unimproved natural resources. People who just by luck and happenstance find out they have an oil well under their house. There’s no economic efficiency reason, and arguably no moral reason, for those people to earn rents from the fact that, all of a sudden, whoa, there’s a gold mine under my house.
So today, we should be taxing externalities to fix those externalities. Today, we should be redistributing the pure rents on unimproved land and unimproved fixed resources. And that will remain true in an AI-driven economy. Those natural resources will become even more important in an AI-driven economy where there’s no scarce labor and no scarce capital; the only thing that is scarce is natural resources. All that said, I’ve mentioned this caveat: are those taxes enough to fund the necessary redistribution or the necessary government spending?
[1:26:28] Seth Benzell: Land is the only scarce factor. You must imagine its price will be quite high.
[1:26:32] Basil Halperin: Yeah, in the limit, you would really think so. Maybe on the transition path... so this is a very good point that Phil made in the Twitter discussion of like, how quickly will the natural resource share rise? It’s not clear. I would be so interested if someone could answer that question in a convincing way or something.
[1:26:47] Andrey Fradkin: I don’t know. I think robots will be able to mine on the moon pretty efficiently, personally.
[1:26:55] Basil Halperin: And so natural resources won’t be scarce, is what you’re saying?
[1:26:58] Andrey Fradkin: Well, there’s a lot of natural resources on the moon.
[1:27:01] Basil Halperin: Are there? On the moon?
[1:27:04] Andrey Fradkin: I think so, yeah.
[1:27:06] Seth Benzell: We got red rocks. You can make robots out of red rocks, right?
[1:27:10] Andrey Fradkin: I mean you can also do all sorts of things...
[1:27:12] Seth Benzell: Silicon! It’s silicon, dude!
[1:27:14] Andrey Fradkin: You can also, you know, like have a ton of solar panels on the moon and then use energy to run fusion and fission reactions to get any resource you want.
[1:27:28] Seth Benzell: It’s different timelines. Different horizons.
[1:27:33] Basil Halperin: Different time horizons actually is I think a big part of the reason for disagreements on this. But um, like the rents in the economy have to go somewhere, right? If labor’s not earning it and capital’s not earning it.
[1:27:48] Seth Benzell: In a pure AK economy, there are no rents. It’s just A and K, dude.
[1:27:52] Basil Halperin: Right, right. The returns have to go somewhere. The returns above replacement, maybe, is one way of putting it. So anyway, that’s the source of the meme. Why hasn’t anyone estimated whether we could just fund the US government by taxing externalities and taxing land? Someone should have done that, especially these obsessed Georgists...
[1:28:13] Andrey Fradkin: No, no, I think... well, I think the externalities... I mean our friends in environmental economics have definitely, you know... I think Larry Goulder has a bunch of work on estimating Pigouvian taxes in general equilibrium.
[1:28:28] Basil Halperin: Read it.
[1:28:29] Andrey Fradkin: I don’t think... I don’t think it gets you there. But Georgist taxes... I can imagine it can get you pretty far.
[1:28:39] Andrey Fradkin: Well, cool. Thanks so much for joining us. It’s been a fascinating discussion. Any final notes for our listeners? Anywhere they should check out, in addition to your website?
[1:28:53] Basil Halperin: Yeah, feel free to read my papers. That’s a great decision. And of course, I’m on Twitter, and Seth’s as well.
[1:28:59] Seth Benzell: [Laughs] Great.
[1:29:01] Andrey Fradkin: All right. Well, thanks for... thanks for coming on and keep your posteriors justified.
[1:29:07] Basil Halperin: Thanks, Andrey.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit empiricrafting.substack.com