
Will Super-Intelligence's Opportunity Costs Save Human Labor?

October 21
51 mins


Episode Description

In this episode, Seth Benzell and Andrey Fradkin read “We Won’t Be Missed: Work and Growth in the AGI World” by Pascual Restrepo (Yale) to understand how AGI will change work in the long run. A common metaphor for the post-AGI economy casts AGIs as men and humans as ants. Will the AGI want to keep the humans around? Some argue that it would: there’s the possibility of useful exchange with the ants, even if they are small and weak, because an AGI will, definitionally, have opportunity costs. You might view Pascual’s paper as a formalization of this line of reasoning: what would be humanity’s asymptotic marginal product in a world of continually improving super-AIs? Does the God Machine have an opportunity cost?

Andrey, our man on the scene, attended the NBER Economics of Transformative AI conference to learn more from Pascual Restrepo, Seth’s former PhD committee member. We compare Restrepo’s stripped-down growth logic to other macro takes, poke at the tension between finite-time and asymptotic reasoning, and even detour into a “sheep theory” of monetary policy. If compute accumulation drives growth, do humans retain any essential production role, or only inessential, “cherry on top” accessory ones?

Relevant Links

* We Won’t Be Missed: Work and Growth in the AGI World — Pascual Restrepo (NBER TAI conference) and discussant commentary

* NBER Workshop Video: “We Won’t Be Missed” (Sept 19 2025)

* Marc Andreessen, Why Software Is Eating the World (WSJ 2011)

* Shapiro & Varian, Information Rules: A Strategic Guide to the Network Economy (HBR Press)

* Ecstasy: Understanding the Psychology of Joy — Find the sheep theory of the price level here: Seth’s Review

Priors and Posteriors

Claim 1 — After AGI, the labor share goes to zero (asymptotically)

* Seth’s prior: >90% chance of a large decline, <10% chance of literally hitting ~0% within 100 years.

* Seth’s posterior: Unchanged. Big decline likely; asymptotic zero still implausible in finite time.

* Andrey’s prior: Skeptical that asymptotic results tell us much about a 100-year horizon.

* Andrey’s posterior: Unchanged. Finite-time dynamics dominate.

* Summary: Compute automates bottlenecks, but socially or physically constrained “accessory” human work probably keeps labor share above zero for centuries.

Claim 2 — Real wages 100 years after AGI will be higher than today

* Seth’s prior: 70% chance real wages rise within a century of AGI.

* Seth’s posterior: 71% (a tiny uptick).

* Andrey’s prior: Agnostic; depends on transition path.

* Andrey’s posterior: Still agnostic.

* Summary: If compute accumulation drives growth and humans still trade on preference-based or ritual tasks, real wages could rise even as labor’s income share collapses.

Keep your Apollonian separate from your Dionysian—and your accessory work bottlenecked.

Timestamps:

[00:01:47] NBER Economics of Transformative AI Conference

[00:04:21] Pascual Restrepo’s paper on automation and AGI

[00:05:28] Will labor share go to zero after AGI?

[00:43:52] Conclusions and updating posteriors

[00:48:24] Second claim: Will wages go down after AGI?

[00:50:00] The sheep theory of monetary policy

Transcript

[00:00:00] Seth: Welcome everyone to the Justified Posteriors Podcast, where we read technology and economics papers and get persuaded by them so you don’t have to.

Welcome to the Justified Posteriors Podcast, the podcast that updates its priors about the economics of AI and technology. I’m Seth Benzell, performing bottleneck tasks every day in the sense that I’m holding a bottle and a baby by the neck, down at Chapman University in sunny Southern California.

[00:00:40] Andrey: I’m Andrey Fradkin, practicing my accessory tasks even before the AGI comes, coming to you from San Francisco, California.

So Seth, great to…

[00:00:53] Seth: Yeah, please.

[00:00:54] Andrey: Well, what are you, what have you been thinking about recently? What have you been [00:01:00] contemplating?

[00:01:01] Seth: Well, you know, having a baby gets you to think a lot about, what’s really important in life and what kind of future are we leaving to him, you know, if we might imagine a hundred years from now, what is the economy that he’s gonna have when he’s retired?

Who even knows what such a future would look like? A lot of economists are asking this question, and there was this really kind of cool conference that put together some of the favorite friends of the show: an NBER Economics of Transformative AI Conference that forced participants to accept the premise that AGI is invented.

Okay, go do economics of that. And Andrey, I hear that somehow you were able to get the inside scoop.

[00:01:47] Andrey: Yes. Um, it was a pleasure to contribute a paper with some co-authors to the conference and to attend. It was really fun to [00:02:00] just hear how people are, um, thinking about these things, people who oftentimes I associate with being very kind of serious, empirical, rigorous people kind of thinking pie in the sky thoughts about transformative AI.

So, yeah, it was a lot of fun. Um, and there were a lot of interesting papers.

[00:02:22] Seth: Go ahead. Wait. No, I’m not gonna let you off the hook, Andrey. Because I have to say, just before we started the show, you did not present all of the conversation at the seminars as a hundred percent fun and enlightening, but rather you found some of the debate a little bit frustrating.

Why? Why is that?

[00:02:39] Andrey: Well, I mean, I, you know, dear listeners, I hope we don’t fall guilty of this, but I do find a lot of AI conversation to be a little cliche and hackneyed at this point. Right. It’s kind of surprising how little [00:03:00] new stuff can be said. If you’ve read some science fiction books, you kind of know the potential outcomes.

Um, and so, you know, it’s a question of what we as a community of economists can offer that’s useful or new. And I do think we can, it’s just, it’s very easy to fall into these cliches or well trodden paths.

[00:03:20] Seth: What? What’s the meaning of life? Andrey? Will life have meaning after the robot takes my job? Will my AI girlfriend really fulfill me?

Why do we think economists would be good at answering those questions?

[00:03:34] Andrey: Yeah, it’s a great question, Seth. I’m not sure. Um,

[00:03:39] Seth: I think it’s because they’re the last respected kind of technocrat. Obviously all technocrats are hated, but if anybody’s allowed to have an opinion about whether your anime cat girl waifu AI companion is truly fulfilling.

We’re the only, we’re the only source of remaining authority.

[00:03:57] Andrey: Well, you know,

[00:03:57] Seth: unfortunately,

[00:03:58] Andrey: I think it’s a [00:04:00] common thing to speculate as to which profession will be automated last, and certainly Marc Andreessen believes that it is venture capitalist. So

[00:04:11] Seth: Fair enough. Ah, narcissism. I’ll leave…

[00:04:13] Andrey: …it as an exercise to the listener what economists think.

[00:04:21] Seth: So let’s talk about, so we’re talking about whether humans will be essential in the long run, because the particular paper that caught my eye when I was looking at the list of seminar topics was a paper by a friend of the show. I hope he considers us a friend of the show, because I love this guy.

Pascual Restrepo, a professor of economics and AI at Yale University. Um, had the honor of having this guy on my dissertation committee was definitely a role model when I was a young gun, trying to think about macro of AI before everyone on earth was thinking about macro of AI. [00:05:00] Um. And so it’s a real honor for the show to take on one of his papers and he’s got something that’s trying to respond to.

Okay, transformative AI shows up: what are the long-term dynamics of that? Which is a departure from where he usually wants to be. He wants to live in the near future, in “we automate another 10% of tasks” land. Right. So I was excited to take this on. Um, Andrey, do you wanna maybe introduce some of the questions it asks us to consider?

[00:05:28] Andrey: Yeah. So, Pascual presents a very stylized model of the macro economy and we picked two claims from the paper to think about in terms of our priors. Um, the first one of these is, um, after we get AGI in the limit, the labor share will go to zero. That is the first claim of this paper. Um, what do you think about that, Seth?

[00:05:59] Seth: Great question. [00:06:00] Um, so to remind listeners: the labor share is, if you imagine all of the payments in the economy, some are going to workers and some are going to people who own the machines or own the AI, right? So today about two thirds, or really about 60%, of the money is paid to workers.

About 40% is paid out to machines, to profits, and to people who own stuff. It is a claim of this paper, and a theme of a lot of the automation literature, that as you get more and more automation, you’d expect the share of money being paid to workers to go down, right? Because more and more of the economy is just automation, unconstrained by labor.
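For concreteness, here is a minimal formalization of the quantity under discussion, in standard macro notation rather than anything taken from the paper:

$$\text{labor share} \;=\; \frac{wL}{Y} \;=\; \frac{wL}{wL + rK},$$

where $w$ is the wage, $L$ is hours worked, $r$ is the return to capital (machines, AI, compute), and $K$ is the capital stock. Seth’s numbers put $wL/Y \approx 0.6$ today; the claim being debated is whether automation pushes $wL/Y \to 0$.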

Um, let me tell you how I think about this question, Andrey. First of all, you know, we’re not gonna talk about out to infinity. I know these are asymptotic papers, but let’s try to stay a little bit closer. Um, so I’ll, I’ll mostly be thinking about like a hundred years after [00:07:00] AGI, right? So we have AGI, and now we’re, we’ve played it out in some sense.

We’ve had the next industrial revolution that happens from AGI, right? Assuming we don’t have an apocalypse, so this is, let’s set aside, conditional on, you know, we don’t destroy ourselves, which I don’t think there’s a huge chance of that, but that’s another question. I would say there’s a greater than 90% chance of very large decreases in labor share, you know, down from 60% today to 5%, 10%, 20%.

I really do see that. But I think there’s like a less than 10% chance that within a hundred years of AGI, um, we’ll have, you know, literally 0% labor share or whatever, like less than 1% labor share. Why do I say that? This is something that’s gonna come up. I’m gonna start by just kind of questioning the premise of whether AGI really means…

…that all services can be provided by the AI, right? I know, I don’t know if this [00:08:00] counts as being allowed. I’m gonna give you a fun example, Andrey. Have you ever heard of a pidyon haben?

[00:08:05] Andrey: No.

[00:08:06] Seth: You’ve never heard of a pidyon haben? Well, this is a tradition in Deuteronomy. It’s one of the few halakhic laws that actually make intuitive sense to me because it’s revenue-generating. When you have a firstborn son who is not a kohen or a Levi, you “buy” the baby out of service to the Temple. The cost is exactly five silver pieces (shekels) of a specified weight—they’re very specific about the weight; it’s not just any five silver coins. And here’s the thing: it has to be paid to a kohen (a member of the priestly family of Jews). Minor correction for Justified Posteriors fans: the pidyon haben is paid to a kohen, not a Levi. I couldn’t let that error stand. Thank you.

So that economic interaction is value that, by definition, can’t be captured by the AI. In some sense that’s a greater-than-zero slice of the economy, asymptotically—well, I guess it depends on whether silver is rare asymptotically. But that’s the kind of example I have in mind, and it’s why I don’t think the labor share gets literally to zero. Andrey, gimme your thoughts.

[00:09:31] Andrey: Yeah, I mean, look, zero is an asymptotic result, so I do think, let’s say less than 1% in a hundred years. With your example, I think it’s very easy to imagine a virtual kohen to collect said revenue. So I actually—no, let’s—

[00:09:53] Seth: Think about the political economy of it for a second. Who gets to decide whether it counts if you send it to the robot?

[00:10:00] Well, the rabbi. The human rabbi decides.

[00:10:02] Andrey: The human rabbi might be a capital owner, but the—

[00:10:05] Seth: The human rabbi may—that’s the danger.

[00:10:09] Andrey: Yeah. Rabbis—I mean, I can think of things. Your point is that some occupations may require a human involved, right.

[00:10:25] Seth: And they may be some sort of fraction of the economy asymptotically. They’re not linear additive, because that’s a distinction that’s going to become important.

[00:10:33] Andrey: Yeah, later. So, I think about part of this as being about population growth, and that’s a good point. Because if one of the things that AI does is increase the number of humans, and there’s some sort of human scaling law, if you will—that AGI can “make” humans very cheaply and quickly, I assume—then I think that’s one thing to think about. And then I think the other possibility, and this is not talked about in this paper, is: there are certain things where you can throw as much compute as possible and you still get returns—like exploring outer space—but there might be a difference in how much humans value that versus how much AIs value that.

[00:11:40] Seth: That’s a super good question that is not raised. I think I was trying to read this paper as “we only care about human utility,” but that’s obviously not unimportant here.

[00:11:50] Andrey: Yeah. Nonetheless, a hundred years is getting to the point where a lot can happen, but I’d still—as a betting man—say it’s pretty unlikely in a hundred years that waged labor will be less than 1%.

[00:12:08] Seth: Yeah, we’ll probably destroy enough capital along the way that we will get back to that asymptote.

[00:12:12] Andrey: Yeah. So that’s kind of my part. The second is where I think it’s a contentious claim: wages won’t go down in the long run because people can always break away. And that’s the argument in the paper. So let’s just focus on the first part of that, which is “wages can’t go down in the long run with AGI.” And what we mean here is not wages as a percentage of earnings, but real wages.

[00:12:47] Seth: Precisely.

[00:12:47] Andrey: Yeah.

[00:12:48] Seth: This seems to me like a naïve simplification of the model, which is what gives us that. It seems to me, if you’re going to be so expansive as to take the stance that even my kohen example won’t hold up in the long run and it really is going to do every single job, you have to imagine some sort of crowding out of resources that are necessary for human labor to get anything done effectively. Right? This is a model that very naïvely says that there’s always the forest where there’s “enough and as good,” the Lockean cliché, right? Anyone can go to the forest and take some wood and make a knife—therefore property rights, whatever. That cliché is in the back here. But of course, if you had a super-duper powerful AI, they might need that wood first. They’re going to use up all the resources. There’ll be no starting tinder for the humans to get started with. And then that will effectively drive down wages. So I think, to the extent that we get an AGI that is driving down labor share, what has to save the day is that there is some essential thing—call it a bottleneck—that only humans can do. What is the percentage chance that we get saved by one of those to keep wages up? Do I think it’s closer to—

[00:14:20] Andrey: Now are you talking about asymptotia or a hundred years?

[00:14:23] Seth: I’m talking about a hundred years.

[00:14:25] Andrey: See, this is where I’m a little confused, Seth. In asymptotia I kind of agree with you, but in the hundred-year horizon—especially since you think that wages are going to still be around—I would think that the cumulative wage would be higher than we have now.

[00:14:45] Seth: I’m saying 30% chance of this. I’m trying to make those two predictions the same format.

[00:14:50] Andrey: Which one is this, just to be clear?

[00:14:52] Seth: Great. I think there is a 30% chance that wages will go down. So I think there’s a 70% chance that wages will go up.

[00:15:03] Andrey: On average, as a result of AGI. So real wages per capita globally—just to be accountable—70%.

[00:15:10] Seth: This is my hundred-year prediction. A hundred years from now—dig me up—a hundred years after AGI, the real average wage will be higher than today. I’m good with that.

[00:15:25] Andrey: I would say it’s more like 80%.

[00:15:28] Seth: 80%. Okay. So you’re more—well, maybe we can talk at the end about why we start and end up at slightly different places. You ready to get into the model?

[00:15:47] Seth: We heard our priors. Now we confront the evidence. Do, do, do, do. Okay. So Pascual’s got a pretty straightforward model for us. The two premises he wants to start with are: first, the idea that we’re going to invent “robots,” by which he means “compute”—the accumulation of more AI compute over time. So literally chips and energy, I would say. But then he clarifies that this also includes any sort of physical instantiation of capital needed to move things in the physical world. So what he calls compute, I would think is more usefully thought of as robots. It’s going to do anything you need it to. The idea is that asymptotically we are going to invent robots that can do anything—any work that can be valuable in the economy. But he’s going to allow for the possibility that there’s some sort of comparative-advantage trade relationship with humans. We’ll come back to that. And then the second asymptotic premise here is the idea that the stock of robots and compute is going to grow indefinitely. So we’re thinking about the indefinite future: we have more robots than you possibly know what to do with. If you want your sci-fi comparison, this could be Isaac Asimov’s The Naked Sun, where there are 50 people on a planet, each of whom owns a continent-sized estate and has vast swaths of robot servants. Maybe that’s what you should be thinking of as this asymptotic economy. From that, and just the assumption that economic output is the sum (in a complicated way) of all of these different jobs that could be done, he then distinguishes between two kinds of work in the economy: bottleneck work and accessory work, which I think is the most interesting novel distinction introduced here. Before I get into that, anything I missed from the model you want to throw in there, Andrey?

[00:18:09] Andrey: Did you mention the constant returns to scale?

[00:18:14] Seth: Go ahead and say it. Yeah, also there are constant returns to scale.

[00:18:16] Andrey: There are constant returns to scale. There is no real capital to speak of other than compute.

[00:18:24] Seth: Ownership—yeah, this is just the production side model. There’s no “where do these dynamics come from?” Maybe there’s a social planner deciding some of this, but 90% of the paper is not going to take a stance on the consumer/household side of the economy.

[00:18:53] Andrey: Yeah. And the other thing is that he uses the term “bottleneck,” but that is a very confusing word, so it’s best not to—okay, let’s get it right now—it’s best not to use it, actually. One of the key comments at the conference was to rename that word.

[00:19:09] Seth: Let’s talk, because I like it. I think you guys are being mean to Pascual for no reason. Pascual, if you’re ever in trouble, I’ll defend you. Let me tell you why. There is a concept I use all the time for thinking about long-run macro dynamics: when we combine automated and non-automated things, are they gross complements or gross substitutes? In a CES production function, my understanding is that the concept of bottleneck work would correspond to anything that is Cobb-Douglas or more complementary in the asymptote, and anything that’s accessory work would be more grossly substitutable than Cobb-Douglas. That’s how it would work for CES production functions.
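A minimal sketch of the CES algebra Seth is appealing to here, in our notation (the paper’s task-level framework is richer): let output combine human work $H$ and compute $R$ as

$$Y = \left[\alpha H^{\rho} + (1-\alpha) R^{\rho}\right]^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho}.$$

Holding $H$ fixed and letting $R \to \infty$:

* Gross complements ($\rho < 0$, $\sigma < 1$): $Y \to \alpha^{1/\rho} H$, so output is capped by the human input (the “bottleneck” case), and labor’s share $\alpha H^{\rho} / [\alpha H^{\rho} + (1-\alpha) R^{\rho}] \to 1$.

* Gross substitutes ($0 < \rho \le 1$, $\sigma > 1$): $Y$ grows without bound, the wage $w = \alpha H^{\rho-1} Y^{1-\rho}$ never falls (and keeps rising for $\rho < 1$), but labor’s share $\to 0$ (the “accessory” case).

Cobb-Douglas ($\rho \to 0$) is the knife-edge in between, with a constant labor share of $\alpha$.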

[00:20:14] Andrey: I’ll take your word for it, Seth.

[00:20:18] Seth: Well, let me give you an intuition. In one extreme, we have perfect complements: if humans are peanut butter and AI is jelly, clearly the humans are a bottleneck there. Then we have the perfect substitutes extreme: if humans are margarine and AI is butter, great—there’s more spread out there; they’re not hurting each other. Those are the two extremes. There’s a continuum between them. In a CES production function it’s clear. The underlying concept is more general: in the limit, is this a bottleneck? In the limit, is this a substitute? Maybe you don’t love this language, but there should be words for “in the limit is this a gross substitute?” vs. “in the limit is this a gross complement?” I think these are the words I’ve been looking for. Why didn’t they like it?

[00:21:23] Andrey: I think because Pascual’s example was that the AI will be out there exploring space, and people all conference long, when they use the word “bottleneck,” are thinking about current production processes where there might be bottlenecks because it’s a part AI can’t do end-to-end. So when you’re talking about bottlenecks, it’s really like, “here’s this little thing that we need a human to do in this process—like give the AI the bank account number,” or whatever. That’s a very different type of task.

[00:21:59] Seth: I’m coming at it from the consumption side and they’re coming at it from the production side. I think I’m much more on Pascual’s side. I think he’s being held back by the smooth brains at the conference.

[00:22:10] Andrey: I just don’t think any normal human being, when they think about the word “bottleneck” and tasks, is thinking about AI exploring space.

[00:22:28] Seth: His example is terrible. But he’s a beloved weirdo; that’s why he’s a friend of the show.

[00:22:35] Andrey: I’m not attacking him. I’m saying this word is not the right one. In his model, if we do have near-infinite compute ability, we will do cool stuff—like we recreate our own version of the Matrix with cell-level simulation of the entire world. Is that a bottleneck? It’s not a bottleneck. We can do all sorts of very large-scale things—at least AI can do it.

[00:23:15] Seth: Very interesting. I can see why you don’t like the word. There needs to be a word for the concepts I described. So anyway, I like these two concepts: in the limit, do you need humans to get more output, or in the limit do you not? Those are the concepts. Are you ready to proceed to his results?

[00:23:39] Andrey: I actually wanted to question you on that last one.

[00:23:41] Seth: Please.

[00:23:46] Andrey: “In the limit, do you need humans or not?” is not actually the definition in this paper.

[00:23:49] Seth: Let me think for a second.

[00:23:49] Andrey: The task, not the human.

[00:23:49] Seth: No, the human was the example. I’m sorry if that was confusing. The question is: in the limit, do you need the task or not? That is the question in the paper.

[00:24:00] Andrey: I view it as a satiation sort of thing. There are only so many live music performances the world needs, if that’s what we think humans are going to be doing. Other things—the universe is pretty large, maybe not infinite—so there’s lots to explore, and that doesn’t get satiated.

[00:24:24] Seth: I don’t see how satiation comes in.

[00:24:26] Andrey: Because one of the conditions is about the derivative of the production function.

[00:24:35] Seth: Right. So if you became satiated on an input, of course it couldn’t be a bottleneck task. Of course. Satiation would be one mechanism for not being a bottleneck. Good. Last comments before we get to the results?

[00:24:56] Andrey: No, go for it.

[00:25:00] Seth: Prop 1: All bottlenecks are eventually automated while some accessory work may be left to labor. Okay, what’s the intuition here?

[00:25:06] Andrey: The intuition is opportunity cost. If compute is being used for this task, that means it’s not being used for some other task that maybe has a higher return or humans can’t do. As a result, humans are going to be left doing some kind of low-value work because the compute is better used elsewhere.
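A minimal Ricardian sketch of that opportunity-cost logic, in our notation (the paper’s statement is more general): suppose task $i$ yields $A_i^R$ output per unit of compute or $A_i^H$ per unit of human labor, compute rents for $q$, and labor earns $w$. Doing task $i$ with compute is cheaper than doing it with humans exactly when

$$\frac{q}{A_i^R} < \frac{w}{A_i^H} \quad\Longleftrightarrow\quad \frac{A_i^R}{A_i^H} > \frac{q}{w}.$$

Even if $A_i^R > A_i^H$ at every task, so the AI has absolute advantage everywhere, a high shadow rental $q$ (because the frontier keeps finding better uses for compute, like making more compute) leaves humans the tasks where the AI’s relative edge $A_i^R / A_i^H$ is smallest. Compute’s opportunity cost, not human excellence, is what keeps humans employed.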

[00:25:39] Seth: Right, but now it’s a claim about what that low-value work will be. It’s got to be the thing the AI won’t always need to make more of. If there’s anything that’s going to hold back the AI, it’s going to do more of it, because this is super-AI.

[00:25:58] Andrey: Like creating more compute, for example.

[00:26:01] Seth: Yeah, they’re not going to let the humans be in charge of that. Don’t worry. So what’s left is this concept in the paper—we can discuss how realistic it is—where humans can go to the woods and do their constant-returns-to-scale task with each other, and maybe even have a parallel economy, or maybe it’s just the cherry-on-top economy.

[00:26:26] Andrey: Yeah, so now we’re getting to the argument for why real wages won’t go down though. That’s what you’re saying.

[00:26:37] Seth: “While some accessory work may be left to labor”—I was explaining the second half of that sentence.

[00:26:42] Andrey: I think you’re mixing two concepts. Some accessory work would be left to labor is one claim. A different claim is that wages can’t go down because essentially, in his model, all the humans can say “screw this AI, we’re going to recreate our own economy,” and the AIs won’t care. So they’ll be able to do just as well as in a world without AGI. That, to me, is a ridiculous argument, but it’s also different from the argument for the fact that there are accessory jobs.

[00:27:26] Seth: Why is it interesting that there are accessory jobs? In my interpretation, there is an outside option providing a floor on wages that happens to be an accessory job.

[00:27:43] Andrey: I don’t agree. The accessory job is not providing the minimum wage. Without accessory jobs there are no wages. I don’t understand how it could be providing a minimum wage when without accessory jobs humans don’t do anything.

[00:28:16] Seth: No, that’s not this model. There is the special case where there are no accessory jobs. What they do then is a really lousy complement—they do the most human-comparative-advantage, human-complementary job.

[00:28:29] Andrey: I don’t think such a job would exist. I’d be shocked if, for anything that’s truly scalable, humans in the loop could even be positive.

[00:28:44] Seth: Let me think about that for a second. So you don’t like that special case where all tasks are ultimately bottlenecks for each other?

[00:28:52] Andrey: Yeah. What is a human going to do at an automated GPU factory, exactly? They’re going to need to be fed. I don’t see how humans could be net positive in those types of production processes.

[00:29:17] Seth: What I want to point out one more time is you’re coming at “bottleneck” from the production side, and I’m coming at it from the consumption side. One more note to Pascual to maybe think about in the next draft.

[00:29:30] Andrey: Want to skip to Proposition 3?

[00:29:33] Seth: No, we haven’t finished talking about these propositions. Just to be clear, accessory jobs are the reason humans have substantial wages at all.

[00:29:56] Andrey: That’s a different claim.

[00:30:00] Seth: The two claims have to be compatible.

[00:30:09] Andrey: Sure. I thought we’d talk about the plausibility of the model’s implications for those claims separately.

[00:30:21] Seth: I find myself unconstrained by this ordering of concepts, but happy to comply.

[00:30:28] Andrey: Go ahead. What were you going to say?

[00:30:30] Seth: In my mental model of this model: there is a special case where there is no accessory work—everything is ultimately a bottleneck for everything else. That is a special case. And then he also says that in all versions of this model, as I understand it, wages can’t go down. Those cannot both be true and it also be the case that the only thing that keeps wages from going down is the existence of accessory jobs.

[00:31:09] Andrey: I think we’re also mixing “what is in his model” versus “what are the economic forces,” which is always hard because it’s so stylized.

[00:31:26] Seth: Fair.

[00:31:26] Andrey: The interesting economic content of the model is that there are accessory jobs allowing humans to persist in having some positive labor contributions that are not taken up by the machines. Why aren’t the machines doing it? Because the machines have better things to do.

[00:31:54] Seth: One way to think about it: if you have automation and there’s perfect substitution, it kind of doesn’t affect your life. Suppose we sell oil and I’m a whaler who collects whale oil. My friend invents oil wells and gets a hundred times the amount of oil I have. In an economy where there’s only oil: that guy got a lot of oil—good for him—I still have my whale oil. In an economy where oil’s a complement to everything else, I’m ruined because now the price has collapsed.

[00:32:39] Andrey: Now let’s go to the claim that in such a world, wages can’t go down.

[00:32:51] Seth: In a world where there’s only one thing—or rather, where the things are substitutes—wages can’t go down. That’s the connection between an accessory task and a gross substitute. If your oil is good and my oil is good, and we can both enjoy each other’s corn—if you get more corn, that doesn’t affect my corn. So my wage can’t go down. I can talk about why that would break, but that’s why it happens in this model.
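A worked version of the whale-oil example, with made-up numbers for illustration: under perfect substitutes, output is just total oil, $Y = O_{\text{whale}} + O_{\text{well}}$, so the marginal value of a barrel is 1 no matter how much the wells pump; my ten barrels of whale oil buy what they always did. Under a complement, say Cobb-Douglas $Y = O^{\beta} Z^{1-\beta}$ with a fixed other input $Z$, the oil price is $p_O = \beta Y / O$, so total payments to oil are a constant share $\beta$ of output: scale supply up by $100\times$ and the per-barrel price falls by a factor of $100^{1-\beta}$, leaving my fixed whale-oil endowment close to worthless. Same discovery, opposite distributional result; that is the substitutes-versus-complements distinction the accessory/bottleneck language is trying to capture.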

[00:33:27] Andrey: Any model here where there’s perfect alignment of what humans want and what the machines want—you’re producing more, and it’s going to go to humans. It’s almost a reductio that, in such a model, real wages have to go up.

[00:33:56] Seth: This is almost like a Pareto model: good things have to happen in a Pareto model.

[00:34:02] Andrey: If there’s a social planner, the planner is maximizing utility, and the utility is human utility, not machine utility.

[00:34:14] Seth: It’s like “the guy who got free stuff has more income” theorem.

[00:34:19] Andrey: Right. So I think it’s strange to think about this, because no one is seriously worried about the situation where we’re infinitely wealthy and have perfect control of our AI.

[00:34:44] Seth: Okay, so what’s the work the model is doing? It’s trying to tell us that that’s the case where there isn’t good accessory work, maybe. The sad case is where there’s a negative externality of whatever the AI is accumulating on our wages. How could that work? What’s not modeled here? There’s no sense in which AI can crowd out investment in capital that complements humans. What this model excludes is the idea that when I build a robot, I might not be building a computer for a human to use. That’s why wages go down: no one invests in making humans productive because it’s better to invest in making AI productive.

[00:35:25] Andrey: I’m not even sure that’s enough. If ultimately some part of the AI production chain is kicking back things that humans like, I’d be more worried that if the AIs have transcended humanity and all resources must be used to explore space, we might find ourselves without a planet Earth because all the resources will be extracted.

[00:35:57] Seth: Pascual did not do this model any favors in his presentation, I could tell.

[00:36:06] Seth: I think this is happening today. Will you guys listen to our “Canaries in the Coal Mine” paper? You could argue that today AI is leading to reduced investment into some kinds of young people’s human capital. That plus humans’ human capital eventually being replaceable is the kind of thing that would drive down wages in the absence of an accessory job to fall into. We can talk about what that would be—like providing mental-health services to each other in a linear way.

[00:36:56] Andrey: There’s still a distinction between a world where human labor is close to worthless and a world where humans are materially worse off. If the AI is perfectly aligned, humans don’t do any work, but they get all the goods; they can own it; they get capital income.

[00:37:19] Seth: 0% labor income equals 100% capital income.

[00:37:23] Andrey: Yeah. I feel like it’s really important to have that as a force in the model.

[00:37:31] Seth: So what’s the fantasy—the utopian fantasy? This is Bostrom; this is The Culture. You are doted on by robots that do every possible thing a human could do—except five silver coins for pidyon haben. That economy is what we’re describing, where I could have more robots, but maybe I’m saturated with robots. Maybe I have linear returns to robots; I’m just building exponentially more robots.

[00:38:00] Andrey: I think about accessory work as more addressing the meaning aspect. There’s a sweet spot, if there is accessory work, where humans are the doers of it and they find meaning.

[00:38:16] Seth: If they’re the doers of it—well, isn’t that a complement, then?

[00:38:20] Andrey: The examples he provides are musicians; I imagine that could provide a lot.

[00:38:27] Seth: Musicians make sense, because there could be some linearity to it.

[00:38:32] Andrey: We’re all going to be creating art for each other, and we’re going to value human-made art, and the AIs are going to explore the universe and create cancer cures.

[00:38:43] Seth: And then give us money.

[00:38:45] Andrey: And give us whatever—and we will have the Star Trek machine where we get any material good that we need.

[00:38:54] Seth: Okay, good. I’m looking forward to it.

[00:38:56] Andrey: Yeah.

[00:38:56] Seth: How are we doing on time? How many more props do we want to do? We want to do Prop 3. This is my hobby horse; give me a little time on Prop 3.

[00:39:06] Andrey: Sure. Let’s do that.

[00:39:08] Seth: One of the results of this paper is that asymptotically we have an AK growth model. What does that mean? It says that if you are able to automate all tasks, the economy will keep growing with the accumulation of more capital. That makes sense: robots can do everything; the output of the economy is how many robots you have and how good they are at being robots, plus a productivity term. That is true of this model. What that means is that the long-term growth rate of the economy is pinned down by the national saving and reinvestment rate: the rate at which we compound today’s compute into tomorrow’s compute. There’s a technological aspect, but it’s also a social decision. I will never stop getting up on this chair and waving my flag: if you care about a future of automation, you should care about the national saving rate, because that is the growth rate in the world with automation. Andrey, were you pleased to see this prediction?
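The textbook AK algebra behind that flag-waving, in standard notation (our sketch, not pulled from the paper): with output linear in the compute stock,

$$Y = AK, \qquad \dot{K} = sY - \delta K \quad\Longrightarrow\quad g \equiv \frac{\dot{K}}{K} = sA - \delta.$$

Because there are no diminishing returns to $K$, the saving and reinvestment rate $s$ shows up one-for-one in the long-run growth rate $g$, instead of washing out of it the way it does in a standard neoclassical model with diminishing returns.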

[00:40:16] Andrey: I think it makes sense. In these types of models it has to be true. We’ve all played Factorio—it’s not a surprise.

[00:40:34] Seth: It’s just basic Factorio-nomics. Okay, one last proposition, a variation on that. He’s starting to think about dynamics. He has some things to say about what’s happening in the dynamic model, but he points out: if you can use your compute to make AI more productive in a within-period decreasing-returns-to-scale manner, then basically the growth rate is the compute accumulation rate times a constant factor. Basically this form of science reinforcing the AI is not enough to get a regime change in the growth rate. It gives you a little boost. I thought that was cool.
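A minimal sketch of why within-period decreasing returns to science give a boost rather than a regime change (our stylization, not the paper’s exact setup): suppose productivity each period is a decreasing-returns function of the current compute stock, $A_t = B K_t^{\eta}$ with $0 < \eta < 1$, so research gains are re-derived from the stock rather than compounding on themselves. Then

$$Y_t = A_t K_t = B K_t^{1+\eta} \quad\Longrightarrow\quad g_Y = (1+\eta)\, g_K.$$

Taking the compute accumulation rate $g_K$ as given, output grows at that rate times the constant factor $1+\eta$: a permanently faster exponential, but no singularity.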

[00:41:23] Andrey: It’s nice for the model not to explode.

[00:41:27] Seth: Did he get panned for that? A lot of people like models that explode. Jones has a model that explodes.

[00:41:33] Andrey: I don’t think people were concerned about finite-time explosion. They were concerned with the bottlenecks.

[00:41:40] Seth: I’m going to make a Yudkowsky-ish point. One of the main reasons that, upon reflective equilibrium, I’m not super worried about the doomer scenarios is that in my brain, power has a connection to GDP, and in all of these models GDP has to grow in this regular exponential way—which is fast, but it’s not “today to tomorrow” fast. Based on how we think that works, the idea that we would get an algorithmic explosion where power explodes overnight seems out of sample.

[00:42:21] Andrey: I mean, we have no idea. It could still be—

[00:42:34] Seth: The saving rate—

[00:42:35] Andrey: We don’t know how much the AGI would choose to reinvest into its own growth. We just don’t. So I don’t think, in the transition dynamics, this is a very plausible argument. Nothing you just said prevents an AI from starting an automated AI factory and tripling itself over the course of a week.

[00:43:04] Seth: Yeah—exponential growth with an exponential rate determined by its reinvestment rate.

[00:43:09] Andrey: I don’t find that comforting. That exponential rate could be really fast.

[00:43:15] Seth: I’m saying there are models where we go from zero to infinity in finite time.

[00:43:22] Andrey: Sure.

[00:43:24] Seth: In any finite amount of time it’s still going to be one huge number and another huge number, and that gives me very little comfort personally.

[00:43:34] Seth: Okay. Viewers at home, tell us: how much scarier is an asymptote to infinity than an exponential? We’ll get those votes and report ’em next week.

[00:43:52] Andrey: It could be exponential to, you know, very—it could be a really big exponential. Big exponential.

[00:43:52] Seth: Let’s move to our conclusions and posteriors. Do you have any overall points you want to make about the paper before we move into posteriors?

[00:44:04] Andrey: It’s a fun thought exercise. I enjoy thinking about it.

[00:44:09] Seth: At a stylistic level, I really prefer the way that Pascual writes these to the way that Ben Jones and even Daron Acemoglu write these. I found the stripped-downness and the lack of rhetorical pretense in this draft really refreshing, and sensible given his comparative advantage. What’s not in here: I got on my high horse to say “saving rate important.” And I think the idea that there isn’t some fixed other thing getting used up that could drive down human wages is an obvious omission, one that makes the model less relevant to AI today. It seems like you’re modeling AI a thousand years from now, so at least nest what’s happening today. But it’s an elegant way of providing some fundamental points that I think are true of a lot of models, in language that I think is useful. So I liked this theory paper, even though I don’t think it’s going to move my priors that much.

[00:45:22] Andrey: I think I’m in the same boat.

[00:45:32] Seth: Moving to our posteriors. Our first question was: after we get AGI, asymptotically the labor share will go to zero. I said greater than 90% chance for large decreases of labor share, less than 10% chance of going to super-duper small—like less than 1%—within a hundred years. Am I moved here? We raised ideas in either direction that would mitigate. On the one hand, there might be some essential human bottleneck you can never automate. On the other hand, many kinds of human productivity require investment into humans—physical or human capital—that might get diverted to AI. Therefore wages could go down in an accelerated way to zero. I do not see these as contradictions along the path. But for the asymptotics, that’s a prior thing, so on this particular question I didn’t move.

[00:46:51] Andrey: I don’t know if I moved very much either. The tricky thing is infinite results vs. finite-time predictions. A hundred years is a long time, but it’s also not infinity. It’s hard for me.

[00:47:17] Seth: You might imagine a long tail—something we were riffing on before the show. Maybe first we automate 90% of jobs, then 95%, then 97%, and that asymptotic tail is still important and complementary and bottleneck-y enough a hundred years from now that there’s a big labor share because that’s the one last essential job.

[00:47:42] Andrey: Yeah. And once again—if there are jobs where humans demand that other humans do them, the only way compute can do them is to trick the human into thinking it’s a human doing it when it’s really an AI. That’s possible, but we’re getting into some pretty ridiculous—

[00:48:04] Seth: We should have a test for that. Some sort of Andrey test—or maybe a Turing test. All right, second prior—let’s posteriorize it. We have to justify that wages won’t go down in the long run because people can always break away and recreate the economy—do their own accessory work thing.

[00:48:23] Andrey: Yeah.

[00:48:24] Seth: I said: wages won’t go down after AGI. After a hundred years, I would say real wages are higher a hundred years after AGI—70%. Did I move because of this paper? Maybe this moves me down 1% to 69%, based on a conference full of people accepting that premise.

[00:48:52] Andrey: Just to be clear, this paper is arguing for wages not going down, so why are you going down?

[00:48:56] Seth: 71%. I said 70% they go up; I’m going down to 69% they go up.

[00:49:04] Andrey: I see. I view “equal” as a knife-edge case—it’s measure zero. So you shouldn’t adjust at all.

[00:49:16] Seth: No, actually—dude—oh my God. All right, I’ll let us go out on this joke. I read a book that had the most hilarious theory of monetary policy the other day. It was in our book club that Andrey is in, where we read weird philosophy texts. Let me find it. It was in the book Ecstasy, which is a book about having fun, I guess, by a Freudian analyst. And in it he offers the following theory of the price level. So, on page 45, in his discussion of Dionysus as the scapegoat, the author writes: “Sheep represent everything of value in our Judeo-Christian world. The sheep, in fact, is the chief determinant of our currency. Every currency in the Western world—the shilling, the franc, the Deutsche mark, the lira, the peso, the Austrian thaler, from which we got our dollar—was the price of one sheep. For centuries there was no inflation in the Western world because one of our money pieces was worth a sheep. You could count on that anywhere, anytime.” Wow. Someday I hope to write economics as good as that, Andrey.

[00:50:47] Andrey: Hallucinations. I feel like the AIs are unfairly maligned when humans are very good at it.

[00:51:00] Seth: He was sent a vision of the synthesis of economic policy. This is why you’ve got to keep your Apollonian and your Dionysian separate out there, guys. So let’s leave it on that note. Keep your Apollonian separated from your Dionysian, and keep your accessory work bottlenecked.

[00:51:15] Andrey: Inshallah.

[00:51:17] Seth: Oh wait. No, before we go, I apologize to all of my guests for anything bad I did to them over the last year!



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit empiricrafting.substack.com