Cult or Just Weird

S6 E19

S6E19 - The Final Boss(es) (Effective Altruism & Longtermism)

Episode Transcript

Kayla

Kayla: Longtermism is the view that positively influencing the long-term future is a key moral priority of our times.

It's about taking seriously the sheer scale of the future and how high the stakes might be in shaping it.

It means thinking about the challenges we might face in our lifetimes that could impact civilization's whole trajectory, and taking action to benefit not just the present generation, but all generations to come.

Okay, we are back with Cult or Just Weird while we're making our way through the TESCREAL bundle, an acronym referring to the prominent futurist ideology currently defining fucking Silicon Valley, of all things.

Chris

Chris: That must be the fastest we've ever started the actual topic?

Kayla

Kayla: Oh, I haven't started the topic.

Chris

Chris: It was instantly.

Kayla

Kayla: This is just for people who maybe haven't listened to previous episodes and are just tuning in, just catching them up to speed on what we're talking about.

Essentially, what we're talking about is how this is all a cope for our innate and deeply human fear of death and whether all this stuff is a cult or if it's just weird.

Chris

Chris: Yeah, I'm still impressed, though.

I don't know.

'Cause it's usually... usually we start with, how was your day today?

I had a good day.

How did I... did I have a good day, too?

Kayla

Kayla: We'll get to that.

We'll get to that right here.

Cause we're doing our introductions.

Chris

Chris: Oh, right.

Okay.

Kayla

Kayla: I'm Kayla.

I'm a television writer.

Fear of death enthusiast, probably a lot of other things.

Thanks for listening to Cult or Just Weird.

Who are you?

Chris

Chris: I'm Chris.

I make games.

I do podcasts.

I sometimes look at data.

Kayla

Kayla: If you're listening to the show, you are currently supporting the show, and we really appreciate that.

If you'd like to support us further, you can go to patreon.com/cultorjustweird.

And if you'd like to talk more about any of the show's topics, you can find us on Discord, linked in the show notes.

Speaking of our Patreon, we actually have two new patrons to shout out this week.

Chris

Chris: Yes.

Kayla

Kayla: So thank you so much to Karen and Jim for joining our Patreon.

You can enjoy the outtakes and the polls and some of the other stuff we got going on over there.

Chris

Chris: Our outtakes are free.

Kayla

Kayla: The outtakes are free.

Chris

Chris: But hey, you know what?

That makes the top of the funnel really wide, because everybody listening right now can just go on over to our Patreon, listen to outtakes.

Kayla

Kayla: You can hear our cats.

You can hear us burping. Motorcycles, a lot of motorcycles.

It's a fun time.

Swear words. Definitely swears.

Which we do not do on the show.

Chris

Chris: Fuck no.

Kayla

Kayla: That was really good.

Chris

Chris: Thanks.

That was.

Yeah.

Classic.

Classic.

I have one more bit of business, actually.

Kayla

Kayla: Business, us.

Chris

Chris: We have transcripts now.

Kayla

Kayla: Ooh, podcast transcripts.

Chris

Chris: Finally.

I know, I was like, oh, it only took us six seasons, but we do.

So if you are listening to this and are unable to hear, then go on over to our website.

Actually, the transcripts should be available wherever the podcast is available.

But I know for sure they're also on the website where the episodes live on the website and you can read episodes instead of listening to episodes.

Kayla

Kayla: Or at the same time, if you are a person like me who has to have the subtitles on while you watch television.

Chris

Chris: That's right.

It actually technically is a subtitle file.

Kayla

Kayla: Cool.

Chris

Chris: Which I thought would make a difference on YouTube, but YouTube already subtitled it.

Kayla

Kayla: YouTube does already subtitle it.

Okay, well, go check out our transcripts.

Enjoy.

We hope it makes the show more accessible to more people.

Are you ready to jump into today's topic?

Chris

Chris: I'm already ready already.

Kayla

Kayla: So last week.

I think you made that joke last week, actually.

Chris

Chris: Did I?

Okay, well then I'm not gonna do it again.

Kayla

Kayla: Well, no, we're keeping it.

Chris

Chris: I have to cut it.

Please.

Kayla

Kayla: Last week we talked about the C in TESCREAL: cosmism.

We've gone a little bit out of order on the acronym so far.

Chris

Chris: Oh, we've been way out of order.

Kayla

Kayla: But now we're finally tackling the last two letters, EA and L.

Effective altruism and longtermism.

Chris

Chris: Okay, I have a problem with the EA.

Every other letter in TESCREAL, I know, is just one thing.

And EA, for some reason, gets two letters in TESCREAL.

Come on.

Kayla

Kayla: I mean, it is two words.

Everything else is just one word.

I guess we've touched on the EA and the L a little bit as we've gone through these last 18 episodes.

Obviously you talked about it with Doctor Emile Torres in the TESCREAL episodes.

A lot of this stuff came up in the rationalism episodes.

We've kicked the tires, so to speak.

So now it's time for us to look under the hood and really get to know what these letters stand for.

Chris

Chris: Part of my understanding, actually, of why Doctor Torres and Doctor Gebru created the TESCREAL acronym in the first place was because it's impossible to talk about one thing without at least touching on another.

So I think it kind of makes sense that we've already sort of bumped into pretty much everything that we're gonna be talking about today, you can't.

Kayla

Kayla: It's like wading through a pool full of corpses.

I don't know why.

That was my.

Chris

Chris: Wow.

Is that your go to?

I was gonna say it's like a cork board with yarn, but I guess corpses is good, too.

Kayla

Kayla: I guess, like, you know why that was.

Chris

Chris: Dude, you are morbid.

Kayla

Kayla: Cause, like, if you're waiting through a pool of corpses, you'd, like, keep bumping into them.

Chris

Chris: Oh, okay.

Yeah, I guess in your mind, that would be the thing that you'd think of first.

Kayla

Kayla: I'm sorry, everyone.

Chris

Chris: No, you're not.

Kayla

Kayla: So first, let's talk about.

I just have death on the brain because this is the death season, even though we're talking about AI first.

Effective altruism.

Chris

Chris: Yes.

Kayla

Kayla: A lot of our listeners might already know a little bit about EA, even outside of our podcast, because of the whole Sam Bankman-Fried FTX fiasco that unfolded in 2022, which we will get deeper into.

But the short version is that Sam Bankman-Fried, known widely as SBF, was a cryptocurrency entrepreneur.

He founded a cryptocurrency exchange called FTX, made a shit ton of money, and then got arrested and jailed for, like, a bunch of fraud related crimes.

And I think generally, investors, like, lost a bunch of money.

But before he got in trouble, SBF was a big effective altruism guy who donated to a number of EA causes before his downfall.

And so it was like, kind of a big deal in the news at the time.

Kayla: And everybody, a lot of the news was talking about his EA connections, and that kind of helped bring EA into the mainstream.

Chris

Chris: So can you help me clarify?

Because I think I had this notion, but I'd never really, like, explicitly clarified it, but.

So FTX, which is Sam Bankman-Fried's cryptocurrency exchange, that didn't in and of itself have anything to do with effective altruism, but he himself, as a person, was a big advocate for EA.

And then that's what made EA.

So, like, when FTX fell through and Sam Bankman-Fried turned out to be a giant fraud, that's the thing that tarnished the EA image, because FTX wasn't itself about EA, right?

Kayla

Kayla: As far as I know, and we'll probably talk more about Sam Bankman-Fried in the next episode rather than this episode, so take anything we say here with a little bit of a grain of salt.

As far as I know, FTX was just a cryptocurrency exchange.

So I don't think it was about EA, but he himself was like, he made a shit ton of money.

He was an extraordinarily wealthy person and.

Chris

Chris: Was a big, like, did he make the money?

Kayla

Kayla: EA?

Well, money was there, and it was in his name.

Chris

Chris: He acquired money.

Kayla

Kayla: Money came to be.

And he, as a Silicon Valley guy, was like, power.

A powerful enough figure that he was, like, getting people into EA.

Chris

Chris: Got it.

Kayla

Kayla: And spreading the word about EA, kind of thing.

Chris

Chris: Okay.

Kayla

Kayla: As far as I know.

And again, we'll talk more about it.

Chris

Chris: No, that makes sense.

A little bit later, I was like, when that first.

When the news first broke on all this stuff, I was just a little confused.

Cause I was like, is it.

Is he in charge of some EA organization, or is it just so.

It sounds like it's just.

It was mainly his own personal charisma that was driving that.

Kayla

Kayla: Yeah, he was just a TESCREAList.

Chris

Chris: Right.

Okay.

Kayla

Kayla: But effective altruism has a deeper history than just SBF.

It's actually been around as a concept for over a decade.

So let's go back to the beginning.

Over a decade doesn't sound like that long.

Chris

Chris: No, dude, these days, ten years.

It is ten years.

And not even just these days, but in the thing we're talking about, ten years is forever.

Kayla

Kayla: It's more than ten years.

Chris

Chris: Jeez.

Kayla

Kayla: I think some of the earliest stuff we're talking about is, like, 2000.

Chris

Chris: Wow.

Kayla

Kayla: And that's, like, ancient.

Chris

Chris: That is super ancient.

That's back when Eliezer Yudkowsky was predicting the end of the world in 2008.

Kayla

Kayla: In 2011, before the world ended, an organization called Giving What We Can and an organization called 80,000 Hours decided to merge into a joint effort.

Giving What We Can had been founded at Oxford University just two years prior.

Headed up by philosopher Toby Ord, his wife and physician...

Chris

Chris: Pondering my ord.

Kayla

Kayla: Pondering my ord. His wife and physician-in-training, Bernadette Young, and philosopher William MacAskill.

I'm pausing here because I don't know how much I want to say about William MacAskill in this episode or how much to save for the next episode.

I have so many thoughts and feelings about William MacAskill.

Chris

Chris: You're bringing up the usual suspects here.

Kayla

Kayla: These are the usual suspects of TESCREAL, and specifically of the EA and L.

Members of Giving What We Can pledged to give 10% of their income or more to, quote unquote, effective charities, which at the time were largely focused on alleviating global poverty.

80,000 Hours was a nonprofit focused on researching which careers are the most, quote unquote, effective in terms of positive social impact.

Like, 80,000 Hours refers to the average amount of time a person will spend in their career.

Chris

Chris: Oh, you just poked a neuron.

I feel like I remember 80,000 hours now.

Kayla

Kayla: There you go.

I do remember that philosopher William MacAskill was also one of its founders.

And, like, this guy was, like... okay, how many years ago was 2011? What's 37 minus 13? 24.

Yeah, this guy's, like, 24 at the time.

Chris

Chris: I hate math.

Don't make me do math.

Kayla

Kayla: When the two organizations merged, the members voted on a new name, and the Center for Effective Altruism was born.

The convergence, and kind of, like, introduction, of the phrase effective altruism to describe the kind of ethical approaches taken by some philosophers at the time coincided with a couple other things that would eventually kind of fall under either the EA umbrella or at least the wider TESCREAL umbrella.

Okay, we're talking charity assessment organizations.

I'm gonna, like, hopefully trigger some more neurons for you.

Givewell and open philanthropy, which were founded in 2007 and 2017, respectively.

Chris

Chris: I remember both of those.

Kayla

Kayla: We're, of course, talking LessWrong, the rationalist discussion forum, founded in 2009.

Chris

Chris: I am trying to forget that one.

Kayla

Kayla: We're talking the Singularity Institute. I think it has a different name now, but at the time, it was the Singularity Institute, and it was founded to study the safety of artificial intelligence.

Chris

Chris: In 2000. SIAI.

Yeah, so that was Eliezer's thing.

Kayla

Kayla: I think it's called something else.

Chris

Chris: And now it's MIRI.

Kayla

Kayla: MIRI.

Thank you.

Chris

Chris: Machine Intelligence Research Institute.

Kayla

Kayla: And we're also talking about the now-defunct Future of Humanity Institute, founded in 2005 to study things like existential risk for humanity.

Chris

Chris: And that was the Nick Bostrom joint.

Kayla

Kayla: Bostrom joint, which was at Oxford. I think I may leave that to you to talk about in future episodes, because there's also a lot to say about Nick Bostrom.

There's so much left to talk about here.

Chris

Chris: Too many things.

Kayla

Kayla: Everybody is so scared of dying.

Chris

Chris: And so am I, by the way.

The fall of the Future of Humanity... wait, what was it? No, not Future Humanity. What was it called? Oh, it was called Future of Humanity.

Oh. That's why we named our episodes that.

That was only a few months ago.

It was, like, April as of publishing here.

Kayla

Kayla: Yeah, it was April 2024, I believe.

More loosely related.

There were also followers of this moral philosopher named Peter Singer, who also gravitated toward these circles.

And Peter Singer, I think, started publishing in the seventies.

So this stuff's been around for a while.

All these groups and the people who either belonged to them, believed in them, promoted them, or followed them kind of all got munged together in the mid aughts and obviously beyond.

In 2013, philanthropists hosted the first annual Effective Altruism Global conference, which has taken place every year since.

But what exactly is effective altruism?

We'll go back to that age old question.

What would you say you'd do here?

William MacAskill, we've talked about multiple times already.

He's one of the main architects behind the movement, and he defines EA as this in his essay "Effective Altruism: Introduction," quote:

Kayla: Effective altruism is the project of using evidence and reason to figure out how to benefit others as much as possible and taking action on that basis.

End quote.

Chris

Chris: See, again, the first, like, when you first dip your toes into this stuff.

Kayla

Kayla: I think it's noble.

Chris

Chris: Yeah.

I'm like, that sounds great.

Kayla

Kayla: I have to say, I don't have a lot of.

I went into this with a real bad attitude, and I came out of it with not a real bad attitude.

I kind of turned around on it.

I think that maybe next episode, I'm gonna have a bad attitude again.

Chris

Chris: That's how it goes here, man.

Kayla

Kayla: This episode's kind of like background, and next episode's kind of gonna be more like the poking of the holes.

Chris

Chris: Yeah, that's how we do things here.

That's what we did with... remember the Hare Krishna episodes?

The first one was like, wow, that's so neat.

They do awesome singing, and the place was cool, and it's like, cheap, good food.

And then the next one was like, murders.

Kayla

Kayla: Yeah, that is a trope on our show.

William MacAskill's pinned tweet on Twitter goes a step further, quote: Effective altruism is not a package of particular views. It's about using evidence and careful reasoning to try to do more good. What science is to the pursuit of truth, EA is, or at least aspires to be, to the pursuit of good.

End quote.

Chris

Chris: That's... man, I like that Easter egg.

Kayla

Kayla: For our listeners who may be into this stuff, I think that quote tweet was in reply to a Steven Pinker tweet about the pitfalls of Ea.

I'm not gonna talk about Steven Pinker right now, but just Easter egg for anybody who might be listening and has any opinions about Steven Pinker.

Largely effective altruists work to select the most effective charities to donate to and the most effective careers to dedicate their lives to, either by making the most money so that they can donate more, which is known as, quote unquote, earning to give or by choosing careers that are focused on the greater good.

And as we've learned, this is not really a niche movement.

It's fairly widespread across academia and has launched a number of institutes, research centers, advisory organizations, and charities.

Kayla: It's estimated by EA-critical scholars that EA-based charities have donated at least several hundred million dollars.

It's probably over a billion dollars at this point to their chosen causes.

There's a lot of money here.

Chris

Chris: I see.

Now I'm kind of like, wondering, how are they calculating what is the most good?

Kayla

Kayla: That's why there are research centers and institutes and stuff, is that they have people whose work is to calculate and figure it out and decide and recommend it.

Chris

Chris: Sounds like utilitarianism, the movement.

Like, that's what the whole thing kind of sounds like.

Kayla

Kayla: It is.

There are differences that we'll get to, but there are similarities as well.

Chris

Chris: Right.

Kayla

Kayla: What are some of those chosen causes, by the way?

What are EAers donating their money to? The Human Fund?

Well, yes, no.

They actually, they've got some very specific things.

First, before we get into the actual causes, I wanted to note that EA considers something that they call, quote unquote, cause prioritization.

So, like, unlike other nonprofits who focus on a single issue, like Susan G. Komen, which we all know is specifically for breast cancer.

Effective altruists believe the most money should be given to the cause that will do the most good.

So there's not, like, there's not a human fund.

There's not a, like, we are effective altruism.

Donate to us, and we'll make the most money for effective altruism.

They're like, we're gonna work to figure out where the money needs to go, rather than picking a specific thing.

Kayla: They also do not subscribe to local ideals of philanthropy.

So, like, helping your local community versus helping a community halfway across the world.

Like, a lot of nonprofits are very, like, you know, donate to this nonprofit because it helps, like, people in your city, versus donate to EA causes because they help the most people, even if.

Chris

Chris: It's regardless of where.

Kayla

Kayla: Yeah, right.

Chris

Chris: Okay.

Kayla

Kayla: Effective.

Like I mentioned, effective altruists have organizations specifically for researching and analyzing cause prioritization.

Chris

Chris: Okay.

Kayla

Kayla: That's the whole thing.

Chris

Chris: Now, just noting here that I'm skeptical of such activities.

Kayla

Kayla: I might un-skeptic you.

Chris

Chris: Okay.

I have a degree of skepticism going into it.

Kayla

Kayla: I think that you should.

And I also think that I went into this being like, you guys don't do anything.

And then I went, oh, my God, these guys do quite a bit, actually.

Chris

Chris: Yeah.

I'm not denying that they do a lot of work.

I'm sure they do a lot of work.

But you know what?

I'll let you get to that.

Kayla

Kayla: Well, hold your thoughts.

In general, though, to go to the specific causes: EA currently focuses on, as we mentioned, the alleviation of global poverty; tropical diseases such as malaria, and deworming initiatives (human deworming); and animal welfare.

Like this is a big one.

A lot of especially early effective altruists focused on this.

And interestingly, a number of EA critics are also animal welfare people, like animal ethics philosophers.

Recently there was a book that came out that was, I forget exactly the title.

I think I'm linking it in the show notes because I referenced these academics.

But there was recently a book of essays that came out criticizing EA.

And the three academics were, among other areas of study, animal ethics philosophers.

Chris

Chris: That's interesting.

It surprises me a little bit, because I remember Emile saying in one part of our interview, and I hate to quote this because I don't remember who he was quoting, but it might have been MacAskill or might have been from somebody in the book that he wrote.

And that's why I don't know if it's an EAer or a longtermist, but he quoted somebody as saying basically, like, if certain species go extinct, that's fine, because they're not sentient or sapient like we are, so they don't...

That would be like a net positive.

Kayla

Kayla: I think that there's some.

I think that they have an interesting set of ethics around animals, because it does seem like EAers are very clear that, like, humans are not animals, animals are not sentient.

And it also seems like they still can ascribe suffering to animals and say that animals suffer.

And so it's better to not cause the suffering of the animals even though they're not sentient.

Like, a lot of EA people are vegan and vegetarian.

Like, MacAskill, I think, is a vegetarian.

Chris

Chris: Oh, really?

Kayla

Kayla: Yes.

And this is a result specifically of their EA beliefs.

Chris

Chris: Right.

Okay.

Kayla

Kayla: And last on the list of causes, the long term future and existential risk.

They want to make sure we don't do catastrophic shit now that makes life a disaster for potential future humankind.

Chris

Chris: Okay.

Yep.

There's the x risk thing.

Kayla

Kayla: The first three are relatively mainstream, normal causes.

The last one is where we start to tip over into, like, that weirder side of TESCREAL, as we've already covered.

That's where we get into AI risk.

How do we save trillions of future humans, even if that means worsening the suffering of billions of current humans?

That kind of stuff, right?

Chris

Chris: That's the L, right?

Kayla

Kayla: In short, long termism.

Yeah, but we're not there yet.

We're still talking about effective altruism.

I want to talk about how effective effective altruism really is.

Chris

Chris: Oh, effective.

Effective altruism.

Kayla

Kayla: Altruism, which, like, is kind of a difficult thing to measure because it's such a big thing.

And it's already hard to be like, if I donate a million dollars, how much help is this doing?

Chris

Chris: That's hard to measure. Who affects the effectors?

Kayla

Kayla: But luckily for us, Scott Alexander, a rationalist blogger you may remember from our episodes on LessWrong, has an essay titled "In Continued Defense of Effective Altruism" that does do the work of giving us some hard numbers.

Chris

Chris: Yeah, he has a bunch of, like, famous, I guess if you want to say, posts on LessWrong.

And he also created Slate Star Codex, which is, like, where part of the rationalist diaspora on the Internet went.

Kayla

Kayla: Now, these numbers were dug up by him, and I do believe that he's done the work to verify this stuff.

But I only verified one of the claims personally, because I'm bad at mathematics, and it checked out.

So he claims.

This is the one that I verified.

He claims that effective altruism has prevented around 200,000 deaths from malaria, citing a number from the Against Malaria Foundation, or AMF.

Okay, so GiveWell, the EA charity assessor we mentioned earlier, identifies the Against Malaria Foundation as one of their top recommendations.

Scott Alexander says that GiveWell funds about $90 million of AMF's $100 million revenue.

So, to quote from Alexander's essay: GiveWell estimates that Malaria Consortium can prevent one death for $5,000.

And EA has donated about $100 million per year for several years.

So 20,000 lives per year times some number of years.

Kayla: I have rounded these two sources combined off to 200,000.

Side note, for me, like, yeah, I saw anywhere between like 150,000 to 185,000 to 200,000.

Chris

Chris: Okay.

Kayla

Kayla: As a sanity check, malaria death toll declined from about 1 million to 600,000 between 20 15, mostly because of bed net programs like these, meaning EA funded donations in their biggest year were responsible for about 10% of the yearly decline, end quote.

Chris

Chris: Okay, that sounds good.

I know I've heard, like elsewhere, that malaria nets are like a thing, and that's like, you know, an effective thing.

Kayla

Kayla: I remember that being like a big Bill Gates thing, like malaria has been talked about by people with a lot of money that they're looking to donate for a long time.

And clearly the deaths have gone down globally and that's a good thing.

Chris

Chris: Good job.

I agree.

Kayla

Kayla: Scott Alexander also attributes the following to effective altruism.

It has treated 25 million cases of chronic parasite infection.

These are the numbers that I have not verified.

Chris

Chris: Okay.

Kayla

Kayla: Given 5 million people access to clean drinking water.

Supported clinical trials for a currently approved malaria vaccine and a malaria vaccine also on track for approval.

Supported additional research into vaccines for syphilis, malaria, some other things that I don't know, hepatitis C, hepatitis E.

Supported teams giving developmental economics advice in Ethiopia, India, Rwanda.

Convinced farms to switch 400 million chickens from caged to cage-free.

That's where some of the animal ethic stuff comes in.

Freed 500,000 pigs from tiny crates where they weren't able to move around, and gotten 3,000 companies, including Pepsi, Kellogg's, CVS, and Whole Foods, to commit to selling low-cruelty meat.

Those are all.

If we can trace those efforts back to either EA donors or EA charity assessors, that's not small shit.

That's big shit.

Chris

Chris: Big if true.

Kayla

Kayla: Big if true.

My next sentence is: now, these are big claims.

If you're like me, you might be going, okay, like, are all these things actually effective altruism? Are we just, like, calling some efforts EA because it's easier to absorb something than to actually do something?

Like, there's a malaria foundation out there that's doing all the work, and EA is taking the credit for it?

Chris

Chris: Yeah, and again, like, on that note, I'm also, like, unclear.

Like, clearly GiveWell is an EA-specific organization, but isn't EA more like a movement?

So if I work for XYZ charity that's doing the malaria nets, that isn't GiveWell. What did you call it, the name of it? Against Malaria. If I'm working for Against Malaria and I self-identify as an EA, is that being counted?

Kayla

Kayla: Well, I think what Scott Alexander was counting there was the fact that GiveWell is responsible for 90% of the Against Malaria Foundation's funding, and GiveWell is specifically EA, to him.

And I agree that counts as like a quote unquote EA effort.

Chris

Chris: Totally.

Yeah.

Yeah.

Okay.

Kayla

Kayla: He also says this, quote: I'm counting it, and this is of everything he's evaluating here, I'm counting it as an EA accomplishment if EA either provided the funding or did the work. Further explanations in the footnotes.

And this is a very well-footnoted essay.

Okay. I'm also slightly, and this is called TESCREAL, Scott, I'm also slightly conflating EA, rationalism, and AI doomerism rather than doing the hard work of teasing them apart.

Chris

Chris: See, you can't do it.

If only you had the acronym.

Kayla

Kayla: Side note: Alexander does have a section on EA's impact on AI.

That's where the AI doomerism comes in.

But we're skipping that for now because again, the hard work of teasing them apart is hard.

And for organizational purposes, discussions of AI, to me, fit better in the framework of what we're discussing next, which is longtermism.

Why are we hewing longtermism so closely to effective altruism? Why am I doing two of these at once?

Again, it's because longtermism essentially grew out of EA.

There's a reason why it's the last letter in the TESCREAL bundle and why it follows EA.

It's because it's literally a subset or a subculture of effective altruism.

Chris

Chris: If you take just those, it's eel.

Kayla

Kayla: It's eel.

I'm viewing the L as kind of like the final boss of TESCREAL.

Chris

Chris: Yeah, yeah.

Kayla

Kayla: I'm saying that now.

And something worse is going to come along.

Not that long termism is necessarily bad.

It's not necessarily bad.

And actually, I will say there is another final boss that may or may not come up in the show.

Chris

Chris: Oh, is this like a secret boss?

Kayla

Kayla: I think there's a hidden boss.

Chris

Chris: Hidden boss.

Cool.

Kayla

Kayla: There's something.

I'll just say it here.

There's something called effective accelerationism.

That's like a movement that's currently taking shape.

Chris

Chris: Well, now it's not a secret boss anymore.

Kayla

Kayla: And that's the secret boss.

Chris

Chris: Okay, is this like one of those bosses that is optional, but if you fight it, it's harder?

Kayla

Kayla: Yes, sure.

Chris

Chris: Ruby Weapon.

Kayla

Kayla: Effective altruism is one thing.

I'm just trying to explain what it is.

Effective altruism is like, maybe we shouldn't let AI kill everyone and we should have some safety regulations.

And effective accelerationism says fuck you.

No, the only way we can save the world and the future of humanity is if we pedal to the metal.

No regulations on AI get wrecked.

But they're not in the TESCREAL bundle yet.

Chris

Chris: Mm.

They're sort of like orbiting around it.

By the way, speaking of letters: like, do you know how hard it is for somebody in the video game industry to rework their brain around EA meaning, you know, effective altruism and not Electronic Arts?

Kayla

Kayla: I know.

Me too.

One important thing to know about EA, the movement, not Electronic Arts, is that it's primarily a, quote unquote, elite movement, meaning that it originated in high-status educational institutions and appeals directly to the very wealthy.

Obviously.

It's all about like, give a lot of your money, earn to give, make a lot of money so you can give it.

And it has therefore become.

Chris

Chris: Alleviate your guilt.

Kayla

Kayla: Yeah.

It's therefore become very pervasive in Silicon Valley culture.

And that's where the longtermist subculture incubated and hatched.

To define longtermism more deeply, we'll go back to MacAskill again.

He says, quote: Longtermism is the view that positively influencing the long-term future is a key moral priority of our times.

It's about taking seriously the sheer scale of the future and how high the stakes might be in shaping it.

It means thinking about the challenges we might face in our lifetimes that could impact civilization's whole trajectory and taking action to benefit not just the present generation, but all generations to come.

Chris

Chris: Okay.

Like, again, like with every other letter on the intro bit, I'm sort of on board.

Kayla

Kayla: Yeah.

It's the argument for addressing climate change.

Chris

Chris: Right, right.

There's just a lot of broadness and assumptions there about when you say long term future, how long?

What do you mean?

Kayla

Kayla: Who? "Who" is a good question.

In his recent book, What We Owe the Future, MacAskill breaks it down further.

And then Wikipedia pulled a great quote so I didn't have to do the hard work of going and checking the book out from the library.

Chris

Chris: Thanks, Jimmy Wales.

Kayla

Kayla: Wikipedia describes the book as such.

His argument has three parts.

First, future people count morally as much as the people alive today.

Chris

Chris: All right, now I'm off.

Kayla

Kayla: Second, the future is immense because humanity may survive for a very long time.

And third, the future could be very good or very bad, and our actions could make the difference.

End quote.

Chris

Chris: Okay.

Yeah.

Two and three seem alright.

I don't know about the valuing the future humans just as much as existing humans.

Kayla

Kayla: I got a problem with that one.

Chris

Chris: That is like mad speculative.

Kayla

Kayla: I got a problem with that one.

Yeah, I'm gonna not talk about my problems with that one yet.

I'm gonna hold off.

Chris

Chris: You're just gonna say it.

You're just gonna tease it.

Kayla

Kayla: I just.

This episode again, is more for like information and background.

And the next episode is the color episode where I get to go like, I think that this is dumb.

Chris

Chris: Oh, that's my favorite part.

Kayla

Kayla: I know.

If you'll remember from previous episodes, this boils down to, quote, bringing more happy people into existence is good, all other things being equal, end quote.

Longtermists are generally focused on existential risks and preventing the destruction of humanity.

Which is a good thing.

Chris

Chris: It's a good thing.

I can't disagree with that.

As broadly as it's stated.

Kayla

Kayla: I'm back around on longtermism after this episode.

There's problems, there's problems.

But also fearing climate change and wanting to fix it, that is a...

Chris

Chris: Longtermist issue, if that's what...

For the longtermists that care about that kind of thing, I agree with you.

Kayla

Kayla: A lot of them do.

A lot of them do.

Okay, existential risk.

I keep bringing up climate change, but this can also cover nuclear war, pandemics, global totalitarianism, and then, of course, the weirder stuff like nanotechnology and the grey goose stuff, and artificial intelligence.

AI AGI, that stuff.

Chris

Chris: Grey goose is good.

Kayla

Kayla: Grey goo.

Grey goo.

The nanobots just turn everything into grey goo, not into vodka.

Yeah.

Longtermists seek to reduce these risks so that we can improve the number and quality of future lives over long time scales.

They also believe that... the reason why this is, like, important to them now is they believe that humanity is currently at a critical inflection point, where what we do now determines the ultimate future of humanity, which has...

Chris

Chris: Never been true before.

Kayla

Kayla: It's.

I'm.

I don't think they're totally right, but I also don't think they're totally wrong.

Chris

Chris: Yeah.

Kayla

Kayla: If you look, especially, again, climate change.

If you look at climate change and we hear all the time, like, if we don't get our emissions down, then it's gonna be ruining the world forever.

Chris

Chris: My only joke there was, at all points in time, humanity is affecting what comes after us.

Kayla

Kayla: Yes, you're right.

Chris

Chris: But, but we're extra special.

You're totally right.

Kayla

Kayla: Yeah, I think we're extra special.

I think that.

I think that.

I can't argue with the climate change thing.

We are extra special in that.

Chris

Chris: Yes.

And also, it's not.

Climate change isn't the first environmental catastrophe that we've had to contend with.

Kayla

Kayla: Oh, really?

Chris

Chris: Yeah.

Kayla

Kayla: You sound like a climate change denier.

Chris

Chris: No, I'm not saying it's... it's not the first man-made environmental...

Kayla

Kayla: We all know.

Chris

Chris: Just don't be upset that you're taking the L here.

You're doing the L episode.

Kayla

Kayla: There absolutely is no L here for me to take.

Chris

Chris: All kinds of L's.

It's raining L's.

Kayla

Kayla: But again, we go back to the question: what would you say you do here?

And then we go back to Scott Alexander's article on the effectiveness of these movements.

And I'm going to now focus on the AI section, because, again, that's such a big subset for long termists.

So, quoting from Scott Alexander's article, things that they have done include: founded the field of AI safety and incubated it from nothing up until the point where many people are talking about this, endorsing it.

We've got Sam Altman, which, oh, boy, do we need to talk about that next episode.

We've got Bill Gates, we've got big names, and even, I think, the US government.

We're all talking about AI safety, right?

Chris

Chris: We have enough of a notion of it that Andreessen Horowitz can just steamroll right over.

Kayla

Kayla: He's an e/acc guy.

Chris

Chris: I know.

Kayla

Kayla: Another thing is, EA helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future super intelligences.

They've gotten major AI companies, including OpenAI, to work with ARC Evals and evaluate their models for dangerous behavior before releasing them.

They became so influential in AI-related legislation that Politico accuses effective altruists of having, quote, taken over Washington, and, quote, largely dominating the UK's efforts to regulate advanced AI.

Chris

Chris: Ooh, that's some language.

Kayla

Kayla: They helped the British government create its Frontier AI Taskforce.

And I like this assertion from Scott Alexander: won the PR war.

A recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a, quote, global priority.

Chris

Chris: Wonder where that poll came from.

Kayla

Kayla: I believe that quote comes from the Artificial Intelligence Policy Institute, or AIPI.

Chris

Chris: Okay, so they did some polling.

Kayla

Kayla: Did some polling.

It was conducted by YouGov.

Chris

Chris: It was conducted by the T-101.

Kayla

Kayla: It was definitely conducted by.

Chris

Chris: It came door to door.

Hello.

Are you afraid of my metal body?

Kayla

Kayla: And it's the ones that say no you really got to watch out for.

A couple non-AI, but still longtermist-related, wins: they helped organize the SecureDNA consortium, which helps DNA synthesis companies figure out what their customers are requesting and avoid accidentally selling bioweapons to terrorists.

Chris

Chris: That's good.

Kayla

Kayla: Yeah.

That's also, like, a thing that people buy on the dark web.

I watched this show on Netflix that I told you about.

Remember the roommate from hell or whatever that show was called?

Chris

Chris: Oh, yeah.

Kayla

Kayla: And one of the people had a roommate that was constantly trying to poison and kill her.

And she ordered.

She didn't order staph infection.

She ordered a worse, unsurvivable version of staph infection off of the dark web.

Chris

Chris: Jesus Christ.

Kayla

Kayla: And, like, luckily the FBI found it or something.

Chris

Chris: Don't do that.

Don't do that, don't.

Kayla

Kayla: They also provided a significant fraction of all funding for DC groups trying to lower the risk of nuclear war.

Chris

Chris: Okay, that's a good one.

Kayla

Kayla: They donated tens of millions of dollars to pandemic preparedness causes years before COVID and positively influenced some countries' COVID policies.

Chris

Chris: Okay.

Kayla

Kayla: And again, these are claims from Scott Alexander.

You know, take everything with a little bit of a grain of salt, but these are EA and longtermist causes, and things that they're talking about, thinking about, saying we should donate our time, attention, and money to.

Chris

Chris: All right, keeping your Scott Alexander hat on.

What do you think he would say to Eliezer Yudkowsky's thing where he's like, it's okay if we get into a global thermonuclear war if it prevents AI catastrophe?

Kayla

Kayla: I don't get the sense that Scott Alexander would think that was a good idea, but I don't know.

I get the sense, and I'm not... I haven't read the Sequences, but Scott Alexander seems, maybe, I don't want to say more measured, but definitely seems less focused.

Eliezer Yudkowsky is very focused on AI threat.

And I think that Scott Alexander's focus is a little wider.

Chris

Chris: A little.

Kayla

Kayla: Okay, a little broader.

The key argument for long termism is basically this.

Quoting from a Vox article, quote: future people matter morally just as much as the people alive today.

There may well be more people alive in the future than there are at the present or have ever been in the past, and we can positively affect future people's lives.

Chris

Chris: I'm, again, exactly like I was before, down with all of that, except for I don't know where they're getting the "future hypothetical people are as important as" part.

Kayla

Kayla: I don't either.

I don't either.

But, like, imagine if you lived 500 years from now and you lived in a world where nuclear... global nuclear war happened 500 years prior, and now your life fucking sucks.

Would you have some anger at your ancestors?

Would you think that they had morally owed you better?

Chris

Chris: And this is hypothetical, but this doesn't need to be hypothetical, because we already do live 500 years after other humans, and we also live 100 years after other humans.

I don't particularly care for a lot of actions of my ancestors, and some of them do impact me and my fellow citizens to this day.

So I think sometimes the answer to that is yes.

I wish there were some effective altruists in the 1800s that had ended slavery sooner.

Right.

That would have been nice, right.

Or if they were around when redlining was a thing and had managed to have that not be.

That would be nice.

By the same token, I don't know.

You go back far enough, and there have been world wars.

Certainly there's been world wars in this past century, but even before that, there's wars that consumed all of Europe.

Chris: I'm not saying that's a good thing.

I'm just saying that once you get far enough in the future, it's kind of like, I don't know.

I don't know if that would have been better off a different way.

I don't even know if I would exist.

Kayla

Kayla: But I think that's why these guys talk about x risk, because x risk is different than what previous peoples have been capable of.

Chris

Chris: Sure.

That's why they're concerned with the utter erasure of humankind.

And I get that.

God, now I'm, like, arguing in their favor because I'm saying, like, even more.

Kayla

Kayla: I don't think it's super wrong to argue in their favor.

I think we'll get into some of the problems in the next episode.

The problem comes from fucking people.

It's always, people fuck shit up.

Like, we are not perfect.

And even if you take a perfect ideology, which this is not, it's gonna go in some weird ways.

And it has gone in some weird ways, and it continues to go in some weird ways.

Chris

Chris: Right.

Kayla

Kayla: And I think that issue of "future people matter morally as much as the people today" has gotten really warped in some of these guys' brains to mean future people matter more.

Chris

Chris: Right.

Kayla

Kayla: And we must do things to save those future people.

Fuck everyone alive today.

They can suffer and die.

Those people matter.

And that's a problem.

Chris

Chris: That dog ends up wagging that tail, with the, like, "therefore, all the stuff I'm doing as a billionaire is already good."

Oh, God.

Kayla

Kayla: I think that's my biggest problem with this stuff, is that these guys that are talking about it are all rich.

And I don't care what they have.

Chris

Chris: There's zero diversity.

It's like they're all.

Kayla

Kayla: It's all rich white people.

This is a very, very white movement.

Chris

Chris: Yeah.

Kayla

Kayla: And there's just.

There's far too much wealth here for me to, like, be comfortable with these guys talking to each other and planning stuff for my life and my children's lives and my great grandchildren's lives and.

Chris

Chris: Your great, great.

Kayla

Kayla: And some of these people, you would be shocked.

I'm sure you're shocked.

Terrible records on, like, how they talk about disabled people and how they talk about...

You don't say. Yeah, it's not great.

It's not great.

But that's for a future episode.

Chris

Chris: Yeah.

I just.

I don't know.

I do like your.

Your question, though.

I do like your question of, like, if you live 500 years, because I'm thinking of, like, how much do I give a shit about what they were doing in the year 1600.

Kayla

Kayla: Right.

Chris

Chris: You know?

Like, I don't know.

I don't know.

I do, and I don't.

I don't know.

Kayla

Kayla: Like I said, doing this episode kind of brought me back around on some of these ideologies, and then.

And then I scurried away.

And then they brought me back, and then I scurried away.

It's like you doing the LessWrong episodes.

Like, these movements have contributed to some pretty inarguably good things.

Malaria.

Great.

Chris

Chris: Yeah, malaria is awesome.

I'm glad they contributed to it.

Kayla

Kayla: There's a lot of really bad things here, and it's... it's no fun to just talk about the good stuff.

So next time on Cult or Just Weird, we are going to get into the W part of our acronym, the Weird.

What the hell is going on with EA and L that's had it in the headlines over the last year?

And where is it going now?

Chris

Chris: And the J part of our acronym, Juicy.

Kayla

Kayla: Juicy.

Cult or Juicy Weird.

This is Kayla, this is Chris, and.

Chris

Chris: This has been the long-term Cult or Just Weird.