
Grand Theft Automated: How to Save a Trillion Lives

Episode Transcript

Speaker 1

Pushkin.

Speaker 2

In a penthouse apartment in the Bahamas, a billionaire is hosting a meeting.

It's the kind of place you might expect to find a billionaire: marble floors, a grand piano, a balcony with a hot tub, and views of the marina.

As the billionaire and his colleagues debate what to do with his money, they look to one man in particular for wise advice.

A business analyst? A financial expert? No, a moral philosopher who's devoted his career to thinking about altruism.

This is the second of two cautionary tales about altruism.

In the first, we heard about a scientist called George Price, who helped to unravel the mystery of how evolution produced altruistic behavior, and who then became the most extreme altruist you could imagine, giving away his last penny and the coat on his back.

But wait: perhaps the billionaire in the Bahamas is an even more extreme altruist.

The only reason he ever wanted to make money was to give it away.

He looks more like a student than a billionaire, with his baggy cargo shorts, crumpled T-shirt, and disheveled hair.

He's turned his penthouse into a dorm room.

There are bean bags for napping on, monitor wires trailing haphazardly across the marble floor, a cheap bookcase full of board games, a freezer stuffed with three-dollar vegetable biryani from Trader Joe's.

Over the next year, he wants to give away a billion dollars, and he wants to do it as effectively as possible, hence the moral philosopher.

The year is twenty twenty two.

The billionaire's name is Sam Bankman-Fried, and his altruistic activities are about to be interrupted by arrest and imprisonment.

I'm Tim Harford, and you're listening to Cautionary Tales.

It's nineteen seventy two, in London.

George Price is spiraling, convinced that Jesus wants him to give all his possessions to homeless people.

Fifty miles up the road, in Oxford, a philosopher called Peter Singer publishes an essay titled Famine, Affluence, and Morality.

Singer asks us to imagine that we're walking along past a muddy pond, going about our day, when we see that in the pond a small child is drowning. The pond is shallow; we could easily wade in and save the child.

But that would ruin the nice new clothes we're wearing.

Speaker 1

What do we do?

Speaker 2

Of course, of course we wade in and save the child.

Speaker 1

If we didn't, we'd never be able to live with ourselves.

Speaker 2

But think about this, says Singer. Across the world, a small child is dying from hunger.

We could save that child's life by giving money to charity, for less than the cost of the nice new clothes we were willing to ruin.

Surely our obligation to give to the charity is just as strong as our obligation to wade into the pond.

Morally speaking, it doesn't matter if the child we can save is right there in front of us or ten thousand miles away.

If you follow Singer's logic, spending money on nice clothes instead of donating it to starving children is just as immoral as walking past the drowning child in the pond.

Follow the logic further, and as long as there's one starving child in the world, it's immoral to own anything we don't really need.

We should keep on giving until we're only just better off than the starving child ourselves.

Nobody lives like this, of course. Well, nobody except George Price.

Peter Singer's essay became a fixture in undergraduate philosophy classes, including the one I took.

Students tended to have one of two reactions.

Either they tried to find some flaw in Singer's logic, or they conceded that Singer might be right, but shoved that thought firmly to the back of their minds so they could resume their normal lives without constantly thinking about all the starving children they were thereby condemning to death.

Will MacAskill was different.

In two thousand and five, aged eighteen, MacAskill read Peter Singer's essay.

He thought Singer was clearly right and decided he should walk the walk by giving what he could as a student.

That wasn't easy.

Students never have much money, and MacAskill did also want to make friends.

He tried to compromise.

When his friends went to the pub for a drink, MacAskill ordered tap water, then quietly refilled the glass with cheap lager he'd bought from the store.

MacAskill got his degree in philosophy and a job in academia.

He decided that the first twenty six thousand pounds a year of his salary would be enough to live on, about thirty three thousand dollars.

Anything he earned above that, he'd give away.

He researched the most effective ways to donate.

Some charitable causes, it turns out, give you far more bang for your buck than others.

Bednets, for example, save lives in countries with malaria by stopping mosquitoes from biting you while you sleep.

By one estimate, around three thousand dollars spent on bednets would save one life.

MacAskill met others who shared his ideas.

A movement emerged.

MacAskill and his colleagues asked themselves what it should be called, and came up with the name effective altruism.

They became the effective altruists, committed to giving away a significant chunk of their income.

In a modest basement office in Oxford, MacAskill and his colleagues set up the Centre for Effective Altruism.

They ate cheap vegetarian food, supermarket baguettes and hummus, and debated the most effective ways to be altruistic.

For instance, might deworming pills do even more good than bednets per dollar spent?

The numbers said they might.

People with money started to ask MacAskill's advice on where to donate.

That gave him a dilemma, because MacAskill was aware of studies that show classically handsome people are more persuasive at getting donations for charities, and MacAskill had always been conscious of the gap between his two front teeth.

Should he invest in braces to make himself more handsome?

On the one hand, the money he spent on braces couldn't then be spent on bednets or deworming pills.

On the other hand, it might make him a more effective advocate for those causes.

MacAskill asked his old friends about this moral dilemma. Will, they said, if you want to get your teeth fixed, get your teeth fixed.

In a profile of MacAskill for The New Yorker, one friend recalls, it felt like he subsumed his own humanity to become a vehicle for the saving of humanity.

MacAskill was getting asked for another kind of advice too: career advice.

Students at Oxford University wanted to know what line of work they should go into if they wanted to do the most good.

Should they become a doctor in a poor country, for example, or a medical researcher to try to cure cancer?

MacAskill came up with a surprising answer: none of the above.

Speaker 1

You are at a top university.

Speaker 2

He told them, you have a chance at careers that could make you lots of money.

Speaker 1

Why not make money and give it away?

Speaker 2

If you become a high-flying banker, for example, you could easily fund a dozen doctors in poor countries, far more effective than becoming a doctor in a poor country yourself.

The logic was impeccable.

MacAskill called the idea earning to give.

In twenty twelve, MacAskill visited Cambridge, Massachusetts, to spread his ideas at other top universities.

He heard about a student at MIT who might be receptive.

A physics major in his junior year, unkempt, a bit odd, but clearly brilliant.

MacAskill sent the student an email: Let's have lunch.

Sam Bankman-Fried was surprised to get an email from a philosopher at Oxford University. Who is this guy?

Why is he inviting me to lunch?

Sam was bored of his physics degree, just as he had been bored at school throughout his childhood.

He was good at maths and a card game called Magic: The Gathering, but bad at social interaction.

He remembers having to teach himself when it's considered appropriate to smile.

His classmates, he thought, saw him as smart and maybe not all that human.

He didn't feel close to anyone except for one kid who also liked the card game Magic: The Gathering.

That kid remembers Sam as a rare combination of hyper rational and extremely kind.

Sam rationalized his way to a belief system.

I guess I should care the same amount about everyone, which is pretty much what Peter Singer said all those years ago.

When someone made the case to Sam that his beliefs were inconsistent with eating meat, Sam thought about it and concluded, this sucks because I love fried chicken.

But they're right.

He became a vegan.

In his cargo shorts, crumpled T-shirt, and battered sneakers, Sam met Will MacAskill for lunch.

He wasn't really sure what he wanted to do with his life.

He told Will that before he came to MIT, he'd thought maybe he'd become an academic, but he now realized that he'd find academia far too boring.

Will pitched Sam on his earning to give idea.

If you want to make the world a better place, he told Sam, you should set out to make lots and lots of money.

Cautionary tales will be back after the break.

Sam Bankman-Fried finished his degree at MIT and got a job on Wall Street at a trading firm.

The job involved spotting tiny inefficiencies in financial markets, patterns in data that others had overlooked.

It was all about making rational calculations and thinking in probabilities.

It wasn't easy, but if you were good, you could make a fortune.

Sam was a natural.

In his first year, he was paid three hundred thousand dollars, in his second, six hundred thousand, in his third a million.

He gave most of it away to good causes, including Will MacAskill's Centre for Effective Altruism.

How much might I be earning in ten years, he asked his bosses.

If you keep doing as well as you are, they said, maybe as much as seventy five million dollars a year.

But Sam wasn't happy.

I don't feel anything, he confided to his journal, or at least anything good.

I feel nothing but the aching hole in my brain where happiness should be.

Sam began to get interested in cryptocurrency in twenty seventeen.

Crypto was still a very new phenomenon.

It was hard to know what to make of it: an important emerging asset class, or just some complicated scam?

New coins were being launched all the time, but unlike shares in say Apple or Amazon, they were often completely unrelated to anything in the real world economy.

Sam's trading firm wouldn't let him touch crypto.

Speaker 1

It was far too risky to start with.

Speaker 2

Crypto was bought and sold on exchanges that weren't regulated in the same way as stock exchanges.

Crypto was relatively easy to steal, or to misplace.

If you lose the password to your bitcoin wallet, it's not like losing the password to your online banking.

You can't call a help desk and get another one.

Still, Sam saw an opportunity.

The nascent crypto markets were far less efficient than the financial markets he was used to operating in.

The same coins could trade on different exchanges for different prices.

Sam decided to quit his job and set up his own firm.

He'd use the techniques he had learned on Wall Street to trade in crypto. But what about the risk of theft?

With a few furtive keystrokes, an employee might divert coins into their own personal account in a way that would never work for Apple shares.

Sam had a genius solution to that problem.

He would employ only effective altruists.

If all his employees were just as committed as he was to giving their money away, they would feel no temptation to enrich themselves by stealing from the firm.

Speaker 1

It was perfect.

Speaker 2

By twenty eighteen, Sam's new company, Alameda Research, had employed a couple of dozen effective altruists and raised one hundred and seventy million dollars from investors.

But things got off to a rocky start.

The first problem was Sam's leadership style.

One employee recalls: he was expecting everyone to work eighteen-hour days, while he would not show up for meetings, not shower for weeks, have a mess all around him with old food everywhere, and fall asleep at his desk.

Then there was Sam's new bot.

He wanted to automate buying and selling coins on different exchanges.

That was a tried and tested idea on stock markets, but stock markets worked more reliably than crypto exchanges.

His management team were worried: if this went wrong, it could go very wrong, very quickly.

When you switch on this bot, they told Sam, you have to watch it like a hawk and be ready to switch it off straight away if it starts losing money.

Sam agreed.

He switched on the bot, then fell asleep.

The biggest worry of all was that, well, four million dollars' worth of crypto had just disappeared.

Speaker 1

Where had it gone?

Had somebody stolen it?

No one knew.

Speaker 2

Sam's management team wanted to tell their investors.

Let's not, said Sam.

I reckon there's an eighty percent probability that it turns up somewhere.

In Sam's hyper-rational mind, that was basically the same as them still having eighty percent of the four million dollars, and it would be perfectly reasonable to put that in their accounts.
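(To spell out the expected-value arithmetic Sam was apparently running, a one-line sketch using only the figures in the story:

\[ 0.8 \times \$4{,}000{,}000 = \$3{,}200{,}000 \]

Treat the missing crypto as still worth its probability-weighted value, and the books barely change.)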

We can't do that, said Sam's management team.

That's not how the world works.

The management team at Alameda Research lost patience.

Sam was a brilliant trader, but hilariously ill-suited to running a company.

They walked out.

Half the employees followed.

The investors pulled out three quarters of the cash they'd put in.

Still, that left Sam with forty million dollars to play with, and now there was no one to complain when he did things his way.

Sam turned on his bot and let it run.

In Oxford, Will MacAskill and his philosopher colleagues were thinking. Remember what Peter Singer had said years ago, about how distance wasn't morally important?

We should care as much about a child starving ten thousand miles away as a child drowning in a pond right in front of us.

MacAskill began to think we should treat time the same as distance.

We should care as much about children who might be born in the future as children who exist right now.

Following that logic leads to some strange conclusions.

The future could last a long time.

There might be trillions upon trillions of future humans, far more than the mere few billion alive today.

But those future humans will never be born if today's humans carelessly go extinct in the next few decades.

What might cause that? A genetically engineered pandemic, perhaps, or a rogue superintelligent AI.

So perhaps the most effective thing altruists can do is fund academic research into how best to prevent those risks.

Of course, most of that research won't lead anywhere, but a small probability of a huge payoff can still outweigh the certainty of a small payoff.

Speaker 1

Think of it like this.

Speaker 2

If you donate three thousand dollars to buy bednets, you can be fairly hopeful of saving one life.

But what if instead you put your three thousand dollars towards holding an AI safety workshop?

The chance that it will lead to an important breakthrough is minuscule, say one in a billion. But if it does, it might save lots of future lives, say a trillion.

One in a billion times a trillion.

If you think about it rationally, that is basically the same thing as saving a thousand lives.

Far more effective, then, to fund AI workshops than bednets.
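(Laid out as a worked expected-value comparison, a sketch using only the illustrative numbers above:

\[ \underbrace{\tfrac{1}{1{,}000{,}000{,}000}}_{\text{chance of a breakthrough}} \times \underbrace{1{,}000{,}000{,}000{,}000 \text{ lives}}_{\text{payoff if it works}} = 1{,}000 \text{ lives in expectation} \]

against roughly one near-certain life from three thousand dollars of bednets.)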

This new school of thought became known as longtermism.

Will MacAskill got to work on a book to spread the ideas more widely.

At Alameda Research, they finally found the missing four million dollars' worth of crypto.

It hadn't been stolen after all; there had been a computer glitch.

It had been sent to an exchange without an accompanying note about who owned it.

When Sam finally realized which exchange might have it and called them up, they were astonished: How has it taken you this long to contact us?

Sam's bot, meanwhile, was doing well.

Alameda Research was making money, but Sam wanted more.

He'd realized that the real money-making potential in crypto wasn't in trading on someone else's exchanges.

It was running an exchange of your own.

Sam came up with a clever design for a new kind of crypto exchange, one that would let its users gamble on the future price of various coins.

Many of those people would end up losing.

That's the nature of gambling.

Win or lose, Sam would take his cut, just like a casino.

The exchange Sam had in mind wouldn't be legal to run in America, so he set it up in the Bahamas. He called it FTX.

It quickly became a huge success.

It ran a Super Bowl commercial in which characters played by Larry David are shown new inventions through the ages: the wheel, the toilet, the light bulb.

Larry mocks them all: that's stupid.

At the end, he's shown FTX and sneers dismissively.

Speaker 1

It's FTX.

It's a safe and easy way to get into crypto.

I don't think so, and I'm never wrong about this stuff.

Speaker 2

The tagline: don't be like Larry, don't miss out.

The ad's message is clear.

You might not understand crypto, just like Larry David's characters didn't understand the wheel or the light bulb, but it is going to be just as important.

Don't miss out.

Who cares if you don't understand it?

Gamble on it now.

Earn to give, Will MacAskill had advised Sam Bankman-Fried.

Did anyone care how Sam was making his money as long as he was giving it away?

Cautionary tales will be back after the break.

In twenty twenty two, Will MacAskill published his book What We Owe the Future.

The organization he helped set up, the Centre for Effective Altruism, moved into impressive new premises.

It bought Wytham Abbey, a grand fifteenth century estate just outside Oxford, to host workshops on subjects like AI safety and pandemic risk.

Some Effective Altruists felt queasy.

Was this really a better use of money than bednets?

Sure, said others.

If we hold our workshops in a centuries-old building, that'll help to focus everyone's minds on a long-term time frame.

Sam Bankman-Fried was fully on board with Will MacAskill's new longtermist thinking.

One rational way to donate his money, Sam decided, might be to get politicians elected who knew something about AI and pandemics.

Politicians like Carrick Flynn, an earnest young effective altruist who had worked on pandemic prevention, then decided to run for Congress in Oregon.

Carrick Flynn didn't know that Sam Bankman-Fried had decided to throw money at his campaign.

Flynn was watching YouTube, sipping Diet Mountain Dew, when YouTube cut to an ad.

Speaker 3

Carrick Flynn faced poverty and homelessness, but he pushed through to college on a scholarship and a career protecting the most vulnerable.

Speaker 2

Flynn was so startled he covered himself in Diet Mountain Dew, and that was just the start.

Soon, the voters of Oregon's sixth Congressional district could hardly look at a screen without encountering an ad for Carrick Flynn.

Speaker 3

Carrick Flynn, the Democrat will create good jobs.

Speaker 2

Oregonians quickly got sick of hearing the name Carrick Flynn, and suspicious: these wall-to-wall ads must be costing a fortune.

Who was paying?

Reporters found out that it was a crypto billionaire who lived in the Bahamas, and demanded of Carrick Flynn: why is Sam Bankman-Fried so very keen to get you elected?

I don't know, Flynn protested.

I've never met him.

I've never talked to him.

Flynn said he assumed it must be because of his interest in pandemic risk.

Of course, the reporters didn't believe that.

Speaker 1

They said, he

Speaker 2

must want you to do something involving crypto.

Flynn became increasingly bewildered.

I'm not a crypto person, he protested.

I don't know much about it.

I've tried to read about it.

I didn't really care.

Flynn finished a distant second in his election.

For every vote he received, Sam had spent something like one thousand dollars on ads.

To put that another way, for every three votes Carrick Flynn received, Sam could have bought enough bednets to save a life.
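(A worked check of that comparison, a sketch using the episode's own numbers of a thousand dollars per vote and three thousand dollars per life saved by bednets:

\[ 3 \text{ votes} \times \$1{,}000 \text{ per vote} = \$3{,}000 \approx \text{one life's worth of bednets} \])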

Speaker 1

But in the new longtermist

Speaker 2

view of effective altruism, the money spent on Carrick Flynn hadn't been wasted.

It had always been a long shot that Carrick Flynn's political career would end up preventing some future pandemic from wiping out humanity.

But if it did, it could save trillions of future lives.

If you thought rationally about altruism, funding Carrick Flynn ads instead of bednets made perfect sense.

In the last two episodes of Cautionary Tales, we've heard two wildly different stories about people who took altruism very seriously indeed.

George Price's altruism was driven by revelation.

A vision of Jesus told him to give away whatever he had to whoever asked him.

He ended up as thin as a stick, with rotting teeth, sleeping on a mattress on the floor of a squat.

Sam Bankman-Fried's altruism was driven by rationality.

A moral philosopher told him to make lots of money and donate it effectively.

He ended up encouraging people to gamble on crypto so that he could put more money into politics.

Taking altruism very seriously indeed can take you to some strange places.

Will MacAskill flew into the Bahamas for a penthouse discussion about the best ways to help future people.

Sam had been trying out a new idea.

He identified a hundred experts in AI and pandemic risk and sent each of them a million dollars out of the blue, no strings attached.

Use it well and I'll give you more.

He planned to give away a billion dollars over the next year, but how? More political campaigns? More workshops at Wytham Abbey?

As it turned out, the question was moot, because Sam's dark secret was about to be discovered. He hadn't just been earning to give. He'd also been defrauding to give.

When Sam set up FTX, he couldn't get a US bank to open an account.

Its activities were too legally murky.

That meant FTX had no way of taking dollar deposits from its customers.

But you know who did have a dollar account? Sam's company, Alameda Research.

When customers opened an account at FTX, they wired their deposits to Alameda.

Alameda could and should have kept that money safe for the FTX customers, but they didn't. They used it to trade with.

This wasn't legally murky. This was very illegal indeed.

What was Sam thinking?

Sam was thinking that nobody need ever find out. Alameda's trading was making profits, and with this extra money to play with, it would make even more profits.

And Alameda had plenty of assets to fall back on.

It owned crypto worth many times more than those customer deposits, so whenever an FTX customer wanted their deposit back, he thought, it wouldn't be a problem.

As Sam told the author Michael Lewis, it felt to us that Alameda had infinity dollars.

But then crypto prices fell. Alameda now had finite dollars.

FTX experienced the equivalent of a run on the bank when all the customers rushed to withdraw their deposits at once.

Alameda suddenly had to scramble to find the money to pay them back.

Remember when Alameda had lost sight of four million dollars? It hadn't got any better at keeping track of what was where.

In his book Going Infinite, Michael Lewis describes a comically frantic hunt for Alameda's assets.

Speaker 4

Its CEO would come on to the screen and announce that she'd found two hundred million dollars here or four hundred million dollars there, as if she'd just made an original scientific discovery.

Some guy at Deltec, their bank in the Bahamas, messaged Ramnik to say, oh, by the way, you have three hundred million dollars with us, and it came as a total surprise to all of them.

Speaker 2

Alameda couldn't gather its money in time.

FTX was declared bankrupt.

Sam was arrested and extradited from the Bahamas to the US.

Michael Lewis makes the case that Sam wasn't so much criminal mastermind as overgrown teenager, incredibly reckless, and incredibly disorganized.

The bankruptcy lawyers eventually located enough assets in Alameda to give FTX depositors all their money back with interest. But reckless and disorganized is hardly a compelling defense.

He took money that wasn't his and spent it according to whatever logic suited him.

Sam was convicted of fraud and sentenced to twenty five years in prison.

Sam Bankman-Fried is now known for his crimes, but it's his altruism that interests me.

And they have a surprising amount in common.

Sam purloined his customers' deposits because he made a hyper-rational calculation that he'd probably get away with it, and didn't think much about the fallout if it all went wrong.

Sam gave money to politicians, not the poor, because he made a hyper-rational calculation to prioritize future people over people actually suffering today.

Both these calculations remind me how Sam's teenage classmates described him: smart and maybe not all that human.

George Price was deeply depressed by what his own work said about what it means to be human.

Our altruistic instincts evolved to serve our selfish genes.

When we feel the urge to do something nice, it tends to be the kind of thing that, for our ancestors, might have helped their relatives or forged a friendship.

Remember the contrast drawn by Peter Singer.

We wouldn't hesitate to wade into a shallow pond to save a drowning child, but we don't feel the same urge to give money to save a starving child on the other side of the world.

Why? When you think from the genes' point of view, it's not hard to explain.

The child who's drowning right in front of us might plausibly be a distant cousin or have parents who feel forever in our debt.

The child who's starving half a world away, not so much.

Our selfish genes can help to explain why we are the way we are, but they can't tell us what's the right thing to do.

We need our rational minds for that.

It is depressing that we ignore the starving child, because evolution simply didn't build us to care that much.

The moral philosophers are right that we can use our rational minds to transcend our selfish genes.

Then again, I'm not sure it's any less depressing to ignore the starving child because we've thought long and hard about it and decided to fund an AI workshop instead.

If we follow the logic of altruism far enough, it can take us to places that don't feel human at all.

So perhaps we shouldn't beat ourselves up too much if we succeed in transcending our selfish genes only by a little bit, if we manage at least to do something good for people who aren't family or friends, and we don't give everything away like George Price, or feel angst about getting braces like Will MacAskill, or donate our cash to long-shot chances of saving unborn trillions like Sam Bankman-Fried.

It may not be a rational approach to altruism, but it is a human one.

Speaker 1

There are worse things in the world than being human.

Speaker 2

A key source for this episode is Going Infinite: The Rise and Fall of a New Tycoon by Michael Lewis.

I will be speaking to Michael Lewis next week about his time with Sam Bankman-Fried, and we're going to be answering your questions on altruism and kindness.

This episode of Cautionary Tales also relied on Gideon Lewis-Kraus's profile of Will MacAskill in The New Yorker.

For a full list of our sources, visit Timharford dot com.

Cautionary Tales is written by me, Tim Harford, with Andrew Wright, Alice Fiennes, and Ryan Dilley.

It's produced by Georgia Mills and Marilyn Rust.

The sound design and original music are the work of Pascal Wyse.

Additional sound design is by Carlos San Juan at Brain Audio.

Ben Naddaff-Hafrey edited the scripts.

The show also wouldn't have been possible without the work of Jacob Weisberg, Greta Cohn, Sarah Nix, Eric Sandler, Christina Sullivan, Kira Posey, and Owen Miller.

Cautionary Tales is a production of Pushkin Industries.

If you like the show, please remember to share, rate, and review.

It really makes a difference to us. And if you want to hear the show ad-free, sign up for Pushkin Plus on the show page in Apple Podcasts or at pushkin dot fm slash plus.