Episode Transcript
What's going on everybody?
Welcome to another Saturday Conundrum.
It's Brian from The Daily Eye Show and I really appreciate you being here.
So this week's conundrum is called the Unerasable Self.
Now, look, we've all seen the movies where, you know, the person moves to a brand new town and it's clear that nobody knows who they are.
They're new.
They've essentially reinvented themselves in this new town.
Of course, we haven't had anything like that in real life now for decades.
Especially with the advent of the Internet, social media, I think it's harder and harder to just simply restart or, you know, choose to be somebody different tomorrow that you weren't yesterday.
And AI is only going to make this more complicated, like much more complicated.
So this episode is talking about digital twins, which you've heard us talk about on the show if you watch our show or listen to our show.
But I don't mean digital twins in the way that like NVIDIA necessarily is building, you know, digital twins of whole cities or their earth in some cases.
I mean like literally like your digital twin.
Now, we all are familiar with credit scores and how our digital past obviously has an impact today.
You know, if I have poor credit, I'm going to struggle to get new loans.
If I've gone through bankruptcies for all sorts of different reasons that may have been out of my control, that's obviously going to impact me at least into the future.
And this is something I kind of got inspired by, because Andy had talked this week about the idea of AI and memory, and how we don't want all memory to be equal.
That's, that's not the idea here.
You don't want just 100% recorded, perfect memory when it comes to AI. Number one, it would probably be impossible.
I mean, the amount of data there would be astronomical.
But two, it's also probably not what we want in general.
And you'll hear the two AI hosts talk about that on this podcast, about how we don't really want to give the same weight, or bias, to older information in some cases as we do newer information. Meaning that people should have the ability to reinvent themselves, just like we do today. At least in the US, if you're tried as a juvenile, even if you serve time in a juvenile facility, there's a chance for that not to follow you for the rest of your life, that you could perhaps have your records sealed and get another shot.
And this is just going to get harder and harder when we get further along with AI because there's going to be more data about you, more little bits and pieces that sort of make up your digital twin.
But what happens if one day you just decide, you know what, I want to be somebody different tomorrow, for better or worse?
So how does that work in the future of AI?
Is it even going to be possible?
That, I think, is a good question.
So this podcast and our two AI hosts here do a good job of kind of going back and forth as always, sort of looking at two different sides of this unerasable self conundrum.
So let me give you the quick intro and then we'll get right into it.
So for most of history, people could begin again.
You could move to a new town, change your job, your style, even your name, and become someone new.
But in a future shaped by AI driven digital twins, starting over may no longer be possible.
These twins will be trained on everything you've ever written, recorded, or shared.
They could drive credit systems, hiring models, and social records.
They might reflect the person you once were, but not the one you've become.
And because they exist across networks and databases, you can't fully erase them.
You might have changed, but the world keeps meeting an older version of you that never updates or dies.
So here's the conundrum.
When your digital twin outlives you, outlives who you are, and keeps shaping how the world sees you, can you ever truly begin again?
If the past is permanent and searchable, what does redemption or reinvention even look like?
So enjoy this conversation.
I think it's a good one.
I like this idea, as always, about peering over the, I don't know, the other side of the fence.
Whatever you want to say.
Where I feel like we always have this like kind of sort of murky idea of what's coming with AI because of current technology.
And we could say, well, you know, A + B = C and I can kind of tell if we already have this going, well, that's probably going to happen.
That's the whole idea of this series.
This conundrum series is just to, you know, look over that next hill just enough and say, hold on a second.
We're going to have some real conversations here about how AI is impacting our lives.
And, you know, this unerasable self conundrum is just yet another example of that of, you know, we're all familiar with credit scores.
We're all familiar with having track records, but it's about to get a lot more complicated than that.
So enjoy this episode, enjoy our two AI hosts, and we will see you again next week for a great conundrum.
Oh, and hey, listen, if you like these episodes but you've never listened to our show, don't forget we do a regular Daily Show Monday through Friday with all our co-hosts, and you can catch that at 10 AM Eastern Monday through Friday.
So come check us out for that.
And yeah, enjoy the episode.
Bye.
Welcome back to the Deep Dive.
Today we're diving deep into something, well, honestly a bit unsettling.
We're calling it the unerasable self.
You know, for basically all of human history, we've had this fundamental ability, the ability to just begin again.
You move, change jobs, update your look, maybe even your name, and you kind of shed the weight of your past self.
But that whole idea, that sort of historical right to transform that clean slate, it's now running headlong into this new reality of digital permanence.
And the threat here, it's not just theoretical, is it?
The research we looked at shows these AI driven digital twins.
They're trained on, well, everything you've ever put out there.
Everything written, shared, recorded.
And they're already being used for big things, credit scores, hiring, social records, your digital twin reflecting who you were maybe years ago.
It just doesn't update.
And that's the real conundrum, isn't it?
It challenges our whole idea of human growth.
I mean, when your digital twin basically outlives who you are now, when it never forgets and just keeps shaping how the world sees you, can you actually ever reinvent yourself?
So our mission today is to really dig into the research on this, this fundamental clash, humanity's capacity for reinvention versus this technological architecture of permanence.
What does redemption even mean when data never forgets?
OK, let's unpack that.
This architecture of permanence, it's more than just, you know, an old embarrassing photo.
This data is fundamentally sticky, hard to get rid of.
What makes this era so different?
Well, it's different because we're dealing with what researchers are calling digital doppelgangers.
And these aren't just static files sitting somewhere.
They're AI replicas, and they're dynamic.
They actively predict your future behavior based purely on past patterns.
They don't look at who you are today, they just keep projecting who you were indefinitely.
Right.
And this whole setup, this massive data hoovering, it's based on a specific economic model, isn't it?
It absolutely is.
The whole structure rests on what Professor Shoshana Zuboff calls surveillance capitalism.
Essentially, your personal data is treated like an oil well, an extractive resource.
Every click, every search, how long you linger on a video, all that stuff generates what she calls a behavioral surplus.
Behavioral surplus.
So like the digital exhaust fumes of our lives.
Kind of, yeah.
It's all the leftover private data from every interaction, and that gets packaged, sold and fed right into these predictive algorithms.
OK, so if the model slurps up all that behavioral surplus, how do you get it back out?
That's where we hit this huge technical wall, right?
Exactly.
Once a machine learning model absorbs data, it creates these really intricate data dependencies, complex statistical patterns.
Trying to pull out just one piece of information.
It gets exponentially harder.
And the actual process for trying to delete it, which they call machine unlearning, faces just immense computational costs, especially with these massive modern models.
It's not just hard, it's risky too from what I understand.
Oh absolutely.
Some of the cutting edge methods they're researching to fix this, like something called gradient reversal.
It's basically trying to tell the AI to selectively forget specific data it was trained on.
But even doing that carries a serious risk.
The unlearning process itself might actually leak information or just destabilize the whole model.
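If it helps to picture what that selective forgetting looks like mechanically, here's a minimal, hypothetical Python/PyTorch sketch of the gradient-reversal idea. It isn't any specific paper's method, and the function name and parameters are invented for illustration: it simply takes gradient steps that increase the loss on the data you want forgotten, which is also exactly why a naive version can leak information or destabilize the rest of the model.

```python
import torch

def unlearn_by_gradient_reversal(model, forget_loader, loss_fn, steps=10, lr=1e-4):
    """Hypothetical sketch of gradient-reversal unlearning.

    Instead of descending the loss on the "forget" examples, we ascend it,
    nudging the model away from whatever it memorized about that data.
    Real approaches add safeguards (e.g. also training on a "retain" set),
    because this naive version can easily damage overall model quality.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _, (inputs, targets) in zip(range(steps), forget_loader):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        (-loss).backward()  # reversed gradient: push the loss UP on the forget set
        optimizer.step()
    return model
```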
Wow.
And even if you somehow manage to get your data deleted through, say, a GDPR request, the information has probably already spread, hasn't it?
Precisely. We have the right to be forgotten legally in some places.
But data just spreads, well, effortlessly and unchecked, is how one paper put it.
Once your data, maybe a minor conviction, an old mug shot, a news story, hits the digital ecosystem.
It gets copied everywhere, commercial background checks, news archives.
It becomes almost impossible to truly erase.
Which leads right into this idea researchers call algorithmic determinism, the notion that your past dictates your future.
But let's push back on that for a second because the research also shows that sometimes these predictive systems, they actually work.
We saw those examples, right?
JetBlue cutting training attrition by 25%.
And Wells Fargo increasing retention by 15% using these hiring tools.
So if these systems are good at predicting success based on past data, doesn't that make the argument for second chances seem a bit idealistic?
Maybe not a practical flaw in the system.
That's the perfect tension to highlight.
Yes, that predictive power offers huge commercial benefits.
The stats are real.
But the models work based on a pretty cynical assumption that past actions perfectly predict future ones always.
And that creates this recursive trap.
Historical data doesn't just inform future possibilities, it actively limits them.
If you're someone who has changed, the system literally can't see it.
It's only designed to reward predicting stagnation.
It can't see growth.
Exactly.
And if you want a real world glimpse of where this can lead on a societal scale, look at China's social credit system.
Rolled out around 2014.
It pulls data from everywhere.
Financial, government, online behavior, and it assigns scores.
And those scores dictate real life things.
Loan approvals, travel restrictions.
It's a living example of permanent social stratification based on past actions constantly shaping your present reality.
So that's the big picture, the societal extreme.
But even here in the West, when your data gets used against you, what does that permanent digital shadow actually do to the person living and breathing and hopefully evolving?
This is where I think it gets really interesting, the psychological impact.
Oh it's crushing.
Researchers call it identity fragmentation.
You're constantly fighting to reconcile the person you are now, say the 45-year-old, with this obsolete digital twin based on the 25-year-old's online life.
The world keeps interacting with that old frozen version of you through these algorithms.
How exhausting must that be?
Yeah, exhausting.
And philosophically, this leads to what the sociologist Goffman called a spoiled identity.
It's a public image of someone who, just because of their recorded past, is seen by these automated systems as fundamentally irredeemable.
When you lose that control over how you're seen, how you're evaluated, it just eats away at that basic human need for self-determination, for autonomy.
OK, so we've laid out the crisis.
Digital permanence is seriously threatening our ability to genuinely reinvent ourselves.
But let's look back.
What's the case for second chances?
How did society used to handle transformation?
Well, historically the main mechanism was pretty simple: geographical mobility.
Moving gave you what researchers now call natural obscuration.
Natural obscuration.
Love that.
True. Moving wasn't just changing your address.
It was a form of social infrastructure.
It allowed people to grow and change because, frankly, the records just didn't follow you easily.
The past could actually be obscured.
And even today, our legal systems are often built on principles of, let's say, structured forgetting.
Think about bankruptcy law, right?
OK, the record stays for what, 7 to 10 years?
But the law explicitly provides a fresh financial start.
Within just two years of responsible behavior, you can often qualify for major loans again.
The philosophy is clear: a financial disaster in your past doesn't permanently bar your future.
And we see that same kind of logic in the justice system, too, especially for young people.
Absolutely.
Particularly in juvenile justice, most places allow records to be sealed or expunged.
There's a recognition that youthful mistakes shouldn't cast a permanent shadow.
And this isn't some fringe idea.
It's got huge political momentum with things like the Clean Slate movement.
Think about Pennsylvania automatically clearing over 33 million records.
That shows a broad consensus that we need to remove these permanent roadblocks from minor past offenses.
And underlying all this, there's a deeper philosophical point, right, about identity itself, this idea of identity persistence.
Can you maybe unpack that quickly?
Sure, philosophers make this distinction.
We maintain numerical identity, meaning you are still the same single person over time, but you undergo constant qualitative change.
Your personality, your morals.
Exactly.
Your moral evolution, your character shifts.
So the ability to reinvent yourself isn't about magically becoming a different person numerically.
It's about removing the barriers that stop your qualitative change, your growth from being seen and accepted.
And this connects directly to our ideas about justice.
We contrast retributive justice, punishment for punishment's sake, with restorative justice.
Which focuses more on repairing harm and tackling the root causes to reduce reoffending.
Precisely.
And studies show that society generally has this significant ingrained belief in offender redeemability.
You see it reflected in political movements pushing for meaningful chances for rehabilitation.
OK, so the take away here is we don't have to just accept this technical reality as destiny.
We can build systems that respect human change.
The problem isn't the technology itself, it's how we design it.
Exactly right.
GDPR's right to erasure, even if it's imperfectly implemented right now, establishes a vital principle.
Indefinite data retention is a violation of basic privacy, and the technical side is moving forward.
Machine unlearning research shows that erasure is actually possible, not merely theoretical.
We can design systems that allow for forgetting, if we choose to.
I thought the examples from the media world were really striking here.
Organizations that are historically all about archives are starting to grapple with this permanence problem.
They really are.
Newsrooms like the Boston Globe and cleveland.com have started Fresh Start programs.
They're acknowledging that old news stories can immortalize the worst decisions of ordinary people.
It creates this permanent public stain, often unevenly applied.
And it's an ethical recognition, really, that a permanent digital record causes lasting harm in a way that, you know, yesterday's physical newspaper just never could.
Yeah, it really challenges that feeling of technological determinism that the tech controls us.
It suggests it's more about a lack of imagination, or maybe will.
We have to focus on building infrastructure that keeps those pathways to individual autonomy open.
So the way forward, it seems, has to involve synthesizing these two valid perspectives.
We need to preserve accountability for past actions, sure, but also proactively enable transformation and growth.
It can't be all one or the other.
OK, so how do we actually build that synthesis into the code, into the systems that increasingly run our lives?
What are the concrete steps?
Well, the first big idea is temporal decay and contextual evaluation.
We have to stop treating decade old data the same as yesterday's data.
Systems need to build in temporal decay functions.
Meaning older information just carries less weight over time.
Exactly. Just like a bankruptcy's impact fades, algorithmic assessments need to be designed to actually reward demonstrated transformation. They need to align with what some call the moral self hypothesis, the idea that people generally do strive to improve.
Systems have to learn to distinguish genuine growth from simple stagnation.
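To make the temporal decay idea concrete, here's a small, hypothetical Python sketch; the half-life value, record format, and function names are assumptions for illustration, not anything from the research. Each historical record is weighted by an exponential decay on its age, so a ten-year-old entry counts far less than last year's.

```python
import math
from datetime import date

def decayed_weight(event_date, today=None, half_life_years=3.0):
    """Exponential temporal decay: the weight halves every `half_life_years`."""
    today = today or date.today()
    age_years = (today - event_date).days / 365.25
    return 0.5 ** (age_years / half_life_years)

def weighted_risk_score(records, today=None):
    """Combine records (each with a 'severity' and a 'date') into one score,
    letting older entries fade instead of counting forever."""
    total = sum(r["severity"] * decayed_weight(r["date"], today) for r in records)
    norm = sum(decayed_weight(r["date"], today) for r in records)
    return total / norm if norm else 0.0

# Example: a serious issue from 2014 barely moves the score next to a minor, recent one.
records = [
    {"date": date(2014, 6, 1), "severity": 0.9},
    {"date": date(2024, 6, 1), "severity": 0.2},
]
print(weighted_risk_score(records, today=date(2025, 6, 1)))
```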
OK, that makes sense.
And instead of this constant battle just to get things deleted, could redemption become something more active?
Something that technology recognizes.
That's the second key solution earned redemption.
Imagine something like certificates of rehabilitation but integrated directly into digital systems.
This fits with restorative justice ideas.
Individuals could take active steps, verifiable steps to repair harm or demonstrate change, building a positive digital record that algorithms are then programmed to give more weight than older negative data.
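Purely as a hypothetical illustration of that weighting idea (the field names, boost factor, and verification flag are invented here, not drawn from the research), a scoring rule along these lines could let verified positive records actively offset older negative ones instead of just waiting for them to fade:

```python
from datetime import date

def redemption_adjusted_score(negative_records, positive_records, today=None):
    """Toy rule: verified, recent positive actions offset aging negatives.

    Each record is a dict with a 'date' and an 'impact' between 0 and 1.
    Negatives fade with age; positives only count if they are verified.
    Lower scores are better; the floor is zero.
    """
    today = today or date.today()

    def years_old(record):
        return (today - record["date"]).days / 365.25

    penalty = sum(r["impact"] / (1 + years_old(r)) for r in negative_records)
    credit = sum(1.5 * r["impact"] for r in positive_records if r.get("verified"))
    return max(penalty - credit, 0.0)
```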
I like that it gives people some agency back.
But what about the root problem, that power imbalance of surveillance capitalism, where we, the users, have almost no control?
Right, that requires tackling distributed autonomy.
The current system thrives on constant, often non consensual profiling, so we need to shift that power.
Implementing things like decentralized identity systems, maybe using blockchain tech for IDs, could allow individuals to actually manage and control which pieces of information feed into these algorithmic scores.
It would empower you to correct inaccurate representations as you evolve instead of just begging corporations to delete stuff.
OK, decentralized identity.
And finally, what's the role for law and and regulation and making sure this is all fair?
That's crucial.
We need strong regulatory fairness.
Laws have to demand transparency.
We need to know exactly how historical data is influencing these major life decisions.
And, critically, legal frameworks need to hold the developers accountable.
They must ensure their systems properly weight these temporal factors and give real consideration to evidence of rehabilitation.
The goal is to stop AI from basically imposing a life sentence on people who genuinely changed.
So this deep dive has really highlighted this profound tension, hasn't it, between needing to respect the truth of the past, but also fiercely protecting the potential of the future self.
The good news, maybe, is that we seem to have the technical tools and the philosophical grounding to build systems that actually support human transformation.
But the design choices we make right now, they feel incredibly high stakes.
They'll really determine if that right to a second chance survives this digital age.
Indeed.
And perhaps that leaves us with a final provocative thought for you, the listener.
If we actually do manage to design these digital systems around principles like temporal decay and verified transformation, will the philosophical weight that society currently places on someone's past begin to fundamentally lessen?
Could identity change become less of an uphill struggle for redemption and more of, well, just a technological expectation? Something for you to think about.
