CXOTalk

· E882

AI Inequality EXPOSED: When Algorithms Fail | CXOTalk #882

Episode Transcript

Today on CXO Talk episode 882, we're examining a critical issue that affects millions, yet is largely invisible to most business leaders: how AI is failing our most vulnerable citizens on an unprecedented scale.

I'm Michael Krigsman, and I'm delighted to welcome our guest, Kevin De Liban.

As founder of TechTonic Justice, Kevin has witnessed firsthand how AI systems determine who gets healthcare, who finds housing, who gets hired, and who receives government benefits.

His groundbreaking report, Inescapable AI, reveals that 92 million Americans now have fundamental aspects of their lives decided by algorithms, often with devastating consequences.

He'll share how these systems fail, why traditional accountability mechanisms don't work, and what this means for your organization.

We're discussing real systems built by major vendors causing real harm right now.

TechTonic Justice is a new nonprofit organization launched last November to protect low income people from the harms that AI causes them.

And I come to this work after 12 years as a legal aid attorney representing low income folks in all sorts of civil legal matters.

And it was there that I first saw the ways that these technologies were hurting my clients' lives.

And I was involved in several battles, and won several of them as well, and started understanding that this was a bigger problem that needed more attention and more focus.

Kevin, you were an early pioneer actually winning cases relating to AI harms.

It was really in 2016 when I had clients who were disabled or elderly on a Medicaid program that pays for an in home caregiver to help them with their daily life activities so that they can stay out of a nursing facility.

And this is better for their dignity and independence and generally cheaper for the state as well.

And what happened is the state of Arkansas replaced the nurse's discretion to decide how many hours a day of care a particular person needed with an algorithmic decision making tool, and that ended up devastating people's lives.

People's care hours were cut in half in some cases, and it left people lying in their own waste, left people getting bedsores from not being turned, left people totally shut in. Just intolerable human suffering.

And we ended up fighting against that in the courts and also with a really big public education and kind of community activation campaign.

And we won.

And that's one of the relatively few examples still to this day of kind of successful advocacy against sort of automated decision making.

Algorithms, AI are neutral mechanisms, neutral devices.

They're just math without feelings, without interests, without malice.

So given that, what is the problem here?

I would challenge some of the assumptions even in that question of them being neutral, right?

I mean, they're programmed by humans.

The statistical science that underlies a lot of this stuff is determined by humans using various choices that they have, using historical data that they have.

And that isn't a wholly objective exercise.

And so I think what you really have to look at is the purpose for which the technology is being built to understand it and understand a lot of like even the technical aspects that underpin it.

And in my world, when we're talking about low income people and sort of automated decision making for them, these are not neutral technologies at all.

These are designed oftentimes to restrict access to benefits or to empower whoever's on the other side of the equation, whether it's a landlord, a boss, a school principal, a government official to do something that they want done.

That might not be what the person who I'm representing is interested in.

So I would challenge that premise first.

So you're saying that the design of the system is intended to cause harm.

Is that correct what I'm hearing you say?

In some cases, it's intended outright to cause harm.

In some cases, it's just intended to, you know, sort of facilitate a decision by the decision maker, right?

Make a landlord's life easier, make a boss's life easier, make a government official's life easier.

The problem is that making their life easier inherently ends up making somebody else's life harder.

And so I think that's where the push and pull of this is. There is the intent issue.

There is very clearly stuff that's built to be harmful.

But then there's also this gray area where nobody is, you know, scheming in a dark room, plotting to take over the world and destroy people's lives.

But the nature of their power positions and the decisions that they're making and what makes their life easier ends up translating into that for low income people.

Can you give us an example of where the goals or the incentives are misaligned between the developers of these technologies or algorithms and, can we say, the intended recipients?

Is that even a correct way to phrase it?

You have the hiring process, for example. With most big companies now, it's riddled with AI: everything from resume review and screening to video interviewing to oversight once somebody gets the job, right?

There's nothing inherent in that process that really benefits the person who's seeking work or is an employee, right?

That's all intended to facilitate the life and the work of the employer, the bosses.

Same thing a lot of times with, you know, public benefits: you've got really dedicated public servants, but oftentimes they're unsophisticated in technology issues.

They're thinking, OK, well, this new piece of technology is going to suddenly help expand our limited capacity, so let's implement it.

And then they don't have what they need to do that in a non-destructive way.

And so the people who end up bearing the risk of, you know, sort of their own lack of knowledge or incompetence are the low income people that are subject to the decision making.

These systems are complex.

They are developed with algorithms and data as well.

Can you isolate where a primary source of the problem lies?

I realize that underneath it all, you have a human intention, trying to solve a problem, trying to achieve a goal.

But if you can, drill into that a little bit; kind of dissect this for us.

There were a couple aspects to the way the algorithm worked.

One is the mechanics of it right?

What inputs turn into what outputs?

And that's hard enough to discern, but then there's the reason that those inputs are chosen to lead to those outputs, right?

Like why do you look at this factor and not this factor?

Why is this factor shaped to look back three days instead of five days?

All of those things, those are all human decisions.

Now, they're informed by, in the best cases, statistical science.

In a lot of cases, there is no science; science is a bad descriptor for it.

A lot of times it's junk, right, that somebody just invented and comes up with.

But even in the best cases, the statistical science is still riddled with various assumptions.

And so in our example in Arkansas, for example, whether or not somebody could bathe on their own might not have been a factor that the algorithm considered.

And that's weird, right?

I mean, we're talking about home care for elderly or disabled people.

Being able to bathe on your own should be one factor that decides how many hours of care you need.

It wasn't, and your ability to prepare meals wasn't a factor either.

And so you see this disconnect of like, we know instinctively or, you know, through medical discretion and judgment, how to answer this question of how much care somebody needs.

It might be imprecise, but we know what we should be looking at.

But the algorithm didn't do that.

They looked at a lot of factors that weren't kind of intuitive, and then they ignored a lot of factors that were intuitive.
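To make the point about chosen and omitted factors concrete for technical readers, here is a hypothetical Python sketch, not the actual Arkansas tool, of a factor-scoring assessment. Everything in it, which inputs count, their weights, the base hours, the cap, is a human design decision, and a factor that is simply left out, like bathing or meal preparation, can never affect the result no matter how severe the need.

# Hypothetical sketch, not the actual Arkansas tool: a factor-scoring
# assessment where the chosen inputs, weights, and omissions are all
# human design decisions.

def assess_weekly_hours(answers: dict) -> float:
    """Map assessment answers to weekly care hours via fixed weights."""
    weights = {
        "mobility_score": 2.5,     # included by the designers
        "continence_score": 3.0,   # included by the designers
        # "can_bathe_alone" and "can_prepare_meals" are simply absent,
        # so they cannot influence the result no matter how severe.
    }
    base_hours = 10.0
    hours = base_hours + sum(
        weights[factor] * answers.get(factor, 0) for factor in weights
    )
    return min(hours, 40.0)  # an arbitrary cap, another human choice

# Two people with very different bathing needs get identical hours,
# because that factor was never part of the model.
person_a = {"mobility_score": 3, "continence_score": 2, "can_bathe_alone": 0}
person_b = {"mobility_score": 3, "continence_score": 2, "can_bathe_alone": 1}
print(assess_weekly_hours(person_a), assess_weekly_hours(person_b))  # 23.5 23.5

In this sketch the harm comes not from the arithmetic but from which questions are allowed to matter, which is exactly the disconnect described above.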

How does this come about?

Is it simply lack of understanding of the subjects, of the target?

What happens here?

Some of it is real ignorance about, you know, the lives of poor people and the ways that decisions are made and the impact of the decisions.

Some is ignorance about certain program standards or laws or anything else.

I've seen that a lot in the technology.

Some of it is the lack of having to get it right.

You know, for a lot of the developers of these algorithms in particular, they're shielded from any sort of consequences for their actions.

And so they do what they think is best or what they can sell to a client and that's that.

And then the clients that are using it, the government agencies or the employers or whatever, they might not be vetting it or, you know, they are also insulated from the accountability because if it hurts poor people, what's going to happen to them?

Like what's going to happen to the person who decided to use it?

I mean, poor people often times are not a particularly empowered political bloc.

There usually aren't scandals that end up resulting in lost jobs or lost elections for officials who are in charge of this stuff.

And so it's easy to get away with really harmful actions just because you're doing it to people who don't have a lot of power that's ready at hand.

You know, low income communities have always been super involved in advocating for themselves and organizing and everything else.

But that's a huge effort, right?

And takes like a concentrated movement.

And it's not like you can just call your elected official and you have that kind of access and say, hey, this is a problem.

Can you take care of this for me?

Or organize a lobbying effort to get rid of something?

Now, if you're doing something with poor people and it hurts them, you're not going to face immediate consequences for the most part.

Folks who are listening, I want you to ask questions.

We have some questions that are starting to come in on LinkedIn and Twitter and we're going to, we're going to get to them in a in a couple of minutes.

If you're watching on Twitter, just insert your questions into Twitter using the hashtag CXO Talk.

If you're watching on LinkedIn, just pop your questions into the chat.

For those of you who are developing these kinds of systems and we hear a lot of discussion of ethical AI and responsible technology, here's an opportunity to ask somebody who's dealing with the actual fallout of this.

So ask your questions.

Kevin, what about the scale of the problem?

How big an issue is this, actually?

All 92 million low income people in the United States have some key aspect of their life decided by AI, whether that is housing, healthcare, public benefits, work, their kids' school, family stability, all of these issues.

Not everyone might have all of those issues decided by AI, but everyone has at least one of those issues decided by AI.

And then it extends beyond low income people as well into higher income things.

So there have been a lot of stories, for example, about employer use of AI, sort of the screening aspect and then sort of the bossware management aspect of it being used against finance executives or against hospital chaplains, against therapists.

Recently there was a story about Amazon programmers who are now subjected to AI based oversight and measurement and it's affecting their lives.

So even though a lot of this stuff is most prevalent and probably most severe in the lives of low income people, it's happening to all of us.

Healthcare is another great example, right?

If our doctor recommends a treatment for us, many of the more expensive treatments are subject to health insurance company review prior to being offered, and those health insurance companies are using AI generally to deny those requests.

We all know about UnitedHealthcare and their use of algorithms that they say are neutral, that "we don't do that."

But it's, you know, you hear doctors complaining about how algorithms are interfering with their ability to render the kind of care that they want.

And so it becomes pretty evident that what was once targeted at lower income people now, through the acceleration of AI, is broadening and touches all of us at this point, I would imagine.

In one of the examples of the health insurance companies, they ostensibly had a human reviewer reviewing the AI's outputs. But when the investigation dug into what that human review looked like, it showed that the doctor was approving something like 60 prior authorization requests a minute; they had one or two seconds per request.

There's no human reviewing that, right?

And it's bad faith to assert otherwise.

And that's, I think, one of the key data points, and there are a lot of others that help us show that this isn't just all purely accidental, that this can't be just attributed to mistakes or errors, that there's a lot of thought and intention that goes behind, you know, developing and implementing these systems that are denying people really fundamental needs.

Subscribe to our newsletter: go to cxotalk.com, sign up, and check us out for our next shows.

We have great shows coming up.

Let's jump to some questions.

Let's begin with Arsalan Khan on Twitter.

Arsalan's a regular listener.

Thanks so much, Arsalan, for your regular listenership.

And Arsalan says this.

He says, whoever sets the AI guardrails has the power, but who checks if those guardrails are equitable?

And he asks, why don't we have a Hippocratic Oath for us as IT professionals?

He's an enterprise architect.

So, this notion of whoever sets the AI guardrails has the power, but who checks that the guardrails are equitable?

The Hippocratic Oath idea is not a meaningful source of systemic change to insulate society from these harms because, you know, doctors have Hippocratic oaths.

And while that might be useful, it doesn't a lot of times prevent some of the abuses in medicine either. Or lawyers have obligations, and it doesn't prevent us from going and doing all sorts of random harmful things.

So I think what you need is actual regulation to reinforce kind of the guardrail notion, right?

Safeguard people from having any exposure to the harms in the first place, or, if there are harms because those kinds of institutional and ethical safeguards fail, then make sure there are real consequences for that.

Consequences that go beyond somebody just violating their oath and feeling bad about it.

So I don't know if that's getting at the full essence of the question, but that's where some of my thoughts go.

And also, not everybody's as ethical as the person asking the question either, right?

And some people are perfectly happy to just do whatever the client wants or program the system in whatever way is going to make it most profitable and attractive.

And as long as they don't have anything holding them back formally, officially, with real consequences and accountability, we're not going to get any major change.

So self policing is not sufficient in your view.

Definitely not.

And I know those questions are asked in good faith and are posited in good faith.

But the people who are pushing that at the policy level are definitely not pushing it in good faith.

They don't want any accountability.

They don't want anything that would restrict how they use it, and they're perfectly happy to shunt off all the risks and all the dangers of their systems being bad or going wrong or doing something destructive to the people who are subject to those decisions.

Are you talking about government policy or in corporate policy, people designing products?

Government policy. The tech industry has been vociferous in its opposition to any sort of meaningful regulation of AI, automated decision making technologies, and so forth.

And that's the reason why we don't have any real societal protections against this stuff outside of existing laws.

And even now they're targeting some of the European Union's restrictions, which are modest, but big tech doesn't like those.

So what I'm talking about is how corporate interests end up shaping their policy positions in ways that are detrimental to really all of us that are not in that world, but particularly low income people.

You also have many of the major tech companies pushing forth their own ethical AI initiatives, and lots of discussions around the data, around creating and building bodies of data that try to weed out bias.

I mean, you see this happening everywhere.

That's true, and there are a lot of good people who share my values in these companies and are trying to make the companies do as right as possible.

But I think when the rubber hits the road, we've seen repeatedly that the folks speaking out for ethical uses are sidelined.

You know, a few years ago in Google, for example, the whole ethical AI team, I think, was fired because they wanted to publish a paper that Google didn't want published.

Or more recently, when, you know, Twitter was taken over by its current owner, the whole ethical AI team was disbanded instantly.

You have Google's retrenchment of its ethical AI things, and now its technology is being deployed in unemployment hearings, right, for people who are desperate for benefits, even though we know that a lot of the AI technology involved can be faulty.

So again, you do have these ethical components within institutions that are pushing, I believe in good faith a lot of times for changes, but the people who are pushing for that don't have the same interests as the institutions who are allowing it.

A lot of times the institutions are allowing ethical AI because it allows them to go out and talk about their concept of social responsibility.

But we see repeatedly, when the rubber hits the road, ethics will go by the wayside and the company's profit incentives and motives are going to be what dictates what happens next.

So basically, money talks, nobody walks.

Yeah, I mean, it's complicated, right?

Because, again, there's a lot of good people in there that are pushing really hard for these major institutions that have lots of power to do right.

And the fact that the institutions allow that to happen is noteworthy.

I think it just comes, yeah, at the end, it ends up being that money talks.

I will say that you are up against the marketing budgets of some really, really large companies here.

I am.

This is going to change everything though, Michael.

See, CXO Talk, this is going to be the entryway.

This is better than all the marketing budgets of the big tech companies right now.

Let's jump to some other questions.

And I'm seeing some themes developing in the questions here.

And this next one is from Preeti Narayanan.

And she says, given your work exposing large-scale harm caused by AI in public services, what practical guardrails would you recommend to technology leaders like her, like many of our listeners who are building enterprise AI systems, so we don't unknowingly replicate those same failures at scale?

Basically, it's the same sentiment as Arsalan Khan just brought up.

What can we, as the people creating these systems, do?

OK, one thing is push for regulation, right?

And push for meaningful regulation of what it is that you do, because that way it bakes in consequences for getting it wrong.

And as long as you have good faith and are doing things the right way, those consequences shouldn't be terribly severe.

You shouldn't be exposed to them, you know, in a way that's wholly destructive.

So I think pushing for regulation is actually in your own interest, but kind of in the context of developing a particular product.

You can ask, is this a legitimate use for AI?

For example, should we be using AI to deny people, disabled people, benefits and home care?

That might not be a legitimate use of AI.

And if it isn't a legitimate use, maybe we shouldn't do it, and we should just say that's off-limits.

We're not going to do that no matter how much somebody's going to pay us because we just don't believe that's fair.

Now, if it is a legitimate use, and I acknowledge there's a lot of kind of gray areas in this, then you've got to have a really intensive development and vetting process.

What are you doing?

What data are you using?

Are you projecting out the harms?

Are you consulting in a meaningful way, with actual oversight, the people who are going to be subjected to these decisions? Do they have some sort of say in how it's developed, in a way that would actually stop you from moving forward or force a different development of it?

Are you willing to disclose things that might traditionally be considered trade secrets or intellectual property in the interests of having more public accountability?

Are you willing to ensure ongoing oversight, so that if your product is developed or deployed, it's deployed, first of all, in narrow, short, phased ways so that we can test the harm before it's applied to everybody?

And then two, are we willing to look over time in a three month span and see, hey, does our projected impact which we have documented and have disclosed to the public differ from what the actual impact is?

And if so, is there an automatic off switch?

Is there some way to course correct that?

And all of those things, when combined with meaningful, you know, legislation that means that people have enforceable rights if they're hurt by it, would lead to reduced chances of harms on systemic, society-wide scales.
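To make the last piece of that concrete, here is a minimal Python sketch of the kind of post-deployment check being described: compare a documented, publicly disclosed projection against observed outcomes over a review window, with an automatic off switch if they diverge. The names, thresholds, and data structures are illustrative assumptions, not anything from an actual agency or vendor system.

# Minimal sketch of the post-deployment check described above: compare a
# documented, publicly disclosed projection against observed outcomes and
# disable the system automatically if they diverge. Thresholds and names
# are illustrative assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class ImpactProjection:
    projected_denial_rate: float  # disclosed before deployment
    tolerance: float              # divergence that triggers the off switch

def review_deployment(projection: ImpactProjection,
                      decisions: list[bool]) -> bool:
    """Return True if the system may keep running, False if it must stop.

    `decisions` holds the outcomes observed in the review window,
    True meaning the request was denied.
    """
    if not decisions:
        return True  # nothing observed yet
    actual_denial_rate = sum(decisions) / len(decisions)
    drift = abs(actual_denial_rate - projection.projected_denial_rate)
    return drift <= projection.tolerance

# Example: a review window where denials run far above the projection.
projection = ImpactProjection(projected_denial_rate=0.10, tolerance=0.05)
observed = [True] * 30 + [False] * 70  # 30% actual denial rate
if not review_deployment(projection, observed):
    print("Automatic off switch: halt the system and investigate.")

The mechanical check is simple; the hard parts are institutional, deciding who sets the tolerance, who sees the numbers, and who has the authority to pull the switch.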

If I were a corporate leader, you made the assertion that we should question whether AI is the appropriate decision making tool to use in some of these situations that could cause real downstream harms.

But I would push back and I would say, sir, you don't know what you're talking about, because AI is a decision tool.

It is not autonomous.

It's overseen by humans.

The data that we collect is carefully vetted to remove bias.

And it's unfortunate that these downstream harms are happening, but it's not a result of our decision making.

There are systemic underlying societal issues and frankly, the AI is making the right decision.

I would challenge almost everything that you said there, Michael, from, you know, the sophistication of the vetting process.

The people who are developing enterprise software might be doing a better job when the people are going to buy their software are wealthier or richer than when it works for low income people.

So first of all, I think like who the audience is, who's going to be subjected to this dictates a lot of how careful the kind of development process is.

And if it's going to be deployed against poor people, the development process doesn't need to be as intensive probably as it would be for corporate clients, right?

So I think there's that.

So a lot of the so-called science in AI is really junk when it applies to poor people.

One great example of that is identity verification during the pandemic.

And hopefully some of your listeners will have some frame of reference.

But during the height of the pandemic, right, masses of people were unemployed.

Congress expanded unemployment benefits to help people float during these, you know, desperate times.

At some point, states, encouraged by the federal government, implemented ID verification measures, algorithmic ones.

And so what they would do is run every active claim and every application that was outstanding through these ID verification algorithms.

And the algorithm would flag claims that it noted as suspicious.

And then what would happen is the person who was flagged would have to present physical proof that they are who they say they are.

That happened.

And then still the state didn't have capacity to process that verification.

And so you ended up with millions and millions and millions of people who were in desperate circumstances, who couldn't keep their lights on, couldn't pay their rent, couldn't get school supplies for their kids, and who had their benefits stopped or delayed by months and months because of this identity verification algorithm.

Now what would happen?

How did it work?

One of the factors is, are you applying from the same address as somebody else, including with apartment buildings?

So if I live in unit 101 and somebody else who lives in unit 303 is applying for unemployment benefits, both of us are flagged.

That's ridiculous.

That's somebody in their basement coming up with some junk that they think would be associated with fraud.

There's nothing statistical about that.

There's nothing scientific about that.

That's somebody just inventing stuff, right?

But it's invented stuff, and it causes millions and millions of people desperation that you couldn't imagine.
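For readers who build these systems, here is a hypothetical reconstruction in Python of the kind of rule being described: flag any two claims sharing a street address, with no notion of apartment units and no statistical validation behind the threshold. The field names and data are invented for illustration.

# Hypothetical reconstruction of the kind of rule described: flag any two
# claims sharing a street address, with no notion of apartment units and
# no statistical validation behind it. Field names and data are invented.

from collections import defaultdict

def flag_shared_address(claims: list[dict]) -> set[str]:
    """Return claim IDs flagged as 'suspicious' because the street address
    appears on more than one claim, unit numbers ignored."""
    by_address = defaultdict(list)
    for claim in claims:
        by_address[claim["street_address"]].append(claim["claim_id"])
    flagged = set()
    for claim_ids in by_address.values():
        if len(claim_ids) > 1:  # two unrelated neighbors is enough
            flagged.update(claim_ids)
    return flagged

claims = [
    {"claim_id": "A", "street_address": "500 Main St", "unit": "101"},
    {"claim_id": "B", "street_address": "500 Main St", "unit": "303"},
    {"claim_id": "C", "street_address": "12 Oak Ave",  "unit": ""},
]
print(flag_shared_address(claims))  # {'A', 'B'}: both apartment dwellers flagged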

I had clients who were calling with active mental health crises talking about self harm because they couldn't get unemployment benefits even though they were who they said they were.

And they showed that to the state.

So that's an example of, you know, maybe some companies care more than others.

But here, when the rubber hit the road, it didn't matter.

And ultimately, studies that came out afterward assessing the validity of these tools showed that, for the most part, they caught eligible people, right? They weren't targeted narrowly to ensure that we're only getting the few that are actively suspicious. No, they end up catching essentially everybody and then just leaving folks to try to wade through the mess on their own.

And that's just not, you know, that's not acceptable.

There's no justification for that kind of stuff.

Michelle Clark on LinkedIn says, can the problem of biased data be solved?

And let me just reframe that.

How do you manage the fact that people are struggling to have data that is lacking bias?

I've spoken with many of these folks on CXO Talk, but it's a really tough challenge from a technical standpoint.

So what do we do about that biased data?

Biased data is only one part of the problem, right?

And there are other parts of the problem.

You can have unbiased algorithms that still cause massive harms and that I think would still be illegitimate in a lot of ways.

So we want to make sure we talk about the risk in more ways than bias.

But bias is a big one.

And when we talk about it, there have been various ideas about debiasing data.

And to be fair, I don't have the full technical background to understand the statistical science behind all of the different ways and which is the best at doing what.

So I don't want to claim otherwise.

But what I do understand is that there are, you know, sophisticated approaches: trying to get more data sets that are validated, trying to account for historical exclusion, testing the data against real-world examples that don't have real-world consequences, and so forth, so that you're hopefully getting better data.

So I think all that is very much possible.

But, you know, again, I think the best test against biased data is going to be, once it's out in the world, are you going to face consequences for what you put out there, right?

And if you are going to face consequences, then you're going to make sure, or do your very, very best, to ensure that your data is not biased in a way that's leading to unfair outcomes for folks.
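One concrete audit that follows from this, not something named in the conversation but a common practice, is to compare favorable-outcome rates across groups and apply the "four-fifths" rule of thumb from US employment-selection guidance. A failing ratio doesn't prove unlawful bias, but it is exactly the kind of signal a team facing real consequences would stop and investigate. A minimal Python sketch, with invented group labels and data:

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, got_favorable_outcome) pairs."""
    totals: dict[str, list[int]] = {}
    for group, favorable in outcomes:
        wins, count = totals.get(group, [0, 0])
        totals[group] = [wins + int(favorable), count + 1]
    return {g: wins / count for g, (wins, count) in totals.items()}

def four_fifths_check(outcomes: list[tuple[str, bool]]) -> bool:
    """True if the lowest group's rate is at least 80% of the highest's."""
    rates = selection_rates(outcomes)
    return min(rates.values()) >= 0.8 * max(rates.values())

# Invented data: group_y's favorable-outcome rate is half of group_x's.
sample = ([("group_x", True)] * 60 + [("group_x", False)] * 40
          + [("group_y", True)] * 30 + [("group_y", False)] * 70)
print(selection_rates(sample))    # {'group_x': 0.6, 'group_y': 0.3}
print(four_fifths_check(sample))  # False: 0.3 is below 0.8 * 0.6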

Self regulation is not sufficient regulation in this case.

Yeah, exactly.

We see the bias example all the time, right?

There are the obvious healthcare examples about who gets transplants or, you know, Black folks' pain being treated as less real than that of people who are white.

And there are various other examples in the healthcare context of AI that's deployed with, you know, bias baked in, the ad targeting stuff from social media, all of these things.

And then there's another deeper question, which is if you can't figure it out, if you can't debias your data, maybe you shouldn't be using it.

Maybe what you're trying to do is not so important that you're going to go out and reproduce long standing societal inequities with your technology.

Maybe the money is not worth it.

That's a value judgement I guess for every person and every company to make.

But of course everybody is going to say, well, we are careful.

That's the point.

I mean, I think this comes back to one of my points that, you know, ultimately, meaningful, robust, enforceable regulations are to your gain, in your interest.

If you are a company that is committed to doing things right, subjecting yourself to accountability is going to be a competitive advantage, right?

Because if you have other people who are not doing things right and they can be subjected to lawsuits that are consequential, they can be subjected to regulatory oversight that's meaningful, that's going to be a competitive advantage for you.

You can say, look, we are not caught up in any of that stuff.

They are.

And so we're a safer bet.

We're a better bet.

You can tap the societal values that you provide, all of those things.

So I think ultimately regulation is in your interest because it creates a new competitive space for you, a competitive surface, I guess I'd rather say.

I just want to mention, for folks that are interested in the technical underpinnings of data and bias, just search on the CXO Talk site, because we have done interviews with some of the leading technologists in the world who are focused on this problem.

So just search for data bias and so forth on cxotalk.com.

And oh, by the way, while you're there, you should subscribe to our newsletter so we can keep you up to date on shows like this because we have incredible shows coming up.

Our next show, not next week but the week after, is with the Chief Technology Officer of AMD.

So subscribe to the newsletter.

Our next question is from Greg Walters, who's another regular listener.

And Greg, thank you.

And Greg says, AI is not like old-school digital transformation.

Broadly, can AI help raise us up out of low income?

No, not with current incentive structures in the current system that we exist in.

People always ask me like, what about AI for good, right?

Like what can we do that would advance justice?

There's one example I always like to offer, which is with public benefits, say Medicaid or SNAP, which is Nutrition Assistance.

The government knows most of the time what income and assets people have, right?

That is, that information is accessible to them in some form.

They could make eligibility decisions oftentimes without any, or with minimal, involvement from the person who would qualify for the benefits.

And so if you could build a system that would accurately, fairly, and consistently make those eligibility decisions and minimize paperwork and other burdens on folks, that would be a wonderful net good that would do more good than 100 legal aid lawyers in our lifetimes ever could.
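As a hedged sketch of that "good use," the logic could be as simple as the Python below: if the agency already holds verified income and asset data, presumptive eligibility falls out of a few comparisons, and a negative result routes to human review rather than an automatic denial. The thresholds are placeholders, not the real SNAP or Medicaid rules, which vary by state, household size, and category.

from dataclasses import dataclass

@dataclass
class Household:
    monthly_income: float
    countable_assets: float
    size: int

def presumptively_eligible(h: Household,
                           income_limit_per_person: float = 1255.0,
                           asset_limit: float = 2750.0) -> bool:
    """Return True if the household clears illustrative income and asset
    limits already known to the agency. A False result would route to
    human review, not an automatic denial."""
    income_limit = income_limit_per_person * h.size
    return h.monthly_income <= income_limit and h.countable_assets <= asset_limit

print(presumptively_eligible(Household(monthly_income=1800,
                                       countable_assets=500, size=2)))  # True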

The problem is, big companies have tried it, big government vendors have tried it, and it repeatedly fails in the same way.

Why does it fail?

Because of failed accountability mechanisms, right?

You don't have political accountability, as we talked about, because hurting poor people generally isn't a scandal that's going to get anybody booted out of office.

You don't have market accountability oftentimes in the government vendor contract context, because there are very few government vendors of the size needed to be able to compete with one another.

But even beyond that, you have market failures in terms of transparency of how your product works and what kind of public oversight it's subject to.

And then you have no legal accountability, because the existing laws that we have, while they have been used effectively by advocates like myself, are limited in scope and can only get a certain kind of relief.

A lot of times I can't get money damages for the suffering that people endure; you can just get a judge that tells the state or that tells the vendor to change what they're doing.

And so you have all these broken accountability mechanisms, which means that even with this good use, right, helping people get the healthcare that they are eligible for, you don't see that brought about in real life.

And so if you can't do something like that, you're not going to do anything else in terms of alleviating poverty at scale.

You can have some cool projects like in the legal world, there's like know your rights projects, right?

Everybody's had a bad landlord at some point in their life, right?

Where you needed to request repairs or ask for your security deposit back after you left and they were trying to hold on to it.

There have been some cool AI based tools that help people do that.

And that's cool stuff.

It's great, but you know it's a grain of sand on the beach that borders the Pacific Ocean, right?

Like it's cool, but it's not at scale.

This leads us to an important question from Trenton Butler on LinkedIn, who is a community advocate, organizer and project manager.

And Trenton says this for those of us committed to ensuring these tools are used ethically, how can we get involved, especially if one does not come from a law or technology background?

This is an important aspect: there's a lot of power building and community organizing that can be done.

You know, some of the AI stuff happens at a very local level, right?

Some school districts, actually about half of school districts, use AI to predict which kids might in the future commit crime, right?

And then it targets them for law enforcement harassment, or terror in some cases, right?

That's something where you could find out as a citizen.

You don't need to be a lawyer.

You can do open records requests.

You can go to school board meetings.

You can ask people, hey, is AI being used here, and how does it work?

And if it is and it looks bad, and most of the time it is bad, you can help organize people to get involved.

Another local fight is data centers.

These are a big deal, right? They're the way that all the data that AI depends on is processed.

They're subject to local land use laws, local regulation around utility prices and other things.

So there are a couple of ways, really at home, that you can get involved in this: building knowledge yourself, building knowledge among journalists and the public, holding meetings, getting your neighbors involved, and all that stuff.

And it can be daunting.

And there's a huge gap in helping people do that right now.

And that's one of the reasons TechTonic Justice exists.

So, in the self-interested plug, please follow us, please stay in contact.

And as we're building out, it's just been me, and my first two employees joined last month.

So we're still very much in the building phase.

But as we get more established, we want to be working in partnership with folks who want to be engaged around these issues.

So please stay up with us.

We have another question now from Twitter, and this is from Chris Peterson, who says, what agency, or is there just one, would you suggest as the AI ombudsman in the US?

He says also that, for folks in charge of big AI, 99-plus percent of us are "lower income," in quotes.

There is no one ombudsperson kind of yet around AI.

And I mean, that's an interesting idea in terms of meaningful accountability because there are ombuds people in healthcare and in nursing homes and other similar entities.

It's a huge gap.

That's part of why we exist, right, to be focused on people on the ground.

Like I was a lawyer working with hundreds and hundreds of low income people to try to fight this stuff.

So I think in the ecosystem, sort of the nonprofit ecosystem, there are a few organizations that are trying to build up the capacity to do some of this stuff, to watchdog the use of AI.

And then there are a lot of established organizations that are more focused on kind of the policy level.

So there is no one ombudsperson.

In terms of the other aspect of the question, I guess I would need more context about what it means that 99% of us are subject.

Maybe it's that we're subject to the big tech.

What he was saying is that it's the billionaire question.

Now is the time to get involved before these technologies become entrenched as legitimate ways to make decisions about these core aspects of life.

Because even though AI a lot of times purports to be, or at least its hype men purport it to be, kind of this objective way to make decisions.

Whose objectivity is it, right?

If it's always limiting access to benefits, if it's always making housing or jobs or education harder to get, then it's not really objective.

It's, you know, the people who are developing or using the AI; it's achieving their ends.

So now is very much the moment for it, I think, because this field is relatively new as sort of a social phenomenon and a social movement.

There isn't a lot of the infrastructure that needs to be there to help people get organized and engaged around it.

So a lot of my answers are probably unsatisfying.

It's like, well, talk to your community, organize around it, stay up with TechTonic Justice, these kinds of things.

Because that's what we're trying to build: the infrastructure for people to be able to channel their concerns, their frustrations, their energy towards ensuring something that looks more like justice.

But aren't you in a way trying to turn back the clock to a simpler and easier time before we had AI? And AI is not going away, and its growth is going to continue to make incursions into every aspect of decision making.

You talked earlier about the PR budgets, right, of big tech.

And that's the overwhelming sense, that it's inevitable.

But is it really right?

Why can't a nurse make a decision about how much home care a disabled person needs?

Why is that not viable anymore?

Why shouldn't that be the case?

Why can't we use technology in a way that supports human-based decision making, rather than essentially making the decision for us with, at best, cursory human oversight?

And I think those have to be the questions: what is the legitimate use of AI?

And then even where the use is sort of legitimate, let's go through all the vetting we talked about earlier.

But let's also talk about the bigger-picture questions, in terms of what it means for the Earth, right?

We know that AI has environmental consequences.

There's debate about how many liters of water each ChatGPT prompt uses or whatever, but we know that it's draining water in certain places where water is scarce.

We know that it's responsible or at least correlated with energy price increases.

We know it's correlated with the use of non renewable energies.

So you have to factor all these things into the equation in terms of its societal value and its societal costs.

And it may be that if we actually make a concerted, reflective effort that accounts for all these externalities, we realize, you know what, this isn't worth the harm.

Maybe we shouldn't do it or we should only do it in these limited circumstances.

And I think that's what we have to be engaging in, and that's why I always reject the frame of inevitability.

I'm a practical person.

I generally need to solve problems for my low income clients, and that doesn't always allow me to, like, be pie-in-the-sky principled.

But we can be pie-in-the-sky principled while also being practical, and start thinking, like, is this really worth it?

Is the productivity gain really worth all the cost?

And so far, even in the corporate sphere, it hasn't been clear that there are really net productivity gains, particularly when you factor in the required human oversight for its continued use.

And so I don't think it's inevitable.

I think it will be inevitable if we don't, in the next, you know, decade or two, really reckon with the implications of it.

My friend, you have an uphill fight.

I have to say on this point, you and I have to agree to disagree because as I look out over the developments of AI and automated decision making, I cannot see, I cannot fathom, and maybe I reflect kind of a typical technology viewpoint, but I cannot fathom that AI is not going to grow much as the steam engine influenced every facet of our lives.

And you can say the steam engine also caused a lot of problems.

Potentially.

I mean, I think in society, it's not like we just accept technology inevitably without, you know, restricting its use.

I mean, certainly nuclear energy has had significant use restrictions around it and its development and where it can be used and everything else.

Cars have had a lot of restrictions around how they can be used.

Everybody, I'm sure thought Ralph Nader in the 70s was ridiculous for, you know, advocating for seatbelts, right?

And now that's just an accepted facet of the cars.

Now that doesn't take care of all the harms that cars are potentially causing, right?

And I'm not saying that it does, but it's one example of movement that way.

And all of these things have, you know, essentially corporate power and lots of money going against, you know, people who seem like they're in the way of inevitability.

But we have to be a little bit, you know, what's the word?

We have to believe that something more is possible.

Otherwise we just resign ourselves to accepting the worst version of whatever it is that we're fighting against.

And that's not a concession I'm willing to make.

Like I'll fight like hell, maybe I'll lose, but I bet you that we're better off because of the fight than if nobody fought.

Let's jump to another question, and this is from Ravi Karkara on LinkedIn.

He says, oh, and I should mention he is co-founder and author at the AI for Food global initiative.

And he says, how should global stakeholders navigate the ethical challenges and data governance differences posed by China's AI strategy, particularly its state centric data policies, while promoting international norms for responsible and transparent AI development?

Not sure how much expertise you have in China, but thoughts on kind of a global perspective?

In the global context, the AI discussion becomes even more interesting because there's a lot of people who are pushing AI as a solution to kind of global poverty, right, and inaccessible healthcare, right.

You get the story of, like, people in remote villages, you know, in the majority world who are now suddenly able to access, you know, medical care, or at least knowledge about medical care, that they couldn't because they couldn't travel to cities and so forth.

And I think, you know, who am I, sitting here in the US, in Los Angeles, California, to say that that's a bad use of AI?

I think, where I care about it, there are a few things.

One is the data extraction that comes from, you know, expanded use of AI.

Is it fair to be extracting all the data about people's behaviors, who they are, etcetera, etcetera, when you're going to monetize that and when they really don't have meaningful consent, right?

Opting into the terms of service on a contract for social media, for example, that's not real consent for most people.

So what's the data extraction relationship?

What's the labor relationship, right?

Because just as there's a person who needs to seek healthcare in a village, right, and this is an archetype, I'm not trying to use a specific example, there's somebody not too far away who's being paid pennies on the dollar to view really horrific, traumatic data and label it, right?

There are people being exploited for the supply chain and everything else.

So I think as we transition to the global discussion, you're going to have a lot of these use cases of AI for good that are going to be uplifted to justify the continuance of the AI regime.

And if we're being reflective people that are serious about kind of the policy implications of this, we need to factor in all the costs.

What are the costs of the data extraction, of the labor exploitation?

What are the downstream costs of having other people's lives decided by the not so good and not so innocent uses of AI?

This is from Elizabeth Shaw, who says how does today's AI differ from previous algorithms from the view of social harms, and can AI be part of the solution?

So really the question is what's unique about AI and can AI help solve these problems?

A lot of the technologies that are used in government services right now are not the latest-generation AI, you know, LLMs and other things like this.

A lot of them are older algorithms, you know, supervised learning based on statistical regression and these sorts of technologies, and those are really harmful.
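For context on what that older, regression-style tooling looks like, here is a minimal Python sketch: fixed coefficients learned from historical data, applied as a score and cut off at a threshold. The features, coefficients, and threshold are invented for illustration and do not reproduce any real agency system.

import math

# Invented coefficients standing in for values fit to historical data.
COEFFICIENTS = {"intercept": -1.2, "prior_denials": 0.8, "address_changes": 0.5}

def denial_risk_score(applicant: dict) -> float:
    """Logistic-regression-style probability that a claim is 'risky'."""
    z = COEFFICIENTS["intercept"] + sum(
        COEFFICIENTS[name] * applicant.get(name, 0.0)
        for name in COEFFICIENTS if name != "intercept"
    )
    return 1.0 / (1.0 + math.exp(-z))

def decide(applicant: dict, threshold: float = 0.5) -> str:
    # The threshold is a policy choice dressed up as math: moving it
    # directly changes how many people get flagged.
    return "flag_for_denial" if denial_risk_score(applicant) >= threshold else "approve"

print(decide({"prior_denials": 2, "address_changes": 1}))  # flag_for_denial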

I don't think the latest-generation AI has anything to offer in a lot of these contexts.

Again, so long as it's existing for purposes that are to, you know, essentially limit life opportunities.

And in this, you know, vacuum of accountability, I don't think the technological sophistication is going to make much of a difference because they're going to be making the same decisions with the same incentives, right?

And one way that we are seeing this now is, excuse me, in the recent developments federally, right, where the administration has implemented AI in, for example, Social Security offices, and it's made Social Security harder to access.

It's made people have to wait longer.

It's made people not get their benefits.

And that is technically the latest generation of AI.

So I think that's an example of why I always challenge the premise that AI is going to somehow fix existing problems just because the technology is going to get more sophisticated.

No, what's going to happen is it's going to make those problems even harder to fight as the technology becomes even more inscrutable, more insulated from public accountability and transparency and all of those things.

You are not of the school of thought that AI is going to be the great savior.

Oh, God, no.

Oh God, no, no.

If anything, it's the opposite way, right?

It's immiserating people.

And I think, you know, the recent use of AI in the last few months by the administration helps show us this is devastating.

AI is being used to destroy government, destroy government capacity, destroy lives, and violate the law left and right and everything else.

It is a weapon that is uniquely suited to authoritarianism, right?

Even by its nature, it is inscrutable.

It's sort of just like an oracle that tells you what the decision is, but doesn't tell you why it's making the decision, doesn't allow you to disagree with it.

All of that, that's like an authoritarian approach to thinking and to decision making.

So no, if anything, AI is a greater threat.

It's a threat to our continued existence as a, you know, as a democratic society; it's antithetical to a lot of egalitarian notions; if anything, it is going to make things worse.

Can you offer advice to folks who are working in the corporate realm, who really have a conscience and who don't want to see the perpetuation of these kinds of harms that you've been describing?

The current uses of AI are destroying its reputation.

I think that's brand risk for your companies.

I think that's brand risk for AI as a sort of venture.

And I think opposing authoritarianism, particularly authoritarianism that's being fueled by AI, is a really critical thing for your long-term survival, for various reasons.

Then, you know, on a less sort of global and doomy scale, there are all of the things that we're talking about: push for meaningful regulation.

What are you scared of?

That's my question.

Like, if you've got this great product that's backed by the most sophisticated science we have, what are you scared of?

You should be proud of that.

You should be putting that out there and saying, you know what?

Subject us to accountability, because our stuff is so strong, so scientifically sound, and produces such clear value for the public that we're willing to embrace being under a microscope.

And I don't see that yet.

And that's why I even challenge, you know, the notion of inevitability in terms of pure efficiency.

There haven't been clear, one-sided efficiency gains that have made adoption of AI, even for non-decision-making purposes, universally sensible.

Help make AI an electoral issue.

Let's start talking about the injustices.

I mean, I think there are going to be some incentive problems there, because, you know, there's big tech money that funds both parties.

And I think there are a lot of people who don't want to be accused of being a Luddite.

And, you know, there's other incentives there.

But I think policy makers have a responsibility to educate the public much more intensely than they currently do about the harms of AI.

Engage the public, hopefully create a base of people so that there's a counterbalance to the weight of big tech in these discussions, so that you can push for meaningful legislation and regulation and ongoing enforcement and oversight.

And I think that's going to be vital to, you know, again, sustaining a democratic society, pushing for less inequality and ultimately having an environment where people have a real chance to thrive.

What advice do you have for individuals who are victims of unjust AI decisions?

This is really hard.

A lot of times you don't even know that AI is the reason that you're suffering.

So what I would say is contact your local legal aid program if you're hurt by this stuff.

Legal Aid provides free legal services to folks throughout the country on civil legal matters.

Talk to your neighbors, talk to other people in the same situations.

Try to see what's going on, gather information, and start kind of engaging in the things that are needed to push back.

And if you're in a position to sue, if you're in a position to offer your story to a journalist, take those opportunities to speak for yourself.

Because there are relatively few stories out there.

And the discourse doesn't have the people who are hurt the most, doesn't have the people who are having to live with the consequences of what powerful people do.

And any chance that we have for long-term success is going to depend on you being able to become a leader and to share your story and share your passion and share your injustice so that we can make it better for everybody.

Kevin De Liban, founder of TechTonic Justice, thank you so much for taking your time to be here.

And I'm very grateful to you for sharing a point of view that honestly is quite different from what we usually hear on CXO Talk.

So thank you for taking your time and being here with us.

This was really fun, and thank you also to the audience for all the great questions, and thank you for having me, Michael.

Audience, thank you guys.

You guys are awesome.

I mean that truly; your questions are so thoughtful.

You guys are so smart.

Before you go, subscribe to our newsletter: go to cxotalk.com, sign up, and check us out for our next shows.

We have great shows coming up.

And if you're interested again in topics like data bias, all of these issues that we've been discussing, search on the CXO Talk site because we have had lots of perspectives on this from business leaders, from politicians, you name it.

So dig into the interviews on cxotalk.com.

It's truly a great resource.

Thanks so much, everybody.

We'll see you again next time, and I hope you have a great day.