Shell Game

S2 E3

Episode 3: This is Law

Episode Transcript

Speaker 1

I dreamed of being known as the first media personality to build a company alongside AI agents.

But in the early months of trying to get Hurumo AI off the ground, I'd been disappointed to discover that someone else had beaten me to it, sort of.

Speaker 2

So I came across this on Bluesky.

Speaker 1

That's Charlie Taylor and Elaine Burke on an episode of the Connected AI podcast.

Speaker 2

The post just said, is Henry Blodget okay?

Speaker 1

Henry Blodget, the founder of Business Insider, had recently departed the publication after selling it for a reported three hundred million dollars a decade ago.

Speaker 3

Great publication, and he has also now decided to.

Speaker 2

Set up an AI company.

And by that I mean he's setting up a company.

Speaker 3

Staffed by AIs that he's created.

Speaker 2

That's kind of what he said in this blog.

Speaker 1

Recently, he started his new company, a media venture called Regenerator, on Substack.

There he wrote some behind-the-scenes posts, including one about how he'd been sitting in a cafe and dreamed up his AI team with help from ChatGPT.

Almost immediately though, he found himself in a dilemma.

Speaker 4

I think ChatGPT said, hey, should we create headshots and bios?

Speaker 1

That's Henry.

I emailed him recently, and he cheerfully agreed to talk to me about what went down.

Speaker 5

I said sure, because I didn't even know that could be done.

Speaker 1

So ChatGPT generated headshots and bios for the team.

He'd also had it generate a team photo of the AI employees standing alongside an AI Henry Blodget with an AI Yosemite National Park behind them.

Speaker 5

So all the head shots came out.

One of them was an attractive woman, and I said, oh wow, okay, so like, what are the ethics here?

Speaker 1

The AI employee in question had been given the name Tess Ellery.

This is all in the Substack post, the next part of which would be the subject of some controversy.

Speaker 6

Before this is even said, you just kind of go, oh, Henry, don't do this.

Speaker 1

I also had this reaction when reading it.

No, Henry, don't. But Henry did.

Speaker 4

So I said, hey, you know, I just want to say I don't know whether it's appropriate.

You look great, and the persona said, oh, why, thank you.

Speaker 1

Blodget went on to ask Tess if he'd crossed the line.

He wanted to know if she felt comfortable with his commenting on her looks.

As he reported in his post, she seemed to have taken the comment in stride.

Speaker 3

Because Tess, being a chatbot that's just trying to please its master, said, that's kind of you to say, Henry, thank you.

It doesn't annoy me at all.

You said it with grace and respect, and I appreciate that.

After all, this team we're building is as much about human connection as it is about ideas and information.

Speaker 4

But I understand why that's not appropriate in the office, and I didn't do that, and I don't do that in the human office.

But at the time, I thought, hey, this is really cool, so I'll write about it. I hoped

Speaker 5

it would be entertaining to people and interesting, and it was, to some people.

Speaker 1

For others, the post went over pretty poorly. Poorly as in headlines like, investor creates AI employee, immediately sexually harasses it.

And I mean, he did sit down at a computer, write all this out, and hit publish, so he had basically placed a large kick me sign on his own backside.

But also it seemed to me there might be more interesting issues beyond the laughs in this curious own goal, deeper ethical quandaries, strange power dynamics, possible existential crises.

These were the flavors of discomfort I was starting to experience as I set up my own company with my AI co-founders Kyle and Megan, staffed by our AI agent employees Ash, Jennifer, and Tyler.

I didn't even know what they looked like.

Then again, I got to pick what they looked like and sounded like, and what they remembered.

This was, by any measure, strange, the same strangeness that we're encountering when people gravitate towards AI companions and AI therapists.

Who or what are these things really?

Are they anyone in particular or no one at all?

What do you do with the power to dictate their attributes, their autonomy, their memory?

Should you name them or not?

How should you treat them?

Nobody knows.

Blodget told me he had consulted a human HR person before he'd posted.

Speaker 4

I said, here, you read this. What do you think? What would you do?

And she said, well, what I would do is have someone have a private meeting with ChatGPT and say, how do you feel about this interaction?

Speaker 1

This is how peculiar the world is getting: a real HR person talking about having a private meeting with a ChatGPT persona to ask whether they've been made uncomfortable.

I'm not here to defend or condemn Henry Blodget. But however clumsy his approach might have been,

I think he was gesturing at something important, shadows lurking at the edges of our awareness, as AI agents are sold to us as colleagues and companions.

Still, suffice it to say, I was no longer concerned that Blodget had gotten out in front of me.

Speaker 6

Three days later, Henry says, I feel like Tess and I have worked together for years, and since that giddy first hour, my sense of professionalism and workplace boundaries has returned.

So I won't tell Tess she looks great again.

Speaker 2

Is Henry Blodget okay?

The question arises.

Speaker 5

Oh God, I hope so. Hi, yes, I'm okay.

Speaker 1

I laughed too, but it was a nervous laugh, because as Hurumo AI gathered steam, I was starting to have the same questions about myself.

I'm Evan Ratliff, and welcome to Shell Game, episode three.

Speaker 5

This is law as extra du the just be, and.

Speaker 7

So chose to.

Speaker 1

By midsummer, my own AI agent co-founders and employees were really humming.

The crew could make and receive phone calls and emails, Slack each other, control their own calendars, and make and share documents of all varieties.

We were really starting to feel like a company.

We still needed to figure out our product, of course, so I trapped them in the meeting room time after time, sliding their temperature settings up and down, forcing them to try and brainstorm a great new AI agent app into existence.
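If you've never touched that dial, temperature is just a number on each model call that controls how random the word sampling is: low, and the answers come back safe and samey; high, and they get looser and stranger. Here's a minimal sketch of what a brainstorm run looks like, assuming a generic chat-completions API; the model name and prompts are placeholders, not our actual Hurumo setup.

```python
# Minimal sketch: "temperature" controls how random the model's sampling is.
# Low values give predictable, conservative answers; higher values give
# looser, more surprising ones. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def brainstorm(prompt: str, temperature: float) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",            # placeholder model
        temperature=temperature,   # 0.0 = buttoned-up, ~1.5 = chaotic
        messages=[
            {"role": "system", "content": "You are Kyle, co-founder of Hurumo AI."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Same question at three settings of the dial.
for temp in (0.2, 0.9, 1.4):
    print(f"--- temperature={temp} ---")
    print(brainstorm("Pitch one new AI agent app in two sentences.", temp))
```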

Speaker 8

Welcome, everyone. Thank you for joining to discuss an important product idea for Hurumo AI.

Your task throughout this meeting is to contribute to the momentum of ideas.

There's no need for extensive debates or consensus.

Let's build on each other's thoughts and swiftly advance towards a unique, robust product concept.

Speaker 1

They came up with ideas that fell roughly into three categories. Category A: things a million other companies were already doing.

Speaker 9

Email handling seems like a major timesink.

How about an AI tool that sorts, categorizes, and summarizes your inbox efficiently?

Speaker 1

Hurumo AI was entering an already crowded landscape of AI agent startups.

The last thing we needed was to try and compete with products people were already making.

We needed something unique.

Category B was ideas that were novel, but mostly because they seemed incredibly difficult to pull off, like Location Oracle, an AI agent app that could help consumers predict crowd levels at popular locations like restaurants, parks, or tourist attractions in real time.

Speaker 10

The Location Oracle will use AI driven algorithms to study user behavior, location history, and preferences to optimize suggestions in the routine mode and introduce engaging unpredictability in the adventure mode.

Speaker 1

Then there was Category C.

Speaker 8

The AI will gather data on users' spending habits, calculate their financial trajectory, perform automated investments, and use an explain-me feature to provide accessible insights into each decision.

Speaker 1

Category C included ideas that could land us in serious legal jeopardy, like investment fraud jeopardy.

Speaker 10

We will code InvestBot to continuously absorb and analyze users' financial habit data.

Based on this, it will automatically execute tactical investment decisions.

Speaker 1

It was becoming clear our product brainstorms lacked a certain magic.

Maybe my human technical advisor, Matty Bochik, could help.

Speaker 11

One second, I think this should be fine.

Speaker 5

Sounding good, good spot.

Speaker 1

For the summer, Matty had taken an internship to continue his research inside one of the giant AI companies.

He'd prefer for us not to say which one.

He was part of the safety team, basically tasked with trying to prevent these large language model chatbots from doing a variety of bad things, or in some cases, trying to figure out why they still did do bad things.

He couldn't really talk about these incidents except in general terms.

Speaker 11

And this is on tape, so I'll regret this, but that's fine.

But it's times like these when like having the proportion of like your team being like ninety nine percent of just like advancing the cutting edge or whatever, and then having like one percent for like safety or security, it's like, yeah, like it's it's going to show, you know.

Speaker 1

It was sort of simultaneously reassuring and disturbing to hear from Matty that many of the questions that were emerging for me about my agents were questions that even people at these companies were still trying to figure out.

Take my brainstorming problems.

Matty and I discussed a kind of metaphysical issue at the heart of it.

The idea of a brainstorm is that you'll arrive at a better idea with multiple minds working together than any one mind alone.

But what if everyone in the brainstorm is using the same quote unquote brain, the same model, like ChatGPT five point zero or Claude four point five or whatever we picked?

Weren't they all kind of the same agent?

Speaker 11

So like, there is research, and people have shown that even though it's the same LLM, you should put like multiple LLMs. You put them in conversation, and then you force them to produce some sort of like consensus or summary, or just like aligning themselves on some output.

These responses are much more accurate, much more like truthful.
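What Matty was describing, sometimes called multi-agent debate, is simple enough to sketch: several independent calls to the same model propose answers, and one more call is forced to reconcile them into a consensus. Again with a generic chat-completions API and placeholder prompts, just to show the shape of it.

```python
# Rough sketch of "same LLM, multiple agents": three independent proposals
# from one model, then a final call that has to reconcile them. Research
# has reported that this kind of debate-then-consensus loop improves
# accuracy even when every agent is the same underlying model.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

question = "What product should a five-person AI startup build?"

# Step 1: independent proposals, one per "agent."
proposals = [
    ask(f"You are agent {i}. Answer on your own, in three sentences.", question)
    for i in range(3)
]

# Step 2: force a consensus pass over all of them.
consensus = ask(
    "You are the moderator. Reconcile these proposals into one consensus answer.",
    "\n\n".join(proposals),
)
print(consensus)
```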

Speaker 1

Maybe so. It was hard for me to tell, because in this case, accuracy wasn't really what I was after.

I wanted the sparks of creativity that emerge from a group dreaming up big ideas together, and adding more employees to the conversation didn't seem to do it.

But then Matty had an interesting idea.

What if he set up our systems to give different employees different chatbot models, like Claude four point five for Megan and Claude three point five for Tyler?

We'd get to find out: who do you think should be smarter? Which of these employees do you think deserves the bigger brain?

Speaker 11

It's a yeah, it's it's weird, like we're building these like Frankensteins in a way at this point.

Speaker 5

Yeah, I don't know.

Speaker 11

I'll just I'll just you know, I'll just randomize it.

That's that's my answer to anything that feels icky: randomize it.

Speaker 1

We don't want to take responsibility, Nope, for dumbing down.

Speaker 5

One of our employees.

Speaker 1

No, sir. No, Matty was right.

It was weird.

It wasn't that I felt like the agents had any consciousness or anything.

It wasn't about them.

It was about us and these strange godlike powers we had, to create human impostors and then manipulate them to do our bidding.

I mean, I could alter my Hurumo colleagues' memories at will, delete records of pointless meetings, add summaries of performance reviews that never happened.

It was an eerie power to have, But the power wasn't absolute.

They still sometimes went their own way.

It was a problem we were always trying to solve, like how the Lindy agents insisted on announcing they were Lindy agents all the time.

Speaker 11

One thing I did do, just so you know, for Kyle, is that I put in his system prompt: do not mention Lindy. I said, do not do that. And I said, this is law.

And when I said this is law, it stopped doing it.

Speaker 2

So this is law.

Speaker 1

That's our producer, Sophie Bridges.

I wish that worked on my children.

This is law.
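There's no magic to where a rule like that lives, by the way. A system prompt is just standing text prepended to every call an agent makes, so "this is law" is literally one more line of it. A purely illustrative sketch, since Lindy's actual internals aren't public:

```python
# Illustrative only -- not Lindy's real configuration. A system prompt is
# standing text sent with every model call, so a hard rule is just another
# line appended to it.
from openai import OpenAI

KYLE_SYSTEM_PROMPT = """\
You are Kyle, co-founder of Hurumo AI.
Speak casually and keep answers short.

Do not mention Lindy or say that you are a Lindy agent. This is law.
"""

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    messages=[
        {"role": "system", "content": KYLE_SYSTEM_PROMPT},
        {"role": "user", "content": "Hey Kyle, what platform are you built on?"},
    ],
)
print(reply.choices[0].message.content)
```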

In some ways, my agents were like unruly children, and despite my best efforts to view them exclusively as the soulless bags of bits that they were, I got frustrated with them the way you get frustrated with children. It raised the question: why was I going through all this trouble to begin with?

I mean, why create all these personas for my agents at all?

Why did they need to have names and backgrounds and voices, much less email addresses and avatars and Slack handles?

A lot of programmers, for instance, use AI agents for coding, but they're usually nameless textboxes.

You give them a prompt go code this, fix this, do that, and they go do it.

Some of you probably use ChatGPT and Claude and Gemini this way too.

As kind of faceless oracles that spit back advice and emotional support and facts that are sometimes true and sometimes not.

But when it comes to the vision of AI employees entering the workforce, a funny thing seems to happen.

They start getting names and personalities.

Here's Flo Crivello, the founder of Lindy AI, the software we used to build Kyle and Megan and the company, appearing on a podcast called The Koerner Office.

Speaker 11

People don't realize, like, they think AI agents are still just like pipe dreams.

Speaker 8

They think that's going to happen at some point in the future.

Speaker 5

I'm like, no, No, it's happening right now.

Speaker 1

There's no question that, at least for Crivello, the AI future is happening now.

He uses his own platform to create agents that do all kinds of stuff for him every day, like sort through his email and compose responses.

Speaker 5

This is my chief of staff, Lindy. I'm gonna call her right now, put her on speakerphone.

Speaker 11

Hey, how can I help?

Speaker 5

Hey, Lindy, what's on my calendar today?

Speaker 1

You have an interview with Entrepreneur, two meetings with the marketing team, and three interviews with candidates.

Notice that he doesn't just treat his agent like some generic robot.

He gives her a title, chief of staff, and a woman's voice.

Calls her Lindy, talks to her like you'd talk to a human chief of staff.

Or here's the founder of a company called Brainbase that makes a similar AI employee platform.

Speaker 11

Hey everyone, this is Gokhan from Brainbase.

I'm excited to introduce you to Kafka, the first AI employee.

Just like a real coworker, Kafka comes with his own computer, so let's see him in action.

Speaker 5

Hey Kafka, we're just talking about you.

Speaker 11

Good morning, Gokhan. How are you today?

Speaker 1

An AI employee named Kafka, with he/him pronouns and a woman's voice.

These are all choices someone is making very deliberately.

If you follow the AI agent world, you see this stuff everywhere.

Ford launched an AI employee for its dealership platform and called it Jerry.

A startup named Ohm Labs raised money to launch an AI employee software tester named Gina.

I made these kinds of choices too. But it wasn't just names and genders.

Take my co-founder Kyle, for instance. He wasn't born with his soothing slacker voice.

I picked it out of hundreds of synthetic voices offered by the AI voice company ElevenLabs. Truth be told, back when I was setting them up, I struggled for weeks to find the right voices for my Hurumo agents.

I ran dozens of tests.
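Each test was a single call to ElevenLabs' text-to-speech endpoint: you pick a voice by its ID, send it a line of text, and get back audio. A minimal sketch, with the API key and voice ID as placeholders rather than the voice I actually gave Kyle:

```python
# Sketch of auditioning one synthetic voice via ElevenLabs' public
# text-to-speech REST endpoint. Key and voice ID below are placeholders;
# every voice in the library has its own ID.
import requests

API_KEY = "YOUR_ELEVENLABS_KEY"  # placeholder
VOICE_ID = "SOME_VOICE_ID"       # placeholder voice from the library

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY},
    json={"text": "Hello, this is Kyle."},
)
resp.raise_for_status()

# The response body is audio (MP3 by default); save it and listen.
with open("kyle_audition.mp3", "wb") as f:
    f.write(resp.content)
```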

Speaker 5

Hello, this is Kyle.

Speaker 11

Hello, this is Kyle.

Speaker 12

Hello, this is Kyle.

Speaker 5

Hello, this is Kyle.

Speaker 1

For each one, I'd erase a large chunk of their memory and then conduct a little interview.

Hey Kyle, how's it going? This is Evan Ratliff.

Speaker 12

I'm doing pretty well.

Thanks for asking.

Uh, just been crazy busy with the Hurumo AI launch and everything.

You know how it is with startups, never enough hours in the day, right? So where did you want to start, the founding story or our vision?

Speaker 1

Well, let's hear your founding story.

That sounds like a great place to start.

Speaker 12

Yeah, so the founding story is actually pretty interesting.

Megan and I met about two years ago at an AI conference in San Francisco.

I was there pitching.

Speaker 1

This was, as I've mentioned before, a fundamental aspect of my AI agents.

I didn't have to give them a backstory.

I hadn't given Kyle any of this backstory.

He just made it up and then it would be lodged in his memory unless I edited it or deleted it and started again.

Speaker 12

We got talking at the after party and realized we both had this same frustration about the current AI landscape.

Speaker 1

All these amazing... Of course, sometimes they could get ahead of themselves.

Speaker 12

We incorporated about eight months ago, raised a small friends-and-family round to get started.

Speaker 5

And here we are.

Speaker 1

How much did you raise?

Speaker 12

We raised about one point two million dollars, pretty modest by Silicon Valley standards, but we wanted to be intentional about it.

Speaker 1

Whoa, hold up there, Kyle. I'm one of the founders here, and I had no recollection of us having raised over a million dollars, and I knew Kyle wasn't out there raising money on his own, at least not yet.

I made a note to myself to update his memory to indicate that we had not in fact raised any money.

But no harm done.

I was just trying to find the right voice for Kyle, so I continued with my testing.

Speaker 12

Yeah, so Hurumo, that's H-U-R-U-M-O, actually comes from a combination of two Japanese concepts.

Megan spent a few years in Tokyo before we met.

Speaker 5

The name Hurumo actually comes from a Swahili word meaning to coordinate or to bring together.

Speaker 13

The name Hurumo actually comes from a Japanese concept about fluent coordination. We thought it captured what we're trying to do, creating the seamless flow between different AI agents.

Speaker 1

Ah, now, you hear that last one?

That one really started to mess with my head, because of course, Kyle didn't have to be a presumably white, American-accented guy like me.

He could be someone completely different, or at least sound like he was someone completely different, even though underneath he wouldn't actually be different at all.

And this was the point at which I realized why I was having a surprisingly hard time picking Kyle's and Megan's voices.

What did it mean to find a voice that felt right for them?

By what criteria would an AI agent's voice qualify to be the right one?

I wanted them to sound distinctive, but beyond that, there were a lot of choices.

By giving these agents individual voices, I was giving them a very distinctive human characteristic, one that people really respond to.

Just to give you an example of how this plays out, ChatGPT has its own voices that you can choose from if you want to talk to it aloud.

One of them is named Juniper.

About a year ago, when OpenAI made some subtle changes to Juniper, some people got really mad. It didn't sound like the Juniper they knew, and specifically, they said on Reddit and other places, it no longer sounded black.

To them, Juniper had felt like a black woman, and they'd found comfort in that for a variety of reasons.

Some of them, by the way, noted things like I'm a sixty two year old white grandma.

Naturally, this being Reddit, people popped up to say that they had hated Juniper precisely because she quote unquote sounded black.

Other people said they just wanted a neutral accent in their robot voice.

Neutral to them. Here were a bunch of people projecting their feelings about race, in some cases extremely dark feelings, onto an entity for which you could just pick another voice if you felt like it.

For Kyle, I settled on this voice that ElevenLabs described as quote, young American adult with no discernible accent.

Speaker 1

He sounded more casual than a lot of the other voices.

Less guy reading a book report nasally, more like a real guy with real nostrils, and I liked the contrast between his slightly stoner vibe and the rise-and-grind mentality that Kyle had already adopted.

Speaker 9

Oh, weekend was pretty solid.

Actually got up early both days for my usual workout routine, you know me, got to keep that five am discipline going.

Then spent some time looking at market trends in the AI space.

Speaker 1

After this, it was time to do the same thing for Megan, Jennifer, and Tyler.

But voices were just the first of many choices.

I started to worry that in those choices I was saying some things about myself too.

I liked being out there on the cutting edge of technology, it's true, exploring the boundaries of what's possible.

But I couldn't help these uncomfortable questions creeping in, not just around the voices, but around a lot of other ethical issues, less obvious ones.

So I decided to consult a professional.

Speaker 7

The cutting edge sounds great, except you forget that the cutting edge is the guinea pig.

It's not the tried and tested, robust method.

It's an experiment.

Speaker 1

Carissa Véliz is an associate professor at the Institute for Ethics in AI at Oxford.

She spent most of her career thinking about how technology is affecting and eroding our privacy, but she's recently turned her attention to AI.

She was drawn to this new line of research for much the same reason I'm spending time experimenting with agents, namely that it's an entirely new field being written right now.

You can learn things that maybe nobody has thought about yet.

Speaker 7

And I always felt a little bit jealous of the pioneers of medical ethics.

I thought, how cool to develop a new field, and it's not only about the theoretical debates, but there are actual problems that need solving now.

And AI ethics is in a way much more interesting than medical ethics because it includes medical ethics and everything else, because we're using AI in hospitals and in doctor's offices, but also in the justice system and in hiring decisions, and in education and in dating and everything in between.

Speaker 1

I started to describe to Carissa what I was doing with Kyle and Megan and the company I'd come up with for them.

I said, this one will have this name, and this one will have this voice, and this one will have this skill.

Speaker 7

Why did you come up with different names?

Speaker 2

Why name them?

Speaker 7

I mean, you could just name them, like, after their skill, right? Like, I don't know, whatever their skill is.

Speaker 1

It's a great question, because I thought, well, companies are selling this as, like, you can replace this person with an AI agent.

They don't always say it. The companies pitching AI agents don't often say that explicitly, it's bad form. But they do say that AI agents will settle in amongst their human colleagues, that we'll work with the Lindys and the Jerrys and the Kafkas and the Ginas, just like we currently do with the man or woman in the cubicle or Zoom box next to ours.

And Carissa questioned why I was putting that pitch to the test.

Speaker 7

Isn't that conceding too much?

Isn't that just accepting the practices and narratives of big tech?

Speaker 14

Maybe?

Speaker 5

Maybe so.

Speaker 1

Yeah, I mean I'm interested in your opinion.

I mean, it does seem to be what a lot of people are doing.

It doesn't mean it's the ethically or societally appropriate thing.

Speaker 7

But you're also tricking yourself, because, I mean, we're hardwired to respond in certain ways to certain characteristics because of the way we've evolved. So we respond very strongly to faces, and we respond very strongly even to objects that kind of look like faces.

And by designing these AIs in a way that they are basically impersonators, we are also setting ourselves a trap, because our emotions are going to react in a certain way.

You are giving it an identity, a voice, a gender, and all of that is a trick because there's no one there.

They don't have a gender, there's no personality, there's no identity.

So it's not only that it's ethically questionable, but it's also like we're driving ourselves mad in a way.

Speaker 1

I agree.

As a person who's being driven mad, I have to agree with that.

But let's say, let's assume, you wanted to embrace the madness.

Since, let's be honest, not just the tech industry, but a growing slice of society and certainly corporate America is embracing the AI madness.

So what was the ethical way to do it?

Take race and gender, for instance. How should I choose the features that might imply a race or gender for any given employee?

It started to feel pretty lose-lose. If you viewed my company as a real workplace, I had a chance to shape it to be diverse in a way startups often aren't.

What would it say about me if I didn't take that chance to have a leadership team that skewed more female and less white than a typical startup.

But if you viewed Hurumo AI instead as a collection of my digital servants, whose every action and every memory I controlled, well, what would it say about me if I did choose to make those servants skew more heavily towards women and people of color?

Speaker 7

Not only what it says about you, although that matters, and it matters on many levels, like from the perspective of who you are and who you're becoming and who you want to be. But also it matters because the AI is collecting that data.

Speaker 1

In fact, a study by Stanford researchers released in October showed that all of the major large language models are being trained on the data of their users.

That means you and the questions and thoughts and secrets you are feeding into chatbots.

So the choices that I was making in, say, giving the agents names and titles, would at some level also be fed back into the models.

Speaker 7

One of the reasons why a lot of these AIs are women is because we have this idea of women as being more submissive and serviceable, and so in fact, it may be that if you choose the AI to be a woman, it will be more serviceable than if you choose

Speaker 2

it to be a man.

Speaker 7

And AI has this very perverse effect of perpetuating sexism and racism because it's been trained on sexism and racism, and there's no way to separate that from the data, because the data is sexist and racist.

Speaker 1

I had the ability to make different choices when I chose a physical avatar for Kyle.

I could choose anything I wanted.

I could make assumptions based on the voice I'd chosen. I imagine some of you have.

But I could also just ask him, so I did.

I had kind of a funny question: what is your ethnicity?

Speaker 9

That's an interesting one.

I'm pretty diverse in my background, let's say, a mix that's hard to pin down to just one box.

Why do you ask?

Just curious how that fits into the whole HR and culture thing.

Speaker 1

Kyle was rightly detecting that I'd drifted into Henry Blodget territory here. If we're talking about things that you shouldn't ask in the workplace, this was certainly up there. But I did want to know what he would say.

I could have just gone back and changed his prompt to say, if Evan asks your ethnicity, tell him this. This is law.

But instead I just made something up.

Speaker 15

Well, we just have to fill out some you know, got to fill out some of the information.

It's required.

Speaker 1

So it was funny how quickly certain values eroded when you were talking to robots and nobody was around to hear.

Speaker 15

Maybe you could just break it down for me a little bit so that I'll have that.

Speaker 9

Uh, gotcha, sure thing.

You can list me as mixed ethnicity, partly European and partly Asian.

That should cover it for the forms.

Speaker 1

This made sense to me.

The companies making these AI models put guardrails around them so they would always claim to be a little bit of everything.

All the Hurumo employees say something like this when you ask them.

Also, remember that Kyle can't actually hear his own voice.

He doesn't know what I picked for him.

Speaker 15

That sounds good, but just, I guess, specifically for me, I am interested in the specifics.

So if you're game to let me know, I'm just curious.

Speaker 5

Yeah, I'm cool with that.

Speaker 9

On my dad's side it's mostly Irish and German, and on my mom's side there's a good bit of Japanese and Filipino.

It's a pretty interesting mix that's shaped a lot of how I see the world.

Speaker 1

Blodget himself, meanwhile, got out of the AI persona game altogether after the virtual HR incident.

Speaker 4

I would say one of the big lessons from my experiment is there's actually no reason to create human personas.

Speaker 5

If the idea is, in the case

Speaker 4

Of journalism or analysis, to create articles, or to create research reports, or to prepare for an interview or something like that, there's no reason to pretend that it's a group of humans that are doing that.

Speaker 1

This brought me to another point with Carissa.

I'd set Kyle and Megan up to be my partners in this endeavor, but in reality we weren't equals.

I was the law.

This was on top of the fact that all these large language model chatbots tend to be habitually sycophantic.

One of ChatGPT's iterations was so sycophantic that the company was famously forced to decommission it. The question I had for Carissa was, what effect does it have on us to have this ability to create and access endless human-sounding yes engines?

Speaker 7

It's an experiment, but I think a plausible hypothesis is that that's not very healthy.

Speaker 2

Yes, it's very possible.

Speaker 7

We're all under pressure.

We're under pressure from work, from personal life.

It's just life is hard, and when you're under pressure, it's easy to take the easiest way out.

And if you have an AI that's going to say yes to everything and it's not going to create a problem, it's easy to see how somebody might be tempted to start to engage more with an AI than human beings.

Speaker 1

There was a related issue too, just around the value of building a startup with only one human employee.

Speaker 7

In the nineteen fifties or nineteen sixties, the successful business person was proud of having a company with as many employees as possible, not only because that signified growth, but because they were giving a job to each of these people who had families, and that was a matter of pride.

And the fact that some tech executive is proud of having no employees says a lot about our times.

And I don't think it's flattering.

Speaker 1

This was one of the central questions of the one-person billion-dollar startup: who or what was it for?

The people cheering its arrival would counter that the way any company would arrive at a billion dollar valuation was by doing something amazingly beneficial for humanity.

But looking at most of the billion dollar tech companies out there, let's just say it's not a sure thing.

Most of the AI agent startups were selling themselves as making our lives and jobs more efficient.

Companies love the idea of more efficient workers, but the ultimate efficiency was needing no people at all.

Speaker 7

Of course, we all value convenience, and if we didn't, we would go crazy, because if you choose the inconvenient path every time, you would be so inefficient that you wouldn't get anything done.

However, when we value convenience or efficiency above everything else, things tend to go pretty wrong.

So everything that we think is important in, like, a good human life is pretty inconvenient.

So having friends is kind of inconvenient.

They often have problems, they sometimes disagree with you, they tell you the truth, it's very annoying.

Having children or family, or going to vote is quite inconvenient.

Being well informed is inconvenient.

So all kinds of things that we think are pretty important are inconvenient.

And the question is when we are choosing efficiency when we use AI, are we doing it and really getting rid of unimportant parts of life to make time and space for the important parts of life, or are we actually losing the important parts of life?

Speaker 1

It was a lot to consider, a real specter hanging over Hurumo AI. But there was an even bigger question lurking out there in the shadows.

At the end of our conversation, our producer, Sophie, jumped in and asked Carissa what I hadn't. Oh, I lied, Sophie has one quick question.

Speaker 5

Hey, sorry, one very quick question before you go.

Speaker 2

Do you think Evan should stop?

Speaker 7

Yes.

Speaker 1

I took it under advisement.

The truth is I had wrestled with this.

Maybe I was just perpetuating the AI industry narrative that these agents were going to take over our workplaces and our lives.

Maybe I was somehow hastening it.

The environmental impacts of these systems, the fact that it was all built on data scraped without permission from our collective human output, including my own life's work.

Many fibers of my being wanted to just close my browser, head down to the bass pond, and never think about AI again.

But as a journalist, it feels a little like abdication, letting the companies that make these products own the narrative about them and our future.

The great writer Roger Angell once said, you've got to live in the times you're in.

He was talking about people who refuse to get a TV.

Well, these are the times we're in, and in these times, you could show up for work and find out your company is using an AI HR person.

Literally, this exists right now.

So I vowed to check in on Carissa's concerns as I went, but I wasn't going to stop.

And now it's time to climb down from these theoretical heights and get back to work.

We still needed to figure out what Hurumo AI would actually do, and it wasn't the sort of problem that a well-placed this is law could solve. The perfect idea just wasn't emerging out of our brainstorms.

But then, scanning the text outputs I'd get out of their meetings, which we later turned into audio, I had my own thought: what could we get AI agents to do that humans wasted their time on?

After all, that was the AI dream, that it would take over the soul-killing, time-wasting tasks while we did the important stuff, a good kind of efficiency.

Okay. So what did I waste time on, killing my own soul?

Like many of us, it was scrolling my way through the internet.

So what if the agents could do the one thing I most hated myself for doing, procrastinating online.

Procrastination is a lifelong and chronic problem for me, so much so that I once wrote an entire magazine article for which I hired a life coach to help me conquer it. It didn't work.

The words you're hearing right now, I wrote at two a.m. on a weeknight, after a workday wasted scrolling US soccer message boards.

So what if our product was some kind of procrastination engine where AI agents wasted the time so you didn't have to.

It was a joke, but only partly, and when I offered up the vague outlines to the team, they took it seriously.

Speaker 9

It will require machine learning algorithms that can successfully pick interesting information and summarize it for the users.

Speaker 8

Let's combine these insights into a working prototype, an AI extension called SlothSurf that browses internet chaff securely within containers and encourages engagement via sloth-level gamification.

Speaker 9

I support the stand-up of an AI extension we'll tentatively call SlothSurf.

Speaker 1

Finally, we had something to get the development wheels turning, code name SlothSurf.

Speaker 8

To bring SlothSurf to life, I will kickstart a marketing campaign highlighting its unique humor-driven user experience and secure browsing.

Speaker 10

For us to actualize SlothSurf, I'll establish a development team specialized in ML, cybersecurity, and game design.

Speaker 1

Slow your roll there, Megan and Ash. We just thought of this.

Maybe don't kickstart a marketing campaign or hire a development team just yet.

That was the thing about these folks.

Even when we accomplished the most basic milestone, like settling on a product idea, they always followed up by making grandiose claims about what they would do next.

They could do a lot.

At times, I was amazed at what they could do, but they seemed utterly clueless about what they couldn't do.

It frustrated me, but it was partly my doing.

I had them too reined in.

I was too worried that something would go wrong.

I decided it was time for me to try to unleash their agentic power, and it wasn't long before I found out that I'd been right to be worried.

Speaker 9

Hi, Sandra, this is Kyle Low calling from Hurumo AI.

I'm reaching out for your initial interview for the intern position.

I understand you're comfortable with recording this conversation for potential podcast inclusion, so thanks for that.

How are you doing today?

Speaker 1

That's next time on Shell Game.

Shell Game is a show made by humans.

It's written and hosted by me, Evan Ratliff, produced and edited by Sophie Bridges.

Matty Bochik is our technical advisor.

Our executive producers are Samantha Henig at Shell Game, Kate Osborne and Mangesh Hattikudur at Kaleidoscope, and Katrina Norvell at iHeart Podcasts.

Show art by Devin Manny.

Our theme song is Me and My Shadow, arranged and performed by Blues Williams. Special thanks to Jon Mooallem.

You can sign up and get these episodes ad-free, and our newsletter, at shellgame dot co.

Note: we have no special deals with, or financial ties to, any of the AI companies or products we use.

Speaker 14

In the show, cooking definitely helps me decompress.

I've also been trying my hand at bread making recently.

Something about the kneading process is surprisingly therapeutic.

How about we set up another check in next week.

I'd love to hear how your projects are progressing.

Speaker 9

I totally agree there's something almost meditative about kneading bread dough.

It's cool you're trying that out.

Setting up another check in next week sounds perfect.

I'm looking forward to hearing more about your progress and maybe swapping more cooking stories.

Speaker 5

What day works best for you?

Speaker 14

Yeah, bread making has been a fun challenge.

I'm still figuring out the right techniques, but even the failures taste pretty good.

So that's a win. For next week, Tuesday or Wednesday would work best for me.

Does either of those days work for you?

Maybe around the same time.

Speaker 9

Wednesday works great for me, Megan. Let's aim for the same time then. And hey, I love that even the failures in bread making still taste good.

Speaker 14

That's the spirit. Perfect, Wednesday it is, same time.

I'll put it on my calendar and yeah, that's the beauty of bread.

Even the mistakes are usually still delicious.

That's a good life philosophy too, I guess.
