Episode Transcript
Hi, I'm Ray Poynter.
I'm one of the founders of ResearchWise AI, and this is Talking AI.
And I'm delighted that we've got a guest here today, Dr.
Paul Marsden, a chartered psychologist and researcher at Brand Genetics, where he works with brands on our evolving relationship with emerging technology.
He's a fan of innovative research methods and a co-founder of BrainJuicer, which, of course, is now System1, founded using machine learning software he built during his PhD.
So we might find out how up-to-date things built during your PhD actually are, Paul.
But welcome.
Thanks for joining us.
Pleasure to be here and honoured to be here with you, Ray.
Super, super.
And for those of you who follow any of the emerging technology and method discussions on LinkedIn, you should be familiar with Paul.
He's ever-present with useful additional comments, which is great.
So today we're going to be talking about AI assistants.
So first question, really basic.
What are AI assistants?
Well, AI assistants are the things we know and sometimes love, and that sometimes make us frustrated, from the Alexas and Siris of the world to smarter generative AI assistants, such as ChatGPT, Claude, Gemini and Copilot.
And they basically are there to assist us in our work life and also our personal life to help us complete tasks.
I guess when we say AI assistants, there's another buzzword we're beginning to talk about in 2025: AI agents. What's the difference between an AI agent and an assistant? Essentially, agents have agency and they take decisions on your behalf, whereas an AI assistant should just do what you tell it to do.
Yeah.
I mean, sometimes one hears autonomous agent and ReAct agent in that context, where the ReAct agent is doing stuff when you tell it to, and the autonomous agent really is going away and doing stuff for you.
Yeah.
I mean, I think it's levels of gradation and there are a lot of definitions on this, but one example of an agent, and I'm not sure whether you'd count it as ReAct or autonomous, is all these deep research tools, which are actually quite impressive now. Especially Elicit and these academic AI assistants are getting really quite powerful, but even the basic deep research in ChatGPT and Gemini is pretty good.
You ask it to do some deep research on X, and it will ask you a bunch of questions, then go off and do a protracted web search, read what it finds, and decide whether something's good or not good.
I'm not sure whether you'd put that in the ReAct camp, because it's executing a task for you, but at the same time it's autonomous, saying, well, that was actually a really stupid web page, or that article has nothing to do with what I'm looking at.
And it will be smart enough to bin it and then go out and look again.
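A minimal sketch of that search, evaluate, discard, search-again loop might look like the following (everything here, including `fetch` and `relevance`, is an invented stand-in for illustration, not a real deep-research API):

```python
def fetch(query):
    """Stand-in for a web search: returns (title, relevance) pairs.

    The corpus and relevance numbers below are invented for the sketch;
    a real agent would call a search API and judge each page with an LLM.
    """
    corpus = {
        "consumer AI adoption": [
            ("Survey of AI assistant use", 0.9),
            ("Celebrity gossip roundup", 0.1),
        ],
    }
    return corpus.get(query, [])


def relevance(doc):
    """Stand-in for the model's judgment of whether a page is useful."""
    _title, score = doc
    return score


def deep_research(query, threshold=0.5, max_passes=3):
    """Search, judge each result, bin the irrelevant ones, search again."""
    kept = []
    for _ in range(max_passes):
        for doc in fetch(query):
            if relevance(doc) >= threshold:
                if doc not in kept:          # keep the useful pages
                    kept.append(doc)
            # else: "that was a stupid web page" -- discard and move on
        if kept:
            break
    return [title for title, _ in kept]


results = deep_research("consumer AI adoption")
```

The point of the sketch is the shape of the loop, not the details: the agent both executes the task you gave it and makes its own keep-or-discard decisions along the way.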
Yeah.
I mean, that was quite a big breakthrough when we saw agents or sub agents reviewing their own work before they gave it to us.
And that's just really a hint of what's coming.
I think.
Yeah.
A hundred percent.
I mean, I came up as a psychologist, so I come to this from a psychological framework: what's our cognitive architecture?
And we are always reviewing.
We have, you know, Daniel Dennett, the philosopher of mind who unfortunately just passed away, and he has this model of consciousness where we create multiple drafts of what we want to say, or how we want to respond.
And then we've got this censoring agent that will edit.
And then we finally do it.
And AI is using a similar cognitive architecture.
And so the closer we get to the human architecture, cognitive architecture, the closer we get to a kind of human reasoned response.
Yeah.
I hope it's going to be faster than me.
Because I always get that reasoned, really good response in the lift going down after the meeting.
I think if I could replay it one more time, I'd actually get a better result.
Yeah.
We have a bunch of rules, like the 24-hour rule.
Any proper decision, just sleep on it, and don't take it until you've woken up and had time to go through the multiple drafts and think, what's the consequence of that?
Maybe it's not such a smart idea.
So how are people using these AI assistants in business?
Because I know you've been talking to companies and organizations about this.
So what sort of ways are they bringing them to use at the moment?
Well, I mean, people know this themselves; ChatGPT alone has 400 million users.
And if you ask knowledge workers, as Microsoft has, 75% of us are already using these AI assistants.
And there's a whole area known as shadow AI, where people use these things unofficially because they haven't passed internal IT and legal policies, but they're using them anyway.
So I've been tracking research on this.
And there are two big studies, from Boston Consulting Group and Procter & Gamble.
What happens when you give people these AI assistants?
What do they actually do?
And then I've actually been running a year-long experiment with an international wealth management bank, a private bank, and with a market research agency, seeing what happens when you give knowledge workers an AI assistant: what do they do with it?
The shorthand is people do use them to just do all the stuff that they find routine, rote, admin tasks.
And they literally treat them like an assistant.
So prepare me for this, give me my minutes for this, summarize this.
What's interesting is that these little assistive tasks add up to really big improvements in work quality.
So when Boston Consulting Group, for example, gave 750 of its consultants these AI assistants, independently measured quality of work went up by 40%, there was a 20% time saving, and the overall productivity of the groups they were in went up by 12%.
When P&G did this with a similar number of people, in product development and innovation, the quality of their ideas improved and time was saved.
And from a psychological perspective, what was interesting is that it improved their actual morale at work: they experienced more positive emotions while working with an AI assistant, their stress reduced, and they felt more energized at work.
And that's replicated in the two year-long experiments I've been doing with custom AI research assistants.
Now, did I see that you shared a report recently looking at a side-by-side comparison of individuals doing the work, AI doing the work, and AI plus individuals, which I think used to be called centaurs, though that phrase seems to have disappeared?
Yes, whether you're a centaur or a cyborg; there are all sorts of metaphors around that.
I mean, increasingly we're seeing two big tropes in AI. One is that AI won't replace you, but somebody using AI will replace you.
And the other is human-AI symbiosis: when you've got humans and AI working together, we're stronger together.
And the Procter & Gamble study did find that when you've got an AI teammate, you just have better ideas; it really works.
An individual with an AI teammate was equivalent to a human team.
The study I think you're referring to is actually about physicians: giving them an AI assistant and looking at accuracy in diagnosis. And what they found is a little bit worrying.
When you have an AI assistant, you're actually much better as a doctor at diagnosing the condition.
But better still, if you just leave the task to the AI itself, it will actually do better.
And I think we're getting to that stage in the world of market research.
Now, sometimes humans can get in the way.
And so we're in this interesting period, where I think we need to double down on human strengths, because basically anything that can be done behind a computer screen is soon going to be better done, in my view, by AI.
But that's in the future.
Current AI assistants are a little bit different to that.
So I'm going to jump in here with something a little risky.
I was seeing some stuff recently that was looking at the way, and we don't like this phrase, AI thinks; the way that it comes to its decisions, outputs, whatever.
We are very stuck in two and three dimensional stuff.
So if we take the diagnosis example, we would look at scans of livers.
We tend to look at a two-dimensional picture; we might have a three-dimensional scan, which we can rotate.
But we're pretty much stuck in three; maybe, if we're really good, we can do four or five.
But we know that the way that the models are put together, they are massively multidimensional.
Yeah, they can simultaneously be considering texture, surface, colour, temperature, all of these factors, without doing the sort of implicit factor analysis we are doing when we look at that information.
And that could really be a different way of looking at that sort of information.
Yeah, I think if you just look at AlphaFold, the big model for protein folding, it can work with so many different dimensions, find and test potential relationships in ways that we can't, though it still needs some kind of guidance.
But yes, this kind of multidimensional finding of links.
So for example, at a market research level, we've created, I've created, Lexi, a research assistant with Brand Genetics, which looks at the implicit meanings of words.
And because of the way these large language models work, through this thing called vectorization, the closeness of meaning, they can actually look at implicit meanings and relations.
And that's actually better at doing things like thematic analysis than humans are, because it can look at all the different permutations of potential relationships in the vector space, or vector semantic space, if you know vector semantics.
And so understanding that it can play to these strengths is super useful.
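The vectorization idea can be sketched with toy numbers: each word becomes a vector, and closeness of meaning is the angle between vectors. The four-dimensional "embeddings" below are invented for illustration; real models use hundreds or thousands of dimensions, supplied by the model itself rather than by hand:

```python
import math

# Invented toy embeddings -- real vectors would come from a language model.
embeddings = {
    "cheap":      [0.9, 0.1, 0.0, 0.2],
    "affordable": [0.8, 0.2, 0.1, 0.3],
    "luxurious":  [0.1, 0.9, 0.8, 0.1],
}

def cosine_similarity(a, b):
    """Closeness of meaning, measured as the cosine of the angle between vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

close = cosine_similarity(embeddings["cheap"], embeddings["affordable"])
far = cosine_similarity(embeddings["cheap"], embeddings["luxurious"])
# "cheap" and "affordable" sit much nearer each other in the vector space
# than "cheap" and "luxurious" -- that kind of implicit closeness is what
# lets the model group verbatims into themes without being told the themes.
```

Thematic analysis on this view is essentially clustering in that space: responses whose vectors sit close together get grouped under one theme.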
But it also helps us know the kind of stuff that AI is really bad at.
I mean, we humans are brilliant at context shifting: you're going to go off to a conversation after this, with Will or with a client, and you're going to be a different Ray; you're going to project, you're going to adapt.
And AI is just hopeless at that kind of thing at the moment.
So this kind of context switching is, is amazing for humans.
And then there are the human skills, whether it be creativity, curiosity, judgment, love of learning. Even if AI can do things in a squillion dimensions, it's like, well, let's double down on what's human.
Well, when we're talking about human and tropes, there are a couple that we hear really often, which is AI will never be creative, humans are creative, AI will never be empathetic, and humans are empathetic.
Any thoughts about either of those?
Yeah, yeah.
I mean, so yeah, the way that I look at things as a psychologist is that these AI assistants do three different things.
One is that they help us with productivity.
The second, they help us with creativity.
And my bit as a psychologist, particularly, they help us with empathy.
I think the role-playing potential of AI, so, you know, I actually did this before this interview: what are the questions Ray's going to ask me?
So I created, well, you've already got your virtual Ray, who is fabulous.
But I just said, okay, this Ray, what kind of questions will he ask?
And so I can step into your shoes and see the world from your perspective.
And that ability is, I think, is something is one of the big untapped opportunities in research.
But to your question on creativity: we know that on whatever creativity test, and as psychologists we've got a bunch of these tests for human creativity, AI just aces it; these large language models are creative.
We know that AI, when you do blind tests, will come up with more creative ideas than humans.
Or if you work with them, you will be more creative.
And again, that comes back to what creativity is: coming up with something that is novel and valuable, whether it's beautiful or useful, surprising or whatever.
And to the people who are naysayers on that, it's like, well, at the MRS, evidence matters; let's look at the evidence, and it's there.
So AI can be creative, and it can help us be productive.
So yeah, with these tropes on creativity: these things are creative, just try it.
If you actually ask ChatGPT, what we know is that, similar to how humans do things, you'll have 100 ideas, 99 of them will be rubbish, and one might be a little diamond.
And that's the way to work with AI: you get it to generate 100, get it to self-select, get it to generate again, get it to self-select.
And then you just do this generate and test, which is the way ideas evolve, the way genes evolve, the way antibodies evolve, the way knowledge evolves, the way science evolves.
This tower of generate and test, you can replicate it, and AI can do it fast.
And as you say, in multiple dimensions.
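That generate-and-test loop can be sketched in a few lines. The `score` function below is a made-up numeric fitness standing in for the AI's self-evaluation step, and the "ideas" are just numbers, so the whole thing is an illustration of the loop's shape, not a real idea generator:

```python
import random

random.seed(7)  # deterministic for the sketch

def generate(n, seed_ideas):
    """Produce n candidate 'ideas' by mutating the current best ones."""
    return [random.choice(seed_ideas) + random.gauss(0, 1) for _ in range(n)]

def score(idea):
    """Stand-in for self-evaluation: pretend 10 is the 'diamond' we want."""
    return -abs(idea - 10)

def generate_and_test(rounds=5, pool=100, keep=5):
    """Generate 100, self-select the best few, generate again, repeat."""
    best = [0.0]                                   # starting idea
    for _ in range(rounds):
        candidates = generate(pool, best)          # generate ~100 ideas
        candidates.sort(key=score, reverse=True)   # self-evaluate
        best = candidates[:keep]                   # keep the diamonds, bin the rest
    return best[0]

result = generate_and_test()
```

Each round, 99-odd candidates get binned and the few survivors seed the next round, which is the same generate-and-test tower the conversation describes, just run at machine speed.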
And just to give you an example on the empathy side, I was talking to somebody who's a professor of medicine, and they were talking about using AI to develop scenarios for the staff in A&E.
When somebody comes in with their arm cut open needing treatment, but they're on drugs or alcohol, or they have a behavioural problem and they're violent, what are the strategies for talking to them?
And they can run these scenarios over and over again with the AI, to develop better strategies to implement, and then train, rather like pilots have used simulators for years.
Can you imagine having those sorts of things for anybody who has to deal with difficult people, whether that's in prisons, whether it's police officers, all sorts of places?
Yeah, practice on the sims.
Yeah, sims, or synthetic humans, I call them. But exactly: having run this year-long experiment with these two companies on what it's like to have an assistant, we found people love them, and there are things that frustrate them.
But some of the uses were for empathy: saying, I'm going into an interview, or I've got to do a really tough evaluation. Being able to simulate that, and then feel what it's like to be that person, is hugely useful.
And so people did it for job interviews, for going to see the regulator in the bank's case, what kind of questions are they going to ask, or for talking to clients. And at Brand Genetics, using Lexi, putting themselves into the role of a consumer: okay, what does it feel like to be this person?
What's the experience?
What's a day in their life like? It brings things to life, and especially if you can upload your debrief, so it's actually based on actual consumer findings, it can just bring insight to life.
You know, I'm long in the tooth in research, but I still have this feeling that PowerPoint is where insight goes to die.
You've got a real opportunity to bring insight to life, insight that feels alive. Just as with NotebookLM, if you've heard of it, you can do these audio summaries; you can bring research to life, jump that empathy gap, and see the world from another perspective. Which is, I think, for me at least, market research's secret sauce, our secret power: this empathy, the ability to see the world from somebody else's perspective.
And these tools are amazing for that.
Absolutely.
So, we're nearly at the end of our time.
Let's think about where things are going.
And it's really quite difficult because three years ago, this stuff wasn't here at all.
So if we talk about the distant future, let's say three years from now, where do you think it's going to be?
Will we see human like avatars?
Or do you think that would slow the process down potentially?
I think, for me, it's inevitable.
I think that will be super useful: everybody having access to their own AI therapist, their own AI tutor, their own AI coach, the ability to simulate interviews and understand.
So I think we're going to see a lot more of that.
But I think there is an AI divide now.
There are people who say, no, this is not good for humanity, the doomsters; and then there are the e/acc people, the effective accelerationists, who say we're getting near AGI, artificial general intelligence or superintelligence, so let's just accelerate things.
I think the research world will be somewhere in the middle, but hopefully with an evidence-based approach, which is why I love these big field experiments that have been done with Boston Consulting Group, with Procter & Gamble and others, and, at my own more qualitative level, looking at the psychological effects of using these things.
And people do find it helpful: it reduces their stress, it makes them feel smarter at work, more competent.
One of the biggest advantages, in both the bank and the research agency, is that sometimes you think you should know something, and you don't.
And you can tell that to the AI without feeling the imposter syndrome or whatever: I have no idea what to do now.
What are they talking about?
What does that mean?
And there's just this sense of relief people have, having that confidant, this AI assistant, to work with.
So I think it will grow; exactly how it will play out, and how much research will be based on it, I don't know.
I know you talk about it quite a bit in terms of synthetic data and synthetic consumers, modeling rather than polling to get insight; how developed that will be, who knows?
I mean, we're still using the QWERTY keyboard, which was actually designed to slow typists down, because the mechanical keys would get stuck.
So it's like, well, sometimes the world moves more slowly, but the bottleneck is human.
It's not technology.
I think that's the, for me, the key takeout.
Absolutely.
Thank you, Paul.
That has been enlightening.
Thanks for sharing your views.
And thanks so much.
Well, great.
Thanks very much for chatting.
Right, it was great to chat with you too.
Take care.