Episode Transcript
Hi, I'm Ray Poynter.
This is Talking AI.
And today I'm delighted to have a guest with me, Elina Halonen.
Elina is a behavioural strategy consultant and founder of Prismatic Strategy.
She began her career in consumer insights almost 20 years ago, and for the past 13 years has specialised in behavioural science.
Before becoming an independent consultant, she spent eight years as co-founder of one of the first agencies dedicated to applying behavioural science in commercial research.
Over the past five years, her work has shifted towards behavioural design, helping clients apply behavioural thinking more strategically.
More recently, that journey has expanded to include the role of AI and large language models in shaping how people think, decide and interact with information, which is why we're so delighted to have Elina with us today.
So welcome, Elina.
Thank you.
I am delighted to be here.
I take any opportunity to talk about this stuff.
It's fascinating.
Absolutely.
So let's look at the segue.
How has your interest in human behaviour shaped your approach to investigating artificial intelligence and specifically these large language models?
Well, I think, I think of the very foundation of it.
It's, I've talked for years about how psychology is very weird.
And you know, there's a bias in representation and all that kind of stuff.
And a few years ago, I realised it was actually going to SMR, SMR Congress, and then the SMR AI event.
And I realised that there is a lack of representation for women in AI.
And I had only really dabbled in that at that point.
And I just thought that's that that cannot be right, like that it cannot be right, that there is such a lack of representation.
And at that point, I kind of didn't know, like, how do I fit into this?
Like, how would I talk about this?
I'm not, you know, I'm not a machine learning engineer.
And it's taken me quite a while to kind of start talking about it.
And I kind of find my, my interest area.
But that's where it really started.
And I, you know, I really hesitant.
I was like, but then I thought, if not me, then who, like, you know, like, be the change you want to see.
And what, what angle will women bring to this?
I don't know.
But you know, it can't be right that there's only, you know, that that representation isn't there.
So that's kind of how I got into this, it is more like, it is still human behavior, is that how do we think we might think differently, because of our different backgrounds and so on.
And I thought, you know, you must have different, you know, more voices.
And you know, if that's all it is, and so be it.
But as I got more into working with LLMs, I guess I've experimented for a year and a half.
And I think from the beginning, I approached it as a sort of a co-intelligence, quite naturally.
Back where about a year and a half ago, when I think, yeah, when people were really talking about prompt engineering, I was like, I kind of talked to as a human, as I would talk to a colleague.
And I didn't say anything, because I thought, like, well, maybe I'm just not doing this, right.
And lo and behold, six months later, a lot of people started saying, yeah, I talked to it like a human.
Like, oh, okay, maybe I wasn't, maybe I wasn't so far off.
And that kind of gave me confidence to start really thinking, thinking about it more.
And I think, for me, a lot of, not technically, but like a lot of the aspects of machine learning and, and genitive AI, what's behind it, how it's trained, like things like operant conditioning, reinforcement learning, those are things that we have taken from psychology.
They're the same things that operate, like that work for humans, for animals.
That's very basic.
And, and a lot of the outputs that they've been trained on are, are made by humans.
So there is definitely the, you know, understanding humans and human outputs and all that kind of stuff is important.
But also, this isn't like any other, this isn't like software that we're interacting with.
This is very different.
And it, it's constantly moving.
So it's evolving.
And it's how we integrate it into our lives, how we interact with it is different from anything that we've had before.
And that, you know, anything like that has to do with human behavior.
There's mental models, there's expectations, there's decision making, metacognition, self awareness, all these types of things that feature into how we use these, these tools that are available to us.
So yeah, that's, that's kind of like, that's how I got into it, really.
And it's kind of, I'm just going to back up one second, because when you said weird, you didn't just mean strange, you meant something quite specific.
Yes, I do.
So weird, as in Western educated, industrialized, sorry, Western educated samples from industrialized, rich, democratic nations.
So it's a, it's a, basically, the vast majority of psychology literature, or what we know about humans, there, how we behave, how we think, how we feel, is based on weird people.
So weird in capitals, and very small size of humanity.
So that's, you know, the same thing is happening with LLMs, because the vast majority of the, the, you know, the data that has been trained on is also from weird countries.
So that's, you know, there are lots of elements, it's, it's, it feels like a, a represent, representation of the human, human condition, let's, let's call it that for a lack of better word.
And I feel like the same issues that we have, like understanding of the issues in psychology, or understanding of psychology is helpful in understanding that interaction as well, and what's going on there.
How exactly?
That's, that's, that's literally the, what I'm exploring.
Yeah, well, that takes me to kind of my next question.
I like the fact that you talked about things like reinforcement learning.
We see that in humans, we see that in animals, we see that in the machine.
So what are the sorts of things you've discovered about the way that LLMs work?
And, and what about words we see, like, think, chain of thoughts, and we see reasoning, referred to in some of the iterative models?
Where do all of these come together, do you think?
I, well, there's a couple of, there's a couple of angles to that.
One is that I was, I was recently reading this paper, that's like barriers to AI adoption, or AI tool adoption.
And one of the things was that we kind of have this need to understand how it works, like we don't like black boxes.
So even if, so this, this is one angle that a lot of the, you know, AI experts, I'll just call AI experts, so it might be generative AI, might be LLM experts, we'll just call them AI experts for this.
AI experts will quite vigorously argue that, you know, LLMs don't reason, they don't think, and it's just theatre.
Sure, but maybe that theatre is helpful for adoption and building trust.
So there's that angle that, you know, does it think or it doesn't think?
Does it matter if it helps people adopt these tools?
That's one question to ask.
But then the other part of it is that, actually, you know, even though technically, technically, you could say that LLMs don't reason, they simulate reasoning by, you know, statistically predicting the next sequence of text or next word, it, you know, but the problem is that thinking and reasoning are not like a single concept in psychology or in humans either.
So when someone says, you know, LLMs don't reason or think, the first question is like, what do you mean by reasoning or thinking?
It depends, like, are we talking about, you know, are we talking about cognitive processes like heuristics?
Are we, you know, is it logically valid?
Or, you know, are we focused on performance?
Or, you know, does it require consciousness?
These are different things.
And so when someone says, oh, they don't reason, it's like, okay, but what do you mean by reasoning?
This isn't, it's not like a settled debate of what thinking and reasoning mean.
It also depends on what discipline you ask.
Are you asking philosophy?
Are you asking psychology?
Who do you ask?
You know, there are people who used to, under recently, we thought animals don't think or reason, but that's not true.
That's, that's not really true.
It's just that our methods were not very good.
And we had very anthropocentric ways of measuring that or thinking about that.
So, you know, it's, that's, yeah, if we have very narrow definitions, and then you might say that elements don't reason.
But if you are talking about things like inference, like you're inferring things from something, if you think of that as reasoning, that's how humans think too, you could say that they reason.
Or thinking, like if you say that thinking is conscious awareness, then they don't qualify.
But if it means that you're sort of, you know, you could do something that is functionally similar to thinking, even if it's not conscious.
And I think, you know, if that seems a bit far fetched, we already use these types of terms quite pragmatically that will say like, like thermostats, like it thinks it's too hot here, or, you know, that a dog knows they did wrong, we assign a lot of, you know, anthropomorphic things to, to animals.
And even like inanimate objects, I was like, Oh, a car's being really stubborn today.
We do that all the time.
That is what we do as humans.
So in that sense, I don't know, is it is it fair to say that all they don't reason because like, well, the car also wasn't being difficult, the car broke down for, you know.
So the difference is functional equivalence or biological similarity, like planes don't fly like birds, but they do fly.
Yeah, you know, that's a functional equivalent.
And so probably for a few philosophers, and for a few engineers, the questions about whether machines think matters.
For the likes of us, it probably matters more whether a model that assumes the machine thinks is the best model to get the best results.
What models of us using these things are going to be the right models, they don't have to be the same things.
Rather like I don't know if you sail, but people kind of assume that sailing boats go forward because the wind blows them.
Mostly, it sucks them, because it's the speed of the air going around there.
But it's a perfectly good model for most people to understand.
Yeah, I mean, the other thing is that when people argue that they're just, you know, they're just predictive prediction engines.
But the thing is that unfortunately, human brain is also largely pattern recognition and statistical inference.
So, you know, the, there is a way of looking at the brain that is sort of predictive processing that we're constantly generating hypotheses of what's going to happen without really doing it.
And when basically when the, you know, hypothesis does, you know, turns out to be wrong, we get frustrated or surprised.
That is how, you know, we've predicted that something will happen.
And when it doesn't, we're surprised, annoyed, frustrated, whatever.
Sensory input, we're, you know, we're looking to match that if we, you know, if you put your hand on a hot stone, you know, what you think is a hot stone and it's cold, you're surprised because you predicted it'll be hot and all that kind of stuff.
So, you know, the mechanism of predicting from previous patterns, that is how humans operate.
We're not that much.
We're not that much.
Not so special.
Yes.
We're not so special.
And it's like, you know, human, human thought is largely pattern recognition, context sensible.
We're updating our assumptions, our priors and so on.
And statistical inference, even though it's intuitive, it's not beyond, like, we don't do it consciously.
Then it's kind of, you know, it's not too different in that sense for the purposes of, you know, there's a difference in knowing how to drive a car than to fix the engine.
I don't need to know how to fix my car's engine, but I do need to know how to drive it well.
And it's a bit of the same thing that, you know, I don't need to know exactly how the inside of the LLM works.
I don't need to build it or train it.
That's a whole other set.
But I, you know, I need to understand enough to, to, you know, drive it well, effectively.
Yeah, for many years, I drove a manual car and I used the clutch.
I don't really know what the clutch did, but it let me change gears.
Um, and something happened in the engine that made that possible, but all I needed to know was changing gears, clutch, and you've got to get the timing right.
I tell you what, uh, there's a, you know, even more simple than that.
I, uh, I was quite old when I, uh, realized during a project that I actually don't know what the functional difference is between a paracetamol, paracetamol, ibuprofen.
And I, that was the first time I Googled it.
And that was many decades into adult life that I actually looked up what it does.
Uh, so, you know, and I was like, oh, that's the mechanism.
Yeah.
I should probably look that up sooner.
So, but you know, there are a lot of things that we just kind of guess or, you know, and also humans hallucinate in quotation marks all the time.
Our memories are very fragile.
We, you know, we, we remember certain things that seem important, a bit like chat TV's memory, just notes down very random things sometimes.
And it forgets important things.
That's what our, our memory is.
It's no better than that.
And we also, when we don't know the answer, we make up, we make up the answer.
That's what we talk about with surveys.
And like, it's the same thing.
It's like, if you ask someone a question the wrong way, they probably don't know the answer, but they'll make up something that sounds convincing.
It's exactly, you know, it's not necessarily a lie.
It's just like, it sounded good, you know?
And it's like, well, now that you put, you know, I'll just think of an answer to be polite.
Humans do the same thing.
And it's kind of the, you know, it surprises me that more people haven't kind of, no one's said it out loud.
I don't know.
Maybe they're thinking privately that talking to the LM is a lot like interviewing, like same skills interviewing or writing survey surveys, because you have to think about, can it answer this question?
How do I need to ask to get the outcome that I'm looking for?
And if you, you know, with LLMs, you can change the question.
You can take, you can rewind, you know, back to an earlier question, go, ah, that went on, that went on a wrong track.
Let's go back and I'll ask it differently and then get on the right track.
With humans, you can't, you kind of like, you've already gone into a certain track, but it's, I don't know where it's going with that, but it's, it's definitely, that's one aspect of that.
So for anybody who doesn't follow Elina on LinkedIn, I would recommend it.
You've been posting a lot of really interesting stuff in this one.
And one series of posts you did got a lot of interest, which was to do with the censorship and the guardrails in different things.
You just, yeah, a couple of those for us.
So that one was, um, that was an accidental discovery.
It was basically, I just, uh, just paid for, uh, the Google one thing and I was just experimenting with Gemini and, um, I've, you know, I'd asked a few, few things with Gemini, but it was basically asking anything that had to do with Donald Trump.
It just, uh, like I, I, now I can't remember what the exact questions where I started from.
Oh, no, I think I was asking about the actor who plays, who plays Donald Trump in the film, in the apprentice film.
And, um, and it just started, um, it was very curious that it would not mention Donald Trump's name at all.
It would just would not, it would make up stuff and it pretended that, you know, went to great lengths to, uh, rewrite history for that actor, not hit, not Donald Trump, but the actor who plays him in a film, it made up all bunch of stuff.
And that was, um, it wasn't a deliberate experiment.
It's just like the, it got a bit more bizarre as it went on.
And I was like, this is very weird.
And it would not even list like names of us presidents.
It just categorically said, I cannot talk about anything that has to do with politics.
And I was like, okay, it's kind of, we're talking facts now.
It's not campaign trail.
It was after the inauguration.
And, um, you know, it wouldn't even like, I think maybe a little while ago, uh, chat TTP would say, uh, that, you know, your question violates our, you know, guidelines.
Um, please ask something else.
Gemini just shuts it down.
It will not talk anymore.
It won't say what you've violated.
It's just like, uh, yeah, I'm not talking to you.
That's it.
Not talking to you.
And it was, I think, you know, it sparked a lot of conversation because, uh, there were some people who said, oh yeah, well, it's just, you know, it's kind of a, you know, it's a bug that they're working out.
I'm like, but Google is one of the, if, you know, one of the biggest tech companies on the planet, they have, you know, so much money.
How is, how would they have a bug in a, something that they are selling?
You know, it's rolled out to millions of people and you're charging money for it, uh, quite a lot.
And it has guardrails that you haven't thought about very well.
But I think in this case, it's a good illustration of why you do need someone who understands human behavior in there, because that is a question, you know, you need to think what might people ask?
How do they ask it?
When is it political?
It's not political to ask who is the, you know, who's the latest U.S.
president who's just been inaugurated.
It's a fact.
But after that, I can't trust Gemini at all, because it's like, well, in this case, I know the answer.
Next time, you know, once you've made up something that obvious, I don't trust you at all, because what will you make up next time when I don't know what the correct answer is?
The timing was also very good because a lot of people had been using DeepSeek for a couple of weeks, spotted, won't talk about Tiananmen Square, like this.
Oh, look, the Chinese one is censored.
And then, hmm, so what are the others?
Yeah, that's a good point.
I totally forgot.
It feels like a lifetime ago.
I think January 20th feels like a lifetime ago.
The inauguration feels like a lifetime ago.
It feels so long ago.
But yes, it was, I think I posted about that because I'd seen so many people, mostly Americans, making a big deal about how DeepSeek won't talk about Tiananmen Square.
Like, okay.
Sure.
But Gemini will not talk about a very uncontroversial fact that is bizarre.
And that makes you think, what else are you censoring?
If you're censoring that, plain facts, not even controversial.
If you're going to censor who plays the US president in a film that was nominated and I think won some things at the Oscars, when you can't talk about that, then I have a lot of questions about what else are you censoring?
And the fact that it comes from the US doesn't mean that it's trustworthy.
That's a halo effect, like a nationality halo effect.
Weird.
Yeah.
I think that's what really, like, I would probably not have written about it otherwise, but it was with the context of, thanks for reminding me of the context of DeepSeek.
Yeah.
So let's change more and more into the human side of it.
So what are the challenges of blending human behavior, which is your speciality, and LLMs, which is becoming your speciality?
I think, you know, it's like a flavor, I guess.
I think the biggest challenge that I look at, there's probably plenty, but from my perspective, it's that people don't interact with LLMs like they would do with machines, or there's a mismatch of that mental model for a lot of people that is hard to get over.
There's a recent Pew Research report that says something like three quarters of AI experts think that is super beneficial to their everyday lives and so on.
And it's a, you know, I can't remember the exact statistics, but it's, it's the gap to the general public is huge.
Because there is a mental, like, I've seen this in a lot of friends and family who just don't see the like, well, what would I use it for?
It comes much more naturally to knowledge workers to even use it that way.
I mean, obviously, students and younger people are using them more.
So that's a whole demographic to think about.
But I think the big difference is that we, we have different mental models.
And that, that is where there's an interest, an interesting area that we don't know enough about.
You don't interact with an LM like you would with a machine is more like a person, but it's not a real person.
So that, you know, that discrepancy is, well, that kind of like that, not discrepancy, but that mismatch in how we perceive it, intuitively, versus what it is, is just, you know, there's that.
And it has a lot of consequences.
Like, when you're using people, when you're using LLMs, you'll evaluate it based on not just if it's factually accurate, but, you know, is it coherent?
What's the intent behind it?
It's like the social cues that it's giving off.
And, you know, all of those things influence the trust versus transparency element of like that, that tension for, for people that you don't necessarily trust us.
We don't tend to trust systems we don't understand.
But, but at the same time, you know, if the LM gives an explanation of their outputs, it's not necessarily accurate, which is a lot of people say they don't, they're not really thinking, but it makes it feel more understandable to people who are new to it.
So it makes it feel more transparent and more trustworthy if you see how it's thinking.
Is it, is it real?
That's another question.
But, you know, there's that part.
It's a bit like, you know, if you show a, you feel like the time is going faster, if you see a, you know, a blue circle or whatever, like a progress bar, it's a bit like, a bit like that.
And then there's, you know, there are other things like we expect things to be consistent and predictable.
So if there's a, you know, you don't want it to be rigid, you don't want the LM to act in a rigid way, because that's more like a pre-programmed chatbot.
But if it's too flexible, then it becomes a bit unpredictable.
And then it, you know, it's back into that, you know, trust area.
So that's a very fine line to, to, to, you know, balance.
And, you know, there's, then there's another aspect of like how much agency you give people and what is safe.
And here, there's an interesting parallel, you know, I talk a lot about dogs as well, because very curiously, there is a, there's often a power, it helps me conceptualize things in a, you know, there's an analogy, but basically one big trend in a trend or change in dog training is in recent years is that we are moving to, there's a discourse where we should give more choice and agency to animals.
And that's been happening for some years, but it's kind of become mainstream now.
But the question is that some people are like, well, you can't give them too much choice, because, you know, they can't make good decisions.
Yes, absolutely.
You don't, you know, it's not, you just like, you don't give children all, you know, all the choices, because they, they're not adults yet.
Similarly, dogs live in a human world.
So you have to give them choices within safe boundaries.
So here, you know, I'm not saying humans are dogs.
I'm just saying that there's a, there's a parallel in the conversation of like, what is the right level of agency versus keeping the users safe?
Same things like what's the right level of agency and choice that you can give to a dog within safe parameters?
Like, we're, you know, and that isn't a clear cut.
So that's an interesting, you know, it's, yeah, it's an interesting one.
And then there's, you know, the last thing that I find quite a rich theme to think about, they're not necessarily answers is the whole, you know, cultural fit, because the LLMs feel quite different.
And I don't know if there's, I haven't seen a lot of research in this yet.
But I've studied French since I was, since I was nine.
And when I first tried Mistral, it felt very much like, it just felt very different from ChatTDP back then.
ChatTDP talks a lot, like a verbose American, like it just explains a lot, it talks a lot.
And Mistral goes straight to the point.
It's like logic, like mathematical logic, it doesn't, no extras, no niceties, no pleasantries, it gets the job done.
Like, you know, I don't know, it's just, it was a, it was very interesting.
And I think the many of them are somewhere in between.
But that was a, that was a moment when I realised that some of the, like cultural norms and conversational norms are also imbued in the, the, how the LLMs operate.
And I think, you know, maybe more so in the future.
But there's, you know, that's just my feeling about it, because I haven't read about it.
I don't know if anyone's doing it consciously.
But I can feel the difference of interacting with them.
So I'm going to be publishing shortly a study I'm doing with a Japanese, Nigerian and Thai friend.
And they did the same exercise in, I did an exercise in English.
They then did it in English and in Japanese, English and in Thai, English and in Yoruba.
All three of which are high context languages, unlike English.
And we got different sorts of results.
It was most verbose when asked the question in English.
And so even the same, and these were all chat GPT, even the same when you ask it in different languages, natively, it has an implication because what is the probability that this is the right answer when I'm asked it in Japanese is different to the probability that a Japanese person asking me it in English generates?
Yeah, I mean, there's also, there are also other differences, like I'm a native Finnish speaker.
But I have spoken, I've been speaking English, most of my waking hours for, you know, best part of 20 years now.
So I think in English, I now find it difficult to switch to thinking in Finnish, I naturally think in English.
But when I, when I try to express myself in Finnish, I have to switch the way I'm thinking.
It's very hard to describe, you know, just describe, but it's, I have to change how I express myself, how I, I guess, express certainty, like confidence in what I'm saying.
Also, politeness, like hedging, all types of things are different.
And I mean, there's also, I think, yeah, you use, you'll, you'll, you'll, you'll start, you'll structure your thoughts differently in different languages.
If you're not bilingual or multilingual, you have not experienced it, it's very hard to put into words, but anyone who has tried it will notice that.
Like, the clearest example I could give is that I remember many years ago, I did a, out of curiosity, did a course in Chinese.
I remember there were, you know, maybe there were like 10 people there.
Everyone else was a non-native English speaker, like me, and some were Spanish, German, whatnot.
And there were two Americans.
And they were the ones who found it hardest to understand that Chinese does not have Mandarin, does not have, like, temporal tenses.
I'm probably using the wrong term now.
But you know, I went to the shop, I went to the shop, it says, today, I go to shop.
Yesterday, I go to shop.
And it's basically, it's the same word.
And they, you know, everyone else in the room who's had to learn another language were like, okay, weird.
Okay, but fine.
Moving on.
And then, you know, the two native English speakers who had never learned another language, found it incredibly difficult to even comprehend that there is a different way of structuring that, that the, you know, it's basically, I go to shop is the same thing every time you just, you just swap.
Yesterday, today, tomorrow, I go to shop.
That's the easiest, it's just very salient example from 20 years ago or so.
But there are many other much more nuanced things.
Also, like agency, like some, some languages will say, oh, the vase, the vase fell on the floor.
Whereas that, you know, like, oh, just, just fell on it by itself.
Like in Spanish, there are, you know, different tenses.
So I think some of these things are, they are built, they are kind of like weaved in, but it's very hard to test them.
And I think we're, you know, we're nowhere, we're nowhere near knowing all this is so new that you have to kind of like, that feels like it's happening.
But it takes so much effort to test any of it, that you don't tend to do that.
So therefore, it's invisible to you.
Yeah, absolutely.
So we're running into the time buffers.
Lots of people are not as far down the path of using these tools as you, Elina.
So what would your recommendations be for people to up the level of their involvement with LLMs?
I think, I think the first, you know, first is kind of mindset shift of not thinking of it as prompting.
At the beginning, I thought of like prompt engineering.
And I thought the engineer part that made it feel like it's not for me, I'm not an engineer.
But actually, you know, think of it more like briefing, you know, a colleague or, you know, a thinking partner, really.
And it's not really about crafting the correct, right?
If that's the kind of thing you want to do, like getting, you know, one shot of, you know, execute this marketing strategy for me, I don't find them very useful.
Some people do.
I find it more like an iterative thinking partner.
And I think the first one is that is to think of as briefing, treat it as a thinking partner, experiment constantly.
It is a skill that you build over time, but you have to, like, if you're like, what do you use it for?
I don't know, but my life is different from your life.
So you have to look at your own life, the types of things that you do, and just, you know, experiment, give various tasks to LLMs and see how it does.
Like I, you know, I've used it to, I thought recently, I took pictures of things in my pantry and got it to do in the inventory.
I would never bother doing that by myself.
But that was really helpful.
I ask it to, you know, customize, you know, like ready, ready made cake mixes to, you know, make it more of this and that, like, you know, things I don't like.
And it's taught me as a, as a, you know, as a consequence, it's taught me a lot about baking and cooking.
Super helpful.
I would not have the patience to Google for the, all the recipes and stuff.
No, not a chance, but it's taught me very tailored ways of like how I operate, how I think it now understands me.
And it's super helpful.
I can just ask it a lot of these things.
But yeah, and you have to also have to experiment with different tools.
What fits you?
The more you use them, the more you develop your own way of interacting with them.
And with some of them, like with chat TDP, if you have the memory on, some people don't like that.
I do, because it makes my life easier.
But the more it remembers about you, the easier your life becomes, because it kind of starts to know a bit better how you operate and you know how it operates.
And the other thing is that people are like, oh, but it's not always accurate.
But humans aren't accurate either.
That's the thing is that you could ask someone and they don't know the answer.
They're like, yeah, I think it's over there.
Or they misremember or, you know, get your question wrong.
How many times that happened in daily life?
How does it happen with a colleague that they give you the wrong answer?
Because you asked the wrong question, they misinterpreted.
It happens all the time.
So don't assume that it's always right.
It might have misunderstood you.
Or, you know, just something happened.
It happens to me.
A bit more kind.
I thought that was a bit of a weird word to use, I guess.
But, you know, a bit kinder that, you know, humans aren't perfect.
And it's just, it's not perfect.
It's not perfect either.
It's, you know, the goal isn't to get the correct answers is to think better and ask better questions over time.
And, you know, and it feels it feels like a lot of effort at first, which is a real barrier that you can't just, people just, you know, use it.
And it'll be, you know, immediately I'll get, press a button, I'll get a lot of useful stuff.
Like if I said I use LLMs to write constantly, they're a really integrated part of my workflow.
But does it write it by pressing a button?
Absolutely not.
Like I will use it to, I will dictate ideas, like half baked ideas or half connected ideas, and then go back and forth.
And it actually, for the first time, something thinks at the same speed as I do.
I don't, you know, it's, it's kind of, you know, it's, it helps me think better.
But it's certainly not like if I said out loud that I use LMs to write a lot, people like, oh, so you don't write it yourself, you just press the button.
Absolutely not.
It still takes, you know, it takes quite a long time to make sure that it's right for me.
But it certainly, it certainly helps, it makes it better.
It's not the faster, it's the better.
And, and yeah, and experiment with different ones to find what what suits you.
I think that's really what it is.
It's, it's a skill.
It's a skill that is however you feel about them.
It is necessary.
It is necessary because it like I'm not, I'm not an AI enthusiast, or, you know, a doomer or whatever.
I'm just a pragmatist, like it's coming.
It's a bit like, did you know, the Snowpiercer, the TV series where the Earth has gone, you know, all iced over and you have to get on the, you know, you either get on the Snowpiercer or take it on a train that goes, that basically the train is, it goes around constantly, so it doesn't freeze.
That's the only way to survive on planet Earth in the, in the series.
And either you're on the Snowpiercer or you're left on the platform, you know, and that's kind of what I was like, I don't, however you feel about the ice age coming, it's coming, and you've got to get on the train.
And that's how I see LLMs, that either you get on board or you're going to get left behind, or, you know, you don't want to be in, you don't want to be in third class.
You want to be, you'd rather be, have a nice comfortable seat and, you know, at least have a ticket for the train.
So that's how I see it.
And, um, at some time, at some points on my journey, I've been a bit reluctant, like, you know, but then I see this is, this is the way that the world will go, whether you want it or not.
That's kind of, that's how I see it.
And it's, there's no point.
Um, I think you're only, if you are resisting it, the longer you resist it, the more behind you are.
And the only one that you are harming is yourself.
Really?
Yeah.
Which is, I don't know if that's just, that's my opinion.
I'm not, you know, not, not judging, but, uh, that's how I see it for myself.
Brilliant.
Elina, that's been fascinating.
And I'm sure people are going to enjoy listening to it.
Thank you so much.
Thank you.