
S14 E10
Inside the AI Classroom: Dan & Ray's Big AI-in-Education Download
Episode Transcript
Welcome to the AI in Education podcast.
Thank you, Dan, as usual.
And I'm not sure whether I should be looking at you or looking at the camera, because, you know, so many of the times we've recorded, we've recorded...
Remotely we have.
I know.
It's really weird being in the room with you, Dan.
It is.
Ah, you smell good though, which is good to be together in the room.
Wow.
Yeah.
Hey, Dan.
Very expensive aftershave. So, uh, last week.
Yes.
You weren't able to be with us.
No, but I had a great session with Carlo; it was almost like a meditation, because we were talking about AI.
And the way it changed what students did and the way it changed what librarians did.
Mm-hmm.
But we just kept coming back to this point of staying human and being human, and it was like a meditation, because my word, Carlo has an amazing speaking voice.
I know.
I listened to it in the car and uh, yeah, it's just phenomenal.
I did message him on LinkedIn, actually.
It was a great episode, some great insights.
And you know, he's doing the circuit, lots of conferences.
So if anybody's lucky enough to see him, then yeah, please follow him on LinkedIn.
Yeah.
Like, it's great, great messages.
I'm, I missed him at a bunch of conferences.
I'm so frustrated.
, But anyway, he's so good.
Finally got to be in a room with him.
That means that this week is gonna be the opposite of last week.
It is not going to be a mindfulness and meditation session.
We are rapidly gonna go through the news and research, and guess what, Dan?
What?
There has been more news and research.
Can I, yay.
So and so.
I've had to collect all of the papers together.
It's like a proper news briefing where you can tap it on there.
Oh yeah.
I feel like a news announcer or maybe like Ronnie Corbett.
For all the Ronnie Corbett fans.
Ah, brilliant.
Yeah.
For all the people from the UK.
He's a brilliant comedian.
Yes.
Was it Ronnie Barker or Ronnie Corbett?
There were four candles.
Yeah.
Yeah.
So that's right.
Four candles.
Look up that sketch.
If you're not English, you will still laugh at four candles.
Yes.
Yes.
Okay.
Fantastic.
My favorite, uh, we'll put a link in the show notes.
Make a note on that, Dan.
Please do.
Okay.
So Dan, I found some research, I found some news, so we're gonna do the same as we usually do.
Yeah.
We are rapidly gonna go through and talk about what the news has been that is relevant to education, and then we'll talk about the latest research on AI in education.
The point I'll make, Dan, is we are not talking about AI news.
We are talking about AI news that is relevant to education.
So I'm not gonna talk about the corporate restructuring of OpenAI.
I'm not gonna talk about the gazillions of dollars being invested by Anthropic in TPUs, 'cause I don't think any of that stuff is important.
Yeah.
But I'll tell you what is important, Dan.
The first paper I came across, so this is MIT.
MIT have produced a guide to AI in schools.
It's called Perspectives for the Perplexed.
Oh, he looks like me.
Okay.
So that, that gives you, that gives you some ideas.
So look, this is a super guidebook.
Justin Reich, who is the director of MIT's Teaching Systems Lab, worked with a whole pile of contributors, and they talked to a lot of educators.
So this isn't isolated, European-style "we'll let the policy makers make the policy."
This is: let's talk to teachers to find out what's going on.
Look, I love the preface, and I'm gonna refer to the preface.
Go on.
So it talked about the "building the plane while we fly it" analogy, you know that one?
Yeah, yeah, yeah.
It's like, um, that's very much what AI feels like.
Yeah.
It's, well, we don't know what it's gonna be used for, and we don't know how it's gonna change things, but we're using it.
So it talked about that analogy, but he took it a little bit further.
Here's what he said.
Writing a guidebook on generative AI in schools in 2025 is a bit like writing a guidebook of aviation in 1905.
Hmm.
Just two years after the Wright Brothers launched from Kitty Hawk, no one in 1905 could have said the best way to build a plane, fly one, or operate an aviation system.
That's true.
Yeah.
And no one in 2025 can say how best to manage AI in schools.
It will take our school systems, our educators, our policy makers, our researchers, our parents and governments some number of years to try a range of approaches and suss out which ones work best in which context.
That's great.
Yeah.
Well, does that summarize it all? Is it useful for schools to look at?
Has it got any information?
Yeah, so it's genuinely a guidebook; it's not a policy manual, it's not a set of best practices.
Okay.
It's a snapshot of the current conversation going on amongst teachers, amongst school leaders and beyond.
Sounds pretty accurate, doesn't it?
Yeah.
And it says nobody has all the right answers.
Mm-hmm.
Because there are no right answers at the moment.
And so it's very much talking about experimentation, feedback loops, iterating on policy.
You know, an AI policy for today may not fit for next year.
Absolutely.
Yeah.
Um, and the themes running through it: the first was, AI arrived before policy.
Yeah.
Everyone's doing it.
Whether or not we want 'em to.
Yeah.
Um, second point that runs right through it is we need to put ethics at the center of the conversation.
The third point is there's currently mixed evidence on student learning and on teacher workload.
So that's right.
I see lots of "oh, this is gonna save you an hour."
And it's like, but I'm gonna spend an hour learning how to do it.
Yes.
Like I often talk to people and say, Hey, do you know the hour you save with ai?
You're gonna use it to learn about ai because for the next few years we're gonna have to stay on that treadmill.
And then the last point is: AI literacy is going to be so important.
Many teachers lack confidence or knowledge of how AI works.
The biases that are in it, the environmental impact, the IP, the costs.
It's always a conversation that comes up.
And so I think it's about that.
How do we build AI literacy and then how do we build AI fluency?
Mm.
So literacy is, I understand the basics.
Oh, I can get it to do things for me.
Yep.
But fluency is that confidence thing.
I think it's using it in multiple contexts.
Yeah.
Did it say anything about AI detectors in it at all, or, or integrity?
It basically reframes the AI detector discussion.
Okay.
So they say it's really about going from policing to trust.
It's about learning design.
It's about ethics.
It's not about AI detectors; it's about how do you design learning and assessments so that they're suitable for an AI age, rather than walking around a building playing whack-a-mole.
Yeah.
Okay.
That's good.
That's a good one. So that's available; we'll put the link in the show notes: the MIT guide to AI in schools.
Yep.
It's written by and for American teachers, but I have to say, it is brilliant internationally as well.
So yeah, that was my first brilliant find.
The second one is one that you pointed out to me.
This was the McKinsey report?
Yes.
About, yeah.
So the title is One Year of Agentic AI: Six Lessons from the People Doing the Work.
Six, not five.
No, not seven, six lessons.
Okay.
So Dan, yes.
You sent this to me and said, oh, this is interesting, but you didn't tell me why.
So the first question I have for you is, what does agentic AI mean?
So agentic AI: we talked about this year being the year of agentic AI.
So agents, so it's the next level from just using your chat bot Yep.
As a, as a kind of bot and chatting to it.
It's about actually saying, well, now what I'm gonna do is offload work to a specific agent, say, to look at a specific domain.
Yep.
So I might have an agent to look at HR documents.
Yep.
Or HR policy.
Yep.
So it's like a specific domain within, inside your chatbot.
So you create agents: if you're in Google, Gems; if you're inside ChatGPT, GPTs.
Yep.
And then you'd put those agents inside your ecosystem to look at particular domains.
Okay.
And that was also in the Work Trend Index report.
They're saying that frontier firms were using agents.
If you remember, you quizzed me on if I was an agent boss.
Yeah.
So you're managing these agents in your company?
Yeah.
Okay.
So I, I'm gonna add something on top of that, Dan.
Yeah.
'cause I think there is a bit about it taking an action as well.
Oh yeah.
Okay.
An agent isn't just a smart ass.
Mm-hmm.
It's actually does something as well under most people's definition.
, Okay.
So this comes from McKinsey, which is a management consulting company.
And it's interesting, 'cause management consulting is being threatened a bit by AI.
Yes.
But I think they have massive credibility for this.
And the reason for this is they've got a division called QuantumBlack, who produced this report.
Right.
Sounds like a great division.
I want to join that.
That sounds great.
Do you reckon they watched Black Mirror and then said, oh, we want that in the name, QuantumBlack?
But I saw about a week ago that they were posting pictures of their, like their Grammy award, except it wasn't for records.
Mm.
It was for using OpenAI tokens.
McKinsey have used 100 billion OpenAI tokens.
Hmm.
So, Dan, explain to the audience what a token is.
Um, yeah, so a token, the way you could think about it, is roughly four characters on average.
So think about it like that.
So if you type a query or a prompt into ChatGPT, roughly every four characters, including spaces and full stops, is a token.
Yep.
So that's the way that the large language model in the backend Yep.
Can then parse your prompt, basically.
So it'll split up your message into bits of roughly four, essentially, and break up those words.
So that's how the token mechanism works.
And if you're buying OpenAI, then you're buying an amount of tokens per prompt, essentially.
Or if you're buying copilot, it's the same kind of thing.
It sounds a bit complicated, but it's the mechanism for being able to cost it out.
Okay.
So let me check my understanding.
If I type into an AI chatbot, "complete this sentence: the cat sat on the", question mark.
Yeah, that's maybe 10 or so tokens.
Because there's a whole bunch of letters in there it's sending off and then it's gonna come back with an answer.
And if the answer it gives me is "mat".
Yeah, that's maybe a couple of tokens.
Yeah.
If it comes back to me with a big philosophical thing saying, oh, you're answering a really interesting question there.
And it could be "mat", it could be whatever... then that's more tokens.
More tokens, yeah.
So to use a hundred billion tokens.
Yeah.
I'm probably thinking then that some of those really complicated enterprise scenarios about, well, let's give it all of our policies and ask it to make sense of it so you can ask questions of it.
That's like a high token use of it.
Yeah, yeah, that's right.
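For anyone who wants to see that rule of thumb in action, here's a tiny Python sketch of the "roughly four characters per token" estimate Dan describes. This is only the ballpark heuristic from the conversation, not how a real tokenizer works; models like ChatGPT use learned subword tokenizers, so actual counts will differ, and the `estimate_tokens` helper name is just made up for this sketch.

```python
# Rough token estimator based on the "about four characters per token"
# rule of thumb. Real tokenizers use learned subword units, so treat
# this as a ballpark figure only, useful for eyeballing prompt cost.

def estimate_tokens(text: str) -> int:
    """Estimate token count as ceil(len(text) / 4), counting spaces
    and punctuation as characters, per the heuristic."""
    return max(1, -(-len(text) // 4))  # ceiling division, minimum 1

prompt = "Complete this sentence: the cat sat on the ?"
print(estimate_tokens(prompt))   # 44 characters -> about 11 tokens
print(estimate_tokens("mat"))    # a short answer is about 1 token
```

The same arithmetic explains the billing side mentioned above: both the prompt you send and the reply that comes back are counted, so a long philosophical answer costs more tokens than a one-word "mat".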
Yeah.
Okay, brilliant.
Super.
Okay, so anyway, McKinsey.
Yeah.
Yeah.
What they did was they looked at 50 real-world use cases for agentic AI.
And this was projects that they had worked on: building AI agents, redesigning workflows, working with teams.
So not just the flashy stuff, but, you know, real business problems.
Yeah.
And they have said, and I believe this is much wider than just agentic AI: the best results come from combining human expertise, strong evaluation systems, and reusable agent components within a workflow.
So let me break that down.
So human expertise, we know the problem we're trying to solve.
Yeah.
And the smartest person in the room has said, oh yeah, I always look at this document when I'm looking for this information.
Or this is our corporate style for how we write.
, The second is strong evaluation systems, which means you build an agent, it works, hurrah, and then you come back in a week's time and go, does it really work?
Yeah.
Does it really, really work?
Yeah.
Yeah.
And then in a month's time, you come back and is this still working?
And, and even, you know, does it need to be updated or modified or retrained or whatever it might be.
Yeah, yeah, yeah.
In that workflow.
And I think some people have forgotten, you can't just automate a process and then walk away.
Mm.
Because this stuff changes all of the time.
Okay.
And then the last bit was then about the reusable agent components.
Mm-hmm.
Good idea.
So I'm guessing that's the bits you put together.
You go, oh, I already know how to read our policy documents.
I've got something that reads it really well.
We'll use that in my agent.
Yeah.
And we'll see more of that, I think, when we look at agentic AI in the long term, where people might create a generic agent bot, for example, like you just said, for HR, but then that might be called by other agents as well.
So you can have multi-agent orchestration as well long term.
But yeah, it's interesting, reusing those objects.
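To make that "reusable agent components" and multi-agent orchestration idea concrete, here's a minimal toy sketch in Python. The agent names, the routing rule, and the canned responses are all invented for illustration; in a real system each callable would wrap an LLM call, tools, and the kind of ongoing evaluation loop discussed above, rather than returning a fixed string.

```python
# Toy sketch of reusable agent components and a simple orchestrator.
# Each "agent" here is just a callable; a production system would put
# an LLM call, tools, and an evaluation loop behind it.

from typing import Callable, Dict

Agent = Callable[[str], str]

def hr_policy_agent(question: str) -> str:
    # Reusable component: one shared agent for HR policy questions,
    # built once and called by any other agent that needs it.
    return f"[HR policy agent] answer for: {question}"

def onboarding_agent(question: str, helpers: Dict[str, Agent]) -> str:
    # Orchestrating agent: delegates HR questions to the shared HR agent
    # instead of re-implementing that capability itself.
    if "leave" in question.lower() or "policy" in question.lower():
        return helpers["hr"](question)
    return f"[onboarding agent] handled directly: {question}"

helpers = {"hr": hr_policy_agent}
print(onboarding_agent("What is the parental leave policy?", helpers))
print(onboarding_agent("Where is my desk?", helpers))
```

The design point is simply that the HR agent is registered once and reused, which is the "reusable objects" idea: other orchestrating agents could share the same `helpers` registry.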
Your full geek is coming out today.
Oh, sorry.
Okay, so they share six crucial lessons.
Go and read the report for the six crucial lessons.
I'm gonna just talk about one of them.
They say: focus on the workflow, not the agent.
So it's not about building cool tech, it's about solving a workflow problem.
And you do a lot of that in your consultancy and your independent work, don't you?
Yeah.
Because one of the things that's coming out from all of the research is, if you redesign your workflow for an age of AI, you're likely to get better results than if you just take your existing, let's call it suboptimal, workflow and bolt AI on.
And so for them it's about, first of all, focusing on the problem.
Mm-hmm.
Which is: what's the best workflow to solve that problem?
And I had this: I was at AIS New South Wales, at the amazing head office there. They were doing an agent day last week, and I spent the day with them, and we were looking at and exploring those issues.
I showed them what agents were, first of all, to get their heads round it, and then we kind of talked through the business problems they had.
So there was one lady that had to do special educational needs reports and they had to be a specific format.
So she went in and she kind of just super-prompted and said, look, this is my role, this is the type of output I'm looking at from the agent.
And then she was able to utilize that to do some of her special education needs reporting.
So it was very much from her work flow perspective.
Mm.
Rather than, um, you know, building an agent for the sake of it.
Yeah.
Brilliant.
Okay, so next piece of news, OpenAI announced something called Atlas.
Oh yeah.
This is fun. Now, Atlas.
So this was labelled ChatGPT Atlas.
It's basically a browser built with AI within it.
So not a browser with AI added, but it's like in there.
Mm-hmm.
And so because it's ChatGPT that's in there, it's kind of got your memory of things you've done, it's got all of that context.
Um.
But it can start doing things for you.
Mm-hmm.
The actions part, you mean.
Yeah.
So you've got 20 tabs open.
It's like, oh, summarize all these 20 papers for me.
Yeah.
Or, or, oh, I've been looking for a price for this.
Go and, you know, just put it into a table for me because it's on these 20 different tabs.
Now, Perplexity announced something called Comet.
Yeah.
Which is very similar.
, They were first to market with it.
So the first thing I tried, Dan, is can I get it to go and complete my course for me.
Can I get it to go and do my assessment?
He's such a cheater.
He does it all the time.
Um, and here's the good news: it won't go and complete a course in your Canvas LMS, much of the time.
Yeah.
So, a bit like all AI, it's random.
Yes.
Some people managed to get it to do it.
I saw that.
I was gonna, I was gonna push you on that one.
'Cause I saw the comments on social media about the fact that it couldn't do courses, and then some people are going, well, I've just done a course with it.
Yeah.
And so here's the thing.
Interesting.
With all of these things, our students are gonna be smarter than we are and they are gonna learn how to hack it.
Hmm.
And so just be ready for the point that we are now seeing these AI-integrated browsers that will start doing things for you.
Mm-hmm.
There's a risk for assessments.
There's a risk for take home assignments, there's a risk for all kinds of stuff.
And there's an upside as well.
Like if you've got to do compliance training, you can get it to do it for you.
But the real win here is we're starting to see where AI might start to take some of our kind of doing-the-dishes type work.
Yeah.
Like, oh, I've gotta take this spreadsheet and put it into this system. Very soon we're gonna be in a world where I can say, oh, actually, I need you to take that spreadsheet and go and type it into our student management system for me.
Now, there's a whole load of security worries as you start to allow this kind of thing.
So IT managers were freaking out about the risk here.
But it's a pointer to what's coming ahead.
Yeah.
And so just thinking generally, from a consumer point of view, about the amount of forms you've gotta fill in, or if you go into a school, the enrollment forms.
I was filling in an online form the other day, and already some of those form-filling tools help. The one I had to do the other day, I basically barely had to fill anything in; it was Adobe Sign, so I had to go next, next, next, sign. It was a legal document.
But there's a lot of pain in that area, and I'm still thinking, as a consumer and as a user, where would that browser support me?
You know, it's still one of those things.
I think maybe if you're in.
You know, you used to work at Google and that's predominantly a browser based entire ecosystem.
So that might be, , like the first kind of obvious area where you've got the browser being able to interact with you, you work, you're doing possibly, so how do you see it from that point?
Yeah, so I think about that.
So let's talk about the example of Word, 'cause Word is on the web.
Mm-hmm.
And it's also an application on your device.
Yeah.
So if you're using the web version, and you are using it through ChatGPT Atlas, you're gonna have Copilot going, hey, don't forget me.
And then you're gonna have ChatGPT.
Yeah.
And so because I'm a ChatGPT user, I might say, I want you to rewrite this like me.
And I don't need to tell it how I write because it knows that from all the other interactions I have.
Or, look, I need to take this document and re-present it for this kind of audience.
And it knows all of the context about me.
Like it knows we do a podcast.
It knows what we talk about on the podcast.
So all of that context it's got there, as opposed to AI that is built into the tools, which has no context outside of the tools.
Mm-hmm.
Yeah.
You know, Copilot is a bit different, 'cause Copilot has got your work context.
Yeah, yeah.
But it doesn't know you as a person.
Mm-hmm.
Yeah, it's interesting.
Yeah, and I suppose it's gonna blur the lines then as well.
I suppose that's where I'm scratching my head a little bit.
'Cause it's on Mac at the moment, and it's gonna be available for Windows and Linux or whatever coming up.
But I suppose the question is also how does that work as a consumer tool versus enterprise tools. We've gotta be careful about, you know, if it is looking over my emails.
Yep.
And I'm using consumer ChatGPT looking over my emails in the work context.
So there's a lot of privacy there as well.
But it's an interesting one, you know?
Yeah.
Booking holidays, things like that.
And definitely, in the world of "we're building this plane as we're flying it," it's an indicator of where things are gonna be in the future.
And there's other ones as well, aren't there?
'Cause I know in the Microsoft ecosystem, Copilot Vision is there, so that when you're in the browser, you can click that and it sees the context of your page and what you're doing.
There's obviously all these other things that are being embedded in Windows.
I'm sure Apple will start having that as well soon.
There's other browsers; Perplexity's got a browser.
Yeah, yeah.
That does that already, doesn't it?
Yeah.
So we've got a heap of things like, like that.
So yeah, it's directional.
It's not the thing that changes everything, but, gosh, here's another thing.
I mean, it is the thing that changes everything, if your students get it and they start doing all your assignments with it.
True.
We are gonna need the HAL.
Yes.
Do you remember 2001: A Space Odyssey?
Open the pod bay doors.
I'm sorry, I can't do that for you.
We're gonna need a version of HAL to go:
Yeah, yeah.
You're trying to skip learning here.
I'm not gonna do it for you.
Yeah.
Yeah.
Okay.
So, talking about opinions.
Ethan Mollick. I'm hoping that everybody listening to the podcast follows Ethan Mollick.
Mm.
Hundred percent, on Twitter or LinkedIn, because he shares a lot of brilliant AI stuff.
Mm.
And he's an educator from the Wharton School of Business at the University of Pennsylvania.
Anyway, he wrote an opinion piece, a guide to using AI right now.
And one of the things that I thought was really good is there's a table in there that I was looking at.
I'll put it in the show notes.
It's very good.
Yeah.
And so it's about which tool to pick, to do what in ai.
So for example, you wanna create an image, which tool do you pick?
And so his advice there is free is good.
Like you don't need to pay to be able to create good images.
, Whereas creating a video, you probably need to think about paid tools if you want to do certain things within them.
So it's a great little table.
, He talks about, so one of the use cases is tutoring or teaching.
And so which AI do you pick? He says it depends completely on the complexity of the topic.
But for sure use the learning modes.
So Gemini, Claude, Copilot, ChatGPT, they all have learning modes.
Mm-hmm.
Make sure you turn on the learning mode.
But if you are trying to understand nuclear physics, you'll probably go for a Plus or a Pro version, because you're gonna get more thinking out of the AI.
And he's done that across multimedia, practical guidance.
Yep.
Use cases: seeking information, technical help, writing, self-expression.
So yeah, really interesting.
And all of those categories are the top uses of AI systems that came out of the OpenAI report about a month ago.
So he's mapped to those top uses.
So we now know what people are using AI for.
He's making recommendations about which tool.
Well, not exactly the tool.
He's saying the type of tool.
Yeah.
Whether it's a free one or a paid one that you need for it.
Can you use the free one?
Do you need a specialist tool or a general tool?
Yeah.
Okay.
Now, the other thing that I thought was really interesting is Google have been doing some work on something called LearnLM.
Yes, and LearnLM is basically a large language model built specifically for education scenarios.
They've been doing this work for about two and a half, three years.
We've been seeing research reports come out around it, and I think it's the right direction to go in.
And they have just published some work around AI-augmented textbooks.
So this is basically, let's take a digital textbook, this is really good, and use it with an AI system to make it much more interactive.
And they've been experimenting with different things, because we don't know, Dan, we don't know which way of using AI is gonna be most effective for learning.
So it is super important that we have these experiments going on, in the same way that I feel really positive about the fact that Australia has got different experiments in different systems and states, because we need to know whether one approach is better than the other, or there are benefits to each approach, and bring them together.
But I suppose what this recognizes as well is that, you know, everybody learns differently.
And the purpose of this, what I was looking at, the Learn Your Way Google tools, is obviously built based upon the read-along tools that have been in the Google suite for a while.
This is more personalized. So say, well, Dan, you are interested in guitars and soccer and whatever.
And then it'll know a bit of context for me, so then it'll say, well, let's talk about coastal erosion with music, or with that context even.
That's a pretty random topic there.
Well, I'm gonna take you right back to when we sat next to each other in the Sydney music hall listening to somebody talk.
That's right.
Quantum computing.
Yes, we did.
And they brought their guitar with them?
Yes.
And they demonstrated how a guitar string was like quantum computing.
Yes.
Dr. David somebody-or-other.
Yes.
Yes.
And I had to ask you afterwards to explain the analogy to me, because I have got no interest in guitars, Dan.
Yeah.
And you make them.
Yeah.
And so, yeah.
Yeah.
So yeah, for me, a better scenario would've been helpful.
So imagine you take a digital textbook, you apply a bit of AI onto it and go, okay, well we know this student is interested in this.
Let's personalize the examples to work for that student.
Yeah.
And the other thing as well is what the research has said for years, and what teachers know as well: literacy is the key to entry for a lot of subjects.
And predominantly that's because you need to read the texts for science and maths and history and geography.
So getting literacy up opens the doors to all the other subjects.
And if you can get a textbook that understands you, understands what you enjoy, understands your reading level more importantly, then you can get it tailored specifically to you as a user.
I love that.
And you can customize things like quizzes, and it produces mind maps, audio lessons, interactive timelines and quizzes.
It's very cool. But if you think about it, producing a quiz: I'm into cricket.
If I had a maths test that had cricket examples in it, it would make more sense to me.
I'd feel more motivated.
You are probably into pubs, so I'm imagining darts, anything that involves counting down from 180.
That's really hard.
Yeah, you always find people who play darts are great at doing the sums in their head, and cricket too.
I remember all the little cricket ledgers, keeping score.
So as well as building the tools, they also did some research, testing it out with 60 high schoolers.
Uh, let me run you through the numbers.
So what they did was: some students just worked with the plain digital textbook.
Yeah.
Not personalized.
The others worked with the, Learn LM tools.
, What they found was about a 10% boost in assessment results immediately about a 10% boost in retention after three days.
And then the other metrics they looked at every single one was pos was more positive , with the learner.
M were students were more confident.
They were more comfortable with the assessments that had been personalized.
Yeah.
, They found it more enjoyable, significantly more enjoyable.
Actually, 90% found it enjoyable, compared to 57% given the plain textbook.
Makes sense, doesn't it?
Yeah.
And that's a preprint, a proper piece of research, as well.
Yeah, yeah, yeah.
Not just something general from a marketing point of view.
And here's a side note.
So they said using multiple formats didn't automatically lead to better scores.
Hmm.
So that's interesting, isn't it?
'Cause often we think, oh, we've gotta give lots of formats, more is more.
Yeah.
But it doesn't automatically make a difference.
Okay.
Um, gosh, Dan, we better start talking really fast.
Uh, there's some Pew research into AI attitudes around the world.
So this is about public perceptions of AI.
They asked people how much they're using AI in daily life and how they feel about it.
So this is from Pew Research.
Pew Research, who are they?
It's an American research organization.
Okay.
But they have surveyed 25 countries, , globally.
So the question they said is, does the increased use of AI in daily life make you feel dot, dot, dot more concerned or more excited?
Okay.
So Americans, Italians, and Australians are world champions at worrying about ai.
So they are at the top of the table of more concerned.
, The UK was in their seventh outta 25, right?
So Australia doesn't surprise me.
Yeah.
'cause we've seen that a few times.
Us surprises me.
Mm-hmm.
America is the most concerned about increased use of AI.
Like half of Americans are more concerned than excited.
On the positive side, Israelis, South Koreans and Swedes are the three most excited about the rise of AI.
But it's a big difference.
Like in the US, 50% of people are concerned and just 10% of people are excited.
Even in Israel, where they're super positive, 21% of people are concerned and 29% excited.
So it's not like it's overwhelming. Now, the Australia statistics:
Australia was, uh, 49% were concerned and just 13% were excited.
Hmm.
Come on, Aussies.
Dan, Dan.
Turns out we're unique.
What are we doing?
Okay, bit more tech.
Dan, I saw on the Windows blog that Copilot's arrived in Windows.
Yes.
Okay.
Yeah, so there's a couple of announcements with that.
Windows announced that, obviously; so, hey, Copilot is gonna start to run things inside your PC, which is quite good.
Okay.
So it's enabling it to bridge to your computer, and it'd be able to see the things you're doing. So I press a button and say, what's this on my screen?
Yeah.
Or where do I find the setting for sound, or whatever it might be.
Yeah.
And that's, that's been coming for a while.
And then also from the Microsoft consumer team, there was a big update announced last week as well, or it might be the beginning of this week. I've lost track of time.
Right.
And don't forget, the consumer team in Microsoft is led by Mustafa Suleyman; we've talked about some of his books, The Coming Wave, which is amazing.
But his team have released a lot of things in the consumer land.
So that's very much the Copilot that you'd be using on your PC at home and your mobile device, which has got things now like Copilot Vision in there.
There's a Groups feature, which is interesting.
Yeah, and I think, whether you're Microsoft, Google, or whatever, it's always interesting to see what the different affordances are.
Like the vision with a browser.
So I think the Groups one, it's quite interesting.
They're all copying each other, aren't they?
'cause somebody comes out with groups and somebody else will do groups.
Yeah, yeah, yeah, exactly.
Yeah.
But that's, so it's interesting.
So it's another thing to think about.
Another direction is that way of being able to connect together.
So in Copilot, the actual business version, you can share and collaborate on your prompts, but now this is coming out in the consumer version.
So I could work with you on a conference.
Okay.
And we can kind of use Copilot in the background.
This is weird.
I'm turning into you, 'cause I'm just about to say boo, hiss, because in education we don't want people using the consumer thing, 'cause there's no protection for their data.
I've stolen your scripts then.
I know.
Okay.
So they must have done some changes to copilot for education as well though, haven't they?
Yes.
At the same time they released a whole heap of stuff for the consumer version that's worth looking at.
We'll put the link in the show notes.
Quite fun.
And there's even a Clippy.
I saw that Easter egg in there, but I won't spoil it.
But, , it's good to know where you think where companies are thinking about the future, but in , the actual enterprise version.
Yeah, you're right.
They've released a version which is also called Copilot, but we know it's for enterprise 'cause we've got a geeky name: M365 Copilot.
Okay.
Yeah.
Yeah.
So that version, and the chat version, which is free Yep.
Has now got, surprise, surprise, a learning agent in it.
Okay.
So, an agent the students can go into that actually supports some of their learning, creates flashcards and things like that.
Yep.
And also a teaching agent, which is again focused on creating lessons and rubrics, connected to the curriculum and things like that.
So there's a lot in that area.
And they also reduced the price on the expensive Copilot, the one which is grounded in your work, about 18 US dollars.
So there's a lot of information that's been released about those two products in the last week or so, okay.
It's definitely worth looking at.
So, two things I take out of all of that.
One is, there's extra educational capabilities in the enterprise slash education version of Copilot.
Yeah.
Which is M365 Copilot and M365 Copilot Chat.
Yes.
And the one that is integrated in with my office and can read my inbox and gets all of that context.
That's more than halved in price.
Yes, that's right.
Yeah.
Yeah.
Brilliant.
For education.
Yes.
For education.
Super.
Okay.
, Another news story.
, Let's just do this really quickly.
, , I read a story.
The title was The Lecturers Learning to Spot AI Misconduct.
Okay.
Honestly, I'm glad I was standing nowhere near a desk at the time 'cause I'd have been banging my head against it.
Yeah, it's very depressing.
The BBC ran a story about De Montfort University in the UK.
Alright.
Where the lecturers are now being given guidance and face-to-face training on how to spot the signs of wrongful AI use amongst students.
And so they talked about specific markers, like high repetition of the same word.
Or Americanisms.
Mm-hmm.
, Look, we know this stuff doesn't work.
Mm-hmm.
AI detectors don't work.
There's plenty of research saying that even experts attempting to spot AI-written stuff get it wrong.
Yeah.
Do you remember that research from Reading University a year ago, where they put 35 AI-written papers in a pile of a thousand and the teachers spotted almost none of them?
Yeah, that's true.
So.
I feel sad for all those people that learned to use the em dash before the em dash became a sign that people go, oh, you must be using AI.
An em dash is a punctuation mark, a bit like a comma or a full stop, and there are hundreds of thousands of classically trained writers that use em dashes because it is a particularly useful punctuation mark.
Right.
What is it again?
Explain that to me.
I'm lost.
It's, this is hard.
On the keyboard in Word, hit hyphen twice and you get a long hyphen.
Oh, right, okay.
Right.
Yeah.
That's the em dash.
Right.
And AI is using it all the time.
I haven't found a way to tell it.
Not to use it.
Yes.
Yeah, that's true.
And now I've got a colleague who's a professional writer who says, I can't use the em dash anymore.
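One for the show notes: here's a tiny, purely illustrative sketch of what a naive "telltale" check like this amounts to. The code and the em_dash_rate function are our own invention, not any real detector, which is rather the point: a classically trained human writer trips it just as easily as AI prose.

```python
# Illustrative only: a naive "AI telltale" that counts em dashes.
# This is not a real detector; it shows how weak the signal is.

EM_DASH = "\u2014"  # the long dash Word gives you when you type hyphen twice

def em_dash_rate(text: str) -> float:
    """Em dashes per 100 words: a crude, unreliable 'detector' feature."""
    words = text.split()
    if not words:
        return 0.0
    return 100 * text.count(EM_DASH) / len(words)

# A perfectly human sentence scores just as 'suspiciously' as AI output.
human = ("It was, in the end, a simple choice\u2014one she had made "
         "before\u2014and she made it again.")
print(em_dash_rate(human))
```

Any threshold you pick on a count like this will flag plenty of human writers, which is exactly the problem with punctuation-based "detection".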
There are things that are very quick telltale signs, aren't they?
Like it puts lots of emojis in, puts lots of hyphens in.
There are lots of things that appear, but are we digging in the wrong place, is the thing, I suppose?
Yeah.
And "it's all repetition" or "it doesn't seem like it's written by a person" isn't the same as, "I'm sorry, we are gonna zero your mark because we know you didn't do this yourself."
Mm-hmm.
Um, okay, so last thing.
So here's the antidote to that.
The University of New England, in Australia.
Yes.
Because there's also a University of New England in America.
Yeah.
I also found out James Cook University.
Mm-hmm.
There's one of them in the UK as well.
Really?
Yeah.
I don't have the unique names.
Anyway, the University of New England in Australia are rolling out their AI system called Wick, a name which actually comes from something to do with the local area.
It's a multi-modal AI system, and they are rolling it out to all of their students.
And that's a big university as well, isn't it?
They do online stuff.
Yeah, in Australian terms it's smallish, but they've probably got, I dunno, 20,000 students.
So big in, in global terms.
Yeah.
Yes.
But if you are an academic at UNE, your students are being given AI tools to use, and you've gotta work out how to integrate that into your teaching.
Mm.
That's fantastic, because then if you are teaching business, it's like, well, how do I relate the AI things that businesses are doing to what you're gonna do in the classroom?
It's gonna force a bit of change.
Yes.
Yeah.
Interesting.
, Okay.
Dan?
Research, research!
I also found some research.
Ah, okay.
Yay.
So, first paper.
As usual with Dan's research, I'm gonna read out the full name of the paper without taking a breath halfway through.
And this one's from a friend of the podcast, isn't it?
Yeah.
This is from Vince Wall.
I'm excited about this one.
Do you remember those brilliant students from Queensland that came on the podcast in series 12?
Yes.
The six girls talking about how they used ai.
Mm-hmm.
Vince was their teacher, and Vince Wall has written a paper called The Bubble and Burner Model of AI Infusion, A Framework for Teaching and Learning.
So what he's done basically is to say, , AI is unavoidable in schools.
Fact one.
Yeah.
Therefore, we need to think about how students use it, but also we need to make sure that students learn how to think first before they start using ai.
And so he's created an analogy of the bubble and the burner.
You would.
Know the Bunsen burners.
Yeah, yeah, absolutely.
Yeah.
Wonder where Mr. Bunsen is now.
So it's about how you regulate the burner so that you've got this ever-growing bubble in your little thermos flask.
I can even picture that now.
I can smell the classroom.
So the bubble part is about how the students' use of AI should grow alongside their expertise.
Mm-hmm.
So you don't whack your Bunsen burner up to full temperature and make it explode.
You slowly grow something.
Yeah.
And the burner part is about the teacher's job in managing and regulating AI.
Yeah.
So, uh, it's really smart.
He also talks about how teachers need to act as ethical gatekeepers, and know when to say no to AI use, in order to make sure that students are thinking and learning, not just offloading learning.
Yeah.
Teachers also need to be aware of that as well: when to use it and when not to.
Yeah.
So, okay.
Dan, the next paper is called Does Being Polite to AI Help or Hinder?
Oh, one of these, right?
Okay.
So, just refreshing our memories on some other research first.
Yes.
We had to be polite.
You had to be polite.
Then there was research that said, actually it doesn't matter anymore.
Yep.
So this is another one.
Yeah.
I wonder where are we going with this?
So, I found two different papers, right?
One from a group at Carnegie Mellon said that when you're being friendly to AI, when you ask follow-up questions, you get lower-quality answers.
That paper is called Firm or Fickle?
Evaluating Large Language Models' Consistency in Sequential Interactions.
Of course, it's, I've yet to find a paper that isn't a mouthful.
But another group were testing lots of different scenarios on LLMs, right?
They tested: does flattery work, does politeness, does making assumptions change the answer?
And they found that politeness reduced answer accuracy, and emotional tone increased accuracy.
So "please give me the answer" reduces accuracy, and "I'm at the end of my tether, I can't find the answer" increases accuracy.
Right.
Okay.
They created a benchmark called GaslightBench.
No! So the paper is called GaslightBench: Quantifying LLM Susceptibility to Social Prompting.
So if anybody listened to this podcast two years ago, they'd have heard us say: be nice, say please and thank you, and it improves the answers.
Now you don't need to.
Mm.
Okay.
Shall we stop being nice to each other, Dan?
Do you think it'll change the answers?
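If you want to test the politeness question on your own model, a rough A/B harness might look like the sketch below. Note that ask_model is a hypothetical stand-in for whatever chat API you use, stubbed here with a fixed reply so the harness runs on its own; swap in a real call to compare tones for yourself.

```python
# Hypothetical sketch: does prompt tone change answer accuracy?
# `ask_model` stands in for whatever chat API you use; the stub below
# returns a fixed answer so the harness itself is runnable as-is.

def ask_model(prompt: str) -> str:
    return "4"  # stub; replace with a real LLM call to run the experiment

QUESTION = "What is 2 + 2? Answer with just the number."
TONES = {
    "neutral": QUESTION,
    "polite": "Please, if you don't mind: " + QUESTION,
    "exasperated": "I'm at the end of my tether here. " + QUESTION,
}

def score(expected: str) -> dict[str, bool]:
    """Ask the same known-answer question in each tone and mark each reply."""
    return {tone: ask_model(prompt).strip() == expected
            for tone, prompt in TONES.items()}

print(score("4"))
```

Run it over a batch of known-answer questions per tone and you have a crude version of what these papers are measuring.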
Okay.
Next bit of research is called What Does Good Teaching Mean in the AI Age?
It came from Murdoch University and the Kaplan Business School.
Look, this paper challenges the idea that AI will replace teachers.
So I've got a very strong view: AI won't replace teachers.
Hmm.
Because a teacher's role is way more than just teaching, isn't it?
Yes.
, So what they're saying is teachers might end up asking harder questions because AI comes in and upskills students a little bit more.
So as we see more AI tools in the classroom, that's fair.
Yeah.
And more AI used in assessments.
The skills of students is gonna go up.
Mm.
And that is gonna help teachers not sideline them.
Hmm.
So just because students are learning more from other sources than just from the teacher, it doesn't mean that we in any way reduce the role of the teacher.
In actual fact, we may, , end up elevating the role of the teacher.
Yeah.
And so they talked about how actually we need to rethink what learning actually is.
Mm.
We have the same thing in our conversations: we need to rethink exams, because if an AI can answer all the exams, why are we insisting the students know it?
Mm-hmm.
So the paper basically says, look, it's neither a magical fix nor a big threat.
It's something that is just gonna change things.
And so there are two important messages in this research paper.
One is, design matters more than detection.
So instead of banning ai, teachers need to rethink how they do learning, design and assessments.
And the second thing is, and this is so true, critical thinking is now more important than ever.
Well, so what's critical thinking?
Well, for me, it's the ability to ask the right question and to judge answers, and then to twist things and go in different directions.
Mm-hmm.
Rather than just blindly following the dots.
Yeah.
I think that's a really nice paper.
It's a well-written paper.
It's from researchers within Australia, from Murdoch Uni.
Yeah.
Next paper.
How University Students Work on Assessment Tasks with Generative AI: Matters of Judgement.
Uh oh.
Oh, okay.
This is from the CRADLE team at Deakin.
Right.
Okay.
What's the CRADLE team?
CRADLE, the Centre for Research in Assessment and Digital Learning, is a specific research team at Deakin University that's looking at assessment of learning.
Okay.
And they were in existence well before AI appeared.
They went through and interviewed students about their use of AI while the students were scrolling through their AI chat history.
So they basically sat people down and said, show us your chat history and tell us what you were trying to do.
Hmm.
That's interesting.
I do that in my workshops sometimes.
Mm-hmm.
Sometimes things come up that I have to explain, because I asked a weird question, but it's like, this is how I use AI.
And so they did that with students in order to understand why they chose AI.
Why did they go to AI for that task rather than doing it themselves?
Where did they start and stop with ai?
Mm-hmm.
Like do you try it yourself, get stuck and then go to ai?
Or do you go straight to ai?
Yep.
And when do you stop using ai?
Like for me, that's really interesting, isn't it?
Yeah.
A lot of the things I do around information processing and writing, I use AI from nought to 80%.
And then I stop.
And the last 20% is always me.
Mm.
There's a nice prompt actually that I used the other day.
I'm just gonna try to find it.
But I basically just asked my Copilot chat, which is the one that's grounded with work.
Mm.
I asked it to look at all my emails and my Teams messages and tell me how much time I've spent.
Yeah.
And it gave me a nice little table and said, you spent this long in emails, you spent this long looking at Excel spreadsheets, and here's where to save time.
And it gave me grounding around that.
So that was quite interesting.
So that prompt, we'll put that in the show notes as well.
Oh, super.
Okay.
We were doing the CRADLE one.
We were, yes.
Yeah, okay.
There were two findings that really matter in this.
Yes.
, One is, here's what they say.
When students judge ai, they are also often judging themselves.
So we need to help students recognize themselves in their interactions with technology.
Right.
So the way you ask questions of AI and the way I ask is different.
Mm-hmm.
Should it be standardized?
Do we need to understand our relative strengths and weaknesses?
Right.
So how do you help students increase their use of ai?
And that comes back to the cognitive skills and , asking the right questions and things.
Yeah.
The second bit is, they said: evaluating AI output is only part of the game.
We also need to support students in recognizing what and how each party, the student and the AI, contributes to the outputs.
Which is important.
Yeah, yeah.
Absolutely.
I know where I start and stop, but then I'm in a different place.
Yeah.
Okay.
Last research paper.
It's called AI Knows Best? The Paradox of Expertise, AI Reliance and Performance in Educational Tutor Decision-Making Tasks.
They can't help themselves with these titles, can they?
Okay.
This comes from Carnegie Mellon and the University of Hong Kong, and they look into the differences between AI tutors and real-life tutors.
Yeah.
Now, let me jump you to parallel.
Do you remember a year ago we talked about some research with doctors where AI was looking at medical images and was more accurate than a doctor?
But when a doctor used AI to help them, yeah, because they overrode the AI sometimes 'cause they knew best, it turned out that was less accurate.
That's right.
Yeah.
Than the AI on its own.
Turns out it's the same for tutors.
Oh really?
Okay.
So they got a bunch of experienced tutors and a bunch of inexperienced tutors.
Yeah.
And then gave them some AI tutor support.
Gotcha.
So the AI would look at the questions the student had and would draft some answers.
And then the experienced and the inexperienced tutors had to judge whether that was helpful to the student, mm-hmm.
And whether it was gonna help the student learn more.
Turns out the expert tutors overrode the AI more often.
Yeah.
Even when the AI had the right answers.
Yeah.
Yeah.
So if you are a novice Yeah.
You'll accept AI's answers and go, yeah, that's a great answer to give to the student.
Yeah.
And , if you're an expert, you go, well, I wouldn't put it like that.
I'd put it like this.
And it turned out it made the results worse.
Yeah.
Than if you just accepted it.
Yeah.
Wow.
And this goes back to the self-driving car problem.
Self-driving cars are eight times safer than humans.
But we all think we're above-average drivers.
Yeah.
And therefore, yeah, I'm not giving over control.
That's gotta be a much higher threshold.
Okay.
Dan, wait, before you go.
I just found it while you were talking there. Yes.
Great.
When we were talking about that thing where I saved my time. Oh yes.
Can we put this in the show notes?
We could do.
Okay.
And it basically said that my main uses were: drafting email responses.
Yep.
Summarizing email threads.
Yep.
Creating a first draft of a document.
Yep.
Turning notes into minutes and action items.
Yep.
Finding and synthesizing information across different files.
Yep.
, Changing the tone of text.
Prepping for a meeting and cleaning up data in Excel.
So that apparently saved me 10.5 hours in the last week.
So, okay.
I've got a question.
Interesting.
So I can see your screen and it says baseline time and copilot time.
Is it saying that normally it takes you 10 minutes to respond to an email or write a response to an email and when you do it with copilot, it only takes you four minutes?
Yeah, but I don't think that's my time.
I think that's a generic time.
Right.
Okay.
So I think that's something the system in the background is kind of assuming, like this is what was baked into it.
Could be mine.
Yeah.
I'm not sure, but we'll share that in the show notes.
But it was a very simple prompt.
Yeah.
But it does tie in.
It gives you a bit of an idea.
Anyway, at some point in the future, we probably should have an episode where we talk about efficacy.
There are all these different ways of measuring whether AI is helping us or not.
Yeah, like I measure it by creativity, not by time: is it gonna help me be more creative in a task?
Is it gonna help me think in a wider way?
But a lot of organizations measure it by: are we saving you 10 minutes on your email, or are we saving you time producing meeting minutes and things like that?
Mm-hmm.
So we should have a whole episode about that.
Yeah.
And find somebody interesting to debate it with.
Yeah, definitely.
Definitely.
Okay.
Dan, we have run horribly over, we were aiming for half an hour, folks, it's been 47 minutes.
I'm really sorry.
Thank you for sticking with us.
Yes.
Next week we'll have an interesting guest, instead of you listening to us just wittering on.
Thank you everybody.
Bye.