
The Science of Learning Meets AI

Episode Transcript

The widespread adoption and rapid evolution of generative AI platforms have created substantial challenges for faculty in how we assess student learning.

In this episode, we explore a new resource designed to help faculty use AI to efficiently support teaching practices based on the science of learning.

Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

This podcast series is hosted by John Kane, an economist...

...and Rebecca Mushtare, a graphic designer...

...and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

Our guests today are Lew Ludwig and Todd Zakrajsek.

Lew is a Professor of Mathematics at Denison University, where he served as Director of the Center for Learning and Teaching from 2020 to 2025.

Much of his recent work has focused on innovative methods for utilizing generative AI.

Todd is an Adjunct Associate Research Professor at the University of North Carolina at Chapel Hill, from which he just retired a few months ago after 17 years.

He is also the Director of the International Teaching Learning Cooperative and the Director of four Lilly conferences on evidence-based teaching and learning.

Todd is the author of many superb books, having published six in the past five years so far.

Welcome, Lew, and welcome back, Todd.

Thank you.

Thank you, Rebecca, ...

Lew, are you drinking tea?

I am.

I'm usually a coffee drinker, but I have a lovely Cha Atiso.

This is a Vietnamese artichoke tea, which a student brought back for me from the summer, and I've really, really been enjoying that.

So that's my tea for today.

What does it taste like?

It's got a little sweetness to it, which I like.

I don't put anything in it, but I really enjoy it.

And it's supposed to be healthy, no caffeine, all that sort of stuff.

And Todd, are you drinking tea?

Yeah, I went with a roasted dandelion spice tea.

It says detox on the package, and I can always use a good detoxing.

So I'm doing that and sipping it from my favorite cup.

Mmmh.

That's right, the tea for teaching mug.

I have my tea for teaching mug here.

I never leave home without it.

Actually, I never left it home.

And Rebecca?

I have a Swiss Breakfast tea, John.

Very nice… on a chilly morning here in upstate New York.

And I have a ginger peach green tea today.

Wow.

Quite a variety.

Nice, this big variety.

So we've invited you here today to discuss your newest book project, The Science of Learning Meets AI: A Practical Faculty Guide to Purposeful Integration, Student Engagement, and Ethical Practice.

Can you tell us a little bit about the origin of this book?

Yeah, sure.

I can get us started on that.

So as you know, Todd and I both have backgrounds in teaching centers and faculty development.

And our national conference is something called POD, the Professional and Organizational Development Network.

And in 2024 we were at that conference, and here ChatGPT had already been out for about two years, and we were still kind of struggling with the idea of, how are faculty supposed to use this?

A lot of the questions we were getting were, “Hey, how can I use AI in my classroom?” And it wasn't until that conference up in Chicago in 2024 that we realized we were asking the wrong question.

It's not really how to use AI in the classroom, but instead, maybe we should be thinking about, why are we using AI?

Yeah, and actually, I'll jump in too, because Lew went right to that technical part and everything.

Another thing that happened, which is why I love it when people get together, is Lew asked me if I would help out or just talk to him a little bit about some of the science of learning, because he wanted to really work on this project with bringing AI into this, explaining why it's important to use it.

And I said, “Well, that's interesting, because I just started to outline a book that I want to do on AI, and I was going to check with you about the AI part, because I got the learning part down.” So we just said, “What the heck, we'll do it.” And Lew didn't know me very well at the time, so he said, “Yes.” Anyway, we made it through.

But the big thing was that we really wanted a book that would teach faculty and help them understand the learning theory behind why the learning happens and everything else, and use AI to augment that.

So this isn't like using AI to do things, it's using AI to help us do the things that we've been doing.

You start the book by acknowledging that faculty didn't ask for this AI disruption or for any of the evolutions of this.

Why start with this instead of with excitement about AI's potential?

Oh, it's called learning from one's mistakes.

So again, ChatGPT came out in November of 2022, and by August of '23, I remember on my campus, we had a fairly large, across-the-board faculty meeting about AI.

It was about a half-day session.

So administration was there, IT was there.

And as the Teaching Center Director, I gave probably a 90-minute session on, “Hey, here's this new technology and how we might be able to use it in our classroom.” And what happened was, just by saying, “Hey, here's how we might be able to use it,” some folks labeled me as AI pro.

So I was just pro-AI, hey, you talked to Ludwig, and he's just going to be all in on AI.

And that was almost like my scarlet letter that I was wearing around campus to some folks.

So I learned from that.

So any of the workshops, any of the things I do since then, I start off with this exact phrasing, just to kind of level the playing field here, I say “You didn't ask for this.

You didn't request an unregulated, untested, and rapidly evolving technology to suddenly upend education and nearly every other sector of society.” Now, I've been delivering that, like I said, quite a few workshops.

I did it back in August for a group of high school teachers in Richmond, Virginia.

They applauded me.

So it's just kind of getting it out there and acknowledging it that, yes, this is a disruptive tool.

We kind of have to acknowledge that grief and then kind of move on from there.

Yeah, I would say, too, that I really thought Lew had a good handle on that part of it.

I wouldn't have put that in there, quite frankly, for the book here, and Lew was pretty insistent that it go in there.

But as we started talking it through, I realized I always started any COVID workshops this way.

When we were doing emergency remote teaching, if you just started talking about emergency remote teaching and didn't acknowledge all of the stuff that happened with COVID at the time, the number of deaths, for instance. You can't just jump into something and ignore it.

So I think that Lew had a really good point here, and then I started pulling together what I did with that, with what he was talking about.

And I think the whole concept is we started thinking within the book of, how do we do this?

To recognize people are grieving, that they're struggling, and we say it's hard, that we see you there, that the frustration is real.

Teaching is changing, just as it did with COVID.

We don't have a choice of that, and AI isn't going to go away, so let's recognize what's going on, and recognize even some of the impact of AI, and then from there, then move on with a sense of acknowledging what's going on.

It's almost like you're modeling some good teaching practices there.

Kind of.

Periodically it happens.

We try.

This adapt-create-embed progression, the ACE model, is used throughout the book.

Can you talk a little bit about what this looks like in practice for a faculty member who's never used AI?

Yeah, so this work kind of developed from that starting point back in November of '24, after we had that initial meeting in Chicago and started to realize we needed some early wins for faculty.

So we got a lot of faculty out there who are kind of hesitant about this new technology.

And if you gave them something like a prompt or something that didn't work, they'd be just like, “See, I told you, this AI thing, it's a disruptive thing.”

So we've been workshopping this quite a bit from early January onward, and refining the model.

Originally, you ready for this?

It was, instead of adapt, create, embed, it was improve, create, integrate, which, as an acronym, is icky.

We didn't think that would land very well, so we went with ACE instead.

So for example, the adapt, that's kind of the easiest win.

So what you do is, you take an existing assignment, maybe it's a big project, but for some reason, it just doesn't land with students.

They always hand in something a little bit different than what you asked for.

Well, that assignment’s begging to be TILTed, right?

So Mary-Ann Winkelmes has this idea of transparency in learning and teaching, and it's a technique that you can do for your assignment to make sure that your students understand why they're actually doing the assignment.

And it's kind of a three-step process.

And AI is great at TILTing your assignment, and faculty members can take an existing assignment that they have.

They're experts on that assignment.

AI can help them TILT that with this framework, and then they can see how that might better fit into their language for their course, and help improve what their students are doing.

And the TILT framework has been really good.

It kind of raises all student boats.

But in particular, underrepresented and first-gen students do exceptionally well with this.

So again, just kind of an early win with the adapt model.

We should mention quickly that we kind of started out basing a lot of this on Ruben Puentedura’s SAMR model: substitution, augmentation, modification, redefinition, for any technology.

If you have any new technology out there, you start with substituting it.

How can we do this instead?

You can augment it.

And that's a really interesting model.

Didn't quite work with what we were after.

And so Lew had lots of workshops he had been doing.

And so we kind of used that as a foundation.

But as Lew just pointed out, the adapt, which would be similar to that original substitution, is to get you rolling.

And I'm going to go ahead, Lew, and take the other two real quickly.

So next is create.

And essentially, it's looking at how you can do something that you really couldn't do before.

And there's lots of things that are out there that we could use for this one.

We'll talk a little bit more about those, probably later on, as we're looking at some specific examples.

But when you think about what you can do and couldn't do before, one quick thing is, a daughter, one time, was struggling with something, and she said, “Can you help me?” And I said, “I'm busy right now.

I can help you a little bit.” 15 minutes later, I said, “I can help you now.” She said, “Oh, took care of it.” I said, “How'd you do that?” She says, “Oh, I just got on a group, a small online group, and then we solved it together.” And I said, “Who was it?” And this was years ago.

And she said, “Oh, there was a student from Germany, one from South America, somebody from Nebraska, and myself.” That was never possible before.

And so many things in AI do things like that.

They'll set up things we could never do before.

So the “create” is, how can we do things different?

And then once you've adapted it and created it, the next thing is, just embed it.

Just make it part of the regular aspects of your course.

How do you do that?

And what we like to point out over and over again is that, you do this wherever you're at.

If you're just getting started, you just adapt some things you're doing.

And that's where Lew is really good at setting some of those things up.

If you're a little bit nervous, just take something you've always done, do it with AI a little bit better, faster, and then just get started, but don't get overwhelmed with it.

And the other thing about doing something you've already done is then you're the expert, and you can critique what AI is doing for you.

I wouldn't go off and learn a brand-new subject using the AI, because I’d have no idea.

But again, coming from that wheelhouse of your own expertise, don't forget you are the expert on the content as well as on your students and context.

So keep that in mind whenever you're dealing with AI.

And that example you use of the TILT model is a good one.

I've given workshops to faculty about it, and a lot of people were reluctant to put in the effort to make all those changes.

But when you show them how AI can do that, or better yet, if you share perhaps a prompt, or if you share a GPT or a chatbot tool that they can plug in their assignments to and their learning objectives, and it can do it for them, or at least give them a starting point on it, it makes it so much more likely that people are going to do that.

So you talked a little bit about the ACE model, which is used throughout the book.

But could you tell us a little bit about the organization of the book?

Yeah, sure.

So the book's kind of broken up into three parts.

The first part, we call it “Creating your own Gen AI roadmap.” And again, chapter one, you didn't ask for this, but here it is.

So just kind of, again, acknowledging that situation that we find ourselves in and how to move forward with it.

Then in chapter two, we start talking about what it means to think about shifting our teaching in the world of generative AI, so the idea of being transparent with our students and how we're using it and setting up good communication with them, and then also the idea of as faculty members, as we're embracing this, we really have to be curious and flexible as we work with this technology.

We're always pushing our students to take risks, be flexible, be curious.

Well, guess what, that shoe is going to now fit on our foot as we try to move forward with this model.

So that kind of got things up and running, and then we kind of ran right into the learning aspect of it.

Just to give you an idea of the book, I guess the easiest way to do this is to go chapter by chapter.

Chapter four is on universal design.

I've been a long-time advocate of universal design, and I think AI can now be used to do things that were really challenging before.

So we did dedicate an entire chapter to universal design.

Then chapter five, we really thought community was important.

So chapter five is about building classroom community and trust, looking at belonging, engagement, and integrity kinds of issues.

Chapter six comes into backward design, a huge aspect of teaching and learning, kind of knowing where you're going as your goal.

And then chapter seven looks more at the learning science of cognitive load, attention, spacing, interleaving, and retrieval practice.

And that's the “building the foundation” section. And the last section of the book, section three, is for kind of deeper learning.

We looked at some things about teaching and learning that we just know work well.

So chapter eight is guiding students from novice to expert.

We put together in that section zones of proximal development, scaffolding, and the gradual release of responsibility as ways of helping students kind of go into deeper aspects of learning.

Chapter nine is on strategic learning.

So we looked at transparency, TILT, cognitive apprenticeships, metacognition and those types of issues.

And then we wrapped it up with chapter 10 of persistence and motivation.

So there we pulled together the concepts of self-efficacy, growth mindset, and self-determination theory.

So the chapters have two or three strong learning principles to them, but they're put together in ways that not only help explain the individual things, but also overall this concept, like persistence and what kinds of things go into that.

And then the last thing I'll say, since I'm on a roll right now, is that overall, we really thought repetition would be helpful; repetition is just helpful for learning, right?

We know that.

And so what we did is we built a book that has strong repetition.

So chapter after chapter, we basically started with chapter four, and then we go all the way through chapter 10, and it's set up so that there's always an opening story or anecdote to set it up, which we'd want to do for just getting the learning focus there.

Then from the model, there's an improve activity, and then we look at the science components, and from there we might be looking at metacognition or Bloom’s taxonomy or something.

So that comes into the second two parts of the model.

Here we have a create and then embed, those activities are in there.

And then we talk about the ACE model, and we have more activities that come into there.

Then we look at risks and guardrails.

And again, we're going to acknowledge everything has risks to it.

We're not going to just pretend like it's all great.

And then we have a conclusion, pull it together, and we finished every chapter with discussion questions and reflection questions.

So if you're a faculty developer and you want to use this as a book group, for instance, we've got your discussion questions already built in, but for having faculty work through it, there's reflection questions in there.

The last thing I'll mention is that we talked really extensively about how to cite this stuff, I'm a big APA kind of person, but we really wanted this readable, and so we went with a model where we put everything in as endnotes, so as you read through it, you don't get the standard citations with dates all the way through; it just reads like a novel, almost, and I think it's really easy to read.

So you don't have to read it cover to cover, although you should, maybe three times.

You can just go through and pick out what you like, but if you do go through, we've actually built it so that you start with simple kinds of examples and move to more complex ones.

And Lew is masterful at putting together how to ask questions and do prompts, and specifically, with a separate font in there, you just paste this in and add your components, and it spits out the answer.

Pretty cool.

So what Todd's explaining there is, we have a lot of different techniques.

We have things called activities as well as examples.

So each chapter has six activities, which are kind of page-long discussions of how you might use AI.

And it'll start off with some prompting, like he said, in that different font, but you put it in your context.

We don't do this: “Oh, a biologist would do this.

Now you do it for yourself.” You actually, in the context of this thing, put in your information so you're directly dealing with it.

So we have, like I said, six of those for seven chapters.

So 42 of those things.

Each one could be a workshop.

And then also we have smaller examples: for seven chapters, we have 12 smaller examples per chapter, which is another 84 prompts.

Those aren't prompts, those are just more ideas.

But Todd mentioned this idea of meta prompting.

We give you a way to turn those examples into a prompt for yourself.

So you're using AI to create your prompt for you, and I think it's called meta prompting, so that's what he was referring to there.
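The meta-prompting idea Lew describes, using AI to write your prompt for you, can be sketched as a simple template. This is only a hypothetical illustration; the `meta_prompt` function, its wording, and the example inputs below are assumptions, not taken from the book:

```python
# Hypothetical sketch of meta-prompting: wrap one of the book's example ideas
# in a request that asks the AI to write a tailored prompt for your own course.
def meta_prompt(example: str, discipline: str, course: str) -> str:
    """Turn a short teaching idea into a request for a course-specific prompt."""
    return (
        f"Here is a teaching idea: {example}\n"
        f"I teach {course} in {discipline}. "
        "Write a detailed prompt I could give a generative AI to apply "
        "this idea to my own course, with placeholders for my materials."
    )

# Example usage (the teaching idea below is illustrative):
print(meta_prompt(
    example="Use retrieval practice with low-stakes quiz questions.",
    discipline="mathematics",
    course="Calculus I",
))
```

The point is that the faculty member supplies only their own context; the AI then produces the detailed prompt they would otherwise have had to engineer themselves.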

There's a lot written about the science of learning and Todd, obviously, you've written some books on the subject.

How did you decide what science of learning concepts to include?

I just put a whole bunch of stuff in there.

And then Lew started saying, “Nope, nope, nope.” No, we got along great on these things.

It basically boiled down to: Lew does tons of workshops, I do workshops; I think I've been at 300 campuses so far.

So I looked through the different presentations I've given, and when things kind of hit and when they didn't hit.

Cognitive load is something a lot of people just don't quite understand, and it's a huge foundation of learning, because within cognitive load, you've got automaticity and schema activation and all kinds of different things, like retrieval practice.

So I just went back to the warehouse of stuff and picked the things that worked, which is what I still think is really, really valuable about the book, is that these are things that work, and it'll help explain them as you go.

Big things like metacognition, Bloom's taxonomy, cognitive load, automaticity, retrieval practice, interleaving, the kinds of things we've heard.

And it was hard, I will say, Rebecca, it was really hard.

There's a lot out there.

I mean, obviously I've got, what is it, a 270-page book on learning.

So how do you just fold in some of that?

And so, it doesn't have everything, but I think it has the big things.

So you mentioned before that AI allows us to do things that we couldn't easily do before.

Could you give us an example and walk us through how AI could be used to create learning activities that could not be done as easily in the past?

Sure, individualization of instruction.

I'm at a pretty small school here at Denison, but I could imagine, wow, what if I was teaching a class of 400.

So I might end my class here at my small school with, you know, a muddiest point question, and I get 20 responses.

I look those over, what did students struggle with that day?

And then next class, I can kind of address that.

Imagine if you had 400 students, what could you do with that?

Well, now, with AI, you can take this classroom assessment technique of the muddiest point and, on the spot, have students type their responses into, say, a Google Form, and while the students are off working on some kind of project or discussion together, within two or three minutes, you could use AI to organize those muddiest points: “Give me the top three things these 400 students are struggling with,” something that just was not even feasible before.

So I think, you know, at scale, this can be really useful for those types of examples.
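The at-scale muddiest-point workflow Lew describes can be sketched in a few lines of code. This is an illustrative sketch only; the CSV column name, the prompt wording, and the `build_muddiest_point_prompt` function are assumptions for the sake of the example, not something from the episode or the book:

```python
# Illustrative sketch: turn exported muddiest-point responses into a single
# summarization prompt you could paste into any generative AI chat.
# The CSV layout and column name below are hypothetical assumptions.
import csv
import io

def build_muddiest_point_prompt(csv_text: str,
                                column: str = "muddiest_point",
                                top_n: int = 3) -> str:
    """Collect one column of form responses and wrap them in a prompt
    asking the AI for the top_n most common points of confusion."""
    responses = [
        row[column].strip()
        for row in csv.DictReader(io.StringIO(csv_text))
        if row.get(column, "").strip()
    ]
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(responses, 1))
    return (
        f"Here are {len(responses)} student responses to the question "
        f"'What was the muddiest point in today's class?'\n\n{numbered}\n\n"
        f"Group similar responses and list the top {top_n} things "
        "students are struggling with, each with a one-sentence summary."
    )

# Example with three mock responses from a hypothetical form export:
export = ("muddiest_point\n"
          "Chain rule\n"
          "When to use the chain rule\n"
          "Implicit differentiation\n")
print(build_muddiest_point_prompt(export))
```

With 400 real responses, the same assembly step works unchanged; only the export file grows.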

And with regards to CATs, the classroom assessment techniques, Todd's got a book of like, 50 of these things, right?

So there's so many things that you can kind of do on the fly that AI can then help you with, kind of the managerial stuff, just the paperwork thing of keeping track of that stuff so you can respond in the moment for what your students are doing.

And Lew, I appreciate you mentioning that book, it’s interesting, got me thinking real quickly here.

So that book is Classroom Assessment Techniques, which not too many years ago was like the standard.

Everybody had a copy of this thing, and they did because the first edition was 1988, I believe; the second edition was '93, though, and '93 was right about the time we started talking about learning versus teaching, and how would you know if students are learning?

And along comes Angelo and Cross; their book was out, I guess, already, and so it explained it.

I mentioned this quickly because I then worked with Tom Angelo to do the updated version, the third edition, which is now out 30 years after the second edition.

And the second edition was still selling, but we were removing things like “hand out papers to the students, have them respond on the muddiest point, and then take them home, but make sure you don't ask questions that you don't have time to respond to.

Don't ask elaborate things because it will take too much time.” Lots of stuff about CATs was about being careful not to overwhelm yourself.

And if you think about that in the old days, that's what you do.

Pieces of paper were handed in, you’d read through them and everything.

So we took out things like “share your results on the overhead projector,” those are gone, and we replaced them with things like Lew was saying: just put your students into a small think-pair-share, have them respond with a Poll Everywhere or something, and then while they're talking, come up with the four muddiest things going on, and then teach those in the second half of class.

And so, all of those classroom assessment techniques that we've been using for 30 years can now be made easier with AI.

So I'm really excited about this.

You mentioned earlier about the chapter on Universal Design for Learning, and I know you've been a long time advocate, Todd.

Can you talk a little bit about how faculty can use AI to design courses that are better for diverse learners and really have access in mind, rather than retrofitting accommodations?

Yeah.

So number one, I think the thing to point out here is, I direct these Lilly teaching conferences.

They've been around for a long time.

There's different types of sessions that I'll include in the program, and some of the sessions, and I talk to the presenters about this all the time, some of the sessions are just hot topics that you know are just going to go: engaging students in asynchronous classes is going to fill a room.

How to do active learning strategies, going to fill a room.

They always have.

Sometimes you have topics, though, that you put on the program because you have to talk about them.

And if you don't put them on the program, even though they're not popular, then it doesn't get picked up.

And I remember putting sessions in the late 1990s, early 2000s about diversity that would have four people in a room, and individuals would say, “There's only four people in there.” And I'd say, “We're still putting it on the program.” UDL was another one; anything on universal design, I can tell you, I'd put in the smallest room in the conference, because it would be a great session, but it would draw like seven people.

And one of the reasons it didn't draw many people, quite frankly, across all of higher education, is that people might love the concept, but it's really hard to do.

If we're going to do things like multiple means of expression, which is a key factor of universal design, we can't just say that you can share your results in one way.

What if you're not good at putting text together, but you really have the idea down?

So maybe you create a video, or maybe you have a play, or maybe you get in a small group and have a discussion.

Multiple means of presenting the information, so presentation: how do I explain this in a way that people can get it? That's always been hard. It is now being made really easy, because we can go to AI with the correct prompts, and we can come up with multiple ways of expression, multiple ways of presentation.

Lew was talking about having a class of 400 with the students in there.

You could say to the students: “Here's what I want to do.

We're teaching metacognition.

Everybody think of something you love to do; now, type into AI:

I love to blank.

And I want to learn more about metacognition so that I could explain it to my friend.” So I could type in I love long distance bicycling, and I want to learn metacognition.

Rebecca might put in that she loves rock climbing.

John might put in like he likes baking, and Lew might probably put in there he likes street fighting.

So you put those prompts in there, and then what pops out is everybody in the class gets to learn it in their own… I mean, what could be more universal than that?

And so I'm just really excited about it.

And we did the whole chapter, and Lew and I talked about it, and I think at the beginning I wasn't even going to do a whole chapter, and he said, “Yeah, we need to do a whole chapter.” And so I think AI will open the gate so that people can finally do universal design in a way that's meaningful without overwhelming them.

Faculty have been overwhelmed, first with COVID and now with AI, and many other things that are happening externally.

How does your book address the reality that faculty don't have enough time or energy to learn a whole new technology on top of everything else they've had to learn in the last several years?

Fair point.

I think the thing we need to start with is the idea that we want to be careful and not chase the technology, because if you do, that's going to make things worse; you're going to get frustrated.

What you want to do is try to use the tool to make things better.

So I often talk about how there's 2, t-w-o, too many shiny objects out there.

And one of the ideas with the too many shiny objects is there's all these little prompts that people share: try this, try that, try that, and that's really frustrating for faculty, because it's just a little bit beyond their skill set of what they want to do.

It's a biology example, and they have to bring it down into what they want to do.

The other thing with the shiny object is the idea of treating AI like this big shiny hammer, and then you're running around trying to find all these curricular nails to beat down: how am I going to use AI to do that? That is just such the wrong way to address AI. Letting the technology lead first, you're just going to make things worse and you're going to stress yourself out even more.

So what we want to do is we want to go back to basics.

We want to think about backward design: at the end of this course, what should my students know?

What should they understand?

Then how am I going to assess whether they understand that?

And then finally, how would I go about teaching that?

And that's where we decide whether AI might fit in or not.

So again, it's not the idea of trying to get AI in the classroom.

I think as faculty members, we want to be really careful about how do we use it to be a creative assistant for us and kind of level up and improve our existing teaching practices.

And the thing is, in the book, all these teaching practices that we're doing, these are well-reasoned, researched techniques that have been laid out in chapters four through 10: backward design, ZPD, TILT, all the fun letters, UDL, all that stuff.

But again, it's trying to use that in a practical way, trying to develop your teaching skills more, and then just using AI as the tool to help you with that guidance along the way.

I wrote an article years ago.

I was like, we just want to be well, that's all we want.

So this concept of what we're doing: if we think about what we do as faculty members, as Lew pointed out, we're chasing things a lot of times, and that's just a problem.

And this goes back to… remember when clickers came out, student response systems.

Everybody's like, “Oh, I want to use a student response system because my colleague’s using it,” and as faculty development directors, we are constantly trying to talk people out of using technology unless they actually needed it.

Backward design.

Why do you want to use it?

Student response systems are always fun, because it's like, I want to use student response systems and my friends and I would say, “Okay, so why?” “So we can gather student responses” and I’d say, “Okay, you know, you could just have them raise hands.

Like, how many of you think this?” They say, “Well, yeah, but I want to know, like, anonymously.” I say, “Okay, have them close their eyes and raise their hands.” “Well, but I want to tally them.” “Alright, well, have them close their eyes, raise their hands, and then count them.” I mean, these things are not necessary.

But then when they start to say, “Okay, but I want to know some qualitative aspects, what are they thinking and why?” Now we can use the technology.

So I think that's helpful.

And so when you go back and look at what really takes time in teaching, those types of issues do. I can't, like, have everybody write about the muddiest point, because I don't have time to read through those.

Now you do, because AI can skim through those quickly for you.

If you're short on prep time, you end up exhausted.

I was exhausted because when I taught history and systems of psychology the first time, I would be up from 10 o'clock at night till two in the morning putting together a lecture.

And I knew the stuff generally, but I had to go look for it.

But I spent all my time looking.

I was looking for the information.

Then I'd find it, and then I would embed it.

With AI I'm going to get the same kind of information, I'm just going to get it much faster, and because I can pull that together with the knowledge base I have, then I can put it out there.

So prep time is going to drop, grading time is going to drop in terms of figuring out how to read through these types of things.

Underpaid, by the way, I think you might have mentioned that one, unfortunately we can't do a lot about that.

So sorry, yeah, for the underpaid part.

But I want to finish with one quick thing here.

There's a book that I did actually get accepted for publication that I have never written.

The contract was signed in 1994, and the reason I mention this is that what I wanted to do was take a stats book, but I thought we were teaching statistics without any kind of framework.

So what if we took the old psych studies that were really the cool studies and retrofit a data set, basically one that would give you the same results?

Have students analyze the data, come up with what they've got, and then talk about what the data means, and then explain to them how that fits in. Like, that was the Bobo doll experiment, where Bandura had models hit a Bobo doll, a blow-up doll, and then they gave the same doll to kids and saw if the kids would do the same thing.

You could replicate these studies while learning statistics.

And I was told I couldn't write that because most faculty members teaching statistics don't know those studies.

And so at the time, there was no way to do it.

Now, with AI, you actually could, it would be very easy to say, I want to know more about Bandura’s Bobo doll study.

And so I think that's some changes that we're going to have that are really going to be helpful.

One of the things that you've both underscored is leaning on the faculty member's expertise because they can validate what's coming through the AI tools.

So you're not worried about the reliability, because you have that expertise, and the things that you're suggesting people use are relying on that expertise to some extent.

Yeah, and that's the thing.

They've got the expertise.

How can we help them with it?

And help them by putting together a book that's very modular.

You can just go through and work through these things.

Actually, the activities are set up, and Lew did this, I thought it was fantastic, they're put in boxes.

You could pull the whole thing out and make a workshop out of it.

Very, very easy to pull this stuff out and do that, but by making these things very consistent, we know that reduces cognitive load, which is going to reduce some of that exhaustion and extra work.

And the other thing is, the examples were set up in a way, again, that they relate directly to what you're doing in your classroom, your assignment, your example, not something where, again, you have the cognitive load of translating from somebody else's existing example.

And again, that's one of the neat things that AI allows you to do.

Because now, instead of me saying in a book, do this example, I can put a little bracket there and say, insert your example from your discipline here, and all of a sudden you have individualized learning, individualized application for the teaching technique that you're trying to do using the power of AI to do that.

And one of the things I've seen is that we have a lot of people attend workshops on AI just to learn how they can block it and how they can ignore it.

But when the focus is on activities that actually allow them to do their work more productively or more easily or more efficiently, they get a bit more excited about it and then start thinking about how they could be using it to improve their courses and to improve their lives in general.

And I think that's a nice way of introducing people to AI who are initially somewhat resistant to it.

Yeah, it's critical.

I don't think you can walk into a classroom and say, okay, students, we're going to use AI today without understanding it yourself.

And you hit the nail on the head there, the best way to do that is to do something that you are the expert in, and then learn to explore this technology on your own.

It's going to take a couple years for us to kind of get used to what's going on.

I think in a few years, we'll be ready to think about how we might use this with students and things like that.

But as a faculty writ large, we're going to have to have a better understanding ourselves of just how to use those tools.

And I think to your point about students misusing it and things, we can now have better conversations with them if we better understand how to use the tools ourselves for our own work.

And how people learn.

So we have that published in the book too.

That's what it does.

It helps you understand how these processes work, and so you learn about things like cognitive load.

So then you don't teach cognitive load itself, but, using that, you build assignments that have lower cognitive load.

That was one of the things that Lew came up with… I love that one assignment that you came up with, Lew. Basically, you load the assignment in and say, “Where are students going to have high cognitive load, in terms of really struggling with this assignment?”

And then AI says, “Oh, they're going to stumble here, here and here.” And then you start rewriting it.

It's not a lot of extra work for you, and it helps the students.

You don't have to go through examples and have all those students lined up outside your door in office hours, saying, “I don't understand why this thing works.” And so yeah, I think it's fantastic.

I appreciate the idea of meeting people where they're at, for sure.

You mentioned something about a couple of years, maybe we'll be ready to really integrate it with student learning.

In five years, maybe, what does good teaching with AI look like?

This is kind of a fun one.

I actually was asked to do a workshop recently that they were looking for a speaker, and they said they were interviewing different people, and they said, “We just want somebody come in and talk to us about what teaching will look like in 5 to 10 years.” I said, “If somebody tells you that they know what teaching would look like in 5 to 10 years, I would not ask them to give that presentation.” Because I don't know, we used to talk five years out.

Tom Angelo's book with CATs was good for 20 years and still kind of worked.

That's not going to hold anymore, because of just how much has changed in the last three to four years.

Can you imagine, just before AI took off, go back just a few years and predict that it would look like this?

So I don't think we have a clue about what overall it's going to look like.

I do think we can look at some trends, though, and we can look at some directions we're taking, because we can't just throw our hands up and say “we don't know,” but we want to be careful about what it's going to be and it's just going to be difficult, but there's lots of good stuff coming out.

And I'm going to actually ask Lew to do this, because earlier today he was explaining to me an article he's reading.

I think this stuff that's coming out is fascinating.

Sure, I'll jump in.

So one of the things, not only do I keep an eye on higher ed and what's going on with AI, but also the K through 12 space, and I follow a lot of folks on Substack, it's interesting.

I think a lot of the discourse right now is going on on Substack, as far as AI and teaching and learning go.

But anyway, there's a fellow, I'm going to try to pronounce his name, Nick Potkalitsky from Ohio, and he's working with Ohio public schools and writing about that.

So the state of Ohio has mandated that by July 1, 2026 all school systems are supposed to have some kind of AI policy in place.

So Nick is working with some of the larger school systems, Cincinnati, Columbus, Cleveland and things like that.

And in this article he wrote on Substack, he's like, “Listen folks, this is going to be a three to five year rebuild.

This is not going to be a quick policy fix.

Rebuild means rethinking instruction, assessment, and curriculum, basically from the ground up.

And how might we support teachers to make that happen and things like that.

But teachers clearly need to be in that conversation.

They need to be in that room.” This is the K through 12 space, and I think that's only going to translate to higher ed as well.

So the sooner faculty can get out there and start engaging with this technology and seeing how it works for them, the sooner they will be able to kind of think down the road of how that's going to affect their students in their classroom.

We have an epilogue in the book too; after chapter 10, there's a section in terms of what we think is coming along.

And I just want to add one more quick thing here, when I think about what good teaching is, throughout my career… so I started teaching in 1984 so I've been teaching for a minute… and the concept here is that throughout all of that time, when I think about things I wanted to do, and what frustrated me were things that I couldn't do because I didn't have time or I didn't have a way to do them.

Universal design was just one example of that, but lots of things.

With 400 students, I would love to ask them an essay question, but there was no way to look at that, even to get a sense of what they were writing.

I would like to put students in small groups and look at multiple versions of what they were talking about.

There's so many things out there.

And now, as I start looking at what the future of teaching might look like, it's kind of like one after another we just remove those barriers.

So I don't know what it will look like, but I know it will look better because it is removing things that we can't do.

And I will just end with my chunk of saying this is that we so quickly, when this came out, went right to cheating.

Just immediately, the first articles out were, how are we going to stop students from cheating?

How are we going to stop them?

Students were cheating before AI came along.

Don McCabe did a lot of work on this, and the proportion of students cheating in some way was off the chart.

It made it easier, no doubt about that, and it does create a real problem, because if students can just jump in and use AI, then there are many that will do that.

But I don't think we have to go there.

And Lew and I talked a lot about this too, is that we shouldn't just go there.

We should start looking at opportunities.

And I just talked to someone the other day who said, like, “How are we going to know when we can fully trust AI when it gives us an answer?” And that's the kind of stuff that just drives me nuts, because when we talked about hallucinations, and so many faculty immediately say, “Oh, I put that in and it made a mistake.” So do I, sometimes.

There are times when I make mistakes.

I've heard my colleagues say stuff and I say, “That's not true.” And so it's no different than humans in that respect.

We're going to be able to trust it when we can look at it and we know it's right, but it is getting better all the time.

And Lew said it earlier in this discussion we've had here today, too: we can't focus on what it was.

We have to focus on what it can be.

And I think looking forward for teaching, there's some really cool stuff coming.

I think one of the things that you're highlighting, both in your book and in how you responded, is that it remains anchored in the science of learning. It's just how we're using the technology, and things might shift, but if we keep focusing on good teaching and learning practices and just keep getting better at them, we won't be caught by surprise.

Well, it's interesting, especially at the higher ed level.

We're all trained in a very specific discipline, a very specific area.

We're experts in that field.

We were never trained to teach, or very few of us maybe had that one course as we were TAs or something like that.

And again, what the AI is allowing us to do is to further our expertise and our teaching and to kind of push the boundaries where we just didn't have that training before.

So I think, as Todd pointed out, I think there's going to be a lot of opportunities just to make things better, hopefully.

Before we wrap up this question, I'm just going to kind of put a charge out there, because Lew mentioned it a little bit earlier, in the model we have for adapting and stuff. For those of you who are listening, if you haven't done it yet, just try dumping one of your assignments into the AI, for a topic or an area that you know students struggle with a little bit.

And the book's got some examples on how to do this, but you could just say, “Where will the students struggle with this?” When I teach statistics, the distribution of sample means always gives them fits.

It took me years to figure out how to get around that, but if I was a brand new teacher and had never taught before, I could put the distribution of sample means in there and say, “Where will my students struggle?”

And it will tell you, here's some places.

And then, if you want, set that off to the side, teach it, see where they struggled, and make a comparison.

And if it kind of lines up a little bit, you know it's a gateway to kind of figuring out where the problems are going to be before they emerge.

Wouldn't that be cool?

So this is the first time you've worked together on a project like this.

Could you talk a little bit about the working relationship and how that evolved?

Yeah, let me jump in there real quickly.

Number one is writing a book is hard.

I mean, it is really, really hard.

And AI actually has helped us out with this too.

So I wrote a book, this New Science of Learning first edition took two and a half years to write that thing.

This book was nowhere near that, six, seven months, and we put a lot of time and energy into it.

But the point is, it's hard.

Working with somebody is even harder, because now you've got to talk about what you're doing and look at different points of view and everything else.

And I think what was really cool with the way Lew and I kind of set it up is that we were coming from different directions.

I had the learning theory.

He was coming in from the AI, I would write a bunch of stuff, and he'd say, that's really cool about all that metacognition.

He was coming in from the AI side. I would write a bunch of stuff, and he'd say, “That's really cool, all that metacognition. Maybe we can change it from like 3,000 words down to,” I think it was 42, one time.

And I said, I'm not explaining metacognition in 42 words.

And then he gave me like 150 so we had to compromise and go through that.

I would just say, working with him, he gets stuff done fast.

He is really, really good at things in terms of understanding AI component.

His prompting was amazing.

I don't know, he's a little mathematician-y for me sometimes, but, you know, it was alright working with him.

So all in all it was a positive experience.

And I think Lew would say the same thing.

So what's the next question?

No, it was a lot of fun.

I really appreciated, as you've already mentioned, Todd is a very prolific author, so he was able to help us navigate all of those rough and challenging waters.

So I was very appreciative.

But I think we complemented each other well, as far as what we brought to the table, and it's exactly, you know, what the book needed.

And in particular, I remember, Todd, that GLCA, the Great Lakes Colleges Association, talk that you gave. What was the title of that?

You remember something about neurodiverse learners?

It was basically how AI can help neurodiverse learners to be better learners.

The title is way better than that.

Right, and Todd, he graciously used himself as an example, I will say.

So a lot of things made sense to me after that presentation Todd, I'll just leave it at that.

So, let me just say this.

Yeah, I explained a little bit about ADHD and some of the characteristics of people who have ADHD, like time blindness, where you think something's gonna take 40 minutes, but you don't have a clue.

Might be 10 minutes.

Might be 17 hours.

Who knows.

You set up your task list, but you've never, ever finished a to-do list.

I have never finished a to-do list.

And yeah, I went through all of those types of things and a couple of other things that I kind of struggle with.

And then Lew, when we finished, he says, “It's been great working with you, but I'm not sure, if I'd heard that presentation first, I would have signed the contract.”

Well, you did it.

No, it was fine.

It was fine.

So Todd, some of our long-term listeners might be curious what role opera played in this book.

Didn't you listen to a lot of opera last time?

Yeah, you know what?

Let's just throw this in there real quickly, because this is part of the ADHD stuff that I'm learning, to whatever extent this fits in with this whole interview here.

So ADHD is one of those things that, much like dealing with gluten intolerance and several other things, people think, if you've really got it, you must have known it your whole life.

There are people who find out later in life that they have this thing.

I always knew the kind of general things about ADHD, which I always struggled with a little bit.

And I knew I had some of it.

But, I mean, I was 60 years old when I was diagnosed, and so suddenly I started to realize things, and read a bunch of books about things like time blindness and some other things. But I had a terrible time writing.

And a person, Tamara Rosier, wrote a fantastic book, Your Brain's Not Broken.

It's been reviewed in the New York Times and stuff. She said, “Listen to opera, like, crank it up.

Don't do ear damage, but crank it up and do that while you're writing.” And I said, “That will never work, because I have to have it quiet.” And she said, “Just try it.” And I put opera on and cranked it up, and I churned out work like I'd never churned it out in my life.

And she pointed out to me that my brain's always looking for stuff.

Part of it would settle on that, and then it just kind of left this focus for the rest.

Still had to have absolute quiet when I read.

But for writing, that worked.

I got put on some ADHD medicine, and two days after I started the medicine, I cannot listen to anything while I write.

To me, that's fascinating, because what it now means is my brain's actually kind of slowed down enough that that becomes a distraction.

But that to me as a cognitive person, that differentiation is fascinating.

That is fascinating. And that's a nice preview of our next podcast, which will be coming out next week.

We're talking to Karen Costa about ADHD and her experiences with that.

She’s really good.

So we always wrap up by asking, what's next?

So I'm a mathematician by training, and I was just at a recent large gathering of many of our professional organizations, and for years now, they've been calling for, especially in like the high school setting, that we need to teach more statistics, data analytics, modeling, in other words, creating mathematical pathways, kind of beyond the calculus curriculum.

The rub is that the way folks are trained, they're basically trained for the calculus curriculum and not these other things.

So what I'm doing is, I'm actually working with some organizations like Math for America to consider how we might use frameworks to increase the number of teachers who are able to develop courses in these different areas.

So what we're doing is we're looking at something I like to call AI-assisted retooling, so teachers are learning new content alongside an AI partner to expand their mathematical pathways.

So, that's kind of a really exciting thing that I'm starting to work with.

We've got a few case examples that we've been doing.

I've been a case study myself of doing that.

So, I’m really excited about that.

And of course, you know, I'm continuing to work with teaching centers on workshops and things around the book.

Nice.

For me, so I got another book started, because why not?

These are always fun.

This particular one's going to be different, though.

So I've probably said about as much as I can in terms of some of the teaching tips and kinds of things we do.

I really think that what's going to happen pretty soon is there's so much stuff coming out with AI, real life stuff is going to become something I think people are going to cherish.

So the project I'm working on right now is looking more at how I've taught in different aspects of it, but what it really is going to boil down to are things that I experienced in the classroom.

They're really kind of unique, interesting things, and then how it played out and how it could be different.

And so I had a student one time have a grand mal seizure in my class.

And the quick example here is I quickly ran down to a nursing station.

It was just around the corner.

I ran over there, got somebody, brought them back, and the nurse came back, and we came into the classroom.

And I, right now, would tell you… that was my first year of teaching that happened… I would never leave a classroom when an emergency is going on.

I should have said to somebody, “Hey, go to the nursing station.” So what I want to do is talk a little bit about what I've done, some of the ways I've done it, how I would have done it differently, and then how that ties into learning and different aspects.

So I really want to make more of a personal story.

I don't know if anybody will be interested in it, but, you know, we'll give it a try.

Sounds like an interesting book.

I'd read it.

Good.

I just sold one.

Oooh.

Yeah.

Maybe two.

Yeah, maybe two.

No worries, but thank you, appreciate that.

People have anxiety about, like, what if X, Y and Z happens in my class, and hearing personal stories about how those sorts of things have been addressed and strategies for addressing them, I think eases some worry for folks, so it could fill a gap.

Yeah, I appreciate you saying that, and I think one of the big ones is that it's been about 40 years of teaching, and over those 40 years of teaching, I think the framework I'm gonna go with is: I've made almost every mistake you can make, and I'm still seen as kind of a person who's written these books.

I do know a lot about teaching, and I've learned a lot about it, but you don't become a better teacher without making mistakes along the way.

I don't think anybody can.

And so I've done that.

And I probably should point out, too, as Lew said, I run around some campuses.

I love meeting people and going to campuses.

I’ll still keep doing that.

So Todd, three years ago, I did a column for the MAA, the Mathematical Association of America.

It's called Still Learning From My Misstakes.

And mistakes was spelled with two S's. Anyway, yeah, exactly that, that idea of showing your vulnerability, especially at the point in his career that he's at, when people think, oh, Todd Zakrajsek does everything perfectly.

I think peeling back the curtain and showing the mistakes that we made for the folks who are coming up, that is so refreshing to see that and what we did well and what we didn't do well, and then kind of examples of moving that forward.

So yeah, maybe three books, you might sell three.

Definitely worth the investment now.

Lew, that will cover my next tea.

But thank you, appreciate that.

Well, thank you both for joining us and for sharing your work.

Thank you so much.

Appreciate it.

It was fun.

Appreciate the opportunity.

Well, thank you.

If you've enjoyed this podcast, please subscribe and leave a review on Apple Podcasts or your favorite podcast service.

To continue the conversation, join us on our Tea for Teaching Facebook page.

You can find show notes, transcripts and other materials on teaforteaching.com.

Music by Michael Gary Brewer.
