
Teaching with AI

Episode Transcript

The rapid evolution of AI tools poses a challenge for educators exploring their educational applications.

In  this episode, we discuss a resource that can support faculty in this process.

Welcome to Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

This podcast series is hosted  by John Kane, an economist...

...and Rebecca Mushtare, a graphic designer...

...and features guests doing important  research and advocacy work to make higher education more inclusive  and supportive of all learners.

Our guest today is José Antonio Bowen.

He has a joint PhD in musicology and humanities, served for several years as President of Goucher College, and is the author or co-author of several books, including: Teaching Naked: How Moving Technology Out of Your College Classroom Will Improve Student Learning; Teaching Naked Techniques: A Practical Guide to Designing Better Classes; Teaching Change: How to Develop Independent Thinkers Using Relationships, Resilience, and Reflection; and, with C. Edward Watson, the first and second editions of Teaching with AI: A Practical Guide to a New Era of Human Learning.

Welcome, José.

Thanks.

Good to be here.

We're looking forward to this conversation.

Today's teas are… Are you drinking tea by any chance?

Coffee.

Sorry.

That's not uncommon; that and water are probably the most common teas.

I had tea this morning already, though, so… Okay.

Alright, so you're not anti-tea.

Alright.

I lived in England for five  years.

How could I be anti-tea?

Right?

I have a Constant Comment today.

And I have a ginger peach green tea today.

So we've invited you here today to discuss  the second edition of Teaching with AI.

We ran a very successful faculty reading group on  the first edition of this book in fall of 2024, and since the first release of Teaching with AI  in late April 2024 we've experienced extremely rapid change in the development of generative  AI platforms and tools.

What were some of the changes that prompted this relatively rapid  release of a second edition of the book?

Wow.

Yes.

I mean, first of all, the technology has  continued its march, and so we are seeing pretty steady improvement and diversification of kinds  of tools and platforms.

So we initially started a website with a list of tools.

There were the big three and a few others, and now there are probably 18 to 20 foundation models all over the world.

When we wrote the first edition, there were not all of these cultural models.

There were not Arabic and Chinese and Spanish models; there's a new Swiss model this week, which speaks 1,000 languages; and we didn't have Latimer, which is trained on Black and brown sources.

So  there are more models, there's a lot more tools, and every week seems like we get more of them.

But it's also the case that people's attitudes have changed, student use has changed, faculty use  and resistance have changed.

But a lot of it was also that we just spent a lot of time visiting  campuses.

So I think each of us visited 100 campuses this year alone.

So that's a lot of ideas  for assignments, things we hadn't thought of, that “Oh, I'm using it to do this.

I'm trying  it to do that.” So we always intended this as a practical guide.

There are lots of assignments,  lots of assessments, and there's a lot more of that now.

There's a whole new chapter on chatbots, because that wasn't really an easy thing to do when the first edition came out, and now there are hundreds of platforms that do that.

And there's also an extra chapter on literacy and ethics.

We  were initially a little scared to write a chapter on literacy.

I mean, partly, we didn't have time.

I mean, it was just we were like, “We got to get this out.” And also, I think we didn't know what  it meant at the time.

It was brand new.

And now we have literacy, we have fluency.

So this time,  there's enough, I think, research and evidence out there for us to say, “Okay, here's how people  are thinking about this.” And so there's a whole new chapter on literacy, and there's an extended  chapter on sort of role play and feedback.

That's less about tools, although the voice tools have  really… we didn't have that in the first edition, the ability for voice and also emotional voice  tools like Hume and Sesame that really do understand the tension in your voice and respond  in an appropriate way that feels and sounds human.

So role playing is a bigger thing.

I could go  on and on and on, but so much has changed.

I thought we should do it as an annual, but I don't  think I can do that.

I think that's too much work.

And it will probably stabilize to some extent, as most other tools have, at  some point in their evolution.

We thought so, but in the first three years, it certainly hasn't yet.

I'm with you.

I sure  hope so, because it's impossible to keep up.

In your earlier Teaching Naked books, you focus  on the importance of maximizing human connections in the classroom and using appropriate technology  for student work mostly outside of the classroom.

Do you still advocate a teaching naked  approach in the current environment, with the ubiquitous availability of AI tools?

Yes, mostly, I think there are going to be places  in the classroom where you have to bring AI in, the same way that you can't teach digital literacy  if you don't have the tools in the classroom.

I do think there are a whole class of assignments where  I want students to do things in class and talk to each other.

An example that I've borrowed from  Mike Caulfield's great work, or sort of altered, but used his tool, is that I have students come up  with a list of controversial statements.

What are some things that might be true, or does the Earth  have two moons?

And then the students now write a prompt, an 800-word prompt, to test this group  of ideas.

Instead of me saying, “Write an essay,” “write a prompt” sounds a little bit more meaningful.

I'd  let them go do that at home.

Go write your prompt, and then they come back.

And so now everybody  has a prompt, and now I say “So now get up with a partner and pick a controversy and test it with  your two prompts.

See who wrote the better prompt that actually gets underneath and does this thing well.” Then I show them Mike Caulfield's Deep Background, and I say, “So his is better.” So I don't show them the prompt.

I just say, “Can you alter your prompt to get it to do this, to make a  table, to divide the question into sub questions, to trace back the origins.

What are the other  things that you didn't think of?

Do you need a chart with a scale? Well, some statements might be true on a scale of one to five: this is definitely true, this is definitely not true.

But what  about the maybes?” So there's a lot of active learning in class, but the tool is still with  us in class, in the same way that I don't think we should ban PowerPoint from the classroom, but I  do think we should ban PowerPoint slides that have 400 words on them.

Put a picture, that's good, and six words… I think six words still works as a maximum, and then write that.

You use  that as the way to kind of focus the conversation.

“Oh, that's what we're talking about.

We've kind  of wandered off, but there's the picture and the six words.” So I think there's a place for some  use of technology in the classroom, like I always did.

But I still think that if you're going to  have students drive to campus, get parking and take the extra investment to be face to face,  rather than be online, you have to give them something different, something they cannot get  online, and that's the smell and the feel and the touch and just the presence of other people and  the ability to have those kinds of conversations and look how people are responding.

I still  think we should be using class for that.

I do think that, as with the internet and now with AI,  we might not need to meet three times a week for 50 minutes, especially if you've got over 50 or  60 students.

If you've got hundreds of students, you do not need to meet together with 600 people.

My suggestion is, let those 600 sessions… let that happen online, tape your videos, your lectures,  whatever, but then meet with students in small groups of 15 or 20.

So maybe that's only once a  week.

Maybe I have my 600 students and I have my four TAs.

So maybe what we do is we only meet in  groups of 12 face to face once a week.

We could radically rethink what we do with that face  time, and that's kind of what I proposed in Teaching Naked and so I still think we could do  that.

And now, instead of those lectures, maybe it's an AI tutor that does the out-of-class work,  but I certainly don't need the AI tutor in class.

One of the main concerns that faculty raise  about generative AI is the concern over academic integrity issues, so faculty sometimes opt  or wish to ban the use of generative AI or focus their efforts on trying to detect its  use.

Why is this not a very useful strategy, and what should faculty maybe  be thinking about instead?

There are two things.

So the first is, faculty should be concerned first with learning, not with academic integrity, right?

Learning.

So if students are not learning, the problem is that they're not learning, and I need to rethink  how they should be learning.

The second problem, though, is that the phrase “academic integrity”  implies that we have a special integrity.

It's not like your integrity, our integrity, right?

And the minute you say that, it's like, “Well, then in four years, I can forget about it, right?”  It's only academic.

So why not just talk about integrity?

Why not talk about authorship and the  importance of human contributions?

And by the way, what we call cheating, business calls progress.

So the first thing to realize is that if the product your human students produce is no better than AI's, you're not doing them any favors, because they won't get a job.

Yes, there are things…  we have to learn how to add, even though the calculator will do that, but there are other  reasons for that, right?

We're really learning number sense.

But in fact, most accountants are  going to use a calculator.

They're not going to add up long columns of numbers.

The learning to  add is part of the process to get us to a kind of thinking that moves us along.

So if we're clear about, oh, this is the reason you're learning to add. When the student says, “Well, an accountant is going to use Excel spreadsheets or a computer or a calculator, so why do I have to learn how to add?” Well, the answer when I was a kid was, so you get the right change when you buy a candy bar at the drugstore, and now they don't even make pennies.

Or they used to say, “Well, the patient  will die.

If you can't do long division, you won't be able to give the right kind of meds.” No doctor  anywhere does long division in the hallway on a piece of paper.

They all use an app.

Now it's  so you don't put as many zeros on your taxes, right?

You want to be able to estimate so that,  oh, I put the numbers in wrong.

This doesn't look right.

This is too many milligrams.

So the  ability to have number sense still matters, maybe matters more.

The process of learning to do those  things matters.

Is that really an integrity issue, or is that a learning issue?

I'd much rather  frame this as a learning issue and demonstrate to students that this learning will be useful.

At  the same time, I want to raise standards in the same way that a spell checker raised standards.

It did not make me a better speller, but it did make my spelling better, and that was sufficient.

So now one in three Fortune 500 companies requires Grammarly for every communication inside and outside the company.

So the requirement that you don't have spelling mistakes is now pretty common, but  I don't grade spelling like I used to.

So those 10 points that I used to give for spelling,  I now put on the thesis and the thinking, I put it someplace else.

So AI is going to make  us do the same thing.

We have to rethink what is essential for students to do, and how will I  encourage them to do it and motivate students to do it?

And how will I raise standards so in  fact the end result is something that neither AI nor student alone could do, in the same way that  my accountant could not do my 700 pages of taxes without a computer program.

She simply couldn't  do it, but she does know how to add, I think.

But she doesn't need to.

We do talk numbers all the time, and she's estimating things.

That kind  of thing happens all the time.

I'm glad.

A related concern, though, is that perhaps  students are using AI too much at the early levels of skill development, so that they're  not developing critical thinking skills and disciplinary expertise to be able to  evaluate the output of generative AI.

What are some strategies that we can  use to help students develop these skills while working on learning  how to use AI tools effectively?

The first part is: yes, they absolutely are overusing it.

But back to your cheating question, instead of calling that cheating, maybe we  should call that overreliance or misuse.

Is it really fraud?

Is it really intentional deception?

Cheating is a very loaded term, and so a student might not necessarily know when they're overusing it or how they're undermining their own learning by overreliance or misuse.

On a practical level,  there are two things that you need to use AI well.

One is that expertise does matter: being able to say, “Huh?

I don't think that's right.” I often say that AI literacy is fundamentally two things.

One is asking better questions, and the other is evaluating answers.

Those are the two things that we all do; the core of a liberal education has been those two things for thousands of years, so we all already teach AI literacy. But how do I get expertise if I have this tool that pretends to have expertise?

It's kind of the same question as the calculator problem.

And I know the calculator is not a  perfect example, but it's a good example of this.

I still need to know how to add to know when the  calculator might have made a mistake, and so even though the calculator can do the work, I need to  have that experience.

So the calculator challenged us in two ways.

One is, do I still need to teach  long division and addition of long, long lists of numbers?

The answer to both of those is not much.

I should probably do a little bit, but I don't have to spend six months like I did just adding  up long series of columns.

So I have to change what I teach, but I also have to think about the  sequence.

When do I introduce the calculator?

So the question for us is not, do I introduce AI?

The question is, when and how and what are the skills that I still need to preserve?

Spelling  is another one of those.

I'm able to write books with a spell checker, and if I had to look up  words in a dictionary still like it was 1970, I would be much slower… if I was still using index  cards for my bibliography… So, I no longer have to teach the card catalog.

Are there skills that I can stop teaching in order to add the new ones?

Academics, we love addition, we hate subtraction.

We don't like taking stuff away, but we're going to have to take something away.

But that's a big  question.

How do I do that?

But I do think we've been teaching digital literacy for a while, and we  should add AI to that.

So the question is, is this true?

Although, again, Mike Caulfield's version of this, “is this what you think it is?” is a better question.

Is this what you think it is? Especially now that you can't tell just by looking, because there's no more “Oh, it has six fingers. I'm going to show you how to identify AI pictures.” Well, that doesn't work anymore.

So  understanding what's the source?

What are the kinds of claims?

Do other people say this too?

So  that's probably a separate skill from expertise, just finding out what's true.

And again, all of  us are doing it on a daily basis, because the news is full of stuff that it's like, “Is that really  true?” And “What's the source for that?” and “is that picture really real?” So we're all having to  do that.

So that should be easy, but the expertise is the harder part.

How do I get you to be an expert if the junior-level jobs and internships are going away?

And so I think that that is  a primary question that we should be thinking about.

And my rudimentary answer is we need a lot more talk about motivation, a lot more talk about process: “Yes, you will have a calculator, but the process of learning to add and to multiply is valuable, and here is why.” And so here's what we're going to do.

And then I promise you, we will introduce the calculator.

It is coming,  but you will not be as good a user of that if you don't first do this.

And my analogy to that: I teach music, and people wanted to learn how to play jazz piano, and so you would give them these timed tests.

I'd say, “okay, E flat minor, what key are we in?” And they'd have to write that.

And so those were, like, those old multiplication, you know, 60 seconds, and you've got to, like,  scribble as fast as you can.

“I want to play the piano, why am I…” I said because, if you have to  think about what key E flat minor is in, and how many flats that is, the band is already… that  time has to be immediately in your fingers.

And so in order to do that, I'm going to give you what  feels like really rote memorization.

Five flats, by the way, is the answer.

And unless you see E  flat minor and immediately go three, two, and you have that hand ready to go, you can't converse  with me in jazz.

So I had to explain to students why I was making them do these, what appeared to  be really dumb, like rote third grade math tests, because that will allow you to speak with me.

When  you see a piece of paper and we're playing in real time and it's going, click, click, click, click, E  flat minor, F minor, you can change key.

You know how to do that because it's automatic, and so I  have to make it automatic for you.

So notice how much time I spent convincing you that you had to  know how many flats.

So that explanation is not something we're always good at.

So that's not a  great answer, but I think it's the right question.

What you're talking about is reminding me of  a conversation I had with my graduate student a couple weeks ago about changing disciplines and  fields and learning the language of a discipline, and how much that actually allowed him  to communicate about what he wanted to communicate about this whole time.

And so he's worked on a really great presentation about this exact topic that  kind of becoming fluent in the language of the discipline allows you to communicate  things that you couldn't communicate before.

And that you can do with AI.

There's a great  tool from Stanford called Storm that actually has a little button that allows you to change the  disciplinary focus.

And so one of my assignments is, “Okay, so pick a topic, pick a question, and go to Storm, and then look at it through three disciplinary lenses, and then write an essay about: What is the difference? What did you learn? What are the differences in how anthropologists and historians think about this question?”

So I do think that's an important piece.

And again,  it's often not explicit.

We should make it more explicit.

In the same way we should make the  critical thinking discussion more explicit.

For years, we've just said, “Oh, they're learning  that.

They're learning critical thinking.” It's like, “ah?” We did a great study four years ago,  and the students were like, “Critical thinking, what's that?” You just spent four years doing it.

It's like, “but you never use those words.” And so adding the words, “This is critical thinking,”  turned out to really help students.

So we've got to be more explicit about that disciplinary  knowledge and why it's useful and how we're learning it.

And I remember the light bulb in  graduate school.

I remember sitting in that talk when it was a guest speaker, and all my professors were there trying to outdo each other, and one said, “Civilization and culture,” and the other guy said, “Apollonian and Dionysian.” And it's like, “Oh, they just want to use their own terms.” It's like, “Oh, my God, they're not really arguing.

They're just like, trying to see who's top dog.

And so I remember that light bulb going, “Ah, please.”

Can students' use of AI tools help reduce the equity gaps that students have faced in their prior learning experiences?

This kind of goes  back to John's earlier question a little bit.

Yeah, so I think the answer is yes, but we have  to be careful.

So the first is that AI has bias.

We have to make sure that everybody knows this,  but it's also the case that humans have bias, and if we were ignoring human bias, we weren't  doing anybody a favor either.

So it turns out that AI bias is a little easier to fix if you  know what you're looking for.

If I know that AI tells me I should be earning less money because  I'm a woman, I need to tell it to respond to my resume like a man.

How much should a man make  in this job?

And then that's a useful tool.

So understanding the bias is even more important  than it used to be, because now I can say, “Oh, well, how could I correct this?” My example is,  if I said to you, “so rank the candidates for the new position of history professor that you're  searching for,” can you ignore the prestige of where they got their PhD?

Uhhh, that'd be “no,”  because you're a human.

It's like, you know, don't think about the cookies.

You can't not  think about something I just mentioned.

But in AI, I could say, just rank the candidates based  solely on the quality of their teaching, review the faculty based solely on the quality of  student feedback.

I can do things as a check on equity.

I can also do things like make you a bot. Say you didn't take calculus that's required for this course, or that most people have had. So I've created this bot that will help you figure out what you're missing, or a bot that will help the student who didn't take biology before chemistry, and I could do individualized tools.

And AI is pretty good at learning, “Well, you tend to make this mistake a lot.

This is a  grammatical mistake you always make, and so you made this in the last paper,” and one that people  don't think of, which is that most of us, well, all of us, because I do it too, and everybody does  it.

All of us provide students with way too much feedback at once.

Here are 37 things to fix about  your paper.

And the truth is, you can only do one, one at a time.

After you fix that, I'll give you something else.

This hit me when I was trying to relearn tennis a few summers ago, and it was like,  move your feet, hold the hand, the grip.

It's like, “Shut up.

Just give me some more balls.

One thing at a time, let me just put my feet right, and then… So I use AI to take my feedback and ask: what is the one most important thing the student should fix for next time? And then save the rest, because I overdo it.

And so I can tell the AI not to give the student too much feedback at once,  and it can do it.

And I have a… all of us, I think… have a really hard time doing that.
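What that filtering step might look like in practice: a minimal sketch, assuming the openai Python package; the model name and prompt wording here are illustrative, not anything specified in the episode.

```python
# Minimal sketch: have a model pick the single most important fix
# from an instructor's full feedback, and hold the rest for later.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def one_thing_at_a_time(full_feedback: str) -> str:
    """Return only the highest-priority item from a long feedback list."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[
            {"role": "system",
             "content": "You are a writing coach. From the instructor's "
                        "feedback below, return ONLY the one most important "
                        "thing the student should fix before the next draft. "
                        "Do not mention the other items."},
            {"role": "user", "content": full_feedback},
        ],
    )
    return response.choices[0].message.content

# Example: 37 comments go in, one priority comes out.
# print(one_thing_at_a_time(open("feedback.txt").read()))
```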

So the potential for AI to close equity gaps, to help us see bias, to make more ethical decisions, is there, but it is not going to happen automatically.

We have to be very, very careful  and very aware of the kinds of things AI gets wrong and the kinds of biases that are built into  it, in order to make sure that doesn't happen.

But as a check on human bias, it's really a pretty  amazing new tool that we didn't used to have.

In terms of that issue of feedback, when students are given 20 or 30 different things to correct or to improve in a paper or assignment, they will often choose whatever is easiest, which is not always what is most important.

And that idea of focusing on the most critical or most important  type of feedback makes a lot of sense.

Well, I can give an example.

So this is a real example from a PhD student who comes back to me and says, “Oh my god, I handed my dissertation to my three readers, and I got three different types of feedback.

My God, they don't  agree.

Is this normal for you people?” Uh, yes, you got contradictory feedback.

Congratulations.

Welcome to the club.

And she's like, “Well, what do I do?

One of them wants me to cut chapter two.

The other one wants more.

What do I do?” I said, “So, you put the whole thing into AI with the three letters, and you say, ‘Make a table listing all of the things they want me to fix, from easiest to fix to hardest to fix, and then a column: does everybody agree that should be fixed? Check. And then another column: if they disagree, what's the easiest way to make them both a little happy?’” Graduation.

So easiest to fix, the hardest to fix.

So fix the easy stuff first.

That's fine, but an AI here could keep you on track to  make sure you actually prioritize.

So yeah, here are the three easy things you should fix,  because that spelling mistake was terrible.

But here is the real issue, the one real thing you  should focus on.

And I would say students leave a conference with me, and I didn't know this until  AI, but now they go to the AI bot and they say, “Okay, what's the one thing I really need to  fix?” Oh, my God, I didn't make that clear.

And I know that because I can see them talking to  my bot about the conversation they had with me.

With all the changes that are occurring  in terms of the development of AI, we're beginning to see some shifts in labor  markets that are not that well defined yet, but it's very likely that many of the jobs that  students were preparing for in the past may not be there or may not be there in the same  way going forward, and jobs are very likely to be changing in the future.

I think you've already partly addressed this, but how can we prepare students to thrive in a world where the nature of the tasks that they'll be doing might be changing fairly dramatically in response to more widespread use of AI in many occupations?

So, you know, I've been saying this for a while,  but the future has been unknown for a long time.

And so first you said, “Look, your major has never  mattered less.

I don't know what the jobs are going to be in five years.” In fact, when I wrote  Teaching Naked, my daughter was still in college, and then she became a director of social media,  which was not a joke I put in the book because it hadn't happened yet, but there's like, oh, wait a  second, you could be a director… this didn't exist five years ago, and now you can direct it.

So this isn't new; jobs are changing and new ones are being created all the time.

The internet destroyed  jobs and created lots of IT jobs, but it's impossible to guess what those are going to be.

So in some ways, that's great news for those of us in the humanities and liberal arts, because  it means that the thinking is going to matter.

Good use of AI is creative thinking, asking better  questions, evaluating answers.

Computer science? It's hard to know exactly which thing to do. But that was also true 20 years ago: your degree in the iPhone 1 was not going to be useful forever, unless you could figure out there were going to be new iPhones.

And so adaptability, flexibility,  has always mattered.

It's just it was slower, and now we're seeing it's happening more quickly.

It is very hard to know what the job market is going to look like, whether it's going to be that we all work three-day weeks, whether certain classes of jobs go away.

Is AI really taking jobs?

It's clearly going to take some jobs, but the research says that every job is going to change.

There's at least one task in every job that will change.

And so for faculty, my first thing is find out what you no longer need to do, whether it's making a makeup test, grading, drafts of accreditation reports, assessment of students for accreditation, assessing student learning.

There's got to be something that you  hate to do that AI could already do better now.

My doctor no longer sits at the computer and types; my doctor now gives me eye contact, which is a little creepy. But the evidence says that doctors who use AI transcribers to do the nurse's notes, the insurance notes, whatever, have more cognitive capacity to focus on patients, and they're happier, there's less burnout.

So this  is a good thing, right?

Because no doctor went to insurance school, they went to medical school, so  insurance notes are not part of the job I love.

So could AI help with that?

And my doctor says the AI  is now better, the first six months, he checked it and checked it, but it turned out he was looking  stuff up, because the AI is better at it.

So what could faculty do that AI is already better at?

Chairs, the department schedule? Because that's what's going to happen. Or the bibliography, the literature review, keeping track of the latest trends.

So that should give you time to do  something else.

And for faculty, that will leave time to learn a bit about AI, because you've got a few hours back. But I think for students, the question is, how can you be constantly thinking about what else you could do, how your job is changing?

I'll give you one example.

So in the old days, if I had  a new product or I needed to know how to price my product, I would send a group of interns out  to various cities to do focus groups.

Go find 12 people at the supermarket and give them a taste,  see if they would pay $4.99 for 12 ounces of this.

Now I can do that with AI.

“You're a 35-year-old suburban soccer mom and you live in Oklahoma,” and it can give you the synthetic personas to do marketing.

So on the one hand you say, “Oh, I'm going to lose my job.” Well, no, because you're going to go back to your boss and say, “Okay, boss, instead of doing 10 focus groups, I did 1,000.

I did them in every state, and I did them by year.

I did them for every age between 30 and 40, and I did them for rural.”

And so that's actually happening.

So somebody is setting up synthetic  personas to do marketing focus groups for their products, and they're now able to do work of  much higher quality than they were.
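A rough sketch of how that kind of synthetic-persona polling might be wired up, assuming the openai Python package; the personas, product, question, and model name are invented for illustration, and synthetic answers supplement rather than replace real respondents.

```python
# Rough sketch: run one pricing question past many synthetic personas.
# Persona attributes and the product are invented for illustration.
from itertools import product as cartesian
from openai import OpenAI

client = OpenAI()

AGES = range(30, 41)                      # "every age between 30 and 40"
SETTINGS = ["urban", "suburban", "rural"]
STATES = ["Oklahoma", "Ohio", "Oregon"]   # extend to all 50 as needed

QUESTION = ("Would you pay $4.99 for a 12-ounce bottle of cold-brew "
            "coffee? Answer yes or no, then one sentence of reasoning.")

responses = []
for age, setting, state in cartesian(AGES, SETTINGS, STATES):
    persona = (f"You are a {age}-year-old {setting} parent living in "
               f"{state} who does the household grocery shopping.")
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "system", "content": persona},
                  {"role": "user", "content": QUESTION}],
    )
    responses.append((age, setting, state, reply.choices[0].message.content))

# 'responses' now holds ~99 synthetic focus-group answers to tabulate.
print(len(responses), "personas surveyed")
```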

So there's still a job, but the job is now different than  it was, and so I would classify that as creative thinking that's raising standards but also how  could I do this, not just more efficiently, how could I do this better?

What do I want?

I want  to know how much to charge for my product.

What could I do to get better data?

And the answer is,  “Oh, I could create 1000 synthetic AI personas to do this market research and go back to my boss  with way more data than she was expecting.” Those are the assignments I like, too, right?

So I  might give students an assignment and say, “Okay, so you need to figure out a way to use AI to do  something you couldn't do by yourself.

How do you make this better?

What else could you now do  that you couldn't do before?” And that's thinking, that's the critical thinking, part.

And so the  process becomes important.

It's not just did you produce this thing, but how did you think  about it?

How did you make sure it was right?

How did you validate?

Those are going to be  really important questions in the future.

Did you validate?

Did you make it better?

We  should be able to do that.

We love that stuff.

One of the things that we've talked a bit  about today is the rapid amount of change, change in what jobs look like, changes  in tools.

It's hard to keep up…

Impossible.

Right.

So what are some low-effort and high-impact AI practices that faculty might explore if they want to get started, but haven't quite gotten there yet?

Well, the first thing I think you need to do is  to try it in your own life.

I actually think: just try an AI browser, try shopping, try to see if AI could help you buy a new subwoofer, buy appropriate gifts for your kids or whatever.

I have a really hard relative: Uncle Larry plays the ukulele, and he likes whiskey, but I got him that last year.

Give me 10 ideas for new gifts.

Link me to the website.

Shopping is really  changing, so just finding a way to use it.

So we have this website: weteachwithai.com.

We have lots  of models, lots of tools.

Try a literature search with Undermind.

That will change your life.

Try FutureHouse, if you've not tried that, if you're a STEM person.

Try some of these tools.

And start to think, “Aha.” Because if you try a disciplinary tool, right? We talked about disciplines earlier. You'll think, “Aha, this is going to change how experiments are run.

This  is going to change how literature search is done.

This is going to change how verification is done.”  And now you have the basis for rethinking your assignments.

But the easiest thing for faculty to do is to take your syllabus, and if you're nervous about putting your whole syllabus in, then start with your academic integrity policy. Take your cheating policy, put it into AI and say, “Evaluate this as if you were…” and then put in the description of the students in your class, right, 18-year-olds at Nebraska City College, whatever, and then ask what might be insensitive or unclear. “How could I make this policy more motivating or more meaningful to…” and then describe your students.

That works for every assignment, it works for  your syllabus.

In fact, one thing you can do is build a course profile.

So I build a course  profile.

So I have 64 students.

I have three who have this accommodation.

I have this student who  didn't take calculus.

I put in the details.

Then maybe I even ask them what they like.

I'll come  back to that.

And then I say, “Is there anything about this reading list or this assignment that is not going to feel inclusive to my students?”

I'm teaching engineering, and I didn't use to have a lot of women. Now I have more women, but I'm using the same syllabus from 1975. Could I change some of the examples so they might be more inclusive of a more international or gender-diverse group of students?

So get feedback from your students.

It's a focus group, essentially,  right?

Because you don't know… you've been here too long.

You like school so much, you're still  here.

You're weird, all of those things.

So if I want to know how an 18-year-old would respond to my assignment, I need to find an 18-year-old.

So those of us who had kids go through college found  that very useful.

It's like, hey, what do you think of this?

But once they graduate, it's like,  “Oh, my focus group is gone.” So now I could use AI to get feedback.

So a course profile is a great  way to do that, and that's low-hanging fruit.

So you've suggested a few ways in which faculty  can use AI to improve their teaching.

Are there other things they can do to update their  courses, to make them more inclusive, etc?

Yeah, I have a prompt on the weteachwithai.com website that says: “So here's my syllabus.

I want to update my syllabus.

I'm really skeptical about  AI, but I know students are using AI.

So what are some things I need to know?

How do I update my  policies?

Give me 10 ideas for how to create new assignments that might prepare students for this new world.

And by the way, you're an expert.” Learning to prompt, right, recognizing that you  have to tell it it's an expert, is useful.

So I think that's a pretty easy thing to do.

Another  thing you could do is customize your assignments.

Ask AI, “I need an idea for a 10-minute activity  to break up this lecture.

I need an idea for new assignments, or I have this assignment, but I have  50 students.

I know they have different interests.

Not all of them are majors, some of them are  pre-med.

Could you create different versions of this, one version of the problem set for pre-meds,  one for this, one for that?” Because nobody really cares about Train A leaving the station and  going 20 miles an hour faster than train….

Right?

Nobody cares.

So why not?

The Republicans are registering 20 voters an hour and the Democrats are registering… It's the same math, but the research says that students are going to do that problem more because it feels like a poli sci problem. Or the wide receiver you're covering runs 30 miles an hour and you're only… that feels more like a football problem.

And so it turns out that matters.

So you could even go to the extent of  customizing every assignment for every student.

Do a survey before class or the first day of class.

So give me an alias.

I don't have to put your name in the AI, but give me an alias and then tell me  what motivates you to do your best work.

What do you want to be when you grow up?

What are your  favorite colors?

What's your favorite sports team?

What are your favorite hobbies?

And then I feed my generic assignment into the AI with that spreadsheet, and I say, now create an individualized assignment, with a unique link that you will then email to each of the students individually, that's a customized version of this assignment.
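A minimal sketch of this survey-to-assignment mail merge, assuming a survey export in survey.csv and the openai Python package; the column names, file names, and prompt wording are illustrative, and the unique-link emailing step is left out here.

```python
# Minimal sketch: customize one generic assignment per student alias.
# Assumes survey.csv with columns "alias", "motivation", "interests"
# (illustrative names) and an OPENAI_API_KEY in the environment.
import csv
from openai import OpenAI

client = OpenAI()

GENERIC_ASSIGNMENT = ("Write a five-page paper on Hamlet that makes an "
                      "argument and supports it with close reading.")

def customize(alias: str, motivation: str, interests: str) -> str:
    """Ask the model to tailor the generic assignment to one student."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You customize course assignments. Keep the "
                        "learning goals identical; change only the framing "
                        "and examples to match the student's interests."},
            {"role": "user",
             "content": f"Assignment:\n{GENERIC_ASSIGNMENT}\n\n"
                        f"Student {alias} is motivated by {motivation} "
                        f"and enjoys {interests}. Produce a customized "
                        f"version of the assignment."},
        ],
    )
    return response.choices[0].message.content

with open("survey.csv", newline="") as f:
    for row in csv.DictReader(f):
        text = customize(row["alias"], row["motivation"], row["interests"])
        # One file per alias; generating and emailing links is omitted.
        with open(f"assignment_{row['alias']}.txt", "w") as out:
            out.write(text)
```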

You can also do this in class. So an example: “So write a five-page paper on Hamlet.” Okay, so students are like, “Oh, really?” So first what I want you to do is, I want you to tell the AI a bit about what really motivates you.

Tell it about your passions for  baseball or dancing or whatever it is, and then ask it, how could you customize this assignment  so that you could write a paper about Hamlet that overlaps with what really motivates you?

Could you  write a paper on Hamlet and baseball or Hamlet, using analogies for whatever, and so AI is  actually really good at that creative back and forth, because it hallucinates.

It makes stuff  up.

It's creative.

So that's the kind of way where I can get students to increase their motivation through more customization of their assignment, because they're using an AI to create it. And again, I couldn't do that at scale with 600 students, but with AI, I can customize 600 assignments just as fast as I can customize three.

You're starting to hint at this, but if faculty implement some of the strategies you've talked about today that are in your book, what types of changes would you expect to see in the classroom and in student learning outcomes?

Well, I think that we have two problems.

One is that we charge a lot of money for something that's now out of date.

If you're still  delivering content in a lecture, there are cheaper ways to do that.

If you're giving a credential,  there are cheaper ways to get the credential.

But if what we're teaching is an experience of  critical thinking, of being with other people, of learning about diversity, we should double  down on that part, and we can outsource again, not unlike I talked about in Teaching Naked and  the internet, but it didn't quite happen with the internet, right?

But we could outsource  some of the content acquisition.

They could do things outside of class to get first contact  with content.

And so I do think that classrooms will probably physically change.

We do not need  600-seat lecture halls anymore.

We need smaller classrooms.

We may not need to meet as often  with students, but we do want to meet them in smaller groups.

So I think that's going to  change, but I think more face-to-face human reactions.

If you have an asynchronous course that  was mostly online or asynchronous, most of those students are within a few 100 miles.

They are  fairly local.

I was a consultant for one state, and they were having this problem because the  students, they're spread out.

And I said, “Well, why are you moving the students around?

Move  the faculty around.

You have seven campuses in your state.

The faculty could go to a different campus each day and meet with students, and then the rest of it is online already.” So we have to think differently about what our classrooms look like, what we do in those classrooms.

But I think that's  going to be the harder change.

Certainly, AI could do grading of papers.

There's a whole lot of  caveats to that, but AI could do content delivery.

Instead of having to read the chapter, work with my bot that's interactive, an interactive textbook. Instead of a midterm, maybe I have you talk to Einstein, and when Einstein thinks you know enough about thermodynamics, the chatbot puts a grade in the grade book.

So those things I think could be automated if we do them carefully.

The  bit we can’t automate is the “I believe you can do it” piece.

“I believe in you, and I'm a human.

I believe in you.

You can do this.” That bit we can't outsource.

And so the question is, how do  we make sure we have time for that.

My example is advising.

So we think we're advising.

The student  comes into our office for a 20-minute meeting, and we go, you don't have a prerequisite,  but you want to study abroad next spring, so you have to take this course now.

Oh, you  don't like vampires, and you want to switch?

Well, let's find another section, right?

And we spend 20  minutes adjusting their schedule, and it's like, “Oh, bye.” And we never got to ask them, “How are  ya?

How do you feel?

Do you have friends?” That's the mentoring piece.

And what we do often is we do  administration stuff.

Is your schedule right?

Do you have the right prerequisite?

I have to sign  this.

That's administration.

So if we separate administration from real mentoring, perhaps I now have a 20-minute mentoring conversation with you and say, “You know, AI is going to make sure you have the right prerequisites, and it's going to adjust your schedule.

And so I'm not going to  do that.

How are you?” Not everybody can do that, by the way.

We have some faculty who are not good  at relationships.

You used to be able to skate by with, “I'm a content expert.

I got my three Nobel  Prizes.

I'm just going to lecture.” I think that's going to be harder.

So I think that we're going  to see changes to everything about the campus, how it looks physically, how we divide up our  time, what people do with students.

And I do think the stand-and-deliver lecture is probably going to finally, maybe, go away.

Maybe.

Turning just a little bit to institutional policies: are there any sorts of policies you'd recommend for institutions with regard to AI? Many campuses have introduced syllabus requirements, that there be an AI statement or something to that effect.

Many campuses have introduced syllabus requirements, that  there be an AI statement or something to that effect.

Some campuses have gone a little  bit further.

Are there any guidelines that you'd suggest for campuses that would involve  best practices, currently, in terms of AI use?

Yeah.

So the good news is, the AAC&U has  been doing this institute.

This is, I think, the third year, and so we have 180 campuses, so  we're seeing a lot of what people are doing.

So the first is, last year, everybody was worried  about policy.

So I would worry less about policy, because policy is changing.

It's not a bad idea  to make sure everybody has a syllabus statement, but lots of people will just put “No AI,” and then  they think they're done.

So I actually think the first place to start is with control.

So I call  this CLEAR.

So C is for controls.

Do I have a safe AI?

Do I know which AI I can use?

Can I  put student data in?

Can I not?

Do I have to use Copilot? Or if I'm not going to use Copilot, what can I use?

And what data can I put in there?

Because people are confused and they're told, just use Copilot… But I want to use Nano Banana.

Well, that's okay, as long as you're not putting student  data in.

So understanding what the controls are.

L is for learning.

You've got to give them places  to play.

E is for examples.

How are you using it in your course?

And then A is for adoption.

So we need some mechanism for, “Okay, let's run a pilot.”

Let's do that.

Let's try that.

And then R is for  review.

Did this help or did it hurt?

Oh, that was bad.

But there's going to be failure.

This is  not all going to work, but in order to figure out what does work, just like anybody who builds…  I think it's Adidas who has a failure funeral when shoes don't work, they acknowledge that not  every pair of shoes is going to be a best seller, not everything we do is going to work.

So we have  to be able to experiment and try and forgive each other, and be able to have a climate where  we know what's safe.

We have some examples, we're all learning, and then we review to say, “Oh, we tried it twice,” because you can't try it once. I mean, some things you can, but a lot of the time we think, “Oh, I tried this new thing in class and it didn't work.”

So yeah, that's  because the first time you did it, you were crap at it.

You've been lecturing for 20 years.

And so  you tried this new active learning thing, and it didn't work because you didn't introduce it right,  you didn't set it up right?

You aren't good at it.

Try it again.

You'll be better at it next time.

And so I think that for teaching, we often think that teaching is just content.

And I discovered  when I started doing corporate work and I had to do the same 3-hour presentation 35 times in a row,  that it got better by the 34th or the 35th time, that actually practicing… I wasn't changing the  content.

I was just getting better at the delivery and the management of the client, all those sorts  of things.

And that rarely happens in higher ed: doing the same thing over and over again, getting feedback, and trying to make the activity and the learning better, and not just, “Oh, well, I'll do that next year and try some new thing.” So I would worry a little less about policy and a little more about building the right culture where people are having the right conversations and experimenting.

I do think we're starting to have the curriculum conversation, which is, where  does it go?

I think people now recognize, okay, I can't graduate students without any AI.

That's  got to be someplace.

So the first place to start is the senior seminar, because they're about to go look for jobs, and so they should know how to do a practice mock interview with an AI before they go: give me a 30-minute mock interview in the format that Microsoft uses for my interview next Tuesday.

Here's the job description.

That's an easy thing to do with an AI and have students practice.

But  it's also, read between the lines of this job description to tell me what they're really looking  for, so I can write a better cover letter.

AI can read between the lines.

So that's cool.

And also,  how is the discipline changing?

Think about it, you just spent four years as an accounting major,  and then Excel spreadsheet gets introduced when you're a junior, but the curriculum can't catch  up because, like, well, we don't teach that, because it didn't exist when you started.

Four  years from now, maybe.

So the seniors’ capstone is the place to say, well, you know what, this new  thing just happened.

It's called a spreadsheet, and I'm going to introduce you to the spreadsheet,  because you're an accountant.

So that's a good place to say, “Okay, you still need to learn how to do all those other skills by hand, but before you graduate…” And that means you've got at least one person in every department who is having to keep up with what is going to happen when our students graduate, and what they are no longer going to be asked to do, and what new thing they will be asked to do that we might need to prepare them for.

So, senior seminar is a great thing to say, “Let's put some AI there.” Probably in the freshman year, too… again, I would redefine cheating. I would think about it as what's misuse and what's really fraud.

Why does integrity matter?

Why do credits matter?

It's the same thing with the bibliography.

That's not about academic  integrity.

You go to the movies and what do they do at the end?

They show you the credits.

Is  that academic integrity?

No, that's the movies, right?

But telling people who gets credit for  this matters, and students get that, right?

They really care about that.

If you steal somebody  else's TikTok, that's not academic integrity, but it matters.

So there are probably other places  you need to do this, but I don't think you can have one policy for everybody.

I am noticing  that some universities are now having a policy that you can no longer ban the use of AI for  specific things.

In other words, you can ban AI, but you can't ban it for feedback, or you can't  ban it for brainstorming.

So they're limiting, because students were going from all AI to no  AI, and it's like, “I'm confused.” So nobody can ban AI for everything, in the same way that  you can't ban a spell checker, right?

We don't do that anymore.

We used to, we used to ban pencils  and erasers too.

We didn't like the typewriter either.

But we've had a long history of this.

It's going to change everything.

Yep, and it did.

So going back to your idea of controls, rather than thinking about policies, but giving some boundaries on things: you mentioned identifying specific tools that people are allowed to use, and what kind of data they can put into such tools.

Paid services often have more capabilities, and so in the interest of equity, should institutions adopt some sort of standardized tool that everyone then has access to?

Ah… it's a lot of money.

So I do think that we  need to think carefully about those issues.

It may be the case. But the problem is that Microsoft Copilot is expensive, and you buy one tool.

So I think the first thing is for people to understand what happens to data when you put it in. If you're already using Microsoft on your computer, and you wrote your dissertation in Microsoft and you use iCloud, or you used Microsoft 365, OneDrive, or Google Docs to write your dissertation, it's already in the AI.

You have much bigger  privacy concerns when you talk in your car.

There are other things that we need to talk about  for privacy.

So I think the first step is helping people understand what does happen to your data  when you put it in an AI and which AI.

And by the way, there are these controls.

If you open your  settings, you can turn some things off… not sure I trust them… but you need to know that the default  in Claude is off, but the default in ChatGPT is on, and that will probably change tomorrow.

So you need to be checking your settings, and you need to be thinking about what kind of  data is it ethical and legal to put in here?

And do I want to do that?

If I do that, then  there are a lot of free tools I could use.

So I think step one is not saying, “Do whatever  you want, but only do it in this tool,” because then everybody's an idiot.

Much better to educate people, that's what we do, and to say, “Let me educate you as to what the tools do and how they do it, and what the parameters are.”

And then you can make informed decisions about whether or  not you want to use free Gemini twice a day, which is all you can use in the best model, but  at least you have that.

And then you can switch to ChatGPT for your three times a day or et  cetera.

And we're going to have this model, and you can always use this model for this kind  of thing.

But I think if you don't also educate people as to how this works, then having one  tool for everybody is just a waste of money, because not everybody's going to use it.

People  are going to use their favorite tool anyway, and they're going to use it poorly and with  risk.

So the first question, the way to reduce your liability is to educate, not to restrict.

And so I would start with that. I like BoodleBox because I think the pay structure is reasonable.

It's a usage-based model.

It gives you access to lots of different models, not just ChatGPT or Gemini, because one of the things we now know is that Claude is better at Excel than Copilot, and that it used to be the best writer, but now ChatGPT is a better writer, and Claude is a little verbose, and Gemini does other more…

So they are distinguishing themselves.

They're doing different things.

And of course, we have all the millions of APIs that really  do different things, and so I don't think, for the moment, anybody can get by with just using  one model.

I mean, yeah, you should try making a picture in Gemini and writing text.

That's good.

But we're in the Ask Jeeves, AltaVista phase.

I don't know where we're going to end up.

And so  for the moment, it's useful to try a couple of things if it doesn't work in one try another,  but to understand how data and privacy work.

And that reminds me of Dogpile,  I think that was the name of it, where if you put a query in there, it  would search all of the major tools, which sounds like perhaps the  best way of approaching this now.

Yeah, and there are tools that do that.

Poe will do that, but I also think that it's not just about getting in it, because you have to follow up.

Iteration is a part of good usage.

And so picking the right tool is important, but mostly following  up.

I was doing some shopping the other day, and I had a really odd problem.

I have this old set of speakers and an amp from the 1990s, so I needed a new piece of equipment.

So I  tried Gemini, ChatGPT.

I was using Comet.

I tried different things, and I didn't know what  the answer should be, but I got pretty good advice.

But let me just try this over here.

Let me try changing the prompt a little bit, so I think that sort of skill is going  to be necessary for everybody to manage, in the same way that digital literacy, not  everything you see on the internet is true.

What?

I know, I know.

It's hard.

Well, we always end with the question: “What's next?” …which is an especially important question in addressing AI.

Oh gosh, yeah, a nap would be the right thing.

Well, here's what I think.

So I think what's next is that we're about to see lots of new  competition from places you didn't expect.

This one I do know: we're going to see a $6,000 eMBA from an accredited university rolled out in the next couple of weeks, the whole degree.

The professors have been turned into AI avatars, so it's scalable.

So it's cheaper than your  MBA or even your eMBA, and they're hoping to scale it.

So we're going to see degrees that  are totally AI, that are going to be cheap and scalable.

And this is something new, because  traditionally in strategy, you can either be customized and expensive or low cost.

Can't  be both.

But AI allows you to be low cost and customized.

So your new competition is an entirely  new class of things.

On the other end, we're going to see boutique institutions.

You're going to see  Harvard.

They don't have to change because they're Harvard.

“We have in-person classes, we do this,” but they're expensive.

So the question is, for most of us, where in that spectrum are  you going to be?

Most of us are not Harvard, so we can't charge what they charge and be boutique.

So are we organic vegan boutique, or are we, you know, mass?

So most of  us are going to be neither extreme, but those two things are coming at us, and so  the market is about to differentiate or spread.

And so for most places, thinking about what kind  of institution should we be?

Where are we going to plant our flag?

And it's not whether or  not we're going to have AI for most of us, it's how much and where.

Harvard could ban AI  and get away with it and give you a four-year degree with no AI because they're selling a  credential, but most of us cannot do that.

Well, thank you so much.

This has been a great  conversation.

We appreciate you joining us today.

I just make stuff up as I go along.

So does AI.

And that's why we're both creative people, or creative entities, I guess.

Beings?

I don’t know.

I don’t know.

And by the time we release this, the second  edition of your book will be out.

In fact, my copy is arriving in my mail tomorrow.

I'm looking forward to the second edition.

Yeah, December 2 is the drop date, so Amazon should switch everything over to the second edition tomorrow, and there's a lot more.

We didn't think it was  going to be 40% new, but it is.

We thought, oh, we'll make some updates and we'll fix  this.

It's like, “Oh God, scrap that, whole new chapter.” So it's a thicker book with  a lot more than we thought it was going to be.

As I think has happened with AI in general, it's a lot more than many people thought  it was going to be when it first appeared.

There you go.

Thank you.

Thank you.

If you've enjoyed this podcast, please  subscribe and leave a review on Apple Podcasts or your favorite podcast  service.

To continue the conversation, join us on our Tea for Teaching Facebook page.

You can find show notes, transcripts, and other materials on teaforteaching.com.

Music by Michael Gary Brewer.

Editing Assistance provided by Ryan Metzler.
