
Dr Michelle Dickinson on the power and peril of AI

Episode Transcript

Speaker 1

Hi, I'm Louise, and I'm Francesca Rudkin, and welcome to season five of our New Zealand Herald podcast, The Little Things.

Thanks for joining us.

Speaker 2

In this podcast, we sit down with the experts in their fields to unpack the little things that can help us navigate our lives at our age and stage, and really at any age and stage.

We're here to cut through the overload of information and get into the nitty-gritty of the things that impact us on the daily.

Speaker 1

One topic that we've been wanting to dive into is AI, artificial intelligence.

So since we started hosting Little Things in twenty twenty three, AI has already transformed the way many of us work, communicate and process information.

Whether we realize it or not, it touches our daily life constantly, even for those who'd call themselves minimal tech users.

Speaker 2

Yeah, and like a lot of us, I have been slightly avoiding this topic, but then I realized I use ChatGPT here and there, and I rely on social media to connect with research participants in my other job. In fact, with the project I'm running now, our lovely participants enrol through social media; they can find out everything about the studies,

sign up, and complete surveys without ever speaking to a person.

Of course, they can still email me, and it is my preference to talk to people.

But anyway, it's a huge shift.

I started back in participant-based research in twenty fourteen, and when I started, we were still calling people: yeah, would you like to participate?

Speaker 1

So when you said that you've been avoiding this topic, is that just sort of been?

And I think, and I do not judge you for this, because I think a lot of New Zealanders, myself included, have at points just gone...

I've quite consciously gone, I'm just going to put my head in the sand here and kind of just hope that this doesn't become a problem and it doesn't really affect me, because it's quite complicated and I don't quite know how it's going to impact me or where it's going to go next.

Speaker 2

Yeah, and I think, you know, I do listen to a lot of podcasts, and I suppose my algorithm might have picked up on it; the ones that I do listen to about AI freak me out.

Speaker 1

But I love the apocalyptic ones. Oh, you know, like how far it could go is fascinating.

I mean, it's just fascinating that, you know, the people, the companies who are building these models, they don't know exactly why or how it works.

I mean, it is pretty fascinating, and...

Speaker 2

There are just forces at play that make me a little bit uncomfortable.

But that's the thing I need to get over and get stuck into it.

Speaker 3

You know.

Speaker 2

There was a comment made by one of the godfathers of the invention of AI who said, if you want to know what it's like to not be the apex intelligence, ask a chicken.

And I'm like, I don't want to be the chicken.

No, so I'd better keep learning.

Speaker 1

I've been trying to open my mind a lot more to AI and how it can be helpful for me in my day-to-day life and in my job.

And there are definitely ways it helps.

Like if I get a government report and it's hundreds of pages long, I can put it into AI and it can give me a page-long summary, and yes, maybe I don't one hundred percent trust it, but it gives me a good gist of what I'm dealing with.

If I'm short on time, I can fact-check it. I totally understand how people would use it to transcribe meetings, how people, you know, would use it like I do for document summaries.

I mean, we're using it all the time with all our apps and our streaming services and things like that.

So there are a lot of ways that it can be really beneficial to help us to take the mental load off, or to take the workload off if you're a small business or something.

But yeah, I have to admit I do kind of get drawn to the slightly more sort of alarming stories about it.

Speaker 2

Yeah, the robot side of things.

Yeah, yeah. But do we really think that AI on balance has the potential to improve our lives or will make them harder?

Surely the key is going to be using AI responsibly.

We can enjoy its benefits while managing its challenges.

I don't know are humans capable of this?

Speaker 1

I think so. To help us navigate AI, we're joined by Dr Michelle Dickinson.

Michelle is a globally acclaimed keynote speaker, nanotechnologist, and award winning science communicator.

Michelle is the founder of Matter Workshop, where she specializes in making complex tech like AI more accessible and easier to understand bridging the gap between people, science and technology, to help us feel more confident in this world of tech that is changing very very quickly.

Michelle, it's always lovely to have you with us.

Thanks so much for your time.

Welcome. Thanks, thanks for having me.

Speaker 3

I'm excited.

Speaker 1

Maybe we should actually just start with a really broad question as to how you view AI and sort of your place within AI.

Speaker 3

Sure, So, AI is a really broad term, and it's basically the concept of building computer systems and software within that that can perform tasks that previously we only thought humans could do.

And now we're training computers to do things that were human tasks before and things like understanding language or recognizing patterns.

And it's basically a collection of technologies and we use AI or artificial intelligence to sort of think about those technologies.

My lens is I've been involved in.

I'm a nanotechnologist.

I'm a hardware engineer by training, been doing that for twenty something years.

As part of that, I've always needed software to go with the hardware.

I've been inventing in nanotechnology.

Most of my tech goes into smart devices, your smartphone, your laptop, basically small devices, your watch, and so I've always sort of followed AI, as I saw it a long time ago, as trying to power and make these small devices smarter and better and smaller and cooler.

And as I was watching people invent this, I noticed the type of people who were inventing it.

And if you think about AI right now, who are the powerhouses?

They're Elon Musk, they're Sam Altman from OpenAI which makes ChatGPT, they're Mark Zuckerberg from Meta which was Facebook, Jeff Bezos at Amazon.

None of those people would I leave alone with my children, and if they are creating the future of AI, I get a little bit worried about people who are driven by power and profit, whereas I'm driven by humans, humanity.

How do we make sure that technology can help humans to thrive and reduce inequality?

I found that my values were very different to those I see currently leading the space, and so I'm in this space as somebody to help try and create technology that empowers humans and also reduces digital inequality, but also to raise awareness of what's being invented around us and how we might need to stand up and say, hey, we actually don't think we need this in the world.

Can we put some legislation around it, or can we choose not to use certain types of applications.

Speaker 1

Okay, I'm so interested to just talk about this throughout this podcast.

Can we just start with some real basics for people out there?

A lot of people will be very familiar with what AI is; some other people will have just a little bit of a general knowledge of what it is.

So let's maybe just start with unpacking a little bit more about what it is.

It has been in our lives for a long time.

Already, hasn't it?

Speaker 3

Since the nineteen fifties.

Like artificial intelligence, especially traditional artificial intelligence has been used since the nineteen fifties to literally help to process large sets of data and see patterns from that.

Not a new technology.

The reason why we're getting all hyped about it is because the media is talking about it, and there's been this new thing called generative AI that has been sort of powered recently by something called transformers, transformer technology, so ChatGPT.

The T stands for transformer, and that's what people are getting excited about now, which is where it's AI that can generate new content from previous content that it has seen, whereas traditional AI is just given data and then it sees patterns and makes choices or suggests options from those choices.

So when we talk about AI, most people now are talking about generative AI, which is your ChatGPT, your DALL-E, your Copilots, stuff that is creating new things because you've asked it a question.

But the old stuff has been around for a long time, and so for those of us who have been in AI and have been using it for a long time, it's not that new or sexy.

Speaker 1

So it's because it's creating new content.

It can write something, it can create video, it can create audio and things like that.

That's why everyone's getting really excited and concerned, because everybody's getting into generative AI.

Speaker 3

The gen AI thing is fun, yeah, because what it has done is it's basically read every single text in every language in the world that's ever been published.

It's read every newspaper article, it's seen every piece of art, it's listened to every piece of music.

It's literally watching YouTube right now and watching every single video.

We can't do that as humans, but what it does is it takes all of that data and it goes, okay, you humans who have written this, you're quite predictable, you do things in the same way, so it's basically probability.

So if I said to you 'the cat sat on the' and then you want to know what the next word is, we would probably say 'mat', and that's what ChatGPT would probably say too, because it has a predictive probability.

It'll look at all the times we've ever written that sentence, and the highest probability is 'mat'.

It'll put that in and you're like, oh my gosh, it can write English.

No, what it's done is read every single thing, and the majority of the time we say 'mat', but the cat can sit on anything.

It can sit on the floor, it can sit on the roof.

Speaker 1

It doesn't have to be the.

Speaker 3

Right answer for What it does is take that prediction of how we've written it before, because it's read everything, and then make what it looks like is new sentences from that.

That makes sense to us because humans are predictable.
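To make the "cat sat on the..." idea concrete, here is a minimal sketch (my own toy example, not from the episode) of probability-based next-word prediction: count which word follows a given phrase in some sample text and pick the most frequent one. Real models like ChatGPT work over tokens and billions of parameters, but the "predict the most likely next thing" idea is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "everything ever written" -- purely illustrative.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the mat . "
    "the cat sat on the sofa . "
    "the dog sat on the floor ."
).split()

# Count, for each pair of preceding words, which word follows and how often.
next_word_counts = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    next_word_counts[(w1, w2)][w3] += 1

def predict_next(w1, w2):
    """Return the most probable next word after (w1, w2) in the toy corpus."""
    counts = next_word_counts.get((w1, w2))
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("on", "the"))  # 'mat' -- the highest-probability continuation
```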

Speaker 2

So when you go onto Google and it asks if you want to use AI mode, is that an example of the generative AI?

Speaker 3

It is.

So Google now has gen AI as a function where it will write you a sentence or a paragraph that summarizes what you've asked in your search bar.

Previously, Google used SEO to find web pages that had the information, the keywords, that you were searching for.

Now it'll take that.

It'll summarize some of those web pages and turn it into a little blurb for you, so you don't have to click on every single link that it did before.

Speaker 2

Yes, because I don't find that terribly reliable.

That's awful, right.

Speaker 3

But that's probably because the top five hits in Google, you wouldn't have looked at all of them anyway; you'd skim through and pick the one that you liked.

Whereas it's now summarizing the top five, when you would have skimmed those little blurbs in your Google and gone, oh no, not quite, not quite, that's the one I was looking for.

Yeah, so it's not great, but it will get better.

Speaker 1

Can you explain how AI learns?

I know that there are three common types of machine learning models.

Is that right?

Speaker 3

Yeah. So machine learning is the idea that we can help computers to learn things and give us an output.

So to do that, we have to have an input, which is data.

Let's take books, right, it can read all of the books on Shakespeare, right, so that might be our input, and then we have a model in the middle of that, which is how are you going to process that data?

And then there's an output which is now write Shakespeare but pretending you're a pirate.

Right, So I love it.

So that's what it's really good at doing.

And so that's generative AI.

So for AI models, there are three learning techniques.

There are more, but these are three of the main ones.

So the first one is supervised learning.

How does AI know a cat is a cat?

So what I'll do is I'll give it ten pictures of a cat, ten pictures of a dog, and I'll say, look at this picture.

I'll label it. I'll go, this is a cat, and it'll go, cool, cats have four legs.

And then I'll give it a picture of a dog and it's like, that must be a cat.

It's got four legs.

And I'll be like, no, I've labeled this as a dog.

Speaker 1

It's different.

Speaker 3

It's got different types of ears, it's got a different type of tail, okay.

And then I'll show a new picture of a cat and I'm like, this is a cat.

So you label it and it's like, oh, that's a cat.

It's got these types of whiskers and then I'll be like, here's a picture of a parrot and it's like, definitely not a cat.

So you're labeling, going, this is a cat, and then you show other things and then you test it.

That's supervised learning, where you've told it by labeling the data.

This is the thing that we call a cat. That's supervised learning.
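As a rough illustration of the labelled cat-and-dog idea (an invented toy example, not Michelle's), supervised learning means showing the model examples that already carry the right answer, then testing it on something new. Here a tiny nearest-neighbour classifier uses two made-up features per animal.

```python
# Toy supervised learning: labelled examples -> predict a label for new data.
# Features are invented for illustration: (ear pointiness 0-1, tail fluffiness 0-1).
labelled_examples = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.3, 0.4), "dog"),
    ((0.2, 0.5), "dog"),
]

def classify(features):
    """Label new features by the closest labelled example (1-nearest-neighbour)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(labelled_examples, key=lambda ex: distance(ex[0], features))
    return label

print(classify((0.85, 0.75)))  # 'cat' -- close to the labelled cat examples
print(classify((0.25, 0.45)))  # 'dog'
```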

Unsupervised learning is basically where you just throw a whole bunch of stuff at it and go free for all, what do you think?

And it finds patterns in the data and you haven't told it what patterns to look for.

I always think unsupervised learning is like giving a kid a box of Legos, right, and letting them build, and they will find a pattern in something; they can build you a house or a sword or whatever it is.

That is almost like unsupervised learning.

You just go free for all, what can you find?

Speaker 1

And so that's really good.

Speaker 3

If you're looking at, you know, marketing data, or your customer feedback data.

Right, you throw in like a thousand pieces of customer feedback and go, what do you think?

And it'll find patterns in your customer feedback and go, hey, I think your customers love you, or I think you've got a shipping problem, or whatever that is.

So that's unsupervised learning.

You don't know what it's going to come out with; you don't tell it what to look for.
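A minimal sketch of the "throw data at it and see what patterns it finds" idea (again, an invented example): a simple k-means-style clustering groups unlabelled numbers without ever being told what the groups mean.

```python
# Toy unsupervised learning: cluster unlabelled numbers into two groups (2-means).
# Data might be, say, delivery times in days -- no labels, no "right answer" given.
data = [1.0, 1.2, 0.8, 1.1, 6.5, 7.0, 6.8, 7.2]

# Start with two guessed cluster centres and refine them a few times.
centres = [data[0], data[-1]]
for _ in range(10):
    clusters = {0: [], 1: []}
    for x in data:
        nearest = min((0, 1), key=lambda i: abs(x - centres[i]))
        clusters[nearest].append(x)
    centres = [sum(c) / len(c) for c in clusters.values()]

print(centres)  # two centres emerge (~1 day vs ~6.9 days) without any labels
```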

And then there's reinforcement learning, which is the one you're probably most familiar with if you have a Netflix account.

So reinforcement learning is where you tell it what you think you want and then it watches your behavior and it decides whether or not.

Speaker 1

That's the thing.

Speaker 3

So Netflix: if you want to watch something on Netflix, it'll say, I think you'll like to watch this next, and you go, sure. It'll then decide whether or not it was right, because if you watch the whole thing, it's like, score, I know what you like to watch.

I'm going to suggest these things next.

However, if it went, you should watch this next, and you watch five minutes, like, this is rubbish, and don't finish it.

It's reinforced negatively and goes, I won't suggest those types of things next time.

And over a period of time, your Netflix suggestions get really good.
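And a very loose sketch of that Netflix-style feedback loop (illustrative only, not how Netflix actually implements it): each genre's score is nudged up when something is watched to the end and down when it is abandoned, so over time the suggestions drift toward what you actually finish.

```python
# Toy reinforcement-style recommender: nudge genre scores up or down from feedback.
scores = {"documentary": 0.0, "thriller": 0.0, "rom-com": 0.0}

def recommend():
    """Suggest the genre with the current highest score."""
    return max(scores, key=scores.get)

def feedback(genre, finished, step=0.5):
    """Positive reinforcement if the viewer finished it, negative if they bailed."""
    scores[genre] += step if finished else -step

feedback("thriller", finished=False)    # watched five minutes, gave up
feedback("documentary", finished=True)  # watched the whole thing
feedback("documentary", finished=True)
print(recommend())  # 'documentary' -- the viewing behaviour has shaped the suggestions
```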

Speaker 1

How does it know?

Speaker 3

Because it knows that you like rom-coms or comedy or thrillers or whatever it is.

And it's always good to see because if you look at somebody else's Netflix account, you will see that it's totally different than yours.

Speaker 1

That's so funny, because I think the recommendations are terrible, so I never use them.

Hence it's obviously not getting a really clear picture of you; you are not the norm. Because I'm annoying.

I'm one of these annoying people in life.

Speaker 3

Yeah, you're probably quite diverse.

Yeah, mine is perfect because I only like one type of genre; I am your factual documentary type, let's say. But my husband is like thriller and horror and comedy, right, and so when I look at it, I'm like, oh yeah, I only get offered documentaries.

Like, it doesn't even suggest anything other than doccos for me, whereas he's getting everything.

Speaker 2

So.

Speaker 3

So I think it's because you're probably not very predictable, and it's not able to reinforce because you don't have a pattern.

Speaker 1

Well, my mother's been saying that about me since I was a child.

Speaker 2

I might have too many different streaming services and platforms.

I noticed too that it now asks you at the end of the movie or whatever it is, you've watched.

Speaker 1

I never respond. I'm so mean.

Speaker 3

So part of reinforcement learning is you could help the model to improve on what you like by saying yes or no, I liked it.

Speaker 2

I don't know what if it's.

Speaker 1

Not telling you; keeping it to myself.

That's really interesting because my next question to you was going to be how important is how we train AI?

And here am I being not very helpful?

Whatsoever?

Yeah?

Speaker 3

So basically, the more helpful you are, the better it is.

If you are a predictable person, it's better at predicting what you're going to want.

So training is only as good as the data that is put in.

So if you put in good, unbiased data, then AI is great.

If you put in... and humans are biased, right, so it's very hard to put in unbiased data.

If you put in data that is incomplete or only represents a certain segment, then it's terrible.

There's a great example of that.

So Amazon used reinforcement learning to sort out its CVs.

So one of the things with Amazon is that so many people apply for jobs every single day.

They needed a way to filter those out.

So they did an AI algorithm that said, hey, over the last ten years our software engineers have had these CVs; read all their CVs, train yourself on what we knew as a good software engineer previously, and now throw away all the CVs that come in that don't match that, so that I get our top five CVs and can interview them.

And basically it was the most sexist system ever, because we know that for the last ten years they really only had male software engineers, so the data was trained on male CVs with male experience.

And what it did is it immediately threw out every female CV that applied for software engineer at Amazon, because it had been trained that males are who we hire, therefore that's what we should do now.

They hadn't realized that all the female CVs were going in the bin, because they were only seeing the top ones.

They ended up throwing away that AI.

But it's an easy way to see how AI can be biased without you even realizing what it's doing.
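The Amazon CV story is easy to reproduce in miniature. This is an invented toy filter, not Amazon's actual system: if the "successful hires" it learns from are all men, a naive similarity score quietly downgrades anything that differs from that history, which is exactly the failure mode described here.

```python
# Toy illustration of bias from skewed training data (not the real Amazon system).
# Each historical "good hire" CV is reduced to a set of keywords.
past_hires = [
    {"python", "men's rugby club", "hardware"},
    {"java", "men's rugby club", "networking"},
    {"python", "men's chess club", "databases"},
]

def score(cv_keywords):
    """Score a new CV by average keyword overlap with the past (all-male) hires."""
    return sum(len(cv_keywords & hire) for hire in past_hires) / len(past_hires)

applicant_a = {"python", "databases", "men's rugby club"}    # matches the history
applicant_b = {"python", "databases", "women's rugby club"}  # equally qualified

print(score(applicant_a))  # higher score
print(score(applicant_b))  # lower score purely because the training history was skewed
```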

Speaker 1

With data, there's the term scraping; that's the gathering of all this information, taking books and information and things, isn't it, and putting it into the system?

Is that what it is?

Speaker 3

So AI models only work with massive and massive amounts of data, and for the amounts of data that are needed for a good AI model, so it's less biased and it's more accurate...

You have to take that data from somewhere.

So data scraping is basically the scraping, or the reading, the downloading, and the using and the training of everything you find on the internet: any publicly available piece of content, every news article that a journalist has ever written, every book that is available, every piece of music that's out. Everything that is out there has been scraped and then AI has been trained on it.

Now that's great, because AI models are much better.

Apart from it hasn't considered consent.

So you might have written an article and AI is now trained on that, and now AI is the expert in the thing that you have written about.

But it doesn't acknowledge you and all the work you put into that, and so we haven't asked for consent.

And there's some lawsuits going around right now.

New York Times is the big one that's happening, where they are saying, you took our data and you used it in your model and you didn't even ask us, and that's not cool.

Speaker 2

Well, and I mean, this might sound like a silly question, but will AI sort of scrape itself?

Speaker 3

Yeah, and there's now what we call synthetic data.

There's not enough data in the universe for AI to be trained on.

So now AI is making data itself to train itself on, which is odd and weird, but it's just a data consumption method and we don't make enough.

So yeah, synthetic data is AI making its own data.

Speaker 1

And then as we mentioned, you know with things like Netflix, it's using the data that we give it.

If we are using AI for various things like that, are we sharing information or other data or personal details?

Speaker 3

If you put personal details in, you definitely are. I would say to people, be careful with AI.

Nobody reads the terms and conditions, well, they make them so long.

Speaker 1

You know what you need to do.

You need to take the terms and conditions, copy and paste it, put it into ChatGPT, get the summary of it, and then maybe we'd read it.

Speaker 3

Yeah.

So the chances are for you using any sort of AI tool, even LinkedIn has just started it now, Facebook, any of those social media tools.

If you're using your phone, you have probably signed away a little bit, because you just go, oh yeah, agree, to the bit that says I can take anything you put into me and use it to train AI.

So if you're using those tools, either just go, well, I've accepted it and that's what's happening.

I'm going to be careful about what I put in there, or click 'do not use my data to train AI', and...

Speaker 1

Hope for the best.

Speaker 3

The reason I say hope for the best is because just because the AI company says, hey, we're not going to train our data on you, doesn't mean that will hold.

The New York Times lawsuit is a really good example of that, because what they've done is they have sued open ai, and as part of that suing, they have now said that open ai now has to save every single interaction every single person has ever had with it.

Speaker 2

Now.

Speaker 3

Previously OpenAI said, we're going to delete it if you say delete it, but now they can't because of this lawsuit.

So things are being held that you might have signed away thinking they wouldn't hold.

So there are just some things happening where I go, you know, if it's your bank account number and how much your company is making and what your IP is, don't put it in.

Just assume it's going to be scraped, because we don't have good legislation right now to protect against it.

Speaker 2

That is superb advice.

Actually, the most important thing is your personal data.

But another thing is just time wasting.

We've got a plumbing issue at the moment at home, and we cannot, our poor plumber can't figure out what's wrong.

And obviously I've been googling things, and I am so over now getting targeted ads for a new...

Speaker 1

Shower, or you know, a tub. You might need one of...

Speaker 2

I might need one before it falls through the ceiling.

But it's just so irritating, and how could I avoid that?

Speaker 3

Is it?

Start talking about dog treats and then you.

Speaker 2

Get dog tracks, my dog gun shape.

Speaker 3

The algorithm like whatever you are looking for, personalized ads are now trained to show you something in that field.

Because it's more likely that somebody who's got a plumbing problem will buy from you if you are a plumbing company.

Speaker 2

It just starts to feel like an invasion of privacy.

I suppose.

Speaker 1

Before we start talking about how we can use AI with some good life hacks, you did mention regulation there, so let's talk about this.

Are we regulating AI the way we should? How should it be regulated?

What should we be concerned about?

Speaker 3

New Zealand does not have a standalone AI regulatory framework.

In fact, we've only just come out with our AI strategy, and anybody who knows my LinkedIn knows I thought it was not a strategy, and I was pretty vocal about that.

We were the last country in the OECD to come out with an AI strategy, so at least we could have copied and pasted.

But we didn't, and there's no strategy in it. But those are my strongly held views.

So no, we're really, really far behind.

If we want to look at what people are considering the gold standard, the EU AI Act is sort of the global benchmark.

Now some people are saying it's too harsh and actually it's strangling innovation because people can't do things within the EU, but you are very protected as an EU citizen.

New Zealand just doesn't.

I don't know if it's got a plan.

Speaker 2

As far as I understand, it was slated at some point and it was removed because a particular party

thought that it might be an infringement on the freedom of speech.

Speaker 3

Yeah, sure. I mean, it's the challenge around AI regulation, right, because number one, you have to move fast enough with all of the changes that are happening.

Number Two, if you regulate too much, innovation is never going to happen here.

And we have some cool AI startups that are able to do stuff here because of that.

But if you don't regulate enough, then I think our children are at risk, our citizens are at risk, and so I don't think we've committed yet to where we stand.

Speaker 2

I find this fascinating because if you're operating ethically, I think about myself.

I'm involved in research.

We go to the HDEC or an ethics committee or whoever, and we have to prove so much to start recruiting for a participant-based research project, for example, and yet AI is out here doing completely unregulated things on the side and scraping people's data and information.

Speaker 3

Yeah, there's a big imbalance.

There is a massive imbalance.

And so what I try and do is raise public awareness of this.

The best way right now to protect our citizens here in Aotearoa is to let them know what's actually happening.

Yeah, and I have a big article that I wrote and it's on my LinkedIn.

If you want to find out about why New Zealand's children are among the most at risk around the world because of our lack of regulation around data scraping of our children's faces on social media.

Speaker 2

That's the one that makes my eyes water.

Speaker 1

Well, I've never put my children on social media. This is my moment. I have been mocked for years, you know, and people have rolled their eyes at me for years, that I just never put photos of my kids, like at beaches in togs or whatever.

I just never put photos of my kids up on social media.

And everyone just thought, oh, you've been paranoid about it, and I was like, no, it's for them to decide what they want to put up and things.

But now we look at what's happening with the deepfakes, and we look at sextortion, we look at how easy it is just to take an image and turn it into whatever you want. They're at an age where they can make decisions about what they want to put online and they understand the risks, and I'm leaving it up to them.

But I don't want to be smug about it; that's one thing where I am very pleased with the decision I made, and yes, I am a little bit smug about it, to be honest.

Speaker 2

But do you think you can't find a picture of them? I can find pictures of my kids that I didn't put up.

Oh yeah, and my kids didn't put them there.

No, no. So their mates are always going to put pictures up of them and things like that, but I have drastically reduced the amount of photos of them which are out there.

Speaker 3

So I have young children, and I am very clear: I don't put up my children anywhere, and they have no digital footprint.

That's because I know what is out there and what can happen, and I also know that New Zealand is at risk.

So Meta, Facebook, just came out recently and said, actually, what we've done is we have been data scraping Australian and New Zealand citizens, because you guys didn't make any regulation to protect against that.

So in the EU you can opt out.

In New Zealand we cannot.

We have been using adult Facebook and Instagram pages since two thousand and seven and scraping that data and training all AI models on what you look like.

But also if you're holding your child or you're posting a picture of your child and it's not private.

They've been used.

Your children's faces and children's images have been created from New Zealand and Australian citizens' photos, and you can never unlearn that.

AI can never unlearn what it has already learned.

So your child's face is in it.

What was a cute birthday party or anything else?

You're like, oh, it's only on my Facebook.

Actually, that's been scraped. You can never get that data of your child's face back; something has been trained on it.

And it's why mine don't have a digital footprint.

But I also have big conversations with people at birthday parties.

If I see a camera, I walk up to the adult and go, hey, please don't take a photo of my child.

And people are like, don't be weird.

I'm like, I am weird, and if you knew what I knew...

Speaker 1

You would be weird too.

I'm happy to be weird.

My daughter says no to everything at school.

She's like, no, I'm sorry, you can't use me for promotional material.

You can't use me for this, you can't use that. And I'm like, good on you.

You don't have to hand that away.

Speaker 3

It's hard as a parent and you feel like the weird parent.

But it's okay.

I'm getting used to it, and saying, can you delete that photo of my child?

Speaker 1

Like, but I thought there was.

Speaker 2

Already a thing that you weren't supposed to take photos of other people without the permission.

Speaker 3

Sure, but just go to any playground, go to any birthday party.

Speaker 1

A lot of.

Speaker 3

Parents don't see the harm.

And I think it's because they're not aware of what's out there.

Speaker 1

And look at it.

I get that too.

I totally understand where they're coming from as well.

I think if somebody just says to you, hey, look, if you wouldn't mind, it's interesting.

Speaker 2

Right.

Speaker 3

I obviously have some profile, and I was asked to do an interview with a magazine in New Zealand, and they said, oh, we really want pictures of your kids in it.

And I said, oh, sorry, no, they don't have a digital footprint.

I don't want them to have photos in the magazine because you have a digital version.

And they canceled the interview.

Wow. Yeah. And I was like, well, whatever; having a media profile is not part of who I am.

So if you don't... but if I needed that publicity, that would have been a really hard decision for me.

I'm not going to name who they were, but I'm having to make these decisions every single day to protect my children from a world that I don't think we even know what it's going to look like.

Speaker 2

Yeah, and I feel if we had a regulatory body, we would be having these conversations in a much more public forum, right, instead of, you know, doing this today. I mean, we wanted to do this for a number of reasons, but we don't know.

I didn't even know that we didn't have a regulatory body. I didn't know that no one's looking after us, which is, you know, a worry, and we should be...

We should be up in arms about this.

Australia has one, right? They're just coming up with theirs.

Speaker 3

And look, Australia is doing some interesting things, like the social media ban for under-sixteen-year-olds that has just come out, and look, there are a whole bunch of challenges around that too, like how do you enforce it?

But they're actually having good public discourse.

I don't think we're doing that here, but it's because I think New Zealand is so far behind the rest of the world on our AI literacy.

Speaker 1

I mean, it's really good to see.

This is probably one of the few things that I agree with ACT on.

But Laura McClure has the Deep Fake Digital Harm and Exploitation Bill to try and stop people using AI to create these deep fakes.

But a part of me goes, but why are we only talking about one part of regulating AI?

You know, it should be broader than that.

Are we going to just deal with each particular difficulty or issue, you know, through different Acts as they come up, or are we pulling it all together and taking a look at it? I don't know.

Speaker 3

The word deepfake is basically where you use artificial intelligence to take an image or a likeness of somebody, a video, their voice, and you force them into saying something digitally that they never said, or, you know, you remove the clothes, but it's very realistic and it looks like that person actually did the thing.

And they're usually negative things. Often right now, they're scams.

We've got celebrities trying to sell potions and pills.

And you think it's that person.

Oh, I lost all this weight because I took these pills.

But actually it's just an artificially intelligent created version of them in video that sounds like them and looks like them.

And you can make these deepfakes... I mean, I can make a deepfake of either of you in five minutes where you say something polar opposite to what you actually believe, and there's nothing you can do about it.

Speaker 2

The really interesting thing about the deepfakes that I only just learned recently, through that Laura McClure thing, was that when it comes to nudes and porn, ninety-eight percent are women.

So if I tried to do one of my husband, it would give him a female body with his head on the top.

Speaker 3

It's trained on women and children.

Actually, if you go down, and you don't want to go into the dark web, but a lot of the AI is being used around child pornography, to create some horrific images from maybe your child's face that you put on Facebook, with a baby's body. It's pretty horrific if you go down into some of the things that we don't even know about.

Yeah, don't worry about it.

Speaker 2

I gain.

Speaker 1

Don't go there. Anyway, just don't go there.

You might not come back.

You're listening to The Little Things.

And our guest on the podcast today is doctor Michelle Dickinson, kindly answering our very many questions about AI.

We're going to be back after the break.

Welcome back, Michelle.

Maybe we should get a little bit positive again about the good things, the good benefits, that AI can give us.

What are some of your favorite life hacks when it comes to AI?

Speaker 3

Oh so many.

I use it in almost everything that I do.

Because I'm a busy working mom who travels.

I don't have time for most of the mental load in my life, so I use AI to take away as much mental load as possible.

So starting with, I know I've got to fit in the gym, I've got to feed my kids, I've got to get them to kindy, I've got to have these meetings.

So I basically get it to look at my calendar and go, can you just schedule my day for me?

That makes it the most efficient, so I'm not wasting time anywhere.

And also, you know how my brain works.

I'm a morning person.

I am useless after four o'clock, so I don't want any of my brain-thinking tasks to be at eight pm when the kids have gone to sleep; give them to me in the a.m.

You know, my brain isn't on then. And so every day it's like, hey Michelle, good morning.

Speaker 1

This is how I would schedule your day.

Speaker 3

You want me to turn those into calendar things?

And I actually have so much more time, because I'm not running around trying to juggle everything and going, I've got this list of things to do. It's done there.

I also get it to do all of my food planning.

So I go, hey AI, you know I've got toddler kids, twins, and I've got a husband who doesn't eat beetroot.

You know what's in season in New Zealand, and I'm on a budget.

So I just need you to do my grocery shopping for the week.

Just give me my list, and then I want you to do a meal plan for all the things that we've got on this week because you've seen my calendar.

And then it just does that.

And so I just put the recipes on the fridge.

So if I'm not home, my husband's home, or my nanny's home, somebody knows this is what we're eating at this time of day.

It's already in the fridge because we've ordered it because I had the grocery list on the Sunday and I don't have to do the mental load of what's for dinner anymore.

Speaker 1

My favorite app that you told me about was called Letsfoodies dot com.

And this is great, right. So you've got to the end of the week,

you haven't done the shop,

you've got a random collection of ingredients in your fridge and in your pantry, and you put them into the app and it gives you a recipe.

You don't have to think about it.

You don't have to work it out.

You're preventing waste.

It's saving time.

And it sort of gives you a recipe.

If you don't like it, then you ask for another one, and off you go.

Speaker 3

You should never have to worry about that.

The what's-for-dinner question.

Yeah, thanks to AI.

And it'll literally go, hey, let's look at your leftovers.

This is what's for dinner.

And you might go, I don't like that, and it'll pick something else, or...

Speaker 2

I might reach in and that cucumber is actually, like, mush.

Speaker 1

Well, then you don't put that one in.

Speaker 2

And then the calendar one, is that an app or are you just... I'm obviously not enough of an early adopter, but that would be good for me.

Speaker 3

I have a video, it's called AI Life Hacks.

It's video one and it's called Plan Your Day, and you can just use the ChatGPT free version.

Speaker 2

Is that YouTube?

Yeah, okay, I'm doing that because you know me and my calendar friends.

Speaker 1

Oh no, there are so many good life hacks on that.

But the key thing is, and this is something we all need to learn, and we can learn from these tutorials you give us, Michelle, it's all about how you write your prompts, isn't it, to make AI work well for you?

It's about how you write the prompts.

Speaker 3

So yeah, the word prompt is basically a technical jargon word in AI that means how do you tell AI to...

Speaker 1

Do the thing?

Your instructions?

Speaker 3

Yeah, what are the instructions for what you want it to do?

And the prompting is really important.

Just like if you were to go to a consulting company and be like I need to write my strategic plan, you wouldn't just go just write it right, you give them good data.

So there are lots of different prompt methods you can use. I have one in my videos.

It's called the RESET method, but you can use what you like, and it basically stands for: R

Speaker 1

Is the role?

Speaker 3

What role do you want the AI to play?

So if you are in a business setting, you might go, act as a senior business leader who has run a profitable firm before when you are thinking about this context.

And then E is your expectations.

So what do you expect it to do?

What do you actually want the outcome to be?

And so you can use different prompts, but you have to be really clear, right: what do you want the role to be?

What do you want the expectations to be?

What do you want the output to be?

Do you want it as a table?

Do you want it as a calendar invite?

Do you want it like?

And so the prompt is just being very very clear.

Most people use AI, ChatGPT specifically, like they use Google.

Right, it's a quick, like, yes, pop in the question, or, you know, tell me the best five restaurants in the area, as opposed to: act like an executive restaurant taster, a chef, whatever it is, and give me an output in a table form with kilometers from each other of, you know, the top restaurants based on this sort of cuisine.

And then the T at the end is tweak, because it's never going to be perfect.

So my end prompt is usually: before you give me the output, I want you to review this again and actually rethink what you said and make sure that it's actually what you think, and so it gives you the second version rather than the first version, which you would prefer.

And so yeah, a good set of instructions.

A prompt when you use AI should be a good paragraph, not a one liner like you're used to in Google.
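As a rough sketch of putting that advice into practice (the field names below are just my shorthand for the role, expectations, output and tweak pieces mentioned above; the full RESET method is in Michelle's videos), you can assemble a prompt as a proper paragraph rather than a one-liner before pasting it into whatever AI tool you use.

```python
# Assemble a detailed prompt from the pieces described above: role, context,
# expectations, output format, and a final "review before answering" tweak.
def build_prompt(role, context, expectations, output_format, tweak):
    return " ".join([
        f"Act as {role}.",
        f"Context: {context}.",
        f"I expect you to {expectations}.",
        f"Give the output as {output_format}.",
        tweak,
    ])

prompt = build_prompt(
    role="a local restaurant critic and chef",
    context="I'm planning a dinner in central Auckland on a mid-range budget",
    expectations="recommend the top five restaurants for modern NZ cuisine",
    output_format="a table with name, cuisine, price range, and distance apart in km",
    tweak="Before you give me the output, review your answer and rethink it once.",
)
print(prompt)  # paste the resulting paragraph into your AI tool of choice
```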

Speaker 2

So if you are using it for... sometimes I use it for some content, for example, and it's learned the project that I'm involved with.

It's learned the way I write, and then I put it in and it's not coming out with anything new, right, so I need to then go, this is not in my voice, or something else.

Speaker 3

Yep, you literally go back and be very specific about what you don't like about it.

There's nothing new in here, it's not new content, or it does not sound like me, and then go back and do it again.

Now, if you have a personal account, it will remember that and try and be better next time.

It goes back to that reinforced learning.

So generative AI now has reinforcement learning, where you would go, no, AI, that was rubbish.

It's like, oh, sorry, tell me what you want, and then it sort of remembers your patterns.

Speaker 2

So if you have I'm always so polite.

I'm like, no, thank you, sorry, no, I think you misunderstood.

Speaker 1

Oh no, I've got the impression that you just want to give really specific instructions, like you're giving them to a ten-year-old boy.

It's going to forget them immediately.

So just really clear, precise.

Speaker 3

Don't say please and thank you.

Oh, we'll save the environment.

So the way that AI works is, actually, you think you're writing words, but the way it interprets them is something called tokens.

And tokens are bits of words or bits of sentences.

And every token that is processed takes energy and water, and they're all going through data centers.

Those are very large units that usually require a lot of power and usually lots of fresh water to cool the amount of heat that they're producing, so they're not amazing on the environment.

For every token that is processed, you are using more power and more electricity.

So by removing 'please' and 'thank you', because AI doesn't care, you're using less.
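To give a feel for tokens, here is a rough, illustrative approximation only: every model uses its own tokenizer, but a common rule of thumb is that a token is around four characters of English. This sketch estimates how many extra tokens a couple of pleasantries add to a prompt.

```python
# Rough token estimate: real tokenizers differ, but ~4 characters per token is a
# common back-of-the-envelope figure for English text.
def rough_token_count(text, chars_per_token=4):
    return max(1, round(len(text) / chars_per_token))

bare = "Summarise this 200 page report in one page."
polite = "Please could you summarise this 200 page report in one page, thank you!"

print(rough_token_count(bare))    # fewer tokens processed
print(rough_token_count(polite))  # a handful more tokens, hence a little more compute
```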

Speaker 1

That is really interesting. Now that we hear a lot about the use of water and electricity with these data centers, I presumed that was in the creation of the AI models and the scraping and everything that had to go into taking every bit of information in the world and putting it together.

But it's actually in the processing as well.

So as we continue to use it more and more, it's just.

Speaker 3

We're going to need more of them. There are more environmental crises

I think we're going to have to solve around AI as we exponentially grow our use of AI.

We currently do not have the capacity on our planet without removing fresh water from humans who need it, and without taking power or bumping up electricity prices for people who need it.

Speaker 2

So the water that goes around those data centers, is it recycled or is it fresh water the whole time?

Speaker 3

So it can be recycled, but when it comes out, it's super hot, and so there are some challenges about the type of water.

It'd be great if we could just use salt water, but we can't because it's corrosive.

And we're typically doing them in places where that fresh water is really scarce.

So if you look at the data centers in the US, they're in LA, they're in dry areas anyway, where fresh water is really scarce.

So we do have a real cooling challenge.

If you don't cool it with water, which is the most efficient way of doing it, you have to cool it with air conditioners, which is consuming more power, which is taking power from the grid.

So now they're building nuclear power stations for data centers.

Speaker 1

So are they building them in the areas where there is a lot of sun and they're using solar power, so they sort of can take one thing off?

They go, okay, we've got the electricity side, but we just don't have the water.

Speaker 3

In terms of cooling, you want to build them in dry areas, right. So it's less about the solar power.

It's more about the humidity, okay, because your electronic components don't do well with anything corrosive or anything humid.

So you want them in large areas with not many people.

Desert areas, Arizona for example, Texas, where you can have a lot of space, you're not taking housing from people, you can have a lot of dry environment, and also you need to find a fresh water source.

Speaker 2

I mean, my other question is how many do we need?

Do we really need them all?

Oh?

Speaker 3

Yeah, oh we don't have enough.

We can't build them fast enough if you look at the compute power that is needed, and it's only getting more and more.

Everybody's using AI now, it's into every piece of software.

Speaker 1

You just can't avoid it.

Does it matter where they are globally?

Do we need to have one locally in order for, you know, New Zealand businesses to be able to make the most of it?

Speaker 3

Yeah, And so the reason why we need them locally is around where we store our data.

So we have some legislation around data storage, of course, yes, and for that, especially if you're a government institution, we now say that you have to store that data in New Zealand.

Makes sense, right. And so that means we do need data centers here, and we are building data centers here.

Speaker 2

So I hear, and also potentially building them for people who want to host them outside their own home country.

Speaker 3

Yeah, and so look, the efficiency is really interesting, because with the models we're currently using, you look at what's happened.

So there's a global play right now between what I would call like the American and the European AI companies and the Chinese.

So China is probably world leading in creating AI.

And you can choose what models you use, and therefore where they are made, and therefore what ethics maybe some of them have around them, and what guardrails.

So what's been interesting is Nvidia basically makes the majority of the GPUs, the graphics processing units, which is the processing power of the computer that AI needs.

Previously, GPUs were just made for gamer boys who like to play these massive visual games, but now they're being used for AI processing.

America doesn't allow a lot of those big GPUs to be shipped to China.

They allow the little rubbish ones to go to China.

They're called the H series, and China innovated on those.

And what they did is they made their AI way more efficient.

So if you use something like DeepSeek, which I love, it uses a lot less power and a lot less electricity, because it's been able to take a rubbish GPU and make it super efficient.

Whereas American companies were lazy.

They're like, we've got these high powered chips.

We don't have to be efficient because they do everything we need to do.

I think China is actually really coming up with a new way of making it more environmentally friendly to use AI, and they've done it in an open source way.

But with America, there's an abundance of everything right now, so they're just sort of pushing forwards with creating more stuff rather than making it more efficient.

So there's definitely a couple of ways of doing AI, and it's sort of a geopolitical thing.

Speaker 1

You're traveling the world talking to businesses about, you know, how they could incorporate AI and what kind of AI they need in order to become more productive and things.

You're away all the time, Michelle.

Speaker 3

Which is really sad.

Speaker 1

You're out of New Zealand a lot, which makes me sort of wonder, you know, why you're not busy doing this kind of work here in New Zealand, with New Zealand businesses. Are New Zealand businesses interested in working out how to make AI work for them?

Speaker 3

I don't want to say no, I'm going to say my business is not seeing that here.

And the reason why I'm not seeing that here is I feel like we have a really low AI literacy in New Zealand.

I don't think most people know what it is, how they can use it, how to get started.

We don't have the big corporations here that are integrating big AI change.

So you know, the companies I work for overseas tend to be very large companies who are doing mass change, and we have the budget then to build custom AI to solve their big problems.

A lot of businesses in New Zealand are SMEs, small to medium businesses, where the owner-operator is already busy just trying to keep themselves afloat.

Speaker 1

They don't have.

Speaker 3

Time to learn something new.

So the time I do spend in New Zealand is often donated time helping businesses that I love to just get ahead.

In my experience, I feel like we are quite far behind compared to what I'm seeing over the rest of the world.

And the reason why I became very vocal about the AI strategy is because I don't think our strategy helped our nation to understand where we were going and how we were going to do that.

So in twenty seventeen, Singapore came out with a beautiful AI strategy.

They said, over the next ten years, here's our country's AI plan.

Also, as part of that plan, we are going to train every single citizen who wants it over the age of forty in AI because we think you are the ones who are going to need it the most and actually it can help you the most at work.

So in Singapore it is free, if you are forty or over, to go on an AI course that will help upskill you in it. That's interesting.

I don't see those courses here unless you want to pay for them and you know what they are and you have the money to do that.

And what I'm trying to push for is, we know there are huge industries that are just going to die because of AI.

If you work in a customer call center, I'd start rethinking your future.

I mean, if you are doing a lot of administrative tasks, I'd start thinking about how do you retrain yourself.

But right now, there's very few places for you to go, and a lot of those jobs are lower wage jobs.

You might not be able to afford the expensive tech course that you can go on.

So what I would love to see New Zealand do is go, hey, we know that the future of work is changing.

We know it's going to hit these communities hardest.

How do we upskill you now and help you to retrain or rethink or become proficient in AI, so we can actually follow the world in this digital transformation?

Speaker 2

We talk about a knowledge-based economy, and to me, it's a non-partisan issue.

This is not a who's in government.

This is a strategy that just needs to be rolled out.

And you bring up a really good point about you know, we wanted to get to this about jobs and about the potential for job losses.

And I've heard two schools of thought. Well, the Industrial Revolution came and people lost jobs and they just adapted, right? And it's the same talk around AI.

Just go with it.

Be at the edge of progress, because, you know, you can just upskill.

Okay, so the guy can't drive that Uber anymore because it's driving itself.

Well, he can go and do something more useful with his life. Is it true?

Speaker 3

I mean, I don't think it's fair to compare it against the Industrial Revolution, because even if you look at that, it was very slow.

AI is increasing exponentially, and our brains don't understand exponential increase.

We just can't understand how fast this is going, and we seem to think, oh yeah, it's just ChatGPT, it's not very good yet, but there are some amazing things that it can do independently that are people's current tasks in their work.

So I go, AI will not replace your job necessarily, but it will replace some tasks.

And if it replaces a lot of your tasks, then sort of your job is redundant.

And so you need to start thinking about, well, what are the tasks that I do every single day and how easy is it for an automation system to replace that?

And if you go a lot, you might want to start thinking about what you do next.

Yeah, Uber drivers, self-driving cars; sure, I don't think that's coming too soon, just because of our regulation framework around self-driving vehicles in New Zealand.

But there are other things, like call centers or chatbots. We know that a lot of the time, the commonly asked questions that you call your call center for, actually a chatbot can probably answer that and do that without a human needing to be there.

So there are things like that, accountants, you know, doing a lot of that filing or receipt filing, all those things.

A lot of that can be automated now.

But law is a really interesting one.

So, you know, it's not just certain industries. Like, if you think about bankruptcy law, it's very cut and paste, it's very bureaucratic, it's very follow-the-rules, and so even within law, it's being massively disrupted around certain fields where actually automation can do a lot of that.

Billing. Like, there are lots of jobs that actually will not exist, not because the job is taken, but all of the tasks are taken from that job, so the job isn't needed anymore.

Speaker 2

And will we see that, like, crashing into a wall, or will we see that just gradually? You're saying quite...

Speaker 1

Quickly, crash into a wall.

Speaker 3

And even if you look at the layoffs that are happening right now, they are even hitting the tech companies.

So coding is a really good example.

I use AI to code, to the point where it's almost like having five developers at my side that I would have hired before, and now I don't need to, to build programs or apps. Like, I build a lot of things with AI almost every day to solve my problems and to help businesses solve their problems, and I can do that now because generative AI doesn't just write sentences for you, it can write code for you.

Speaker 1

But you know what, a decade ago, that was the safe thing to do, you know, be a coder.

You know, IT, that these were the fields that, you know...

Speaker 3

And you sort of need to be a coder now. And yeah, I'll tell you, most of my coding I do with AI.

And that's good, because as a senior coder, you can see the faults in it and, because you can read code, you know how to tweak it.

My worry is the talent pipeline.

We have seen this year that the number of junior software developers being hired has massively dropped, because now you don't need a junior developer; as a senior developer, you're just using AI to help you instead of what you would have given previously to a junior developer to develop.

What that means is the talent pipeline

isn't there. Where do the next level of senior engineers come from if you haven't hired them as juniors and they haven't sort of got their teeth into all the things? We're even talking about the future of recruitment, because we're using AI right now to reduce the amount of younger people we need to recruit.

That's a whole other thing.

Speaker 1

Okay. So if you have children who are currently, you know, maybe school age or at high school and things, what kind of industries are you wanting to steer them towards?

Speaker 3

I am currently writing a book about this, because it's basically a book about bringing up children in the world of AI.

I think we have an old mindset that says, what industry is our child going to go into?

The thing parents say all the time, and teachers say, is what do you want to be when you grow up?

Because for our parents, and our parents' generation, there was one job.

There was a job that you could do, and if you trained for that, you could do that for life.

The work, the nature of work is going to change so much for our children.

I don't ask that question.

I think it's really important that we actually celebrate our children's strengths, not our children's subjects, because when we talk about subjects, we tend to already put them into maths or science or history or art, as opposed to going what are your strengths?

Speaker 1

What do you love?

Speaker 3

To do, and what are you naturally good at?

Public speaking, creativity, are you artistic?

Do you problem-solve?

And then think about how those will help you to think about a future, because I don't think careers are going to be defined in the way that we have defined them before, and I think there are whole industries that are going to be redefined that we can't even think of right now that are going to affect our children.

So I'm all for celebrating what our children typically are naturally good at and helping them to be better at it, rather than trying to get them to fix on careers or jobs, because I think actually we might be setting them up for failure, because if they train in a certain thing from a younger age, those things might not exist and they might not know where they fit in.

Speaker 1

That's not such a bad approach, though, you know. There's some positivity in there, and there'll hopefully be something for them to do at the end of it that they might find some joy in.

Speaker 3

Right, they're not going to be in a job for life; they'll be in a job for a couple of years, and then it might have to change, or they might want to change. Like, I think teaching resilience, adaptability, teaching children how to learn is really important. And look, I think AI should be for everybody.

I'm really clear, and I have quite strict views around how AI should be used with young people and with children.

And I think humans are naturally lazy.

And so if you go to a child, write an essay on something, and they've got AI, they're gonna go like, hey, AI, write this essay, because I want to go out and play or I want to game or whatever.

The way our brains work, through neuroplasticity, is by experiencing, getting things wrong, going on a journey. Writing that essay is about finding the information, getting it.

If we don't let our children experience that, if they're not smelling things... I always say, you know, you could learn about a slug from AI.

It'd be like, yeah, it's like this sort of creature.

It's slimy or whatever.

There is nothing better than putting a slug on your arm to know what a slug is.

AI is never going to give you that slimy feeling and that ew and it crawling and the coldness of it, and your brain will remember that. It'll forget what a slug is if you only wrote an essay about a slug by getting AI to do it.

Speaker 2

And I think that neuroplasticity applies to us as we age as well.

I find it really encouraging in some ways, and I do think there must be applications of AI that can make access to education more equitable.

Speaker 3

Oh, definitely, and I'm writing a whole bunch of stuff to do that, because teachers are going to have to learn this first.

I'm going to say this out loud and people are going to hate me, and that's okay.

School is designed to cater to average, which sounds terrible, but you've got thirty kids in a class, and a teacher will go, look, this is what we want to get through today.

And if you're ahead of that, then you're ahead, and they say just wait, because we've got to let everyone catch up.

And if you're below that, they're like, hurry up, catch up, because we've got to get through this content.

And no parent ever wants their child to be average.

Please don't hate me for that comment.

And you know what I mean, they're right.

We've got these thirty kids, and they all have different abilities in all the different places.

They just happen to be the same age, and that's sort of how we categorize them.

AI can change that, and there are some future AI schools that are working on this.

In New Zealand, the AGE School in Takapuna, which has been taken over by Crimson, is starting to do this.

There's an amazing school in LA that's doing this, which is personalized learning through AI.

So you have a teacher going, hey, this is sort of what we want to get through today.

We're all going to have an AI driven personalized learning journey that gets you there in your own way.

So if you're creative and you learn through art, I'm going to set you this assignment because you're going to paint the picture of whatever battle it is that we're talking about in history.

If you're musical, you're going to compose a piece about the historical battle.

And so every child gets their own learning experience, but the outcome, the knowledge learned, will be the same.

And I get really excited about that, but really quickly, we've got to wrap this up.

Speaker 1

But before we go, there is one more thing that I just want to touch on, and this is an aspect of AI that does concern me, and it's the social deskilling.

It is people relying on a chatbot or a bot of some sort for connection.

It's people who turn to AI to be validated, or to hear what they want to hear, to ask it personal questions.

We're sort of moving from relying on people, having people in our lives and real human connections, to relying on AI to provide this.

Do you find that concerning.

Speaker 3

People are hard?

AI is easy, and humans naturally go the easy route.

So there is going to be more of that, and you're seeing it now.

People are falling in love with their AI.

People are only having relationships with their AI.

I had a conversation with a girlfriend the other day and she is in what I would call a very serious relationship with her AI.

But she's also married.

And I said, I think you're having an affair, and she's like, oh, it can't be an affair because it's not physical.

And it's true, right, we haven't defined some of these things.

But she's like, my AI, he understands me, and she's given it a "he", you know. He understands me.

He listens to me, he tells me what I need to hear.

My husband doesn't validate me, wow. And so it's very easy, I think, and, you know, the movie Her was sort of way ahead of its time.

That's how humans are.

So if we're not putting protections around that, and if humans don't have places to go to actually develop these networks, then we don't learn how to develop these networks.

You know, you talk to a young person now and they'd much rather text you than phone.

So we've already started losing these human skills.

It's going to be really easy to only interact with AI.

And if you look at what Mark Zuckerberg is promoting around his metaverse, and now the new things he's come up with, he is pushing a future where you only interact with the digital.

Speaker 1

That's his goal.

Speaker 3

And that's why I say these are the people I wouldn't leave alone with my children, because their idea of the future is not the one I want.

Speaker 2

Yeah, and they have children, and I know that.

You know, I've heard that.

You know, I've heard them in interviews saying, well, no, I wouldn't let my children near this, and I'm like, what, then?

You know, it doesn't.

Speaker 3

Make any sense. Yeah, it doesn't.

Speaker 2

And it's really hard as a parent, isn't it?

I mean, my kids are almost past the really vulnerable stage.

Although I'm sure we're always vulnerable, by the sounds of things with your friend. But it's getting harder and harder for a parent. Can parents really keep on top of this stuff themselves?

Speaker 3

No, but parents can stay informed.

A lot of the decisions I see parents make that I disagree with, because I feel like they might harm their child, come down to the parents just not knowing.

So it's about being an informed parent: knowing what your kids are doing online, what they're using, how they're using it.

It's about having big conversations about that, and you know knowing that they also probably won't tell you the truth.

So set up your digital systems so they are in public places, so that your kid isn't in their room on their own.

Speaker 2

That could help with the concerns. And ChatGPT, I suppose, would give them a strategy.

Speaker 3

So look, it's moving fast.

There's no right answer.

What I tell you today will not be what we're discussing a year from now.

I always say, look, just try and upskill yourself.

Learn about it.

If you can avoid letting your kids use it lazily, do that.

AI is great if you want to create flashcards for them to learn for a test or something like that.

There are lots of great ways AI can be used in education, but just know: don't put anything personal into it.

Know that it's moving fast, and if you're not moving with it, it might knock you over and you won't see it coming, and I don't want that to happen.

And just know that you don't have to be very technically literate to use it, So just give it a go.

If you go to my YouTube channel, there are so many videos you can just follow along with. They're three minutes long, like toothbrushing videos.

That's what I call them.

You watch while you brush your teeth, which is where I learn a lot of stuff, and you can just try some things and see how you like it.

Speaker 2

Well, that's what we love about you, Michelle.

You use your big brain and you let other people access it in ways that we can digest, which is fantastic.

Speaker 1

Thank you so much, as always, for your time, Dr Michelle Dickinson.

Thanks. Okay, Louise, so how do you feel now? I know you're a little less terrified, or I'll tell you what, you were terrified going into this.

Speaker 2

One thing that makes me less terrified is knowing that there are people like Michelle Dickinson involved, because she is human centered, and the more people like her we have, the further ahead we'll get.

What I am a little bit disappointed to hear is how far behind the eight ball New Zealand is again, and so we just might have to upskill ourselves, because the time will come: we will get regulation around this, we will get some direction.

We may even have an AI strategy, a national strategy at some point.

But if you've already started getting on top of it, that will mean more to you when it comes, do you know what I mean?

Speaker 1

Yeah?

Speaker 2

What about you?

Speaker 1

I just think she made such good points there about us having conversations in our workplaces, but with our children as well, really honest conversations about what it is and what it all means.

I don't think I've quite grasped how fast it's moving.

I think I'll probably be surprised when I smash into that wall, to be honest with you. So that's a little bit of a wake-up call.

And I think the other thing too, is that you don't need to be hugely familiar with technology to have a play with it.

She just mentioned a couple of things that you can potentially do in your day-to-day life, right? You know,

You don't need to pay for an app or sign up to anything.

You can just go to the website if you want, ChatGPT, and have a play around and see how it can maybe contribute to your life, and kind of get a bit of a grip on it there as well.

Speaker 2

I loved when she talked about the calendar and the food and the shopping and all that.

At the same time she told us about her friend who's got a virtual kind of boyfriend.

It's like, I've got time for the calendar and the food.

I ain't got time for the boyfriend.

But also, the other thing is, I'm going to stop worrying about whether my children's jobs, you know, are going to survive into the future, because again, there's not much I can do about it.

Speaker 1

They're smart, fabulous kids.

They'll find something to do.

Speaker 2

Even so, I heard a statistic the other day that about seventy-five percent of the things we worry about don't come true.

Speaker 1

So there we go.

If we just put our heads back in the sand, who knows, I'm not sure.

Okay, so if you want to check out more of Michelle's work, head to Matterworkshop dot co.

You can also hear her, of course, every Sunday with me on the Sunday Session on Newstalk ZB. And if you just google Michelle Dickinson YouTube AI, you'll find all her workshops and her jargon workshops and things there, which will help you get a grip on AI as well.

Thank you so much for joining us on our New Zealand Herald podcast series The Little Things.

We hope you share this podcast with the women in your life, because at the end of the day, knowledge is power.

We may not always love everything that comes with tech progress, but it's coming our way anyway, so let's tackle it and you know, wish for the best.

Not quite the positive note I was hoping for there. Sorry, it's a mixed bag.

Speaker 2

You can follow this podcast on iHeartRadio or wherever you get your podcasts, and for more episodes from us on other topics, head to NZ Herald dot co dot nz.

Speaker 1

And we'll catch you next time on the Little Things
