
Steady hands, not knee-jerk bans: Brainbox’s approach to AI regulation

Episode Transcript

Speaker 1

Welcome to the Business of Tech, powered by 2degrees, your weekly dive into the issues shaping New Zealand's digital future.

I'm Peter Griffin, and today's episode tackles the big debate in tech policy at the moment: how and when should we regulate artificial intelligence?

I'm joined by Tom Barraclough, tech policy expert and co-founder of the Brainbox Institute, who's been at the forefront of local and global discussions about how we manage the rise of artificial intelligence.

Tom argues against the rush from academics and advocacy groups to put new AI-specific regulations in place now.

He points out that we already have a robust patchwork of legislation, everything from the Privacy Act to the Crimes Act that's more than capable of handling the challenges AI brings.

He argues that what we're missing isn't more laws, but the coordination and practical understanding to apply them effectively.

As the debate heats up here and overseas, Tom reminds us that regulation is rarely a binary yes or no answer.

It's a complex process driven by existing statutes, international standards, and real world enforcement issues.

We'll unpack the pressure on government to act fast, the risks of hasty decision making, which we've seen in Australia with the looming introduction of the under-sixteen social media ban, for instance, and why Tom believes that codes of practice and even industry self-regulation can serve as a useful trial run before legislation catches up.

Plus, we look at New Zealand's potential competitive advantage with AI: the ability to be the world's smartest, most reliable deployers of AI systems, if we focus on capability, digital literacy and a coordinated national vision for AI.

So here's my interview with the Brainbox Institute's Tom Barraclough.

Tom, Welcome to the Business of Tech.

Thanks for coming on. Thanks for having me.

A great organization you work for.

We've had one of your colleagues on in the past.

But just in case people don't really know what the Brainbox Institute actually is and does, give us the lowdown.

Speaker 3

I've got a background in law and public policy, and what I concluded from working in other areas of policy was that there was a really cool space to be occupied by organizations that are not government, that sit outside government and can coordinate with other parties in a particular policy area.

Not political parties, I mean sort of sector groups, to kind of lead a bit of a conversation, so you can be really interested in the stuff that other people are not so interested in.

And luckily for me, I kind of like doing the work.

I like thinking about tech policy, I like reading about it.

I like trying to synthesize everything together, and I like trying to find a way through that's kind of productive and efficient and useful in all of these spaces.

So I established the Brainbox Institute with a co-founder back in twenty eighteen, around a series of public interest legal research projects, including one related to deepfakes and synthetic media, which was all the way back in twenty nineteen, which is kind of wild looking back at that.

But since then we've also done a range of projects related to disinformation, for example with the Department of the Prime Minister and Cabinet.

I also led a global multi-stakeholder coalition around technology company transparency, particularly in a social media context.

And I've also really leaned into this topic of how to turn law and regulation into code and structured data.

So the Brainbox Institute is a think tank and a consultancy company, but I also spend a lot of my time thinking about how to build the tools for doing this work more effectively, and that's through another company called Syncopate Lab.
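
To make that "law as code and structured data" idea concrete, here is a minimal sketch in Python. It is only an illustration of the general technique, not the actual tooling Tom describes; the statute paraphrase, condition names and logic below are all invented for the example.

    # A minimal, hypothetical sketch of "rules as code": a statutory obligation
    # expressed as structured data, plus a function that applies it to facts.
    # The provision, conditions and wording here are invented for illustration.

    RULE = {
        "instrument": "Privacy Act 2020 (illustrative paraphrase only)",
        "obligation": "notify the Privacy Commissioner as soon as practicable",
        "conditions": ["privacy_breach_occurred", "risk_of_serious_harm"],
    }

    def obligation_triggered(rule: dict, facts: dict) -> bool:
        # The obligation applies only if every condition is satisfied by the facts.
        return all(facts.get(condition, False) for condition in rule["conditions"])

    facts = {"privacy_breach_occurred": True, "risk_of_serious_harm": True}
    if obligation_triggered(RULE, facts):
        print("Obligation:", RULE["obligation"])

The point of the structure is that the same machine-readable rule can be queried, compared and reused across projects, which is where the information and coordination gains discussed later in the episode would come from.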

Speaker 1

It's great work that you've been doing in recent years, and really excellent to have a think tank like Brainbox on the landscape at the moment with so much going on in tech.

You know, a lot of this stuff used to be done a little bit by InternetNZ.

They had a really good policy arm that sort of faded in recent years.

The academics do their thing, but it's very much through the academic lens.

So something that sits at that interface of government, research, business, and what's actually working, is great, particularly for artificial intelligence, which you've been increasingly focusing on.

You've got a really interesting four-part blog series out at the moment, all about AI regulation.

So we're going to get into that, but before we do, let's look at what's going on in the scene here.

This is moving pretty quickly overseas.

You know, various governments' approaches to regulation.

How would you sort of sum it up at the moment, where you've got the US really forging ahead, wanting to be the leader?

You've got China doing its thing, wanting to have AI supremacy as well.

You've got the EU with legislation in place now, the AI Act.

What's your take on where it's all going at the moment?

Speaker 3

My top-line take would be, and I think this is really relevant for local discussions: even if you take kind of the most strident approach to regulating artificial intelligence, and that is really the European Union approach, it's not just the AI Act. You know, the GDPR has had provisions in it around automated decision-making systems since about twenty sixteen, and we're still working out really what it means to regulate automated decision-making systems in that way.

I guess what I'd say is, even if you have that kind of really strident approach, what you find is the top-level legislation can be quite general, so it can be kind of oriented towards risk assessment and things like that.

And what you need to do to flesh that out is have this cascade of other regulatory instruments: so you have delegated acts, and you have guidance, and you have guidelines, and you have best-practice stuff, and then you have all the institutional infrastructure that sits around that too, to support the analysis of that information, and reporting on it, and scrutiny of it.

And that's even before you get into any of the hard-edged enforcement stuff, where you're actually talking about levying fines and telling people what they can and can't do.

And you know, aside from actually deciding to impose a penalty, there's still litigation underway now about the GDPR, from, what are we now, nearly ten years ago, to work out what all of this means.

So my top line summary would be that it's not a kind of binary exercise with all of this.

It's not like we're just going to say, let's regulate AI and then tomorrow AI will be regulated.

From a kind of starting point, it's much more gray in terms of what we already have in place and how we use that more effectively.

And then even if we did decide to just really kick things off and go hard, it would still be a pretty long process of trying to work out what a lot of this regulatory stuff means.

I think I'll just add that you have mentioned China and other countries.

The other thing to be aware of is that the scale of all the different regulatory approaches out there is just enormous.

And there's another piece to think about, which I've commented on before, and which I think there's some good domestic discussion happening around, which is international standards.

So quite often you'll have top-level regulation that says something like, you know, you should comply with this standard, and the standard will be produced by a completely different body that exists independently of any particular jurisdiction, and the regulation and people's compliance tend to converge towards that.

So it's a really interesting space there for us to play with, I think as a small country.

Speaker 1

Yeah, and look, it's still a bit of a blank sheet.

We haven't raced in and enacted any legislation, which is a good thing in many respects.

Obviously, the academics that work in and around artificial intelligence came out a couple of months ago, quite stridently, with an open letter to the Prime Minister and the Minister of Technology saying you need to regulate this.

This is a general-purpose technology the likes of which we haven't seen before.

Some of them arguing, you know, we have dedicated legislation for nuclear and for biotechnology, and the biotech laws are being reformed at the moment; this is as powerful as or more powerful than those technologies.

We need to do something.

You wrote quite a thoughtful blog post basically saying, hang on here, guys, look at the legislation we've actually got. Sure, it's a patchwork of legislation, but there's a lot to work with here.

You're basically saying, let's focus on the various acts and regulations we already have to do the job.

Speaker 3

Yeah, there's kind of two pieces to what I've tried to do.

One is to, well, promote coordination, but also to call out why I understand that that's really difficult.

That's come from experience where I have, in another area, sort of advocated for legislation.

So there was one project that we did around accessibility for disabled people: how do you basically turn the Convention on the Rights of Persons with Disabilities into an enforceable statute?

And part of looking into that has been understanding, well, what is the process where we go from, like, a great idea to a bit of legislation in New Zealand? And along the way there are some really tough questions that are really hard to answer, and principal among those questions is: what do we already have in place?

What's the gap?

You know, the task of going and answering all of that is really tough, and that's what kind of the first post is about.

It's about how we've got information everywhere.

We don't know whether it's sort of current, it's not all tied together in one place, and it kind of relates to a lot of different things, and it might be quite non-specific anyway.

So there's an information problem that needs to be kind of grappled with.

And then part of that information problem is kind of a coordination problem too.

So there will be, somewhere, fantastic information that is basically a systematic analysis of everything that exists in New Zealand and the quality of it.

I'm sure of it, and if there isn't, I will be stunned.

You know, maybe it's in government, maybe it's you know, in the private sector.

I think somebody has probably already done this analysis.

And part of this coordination problem is how can we make sure that we're just reusing that rather than starting afresh. You know, I would love to sit down and look at all the AI regulation that I've collated with other team members of Brainbox in the past, and the AI Policy Tracker that we've pulled together, but that's going to take a while.

So that's a task that you want to do in a systematic and structured way, in collaboration with others.

And I do think as well, you know, what's been done with the open letter under the leadership of Linsa McGavin and Chris and Andrew there is fantastic, because it has pulled people together and is the start of a fantastic discussion, I think, about what we're going to do about all this.

Speaker 1

When you start digging in, there is, you know, legislation that can be drawn on there, particularly the Privacy Act, that's an important one, but there are others as well.

My concern is that we're going to see something happen on a truncated timeline, as has happened with social media, where we have officials in government who've been told: you need to get the under-sixteen social media ban ready to go, like Australia has done.

It's a populist issue, and it springs from deep-seated, genuine concern from parents about what is going on in their children's lives online.

So now we get that, but they're hastily rushing to implement something that's potentially going to do more harm than good.

You know, we've got issues already around algorithmic bias and AI systems, the potential for mass surveillance, autonomous decision making in critical services.

So you know, does our legislation deal with that sort of thing?

If someone wants to launch a chatbot in New Zealand (our government has said we want a light-touch, proportional and risk-based system), anyone could launch a chatbot tomorrow and there's no risk assessment of it. Is that going to be governed by any piece of legislation?

Speaker 3

Really great point.

One of the things that is important to me is to have a kind of base civil society capability around this kind of regulatory space, which I think the Open Letter has really demonstrated to me that we do have.

You want to have this base level of capability.

So when there is this kind of really popular swell to basically do something (won't somebody do something!), we've got a kind of handbrake on that, where we can at least have an informed discussion about it.

So, the thing for me around social media regulation and banning it for under-sixteens: sure, you know, there's a lot of reason to be concerned, but nobody is sort of advocating for it on the basis that, hey, wouldn't it be a great idea if we had real ID requirements for internet services all the time in New Zealand?

But that is the effect of that policy, so we kind of really need to be talking about the actual practicalities of some of this too.

One thing that I've thought about quite a bit: if you sit down and you look at the AI strategy, the light-touch, proportionate, risk-based approach that has been advocated for is also the approach that the Open Letter has advocated for, except maybe a little bit less light touch, a slightly heavier touch.

If you look through the Responsible AI Guidance for Businesses that accompanied the AI strategy, it's a really fantastic starting point for thinking about these issues, because what it does is walk through a kind of table of at least ten different statutes that already have some bearing on AI and the way that it's used in New Zealand.

So you know, we need a better starting point for a lot of these discussions.

That's hard. Like, it takes a lot of reading and thinking and writing and stuff like that, and it's hard to get that going if you can't find the information; that's the information problem.

If you can't coordinate and kind of work more effectively with others, that's the coordination problem.

And then today I've just shared another one on what I'm calling the economic problem, which is, effectively: it takes time and energy to do this.

It's pretty hard to do it for free.

The only way you can do it for free is if you have some other economic interest in the discussion.

And that's not necessarily a bad thing, but it's something that we need to think about in terms of how we progress that discussion.

Speaker 1

Yeah, I want to get into that in a minute, because that is absolutely crucial: the vested interests involved here in a small country, and who has the resources to put into advocating a particular position.

But just before we do, one model that has maybe emerged for a rapidly emerging technology, in this case facial recognition: you know, we've had the biometric code that came out of the Office of the Privacy Commissioner.

It was quite a thorough process.

There was a trial running with Foodstuffs North Island at the same time, which actually gave some real-life data to look at about how this technology is used and the pros and cons of it.

Is that a potential model for some of these issues that are going to emerge out of AI, like deepfakes, which we keep talking about? It hasn't really blown up yet, but it's quite likely to as the technology gets better.

Potentially coming up with these sorts of, you know, codes from trusted officers like the Privacy Commissioner.

Is that potentially a good way to deal with some of these emerging issues with AI?

Speaker 3

I think it is.

Yeah.

And one thing that I have probably surprised myself with over the years in this area is that I do think there's a really interesting case for even self-regulatory approaches.

So self-regulation always sounds like this kind of wet bus ticket, you know, flimsy cop-out option, right, and it can be like that.

You know, there are some really glaring examples of that. But what it can also be is a really good trial run for how something's actually going to work, because how it's going to work is quite complicated.

And then what you can do is you can start to wrap around that some of the enforcement mechanisms that begin to kind of escalate the consequences of doing a good job or a bad job of that kind of thing.

The other benefit of taking that code-based approach is you can move really quickly.

You can move way faster than any government agency can.

And the thing is, if you can basically demonstrate that you've got a code that works and it's well thought through, it's much much easier for an agency to just pick that up and give it some teeth if it works well.

So I'm quite surprised by the extent to which I am an advocate for self-regulation, because I wouldn't have been like that in the past.

I need to spend some time with the biometric code.

I haven't taken the time to read it carefully.

Part of that is because of the information and economic problems that I describe in the series of blog posts.

I wanted to come back to one thing that you said as well, about the deployment of chatbots, and maybe this is also relevant to the deepfakes point as well.

If you think about it, there are areas where we already take a kind of risk-based approach, and we put the obligation on people who are doing stuff to think about what they're doing before they do it, and impose consequences if they do a bad job.

One example to me would be our health and safety legislation.

Right, so you can imagine a situation.

You know, let's say you're deploying a chatbot for your employees or something like that, or for people coming on to your workplace.

You're essentially already required to think about the risks of deploying that, and you can be liable for really significant financial consequences if you're deploying that chatbot in that context and something bad happens.

So that's just one example off the top of my head of the way that, you know, we do have these regulatory frameworks that do apply in these highly sensitive contexts, but it doesn't have the same satisfying impact as this generic nationwide risk-based AI regulation, which sounds way cooler than talking about health and safety.

I think deepfakes are another example of this, right? Like, one thing we concluded from our initial research was there was a lot of really open-textured legislation that covers things like fraud through deepfakes quite easily.

And that's because the definition of a document in the Crimes Act clearly anticipates something like a video file, so actually fraud through deepfakes is already a criminal offense.

The other example of this is nonconsensual sexual imagery as well.

There's an argument that that is covered by the Harmful Digital Communications Act and the Crimes Act.

Unfortunately, Parliament hasn't yet, despite us first identifying this issue in May of twenty nineteen, taken steps to clarify any of that.

And that's a kind of political priorities issue, I think.

Speaker 1

And all it might take is that clarification and guidance, that this law actually does apply to the digital world.

You know, for instance, this will eventually happen: we will have an AI chatbot in New Zealand that discriminates against someone on the basis of race or gender or something like that, and that will potentially be a case under the Human Rights Act, so that's applicable to that.

We've got privacy, obviously: if you leak data through an AI chatbot, or don't disclose what you're doing with that data and don't store it properly, you could be hit with our weak Privacy Act.

Speaker 3

A nice wet bus ticket.

Speaker 1

Yeah, yeah, yeah, exactly. So there is that.

So yeah, I take your point that you know there are lots of layers of things going on.

I think it goes back, though, to what you said earlier: even in the EU you've got the scary AI Act, you know, with big multimillion-dollar fines for non-compliance, but there's this whole process that goes on behind it, and I think that's the bit that we don't really have much visibility into, or talk about here.

You know, what is actually going on there.

You know, you take an issue like copyright and arguably the bus has already gone on that you know, everyone's copyrighted material has been scraped from the Internet.

We do have copyright legislation in New Zealand, quite a strong Copyright Act, so technically, if you wanted to chase OpenAI under New Zealand copyright law, you could take them to court and you might have a good case for it.

But there's very little discussion about this, and it goes to that point you made: regulation is one thing, but you also need those processes, and even a government that is waving the flag on these issues, and we've been very muted.

I think in New Zealand we've just sort of said, look, the laws are there to handle it, let's sort of see what happens.

It's not as though the government is really thumping the desk on any particular issue around AI, just sort of saying in general, you've got to do it responsibly.

Speaker 3

I think that copyright one is a really good example, because someone somewhere in New Zealand will have sat down and looked very, very carefully at the Copyright Act.

I'd be fascinated to know. A lawyer somewhere may have even been approached by a copyright holder and been tasked with thinking: if we were to pursue litigation under New Zealand law on this, what would the answer be, and how would it be enforced?

I'd be stunned if somebody has not thought about that.

The information issue is whether we go and replicate that, or whether we can basically share the results of that analysis, which would be really valuable.

It would basically mean that we don't have to duplicate that over and over again.

And you can almost guarantee that the person who has thought to themselves, I really want to sue Anthropic or OpenAI about this, has not gone to their local family law solicitor; ideally, they've probably gone to one of the big law firms who really know what they're doing.

And they may have even gone out to some QC (sorry, KC) who's the copyright expert in New Zealand.

I think this is part of that kind of systematic approach, which I've learned over time can be really frustrating and disheartening, because you might take off in one direction and do this amazing analysis, and then you turn a corner and somebody's already done it way faster and come to a great answer, and you could have saved all that time by just asking them what they think.

So that's why I'm thinking about this information and coordination stuff, the kind of economic things that drive that, and then, downstream, some of the policy issues that give us the focus to organize around as well.

Speaker 1

Let's talk about those economics. I think the third post in your series is very much about that, how it drives this discussion and the approach to regulation.

What I've found over the years with anything tech-related, particularly around social media: you get the big tech giants who basically do a cookie-cutter version of their overseas policies, which, you know, they've put a lot of time and money into crafting.

They localize that, they serve that up to our government, and then they sort of cozy up with the Netsafes and those sorts of organizations, to exert soft power as such.

Then you get sort of the rest of New Zealand Inc., which, you know, only engages if it really affects their business: for instance, a facial recognition company that does retail security systems will be very interested in the biometrics code.

So you'll get particular businesses that are interested in it.

You get the academics, because for some of them this is their area of research and they want to write research papers on it.

That's great. But the ones that are really left out are sort of civil society and those groups that are really poorly resourced.

I'm thinking, you know, just as we speak, one of the big ones in New Zealand, IT Professionals, has just gone bust, going into liquidation, an organization that started in the nineteen fifties.

It's those sorts of organizations that in the past maybe had really good input into policy development in New Zealand, but they're just so cash strapped.

So my concern is that we're increasingly going to miss, in this discussion, really important dialogue with the public and with researchers who work on behalf of the public.

Are you concerned about that?

Speaker 3

Yeah. And I think you've summarized that really well.

And essentially, I've thought about this a lot since establishing Brainbox in twenty eighteen.

It was also quite an eye-opener to me working in international tech policy, because obviously, you know, you have the kind of international think tank space. One of the members of our steering group was from the Carnegie Endowment for International Peace, right, founded by Andrew Carnegie, the kind of Elon Musk of his time, with just an enormous endowment to focus on good ideas and articulating good ideas.

It was really amazing to kind of walk down think tank row in Washington, DC, and go to the Brookings Institution and stuff like that.

So I've thought about this a lot, and I'm a real geek for all of this.

As you can tell, it is critical to be able to have a kind of non-industry, non-government, and I'd say non-academic voice on all of this too, because academia is fantastic but can also operate like a very large institution and, as you say, is kind of maybe oriented towards particular ways of thinking about things as well.

The space that I've tried to really fill with Brainbox is to have that really action-oriented capability to deal with all of those groups.

So one thing that people might find, as they go through the blog posts, is I'm trying to call some things out really directly, but then also say, like, look, I get it.

You know, this is not happening because the government are bad guys that don't want to, blah blah.

You know, there is a place for industry at the table, advocating and sharing good ideas that make for better policy, but also, you know, ideas that have an impact on their bottom line.

You know, the tricky thing here is how do you balance all of those different sector groups.

It's a really interesting space in technology too, because this is how Internet governance has always worked.

Governance of the internet has never been about just governments doing it by themselves.

It's always been about a blend of industry, academia and research and then the people using the internet, you know, the kind of civil society.

So I think there is a really cool opportunity to try and solve this and do it really well.

That is what I'm trying to do at Brainbox with my co-director, Dr Ellen Strickland, who was at InternetNZ for a long time and is currently leading, for example, a project on internet resilience in a world of climate change impacts.

So, really cool, epic projects that just do an incredible job of pulling together so many interesting people with perspectives to share on this, from lots and lots of different groups.

We've had some funding from the Internet Society Foundation internationally to do that.

So it's a really cool example of the way that, if you can crack the economic side of things and the institutional side of things, you can get this great collaboration infrastructure going that deals with this information problem of synthesizing and bringing everything together, can coordinate in a way that isn't driven by any particular interest, and then, you know, create that capability and keep the discussion going in the way that we need it to.

Speaker 1

Yeah, final question, Tom, which I ask everyone that I'm interviewing about AI, and I get different responses.

I'm intrigued as to what your thoughts are.

You know, should we have a real angle on AI as a nation, and what should it be for New Zealand?

You know, the US, as I said at the start, they want to have supremacy in AI because it is geopolitically significant.

It's a national security issue.

That's why Trump is in the White House every second week with Oracle and OpenAI: five hundred billion of investment here, six hundred billion there.

For them, it's about brute force.

How much can we spend to accelerate our lead?

China has a version of that going on.

Europe has sort of pivoted a little bit from regulation-first to, we don't want to miss the AI revolution by strangling our companies with red tape, so we need to accommodate that as well.

You're closer to it than me, talking to lots of stakeholders in this.

Is there an angle emerging for us when it comes to what our edge could be competitively, you know, in terms of trade and that, but also for the rest of the world to look to us? Is there anything emerging where we have potential leadership?

Speaker 3

What I do think we are missing is a single coordinating vision for what New Zealand needs from AI.

I think there is recognition that it's very, very important, and I think in fact it is important.

Probably, if you'd asked me maybe even a year ago, I would have been on the fence about that, but I'm not now.

I think it's critically important.

It's not just the AI side of things.

It's also that broader kind of digital sovereignty, digital infrastructure, kind of internet infrastructure, and then also the kind of literacy and capability aspects of that too.

It's all very well and good having really powerful computers and fantastic models, but if everyone's using them for the wrong thing, we're not going forward in any meaningful way.

So we do need a kind of coherent vision.

In terms of what I've explored in the past, I probably would have advocated for what's called a human rights-based approach, and that's really useful because it gives you a starting point and a set of principles and values that are pretty well tested, right: freedom of expression, the right to privacy, public participation, all of those kinds of things.

That kind of framing has obviously massively fallen out of favor internationally, which you know is not great, to be honest, but it is what it is.

We've got to be practical.

I was talking about this at a seminar a little while ago where somebody raised with me this concept of sovereign AI, and my initial reaction in the moment was: what? That's weird, what are you even talking about?

But it's kind of wormed its way into my head quite a bit, and I've been thinking about what it might actually mean for New Zealand, and in the next post I'm going to frame up why I think sovereign AI is a really interesting direction for us to be thinking about.

And I've got a longer discussion paper that I've shared in the past that kind of breaks it down and says, this isn't about having, like, NZGPT that's trained on all of the data in New Zealand and speaks with a Kiwi accent and all that kind of thing.

It can be as simple as talking about meaningful AI literacy, or it could be as simple as making sure that we do have resilient digital infrastructure for access and deployment of AI systems, and in all likelihood it probably means fine-tuning models that already exist.

And, you know, this is kind of a free plug for Straker AI, a pretty amazing New Zealand company who I think are now proposing to offer fine-tuned models as a service. Pretty interesting for your listeners, I'm sure.

I'm really interested in this kind of sovereign AI thing and kind of fleshing out what that means in terms of the competitive advantage for us.

I know that people are kind of thinking about the fact that we're, like, a stable, remote, English-speaking, westernized nation.

I know that other people are also thinking about the fact that we have a lot of renewable energy generation, so when it comes to basically housing data centers, we have a lot of capability there as well.

I hear that all the time.

I actually do agree with what they said in the AI strategy, and it's a pretty unpopular position to hold, to say anything nice about the AI strategy. But what they did say was, you know, we can be experts at basically deploying AI systems.

So we've got these incredible AI systems, very powerful.

But something I've said in the past is, you know, they're only as good as your systems for making sure that what they're doing is good and reliable and not a hallucination.

And, you know, that it takes into account all of the data that it needs to, and that all the code it's produced can actually be tested for security purposes and works.

So I actually do think this competitive advantage around being, you know, the world's smartest deployers of AI systems is something that I'm obviously pretty interested in, and I'd like to see that taken further.
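
As a rough illustration of what "smart deployment" checks could look like in practice, here is a toy Python sketch that only accepts model-generated code once it passes an acceptance test the deployer controls. Everything in it is hypothetical: the generated snippet, the function name and the test are invented for the example, and a real system would sandbox the execution properly.

    # Toy deployment guardrail: model-generated code is accepted only if it
    # passes tests we control. The generated snippet below is invented.

    generated_source = '''
    def add_gst(amount):
        """Add New Zealand GST (15%) to an amount."""
        return round(amount * 1.15, 2)
    '''

    def accept_generated_code(source: str) -> dict:
        # Run the generated code in its own namespace, then test it before
        # trusting it. (A production system would sandbox this step.)
        namespace: dict = {}
        exec(source, namespace)
        assert namespace["add_gst"](100.0) == 115.0, "failed acceptance test"
        return namespace

    functions = accept_generated_code(generated_source)
    print("Accepted:", functions["add_gst"](20.0))  # prints: Accepted: 23.0

The design point is the one Tom makes: the value sits less in the model itself and more in the verification wrapped around it.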

Speaker 1

Yeah, and we do have a track record at that. We didn't invent cloud computing, but we've got some great software-as-a-service companies that use the cloud to improve accounting or billing software, whatever it was.

So we are adept at taking those existing technologies that billions of dollars have gone into and actually making them really useful.

Speaker 3

And I guess one of the other things I've been thinking about a lot, and you'll know more about this than me: I keep reading about the Knowledge Wave conference, back in sort of two thousand and one, I think it was. And I ask myself, you know, that was probably the last time that we said, we're going to be a genius tech economy.

And one of the questions I'd love to see asked and answered is, well, what happened with that? You know, did it work? Is it where we are now? Or, you know, did it fail?

Or And I think, to be honest, speaking as somebody who reads a lot and does a lot of sort of research, I think a lot of the most interesting answers to that are actually going to be by talking to people who are there, and they'll be able to say, you know, oh, well this person is leading the work and they kind of moved on, and then you know, it'll be insights like that that are actually are really impactful.

So I'd love to see a bit of a retrospective on that kind of thing.

Speaker 1

Well, we'll leave that for another episode: a deconstruction of what happened, or didn't happen, to the Knowledge Wave.

But in the meantime, hey, great work.

Keep it up with Brainbox.

Really important part of the tech policy landscape.

So thanks so much for coming on, Tom, and we'll obviously post links to all those blog posts.

Really interesting series. Thanks very much for sharing them with us.

Thanks for that.

Speaker 1

That's it for this episode of the Business of Tech.

Thanks so much to Tom Barraclough, who is really challenging the prevailing wisdom about AI regulation among tech experts and academics.

He's cautioning against quick fixes and highlighting the wealth of existing legislation that already covers some of these AI-related risks.

Whether we're using that legislation to head them off quickly is another story.

It's an argument well made, but as we saw with the rise of social media, existing laws didn't really deal well with the harm that those platforms caused.

The introduction of the Harmful Digital Communications Act did give people some redress against cyberbullying and harassment, but really, did it change the behavior of Meta, X, and TikTok?

None of them have ever faced criminal penalties or fines in New Zealand for live-streaming the Christchurch terror attacks, for instance, or anything else for that matter.

That's exactly why parents in New Zealand are overwhelmingly supportive of a social media ban here.

They don't trust these platforms, in large part because they've been given free rein; they've sort of been incentivized by a soft regulatory regime.

Still, Tom rightly points out that our real challenge is actually information and coordination, not necessarily a lack of laws but how we use them, and he stressed the importance of building civil society's capability to hold balanced, thoughtful discussions even in the face of populist calls for sweeping action.

Totally on the same page with him about that. New Zealand needs to shape its own national approach, focused on digital sovereignty, infrastructure resilience, and AI literacy.

If we get this right, our edge could be not just in developing technology, but in deploying it really smartly, responsibly and with real impact.

So thanks for listening to the Business of Tech, which is streaming on iHeartRadio, Spotify, and Apple, where you'll find all the show notes for this episode. Thanks to our sponsor 2degrees, and I'll catch you next week for another episode of the Business of Tech.
