Grok, deepfakes and who should police AI

January 16
38 mins

Episode Description

What happens when AI gets it wrong? After a backlash over the misuse of Elon Musk's AI tool Grok, new restrictions have been imposed on editing images of real people. Is this a sign that AI regulation is lagging behind the technology, and who should be in charge: governments or Silicon Valley? This week, Danny and Katie are joined by AI computer scientist Kate Devlin of King's College London to discuss why this moment could be a turning point for global AI rules.


Image: Getty

Hosted on Acast. See acast.com/privacy for more information.
