Episode Transcript
Hello, and welcome to the Physics World Stories podcast.
I'm Andrew Glester.
And in this episode, we're diving into a topic that is revolutionizing physics itself: artificial intelligence.
AI is transforming our world and becoming part of our day to day, but what about physics?
What happens when we apply machine learning to the fundamental questions of our universe, and how is AI accelerating discoveries in particle physics?
Can it help us visualize the unseen through photography?
And what does the future look like as AI and physics evolve together?
This episode is inspired by the Institute of Physics' latest white paper on AI in physics, exploring the challenges, opportunities, and ethical questions that come alongside it.
We'll hear from a science photographer at MIT about AI in her work, and we'll hear from two people who have been advising on that white paper, including a particle physicist at CERN and the University of Manchester.
But first, Tony Hey, formerly of Microsoft, has spent years at the intersection of computers and scientific research.
He's one of the consultants on the IOP's white paper, and he used to be a particle physicist.
I wondered what had made him move from that into computing.
Theoretical particle physics had essentially come to a great crescendo with the gauge theories of quantum chromodynamics and the standard model.
And you could read the fifth edition of my book, Gauge Theories in Particle Physics, which is, if you like, an epitaph of my career in particle physics.
So I was very fortunate to be there at the critical time.
But, yes, I didn't want to spend my time doing the same things, doing things which are largely irrelevant to people.
Alright?
Most people don't care about particle physics.
Most people, I would say, like the pictures from astrophysics and astronomy, but, actually, I'm not sure whether they worry about the Big Bang, the inflationary period that followed, the background radiation, and stuff like this.
So, I mean, I still find these things interesting and fun, but I do feel that particle physics evolved into a sort of dead end, in that it went to string theory, which is fine, but it's sort of metaphysics, in that it doesn't make any predictions that you can check, and it has excitations at the Planck level, which we can never get to.
Oh, and also all the particles I spent my life caring about, the proton, the pion, the electron, they're all zero-mass approximations, because compared to the Planck mass, everything is zero.
So there are symmetry-breaking effects, which you may get wrong.
Oh, and it's in the wrong number of dimensions, but maybe some of them curl up and we get to four dimensions.
And really, it does seem to have evolved into a sort of love of mathematics.
It's very fascinating, and it's done some wonderful things in mathematics, but it doesn't have any impact on particle physics.
So I did feel, yes, increasingly, that particle physics was becoming divorced from real life, and I got more interested in using computers to solve problems.
And then parallel computers: I was extremely fortunate that my team designed and built a parallel computer, originally to do physics, but now I realize there are other things you can do with it than physics.
And, for example, in doing e-science, I really believe things like climate change are rather more important than particle physics and astronomy.
And, again, I'm not sure AI can do a huge amount to accelerate that, but particle physics certainly can't.
So, no, I welcome the physicists.
They're a wonderful community.
They do lots of things, but I think they're not quite as critical as they think they are.
Sorry to all our listeners.
I mean, that's, yeah, that's the way it is.
I'm a physicist.
I'm giving a talk next week about Bell's theorem, demonstrating how Bell's theorem shows that Einstein's hidden variables were wrong.
And I still like that, and I'm talking about the difference between John Bell and Einstein and Bohr.
Einstein and Bohr considered correlations at 0 and 90 degrees.
John Bell, as he used to delight in saying, because he's Irish, considered correlations at 37 degrees.
Alright?
And then you can tell the difference.
Right?
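Bell's point about the angles can be sketched numerically. This is an illustrative toy, not Bell's own derivation: for spins in the singlet state, quantum mechanics predicts a correlation of -cos(θ) between detectors at relative angle θ, while one simple local-hidden-variable model predicts a straight line, -1 + 2θ/π. The two coincide at 0 and 90 degrees, but disagree at intermediate angles such as 37 degrees.

```python
import math

# Quantum prediction for the singlet-state spin correlation at relative angle theta
def qm_correlation(theta_deg):
    return -math.cos(math.radians(theta_deg))

# One simple local-hidden-variable model (a random hidden spin axis, each
# detector reporting the sign of the projection) predicts a straight line
def lhv_correlation(theta_deg):
    return -1 + 2 * math.radians(theta_deg) / math.pi

for angle in (0, 37, 90):
    print(angle, round(qm_correlation(angle), 3), round(lhv_correlation(angle), 3))
# The two models agree at 0 and 90 degrees but differ by about 0.2 at 37 degrees.
```

The choice of 37 degrees is exactly where the quantum curve pulls furthest away from the simple straight-line hidden-variable prediction, which is why measuring only at 0 and 90 degrees cannot settle the question.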
And so, no.
I still think physics is a wonderful area, and my hero is still Richard Feynman, with his wonderful Lectures on Physics and the wonderful Cornell lectures he gave, and stuff like that.
And I worked with Feynman to help write up his lectures.
He lectured for the last five years of his life on computing.
You can find the Feynman Lectures on Computation, edited by me.
You can find them in book form, and they're all about interesting things: universality, Turing's theorem, and these sorts of things.
I take it there's nothing in there about the possibility of AI?
Yes.
Feynman cared about AI.
He didn't like the name AI; he called it advanced applications.
But he understood, you know, image processing, computer vision, robotics, and things like that.
Yes.
He cared very much.
He had collaborators: on the original version of the course, he gave it with two collaborators.
One was Carver Mead, one of the people who explained why Moore's Law worked, and one was John Hopfield, who won the Nobel Prize.
And Hopfield had a specific type of neural network, which is not the same type of neural network as everybody uses now.
Alright?
But he did win the Nobel Prize, and he was originally a physicist.
Hopfield was a physicist.
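For readers curious what made Hopfield's network different from today's feed-forward models, here is a minimal sketch, toy code rather than Hopfield's original formulation: a pattern is stored as symmetric Hebbian weights, and recall works by iterating the state until it settles, so the network behaves as an associative memory.

```python
# Minimal Hopfield-style network: store a binary (+1/-1) pattern with a
# Hebbian outer-product rule, then recover it from a corrupted copy by
# repeatedly thresholding the weighted input to each unit.
def train(pattern):
    n = len(pattern)
    # Symmetric Hebbian weights w[i][j] = p_i * p_j / n, zero diagonal
    return [[0 if i == j else pattern[i] * pattern[j] / n
             for j in range(n)] for i in range(n)]

def recall(weights, state, steps=5):
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in weights]
    return state

stored = [1, -1, 1, -1, 1, -1, 1, -1]
W = train(stored)
noisy = stored[:]
noisy[0] = -noisy[0]               # flip one bit of the stored pattern
print(recall(W, noisy) == stored)  # the network cleans up the corruption
```

The contrast with the networks "everybody uses now" is that nothing here is trained by gradient descent on labeled examples; the dynamics simply fall into the nearest stored memory.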
Hinton, on the other hand, was never a physicist.
So, my computer science friends: I used to have a colleague, a famous guy called Jim Gray, and he was very annoyed about physicists claiming engineers, like the man who invented integrated circuits, to whom they gave a Nobel Prize.
He was really, really angry about that: he wasn't a physicist, he was an engineer.
And he would have said the same about Hinton and Hopfield, that they were not physicists.
So getting a Nobel Prize in physics was a little strange, and must certainly have ruffled a few feathers in the physics community.
And so, to come to the IOP document: it's a wonderful document, and it's right to engage the attention of physicists, because AI will become part of their working life.
That's true.
I just rather doubt, when they say large numbers of them know all about it, that they do know all about it, because, actually, it requires quite a lot of effort and investment in computer science technologies, and I don't think most particle physicists make that investment.
It is true, however, that particle physicists have used things like neural networks and other types of algorithms which are part of AI in general.
But what drives AI now are these deep neural networks, and now the version of them, the transformer networks, that can do these large language models.
From your perspective, looking at AI in society, not just in physics, are there misconceptions about it out there, or are we understanding it as it really is?
Well, when I came back in 2015 to the Rutherford Lab, I was amazed that nobody I could find understood that what you needed to do AI post-2012 was large amounts of computing power.
And GPUs like Nvidia's GPUs were one solution.
And it turned out that what actually made the difference was the scale of the data and the scale of the computing.
And everybody here was doing 'AI' that included all sorts of little algorithms, and they're all mentioned in the report.
They were, if you like, pre-deep-learning.
And people were thinking, that's fine, I do AI, and I can get money from the government to do it.
But no: the whole purpose was actually to transform things, and the government putting money into AI was because of deep learning, and now these transformer models and large language models.
And I couldn't find anybody who was interested.
And so I got a grant on AI for science. I called it 'AI for science' deliberately, because I knew the politicians didn't know about machine learning for science, which is what it was.
Alright?
Machine learning was a nuance they wouldn't appreciate.
And I managed to persuade them, it was a collaboration with the Turing Institute, to give me some GPU computing power, so that Rutherford could offer a GPU-based AI computing service to Turing participants.
And that, I think, is a challenge still.
And, you know, the question is this: my old company, Microsoft, and others are building data centers at a vast rate, which involves huge amounts of computing power and costs billions of dollars.
And there's no way that Europe can emulate that, because we don't have any of these, what are they called, hyperscale companies: Amazon, Microsoft, Meta, Google.
Yes.
That's right.
So and possibly Apple.
In the US, they're the only ones.
China has some companies that could similarly do that.
We don't have companies in Europe that can put billions a month into building these centers.
And so Europe's trying to do something, but it's a complicated business.
And I think, given the promise of AI, there will be some national resources that you can do these things with, but it's complicated.
And, of course, then you have this result from China: DeepSeek says, oh, you don't need large amounts of computing power.
I'm slightly skeptical of that, but we'll wait and see what happens there.
Yeah.
I was gonna ask about that.
You are skeptical of it, are you?
Slightly.
Yes.
I am.
I suspect that they've actually learned from what the US companies, and the open solutions, have done.
And, yes, I think that you will still need large amounts of computing power.
And the Chinese have that.
Right?
But I don't know how open what they've done is, what they copied and what they haven't, because it's difficult to tell.
As you mentioned earlier, you have an interest in climate science.
Right.
I think everybody should have.
Right?
Everybody.
Yeah.
No.
Absolutely.
But, you know, large amounts of computing power have a large impact on the climate.
Right?
Yes.
I'm told, though I haven't checked this figure, that the amount is less than that used for Bitcoin mining.
Right.
Okay.
I haven't checked that, but it seems to me a plausible thing, especially as Bitcoin and its variants are now being widely used, and they use a large amount of computing power.
But no, that doesn't worry me so much.
I think there'll come a natural end to it.
The question is, how much training data do you need?
Right?
And one of the projects I'm actually interested in, related to the Institute of Physics work, is this: what you can do now is pick up a model that's been trained by OpenAI, ChatGPT and things like that, or by the other companies.
And then you can specialize it to your domain, but you don't know what the thing has been trained on.
It's been trained on Wikipedia, it's been trained on Reddit, and what else has it been trained on?
You don't know.
You don't have control.
So one of the things that my friends in the US and I are interested in is seeing if you can train a large language model on a corpus of scientific data and literature.
Will that give you different solutions, rather than taking what you've got, with all the sort of extraneous things it's been trained on, and then specializing it?
If you train it on scientific data and scientific literature, does that make a difference?
That's really interesting.
It is really interesting, and I'm not sure that it does, actually.
But, certainly, people are beginning to look.
See, one of the roles that I still retain in the US: I'm on the Advanced Scientific Computing Advisory Committee for the US Department of Energy, until they abolish it, which I don't think they will.
But the Department of Energy has the nuclear weapons; that's where they fired all the people who knew about the nuclear weapons and then had to rehire them, because they suddenly realized... well, in the previous Trump administration, the man who went to the Department of Energy said he was going to close it, because it was all about energy and green energy and stuff like that.
But, actually, it's about where the bombs are.
And eventually he realized that's where the bombs are, and you don't really want to close that.
But I work with the non-secret part, and there are three supercomputer labs that I work with.
One is Berkeley, one is Argonne near Chicago, and the other one is Oak Ridge in Tennessee.
And there, they have the most powerful computers in the US.
They're gigantic things with tens of thousands of GPUs on them, so they can, in fact, do some serious stuff.
They're built for doing supercomputing, but they can also be used for AI, because they have large numbers of GPU chips.
They're not necessarily optimally designed for that, but, nonetheless, they're very useful.
So those labs are the community.
I think that's a very valuable community, if it isn't destroyed by the present administration, and they're doing some very interesting things.
If you're not concerned too much about the climate impact, are there other impacts of AI that you are more concerned about?
Yes.
I'm not concerned about, you know, the scariest things, AI taking over the world and things like this; I don't actually subscribe to that.
But am I worried about these great tools also being used for disinformation, deep fakes, and generating all sorts of evil stuff?
Yes, I am.
So my view on that: I hope the UK has research into unethical AI, because, for sure, North Korea, Iran, China, and Russia are all looking at attacking us with whatever they can do.
And so I don't think they're too concerned about ethical AI.
So it's great to see that we're concerned about it, so long as some parts of our establishment are actually looking at how you counter unethical AI.
That's my question, and I'm sure that some parts of GCHQ, or elsewhere, are doing some stuff like that, I hope.
Now, regular listeners will know I'm something of a fan of particle physics, so let's hear from a particle physicist.
Here is Caterina Doglioni.
I'm a professor of particle physics.
I work mainly on the ATLAS experiment at the Large Hadron Collider, where I do data acquisition, real-time analysis, and searches for dark matter, trying to produce it in the lab.
I'm also very interested in software, open science, and the environmental impacts of the research that we do.
You say you're looking for dark matter.
Have you found any?
No.
Not yet.
I mean, we don't think we've produced it yet.
Or maybe we have produced it, but it's too rare to be distinguished from the backgrounds yet.
So we don't know.
We keep looking.
Okay.
Can you give me a sense of what that would look like?
I know it's quite a hard thing to picture, but what would it look like in the data if you saw it?
The easiest answer is: you'd see nothing.
Because that's the main signature, in what we're doing, of a dark matter candidate.
And when I say candidate, it's important to keep that in mind: it's not guaranteed that the particle we're going to see is dark matter.
We'd need many other experiments to confirm it.
But assume that you're producing dark matter at the Large Hadron Collider and an experiment detects it.
What you're actually detecting is the missing transverse energy that is left by particles exiting your detector without any trace.
And we use, more or less, conservation of energy; it's actually conservation of transverse momentum.
You have two particles coming in, colliding, and then a lot of debris coming out, most of which you'll be able to detect with the detector.
But the dark matter particles, you're not going to be able to detect them.
They're dark.
They don't interact very much, so they're just going to escape.
So if you sum the energy that you had at the beginning and the energy that you have at the end, you'll see that something is missing.
And this missing energy is the signature of potential dark matter.
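The bookkeeping she describes can be sketched in a few lines. This is a toy with made-up numbers for an idealized detector: vector-sum the visible particles' momenta in the plane transverse to the beam; whatever is needed to balance that sum must have escaped unseen.

```python
import math

# Missing transverse momentum: vector-sum the visible transverse momenta;
# the invisible system must carry the opposite of that sum.
def missing_transverse_momentum(visible):
    """visible: list of (pt, phi) pairs in GeV / radians (illustrative values)."""
    sum_px = sum(pt * math.cos(phi) for pt, phi in visible)
    sum_py = sum(pt * math.sin(phi) for pt, phi in visible)
    met = math.hypot(sum_px, sum_py)          # magnitude of the imbalance
    met_phi = math.atan2(-sum_py, -sum_px)    # direction of whatever escaped
    return met, met_phi

# Two back-to-back jets balance exactly: essentially no missing momentum.
print(missing_transverse_momentum([(100, 0.0), (100, math.pi)])[0])
# One jet recoiling against nothing visible: 100 GeV of missing momentum.
print(missing_transverse_momentum([(100, 0.0)])[0])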
Okay.
It's also the signature of neutrinos.
So those particles exist.
Yes.
So that's the main problem.
You have a lot of background that you need to distinguish.
You need to distinguish what is signal and what is background.
And a lot of the time, you can only do that by analyzing and accumulating a lot of data.
So that that's one of the ways in which dark matter could appear.
I'm not looking specifically for that.
At the moment, I'm looking for a sort of sister theory of quantum chromodynamics called dark quantum chromodynamics.
So imagine we have a copy of our plentiful particles and beautiful particles from the standard model that we all know and love.
There's a copy of that, and it's dark.
We don't see it because it's only connected to the Standard Model particles via very weak interactions, or via very rare or very heavy particles.
So we call that a complete dark sector, somewhere else.
And within this dark sector there might be dark matter particles: one of them, or a combination of these dark sector particles, makes for a dark matter candidate.
How does AI come into this research that you're doing here?
It comes in quite a lot, because with the amount of data that we have, we have to be smart about how we analyze it.
We could possibly do most of the things that we're doing, maybe not all, but most, I would say, in classical ways.
The thing is that it would take us much, much longer.
It's the same kind of revolution that we had in particle physics when people went from looking at pictures of collisions in bubble chambers to computers.
You have some algorithm, something that really accelerates the speed at which you can gain insight from the data.
So that's where machine learning comes in.
This is only one way in which the field of physics uses machine learning for data analysis.
So in this case, we might not be the proponents of new algorithms.
We're mostly users, but it is still having a huge impact.
But there are other people who are trying to put physics into machine learning.
So that's another kind of crosstalk.
I don't do that specifically.
I'm more someone who is using machine learning to get things done.
I mean, the big question that seems to come up all the time with this sort of thing is, does that mean you won't need your PhD students?
I mean, would you... I would not!
I mean, who's going to do the data analysis if I'm teaching all the time?
No.
Anyway, jokes aside, there's a lot of experience that even the best large language model is not going to be able to inject into this kind of endeavor.
So I'm not worried that anyone's knowledge-based job in our field is going to be taken away by machine learning, simply because, while we have the tools, we need to know how to use them.
We have Copilot for programming, but we still have to know how to design the software.
So this is the kind of input that we still need to need to have.
Sure.
If you go to general artificial intelligence, or something that is bigger and not necessarily my field of expertise, then maybe you can think about a longer-term future and see what that brings.
But at the moment, I think we do need a lot of human input in the design and the use of the correct algorithm or correct tool for the problem at hand.
You know, five years ago, if you'd told me that AI was going to be as big in society as it is now, I would have said, no, it's not, not in five years.
So how quickly is that future gonna come around where we do need to think about it replacing people's jobs and that sort of thing?
I don't have a crystal ball at the moment.
No, not from my field.
I think anything that we find boring can possibly be taken over by some algorithm, but that's normal.
When you have everything well established, then you get a robot, or something machine-like, doing it for you.
It has happened before, and while it has made changes, it hasn't made changes for the worse when handled correctly from the workers' perspective.
So you can't say, oh, well, bye, I'm not employing you anymore.
That's not a nice thing to say to someone who was doing the same job as, you know, a robot or something.
You need to find roles in which these people can be appreciated, can be recognized.
So my concern, I think, is more about that than about the replacement itself: there might be a way to do it, but we have to do it in a way that's sustainable for people, not just maximizing scientific profit, whatever you want to call it.
Yeah, yeah.
So, but, I'm sorry, I will get on to the more positive things in a minute, obviously.
But you're doing teaching as well.
Does students' use of AI concern you?
Yes.
Not in a way that... I mean, I encourage my students to use it.
I teach a programming course, so it's something where, if you don't use AI, you use Stack Overflow.
If you don't use Stack Overflow, you use Google.
If you don't use Google, you ask your friend.
So there's always been a case of: I need to find this particular piece of information about something that is happening to me and I don't know how to solve it. Where do I go?
And I think it's fine to go to large language models; it's just evolving something that was happening before.
What I'm not entirely sure I understand at the moment is how the perspective has shifted between 'I want to learn something for myself, because I want to have that experience and be able to apply it' and 'I want to reach a goal with the minimum amount of effort possible.'
That is the tricky part.
When we're tired and we have a lot of things to do, we want to just find the easiest way to get something done.
But if you make your entire programming course work like that, what have you learned at the end?
So one thing that we're trying to encourage students to do is: share your prompts.
Ask the right questions.
If you use these kinds of tools, you have to use them responsibly.
You can't just say 'do the assignment for me', because we've found hallucinations in our assignments.
That is the immediate problem, but we've also found very complicated solutions to problems that, if you had just thought for five minutes, you would have solved yourself.
I'm not saying that this won't be overcome; there will be better versions of ChatGPT or Copilot that do this better.
But somehow it doesn't stimulate learning for oneself.
And I think this is an important part of, at least, a physics degree.
Because, I mean, maybe we do it also for the money, but we don't do it exclusively for the money.
We do it also for our pleasure, for our will to understand the world.
And if we're just asking someone else to understand the world for us, then where does that leave us?
But I think there are also good things about the use of artificial intelligence for teaching, for solving teaching problems, for solving learning problems.
It democratizes knowledge quite a lot.
A lot of people have access to knowledge that they might not have had before.
And then there's the environmental sustainability of these kinds of tools, but I'm not going to fall into that hole at the moment, because I would talk about it for hours.
So what are we doing?
We're just continuing to use resources without thinking, because it's good for us, because it eases our understanding and accelerates our knowledge.
How do we use it responsibly?
My wondering is: do we get to a point in physics where it doesn't actually matter if AI is doing all the discoveries?
Because there's a black box there, isn't there: we don't know how it did it.
Well, I think there's a science fiction story about that.
I don't remember the title or the author at the moment, but I've brought it up before.
In it, in the far future, AI is doing all the discoveries, and humans are just there to reverse-engineer them.
Figure out how they've done it.
Yeah.
So imagine it could be AI, or it could be an alien that comes and brings you an amazing technology, and then you're like, okay, how can I use it?
Why is it there? How is it working?
You know, it kind of reverses the scientific process, because you're not making a discovery yourself.
You're just figuring out how it's done.
But it wouldn't be too different from reverse-engineering a piece of code from someone who has not left documentation.
You still have to figure things out.
So can you see that as a possible future with AI?
I mean, I can see a lot of possible futures for AI, and I'm worried about different things in the world than that.
But I think it could happen. I'm still thinking, though, that the human brain is the best possible kind of AI that we've got.
And collaboration between humans is something that brings in serendipitous discoveries as well.
That's not to say that maybe at some point we won't be able to simulate that, but I think the power of individuals, and of collaborations of humans, is not going to be something that is easily matched by AI.
So if that future comes, it's not too near: I have faith in the human aspect of science and research, in doing a lot of collaborative science.
I think there are things there that cannot be replaced by AI.
I think most people listening will have had some experience now of using something like ChatGPT or the other generative AI text-based tools, maybe even images.
But how do you actually use AI in the work that you're doing?
What do you actually do with it?
So we're not using generative AI too much.
We're figuring out how to do it, and there are some uses.
There's this thing called ATLAS GPT.
It's an experiment within our experiment that is trawling the knowledge base of the experiment.
You ask a question, you get an answer.
Now, an interesting thing about the training of that, and I think it's part of the reason why there's still a lot of human input needed in this machinery, is that if the documentation it's crawling, that it's been trained on, is obsolete, it's going to give you obsolete answers.
So how do you make sure that the training data is proper?
A human has to go there and clean up the pages of the documentation.
So you're still feeding human information into this machine; it's just digesting it and giving it to you in a different form.
This is just a simple example of why I think we still need a human in the loop.
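Why stale documentation yields stale answers can be shown with a deliberately tiny lookup toy. Everything here is invented (the page names, the 'config_v1' file, the word-overlap scoring; this is not how ATLAS GPT works): a retrieval step can only surface what is in the knowledge base, so pruning obsolete pages remains a human job.

```python
# Toy documentation lookup: score pages by word overlap with the question and
# return the best-scoring page. All page contents here are invented examples.
def tokens(text):
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def answer(question, knowledge_base):
    q = tokens(question)
    best_page = max(knowledge_base, key=lambda k: len(q & tokens(knowledge_base[k])))
    return knowledge_base[best_page]

stale_kb = {
    "howto_2015": "To change the trigger menu, edit config_v1 and rerun.",  # obsolete
    "howto_2024": "To change the trigger menu, use the web interface.",
}
# After a human cleans up the documentation, only the current page remains:
curated_kb = {k: v for k, v in stale_kb.items() if k != "howto_2015"}

question = "how do I change the trigger menu"
print(answer(question, stale_kb))    # can happily surface the obsolete 2015 page
print(answer(question, curated_kb))  # after curation, only current advice survives
```

Both pages match the question equally well, so nothing in the lookup itself prefers the current one; only removing the stale page fixes the answer.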
But this is not the main use that we have in our field for AI.
We mostly use algorithms that analyze a large number of features and give us insight into the data that we would not have had otherwise.
We have different stages in our experiments.
One stage is to reconstruct, from the signals that the detector gives you, that there is a particle with this energy.
We call that process reconstruction.
And here you can have a lot of neural networks of different sorts doing this kind of reconstruction.
Imagine you have a lot of points in space in your detector, because it's like a big digital camera, and you want to reconstruct the tracks of particles.
This is a gigantic combinatorics problem.
Graph neural networks can do it well and give you an answer for which tracks you got, with fewer fakes than other algorithms, for example.
Or you can use it for clustering problems.
That's another of the classic uses of AI.
If you have a detector that gives you a spray of particles, and the spray of particles is represented by a bunch of energy deposits, then you can use AI to cluster these energy deposits in a more efficient way, for example.
So that's one thing it can do.
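To make the clustering task concrete, here is a toy version with invented coordinates and energies, using a simple greedy seed-and-sweep rule rather than the ML-based clustering she has in mind: seed on the highest-energy deposit and sweep in everything within a fixed radius.

```python
# Toy clustering of calorimeter-style energy deposits: greedily seed a cluster
# on the highest-energy deposit and absorb everything within a fixed radius.
# (Real experiments use far more sophisticated, often ML-based, clustering.)
def cluster(deposits, radius=1.0):
    """deposits: list of (x, y, energy) tuples; returns the clustered energies."""
    remaining = sorted(deposits, key=lambda d: -d[2])   # highest energy first
    clusters = []
    while remaining:
        sx, sy, se = remaining.pop(0)                   # seed the next cluster
        members = [se]
        rest = []
        for x, y, e in remaining:
            if (x - sx) ** 2 + (y - sy) ** 2 <= radius ** 2:
                members.append(e)                       # inside the cone: absorb
            else:
                rest.append((x, y, e))
        remaining = rest
        clusters.append(sum(members))
    return clusters

# Two sprays of deposits, well separated in (x, y):
hits = [(0.0, 0.0, 50), (0.2, 0.1, 10), (5.0, 5.0, 30), (5.3, 4.9, 5)]
print(cluster(hits))  # → [60, 35]
```

The ML versions of this replace the fixed radius and greedy ordering with learned criteria, which is where the efficiency gains she mentions come from.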
Tagging, which is what we call identifying particle A versus particle B, is another really big problem that we use AI for.
So, is this particle, this jet, derived from a quark of type b?
That's the beauty quark: the resulting particle will have the tendency to fly a little away from the interaction vertex.
So you build a neural network that can distinguish a prompt particle from a particle that's long-lived.
This is something that we use AI for quite a lot: a basic task, maybe, but it's not quite data analysis.
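A cartoon of that tagging idea, with made-up numbers and nothing like a real tagger: tracks from b-hadron decays tend to have larger impact parameters (their distance of closest approach to the interaction vertex), so simply counting clearly displaced tracks already separates the two cases.

```python
# Toy b-tagging discriminant: tracks from b-hadron decays tend to have larger
# transverse impact parameters (d0, here in mm) than prompt tracks, because the
# b-hadron flies a short distance from the interaction vertex before decaying.
# All values below are illustrative, not real detector numbers.
def is_b_like(track_d0s, threshold=0.1, min_displaced=2):
    """Tag a jet as b-like if at least `min_displaced` tracks are displaced."""
    return sum(d0 > threshold for d0 in track_d0s) >= min_displaced

light_jet = [0.01, 0.02, 0.00, 0.03, 0.01]   # all tracks consistent with the vertex
b_jet = [0.45, 0.02, 0.30, 0.75, 0.01]       # several clearly displaced tracks
print(is_b_like(light_jet), is_b_like(b_jet))  # → False True
```

A neural-network tagger replaces the single threshold with many correlated features, but the underlying physics handle, displacement from the vertex, is the same.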
Data analysis is: I have 10 b's and five electrons; what is the process that gave this output to me?
I mean, that's very, very simple, right?
We have many more particles in our events.
And then you can use all of this for data analysis, because the problem there is to distinguish signal from background, and you have a variety of supervised and unsupervised methods to make this distinction.
One thing that I've been dabbling with is unsupervised learning: what people call anomaly detection or outlier detection methods.
It's the kind of algorithm that pings your credit card when you're abroad.
So it's the same thing that we use.
And the advantage is that we don't know what to expect from the new physics, but we know the Standard Model very well, and we have plenty of Standard Model data.
Most of the collisions that we analyze are Standard Model processes.
So can we get an algorithm to learn what this looks like, and then tell us if there are any differences or deviations in the data that we have?
Now, conceptually, this is very easy, and it's beautiful; it works super well.
In practice, there are a lot of pitfalls, and things one needs to understand before calling an anomaly seen by an algorithm new physics.
For example, detector noise that you weren't expecting, or a clustering of a specific process that was too rare to be seen in your previous data analysis: now you've found it out, and it's not new physics.
So trying to understand what the known unknowns are and what the unknown unknowns are, that's the key problem there, not necessarily what algorithm you're going to be using for your anomaly detection.
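The credit-card analogy can be made concrete with a deliberately tiny sketch, one feature and made-up numbers, whereas real searches use high-dimensional learned models: learn what the bulk of "normal" events looks like, then flag anything sitting far outside it.

```python
import statistics

# Sketch of the anomaly-detection idea: learn what "normal" (Standard-Model-like)
# events look like, then flag events far from that bulk. The same logic pings
# your credit card abroad. Toy one-dimensional version using a z-score.
normal_events = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1]  # some event feature
mu = statistics.mean(normal_events)
sigma = statistics.stdev(normal_events)

def is_anomalous(x, n_sigma=5):
    """Flag x if it lies more than n_sigma standard deviations from the bulk."""
    return abs(x - mu) / sigma > n_sigma

print(is_anomalous(10.05), is_anomalous(25.0))  # → False True
```

Note that the flag only says "unlike the bulk": as she stresses, the outlier could be detector noise or a rare but known process, so being anomalous is necessary, not sufficient, for new physics.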
So I'm very interested in that side of things, because it also has an impact on more basic data acquisition things, like data compression.
If you want to compress data using machine learning, are you going to just wipe out any anomalies, because the compression will compress them badly?
If it's only the anomalies that compress badly, you'll just pull them back into the bulk, and then you've lost your signal.
And this is all interesting interplay that you have to think about when you're using AI in physics.
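The compression pitfall can be shown with a toy lossy compressor, invented numbers and a simple quantizer standing in for the learned autoencoders real work would use: a compressor tuned on the bulk clips anything outside the range it learned, so a rare anomalous value silently rejoins the bulk.

```python
# Sketch of the compression pitfall: a lossy compressor tuned on the bulk of
# the data clips values outside the range it learned, so a rare anomalous
# event gets pulled back into the bulk and the signal is silently lost.
bulk = [9.7, 9.9, 10.0, 10.1, 10.3]          # the "normal" values the compressor saw
lo, hi = min(bulk), max(bulk)

def compress(x, levels=16):
    """Quantize x onto a fixed grid spanning only the training range."""
    clipped = min(max(x, lo), hi)            # anything outside the range is clipped
    step = (hi - lo) / (levels - 1)
    return round((clipped - lo) / step)      # small integer code

def decompress(code, levels=16):
    step = (hi - lo) / (levels - 1)
    return lo + code * step

print(decompress(compress(10.0)))   # a bulk value survives, coming back near 10.0
print(decompress(compress(25.0)))   # the 25.0 anomaly comes back inside the bulk
```

An autoencoder trained only on Standard Model events behaves analogously: the anomaly reconstructs badly, and if you keep only the compressed representation, the original outlier is gone.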
Yeah.
Yeah.
But so you do need your students to be able to understand things Yeah.
Without the use of AI.
They need to understand how to use it Yeah.
But also how to do it if it wasn't there.
I think there's also another aspect: the community has realized that you need to have reproducible science.
This is generally a pillar of what we're doing.
You can also go into open science.
So you want someone else, not you, maybe even the general public, to reproduce, in a reasonable manner, what you're doing.
AI doesn't make this easy, mostly because of the complexity of the networks, but also because of the computational resources needed for running large algorithms.
So how do you make AI-based analysis accessible to others, understandable by others, reproducible by others?
And sometimes the easy answer is, well, you don't.
You give them a version of the analysis that sees the same thing that is not using AI.
So you still need someone to do that part.
And, usually, when you want to convince someone that you've made a discovery, you will be asked, I think, across the field: can you please now do something that at least points in the right direction, that shows you have something solid with your AI algorithm.
I think what we're also missing is some more crosstalk with people who are working on explainable AI, because we don't want to treat our data analysis as a black box.
We can't.
That's not the scientific method.
And these kinds of theoretical advances, when we try to work with the people who work on this topic, are still very, very far from our understanding.
People are doing marvelous work, and it all makes sense if you think about it from afar and when they explain it to you.
But how do we get that into the physics field?
How do we make sure that we're using the state of the art?
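One concrete tool from the explainable-AI world is permutation importance: probe a black-box model by scrambling one input at a time and measuring how much the accuracy drops. A minimal sketch on an invented toy "analysis" (the stand-in model, the labels, and the feature names are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented toy setup: three event features, but only the first two drive the label
X = rng.normal(size=(4000, 3))
y = (2.0 * X[:, 0] - X[:, 1] > 0).astype(int)

def model(X):
    # Stand-in for an opaque trained network
    return (2.0 * X[:, 0] - X[:, 1] > 0).astype(int)

base_acc = (model(X) == y).mean()

# Permutation importance: shuffle one feature, measure the accuracy drop
drops = {}
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroys this feature's information
    drops[f"feature_{j}"] = base_acc - (model(Xp) == y).mean()

for name, drop in drops.items():
    print(f"{name}: accuracy drop {drop:.3f}")
```

Shuffling the irrelevant third feature costs nothing, while shuffling the dominant one costs a lot: a crude but honest first step away from treating the analysis as a pure black box.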
Mhmm.
Caterina was also involved in the IOP's white paper.
It came out of a workshop that wanted to take the temperature of how AI is impacting physics and where we're going with it.
Because I think the IOP is a very strong stakeholder, and it can also influence where we're going because of its impact on teaching, undergraduate teaching.
So there was a workshop in October, I believe, that gathered information from the participants in a variety of forms, and this white paper summarizes the findings of that workshop.
And here, AI is really broad, so it's machine learning, like the traditional machine learning, but there are also all other kinds of things.
The participants were also from different fields.
But in general, it came out that, of course, AI is useful.
Physics can play, and is already playing, a special role in AI, and that can also be highlighted further.
And at the moment, we can say that physics needs AI because of data reconstruction, data processing, data analysis.
Where we have a lot of datasets, a lot of features, AI is really making a difference.
Sorry, Scott.
Just to go to the environmental thing, is it the case that it will speed things up to such a degree that it will mean using fewer resources?
Hopefully.
Okay.
I think there was a recent UN report on how AI can be used to improve the environment.
Right?
So it's it's kind of coming full circle.
But I think we don't know enough, or at least it's not transparent enough, how we use our resources and how much our resources cost.
So there are a number of groups that are trying to do that.
We also have EuCAIF, the European Coalition for AI in Fundamental Physics, and we have a group on environmental sustainability.
It hasn't taken off yet, but hopefully it will.
It's just about informing people.
That's the main point.
Computing is not free in general.
We think that it's free because we're not paying the bill.
We're not paying the electricity bill, but the electricity bill translates into an environmental bill, servers, and there's a life-cycle assessment of all of that stuff.
So it's trying to make the wisest choices to obtain the results that we want.
It doesn't mean turning everything off because it consumes less, or not using the latest and greatest AI algorithm because it consumes too much.
It's just trying to map the problem to the algorithm in a way that also includes environmental sustainability as one of the axes, not just speed or that 1% more efficiency.
Felice Frankel is a science photographer and research scientist at the Massachusetts Institute of Technology or MIT.
In a recent daily briefing on nature.com, Felice wrote about what science photos do that AI generated images can't.
Most of the work I do is in science photography.
That is, I make pictures of the science, or I create images that are metaphoric insofar as they describe what the research is about.
And, of course, everybody is thinking about AI, and I'm now looking at what AI can do photographically.
Not in the sciences, but I'm seeing, in the world around us, this extraordinary tool that can create an image from nothing, basically.
And so I started thinking, oh, boy.
You know?
Maybe I'll be out of a job at one point, because AI someday will be able to depict research completely artificially.
That is to say, not creating an image from the thing, but developing an image from pixels out of the large language model.
So it started about a year ago, when I was thinking, am I in trouble?
And, you know, the fact is I have to be realistic.
Next month, I'm turning 80.
I can't believe that I'm that old.
Right?
And so I'm thinking about my future, but it's an interesting thing to think about: whether, in fact, I would be taken over by AI, even if I were young.
And so the article investigates that idea.
I did some experiments. I have a photograph that I made of science; in fact, Moungi Bawendi's science got a Nobel Prize last year for this work.
It's an image where you're looking at nanocrystals in vials, different colors, fluorescing at different wavelengths.
Let's leave it at that.
And so I asked AI, with my prompt, to create x, y, and z.
I used various models, and they were terrible.
They were cartoon like.
They were silly, but I could see it happening.
At some point, AI will be able to create an image that looks like and represents the real science.
The key is how are we going to create a system to judge whether an image for submission is AI or not.
And so at the very end of the article, I list some ideas: when you submit an image, you ask the researcher to say, is this an AI image?
If so, what model did you use?
What prompt did you use?
You know, a number of questions.
But in the end, the key is that we should not permit any AI image to be presented as a record of the science.
Yes.
AI will be very good at creating images that are explanatory.
Conceptually, even structurally, it sort of looks like this, but never accept it as the record of the science.
And, unfortunately, we have to trust the submission.
If AI keeps improving, can you never see an occasion where it gets to the point where we can accept it for that?
Yeah.
Because it's the intention.
That's the key to the submission.
If your intention is to explain the science, fine.
If your intention is to say this is a record of the science, that's verboten for me and for anybody, really.
Now I don't know the answer of how to teach honesty.
I mean, we've seen manipulated images.
There's a whole list of images we're looking at that have been manipulated to, in fact, say what the researcher wanted to say, because the science wasn't there.
So the manipulation of images is something that's been around for years.
Even, for example, as I say in the article, these glorious images that we see of the universe from the James Webb and the Hubble, those are all highly manipulated images, those colors.
People think the universe looks like that.
It doesn't.
But we'd say that these have been falsely colored.
At least, we should be saying that.
So changing an image is something that we've been doing for a while, but starting from scratch to create an image that literally never existed to represent the thing is, in fact, the problem.
Because you have this photography interest.
Right?
You have the science interest and the photography interest.
Does looking at an AI photograph, not an image, but a photograph that's been generated by AI,
you know, something made to look like a photograph,
does that offend your sort of artistic sensibilities as well?
Let me first say I'm not an artist, and that's kind of important.
Artistically, I'm blown away by what people are able to do with AI.
It's just remarkable.
But as a scientist, I'm worried, because it's about truth; that's what I'm trying to get at when I make an image.
Remember, when I make a photograph, an image, it is a representation of the work.
It is not the work.
It's a representation.
So there's always in my picture some sort of manipulation.
The very nature of making an image is a manipulation of reality.
You know, I'm not showing everything.
I'm framing it, and so that's my initial manipulation.
But I'm not doing anything to the data.
That's the bottom line.
If I start manipulating the data, then I'm making a terrible mistake, and I try very hard not to.
And if I do anything, like remove a dust particle, for example, I always indicate that I've done so.
Always.
In sort of captions and that sort of thing.
Yes.
Exactly, yeah.
Absolutely. For example, I have a book out, a series of books called The Visual Elements: communicating science and engineering, and the first element is photography.
It's a handbook, and I'm told it's very good.
And so I say at the very beginning, all of these images have been digitally enhanced, because I wanted you to pay attention to the process, not necessarily the distraction.
And when I say enhanced, I'm talking about removing dust particles, but you have to know that I've done that.
So I always indicate if I if I do anything like that.
Okay.
That's good.
Because you've touched on this, that there's a concern about honesty, right, and how we ensure honesty.
At times, when you look around the world, it feels like it's a runaway train of dishonesty.
And how we hold on to it, as you say, it's difficult.
But could you give us some suggestions of how we Yeah.
As as the people who are who do care about it, what do we do?
Yeah.
I think now, this might be naive on my part.
But I think the more people understand our process, like, for example, how I make a particular image, the more we will engage people to understand what can or cannot be. If they understand, for example, how NASA or the James Webb people color the images, if they actually see that, then it becomes a new part of their thinking.
So for example, I'll give you a personal experience.
I used to be a a choral singer.
My voice is gone at this point.
That's very disturbing.
But I used to be a very serious auditioned choral singer.
Because I know the music, because I've sung a particular piece, when I hear another chorus sing it, I'm more engaged with it, and, actually, I can sort of tell when something is not quite right.
It's because I know more about it.
Same with cooking, for example.
I'm a very good cook, and I can read a recipe and know that, uh-uh, I'm not gonna do this part, because I'm experienced with it.
I maintain that engaging people in our process, especially the next generation of researchers, even if they're not making pictures, might push all of us into understanding what is not right.
What do you think?
Do you think that's correct?
I mean, no.
I don't.
I mean, I'm a lecturer in science communication.
Right?
Oh.
Yeah.
It's my thing.
So you're preaching to the converted, but I I'm you know, there's people listening who might not agree.
I'm always teaching about communicating the scientific process and helping people to understand that process is a really big part of science communication.
It's not about scientific facts.
You know, science at school quite often is about learning facts, and there's only a bit about the process. But, you know, take the scientific process behind vaccines: if people had known the process that had gone into producing those "new" vaccines, in inverted commas, that came along during the pandemic, there wouldn't have been as much fear, I think.
As much.
I'm not saying it's an all-or-nothing thing, but they're engaged in thinking.
This is the ongoing issue that I've talked to colleagues about.
People are not interested in thinking.
It's hard.
You know?
And and they want quick answers.
I don't know how to engage people in thinking.
There should be a way to make it rewarding to think.
I mean, as scientists, we are, and, you know, we're rewarded in the thinking process.
But for the most part, most people just want to be told to choose either A or B, and not why.
My hope is, if we start, for example, on a simple level, if we start creating visuals that are engaging and that give people permission to ask questions.
People are not frightened of images.
It's a means of engaging them to ask questions.
And once you get them, frankly, it's a seduction to ask a question.
My thinking is that there's a next step, so that, for example, I'm coming out with a young adults' book in the fall.
It's for teenagers.
It's called Phenomenal Moments.
And the idea is that everything around us is science, period.
That's it.
Everything we look at, everything we touch is about science.
So the whole book is about everyday phenomena, and the pictures, I'd like to think, are beautiful, but you don't know what they are.
It's sort of a guessing game.
And my hope is that the kids will look at the picture.
They'll see the caption about what it is.
And then, when they start walking through the park one day, they're gonna see something like what they just saw in the book.
The picture I'm making engages them to remember, so that perhaps, when they see it again, they're gonna know what it is, because the picture is a means of getting them interested.
It's very simple.
I mean, I'm not doing anything brilliant here, but I think that with pictures, we can get more people to start looking and thinking about it.
Absolutely.
But in that sort of use, would you see AI being a useful tool in generating them?
Well yeah.
Oh, boy.
Yes.
Yes.
I think it can be, as long as it is indicated that the image was made with AI.
That's the primary thing.
And whether or not we could get the AI image maker to do that is a whole other thing.
I don't I don't know.
I don't know how to do that.
Yeah.
I think it's a brilliant example, the NASA images.
You know, if we can understand how that works, then there's another level of interest in that for me.
I'm sort of okay.
So how have they colored those images?
And there's a level of intrigue about AI, which I think disappears in a few years, when it's just gonna be a thing.
But, there's a there's a sort of, oh, that one's created by AI.
That's interesting.
Or this one's created using that piece of software.
That's interesting.
I don't I didn't know about that.
And it adds a level to it.
But I can totally see that.
What I struggle with, and this is slightly aside from this: you know, I can't imagine logging into Netflix and going, which one of these was created by AI?
That's the one I'm gonna sit down and watch tonight.
You know, like you were talking about music, the individual differences between choirs.
And then there's, you know, the the nuances of actors.
There's the nuances of photographers.
There's the nuances of filmmakers.
There's the nuances of artists, individual artists.
And I worry that if we allow AI in image creation at any level, then we end up...
Yeah.
Yeah.
You're right.
I mean, the question is, will there be a time when we will see that AI really is missing that creativity that only a human can bring?
Let me quickly go back to the image that I was talking about, the AI image that DALL-E created of these vials.
As I said, it was very cartoon like.
There were all kinds of mistakes.
But, interestingly, the model created little dots in the vials.
And the model put a couple of the dots on the surface of the table.
That was an aesthetic decision that AI decided to do.
It was stupid.
It was silly.
Maybe the machine will never be able to be as creative as we are.
I I actually don't know.
I mean, I don't know enough about that world.
And maybe, you know, what I should do is talk to people who do know.
Well, I did.
I did before I wrote the article, and they all agree we have to have some kind of guardrails.
That's that's a that's a done deal.
But the question that you're asking, which is a very important question, is will AI be as creative as a human, and can we discern that difference?
I think, at least at this point, there's always something that the human can do that AI now can't do.
But will it happen in the future?
Probably, yes.
So there goes the answer to that.
We spoke to Tony Hay earlier in the podcast, and I wondered who he thought should read the IOP's white paper on AI and physics.
Oh, I think it's got a lot of things that I absolutely agree with.
It is very physics oriented.
It says, you know, physics is the only field with large, well-curated datasets and theories.
Well, there are fields like chemistry, possibly biology, materials science, and a few others that might object to that statement.
But, no.
It's a good thing to galvanize the community that AI is going to be in their future, and I do see that you could make an AI assistant which was really very effective in advising you in your day-to-day job and in your work as a physicist.
So I think that, with the large language models, you will find that there are AI assistants for physics, and that will be part of many people's lives.
Alright?
And I think people understanding the strengths and weaknesses of it, and the fact that you still need the skills yourself.
So the things it recommends are very sensible, and it's good that the physics community is aware of the potential.
But I did, for example, very much approve of the chemistry Nobel Prize this year, which was awarded for protein folding to two people from DeepMind, and also a colleague of mine from the University of Washington, where I used to have a joint position, David Baker, who's been doing it for years.
And I think that was a good thing, and it will actually have huge ramifications in all sorts of omics-type stuff.
So I see that there are really exciting applications in that area: drugs and cures for diseases and things like that.
Physics is less clear.
I mean, material science is probably the major hope that you actually will find something really exciting, and that would be a good thing to do.
If you just informed the large language model with physics information.
Right?
All it knew was the physics information.
Would that make a really good physicist, or do you need other things as well?
That's the sort of question that's interesting me.
And, you see, we use deep learning to help analyze some of the data at the lab, which is usually in images.
For images, it's ideal, except that you have to train it on ground truth.
And once you've trained it on ground truth, where you know the answer, it can then go and see data it hasn't seen before and make decisions.
If you take something like quantum computing, which is something I care about.
And at this point, I'll advertise my lectures with Feynman, the Feynman Lectures on Computation.
The question is, we do simulations at all different scales, but we don't do a fully quantum mechanical simulation with everything in, because the Hilbert space just grows exponentially, and computers don't have enough memory to handle more than a small number of electrons and things like that, a handful.
But quantum computing can do it very easily, it grows linearly, and it's much easier to do a big system there.
The question is, what would be different by doing it that way?
Because we've done a whole range of modeling at various levels, which are, yes, approximations, but are we really going to find some new things by doing that?
And that's an interesting question.
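A back-of-the-envelope illustration of the exponential growth he mentions: storing the full quantum state of n two-level systems classically takes 2**n complex amplitudes, whereas a quantum computer's resources grow only linearly in n. The byte count assumes double-precision complex amplitudes (16 bytes each); the specific sizes are just for illustration.

```python
def statevector_bytes(n):
    """Memory for the full state of n two-level systems: 2**n complex128 amplitudes."""
    return (2 ** n) * 16  # 16 bytes per complex amplitude

for n in (10, 30, 50):
    size = statevector_bytes(n)
    print(f"{n:2d} qubits: {size / 2**30:,.6g} GiB of state vector")
```

Around 30 two-level systems already fills a large workstation's memory, and 50 is far beyond any classical machine, which is exactly the wall that makes a handful of electrons the practical limit for exact classical simulation.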
And people talk about this wonderful stuff on quantum machine learning.
Now, I don't know of any quantum computer that can really do serious calculations yet.
I think it's getting close, closer than I thought it would.
And I don't know whether quantum machine learning or quantum AI, if you like, is a real real thing.
But that's something that, you know, smart people should look at, and young kids could find it a really interesting thing to do.
And then, you could also start with the physics model.
And then the question is, does it make any difference whether you've trained it on, you know, Wikipedia and Reddit or the scientific literature?
Again, I don't know.
Yeah.
Yeah.
Okay.
Well, maybe we'll find out in a future episode of the podcast at some point.
But No.
That would be nice.
Yes.
Yeah.
It's an exciting time.
I'd like to thank Tony Hay, Felice Frankel, and Caterina Doglioni for talking to me for this episode of the Physics World Stories podcast.
You can find links to their work and, of course, the IOP's white paper on AI and physics on physicsworld.com.
We'll be back soon with something else from this wonderful world of physics, and thank you very much for listening.