Episode Transcript
Hello, and welcome to the Physics World weekly podcast.
I'm Hamish Johnston.
In this episode, I'm joined by my colleague Margaret Harris, who's recorded a series of interviews for the podcast at this year's Heidelberg Laureate Forum.
We're going to hear two of those conversations shortly, which both focus on the challenges of designing and developing computer chips.
Hi, Margaret.
Welcome to the podcast.
Hi, Hamish.
Margaret, before we hear those interviews, can you tell us a little bit about the Heidelberg Laureate Forum?
Yeah.
So the Heidelberg Laureate Forum is a scientific meeting that's held every year in Heidelberg, Germany, hence the name.
And its aim is to foster connections between notable figures in computer science and mathematics, and then with early career researchers from around the world.
It's modeled on the Lindau Nobel Laureate Meetings, which do something similar for the Nobel Prize-granting fields of medicine, physics, and chemistry, but there's no Nobel Prize for computer science or for mathematics.
So instead, the laureates who come to Heidelberg have won major awards such as the Abel Prize and the Fields Medal in mathematics, or the ACM A.M. Turing Award in computer science.
I have to say, Margaret, I'm a bit ignorant of both mathematics and computer science as academic fields.
Is there a lot of crossover between those two?
I mean, I would have assumed so, but I could be wrong.
Yeah.
I mean, you know, obviously, these are two fields that are very closely related to physics, and, so I would say that there is simultaneously less and more overlap between the two and with physics than you might think.
So several of the computer scientists I spoke to during the forum, which I attended back in September, said that being there and talking to mathematicians had actually convinced them that they need to learn more math before they can make a useful contribution to future discussions.
And I would sort of back that up.
You know, I spoke to a couple of pure mathematicians there who told me what they're working on, and I understood maybe one word in three of those things.
So, you know, you can't have much of a discussion if you don't have a common vocabulary.
So I think there's some work to do there.
And I'm sure, you know, there are mathematicians who felt the same way about computer scientists.
You know, it's not as if it's a one-way street there.
But that said, a lot of talks at this year's forum focused on machine learning and artificial intelligence or AI, which is having major impacts on so many areas of science, including physics as well as mathematics and computer science.
And there's definitely some common ground there.
And then, of course, there's a whole branch of mathematics specifically dedicated to computer science.
You know, if you think back to the middle of the twentieth century when computer science was getting its start, you had work by scientists like John von Neumann and, of course, Alan Turing himself, you know, really laying the foundations of the field.
So there's definitely a lot of crossover, even if it's not complete overlap.
So you mentioned the Lindau meetings.
I went to one many years ago, and I thought it was fantastic.
And it was fantastic because of the people who were there.
You had Nobel laureates on one hand, and then you had lots of, well, students and early-career people sort of mixing and chatting.
And, you know, I thought it was a really dynamic event.
So who was in Heidelberg?
Is it the same sort of idea?
Yeah.
Very much so.
So the laureates at this year's forum included Vint Cerf, who's known as the father of the Internet for his role in developing the TCP/IP protocols that underpin the Internet and email.
I wrote a blog post for Physics World about Cerf's talk at the forum on the on the longevity of digital information, and you can read it on the Physics World website.
Other people I met there were the cryptographers Whitfield Diffie and Martin Hellman, who pioneered the public-key cryptography methods that are used to keep your data secure online.
There's also the RSA algorithm, which is a big part of that story, but Diffie and Hellman were the ones who were at this particular conference.
And then as you said, there's the early career researchers.
And at the Heidelberg Laureate Forum, they were a hugely diverse bunch.
You know, they came from all over the world, ranging from undergraduate students to postdocs.
They have to apply to attend, and it must be a pretty rigorous application process because everyone I met was just great fun to talk to.
They're really impressive in their enthusiasm for their research, and actually very good at explaining it, even to a physicist like me who doesn't come from a really strong computational or mathematical background.
And Margaret, we've got two of your conversations queued up and ready to go later in the podcast.
Who did you speak to?
So my goal at the Heidelberg Laureate Forum was to find out more about the aspects of computer science and mathematics that relate most closely to physics or, to put it in a slightly different way, the ones that are of most interest to physicists.
And with that in mind, I spoke to two PhD students whose research focuses, in different ways, on the nitty-gritty factors that go into designing and developing computer chips.
And these computer chips obviously play hugely important roles in many aspects of physicists' lives, whether that's making your own chips as part of a scientific prototyping process or even just, you know, owning consumer electronics.
It goes the whole gamut there.
And so who are we going to hear from first?
Okay.
So first up, we'll be hearing from Maryam Elgamal, who's doing a PhD at Harvard University in the US.
She's working on the design of environmentally sustainable computing systems, and I began by asking her what got her interested in this field.
First, thank you for having me.
I think what really got me into the field is that I came into my PhD not really knowing what I wanted to work on.
I just knew that I really liked computing, and I was just exploring different kind of projects.
And I was lucky, I guess, because my advisers were just, like, giving me all of these different kind of projects that are available.
And one of them really stuck out to me, which was looking at environmentally sustainable computing systems and their environmental impact.
And I have always had an interest in environmental sustainability, even, like, prior to my undergraduate degree.
And it was just that this one project stuck out to me, and I started working on it with a senior student at the time, Udit Gupta, who is now a professor at Cornell.
And so I started working with him, and I really enjoyed that project, and then just decided to continue in that area for the rest of my PhD.
And now I'm in my fifth year still working on designing computing systems for environmental sustainability.
I think what's really also exciting about this area is how it's an emerging research area.
So it's pretty new compared to other areas in like hardware design or computer architecture.
And so that also gets me really excited: there's so much potential, so many things that we can do and so many things that we need to explore.
What does sustainability mean for computing systems?
What are the main things you need to consider when you're evaluating how sustainable a computer system is?
So when it comes to sustainability for computing, there are multiple aspects for it.
Traditionally, when we design computing systems, the primary metrics that we look at are power, performance, and area.
And, like, in the past two decades, there has been so much work and so much progress when it comes to the energy efficiency of computing systems.
So how can we design computing systems that deliver the most work or the highest performance with the minimal energy possible?
So there has been a lot of work in the area.
But when it comes to the actual, like, sustainability of a computing system, it's not only about energy efficiency.
It's also about, like, looking at the total carbon of the computing system.
And what that means is that we're looking at the carbon footprint due to the use.
So you can think that energy efficiency is a subset of that, but also we want to look at the carbon footprint due to the manufacturing of the system itself.
And what we're finding is that the carbon footprint of computing systems from a manufacturing perspective can actually be a huge portion of the carbon footprint.
It depends on the domain.
So if you're looking at a data center, it could look like 50% of the carbon footprint is due to the operational carbon, the use.
And the other roughly 50% is going to be due to the actual manufacturing and the infrastructure of the data center.
When you're looking at something like mobile devices or, more broadly speaking, consumer devices, it would look more like 75% being manufacturing.
And so that's something that hardware designers and computer engineers haven't typically looked at.
And we're really trying to see how we can consider this total carbon and trade that off with all of these other conventional metrics of power, performance, area, and costs that have been traditionally done.
So, this is how we're looking at the sustainability.
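Maryam's operational-versus-embodied split can be sketched in a few lines of code. This is purely illustrative: the function and all the numbers below are invented placeholders, loosely echoing her rough 50/50 data-centre and 75% consumer-device figures, not measured data.

```python
# Sketch: total carbon footprint = operational (use-phase) carbon plus
# embodied (manufacturing) carbon. All figures are illustrative placeholders.

def total_carbon(operational_kg: float, embodied_kg: float) -> dict:
    """Return the total footprint and each component's share of it."""
    total = operational_kg + embodied_kg
    return {
        "total_kg": total,
        "operational_share": operational_kg / total,
        "embodied_share": embodied_kg / total,
    }

# A data centre where use and manufacturing are roughly 50/50:
data_centre = total_carbon(operational_kg=500.0, embodied_kg=500.0)

# A consumer device where ~75% of the footprint comes from manufacturing:
phone = total_carbon(operational_kg=25.0, embodied_kg=75.0)
```

The point of the sketch is simply that energy efficiency only addresses the first term; the second term is the one hardware designers have typically not accounted for.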
And there are other aspects of sustainability beyond carbon footprint that we're starting to explore, and that some people have also explored elsewhere, such as water consumption, whether that's in fabrication or in cooling a data center.
Another thing that I'm also looking at is what would colloquially be called forever chemicals, also known as PFAS, which stands for per- and polyfluoroalkyl substances.
So the problem with these chemicals is that they're synthetic and that makes them very bioaccumulative.
Sometimes toxic, sometimes not.
And it's a very broad class of chemicals.
And they're very widely used in semiconductor manufacturing.
They're used in almost every integrated circuit manufacturing and fabrication process.
Not in every single step, but they are needed to make the actual integrated circuit.
And so we're looking at how we can quantify that, which turns out to be even harder than for carbon footprint.
And then also how we can start, like, looking at techniques to start optimizing that and reducing it.
So that's like sustainability from a high level perspective when it comes to computing.
You talk about making computing systems more sustainable.
What would that actually look like in practice?
What are some things that manufacturers or people who run data centers, even consumers who have phones, what are some of the things we could do to make that more sustainable?
So I like to classify sustainability in two separate ways.
One is the quantification aspect and one is the optimization aspect.
So first, for us to be able to know how much we need to reduce or optimize something, we need to measure it and quantify it.
So a lot of work has been happening in the past few years on, like, how can we quantify the carbon footprint of computing systems?
So, how much is going to manufacturing?
How much is going to operational use?
And then the next part is, okay, how can we also optimize these systems?
So maybe, should we be using GPUs or should we be using CPUs in a data center? It's a more nuanced kind of question, depending obviously on the applications that you're running in your data center.
For PFAS, for example, it could be: can we reduce the number of metal layers that you're using to manufacture an integrated circuit?
Just for context: when you make a chip, you have your transistors, and then you start manufacturing metal layers, where you do the routing that connects these transistors together so that your chip can actually work.
And typically, these PFAS are involved in almost every single step of what is called photolithography, the patterning of the routing on the chip.
And so if we can reduce the number of metal layers, maybe we can also reduce the forever chemicals, the PFAS, that are used in manufacturing.
Fabrication facilities have started looking into how they can recycle some of the water that they're using.
Some have looked into recycling some of the metals that they use.
So for example, Taiwan Semiconductor Manufacturing Company, TSMC, has worked on recycling the copper used in manufacturing.
Not necessarily back into electronic-grade copper, which is a challenge because the materials you typically use for semiconductor manufacturing need to be very high purity.
But instead of recycling it for use in the same fabrication facility, you can recycle it and then give it to some other industry that may not need that kind of high-purity copper.
It can be utilized elsewhere and still benefit the whole sustainability aspect.
Making copper wires or copper roofs or something like that.
Exactly.
Yeah.
And so these kind of things.
I guess any change is going to involve some level of cost in terms of going from old technologies to new ones.
And traditionally in engineering, there's this three-way trade-off between being cheap, being fast, and being high quality, and you only get to pick two.
Are those kinds of trade offs also going on in the sustainability sector?
Like, is sustainability maybe another of those criteria, where you have to pick just a limited number of them?
It really depends on the domain and really depends on the application.
There are cases where the most energy-efficient and highest-performing system is also the most carbon-efficient system.
But in other cases, you may find that, oh, it's actually the opposite.
So it's like, pick which one you want.
And so this whole trade-off space is something that is very exciting to me in my research area: how can we trade these off?
And that really depends on what your target is as a designer.
So for example, if you're designing something that just needs to meet a specific performance criterion, once you meet that performance, you probably just want to lower the carbon footprint.
But if you're trying to design a system that you want to make sure that you reach the highest speed and performance possible, then probably you want to figure out a trade off that would still give you a good performance, but is not too bad when it comes to sustainability.
So that trade-off definitely adds an extra dimension that we need to account for now in sustainable design, compared to just energy-efficient design or high-performing design as it has traditionally been done.
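The two design targets Maryam contrasts, meeting a performance floor and then minimising carbon versus maximising performance outright, can be sketched as a simple selection over candidate designs. The candidates and all their numbers are invented for illustration:

```python
# Illustrative candidates: (name, performance score, carbon footprint).
# All values are made up for the example.
candidates = [
    ("A", 1.0, 10.0),
    ("B", 1.4, 14.0),
    ("C", 2.0, 30.0),
]

def pick_min_carbon(designs, perf_floor):
    """Among designs meeting the performance floor, pick the lowest-carbon one."""
    feasible = [d for d in designs if d[1] >= perf_floor]
    return min(feasible, key=lambda d: d[2])

def pick_max_perf(designs):
    """Pick the highest-performing design, regardless of carbon."""
    return max(designs, key=lambda d: d[1])

best_green = pick_min_carbon(candidates, perf_floor=1.2)  # meets floor, least carbon
best_fast = pick_max_perf(candidates)                     # fastest overall
```

With these made-up numbers the two targets pick different designs, which is exactly the trade-off she describes.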
So, in your research so far, what would you say has been something that's really surprised you?
Something you didn't expect to find?
Something you didn't expect to find.
I was very surprised to see that the PFAS used in semiconductor manufacturing are actually pretty essential in many ways.
And the most surprising part to me was that most of the PFAS used in semiconductor manufacturing do not actually stay on the chip.
They mostly go to wastewater, but they're something that you absolutely need to make that integrated circuit, and yet they're not on the actual chip.
This whole PFAS thing made me realize that the sustainability issues we really need to think about in computing are not just about what remains in the end product; if we start looking into the process itself, we might find that there are so many opportunities to improve the sustainability of computing in general.
What's the next stage for this project?
What are you working on now?
What do you hope to achieve before you finish your PhD?
So one of the things that really excites and interests me in the sustainability space right now is thinking about better optimization techniques that can help designers start doing actual sustainability-aware design.
And one of the main challenges in the sustainability area is that there's so much uncertainty in the data.
And this uncertainty comes in so many different shapes and forms.
One, there's a lack of data.
And two, there's just a lot of variability in the data that you have.
Because as you can imagine, the semiconductor supply chain is so vast, spread across so many countries and so many electricity grids, that there's just a lot of variability in it.
And so I'm looking at optimization techniques that, despite all of this uncertainty, would still enable the designer to figure out the optimal design: the design you should be thinking about to have a more sustainable computing system that also meets all of your other power, performance, and area criteria, and what kind of trade-offs you can make.
So that's the most exciting part for me right now in my project.
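One standard way to handle the data uncertainty Maryam mentions is to model each uncertain quantity as a range and compare designs on sampled outcomes rather than point values. This sketch is an illustration of that general idea, not her actual method, and the designs and ranges are invented:

```python
import random

random.seed(0)  # deterministic sampling for the example

# Each design's embodied carbon is a (low, high) uncertainty range rather
# than a single number. The values are made up for illustration.
designs = {
    "design_x": (40.0, 60.0),  # wide range: lots of supply-chain variability
    "design_y": (45.0, 50.0),  # narrower range: better data
}

def expected_carbon(lo, hi, samples=10_000):
    """Monte Carlo estimate of mean carbon over a uniform uncertainty range."""
    return sum(random.uniform(lo, hi) for _ in range(samples)) / samples

estimates = {name: expected_carbon(lo, hi) for name, (lo, hi) in designs.items()}
robust_pick = min(estimates, key=estimates.get)  # lower expected carbon
```

A real method would also weigh the variability itself, not just the mean, but the sketch shows how a designer can still rank designs despite uncertain inputs.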
Well, I look forward to hearing how that pans out in the future.
Maryam Elgamal, thank you so much for appearing on the podcast.
Thank you.
Well, that was a really interesting discussion, Margaret.
I mean, one thing that struck me almost immediately is, I suppose, the pure economics of this.
I mean, I can see why a designer would want to make a better chip, a chip that's more energy efficient and lower cost.
I can also understand why you'd want to eliminate other sorts of energy costs from the process.
But what are the economics that are driving things on the environmental side?
I mean, particularly now, when the tide almost seems to be going in a different direction, unfortunately, for making, well, just about anything greener.
Was there any talk about that at the conference, or did Maryam have any further insights into that?
Yeah.
Maryam and I didn't really get into that, but there was a lot of interest in the environmental effects of AI, which, you know, are quite severe in terms of how much energy is being used to create and run the large language models that underlie things like ChatGPT, Copilot, and various other AI systems.
I think one of the factors, in terms of computer chip design specifically, is that some of the chemicals she talked about are quite expensive, and using less of them would be good economically as well as environmentally.
And then I suppose, if you want another economic perspective, if you end up dumping these chemicals into waterways, you can, in principle, get fined for that, depending on where you are in the world and how the local enforcement regime works.
So I think there are some cold-hearted economic factors pushing towards making things greener, not just a sense that, oh, well, it would be better for the environment.
But I'm not trying to diminish the it-would-be-better-for-the-environment arguments, because that's obviously important.
We all live on this planet.
We've all got to keep living on this planet.
There's not really another option, is there?
No.
That's right.
Yeah.
And I suppose if your factory is working to very high environmental and health-and-safety standards, then if you can eliminate as many dangerous, nasty chemicals and materials from that process as possible, you can save a lot of money, can't you?
And, you know, make your employees happier, because they don't have to work with these particularly nasty materials.
So Yeah.
Yeah.
You have health effects also for the people working with it.
That's true.
And I think it's a little bit like other economic and environmental considerations.
I mean, not flying to conferences as much as we used to, perhaps, is good for the environment, but it's also good for organizations' bottom lines.
Not sending researchers around the world quite as much is beneficial for that, which I guess is one problem with things like the Heidelberg Laureate Forum, because people do come from all over the world.
But I met some people there, particularly from a group of alumni who can come back to the conference occasionally, who said, well, we have a collaboration going, and we're having our meetings at the Heidelberg Laureate Forum because it saves us travelling around various different places.
So it saves them in that respect.
So I think there are these kinds of parallel concerns that can feed into things becoming greener in a slightly indirect way.
Yeah.
And I suppose, you know, solar panels are a classic example of something that was brought in to reduce carbon emissions, but it turns out you can produce electricity really cheaply with them.
Yeah.
You know, much cheaper than with fossil fuels.
And so who's the second person that we're gonna hear from, Margaret?
I have a funny feeling it's a fellow Canadian.
We love to have Canadians on the podcast.
Actually, Maryam's Canadian as well.
Although she's doing her PhD at Harvard, she is Canadian.
So yeah.
That's interesting because I was listening to her, and I thought she she sounds like she's from Montreal.
Do you know if she's from Montreal?
I don't think so.
No.
I think she's more Toronto, probably from near your area.
But she's sort of lived between Canada and Egypt and the US.
Wow.
So both Canadians.
Both Canadians.
Yeah.
Yeah.
Well, well done, Margaret.
I should say that I had nothing to do with this.
That's great.
Well, let's move on to the next Canadian.
What's his name?
So he's Andrew Gunter.
He's finishing his PhD at the University of British Columbia in Canada, and he works in designing circuits for computer chips.
And I asked him to start by explaining what that design process usually looks like.
Designing chips and creating chips, let's say, has three aspects.
First, there's the design entry, where you determine the functionality of the chip that you want.
But you don't know anything about how it's going to be implemented.
You don't know what it's going to look like, how well it's going to perform.
You just know the functions.
That's step one.
Step two is converting that functional description of the chip to a physical description.
That physical description tells you where the components go, where the wires are laid, and how it's going to be manufactured.
Once you have that, then you go to fabrication.
You ship your design off to TSMC in Taiwan.
They do a whole bunch of very complicated things, and the output is a physical chip.
And you would have this process regardless of whether you're ordering a couple of chips for a very specialist application or going all the way through to mass manufacturing.
Is that right?
Yeah.
So what we'd call the high-end technology nodes are very expensive.
You really only want to use high-end technology if it's mass-market production.
If you have a lower volume, you might use an older technology, which is cheaper and more reliable.
A big issue with chip fabrication is errors in the fabrication process.
The newer and more complex the processes, the more errors you have, the lower your yield rate is.
So maybe you try to manufacture, let's just say, 100 chips on a new technology node, and maybe you only get 80 out of the 100.
That drives up cost.
If you have really low volume requirements for how many chips you need, you might look at field-programmable gate arrays, a form of reconfigurable chip, which is actually my expertise.
And we can talk about that more if you want, but that's the spectrum.
Okay.
So tell me what a field programmable gate array is.
Yes.
So as opposed to what we call an ASIC, an application-specific integrated circuit, where you determine the design that you want and then go off and manufacture it, an FPGA, a field-programmable gate array, is a chip with fixed resources that has already been manufactured.
After the manufacturing process, you then determine the design that you want to implement on the FPGA.
So let me clarify what that means.
Let's say you're creating a CPU.
Right?
You could design the CPU and manufacture it, and that will cost a bunch of money.
Or you design the CPU and you implement the design on an FPGA.
And now this FPGA implements the functionality of the CPU that you specified.
So the FPGA has a fixed set of resources.
If the CPU you've designed requires no more than those resources, you can implement it.
If it requires more, either you get a bigger FPGA or you go to an ASIC.
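Andrew's point about fixed FPGA resources amounts to a simple feasibility check: a design fits only if every resource it needs is available on the chip. The resource names and counts below are invented for illustration, not taken from any real device:

```python
# Sketch of the FPGA resource-fit check. Resource names and counts are
# illustrative placeholders.

def fits_on_fpga(design_needs, fpga_resources):
    """True if every resource the design needs is available on the FPGA."""
    return all(design_needs.get(r, 0) <= fpga_resources.get(r, 0)
               for r in design_needs)

fpga = {"luts": 100_000, "flip_flops": 200_000, "dsp_blocks": 1_000}

small_cpu = {"luts": 30_000, "flip_flops": 50_000, "dsp_blocks": 20}     # fits
big_cpu = {"luts": 250_000, "flip_flops": 400_000, "dsp_blocks": 2_000}  # needs a bigger FPGA or an ASIC
```

If the check fails, the options are exactly the ones Andrew lists: a bigger FPGA or an ASIC.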
That sounds a little bit like a breadboard stage.
If you think about traditionally physically wiring circuits, you put something on the breadboard, then you see if it works, and only then do you go off and solder the real thing.
Exactly.
So a breadboard would be useful for a lot of analog circuitry or very simple circuits.
Once you get to complicated digital circuitry and you want to do prototyping, then you go to an FPGA.
In fact, and I'm not an industry person, but from what I've been told, one of the big revenue drivers for the FPGA industry is prototyping ASICs.
So they do get used in that production loop as well.
What specific problem with this process have you been trying to solve in your PhD?
Yeah.
So, maybe multiple problems, but I will give you, let's say, the big-picture problem.
So I told you step one, step two, and step three previously.
Step one is design entry.
Step two is the conversion from a functional to a physical design, and we refer to that as electronic design automation, and step three is the fabrication.
So my research deals with step two primarily, electronic design automation.
We have the functional design.
We want to produce a physical design.
Now, the way that we do this is in several stages, and at each stage we usually have an NP-hard problem to solve, and we use a heuristic algorithm to solve it.
So there's stage one within this, then stage two, then stage three; it's a sequential process.
At each stage, things can go wrong.
We're using heuristic algorithms.
So they can fail to produce a viable solution.
If they do succeed, they can take an unexpectedly long time to produce a solution, and the solution may be of low quality.
So my research deals with predicting in advance before you run the algorithms if any of those three things will become an issue.
So prior to this research, that electronic design automation process, it's very uncertain, and I like to say that my research is adding certainty to electronic design automation.
That sounds like the equivalent of a software update: you might want to know in advance that the update is going to take, I don't know, three hours and forty-five minutes to run, as opposed to taking two minutes.
Yeah.
So I normally don't market this way, but when I talk to others in the domain, they say, oh, you're creating a progress bar for electronic design automation.
And I go, yeah.
You know what?
You can think of it that way.
There's a little bit more, but you could create a progress bar with this.
Whereas today, there's nothing.
You just kind of sit there and you wait and you see what happens.
I want to get a little bit into the mathematics of this.
You mentioned something as being an NP hard process.
Yeah.
So without going into the technical details of what NP hardness is, I think the important intuition is that we're looking at optimization problems, and there are many, many different possible solutions to this.
So we would typically speak of the complexity of the problem, and the problems that we're dealing with have combinatorial complexity, where for every element of the output solution you can choose from, let's say, a set of possibilities.
And any combination of those possibilities is viable in the output space.
So we say it's combinatorially hard, and those combinatorially hard problems fall into the complexity class of NP-hardness in computer science.
So very, very hard stuff.
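The combinatorial blow-up Andrew describes is easy to see numerically: if each of n solution elements can independently take any of k values, the solution space has k to the power n candidates. A tiny sketch with invented sizes:

```python
# Why combinatorial problems get hard fast: k choices for each of n
# solution elements gives k**n candidate solutions.

def solution_space_size(num_elements, choices_per_element):
    return choices_per_element ** num_elements

# Even toy instances explode: 10 elements with 4 choices each is already
# over a million candidates, and 20 elements pushes past a trillion.
tiny = solution_space_size(10, 4)
small = solution_space_size(20, 4)
```

At the ten-million-component scale Andrew mentions later, exhaustive search is hopeless, which is why heuristics are used instead.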
And what was the outcome of your process?
What stage is the project at currently?
Yeah.
So I'm toward the end of my PhD.
Many years have gone into it, and I've completed a lot, but there's a lot more to go.
So what I've done so far is I've looked at the hardest problem within electronic design automation, which is a routing problem.
Now routing is hard, not just because of the complexity of the task, but the scale at which we do it.
So for routing, we typically think of the chip as a graph.
And when I say graph, I mean in the computer science domain where the graph has nodes and nodes have edges.
The nodes represent wiring resources in the chip and the edges represent the connectivity between the wires in the chip.
Now, the routing problem involves connecting, communicating components in the design.
So if I have component a in the chip and component b, and my functional design says these two things need to talk to each other, we need to lay a wire that connects the two of them.
Now, that's really simple.
However, it's not usually just component a and b.
There's not normally two.
The typical order of magnitude in my work is about 10 million components, and the typical number of wires is about a billion.
And so this graph then has about 10 million nodes and a billion edges.
And so we have to use algorithms to solve this NP hard problem on a billion scale chip.
And so my research is predicting whether the algorithms can do that for a specific problem, predict how long it's going to take, and predict the quality of the final solution.
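At toy scale, the routing task Andrew describes looks like path-finding on a graph: wiring resources are nodes, connectivity is edges, and connecting component a to component b means finding a path between them. Real routers attack billion-edge graphs with heuristics; this sketch just runs breadth-first search on a hand-made five-node graph:

```python
from collections import deque

# Toy routing graph: nodes are wiring resources, edges are connectivity.
graph = {
    "a": ["w1"],
    "w1": ["a", "w2", "w3"],
    "w2": ["w1", "b"],
    "w3": ["w1"],
    "b": ["w2"],
}

def route(graph, src, dst):
    """Return a shortest path of wiring nodes from src to dst, or None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists on this graph

path_ab = route(graph, "a", "b")
```

Connecting one pair is trivial; the hardness comes from routing millions of pairs at once through shared, limited wiring resources.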
But if your process said, actually, this is going to either take a very long time or it might take a long time, might take a short time, but the solution it's going to come up with is not gonna be very good, what happens then?
Yep.
So a few different things can happen.
It depends on exactly how your design flow looks, but I'll give you two examples.
One example that is a state of the art for many people is they perform what we call the placement task, which happens before routing, with multiple different instances.
So what this means is that in placement, we have components of the chip, and we need to figure out where they go.
We have to place them at fixed locations.
We use pseudo random algorithms for this, which give different outputs based on a different random number generator seed.
So let's say we'll do 10 different placements with 10 different random seeds.
And then now we have 10 different inputs to the routing problem.
We will then run 10 different instances of the routing problem, and we can use my research to predict which of those is most likely to succeed, take a short amount of time, and then give you a good result at the end.
And so you would kill nine of the instances and keep just one.
So that's one way it could be used.
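A minimal sketch of that select-and-kill flow, under loud assumptions: `place` is a hypothetical stand-in for a pseudo-random placer, and `predicted_route_quality` is a hypothetical stand-in for the trained predictor from Andrew's research, not his actual model.

```python
import random

def place(netlist_size, seed):
    """Stand-in for a pseudo-random placer: assigns each component an
    (x, y) slot; a real placer optimizes wirelength, congestion, etc."""
    rng = random.Random(seed)
    slots = [(x, y) for x in range(10) for y in range(10)]
    rng.shuffle(slots)
    return slots[:netlist_size]

def predicted_route_quality(placement):
    """Stand-in predictor: scores a placement by (negative) bounding-box
    spread, where a trained model would predict routability instead."""
    xs = [x for x, _ in placement]
    ys = [y for _, y in placement]
    return -((max(xs) - min(xs)) + (max(ys) - min(ys)))

# Run ten placements with ten seeds, keep the one the predictor favours,
# and "kill" the other nine before paying for their routing runs.
candidates = [place(20, seed) for seed in range(10)]
best = max(candidates, key=predicted_route_quality)
```

The economics are the point: routing is the expensive step, so a cheap prediction that prunes nine of ten candidates up front saves almost all of that cost.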
Another way that it could be used, in tandem with that actually, is you have a routing problem and you predict it's not going to work out, or there's a low probability that it's going to work out and give a successful solution.
You can kill the routing run early, go to an earlier stage of the design process, make some tweaks based on your engineering expertise that you believe are going to improve the likelihood that the chip will be routable, as we like to say it.
You've worked out that you can't get from point a to point b by subway, and you've got to work out how to do it on foot instead.
Exactly.
Right?
During this project, was there anything that you thought would be hard that turned out to be easy or vice versa?
Anything you thought that would be easy turned out to be really, really difficult?
I can't say anything that I thought was going to be hard turned out to be easy, but I can say that there are things which I thought were going to be easy that turned out to be hard.
I'll give you one example of that.
So something which I've personally tried to look at, which prior research hasn't really, is the probabilities of what's going to happen.
So I apply machine learning to do these predictions.
There's been prior work on using machine learning for similar aspects of electronic design automation, and it typically predicts the expected outcome.
So for example, we could say that we expect that the chip is going to come out with an operating frequency of 500 megahertz.
Sure.
You can take a more nuanced view to this.
You could say our average expected result is 500 megahertz, but within a 5% to 95% confidence interval, we expect it to be between 400 megahertz and 600 megahertz.
As it turns out, this is very useful, but it's not straightforward to do.
Why is that so useful?
So it ends up being useful because we often have, let's say, some room for tolerance in these things.
Right?
So it it's nice to know that we expect a 500 megahertz chip, but maybe we don't actually need that.
Maybe 450 megahertz would be enough.
So only knowing that we expect 500 is nice.
That kind of tells us most likely we'll get what we want.
But what if there's a significant probability of it being below 450 megahertz?
Then that single prediction of just 500 doesn't give you any information about that.
Now if you don't have that information, how do you make a decision?
How do you know whether this is going to be good enough?
You don't, and so you have to guess, and that's what people have done.
People have guessed in the face of uncertainty.
But now I'm giving you a probabilistic view of this.
I can tell you, based on prior data that we've collected, that there's an x percent chance that the frequency of the chip will be greater than your minimum threshold of 450 megahertz.
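A toy illustration of both ideas, the 5%-95% interval and the probability of clearing a minimum threshold, computed from a hypothetical set of prior outcomes. The numbers are invented for the sketch; a real predictor would condition on features of the specific design rather than pooling raw past runs.

```python
from statistics import quantiles

# Hypothetical operating frequencies (MHz) from previously completed runs
# of the same design flow -- invented data, purely for illustration.
prior_runs = [412, 448, 455, 471, 480, 495, 502, 508, 515, 524,
              530, 537, 541, 555, 560, 568, 574, 581, 590, 604]

# 5%-95% interval via the 5th and 95th percentiles of prior outcomes.
cuts = quantiles(prior_runs, n=20)  # 19 cut points: 5%, 10%, ..., 95%
lo, hi = cuts[0], cuts[-1]

# Probability that the chip clears a 450 MHz minimum threshold,
# estimated as the empirical fraction of prior runs that cleared it.
threshold = 450
p_pass = sum(f >= threshold for f in prior_runs) / len(prior_runs)
```

With these invented numbers, a single point prediction near the mean would hide the fact that a non-trivial fraction of runs land below the 450 MHz floor, which is exactly the information the probabilistic view restores.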
Are there any implications of these results beyond chip design?
I mean, I can imagine that other industries will also have routing problems and will also be using the algorithms to try to solve them.
Yes.
So I think this is an early but fertile area for research.
So, thinking more fundamentally about what my research is doing: yes, we're predicting outcomes in chip design, but we're really predicting the outcomes of algorithms.
Algorithms get used beyond the chip design world.
They get used in more places than I could ever imagine counting.
And so researching the outcomes of algorithms has a more global implication, an implication that I think a lot of other researchers and practitioners should pay attention to, because it gives us new applications for this research, but I think it also drives new research areas.
So one of the hot topics in machine learning these days is foundation models.
Now I'm telling you that there are many different algorithms that we could apply this to.
Here's an idea.
What if we collected data from several different algorithms to train one foundation model, which is used to predict the outcomes of many different algorithms, and then we fine-tune this as we want?
This is a paradigm that we've seen succeed in other areas of machine learning, but not really in the algorithms world.
And something that's really key to note here is that we can collect high quality data from algorithms.
It's really easy to extract information because we're not dealing with physical processes with noisy sensor data that you might have in, say, the physics world.
We're in the world of software.
And in the world of software, we can collect near perfect information to train our models.
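That data-collection step can be as simple as wrapping every algorithm invocation so it emits an exact record of inputs, parameters, runtime, and outcome. The sketch below is an assumption about how one might do this, not a real EDA tool's logging schema; `toy_algorithm` and the record fields are hypothetical.

```python
import json
import random
import time

def instrumented_run(algorithm, instance, **params):
    """Wrap an algorithm call so every run yields a clean, noise-free
    training record: input size, parameters, runtime, and outcome."""
    start = time.perf_counter()
    result = algorithm(instance, **params)
    return {
        "instance_size": len(instance),
        "params": params,
        "runtime_s": time.perf_counter() - start,
        "outcome": result,
    }

def toy_algorithm(instance, seed=0):
    """Stand-in for a router or placer: returns a 'quality' score."""
    rng = random.Random(seed)
    return sum(instance) + rng.random()

# Every record is exact -- no sensor noise -- so it can go straight
# into a training set (serialized here as JSON lines).
records = [instrumented_run(toy_algorithm, list(range(n)), seed=s)
           for n in (10, 100) for s in (0, 1)]
lines = [json.dumps(r) for r in records]
```

This is the contrast Andrew is drawing with physical experiments: the features and labels here are measured perfectly by construction, so model quality is limited by coverage of the algorithm's behaviour, not by measurement error.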
That does sound exciting.
Okay.
Final question.
What are you planning to do next?
That's a good question.
So everything I just described is my academic life, my PhD.
I also have an industry life on the side.
So I work part time at a startup called Singulos Research in Vancouver, Canada, where we do nothing related to anything I just said.
We are looking at mixed reality experiences using computer vision for real time applications on mobile devices.
And I'm hoping that things go well with the startup, and if they do, I'm planning to continue research there.
Maybe I'll go between that and the chip design world.
I have several interests.
So it's up in the air at the moment.
Well, good luck.
Thank you very much for appearing on the podcast.
It's been good to talk to you.
Thank you.
It's been lovely, Margaret.
That was Andrew Gunther speaking to Margaret Harris at the Heidelberg Laureate Forum in Heidelberg, Germany.
Thanks to Andrew and our other guest, Mariam Elgemel, for enlightening us about what goes into designing computer chips, the chips that we all use just about every hour of every day.
And, Margaret, thanks for coming on the podcast.
And, can can you give us a little preview of some other interviews that you've got from Heidelberg that are coming up?
I think we've got two more, don't we?
Yeah.
I spoke to two laureates at the Heidelberg Laureate Forum who both have a background in physics before they went into computer science, which is where they've made their names and won their prizes.
Patrick Hanrahan is now best known for his work at Pixar on computer animation.
And Amanda Randles is working on computational health science in the engineering department at Duke University.
We actually overlapped there as physics students, so it was nice to talk to her for that reason as well.
And so, yeah, they'll be coming up on the podcast over the next couple of months.
We don't want to have all computer science all the time, so it's back to our regularly scheduled physics next week, I think.
That's right.
Well, I look forward to getting those interviews onto the podcast.
Thanks, Margaret, for coming on the podcast and for doing those interviews, and a special thanks to our producer, Fred Isles.
We'll be back again next week.
Bye.
