
E94
Why Science and Philosophy Need Each Other | Lauren Ross & Megan Peters
Episode Transcript
Megan, Lauren, thanks so much for joining me today.
We have an exciting colloquium planned.
The topic is why science and philosophy need each other.
I couldn't think of two more perfect guests to be on the show, because Lauren, you could be seen as a scientifically informed philosopher.
Megan, you could be seen as a philosophically informed scientist.
However, one could argue that both of you are actually both.
So I think that brings us to this first question, which is fundamental to setting the stage.
Why do both of you think science and philosophy are often seen as separate, even antagonistic, when historically they emerged together?
Lauren, do you want to perhaps start?
And then Megan, you can take it from there.
Sounds great.
It's a very interesting question, and it partly has to do with how we think of both science and philosophy in modern times.
For science, usually we have more awareness and understanding of what we're referring to, but with philosophy, that's less the case.
Philosophy is this label that can be applied to many different things, many different types of thinking.
I mean, philosophy is also a field, of course, that involves different types of work.
In everyday life conversations, we use the term philosophy to sometimes refer to places where you have questions without answers, a place where anything goes, where maybe we're interested in your own subjective views, your beliefs and your thoughts.
And sometimes philosophy in this space is viewed as the opposite of pragmatic.
It's your philosophical musings, or you'll sometimes hear even scientists say that they're asking a philosophical question, where they mean there's this sort of unbounded, open question.
Or you'll hear the expression "mere philosophy."
So I think that starts to paint a picture of how science and philosophy can be seen as opposites.
And I would say mistakenly, because that picture of philosophy is not how we think of science.
This picture of philosophy, questions without answers and a sort of unbounded anything goes: well, science doesn't operate like that.
And part of what can help here is to specify different types of philosophy, and in this case, philosophy of science, philosophy of mind, philosophy of cognitive science, where we don't have an anything goes type project.
We have an interest in the principles of science.
We have an interest in precision, clarity, rigor.
And so this would be one reason I would suggest for why they're sometimes seen as separate or in contrast.
It'll partly depend on who you talk to.
Many scientists do view them as helpful colleagues that kind of need to work together or that when they do work together can lead to various types of successes.
So that would be a first answer that I would give.
Megan same question.
Yeah, actually there's a lot of what Lauren said that I agree with, but I'm actually kind of surprised by this question in general, because from my perspective, I don't find them antagonistic at all.
And I guess it's my privilege that the philosophers and scientists that I tend to hang out with might agree with me that they are not antagonistic.
But again, that's my own privilege in the space that I choose to occupy and that I'm privileged to occupy.
I feel that, yes, there is this general tenor, this general feeling that philosophy and science might be separate, antagonistic, because there are the empirical scientists who are doing the real work.
And it's the philosophers who are over here in their armchairs kind of, you know, deciding that there is a difference when maybe there isn't really a difference in the real world.
And so there could be that tension where it's like, do we really need to be having that particular type of conversation?
Does that really matter to the experiments that I'm going to be doing?
But I think that, again, the folks that I tend to interact with within the science and the philosophy space recognize that these are truly synergistic, not even just friendly, that they both can learn from each other in the ways that Lauren pointed out.
But I will disagree, Lauren, with one thing that you said, which is that the realm of questions without answers might be more the philosophical realm.
And I feel like there's just so much of science that is that too.
There's so much where the purpose of what we're doing as scientists at the cutting edge, at the forefront of our knowledge, is, well, maybe there isn't an answer yet, but the role of philosophy and the role of philosophically informed science could be to discover: is there an answer to be had here?
So we don't know the answer yet and we need to decide, is this something that we could actually go after scientifically?
So a lot of what I do is a question without an answer yet too.
So, yeah, I think that, sure, the general tenor might be that these are antagonistic, but I think that there are quite a lot of people who also disagree with that general assessment.
Yeah, I think that a lot of people actually forget that when you do a PhD in anything scientific, you're fundamentally getting a doctorate in philosophy, because this is a rich history of philosophy that's expanded over time.
Natural philosophy has become what science is today.
Lauren, for you, as a philosopher of science and a trained physician, a medical doctor: I know that when I worked in the medical field, and when I do work in the medical field, most people are very uninformed philosophically.
It's quite a common theme.
It's not that they're not interested in it.
It's just that they don't have the time, perhaps, to explore this as much as someone like myself does.
So what do you think scientists most often misunderstand about what philosophy actually contributes?
I think that if scientists are misunderstanding what philosophy is here, primarily philosophy of science, and what it can contribute, it's this view that philosophy is sort of so open and so anything goes that it isn't useful, and that it's this space where you can't always get traction on questions or on how to understand the world.
Part of what's coming up in this question and part of, as you mentioned, Ted, why this is so interesting, the kind of current views in some areas that these are disparate types of study is that they very much were part of the same program in early work.
So natural philosophy, we refer to early scientists as natural philosophers.
Aristotle is both someone we think of as an early biologist and also a philosopher.
And in Darwin's lifetime, the term scientist was created.
And so he was a natural philosopher perhaps early on in his career and then was only referred to as a scientist later.
So they very much do have a shared root and a shared history.
Currently, the common misconceptions I see is that philosophy is so open that you can't use it to get guidance.
And that's, as Megan is suggesting, very much antithetical to the kind of work that you see in a lot of philosophy of cognitive science, a lot of philosophy of mind, a lot of philosophy of neuroscience.
This is a space where you have philosophers who are doing work where they're interested in, I'd say, three main things.
There's an interest in getting precision about foundational topics and methods in science.
They want to know the principles and the justification that are guiding those concepts, those methods.
And then they want to know, and they want to be able to specify how something works.
If you have a scientist giving an explanation, how do you know it's a good one?
If you have scientists debating how we should understand causation or what the mechanism is for something, how do you know when it works and when it doesn't?
So here we're often looking at science from a functional perspective where scientists have goals and you can assess the success of science with respect to when scientists are reaching those goals.
And so in this space, we think of science as a practice that gives us our best understanding of the world.
And it often involves this theorizing that we sometimes call philosophy, that scientists are very much doing and philosophers of science are engaged in as well, where you're looking at these fundamental scientific concepts and practices that scientists engage in.
If science does give us our best understanding of the world, we should be able to say how it does.
And that's where here it's helpful to get precision about what is an explanation in science, what is causation?
What is getting information about the causal structure of the world?
What are the principles that scientists use that we can identify to help guide work in this space?
And then how do you know when it works?
How do you know when scientists have met the standards of their field?
And that partly involves specifying what they are.
And so I think it's sometimes surprising to physicians, healthcare practitioners who are more in a professional space and aren't necessarily theorizing the way that other types of scientists are, to hear that this is a kind of work that happens in philosophy and philosophy of science.
So, yeah, there's a kind of difference, I think, across types of researchers.
Some of them are more on the front lines of professional work, and maybe others are more engaged with research.
And you have some of these researchers who are working with philosophers and kind of interested in these theoretical questions that show up in philosophy of science.
Megan As a neuroscientist deeply grounded in philosophy, what do philosophers sometimes overlook about how scientific practice really works today?
You're asking me to say what's wrong with all my colleagues.
So I think for me this is a challenge that maybe philosophers face more than some scientists, but scientists certainly face this challenge as well.
And that is, I hinted at this earlier, this idea of is this a difference that makes a difference?
So philosophers of science, philosophers of mind, philosophers of modeling, of cognitive science, will often try to drive at the conceptual distinctions that provide clarity with respect to the questions that we're asking and an assessment of the validity of the methods that we're using to answer those questions.
But sometimes I think philosophers, and scientists to a certain extent as well, we get so into the details of finding the joints in nature, you know, finding the separation between two concepts, that ultimately we should take a step back and say: all right, well, maybe there is this separation in concepts that you've identified, this difference between concept A and concept B that you've started to really home in on.
How could we ever know if that's a real difference? Not just a difference that we can conceptualize, that we can come up with, that we can describe, but a real difference in the world.
This is a real joint in nature.
And I think that sometimes the pushback that scientists will give toward philosophers is this: oh, well, you're making distinctions that don't really have any bearing on anything that's physical, that's real, that's empirical.
And so you're really just kind of in this space, as Lauren said, where anything goes; you've discovered a difference and you've decided that that's an important difference.
But I think that the hard part is not just dismissing these differences or these distinctions and saying, well, I could never test for them, so it's not a meaningful distinction.
The hard part is deciding whether there is a meaningful distinction there.
And so it's deciding whether this is a problem where a philosopher of science or of cog sci or of modeling has come up with a distinction that may or may not be empirically testable.
And the challenge is to say, do we care to empirically test this?
And if we do care to empirically test it, can we even come up with something that would allow us to see whether this joint in nature is actually present?
And so I think that that's a hard hump between science and some philosophy, where some more pure philosophers of science will see the intrinsic value of making the distinction and clarifying it to begin with.
And some empirical scientists will say, well, that's great, you can write it down, that's lovely.
You can draw a picture.
But like, do I actually care?
Is this a thing that I can go and find with some sort of empirical study?
So that, I think, would be the closest thing that I can think of to something that philosophers might overlook, or where the relative value placed on that enterprise differs between philosophy of science and empirical science.
Megan, before we started, while we were waiting for Lauren (I might have mistakenly not sent the link to the right place), we were chatting about one of our favorite heroes, Daniel Dennett.
And growing up, I often looked at Dan as both a neuroscientist and a philosopher.
He was someone so ingrained into both of these fields.
And he often touched on this deeper cultural reflection.
So does this divide reflect something deeper, perhaps, like objectivity versus reflection, or "shut up and calculate" versus "anything goes"?
What do both of you think about this?
And how might we bridge this divide to transform how we study the mind and consciousness?
It partly relates to what has come up already because, as Megan suggested, it sometimes is confusing to think of there being a difference between this kind of philosophical work and science, because you see scientists who are engaged in philosophical questions and theorizing.
So from my perspective, I look at what they're doing and they're doing philosophy, and then I'm looking at these philosophers who are interested in providing analyses and accounts that, as Megan was suggesting, kind of latch onto the world and matter.
You can do something with them.
You can show why this would be a good account to have or not.
You see how both of them are really interrelated types of projects.
And so I think it partly boils down to sometimes we have cartoon pictures of both.
We have a kind of cartoon picture of a scientist who just takes out a measuring device and goes out and studies the world.
And what you miss if you look at that picture, is all the theorizing that took place before you set up that experiment.
There's so many assumptions involved.
There's so many methods you can choose from.
There are so many questions that scientists need to, and do, ask themselves and answer before they just go out and get the objective facts about the world.
And it's going to depend on the questions they ask, which is partly what Megan brought up.
And in some cases, you've got to ask the right kind of question too, or appreciate that different questions require different methods, and then they give you different answers.
This is also just fascinating from the standpoint of how complex the world is.
Scientists have to deal with that and they want order.
And it's fascinating how they're able to do that given the complexity of the world.
And so, you know, they are able to do that.
We kind of look at the places where they've done it, and then we're looking at these other situations where there's a complex new question, there's some new territory they're trying to understand, right?
Is the brain the most complicated machine on the planet?
You know, the brain is so complicated, the world is so complicated.
And so you see how they're making decisions about what to do with that, because they can't cite all the detail that's out there, and not all of it matters.
So they have to figure out what details of the world matter and how to carve out questions that allow them to give principled answers to those kind of topics of interest.
And so this is both science and philosophy, the way I think Megan and I often see it.
But if you have a toy picture of science and a toy picture of philosophy, they look very distinct.
And there are types of philosophy, as Megan is suggesting, where it's more armchair type work.
And in this case what we want is we want philosophy that's useful for these kind of scientific questions.
And we see many examples of that.
That would be part of the kind of answer I would give.
I totally agree with the cartoonification of science versus philosophy, and in particular this version of science where you pull out your, you know, measurement-o-meter or whatever, and you point it at the thing and you get some sort of objective answer.
And so there's this mischaracterization of philosophy as this subjective anything goes, while science is objective and, like, definitely we're just measuring the world.
No, there is no such thing as objective science.
Sorry, but there just isn't.
We carry, as Lauren you said, the the assumptions that we make about the structure of reality, about the types of measurements that are going to be useful, about the types of models that we can build that will be useful to answering a particular type of question or retrieving a particular type of explanatory goal.
There are so many cases where, if you actually do kind of a historical overview of a particular niche field, you see this.
For example, take this particular type of model of decisions and reaction times in neuroscience: you know, how do people make decisions in a noisy environment, and how long does it take them to come to a decision under conditions of noise in the world?
You know, you're driving down a foggy road.
How long do you take to decide what you're seeing, right?
And what do you decide that you're seeing?
There's models of that kind of decision process and those models have been successful for literally decades since they were developed.
And there's been a lot of really beautiful work to say this is now like the dominant explanation of how we make these types of decisions.
But they have assumptions, and those assumptions drive the experiments that are done to generate the objective empirical data that then goes on to validate those models or cause them to be modified a little bit.
And if you take a step back and you look at those assumptions, they have constrained the space of inquiry in a way that obscured potential alternative explanations.
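To make that concrete, here is a minimal sketch of the kind of evidence-accumulation (drift-diffusion-style) model of decisions and reaction times being described, with its built-in assumptions spelled out; the parameter names and values are illustrative only and are not taken from any specific model discussed here.

```python
import numpy as np

def simulate_drift_diffusion(drift=0.15, threshold=1.0, noise_sd=1.0,
                             dt=0.001, max_t=3.0, n_trials=1000, seed=0):
    """Minimal evidence-accumulation sketch: evidence drifts toward +/- threshold.

    Assumptions baked in (the kind of assumptions at issue here):
    - a single, constant drift rate (stimulus strength)
    - one fixed, symmetric decision bound for the two choices
    - Gaussian moment-to-moment noise, independent across time steps
    """
    rng = np.random.default_rng(seed)
    n_steps = int(max_t / dt)
    choices, rts = [], []
    for _ in range(n_trials):
        evidence = 0.0
        for step in range(1, n_steps + 1):
            evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            if abs(evidence) >= threshold:
                choices.append(1 if evidence > 0 else 0)
                rts.append(step * dt)
                break
        else:  # no bound reached before max_t: count it as a timeout trial
            choices.append(1 if evidence > 0 else 0)
            rts.append(max_t)
    return np.array(choices), np.array(rts)

choices, rts = simulate_drift_diffusion()
print(f"proportion 'correct' choices: {choices.mean():.2f}, mean RT: {rts.mean():.2f} s")
```

Everything such a sketch can "explain", the choice proportions and reaction-time distributions it produces, is downstream of those baked-in assumptions; alternative architectures that might fit the same data never enter the picture unless someone explicitly builds and compares them.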
So this is a particular hobby horse of mine because we've got a couple papers on this recently.
But I think the general principle applies across all of science, not just cognitive science and psychology and, you know, complexity sciences within neuroscience: in general, the way you think the world works, the models that you've built, and their relative success in capturing the components of the system that you're trying to explain, that gives you myopia.
And if you don't get out of that, if you don't take off the blinders, you're going to miss a whole lot.
And simply the recognition that you have blinders on in the first place allows you to acknowledge that science is not an objective enterprise, that there is always a scientist in the picture, and that we are human beings: we have biases and we have preconceived notions and we have assumptions, and we shape the way that we go about trying to understand the world in ways that we may not be fully aware of at all.
Those biases and implicit assumptions are, well, implicit.
They are deeply buried, and they're going to shape the new models that we build.
So I fully, fully agree with Lauren here.
And this is another case where I think there's a, you know, seeming divide between objectivity and subjectivity, philosophy versus science, that kind of thing.
And we're kidding ourselves if we think that science is truly objective, because it just really is not.
Well, OK, so the stage is set, and I think now would be a great way to explore both of your work together while trying to illuminate each other's work.
So in that, with that being said, let's try this.
Megan, perhaps could you tell us why Lauren's work helps illuminate?
So let's say, why does her philosophical work help illuminate science?
And then I'm going to ask you, Lauren, do the same question but in reverse.
Sure.
So, as was probably said, you know, in my introduction, you can go Google both of us.
I am a philosopher and scientist of subjective experience.
I study the brain and the mind.
I try to reverse engineer the software that's running on the wetware of our brains and how that creates the subjective experiences that you have of the world and the models that you build and query and kind of run forward to predict what's going to happen in your environment and how you're going to interact with it.
So the kind of work that Lauren does is really helpful to me because it brings this conceptual clarity.
You know, consciousness science, as a broadly writ field, is a little bit all over the place.
You've got everybody from folks who are studying this from, you know, kind of the quantum or mathematical side.
And then you've got the cognitive neuroscientists who like to go look at brain activity.
And then you've got the theoreticians.
So it's a little bit all over the place, like a lot of fields.
Sure, you've got a lot of interdisciplinarity, but the nature of what we are studying as folks who are interested in subjective experience is even less objectively identifiable than basically anything else in the world because it is the thing that lives inside your head by definition.
And so having clarity on those concepts or seeking clarity on those concepts, what do I mean when I say consciousness, when I say subjective experience, when I say qualitative experience?
This is what Lauren's work gives us.
And I saw this very clearly, actually, at the Southern California Consciousness Conference that we both went to, I don't know, last spring, where Lauren kept pushing the rest of us scientists in the room to say: what actually are you trying to explain?
What is the target of your explanation?
Because every time you all say the word consciousness, I'm paraphrasing here, Lauren was a lot more, you know, diplomatic.
But basically, you know, every time that we said the word consciousness, everybody in the room meant something slightly different.
And it isn't just a taxonomic or linguistic problem.
This is a conceptual clarity problem.
And so I think that for the kind of work that I do, and even more expansively, the kind of work that any cognitive scientist or computational neuroscientist does, where we're really trying to reverse engineer the software of the mind, in a lot of ways the target of the explanation itself is unclear from the beginning.
And it's really hard to come up with a nicely constrained little box to live in and say that is the thing that I want to explain.
And so this is where someone like Lauren comes in, and Lauren is particularly good at doing this in a way that corrals the cats and herds the cats into coming up with something useful.
It's really valuable, because without that clarity, we're just going to have the same conversations over and over and over again, and they will always devolve into: what is it that we're even trying to understand?
Lauren, same question, but about Megan's work.
Perfect.
It's so important as a philosopher of science to talk to actual scientists, to make sure that the way you're characterizing what they do makes sense, is accurate, and it sort of keeps you in check a bit.
One of the challenges of my field is that sometimes philosophers will have toy, simplified characterizations of what scientists are interested in, what they want to explain, and what they're doing in the first place.
And so one of the areas I work on is scientific explanation.
How do scientists give explanations?
How do you know they've got a real one?
What are the standards that need to be met?
Well, one thing you need as a philosopher of science, if you're going to do that well, is you need to capture the actual explanatory targets that scientists are interested in.
And so one of the many values of talking to Megan is looking at the types of explanatory targets that she's interested in, in her work and in her field; they're far more complicated than a lot of the simpler models we have for how explanations work.
And so if we're going to provide hopefully accurate accounts of scientific explanation, we need to make sure that we're not just talking about explaining how, if you throw a rock at a bottle, it shatters, which is, you know, one of these classic examples that show up a lot in philosophy and that are often quite simple.
They have an explanatory target that's binary.
It sort of happens or it doesn't.
And you can even think of these examples that are more sciency.
So you might want to explain eye color in a fruit fly.
There's different colors that will show up and you want to know well what explains why it's got red eyes or white or black.
Or you might want to explain the height of a plant.
You have genetically identical plants and they've got different heights.
What explains that?
Those are getting us real scientific examples, but they are so much more simplified and less complex when you compare them to something like explaining subjective experience, explaining consciousness, or even explaining disease outcomes that are harder to identify and measure.
And so it keeps us honest, right?
That's one of the main advantages of working with Megan: it keeps your philosophy honest, both in terms of whether we're actually capturing the phenomena in the world that scientists are interested in, that they're studying, and then how they do it.
So another nice thing that Megan mentioned is that scientists, you know, and humans, when we're reasoning in everyday life and in scientific context, we have limited information about the world.
We don't have that picture where you've got information about all of the details.
And so one of the features we need to include in our accounts is that limitation.
When humans reason, there's limitations in terms of computational abilities, computational power, the time scale in which they're making decisions.
Scientists are humans, and so what's important is that our accounts of explanation need to include those limitations, but also that scientists manage to be successful despite those constraints.
And so part of what is so helpful about interdisciplinary connections of being a philosopher of science, working with an actual scientist is that when we're coming up with accounts of how scientific practice and explanations work, you can actually check it with the practice of scientists that are right next door to you.
You can talk to them about it.
You can make sure that you have clarity on what their goals are, right?
That's something that's very important in order for us to provide criteria for explanation, or ways of understanding causality, that are useful: we need to know what goals scientists have, and then whether these concepts are useful for their goals.
And so there's a whole host of reasons why working with Megan and talking with Megan helps keep my philosophy honest in a way that I wouldn't be able to do on my own, right?
Because she's doing that kind of scientific work in a way that I'm not.
So it's a big advantage of this interdisciplinary approach.
Yeah, I completely agree.
I think that both of your work works together.
It's a very symbiotic relationship.
It's something that should be seen as one.
And I think that by the end of this conversation, hopefully you both do identify as both philosopher and neuroscientist.
But Megan, let's let's go to your work for a moment.
In computational and cognitive neuroscience, models attempt to capture how the brain handles uncertainty.
What can these models truly reveal about something you just touched on earlier, subjective experience?
So if this is truly subjective, are these models going to give us any sort of objective information?
Yeah, great, great question.
And this is maybe not the hard problem, but this is one of the hard questions, right?
So the idea here is: can any empirical science give us any sort of foothold or toehold or fingernail hold on something that we might refer to as the hard problem, and on the nature of subjective experience?
And I think, you know, I'm gonna use a couple of overused examples here, maybe, to explain where I'm coming from.
But a lot of folks in the philosophically informed science of consciousness might say that consciousness science right now is in the state that the life sciences were in, you know, several hundred years ago, where there was this magical force that we called life.
And it was this vital force.
And we didn't know what it was, but it was like a thing that was out there and, and it was magic.
And that as we learned more about biology, the problem just kind of dissolved that we found ways of describing and explaining what was going on that made it very clear, well, this is a thing that's alive and this is a thing that's not alive.
And this is a thing that's maybe halfway in between like viruses and we're not really sure whether they're alive or not alive by, by different definitions.
But it kind of doesn't matter where we put that bifurcation, that binary point, anymore.
And I feel like I agree with the folks who will state that consciousness science may have a similar future ahead of it, where right now we have this monolithic thing that we call consciousness or subjective experience.
And it seems like there is this massive explanatory gap, but the reality very well could be that as we approach that explanatory gap, it shrinks; it appears to be this big chasm from over here, but as we take tiny baby steps towards it, it turns out that it was an illusion or a mirage or something.
So I think about the work that we're doing on how the brain deals with uncertainty, how it arrives at the best explanation that it can, that kind of inference to the best explanation.
You know, your brain is itself a natural philosopher, in that it's trying to understand the environment and build a model of the environment all the time.
It's doing what scientists are trying to do with, you know, with help from philosophers of science.
And so I think that, in a way, understanding how the brain is building these models of the world matters, because the result of those models is ultimately, somehow, magically, our subjective experience.
Unless you want to deny that subjective experience exists.
And that, again, might be like, OK, I'm going to leave that over there for the folks who want to argue that maybe subjective experience doesn't exist.
But for me, it's a useful assumption to say, Yep, subjective experience exists, conscious awareness exists.
So I'm going to try to build ways of capturing variance in it and linking that variance to simplified components of models that I build.
If I twist this knob in my model, it predicts that some sort of output on the subjective experience side is going to change in a particular way.
I'd go do an experiment.
Yeah, it did: people say, oh, I have a stronger subjective experience.
OK, so maybe I'm on to something there.
I'll link it up with the brain and say, OK, if I twist this knob, then I see like this area of the brain lights up more or the pattern changes or something, then I can say, OK, I think that this is the nature of the information being represented in the patterns of neural activity.
And it maps onto this component of the model and it maps onto this report of your subjective experience.
So that's how I'm trying to go about it.
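As a rough illustration of that "twist a knob, predict the report" workflow, here is a toy signal-detection-style sketch in which one parameter (internal noise) is the knob and the model's predicted confidence is the output to compare against subjective reports; the readout rule and parameter names are assumptions invented for illustration, not the actual models used in this research.

```python
import numpy as np

def predicted_accuracy_and_confidence(signal=1.0, internal_noise=1.0,
                                      n_trials=20000, seed=1):
    """Toy observer: noisy evidence -> a decision plus a confidence readout.

    Illustrative assumptions: evidence is Gaussian around the signal, the
    decision is simply evidence > 0, and confidence is the distance of the
    evidence from the decision boundary passed through a squashing function.
    """
    rng = np.random.default_rng(seed)
    evidence = signal + internal_noise * rng.standard_normal(n_trials)
    correct = evidence > 0                     # type-1 decision
    confidence = np.tanh(np.abs(evidence))     # crude evidence-to-confidence mapping
    return correct.mean(), confidence.mean()

# Twist the knob: more internal noise predicts lower accuracy AND lower mean
# confidence. That prediction is what then gets taken to an experiment
# (behavioral reports, neural measures) to see whether it actually holds.
for noise in (0.5, 1.0, 2.0):
    acc, conf = predicted_accuracy_and_confidence(internal_noise=noise)
    print(f"noise={noise:.1f}  predicted accuracy={acc:.2f}  predicted confidence={conf:.2f}")
```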
I'm not going to say that any work that I'm doing is solving any sort of hard problem or jumping any sort of explanatory gap.
But I think that if we sit over here and we say, hey, look at that explanatory gap, it's the size of the Grand Canyon.
I'm not even going to bother approaching it to see how big it is.
I don't think that that's a useful enterprise.
So I want to take, I want to create approaches to take those baby steps.
And that's some of the work that we're doing on metacognition specifically: not just understanding how the brain, or the mind, builds models of the world, but how it also puts itself into those models, how it builds models of itself.
And the subjective experiences that we have are ultimately the reflection of a combination of the model that we've built of our environment and kind of our own understanding, our introspective insight into that model, which we can query and evaluate and look at.
So that's how I use uncertainty or noise or variation: to look at how it dictates, how it interacts with, the subjective experiences that we can report in these kinds of experimental approaches.
Lauren, my question for you, and you'll notice that these questions sort of inform each other and bounce back and forth.
So, as a sort of reply to Megan: your research, Lauren, distinguishes between types of explanation.
So, mechanistic, causal, unification-based.
When a neuroscientist claims that they've explained something, for example consciousness, which form of explanation are they actually offering?
I think the short answer to this question is that it's still a bit of an open question what criteria we expect these explanations to meet.
I also think that few people in this space suggest that they have a full explanation or almost any explanation of consciousness.
So let me back up a little bit.
Here in my work in philosophy of science, we study scientific explanation.
What does it take for a scientist to have an explanation and to give an explanation?
Something that's very important in this space is to say a little bit about what explanation is.
So we often think of explanation as one of the most important things that scientists do.
It's a very difficult thing for them to do.
We think of explanations as giving deep understanding of the world.
So in this sense, explanation is different from other types of projects that scientists engage in that are very important projects, like giving descriptions of the world.
So I can describe the color of a leaf on a tree, but I haven't explained why it has that color.
So that's a description.
Scientists engage in classification.
They sort things into helpful categories.
That's also not an explanation of something in the world.
And in other cases they give predictions.
And giving a prediction is of course very useful, but it's not yet giving an explanation.
We think of explanations as answering why questions.
So why is it the case that that leaf is green?
Why does this person have a disease as opposed to not?
Why does this plant have a certain height as opposed to another height?
And so a first thing to point out is that explanations offer deep understanding of the world.
We want to know what criteria they need to meet so that we know we have good ones, right?
How do you know when you have a good or appropriate explanation of any kind of phenomenon of interest, right? A disease outcome, social inequalities, right?
This doesn't just extend to neuroscience.
This is all scientific domains.
So the two parts that you see here for an explanation, the two parts that show up, are: you first need to ask an explanatory why question, or you can couch your explanatory target in terms of a why question, right?
"What explains consciousness?" is going to be a sort of start, or you can put in any kind of target of interest.
So you ask a why question.
The explanation is the answer.
So why is it the case that this patient has measles as opposed to not?
Well, part of the explanation is there's some virus that they encountered and then there's a bunch of other interactions in the immune system that explain why they have that disease outcome.
So, two parts of an explanation: the explanatory why question, and then your answer to that question.
So in order to give an explanation for something, you need to say what you want to explain.
And that's where that why question shows up.
And there's actually a lot of features involved in providing a well defined explanatory target.
And so right now in consciousness research, there's debate and investigation and discussion about what's the explanatory target and then what's the answer?
What's the stuff that explains that target?
And as Megan mentioned, there are many different explanatory targets that are showing up in consciousness research.
And part of the challenge is being very clear about which one a scientist is interested in.
So saying you know what explains consciousness, that's not a well defined scientific question yet.
It's not yet a well defined explanatory why question for two reasons.
First, you need to define consciousness and we don't have a consensus definition.
So then you need to be precise about which one you have in mind.
And then the second is you need a contrast.
You always have to say as opposed to what?
So if I'm interested in explaining why someone has a loss of sensation in their hand, I can't just say what explains why they have a loss of sensation in their hand.
I have to say as opposed to what?
As opposed to full sensation in their hand, or as opposed to a loss of sensation in their leg, right.
If I don't specify the contrast, you don't know what answer to give me.
And so part of what philosophers of science do here is look at what needs to be met to have a well defined explanatory target.
And you see them in other scientific fields.
So we're looking at cases where we have scientists who've successfully given explanations and we're looking at the criteria.
And then we're looking at consciousness research and these other spaces where you have scientists working on answering really difficult questions that we don't yet have answers to.
But you first have to ask the right kind of question before you can get an answer.
And so there's two main challenges.
What's the right question?
And then in terms of what's the right answer, here's where you start to see what do you need to give an answer.
Do you want causal information?
Do you want a causal explanation?
Do you want a functional explanation?
We sometimes think that computational explanations are what we need, that there's something there that's going to help answer that question.
Mechanism, of course, shows up in philosophy of science.
We have different categories of explanations.
Causal is a main one.
I would put mechanistic explanation under that; it's just a causal explanation.
In most cases a mechanism is a causal explanation.
In other cases there might be a non causal mathematical explanation.
So I guess three categories I would pin down are: causal explanation; non-causal, mathematical explanation, where there's a lot of debate about what those look like; and functional explanations, where you could think of something like evolutionary explanation, and that's not quite what we're interested in here.
And for computational explanations, there's a question of, you know, are they causal? Are they a subcategory of causal explanation?
Do they, are they a subcategory of causal?
But for the most part, we're often interested in causal explanations.
So you're looking for the main factors that cause that target of interest.
And there's also debate here, in consciousness research, about whether you have the right factors there.
Do you have the right factors there?
If you're interested in correlates, neural correlates, there's often a bit of slippage in how that's used.
But if something is a mere correlation with your target, then you don't yet have causality.
So this is where a philosopher works with scientists to help determine what your different explanatory targets are, because that's going to help you get the right answer to that question.
And what I would say is there isn't one question here.
There almost never is for complex systems.
There isn't one complete, full, theory-of-everything explanation.
It's piecemeal.
And so you're asking different why questions about a complex system.
And that's the sort of trick that scientists have to manage this complexity.
But part of what that shows are those two pieces of an explanation.
Your explanatory why question, the fancy word here is the explanandum, is what you want to explain.
And then the explanans is what answers that question, what gives you the explanation.
Usually it's some kind of causal information: causes explain their effects.
And so there's a whole challenge of once you have a well defined explanatory target going out in the world and identifying the main causes that are relevant to that target.
I think let's try and bridge these two together.
So Megan, taking all of that into account, these levels of explanation, your work in metacognition, your research in neuroimaging, modeling, etcetera: how would you then address what Lauren's talking about, using your work as a guide for us?
Yeah, great.
Great question.
That's kind of the whole enterprise, right?
I think there's a couple things that Lauren said that really resonate with me.
And this is the nature of being very clear about the questions that you're asking.
So actually, this is what we try to instill in our students at Neuromatch: asking and answering the right kind of question is the primary thing that you should be looking at.
The technique can come afterwards.
You have to pick the technique later in order to answer the question, but you got to get the question right first.
And I just had a piece come out recently about how to come up with good scientific questions and what that really looks like.
And there's been a lot of work in computational neuroscience and cognitive neuroscience in how to think about the interaction between the questions that you're asking and the goals that you have as a modeler or as a scientist in general.
And the plurality that Lauren noted is absolutely right: depending on who you ask, there are what, how, and why questions.
That's classic Dayan and Abbott, 2005.
There are, you know, Marr's levels of analysis, which are computational, algorithmic, and implementational.
You can ask questions at each of those levels of inquiry, and you can have questions that target different levels of granularity.
So you have micro versus macro versus organismal versus like societal.
And so this plurality of questions and plurality of approaches I think is really critical because as Lauren said, there is no one question to rule them all.
There is no one answer or one explanation to rule them all.
There's no one ring to rule them all.
It's just not going to happen.
So I think that from our perspective, this is actually something that I try to instill not just in my, you know, doctoral trainees, but in the undergraduates that I teach and the folks that we reach out to at Neuromatch as well: the recognition that the heterogeneity is a feature, not a bug, which I think is really, really critical.
There was something else that Lauren said earlier about this, though, which is that in coming up with your type of question, you have to have a little bit of an understanding of the way in which you might go about building that explanans, that explanation of the target, and the level of visibility that you might have into the system, the level of access that you might have into the system.
Because you can come up with this amazing question that is actually unanswerable with the tools that we have available to us.
And you can also come up with a question that might be answerable in principle, but not in practice.
That's one kind.
But then there are others that might not be answerable in principle, at least not yet.
Because it's not that we don't have the right tool, the right neuroimaging technique or the right model or something; it's that we don't know how to ask that question in the right way yet.
And you said something, Lauren, that really struck me, this kind of limited-visibility-into-the-world idea: that we always have these barriers that shape the types of explanations we seek, the types of questions that we can shape, the types of answers that we can go out and look for, and the kinds of data that we can acquire.
But I do think that there are other kinds of limitations that are not these kind of practical ones, like parts of the world being unobservable.
I think that there are other limitations that we should acknowledge in building these questions as well.
So, you know, imagine a case where I've built some sort of magical machine in the future, some magical brain imaging device that has perfect visibility into everything that every neuron is doing at every synapse.
I have the morphology, the shape of every neuron.
I have the structure of the dendritic tree.
I have all the chemical interactions.
I have literally everything about the brain.
I still could shape all sorts of different kinds of questions.
I can't just take that model of the brain and shove it into some artificial intelligence and be like, poof, great.
I understand.
I have an explanation.
It's still like even if we had perfect visibility, the questions are still going to be the primary driver and the lack of visibility into certain kinds of systems is still going to be the limitation.
And that lack of visibility is now not coming from the tools that we have available.
It's the lack of conceptual clarity, the lack of being very precise about the target of explanation.
So yeah, I think it all has to come down to the questions that you ask, the shape of those questions, and how they interact with the goals that you have as a scientist.
So do you want to build an explanation that has clinical impact?
Do you want to build an explanation that is beautiful and intuitive and simple and like easy to explain to others?
So what is the kind of explanation that you want to build, too, not just the kind of question that you want to ask?
Lauren, anything you want to add to that?
Absolutely.
We sometimes discuss this in philosophy in terms of having a God's eye view of the world or the Laplacian demon sort of knowledge about all of the stuff that's out there.
And it can be very tempting as a philosopher, sometimes a scientist too, to think there's all of this stuff out there.
If I just knew more about all of the stuff, I would get the perfect complete explanation.
And the challenge for that picture is that we currently don't have that information, and yet we're successful at navigating the world.
So part of what we're looking at here as philosophers is how scientists reason and how they're successful.
But also in everyday life, we give explanations, we engage in causal reasoning, and we do that pretty well.
Are we perfect?
No, but we do it pretty well.
And we just don't have that kind of full, complete information about the world.
So the question is, how do we do that?
It looks like we don't need that sort of information.
And if you want to provide an account of how a human or a scientist actually studies the world, you can never rely on that kind of picture, because that's just a fantasy story, right?
All scientists are humans, and they're engaged with the world.
And if you want to talk about having all those details, you're talking about a future science that doesn't exist.
And I'm not sure a future science is going to match up to that.
So what we want to talk about is current and past science, and what has worked.
And so one of the fun parts of doing this kind of work for me, and I think Megan has this too, is that you're looking at what has worked in these different scientific contexts, and you have a sort of domain-general view of real scientific practice and how scientists manage those limitations to get information about the world.
And so, yeah, it can be very tempting.
There are interesting temptations and interesting pictures we have in everyday life, as philosophers and scientists; getting full detail is very attractive to us.
Also reduction, which I think will come up: if we could just get more information about stuff at lower levels, we could get better explanations, or the view that that's where we should look to get the right kind of explanatory account.
So yes, this is very much compatible with a realistic picture of scientific practice and scientific work, as opposed to an idealized view where we'd have access to all of the details.
Let me just follow up on that for two seconds.
There's a favorite paper that I like to send students in my neuroanalytics class to kind of highlight this idea that if only we had perfect access to everything, then we would definitely understand.
And it's this paper that Konrad Kording and some colleagues wrote, I don't know, ten or so years ago.
It's called "Could a neuroscientist understand a microprocessor?"
And they have this toy example where they say, OK, I've got this microprocessor and it runs like Donkey Kong and Sonic the Hedgehog and Mario or something like that.
And they go about dissecting this microprocessor using all of the fancy available tools, all of the models that we would use in the neurosciences.
So they measure all the resistors, and they measure all the synapses between all the, you know, nodes in the microprocessor.
And it's a simulated microprocessor, so they literally have perfect access, right?
There's like no noise in the system.
And they do, like, inactivation experiments, and they measure the network connectivity and the state transitions, and they do all the tricks.
And they still don't end up with an explanation for why poking the thing in this way makes it unable to run Mario or why poking the thing in this way versus that way has no effect on whether Mario can hop over the thing or not.
So it's really a kind of cheeky demonstration that it really matters what you think you're measuring.
Like do you have perfect access to the system?
Do you have perfect access to all of the things that are actually the parts of the system that you need to have access to?
And there they have access to all the physical system, but they're not like reading the software.
And they'd have to come up with, you know, the software in order to kind of build more of an explanation for how the software and the hardware interact.
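For a flavor of the lesion-style logic being described, here is a schematic sketch: disable components one at a time, check whether the behavior survives, and notice that the resulting list of "necessary" parts is not, by itself, an explanation of the software. The toy "circuit" below is a stand-in invented for illustration; it is not the paper's actual simulated microprocessor.

```python
# Schematic lesion-study sketch (an illustrative stand-in, not the paper's setup).
# The "system" is a tiny function built from named components; the "behavior"
# is whether it still returns its normal output when one component is disabled.

def run_system(disabled=frozenset()):
    """A toy 'circuit' that doubles its input unless key components are disabled."""
    def amplifier(x):
        return x * 2 if "amplifier" not in disabled else x
    def gate(x):
        return x if "gate" not in disabled else 0
    def cosmetic(x):
        return x  # present in the parts list, but irrelevant to this behavior
    return cosmetic(gate(amplifier(3)))

baseline = run_system()
for part in ["amplifier", "gate", "cosmetic"]:
    lesioned_output = run_system(disabled=frozenset({part}))
    necessary = lesioned_output != baseline
    print(f"lesion {part:>9}: output {lesioned_output} (baseline {baseline}) "
          f"-> {'necessary' if necessary else 'no effect'}")

# You end up knowing which parts the behavior depends on, but nothing here
# tells you what the program is: that is the paper's cheeky point.
```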
So I don't know, it's, it's a fun one.
If you haven't seen that paper, those of you who are out there listening, I suggest you go and have a look, because it's fun and it's cheeky and it's also quite profound.
I love that concept. Megan, if you've got the link, please share it with me so I can put it in the...
Yeah, I will.
In your work, from all the work you've done, what are some of the groundbreaking things you've figured out during this time that allow us to ask deeper philosophical questions?
I think some of the things that I am interested in here are, as we said, the nature of metacognition and subjective experience and how those two interact, with metacognition being the process and subjective experience potentially being, like, the output of that process.
I like this approach because it combines neuroscience and behavior and psychophysics and psychometrics and also computational models in a way that tries to build this piecemeal, small, tiny explanation of why it is that if I change this particular aspect of the world, it changes your subjective experience in this particular way, and it changes your subjective experience in a way that's different from changing your ability to just interact with the world in a meaningful, goal-directed, kind of evolutionarily optimized way.
And So what I mean by that is that for us, when we process the world, you can think of a lot of what's happening in that processing is going on under the hood, so to speak.
There's a lot of heavy lifting that the brain does that is not available to us consciously, subjectively, anything like that.
And I'm not even just talking reflexes, I'm talking all the processing that gives rise to the fact that you see the world in 3D.
Can you, like, kind of consciously intervene on that and say, no, I know that it's actually a 2D image on my retina?
No, like you just see the world in 3D.
It just happens magically somehow.
And so there's a lot of this complex processing that goes on under the hood.
I'm a vision scientist, so vision science is my typical workhorse here.
But you can play this game for a lot of other things too.
For any way that you interact with the world: you see a complex, noisy, stochastic, dynamic environment, and you're standing on a sidewalk, and you're deciding whether to cross the road or not.
And you hear things and you see things and you have to decide is it safe?
And that decision is going to impact your ability to survive, right?
If you get it wrong, you get hit by a car.
But so much of that could be said to be done potentially without conscious awareness.
And so a lot of the work that we're doing is taking all of these tools in our toolkit and pointing them at trying to dissociate the conscious experience or subjective experience part from all the other stuff that, like, a Tesla could do, or that a Roomba could maybe do, a very smart Roomba.
And there's nothing that it's like to be a Roomba, presumably.
So I think the favorite thing that I'm doing right now that might hopefully have some impact is the conceptual and methodological distinction between understanding the behaviors, computations and neural correlates that give rise to adaptive goal directed interaction with the environment.
Not dying, not stepping in front of the car and separating that from the computations and neural circuits and neural representations that are uniquely driving or uniquely correlated with the subjective experience part.
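One way to picture that dissociation is a minimal sketch with two toy observers whose task performance is identical by construction but whose confidence tracks correctness to different degrees, a crude stand-in for separating the performance side from the metacognitive or subjective-report side; the readout rules and noise levels below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000
signal = rng.choice([-1.0, 1.0], size=n)           # which stimulus was shown
evidence = signal + rng.standard_normal(n)         # shared internal evidence
choice_correct = ((evidence > 0) == (signal > 0)).astype(float)  # identical type-1 performance

# Observer A reads confidence from the same evidence (tightly coupled to accuracy).
conf_a = np.abs(evidence)
# Observer B reads confidence from a noise-corrupted copy (loosely coupled).
conf_b = np.abs(evidence + 2.0 * rng.standard_normal(n))

print(f"task accuracy (both observers): {choice_correct.mean():.3f}")
print(f"confidence-accuracy coupling, observer A: {np.corrcoef(conf_a, choice_correct)[0, 1]:.3f}")
print(f"confidence-accuracy coupling, observer B: {np.corrcoef(conf_b, choice_correct)[0, 1]:.3f}")
```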
So I'm not the only one who's working on this, by the way; there are quite a lot of us who see that distinction as being really important, but there are also quite a lot of people who don't.
And I would say that there are some folks in the artificial intelligence space, for example, who conflate intelligent behavior with the probability of someone being in there, of being conscious, or, even worse, conflate intelligent-looking behavior with, you know, subjective experience, or consciousness with threat.
You know, we say, oh, no, Skynet's going to wake up and it's going to kill all of us.
And I think that some of the work that we're doing, admittedly with very simple stimuli and very simple models, will help drive at that really important distinction: that just because you've got a system that seems intelligent, that seems like it can solve problems, that doesn't mean that anyone's in there, that there's anything that it's like to be that system.
It also doesn't mean that it's a threat.
Things can be threatening without having subjective experience and without being intelligent.
And so all of those things are independent.
So I think that was maybe the thing that I would say is potentially the impact of the work that we're doing.
Lauren, if you had to envision a philosophically informed neuroscience infused with your work regarding mechanism, explanation, etcetera, what would that look like in practice?
In experiment design, theory formation, or even peer review?
Part of what it would look like is a kind of neuroscience that we partly already see.
But part of what it would highlight is clarity about the types of causes and causal systems that neuroscientists study and that researchers in this space are studying.
One way to see this is we often find the term mechanism being used to refer loosely to any kind of causal system.
Part of what my work has done is specified that there are different types of causes out in the world that scientists study.
They have very different types of features, and those matter for how we study the systems.
They matter for the behaviors they produce.
And you start to see some of these distinctions show up already when scientists talk about causes that are probabilistic versus deterministic, causes that are more or less strong, more or less stable, or when scientists use terms like referring to a causal system as a pathway or a circuit or a cascade.
There's a reference here and then analogy to different types of causal systems.
So part of what my work is compatible with and can encourage is far more clarity about the types of causes that are out there that we study.
We partly need words to refer to the different types and the different features that they have, and this is partly going to inform the standards that we have, basically being clear about the standards for the kind of causal information that neuroscientists need to provide.
Right now, the standard is usually phrased as a mechanism: a scientist needs to provide mechanistic information about a system.
You see this in grant calls, you see it in journal publication guidelines.
In order to get funded, in order to get published, a researcher needs to identify a mechanism or provide mechanistic insights.
But then the editors very quickly follow up by saying that they can't tell you what a mechanism is.
And then it's often the case that two or more researchers reviewing the same paper completely disagree about whether the same paper provides mechanistic insights or not.
So right now you have a causal standard for the field that is this word mechanism, and we have different people defining that term in different ways, and there's no consensus on what exactly it refers to.
Is it lower level cellular details?
Is it higher level network information?
You have researchers pointing to both as real mechanisms.
Partly we need to put mechanism aside, and when we're interested in causal explanation, we just need to talk about these as causal systems, causal relationships, the way that causes are organized and arranged.
How do you know you have the right kind of causal information that's explanatory, relevant to your target?
So part of it is clarity on the standards for the field and getting more clarity on what we mean by mechanism if that's the current specification of the field standard.
Yeah, I'm, I'm looking forward to having both of you separately on the channel so we can explore specific aspects of both your work.
But I think at this point, to get back to the main topic here, if you both had to look at science and philosophy moving into the future, bringing them both together, what new picture of mind would emerge from this?
Would it be something different? Do you think it would change anything?
And what advantage does this have for new fields specifically that will arise?
I'll take that.
I love that you said advantages for new fields, because I think that one of the challenges that we have in neuroscience, again, I'm a neuroscientist, so that's where I'm coming from, is that this is still a new field.
It's really young, especially like the neuroscience of, you know, consciousness or something like that.
Like psychology, yeah.
It's been around for, you know, 150 years in its present state as, you know, quantitative, computational psychology.
Yeah.
But like, that's not very long.
That's really not very long.
Modern science is not very old in general, but when it comes to modern science, philosophy or psychology and neuroscience specifically, like our first neural signatures are only about 100 and some odd years old, you know, when EEG was first invented.
And so this is a really young field.
And so I think that new fields and emerging fields like this is where the value is.
This is where we need help getting conceptual clarity, because in a lot of cases for new and emerging fields, the major tool that we have to say, well, where do we even begin, is something like intuition.
I came up with an idea and like, let's just run with it and see what happens.
And as we, I think, have all probably discovered at one point or another in our lives, what you think is happening and what is actually happening rarely match; your first guess is almost never the right one.
And so recognizing the value of philosophy of science in young and emerging fields and fields that have yet to emerge, I think is really powerful.
And as Lauren said, especially at the beginning of a young science, there's value in seeing the commonalities in structure between this new, emerging field and a more established discipline that has kind of already figured out some stuff.
So we've got a lot of really precise terminology around how we understand mechanism, whatever that prestige word is, by the way; like, yeah, we all know that we want to go for a mechanistic or causal explanation, and yet what even is that?
But even among the young modern science fields, there are some that are very, very young, you know, they're children.
And then there are some that are a little bit more middle-aged.
And so on the surface they're all going to have these extremely different features, these extremely different kinds of surface-level properties or observables.
But causal and mechanistic explanations are a unifying principle.
And so recognizing that the shape of the problems that we're trying to solve might actually be quite similar in this new and emerging field to a more established field.
But when you're a scientist and you are reading the science journals and you're kind of in your little box, you don't have time to pop your head out and go read some astrophysics journal.
It's just not going to happen.
Or some materials science journal or something.
And so having this target, though, of building explanatory models, of getting conceptual clarity, of understanding the types of causal and mechanistic explanations that we can go for, that can provide a bridge.
And you say, OK, well, we're talking about completely different systems, completely different targets of explanation, but the kinds of explanations that we're trying to build might actually be quite similar.
And I have experience with this: I wrote this paper with one of my graduate students and another professor and his graduate student.
And he's a microbiologist.
He studies the microbiome of pregnant women and how the microbiome of pregnant women impacts birth outcomes and maternal outcomes.
I don't do that at all.
I have no idea even what half of the vocabulary is that he says.
And yet through talking with him, we discovered that the shape of the problem that we are trying to solve was actually very similar.
And so we wrote a paper about that and how like these kind of modern, you know, machine learning tools might be able to help us with that.
And that's what we need in these young emerging fields is to see, well, someone else solved a problem that had a similar shape.
And, and if we can get that right, it will propel the new fields that we have right now forward and emerging fields that come in the future.
I think that that will be a major step forward in building better science, building more coherent science that is self-perpetuating.
100% agree with Megan here.
And we partly already see nice features and aspects of current work in neuroscience that show this interdisciplinary aspect.
We've got lots of neuroscientist philosophers who are aware of both fields.
You see this in Megan's work.
You see it in the work of Anil Seth, Danny Bassett, other cognitive scientists like Nadia Cherniak, Karen Walker.
There really are lots of scientists and academics who are engaged in this interdisciplinary approach.
We 100% need it too.
For many of these challenging questions, we have an all hands on deck type situation.
We need many different people from many different perspectives to help out with these questions.
The challenges are it can be pretty uncomfortable to do this kind of work because you're never the main expert.
When I'm talking to scientists, I mean, they are always so much more of a deep expert in their area of work than I could ever be.
And that's partly the way it has to be.
I'm talking to social scientists, I'm talking to cognitive scientists, I'm talking to neuroscientists.
It's a bit of a stretch sometimes, but for me, it's very important to put myself in their perspective.
What are they interested in?
Can I bring philosophy of science that's useful and that's helpful?
Because to be a person on the team, or a hand on your deck, you do need to be useful and you do need to be helpful.
And it's not easy for philosophers to fill those shoes sometimes with respect to scientific work because it can be uncomfortable.
You have to learn a lot of science, and you're still never going to know it all; you're never going to have the same kind of picture.
But these discussions can show you types of philosophy that will be really helpful for scientists to have.
We also don't want to reinvent the wheel.
And we've seen this in cases where you have researchers that aren't interacting with each other, right?
Someone spends a lot of their career developing an approach that someone built basically 3 decades earlier.
So you don't want to reinvent the wheel.
You do want some pushback.
I need it from Megan.
I try to give it to her too.
I think the standard thing you'll probably hear scientists say about philosophers is that they're sort of the ones asking that question of, well, what do you mean by mechanism?
And then you give an answer, and then we think of counterexamples, and it's like, well, if that's what you mean, then there's a problem that shows up.
Or if that's what you mean by explanation, you're including all these cases that you don't want to include.
So we are trained to think abstractly and we are trained to kind of want that precision.
And so that is something that we can contribute.
And there are scientists who lean into this interdisciplinary approach by bringing philosophers on board, but of course scientists from all sorts of other domains.
There's interesting examples where philosophy can suggest ideas that scientists go on to study in their empirical work that they might not have thought of originally or as quickly, because it's a little easier to see them in some frameworks.
We see this in cognitive science with studies of different types of causal relationships, things like stability and strength, for example.
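To make the strength-versus-stability contrast concrete, here is a minimal toy sketch; the data-generating function and the numbers in it are invented for illustration and are not taken from either guest's research.

```python
# A minimal sketch (not from the conversation) of how "strength" and "stability"
# of a causal relationship can be operationalized in a toy simulation.
# The data-generating process and all numbers here are hypothetical.
import random

def outcome(cause, background):
    """Toy system: the cause raises the outcome a lot, but only in background context 0."""
    effect = 2.0 if background == 0 else 0.2   # effect size depends on context
    return effect * cause + random.gauss(0, 0.1)

def average_effect(background, n=10_000):
    """Strength: average change in outcome when we intervene on the cause (0 -> 1)."""
    diffs = [outcome(1, background) - outcome(0, background) for _ in range(n)]
    return sum(diffs) / n

effects = {b: average_effect(b) for b in (0, 1)}
print("Effect of the cause in each background context:", effects)
# Strength: large in context 0 (around 2.0). Stability: poor, because the effect
# nearly vanishes in context 1 (around 0.2); a stable cause would act similarly in both.
```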
But yeah, part of what I envision for a kind of future here, and the advantages, come from this interdisciplinary work.
I also think part of what would be helpful is for scientists to have a little bit more time and space to do theorizing, right?
You really start to appreciate the challenges of the scientific work when you look at the sense in which they're trying to tackle new problem spaces.
You know, often the funding incentives are for the tried-and-true method.
And if you kind of already know it works, you can do a lot of that.
If you're expected to publish a lot, that doesn't always incentivize taking the time to think about all these different routes you could take and to discuss which ones you should follow.
So I think it would help to have a little bit more space for scientists to have the time; like Megan mentioned, in philosophy we have a little less of the pressure that they have.
But part of it is having the kind of time and incentives to take advantage of interdisciplinary connections in work.
And that's not always easy for scientists to do, given the constraints that they have.
You know, this question of time and, you know, the publish-or-perish mentality: there are a lot of people who are probably out there listening right now who say, like, why do we even care that you're publishing papers?
Who reads those papers?
And to a certain extent, you're absolutely right.
Like we, you know, the metric of our success and the thing that allows us as academics to proceed through the ranks and get promoted and, you know, do our jobs well and so on is to get grants and to publish papers.
And it feels very insular.
It feels very much like you're kind of in a little echo chamber.
And I think that that's a correct way of looking at this, that this is an old-school way of thinking about how we should go about the enterprise of doing science.
And it shouldn't be set against the way that industry professionals are doing science, which is to produce products and to engage in activities that have the potential for clinical or societal benefit; basic science and foundational science have to be there in order for those kind of more applied approaches to have legs, to have a foundation to stand on.
But I do think that the model of do a thing, then write a paper, then get a grant to continue doing the thing, and then write another paper is ultimately doomed, to put it bluntly.
It is striking to me that in 2025 we are still doing science the way we did in the 1800s: we've got scientists who are doing science and then writing a little paper that other scientists will read, and that maybe makes a big splash and has some sort of impact on some, you know, applied science later.
The basic science has to be done right.
The reason that we have technologies like GPS, for example, is because someone at some point was like, huh, I wonder if we can do that.
And so they figured out how to do the technological basis, the foundation that became GPS.
And it wasn't because they went about trying to invent GPS from the beginning as an applied technology.
It's because they did the basic science work first.
But this practice of just writing a little paper and then, like, you know, packaging it and tying it up with a nice little bow and sending it to a journal and paying thousands of dollars to publish it and then having it be locked behind a paywall...
This is a rant, but it's also, I think, a recognition that in order to realize the future that Lauren and I have been really talking about, we need to change this.
Because it's not just a time constraint, it's a societal and like expectation constraint on the way that we as basic scientists and academics are engaging this enterprise.
It's hamstringing us.
It's preventing us from engaging in this future that Lauren and I, I think have laid out and that we're both very excited about and I think that others are excited about too.
That we need to find a way to be more interconnected, to capitalize on the fact that we do have a global scientific community that doesn't need to wait for a paper to get published in order to learn about some new scientific finding.
There's got to be a better way.
And it isn't social media.
We need something in between.
And it's not, you know, conferences only, either.
I think that there's got to be a better way to do this.
And I don't know exactly what it looks like, but there's a call to action for the folks listening in here that if you think that this future sounds cool and exciting and powerful, think about how to make it a reality.
And this is something that I think about a lot, and some of the activities I'm engaged in are trying to do it, but I think we need more people.
So there, that's my plea to get involved in making this future a reality.
Let's get back to this idea of consciousness, computation, and causation.
Megan, you've described the brain as a probabilistic machine navigating uncertainty.
Would you describe consciousness as a byproduct of computation or an adaptive feature of it?
Yeah, I, I don't know if I want to weigh in on that and, and pick a hill to die on.
This is a big question.
Is consciousness an epiphenomenon, is it just kind of there as a byproduct, or does it serve some kind of meaningful function in our ability, from an evolutionary perspective, to stay alive, engage, procreate, that kind of thing?
So I think an important component of this question is to differentiate among a potential function of consciousness versus a potential function for consciousness versus functions associated with consciousness.
So you're asking, is consciousness an epiphenomenon?
That would mean there is no function at all; it just kind of happens because, you know, that's just the way the universe is set up.
I personally think it's probably the case that it's not totally an epiphenomenon, that it emerges as a component in a giant functional system that was probably evolutionarily optimized in some way.
So I think that there is a function of consciousness: it has a purpose, there is something that it does that is adaptive and facilitatory for the organism that possesses it.
It allows you to bring information into a global workspace so that you can manipulate it in a kind of domain-general way, or it allows you to differentiate between something that is real out there in the world and something that you just kind of hallucinated or made up in your head, or that is just noise.
So this is sometimes called reality monitoring.
And so the, the presence of phenomenal experience is the result of some reality monitoring tagging system that says these are the components of the world that are probably real.
And these are the components of your internal representation that are probably just noise or you just made it up.
And then there's a function for consciousness, which is the internal machinations that gave rise to the conscious experience; that's very different from the reason that we have it.
And then there would be all the other things that go along with consciousness in us anyway, like language and executive functioning and reasoning and problem solving and, you know, stuff like that, that seem to be present when you are conscious and seem to be absent when you're not, or seem to be present when you are conscious of a particular piece of information and absent when you're not.
So there was a big debate for a while about can you do math unconsciously?
Can you do arithmetic or addition unconsciously, that kind of thing.
So the truth is we don't know if consciousness has a function.
I think that something like the ability to decide when to update your model that you've built of the world based on new incoming information, that seems like a useful thing for a reality monitoring or similar mechanism to do.
I don't know that phenomenal experience per se is the component that is the functional or, like, causally efficacious knob in the system.
But all indications, to my mind, seem to point to the conclusion that without phenomenal consciousness you cannot do some things, that it does have some sort of facilitatory function for us.
So I think that there is a function.
It probably has to do with learning adaptive behavior, updating of world models.
Pretty hand-wavy answer, but I don't think it's an epiphenomenon.
I think that there's probably a reason that it's there.
Lauren, when it comes to the philosophy side of this and the question of what is consciousness, are we even asking the right question?
I think that there are many questions that are being asked right now in this space.
It's, it's a mistake, I would say, to think that there is one question.
And it's helpful to consider that, even if there are many questions we're asking, any given question involves a lot of boxes that need to be checked to make sure that it's well defined.
And so, as Megan has suggested and as we see from a cursory understanding of research in this space, there are really different types of topics of interest that consciousness researchers are focused on.
One helpful thing we can do is to separate out those questions.
It would be unhelpful to think that there's one.
I'm also skeptical about the need for some unifying theory that they all need to strictly fall under, although that might require a longer set of discussions.
I think there is some kind of unification that's helpful, but it's somewhat loose.
What we do want are very principled, clear questions.
And so we don't have this anything goes, you know, ask whatever question you want.
There's all these different facets.
No, the questions that we ask in this space need to be so precise that one of the main challenges is asking the right question, right?
That's something that's been showing up repeatedly in this discussion.
It reminds me of, there's this great quote, I think it's from the band U2, which is we thought we knew the answers, it was the questions we had wrong.
And so a big challenge in scientific space is asking the right questions.
And we often think of that as the starting point for giving an explanation.
I can't give you an explanation for something until you first tell me exactly what it is you want explained.
And we sometimes start on that path and we get stuck at that first step, specifying the target.
And that's where a lot of discussion is in this space.
It would be silly to think you could give the explanation if the target isn't sufficiently precise yet.
There are different targets of interest, that's just fine.
I can't think of many scientific spaces where that's not the standard for any kind of system.
There's so many different questions you could ask.
There are some that we might want to put outside the space of interest of a consciousness researcher.
So that's up for debate too.
What are the bounds on the space of explanatory why-questions here for consciousness research?
We're interested in consciousness.
What are we interested in explaining?
So I think it's helpful to think that an important part of scientific work is asking the right questions.
And I don't think that in this space there's a lot of fixed consensus on exactly what those are.
But that's the way science works.
And it's helpful to think that that's the first step that you need to accomplish before you can get the proper answer.
So if you want to skip that step and start looking for the answer, you're going to be wading through a mess of stuff and you just won't have the right guidelines because you don't yet know what you're looking for.
And sometimes in science, we start with a rough question and we go and we look for the causes and based on what we find, we go back and we refine the question.
You see this happen in medicine, psychiatric medicine, right?
We start with the disease category.
We think we've got the right one, and then we go and look for what the causes are.
We might re describe the target on the basis of what we find.
That's a kind of trick; it's a very smart strategy that scientists use to get order in the world.
So I don't think we're there yet, and I don't think there's one question in that space, but a lot of the research is focused there, as I think it should be.
Megan, when it comes to consciousness, it's almost impossible to nowadays have a conversation about it without mentioning AI.
So I feel like we have to touch on this.
So can AI systems or large language models ever genuinely experience uncertainty, or will there always be simulations without subjectivity?
You really want a definitive answer to this, don't you?
So there are two big camps in the consciousness science field about this, and you've articulated them very nicely.
One is that artificial systems have the potential.
I think most people would agree that they don't now have some sort of consciousness, but that in the future they have the potential to manifest subjective experience or phenomenal consciousness or whatever terminology you want to use for someone being in there, the lights being on, etcetera.
And then there's the other camp, which is kind of the more biological naturalism camp, which says, like, no, there's really something very special about biology, and silicon-based systems, or anything that is not biological, are never going to be able to instantiate this type of thing.
And you have really smart people on both sides arguing both camps.
So, you know, Anil Seth has just written a piece in Behavioral and Brain Sciences, one of those kind of target articles, and then there's a bunch of commentaries that come out associated with it.
Anil's piece argues for the point that, you know, there is something special about, as he puts it, being a beast machine, that biology does have components that allow it to maybe manifest the types of computations that are necessary in order to instantiate consciousness.
But he actually eschews the idea of computational functionalism in general and says it's not just the function, that there really is something special about, you know, synapses and biology and the squishy piece of wetware.
And the philosopher Ned Block has, you know, written a commentary that kind of agrees with him, that says there might be something to that, although I don't want to mischaracterize Ned.
But then there's other philosophers who and scientists who have argued against this.
And I tend to be more in the functionalist camp.
So Matthias Michel also argues that, yeah, we can say that there is something special about biology, but the special thing about biology might be that it's the only kind of substrate that can instantiate that function.
But the function is the key component.
The function or the computation is the key component that gives rise to consciousness.
And so in the future, it is possible that maybe we figure out what it is that might have been special about biology and we actually build an artificial system that has all those special components and now it can instantiate consciousness as well.
So that's a very tight view.
There's also a more general view that says, oh, well, maybe neuromorphic systems might be able to instantiate consciousness.
Neuromorphic really is just a fancy word for brain-inspired, and it can mean either that the system instantiates the algorithms that we are discovering in the brain or, more likely, neuromorphic refers to something hardware-based: there's a particular kind of spiking neural network that is manifested or instantiated on a particular kind of hardware, where we did some materials science to come up with the resistors and stuff that would actually look a little bit more like a brain.
That's as opposed to traditional, you know, vacuum-tube-style 1960s computers, where memory is over here and computation is over here, and you move information between memory and computation and then put it back in memory.
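For concreteness, here is a minimal sketch of the kind of spiking unit that neuromorphic hardware is typically built to emulate; the leaky integrate-and-fire model and all of its constants are illustrative assumptions, not anything specified in this conversation.

```python
# A minimal sketch of a leaky integrate-and-fire (LIF) neuron, the kind of unit
# that neuromorphic hardware is often designed to emulate. All constants here are
# illustrative, not tied to any particular chip or biological measurement.
def simulate_lif(input_current=1.5, steps=200, dt=1.0,
                 tau=20.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Integrate input current into a membrane potential; emit a spike when
    the potential crosses threshold, then reset."""
    v = v_rest
    spike_times = []
    for t in range(steps):
        # Leaky integration: decay toward rest, driven by the input current.
        v += dt / tau * (-(v - v_rest) + input_current)
        if v >= v_threshold:
            spike_times.append(t * dt)
            v = v_reset
    return spike_times

print("Spike times (ms):", simulate_lif())
```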
And so when we talk about artificial intelligence, anything that is not biology is in this big pile, but we have to think about differentiating it a little bit more.
And then the very abstract version of this is it doesn't matter what the substrate is.
It could be a traditional, it could be a neuromorphic system.
It could be a von Neumann architecture, which is like this.
You know, memory is over here and compute is over here.
It could be your laptop.
It could be some technology that we haven't come up with yet, all of those.
It could be a large language model that runs on a server farm.
It could be kind of anything.
And it's the computations that matter.
It it doesn't matter what the hardware is at all.
It's just the computations and the type of like representations that the system can have.
And so from that perspective, maybe large language models are like this close to waking up.
I tend to be more on the computational functionalist side.
So that was a long winded way of saying I think it's the computations that matter.
I don't think that there is anything particularly magical or special about biology, except perhaps that it can instantiate certain kinds of computations that we don't yet know how to do or that might end up being impossible to do in certain kinds of non biological systems.
So from that perspective, I would say yeah, probably in the future artificial systems could wake up.
Is it around the corner?
Probably not.
I don't think that GPT-5 is on the cusp of having subjective experiences, and maybe this is not the place to go into this necessarily.
But let's say that you disagree with me and you say no, it does.
How would you test for that?
How would you know?
That's a whole other conversation that maybe we can get into at another time.
But this idea that we have ways of evaluating whether someone is in there or not works for us, for, you know, a neurotypical awake behaving human versus a neurotypical asleep human who is not behaving, or a human who is in a coma, that kind of thing.
Maybe those tests work pretty well in a clinical, bedside setting.
But as soon as you get outside of the population on which they've been validated, like what do you do?
You can't apply them to the artificial systems.
You fall back on tests of intelligence, which is, as we've discussed, not the same thing.
So I think it's very possible that artificial systems will be able to have subjective experiences in the future.
It is not a hill that I'm going to die on.
And the way of answering whether that has happened yet, or, at some point in the future, when it happens, is really, really hard.
It's really hard to answer that.
Lauren, do you do you have anything to add to that?
I do. I think part of what can be helpful, in terms of looking at progress, explanations, and work in this space, is that this is an explanatory target that is so much different from many others that we're interested in in science.
And that's part of the challenge.
And that can partly explain why we don't yet have an answer, but also why specifying the standards is difficult.
So this is a type of thing we want to explain that is different from other types of things we want to explain in science, and that we have explained.
And so we partly need to figure out what those differences are.
And then the second is this interesting feature: for the types of targets that we do see across different life sciences, targets that we are interested in explaining and do give explanations for, there is often a set of challenges that shows up with respect to how much detail you need to cite to give an explanation.
And one thing I find is that there's sometimes a kind of confusion between stuff in the system or stuff that's necessary and stuff that's explanatory.
And this partly relates to reduction, and to just figuring out what details a scientist needs to cite and should cite in their explanations.
And this is where we find various interesting, confusing things that show up when we're interested in giving explanation.
So how low do we need to go in giving an explanation?
And how far back in the causal history of something do we need to go?
That's another question that shows up.
Explanations are selective, they are choosy and they pick some of those details, not all of them.
One confusion that can show up: you can admit a kind of physicalist position for a biological system, a neural system, and agree that there's physical stuff at lower scales, but that doesn't mean it's explanatory.
And when someone refers to factors at a higher scale as explanatory, they're not denying that physicalist picture.
And sometimes those get confused.
And so we need to separate explanatory relevance from physicalism because they're very different.
I mean, if we needed to cite all of that physical stuff, we would almost never be able to give an explanation, but we also don't need to.
And so the way I think about many of our causal explanations here is that a causal explanation isn't a game of how low can you go, but a game of what gives you control.
And depending on your explanatory target of interest, the factors that give you control might be at a higher scale.
And so this is partly where we need to kind of make these helpful distinctions to solve these kinds of things that can get tricky, where a scientist might think that if you include more and more lower level detail, you're always giving a better explanation.
Or that network neuroscientists deny physicalism, when that's not what they're doing if they're making an explanatory claim.
Or there's a puzzle that philosophers sometimes run into, where they think the Big Bang, since it's in the causal history of everything, is something you should cite in your explanation.
So do you need to cite the Big Bang in explaining why we're all here today or why a patient has a disease?
That sounds so silly to us.
A philosopher's job is partly to say why that's silly and why that's not explanatory.
But we get stuck on those cases.
So we get stuck on reductionism, and we get stuck on the entire causal history, and sometimes on distinctions like physicalism versus explanatory relevance, and necessity too.
That something is necessary for an outcome doesn't mean it explains it, right?
The Big Bang is necessary for my having asthma, but it doesn't explain why I have it.
If I went in to the physician and asked, you know, why do I have asthma, and the Big Bang is the answer, that doesn't sound right.
So part of what we see in these spaces are really important questions about how scientists are making progress, the types of explanatory targets they have, and important distinctions we need to make to get over these puzzles that show up, that can kind of lead us astray, and that don't capture the rationale that does underlie our explanations.
I mean, you partly see it with control, right?
The Big Bang isn't something that, if you were to hypothetically manipulate it, controls whether a patient has measles or not.
So it doesn't explain that outcome.
So, so scientists.
Sorry, sure.
Are you sure?
It doesn't explain why they have measles?
I'm just being cheeky.
I mean, in terms of what's currently on offer...
Fair enough.
But part of what philosophy of science, when it's at its best, can help with is a bit of this science communication element: what is the justification for why physicians say that there's a virus that causes measles, and not fundamental physics or the Big Bang, or, yeah, why neuroscientists are working so hard to explain something like consciousness, and why this is actually more difficult than explaining just any kind of trait in biology.
So yeah, just a bit of a follow-up there in support of various things Megan said.
Megan, I asked Lauren what a philosophically informed neuroscience would look like, so I'm curious to know from your side, what would a neuroscientifically informed philosophy of mind look like in practice for you?
I think I will go back to something that Lauren said actually very much at the beginning, which is this recognition of the complexity of the system that we're trying to understand, that we're trying to explain: that in some cases toy examples and simplified models are really the only thing that we have available to us.
And they can be very powerful.
And sometimes a really highly oversimplified explanation or model or description of what's going on is surprisingly powerful.
It's really kind of remarkable how something as simple as, well, I'll use an example from my own field, signal detection theory, can actually do a remarkable job at explaining how, or at least describing, maybe I shouldn't use the word explanation, but describing how an observer like you or me is going to deal with noise in our environment or in our own minds.
And signal detection theory, it turns out, was not even developed for psychology.
It was developed to understand and characterize the noise in electrical circuits in like the 1950s.
Yeah.
How do you find the signal in the noise?
That's basically what it's trying to do: a really, almost stupidly simple explanation.
A stupidly simple system that does a pretty good job at targeting how and maybe why certain kinds of behaviors emerge in certain kinds of situations from, you know, a human or animal observer.
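For readers who want to see that simplicity, here is a minimal sketch of the standard equal-variance signal detection model; the hit and false-alarm rates below are made-up numbers for illustration, not data discussed here.

```python
# A minimal sketch of equal-variance Gaussian signal detection theory (SDT):
# an observer sees noisy evidence and says "signal" whenever it exceeds a criterion.
# The hit and false-alarm rates below are made-up numbers for illustration.
from statistics import NormalDist

def dprime_and_criterion(hit_rate, false_alarm_rate):
    """Sensitivity d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2
    return d_prime, criterion

d, c = dprime_and_criterion(hit_rate=0.85, false_alarm_rate=0.20)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
# A higher d' means the observer separates signal from noise better;
# the criterion captures their bias toward saying "signal" vs. "noise".
```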
And yet ultimately the thing that we are trying to capture and explain is one of the most complex things that we know exists, one of the most complex things on this planet.
Brains are really hard.
They're really highly non linear dynamical systems.
There's a lot of components that we have no visibility into.
There's a lot of stuff where we are still kind of floundering around in the dark, trying to build even just a just-so, post hoc story of why the system did what it did.
What are the kinds of informational structures that are present?
What even could the software look like?
Is it software? Are we, like, what are we even doing here, man?
And so the recognition of just the sheer mind boggling, unfathomable complexity of what it is that we're trying to reverse engineer.
I think that the gulf between that and billiard balls on a table, where a causal explanation of why this ball went into the pocket or didn't is straightforward, is something we would all do very well to recognize the size of, and to try to shrink a little bit.
So for young researchers who feel pressure to pick a side, scientist or philosopher, what would both of you tell them about integrating both parts meaningfully?
Anyone can start.
Don't pick a side.
Look at me and Lauren.
We didn't pick a side.
And maybe this discussion has also highlighted the extraordinary value of not picking a side, of not burying your head in the sand and just kind of doing the one thing.
And that, yeah, it's uncomfortable, as Lauren said, to maybe not always be the expert in the room.
I'm certainly not the expert in the room in a lot of ways; there are a lot of things where I want to have my fingers in a lot of pies.
I want to understand a little bit about a lot of things.
And I do have deep expertise in a couple of areas, but there are a lot of spaces that I have been in where the folks around me know way more about a particular topic than I do.
And that can be the norm, and that's OK.
And a lot of other people in the room might seem like they are topic matter experts in something, and they are, but you're also a topic matter expert in something that they're not, and you see things that they're not able to see.
One example of this from my own life is that, you know, I sometimes go to these conferences or workshops that are really focused on computational and theoretical neuroscience, and even neurotechnology.
I'm not a neurotechnologist.
I know things about that, but I definitely am not that person.
And there are things that I can bring to the table as someone who's a little bit more of a generalist.
I remember recently I actually brought in some of Lauren's work.
I said, what you're doing is trying to build an explanation of, you know, how the brain does something in order to drive like a neuroprosthetic, for example.
And you really don't want to just drive the neuroprosthetic, which we can already do using neural recordings.
In order to optimize that, it would be really great if you could understand why one kind of model is working better than another kind of model, or why one type of model is more or less susceptible to neural drift.
Like once you put the implant in and you train the model, you come back next week, it doesn't work anymore.
Why?
Why did that model fail and this other model might not fail?
Those kinds of explanations could be really useful from a practical perspective.
And a lot of the folks in neurotechnology do not think about explanation.
They don't; prediction, the ability to capture variance in a system, that is the target and that is the thing that matters to them.
And so differentiating between prediction and explanation, and differentiating between, you know, models and targets of different levels of complexity, is something that I can bring to the table.
And I can't help them optimize their neural implant, but they can inform me about what they're doing and I can inform them about what I'm doing.
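As a toy illustration of the neural-drift point above, here is a small synthetic example in which a linear decoder fit on one day's simulated recordings degrades once the tuning drifts; the model and numbers are assumptions for the sake of the example, not anyone's actual decoding pipeline.

```python
# Toy illustration of "neural drift": a linear decoder fit on day-1 synthetic
# recordings degrades when each neuron's tuning rotates by day 2. Everything
# here is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons = 1000, 30
target = rng.normal(size=(n_trials, 2))              # 2-D movement target to decode
tuning = rng.normal(size=(2, n_neurons))             # day-1 mapping: target -> firing rates

def record(mapping):
    """Simulate firing rates as a noisy linear function of the target."""
    return target @ mapping + rng.normal(scale=0.5, size=(n_trials, n_neurons))

X1 = record(tuning)
decoder, *_ = np.linalg.lstsq(X1, target, rcond=None)   # least-squares decoder, day 1

theta = np.deg2rad(60)                                   # day 2: tuning rotates in target space
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
X2 = record(rotation @ tuning)

for label, X in [("day 1", X1), ("day 2 (drifted)", X2)]:
    err = np.mean((X @ decoder - target) ** 2)
    print(f"{label}: mean squared decoding error = {err:.2f}")
# The same decoder that worked on day 1 now mis-reads the drifted population,
# which is the practical problem behind the "why did this model fail?" question.
```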
And so I guess it's about learning to build calluses and to tolerate that uncertainty and that discomfort of not being the expert in the room.
No one is the expert in everything though.
And so like to a certain extent, even if you build deep expertise in one area, you're going to have to navigate spaces where you're not the expert anyway.
So you might as well get used to it now.
Lauren, anything you want to add there?
Yeah, there's a few things I would add.
I think that finding work that you like, finding people and researchers who are doing work that you're interested in and who, as Megan says, may already be interested in both science and philosophy, is helpful.
These are academic fields.
I mean, academia is still pretty siloed, so I don't always get easy access to scientists, and I'm not always credited for working with them, for taking the time to talk to them, or even for writing publications that get published in scientific journals.
So there are interesting standards in my field, philosophy of science, that are quite different from various scientific fields, where, I mean, they also might not get credit for talking to a philosopher or writing with one.
And really different scientific fields value philosophy very differently.
One of the advantages of talking to neuroscientists is they already value philosophy a bit and they're already more aware of it than other scientific fields.
So when I talk to a biologist, I might have to do a little bit more legwork to tell them what I do and to persuade them that I'm someone useful to be in the room in the first place.
That's not the case with many neuroscientists, cognitive scientists, same kind of thing.
You know, cog sci is a field that views itself as interdisciplinary, and its areas include philosophy, computer science, psychology.
So it really does depend on the scientific field that you're interested in.
And it helps to talk to people who work in that space because they know a bit about the norms in the field and the expectations.
You know, Megan has different expectations on her than I do in the field of philosophy.
We're both probably doing more than the standard person in the sense that, you know, people aren't expecting me to get grants.
They're not expecting me to work with neuroscientists.
But I care about that work, and it's important.
One thing I sometimes say is that being a philosopher of science is a bit odd because you sometimes feel like you're telling scientists why philosophy matters and philosophers why science matters.
And so there's also philosophers who I'm talking to, and they do not think that the way to understand the fundamental causal structure of the world is to look at anything scientists are doing.
Why would you do that?
Why would you?
So not only do I, you know, not get credit for interdisciplinary work in that sense, but they don't see why they should care about science if they're interested in causation or explanation in some cases or understanding the world.
So in in all of our fields, there are different groups of people who are approaching problems in different ways.
It's helpful to find the work that speaks to you, the researchers who are doing things you're interested in.
Also, to look at it more pragmatically: I mean, it's one thing to study philosophy and to study neuroscience as more of a hobby, but in terms of going into it as a PhD student or a professor, you know, there are certain aspects of those cultures that it's helpful to learn a bit about.
And there's also differences, right, in terms of different types of people.
But it is fascinating the differences across fields.
But I have the advantage of, I mean, Megan started studying philosophy before I did; I took my first philosophy class basically at the end of undergrad, and then it showed up a lot later.
So I don't have to do as much legwork when I'm talking to Megan.
But when I am working with scientists, a main goal is to bring the philosophy that's useful for what they're interested in.
If they want to get pulled into some of the philosophical debates, we can do that too.
There's jargon that we're using that, yeah, you know, I don't want to burden people with, but part of these interdisciplinary connections is learning how to speak to people who use very different vocabularies.
When you have someone that knows a bit of the philosophy already, they already know the vocabulary.
And then, you know, I've trained in medicine, so I know a bit of theirs too.
But there is still this needing to be comfortable in an uncomfortable situation where you're not the main expert and you're leaning on other people and looking for their input too.
But once you start to see the value of that discomfort and that approach, and you're among academics who have the, you know, ideal disposition of being open to being wrong, to pursuing big ideas and taking risks, but also to reorienting, the sky's the limit.
And then you do have a kind of team, a kind of group, that can start to ask the right kinds of questions so that we can ultimately get helpful answers.
But you know, it's very interesting to think of the differences across fields.
And as a philosopher of science, it's nontrivial to convey to different types of scientists what it is that I do and how it might be useful.
And the same goes for the public or any kind of audience.
But part of what those interdisciplinary connections help you learn is doing just that, you know, speaking to different audiences and working to do that effectively, or well.
I think, you know, as you're talking, Lauren, I feel like something's really just crystallized in my mind: in this type of discussion, we really say, well, if you're a philosophically informed scientist or vice versa, you're the bridge between the, you know, potentially domain matter experts.
And so maybe you're not the expert in the room on whatever it is that's being spoken about, but you know what?
You are the expert in the room on making bridges, on finding those connections; that is your expertise.
You're not an expert in, you know, the measles or whatever, right? But you are an expert at finding the shape of the problem and building those bridges, and this science communication, this ability to translate between specialized vocabularies.
That itself is an expertise area and it's valuable not only in academic or scientific or even industry spaces, that kind of thing.
It's also valuable in, as you said, communication to a broader audience to translating to make sure that the people that you're listening to and the people that you're speaking to can actually understand each other.
You're a translator, you're a, a bridge between disciplines.
You bring a holistic trees-level, or sorry, forest-level, not trees-level, kind of perspective.
That is the domain expertise that someone who wants to occupy this space will bring.
And it brings with it the requirement of developing another skill too, which is not just talking but also listening.
And I think that, as domain matter experts, we tend to want to talk a lot.
We tend to want to, you know, come up with our own description or explanation for what's happening and push on that.
But it's harder to learn how to listen, especially when you don't really speak the language.
And so having essentially a translator in the room is such a valuable asset.
And being that expert can be the difference between a breakthrough and just kind of continuing on in parallel with our blinders on and reinventing the wheel.
Yeah, I think that's interesting.
Oh, sorry.
There's something you have said before to Megan about a researcher showing as opposed to just saying they're doing something.
What was that?
Do you remember that expression?
Yeah.
Like show don't tell, you know, like don't tell us that you found the explanation for something.
Show us what that explanation is and how you're writing about it and the story that you're telling and the narrative that you're constructing.
You know, you want to, you want to take the listener or the reader and guide them by the hand so that they have that aha moment along with you.
This is, you know, what you want to do in storytelling and narrative building; it's what you want to do in film and media, right?
You want to show the audience, don't tell them.
No one wants to read a story that's a list of accomplishments.
They want to take the journey with you.
So this is the same kind of thing.
Yeah.
So I wonder, in terms of talking about this interdisciplinary approach and perspective, and an academic who does this and does it well, picking up on what you said, whether one element of it is being a listener, in part.
And then another element of it, I would add, is this sort of openness.
I mean, when you think something's right, you really stand by it, but you're also open to being wrong.
And one of the challenges of some work in philosophy is that someone becomes known for a view and then they don't want to change it, because they're sort of known for it.
So they're not really open to modifying it or being wrong.
And some of the most impressive academics I know are truly open to that.
And it allows them to reach certain types of peaks that they wouldn't have had access to.
So listening, being open, being open to considering new ideas, maybe even being wrong.
But then also there's this interesting piece where you do have to pitch things and you do have to tell a story in getting a grant; I mean, Megan's more of the expert here for sure.
But also, I mean, when we write papers, you are pitching an idea.
When we're writing arguments, I'm trying to persuade someone, and it's similar with a grant or any kind of communication, science communication: there is a lot to the story that you tell, but the best academics can back it up.
And it isn't just a tell me, it's also a show me.
So they can do both.
And maybe they don't even put that story together until they know that they could show you.
And so sometimes you see, I mean, scientists are engaged in a social, you know, this is a social space.
If I tell you I've got a mechanism and I tell you I've got an explanation, and I'm coming from a fancy university and I've done a couple things, I can, you know, that might go a long way.
And, and we do need to be able to communicate well and some people can check that box.
But if you really want to do the best work, it's not just being a communicator, you've got to back it up.
And so then when someone asks you, what do you mean when you say you have an explanation, how is this explanatorily relevant?
What's your guiding principle?
You need to have an answer.
Or when they say, what do you mean by causation here?
How is this a cause?
What do you mean by mechanism?
Right?
They need to have an answer.
And so we have these buzzwords, they're status terms.
And part of playing the game well is knowing how to use words that that gain some traction.
But if you want to play the game the best, you just have to back that up.
And really science should be something that we can back up in that way.
So that's a tall order for a scientist or a researcher, but it shows you how they're willing to adapt and that they can really tell you the value of their work and the justification for it.
But you start to see the kind of theorizing that a philosopher might do and that scientists are doing with the scientific practice, and then this interesting aspect, which is their need to pitch this work right, to communicate it to other people in papers, grants and so on.
Yeah.
So if you want to do that communication well, that storytelling, well, what better way than to wear two hats?
Philosopher and scientist.
Yeah, I think you both are excellent in both fields, and that skill you were talking about, Megan, the one Lauren has, I think you both technically have.
You both are these translators in both fields, and I think you can see this becoming a thing where most up-and-coming scientists and researchers are trying to make sure that they understand both sides nowadays.
So when you do look at young researchers, they're ingrained in multidisciplinary fields like never before.
You'll see someone doing mathematics, AI, consciousness research all in one go.
And it's kind of surprising, but super exciting as well, because it means that the future is kind of bright in that regard.
What do you think that we should close off with?
Anything that you feel you haven't said?
Is there anything about this conversation, why science and philosophy need each other, that you feel you'd like to just hone in on and drive home before we close?
For me, I think we've covered quite a lot of ground here, but one theme that has maybe been a common thread throughout is the need to recognize that whatever you're doing, whether you're a scientist or a philosopher or some, you know, blend of both, you're not doing it in a vacuum.
There are all these other folks around you, and, you know, doing good science and good philosophy is a social and networked enterprise; no one researcher, no one expert, is an island.
And this isn't just you gotta read stuff.
Everybody knows you have to read the literature, and it's gobs and, you know, piles of literature all the time.
And if you're especially in, like, artificial intelligence or machine learning, good luck keeping up with arXiv, good luck.
But it's not just that.
It's not just reading and thinking and making connections yourself and working with your local research group and so on.
It's, I think really trying to get out and make your network as big and as interdisciplinary as possible.
You don't necessarily have to be the true bridge.
If that's not your bag, that's fine.
But recognizing the value of all these different kinds of approaches and ways of doing science as a community, rather than as a collection of individuals, that there's an emergent property that we should be going for here.
And the way to do that is to recognize value and really celebrate the different kinds of expertise that we can all bring to the table.
So the community aspect, I think, is something that's been a thread throughout all of this, and maybe I'll just bring it to the forefront at the end: you too, all of you listeners, you can also be part of this community.
And I'm sure that you already are.
Lauren, you?
Great.
Yeah, just building on that and adding to that.
So why do science and philosophy need each other?
Part of the answer is that the projects that are involved in both are intimately related.
Many scientists I know are doing theorizing and theoretical work that is similar to the kinds of philosophy of science that I'm engaged in and that other philosophers are engaged in.
So there's a sense in which it's hard to separate them if you're looking at scientific research and if you're looking at scientifically informed philosophy.
I think if we're looking at current research that scientists are doing, where they're interested in big questions, at the forefront, trying to uncover and understand new things that we don't yet understand, those open questions at the sort of cutting edge of science, or if we're looking at justifying scientific practice as it has taken place for decades and centuries, philosophy of science is very useful for both of those projects.
Philosophy of science here is a kind of work that's focused on the foundations of science, precision in the concepts and methods that scientists use, the principles that guide their research, and how it is that it works, the success that they get, how they reach the goals that they have.
And so this isn't something that a philosopher can do in a vacuum, right?
We're studying and hopefully working with scientists to get that precision, those principles, and those kinds of goals, to show, you know, how they actually work.
And that's a kind of philosophy that scientists do and that philosophers of science do.
And it's helpful both for being able to justify the scientific method, how science gives us our best understanding of the world, and also, when scientists are tackling these big questions, it helps to look with a clear lens at scientific practice across all of these domains and at the principles that you find across those contexts and domains.
So, yeah, very much fields that in some sense can be continuous.
And I think it's very much the case that you find many scientists engaged in theorizing that we can think of and in many cases should think of as philosophical.
But of course it's going to depend on what we mean by philosophy.
And that's maybe also one point of our discussion: philosophy, philosophical work, philosophical thinking can mean very different things in different contexts.
But here it's focused on critical thinking, argumentation, and in particular a kind of foundations of science, how scientific thinking, reasoning, and explanations work and how they're so successful.
Well, I just want to say thank you both for this wonderful conversation.
You both are definitely experts in the field.
I can't wait to dissect your work individually as well and showcase its highlights as much as possible.
It's a true privilege and honor to have you both.
And yeah, thank you so much.
This is a wonderful discussion and I really enjoyed it.
Thank you so much for having us.
This has been really, really fun and engaging.
Looking forward to the next time.
And it's always fun to hang out with Lauren and talk about science and philosophy.
It's one of my favorite things.
Oh yeah, always, always fun to talk more, learn more from Megan and Ted.
Yeah, thank you so much.
Great to be here and looking forward to more.