Navigated to From Clinician to Chief Health AI Officer: A Conversation with Dr. Karandeep Singh - Transcript

From Clinician to Chief Health AI Officer: A Conversation with Dr. Karandeep Singh

Episode Transcript

We have access to a couple different HIPAA-compliant, GPT-type tools.

Some of them are plugged into things like web search and some of the things aren't.

And so, you know, clinicians saying, well, I'm thinking of using this tool to estimate a patient's risk.

And I'm like, wait, is this plugged into web search?

No, it's just the language model.

And I'm like, I would not trust a language model 10 feet.

It's very attractive.

'cause you can put a patient's chart in there.

It is not a risk calculator.

If there is no risk calculator yet for this tool, LLM is not gonna get you where you want to be.

It will give you something, and I wouldn't rely on that thing at all.

Hi, and welcome to another episode of NEJM AI Grand Rounds.

I'm Raj Manrai and with my co-host Andy Beam, we're delighted to bring you our conversation with Professor Karandeep Singh.

Karandeep is the Jacob's Chancellor's Endowed Chair and Chief Health AI Officer at UC San Diego Health.

He took us through his pioneering academic research on evaluating AI models across health systems and how he approaches his job now as a Chief Health AI Officer.

This is a new and important role.

And we had a lot of questions about how a Chief Health AI Officer spends the day and how he thinks about which AI tools are safe for his health system to use.

And if you are lucky enough to know Karandeep already, you know that Andy and I, of course, had to ask him about his love of the Julia programming language as well.

The NEJM AI Grand Rounds Podcast is brought to you by Microsoft Viz.ai, Lyric, and Elevance Health.

We thank them for their support.

And with that, we bring you our conversation with Dr.

Karandeep Singh.

Karandeep, thanks for joining us on AI Grand Rounds today.

We're super excited to have you.

I'm super excited to be here.

Karandeep, it's great to have you on the podcast.

This is a question that we always get started with.

Could you tell us about the training procedure for your own neural network?

How did you get interested in artificial intelligence and what data and experiences led you to where you are today?

Thanks.

You know, I have been interested in computers and programming since I was a kid.

I think I wrote my first program when I was, like, probably eight years old.

I think that might've been either in Basic or Pascal.

And ever since then, I think my interest has been is in trying to solve kind of the problems I've been facing, or things that I wish I had with computer code.

And so, I think that early on a lot of that was being able to program games.

And I think as I got older, it was things like, writing software that could help sync my email up with my iPod so I can walk around and read emails on my iPod back before iPods were Internet enabled.

And I think that kind of just gradually kept growing and growing.

And I was never a formally trained computer scientist.

I think I took one computer science class kind of throughout my career.

But it was one of those things where, by the time I was learning my probably 12th or 13th programming language and using it to build different projects, it was something that had become, just part of the way that I approached problem solving was thinking about, can we solve this problem with code?

So, I think that somewhere along the way, I fell in love with medicine and wanted to become a doctor, went to medical school.

Even when I was applying to medical school, I was trying to convey to the interviewers, hey, I really wanna do computer science plus medicine.

And I think they looked at what I had done and the things I'd built.

And they, I think it was really hard at that time, without kind of proper mentorship, to really convey to people what's possible when you apply the problem solving that comes from computing, and I think now AI, into a setting that's deeply entrenched in the way that things are done like medicine.

So, I think somewhere along the way I was doing medicine and just doing stuff on the side.

I would build kind of, uh, when I was a Chief resident at UCLA, I built a mobile app for iOS and Android that could allow you to send pages from phone, which at that time wasn't allowed, or there wasn't like a mechanism to do it, digitized a lot of our paper-based educational materials that we had for residents, and I think it wasn't until I actually hit fellowship in nephrology where I encountered the field of biomedical informatics.

And it was really there that I would say my kind of career was launched in the way that I have now followed.

'Cause up until that point I really viewed computing and AI as things that you did on the side to improve your productivity, improve your learning, improve your kind of experience, and the things that you were trying to do to deliver care and to be a good physician.

And it wasn't until I did my training in informatics where I realized it's not only something that you can do on an individual scale, it's something that you can actually do at a much larger scale all the way up to the medical school scale or a health system scale, or even on the consumer scale well beyond.

And I think that really opened my eyes.

And so, I would say the way that my career got me to where I was today is I had this aha moment with talking to one of my mentors back in my informatics masters while I was still doing clinical training, where he said you can do your clinical specialty and be the kind of computing and AI expert in that clinical specialty.

Or you can really take a much broader view and try to be an AI expert who affects all of medicine.

And then also practice clinical medicine as a way to stay entrenched and learn about the system.

And that I think was the moment that really led me to say, I actually want to aim big.

I wanna make sure that the things that we're doing in AI actually improve patient care, actually make things better.

And the way I will do that is by also practicing medicine in parallel with that, to be able to really understand what is the patient experience, what is the clinician experience, so that we can do a better job.

And that led me to, I think, a faculty career in informatics, growing operations role in AI, and then led me to my role today where I serve as the Chief Health AI Officer at UC, San Diego Health.

Cool.

Thanks.

Maybe just to follow up on a couple things there.

So we've spoken many times before and I usually have some intro where, like, Karandeep's one of these people who like makes you feel self-conscious, whether you're a doctor or a programmer.

I'm a computer scientist by training, and I think you're humble in how you describe your programming ability.

We'll talk about this later, and all the deep technical contributions you've made in the Julia programming language.

And you just mentioned that you had learned like 11 or 12 different programming languages, and I guess I'm just curious, you clearly have, like, natural ability in that.

What kept you anchored in medicine versus being, I mean, I'm assuming we're approximately the same age, like, .com was taking off, Internet was taking off, personal computers were all happening.

I guess what kept you grounded in medicine as your essentially core identity where computing was this thing that you did on the side?

Yeah, so I think we had a lot of deep conversations in our family about this.

My dad was actually an auto engineer, and so, he was a trained hardware and software engineer.

And I think his experience was that it's not the programming that makes you who you are, it's the domain expertise that makes you who you are.

And that being able to bring programming to a domain really lets you take that domain way further than if you were kind of a programmer and really working with the domain expert without really understanding that domain.

I think when had talked about careers in computer science, careers in medicine, he really, very much strongly, and not just him, I think a lot of folks in my family were pushing.

No, you should really do medicine, and doing medicine does not mean not doing computer science.

And that was another thing where this is maybe stereotypical of Indian families, but it was you can do computer science, just do medicine first.

And that's like literally what I ended up doing was when I kind of fell into informatics, I said, oh, this is essentially computer science and medicine.

But I didn't have a name for this field when I was starting.

And had I really known the field existed, I might've gone straight into biomedical informatics as an area, but my kind of winding path led me to do medicine first, and I'm really glad that I did.

I think that a lot of the problems that I learn about a health system or the way things work are probably felt most viscerally, when I'm actually working clinically, and are not things you can read from a manual and learn from a manual.

Amazing.

I can relate to the parental pressure to just do medicine first.

I don't know if I've actually shared this on the podcast, but I think many folks know Zak Kohane was my PhD mentor, right.

And on the evening after my dissertation, like defense, my PhD defense, we're having dinner with my family and Zak.

And my mom, this is again, right after I finished my PhD, was saying at the time, oh, like now it's time for medical school.

And Zak was maybe the first person to fully convince her that there are other paths to impacting medicine without me actually going through medical school itself.

But one other thing that I've been really struck by what you said was, and I think this has been such a consistent theme on the podcast is deep domain expertise, right?

And so in biomedical informatics, for example, there are many ways to approach sort of medical AI problems, biomedical informatics problems.

But what we've been hearing about from all the folks who are really having a lot of impact, and that we've been lucky to interview on this podcast, is that they are really, really deep technically, but they really have this deep appreciation for biology or for medicine.

And so, think Ziad Obermeyer really articulated this quite well.

Like where does creativity really lie?

Where do really interesting problems come from?

And where do really interesting we think about the iconic papers in the last 10, 15 years?

It's because it's really hard to do this outside of one mind where the latency is just too low across two different people talking about a topic.

If there's not enough that's shared about knowledge about the domain as well as the sort of technical application that you're using.

And so, I think that really resonates.

It's just gonna, you know, it's been a real consistent theme.

Thanks for sort of sharing that.

I think we'll probably dig into it a little bit more in some of your papers as well.

Maybe with that, Andy, maybe it's a good time to sort of transition to, uh, Karandeep's academic work.

Yeah, I think that's a natural segue.

So, the first paper we wanted to talk about is a pretty well-known paper of yours.

I think it got a lot of attention when it was published.

It was published in JAMA Internal Medicine in 2021.

The title is "External Validity of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients." So, maybe could you set us up, what was the motivation for this paper?

What were the key findings?

And then I think I'd like to dig into some of the implications for AI more generally.

This paper came out because I was interested as I was growing my faculty career at University of Michigan in seeing how a health system turns on and monitors AI tools as we're using them.

One of the things I've always been interested in is the gap between what we say we should do and what we're actually doing on the ground.

And, also, the gap between how good things work on paper and how good things work in reality.

And I think those are both solvable things.

I don't think when you say something doesn't work, that's a permanent indictment on it, and say it doesn't work, so therefore it will never work.

But I think that it is helpful to understand, kind of, where things are so that when you do want to improve things, you can understand where do things actually need improving.

So, when I was at University of Michigan, I was in the Department of Learning Health Sciences, which was an academic department focused on understanding and building learning health systems.

At the same time, the health system was setting up AI governance, and I had the privilege to start on that AI governance as it was being created, and then ultimately in chairing our health AI governance at University of Michigan.

One of the things that really came up was, hey, there are all these tools that are now available within the electronic health record.

Which ones should we turn on, and which ones should we use, and how should we be using them?

And I think those are questions that on a really simplistic level, health systems have been thinking about when it comes to things like clinical decision support.

So, we've been thinking about, hey, should we turn this alert on?

And what's the implication of this alert in terms of the work that creates, in terms of the actual, you know, can we move the needle on better outcomes as a result of turning on this alert?

But here there was actually a lot more variables.

It was okay, so do we turn this on?

Who do we make it available to?

Is it interruptive?

And it's interruptive, not just in the sense that when you log into a patient's chart, it pops up.

But interruptive in the sense that it can actually stop what you're doing and sends you an alert and say, even if you're not in that patient's room, in that patient's chart, you can get kind of stopped to say, you need to go check on this patient.

That's a type of workflow most clinical decision support never did.

And so, that was kind of a big c change about how interruptive we want to be.

And you wanna make sure that if you're interrupting a clinician doing something else, that it's kind of really worth it.

It was also the first time we've been thinking about thresholds.

So, at what threshold would we alert someone?

And there was this really tension between, you wanna alert someone to tell them something they don't know.

You don't necessarily wanna alert someone to tell them something they already know.

And so, when I came into this role, I said, well, let's look at a number of these tools.

So, I think we didn't set out to look at just the sepsis model.

We also published on the deterioration model.

We also had a number of other things that we didn't publish but we looked at, because the primary intent here was not to do research.

The primary intent actually was to understand how a health system can make an informed decision about use of these tools.

And also, I think a growing recognition that even though the vendors give us great tools to try to understand how well these tools work in real life.

Sometimes the assumptions that the vendors make are very different than the assumptions our own clinical care teams make.

And so, this was one of these instances where we, I think, dove in to try to understand what happened and how well this tool was working.

And, you know, I can get more into, kind of, what do we actually do step-by-step to get to the point of actually even looking at this as a formal evaluation.

But then even after we turned it on, there was a lot of things that happened after the fact.

That helped us kind of then not just turn this on and evaluate it, but also then to monitor it even to the point of at some period of time, turning it off period of time because of COVID.

Got it.

And just to maybe be a little more concrete, this is an alert for patients who are at risk for sepsis?

Yep.

So, this is a model that the way it was implemented at our institution, it was running every 15 to 20 minutes on every single hospitalized patient and patients in the ER.

And it was trying to predict the probability that you would have sepsis in the next X number of hours.

And the idea here was, is that if you crossed a threshold of, I think, 6%, that at that stage we would say, okay, at this point it's now worth it to generate an alert.

And when we used some of the tools that were given to us by the vendor to look at how well, like, what would that mean in terms of things like positive predictive value in terms of catching a number of cases, sepsis, we felt like, wow, this actually looks really useful.

And so, this is something that based on the tools given to us, feels like we should definitely have on, because ultimately at the end of the day, we wanna make sure our patients who are developing sepsis get care as fast as possible, right?

They get started on antibiotics.

There's a whole sepsis bundle that gets put in place once you have sepsis.

It's not just starting antibiotics.

It's making sure you get the right amount of IV fluids given to you.

It's making sure that we look for other signs of end-organ damage by checking lactic acid and the blood.

So, there's a whole workflow that gets launched when you think someone may have sepsis.

And so, this tool, I think we were excited by.

And so, when we saw these results from the vendor, we said, let's go ahead based on this knowledge and turn it on in, kind of, one or two hospital units just to start to gain experience with how well this is working.

Mainly 'cause it was one of our earlier experiences, this plus deterioration models, of actually trying to institute these into a clinical workflow at our health system.

So, what did you find?

So, when we turned this on originally, we maintained an open line of communication with our sepsis committee.

So, we had them come to our clinical intelligence committee, which is the name for our health AI governance at Michigan, and talk about what was the kind of experience of using it.

And in the early, kind of, months we asked about, well, has this improved some of the metrics that you guys look at in terms of time to antibiotics or other timestamp based metrics that we internally track?

And the answer at that point was, not really, but that's not surprising.

It's our first experience turning something on.

And the other question that they actually brought to us is it seems like the tool's really good at identifying people after the fact that they have sepsis.

And I'll just talk about what I mean by after the fact.

I mean, in the few hours after they have sepsis.

Now, the first few hours after you have sepsis, knowing that you have sepsis is still very clinically useful because it's not that the goal of treatment of sepsis is to treat sepsis actually before we know someone has sepsis.

Like, that would be great if we could do that, but that's a really tall order.

And so, it's okay actually to be able to predict and act on sepsis after it happens.

It just raises questions of why does the tool recognize sepsis after it happens?

Is it recognizing the syndrome?

Is it recognizing the digital footprint of our treating sepsis?

And, like, it just opens up more questions around trying to unpack, is it telling people what they don't already know?

At least, kind of, in that stage.

But the interesting thing was is that the tool, at least in the original description of it was designed to predict sepsis before it happens.

They used the time of sepsis as time zero and said, we use the predictions leading up to that to try to predict the occurrence of sepsis.

So, that to me was actually a deviation from what the tool was, I thought, supposed to be doing.

So, we actually said, wait, do you guys have the timestamps for sepsis that you use operationally as a health system?

And I was still saying you, because at that stage I was just starting to transition into a more operations-type role.

I was still very much an outsider in how I viewed myself as a researcher, kind of given the privilege of participating in this operational activity.

And so, they said actually we do, we have a set of criteria we use and that's how we track our internally, you know, how good we're doing in caring for sepsis.

So, we said, well, if you can give us the actual timestamps you use to decide if someone has sepsis, we will do an evaluation against that.

And so, this was our kind of first step to say we're not going to rely simply on the vendor, which again, many times you have to, and many times there are good reasons to, and pragmatically it makes a lot of sense if you and the vendor have agreed on how to evaluate things.

But because this was one of our first experiences, this in deterioration models of looking at how we operationalize these tools in our health setting, this is a setting where we actually said, okay, we'll do this independently.

And when we did this independently, we found that the tool actually is, as they found, pretty darn good at predicting sepsis after sepsis happens, but not nearly as good as one would expect in predicting sepsis before it happens.

And then some follow-up work that we did in collaboration with Dr.

Jenna Wiens, we found that, in fact, if you go back to before the antibiotic is even prescribed in the first place, which is a moment where you could say someone has recognized that someone's at least potentially infected, which is a precursor to having sepsis.

The tool is basically no better than just flipping a coin.

This is the NEJM AI paper, right?

Yeah, this is the follow-up NEJM AI paper, and so this was us trying to unpack, well, if it's not obvious before the timestamp, then what's, we're trying to unpack this relationship, this causal relationship between a person having access to a model, using it to drive a workflow, and also the model simply also having access to what the person's doing to drive its prediction.

And so, you can get this kind of feedback loop that's been described by many other people, but that feed lap feedback loop, you know, as the name suggests, goes in both directions.

It can make the tool look better than it actually is, or in some cases worse than it actually is, depending on the actual relationship between the intervention and the outcome.

In this case, I think it was set up so that the tool was looking better than it actually is.

And at the same time, clinicians were reacting before the tool had identified that a patient had sepsis.

And so that's what a lot of our kind of follow-up work was: try to unpack.

It's almost like the prediction tool would be better for automatic billing of sepsis than it would be for proactively identifying who's at risk for sepsis.

Cause it's, yeah, it's measuring their health care's reaction to a patient versus giving you prospective information.

And actually a question came up, why do we need a sepsis model?

Why can't we just use the criteria to define sepsis, to identify patients in real time?

And I think the issue is, is that as most sepsis experts know, but other people may not be aware of the criteria we use to define sepsis are only appliable after the fact.

They rely on, for example, someone being treated with antibiotics for one day or two days.

So, it's not something you can actually apply in real time.

So, it's one of these things where the whole reason the industry of sepsis models exists is because the gold standard outcome is only knowable when someone goes home.

So, when someone goes home, yeah, we can just apply the gold standard outcome.

We don't need a model.

But there is this kind of intermediate period where someone has developed sepsis, we still have actions we can take to change their outcome meaningfully.

And the question is, can the model give us an answer in that period?

And can we try to unpack what is the added value of things that people did in response to the model versus things that they would've done anyway?

So, that's great.

Let me follow up on a couple things.

As an AI researcher, if you came to me and described the sepsis scenario, I would be, like, that's an awesome scenario where we could build a predictive model to inform patient care.

Like you said, there's this sepsis care bundle that gets rolled out, so we know what to do about it.

Often in many health care scenarios, the disease definition is ambiguous.

And even if you know it's gonna happen, you don't know exactly what to do.

So again, like, if you came to me and I had not read your paper, I'd be, like, this is awesome.

Obviously, there's a whole literature of sepsis prediction models that we're not gonna touch, but so, given the favorable qualities that it has, does this make you very bearish on AI generally for medicine?

I know that this is in the context of a specific predictive model, which is a subset of what we generally think of as AI, but has this informed your thinking about the potential for AI in health care in either direction?

So, let me first criticize our paper for a moment before I answer that question.

What did we not do in our paper?

We didn't look at, did actually implementing this tool actually improve patient outcomes?

We tried to infer based on information we had about the way the model was being used, the way we were using it to then drive actions to see whether we could draw clear pathway between the model performance and then better outcomes.

But we actually didn't do is measure it.

And the challenge was, is that there is a, there was a chicken and egg problem really early in this field of AI implementation, which is you don't wanna implement something at scale before you know that it works.

You can study all the outcomes you want in a smaller scale, but often not with the statistical power you need to decide if it actually works.

So, you almost have to implement at scale to see if things work.

And so, I think that was a lot of the early challenges.

I think that there have been folks who published either with this sepsis model or with other sepsis models that they were able to improve certain kinds of either process outcomes or clinical outcomes.

So, I think that I'm not bearish on the kind of field based on this paper.

What this paper I think just reminds us, and I think what it really cemented to me, is that you can't just turn things on.

And when it comes to decision support, we have ways to review it.

And I think the challenge was, is that the leap from non-AI based clinical decision support to clinical decision support that's driven by AI is that you now need to think about a class of problems and a class of issues that you just didn't have to think about before.

And that when you do this kind of an effort, you can see how just how easily an analysis like this could be gamed by making one assumption or the other assumption if you don't understand the kind of clinical reality of the way these things are implemented.

So, I think that to me it was, it was something about, you know, uh, I'm not bearish on it.

I actually think what this means is there's a new class of professionals who are needed who can help bridge AI with implementation science and help unpack whether these things work.

Yeah, I mean, your point there about the chicken and the egg, like you have to be well powered to find an effect.

And the effect has to be precisely defined.

Like, those are the standards that we apply to clinical trials for pharmaceutical interventions, for medical devices.

And I've always been curious that there's this subsection of research primarily done by clinicians called quality improvement or QI projects, which has always felt like, not quite I'm gonna design a full trial of intervention, but also slightly more rigorous than just mess around and find out.

So, like, do you think that we need to elevate the status of decision support and quality improvements to be a sister discipline to these other categories of research?

Yeah, so, I think there's a long history in this country of why we have things that are in scope of the IRB and things that are outta scope of the IRB.

And I would say a lot of what is defined as quality improvement is driven by intent.

Is the intent to make care better here or is the intent to generate generalizable knowledge?

And one of the challenges is, and we talked about this a lot in my, kind of, previous role at University of Michigan, is that when you are trying to operate within a learning health system, you're actually trying to do both.

You wanna make sure that you generate generalizable knowledge.

And so, the work has to be done in a high quality enough way where it actually.

You can stand behind it and say, this is real, and this is something that other similar systems may also find.

But at the same time, you don't wanna publish and then find that the next day, the care that you deliver at that system is the same as the care you delivered yesterday.

Because even though you found this thing, there was just no kind of feedback loop closed between you finding something and that thing actually improving the way that you deliver care at your system.

So, I think that, um, in my mind, really good quality improvement should be the same as really good clinical research because the quality of the work is actually not what should be driving the label of quality improvement versus research.

So, what you'll see is that even some of my peers here at UC, San Diego, like Dr.

Ming Tai-seale, have published randomized waiting studies of things like our patient message replies drafted by AI to try to understand, are there time savings?

So there, you know, applying more rigor, a randomized design, to try to study it in a quality improvement setting and I think she's not the only one.

You know, Dr.

Leora Horwitz at NYU has published on rapid cycle learning health systems.

So, I think that quality improvement really shouldn't be viewed from a lens of, oh, we're now gonna use some kinda sub-like an inferior design to study the thing that we're studying.

It really should be based, in my mind, in intent.

If the intent is to drive better outcomes here, and we think that there is, uh, real equipoise in those outcomes, and the intent is fair, and it's things that would be in the natural variation of the way that care gets delivered anyway, then I think that's fair game for quality improvement.

And at UC, San Diego, we actually have a registry of all the quality improvement projects that are happening, maintained very similarly to the way that an IRB would of active QI.

And that actually holds us accountable to make sure that the QI that we're doing is not after-the-fact QI, where we did something and then we say, oh, yeah, yeah, we did that before, but we're actually pre-registering it in a similar way that we would for research.

Awesome.

Thanks Karandeep.

So, uh, Karandeep, I think that's a great transition to this sort of next topic that we wanna talk about.

Just to make sure we have the chronology right.

So, you finished your nephrology training.

You did a degree in informatics, biomedical informatics at Harvard Medical School, started your lab at Michigan, and then most recently you've moved as of maybe a year or two ago at this point.

Mm-hmm.

To UCSD and at UCSD you are the inaugural Chief Health AI Officer.

Did I get all of that correct?

Yeah, you did.

Amazing.

So, I guess my first question is, you're one of, I think maybe a few now, Chief Health AI Officers across the country.

I'm sure you've spoken with some of the other ones, and you know each other and you chat, or you intersect at meetings and things like that.

But maybe you can just start us off with telling us what a Chief Health AI Officer does, how you see your role within the system, what your job really is.

Yeah, so I think that the, in my mind, what a Chief Health AI Officer does is it's someone who's accountable for the way that a health system uses AI.

If you go to most health systems and you say, do you use AI for something, some aspect of your operations or some aspect of your clinical care, the answer is often gonna be yes.

And it's increasingly yes, because it's not that they themselves maybe came up with AI, but if you look across the various productivity products they have their electronic health record other vendors they work with, almost every tool nowadays has some level of AI baked into it.

And then you have to ask who ultimately is responsible for making sure that the AI is working the way it's supposed to, that someone's looking at it, and that the ways and that we're using AI are ethical and really respect the patients that we serve and respect the communities that we serve.

And in my mind, that's the responsibility, at least in part of a Chief Health AI Officer.

So it's my job to make sure I'm not an evangelist for AI within the health system.

My job is to make sure that we're using AI in a smart way, and that as we think about our health system strategy and the way that we're gonna improve access to care, the ways that which we know we're gonna improve, the quality of the care that we deliver, that we think about what aspects of that strategy could be supported by AI rather than using AI to actually develop like a separate AI strategy.

I think you do need some kind of an AI strategy that helps you figure out how do we decide things, like, build versus buy?

How do we decide, like, how are we gonna upskill our workforce to use AI productivity tools that the health system has approved as being HIPAA compliant and secure?

But beyond those things, I think the real job, I think is being plugged into the rest of the health system, understanding the pain points, understanding what the health system does well, what the health system doesn't do well, and then really thinking from a problem-oriented standpoint, where are there opportunities to really use AI?

And where should we not use AI because we feel like it's not the right tool for this job.

Or what would we need to do to get to a point, like, I think a thing that everyone's struggling with right now is there are all these patient facing generative AI tools and chat bots that are coming about that—.

That are being used, that are being used all the time by patients, right?

Well, patients are using them, sure.

But now there are health system products that are patient-facing AI, right?

AI call centers, AI things where a patient interacts with a health system and they're actually interfacing with an AI and maybe not with a human in the loop always.

And so, I think that is this thing where you really think like, okay.

How are we as a health system gonna make a determination that this is something we're okay with and what kind of testing do we need to do?

Is it enough to just rely on vendor testing?

Do we need to do some vibe testing and, and actually play with these tools and say are they okay?

Some of the voice AI tools might talk too fast, and you won't know that until you just play with it and you say, there's no way some of our patients calling in would be able to even follow what this is saying and get it redirected appropriately to a human.

So, I think it's everything about build versus buy.

It's about how do we upskill our workforce.

And then it's also about how do we plug AI into the relevant parts of health care operations and clinical care at a systematic way that lets us scale the use of AI.

And then I think it's also enabling.

So, how do we enable our faculty as an academic medical center and our staff as an academic medical center to actually use these tools to implement things at scale that otherwise they might have thought really small about, because there wasn't a shared thinking and a shared infrastructure.

What's been most surprising about the job?

I would say what's been most surprising early on is when you meet people, them kind of assuming you are there to evangelize AI.

Um, so I would say that having AI in your title, oftentimes people will approach me and say, um, you know, oh, like I'm using AI for this.

And I'm like, I don't know that I would use that tool for that.

And yeah, you know, so an example would be we have access to a couple different HIPAA-compliant GPT-type tools.

Some of them are plugged into things like web search and some of the things aren't.

And so, you know, clinicians saying, well, I'm thinking of using this tool to estimate a patient's risk.

And I'm like, wait, is this plugged into web search?

No, it's just the language model and I'm like, I would not trust a language model 10 feet with a question of—.

Like for a, like for like ASCVD risk score or eGFR or something like that.

Yeah, no, it's very attractive.

'Cause you can put a, you can put a patient's chart in there or you know, parts of a patient's chart in there.

Yeah.

And, but understanding that, like, the LLM is not a risk, it's not a risk calculator that is a different, it's a different thing and—.

What you might, reframing it as if there is no risk calculator yet for this tool, LLM is not gonna get you where you want to be.

It will give you something.

And I wouldn't rely on that thing at all.

Some of the language models are good about saying I can't do that, but this is where I think it's really important for models that say I can't do that, but also for our folks to understand what you can and can't do.

So, you can use it to look things up where you've missed something.

I've seen rare diseases on the inpatient side where I'm like, I'm gonna sit down and read about this, but in the two minutes I have before I walk into that patient's room, let me quickly use an LLM-driven search that pulls up relevant papers and things that I need to know as a nephrologist before I walk into that room.

It's not the replacement for what I'm gonna do afterwards, but it will give me context that I could never get in two minutes walking into a patient's room with a rare mix of conditions or kind of a unique situation where I'm trying to figure out how am I thinking through the questions I'm even gonna ask them to get to the right place where I can make a good solid recommendation.

So, I think that's one thing.

The other one I think is just educating people about the fact that just because these tools can train on your data doesn't mean that they have to.

And a lot of the enterprise agreements that we have actually greatly limit what the vendors can do with data that we enter.

And I think that the kind of prevailing understanding is, is that in order for you to use these tools, they have to constantly train on the data that you give it.

And so, a lot internally is what we're doing is trying to make AI boring.

Right?

Make it something that is not interesting, that is just the way that we do things.

Because boring feels, 'cause boring, feels safer.

Well, when you convert it into something that is an analog, analogous thing that's not AI.

People get it.

And it somehow, when you say it's AI, people it is just computer code, and when you call it AI, people will now make magical assumptions that are both overestimating what it can do and also severely underestimating what it can do.

So, I think a lot of it's just saying, okay, this question that you've been thinking a lot about, imagine it was just this, would you view it differently?

And they say, oh, when I use my email, that's my enterprise email.

I don't make any assumptions that like anything's being trained on that.

And so, you say, well actually that's the same agreements we have in place with our vendors.

They can't train in our data.

So, that's why, just because we're using AI does not mean we are giving away our patients data to a whole bunch of vendors.

We are pretty strict in terms of as a health system, as a university, and as a university located in California where there are very strong consumer productions in terms of what we are able to do in partnership with vendors and what we explicitly prevent them from doing with our patient's data.

Awesome.

I don't think that this is cutting off any of the questions that we have in the next section, but what are areas, given your perch where you're seeing the most rapid change in adoption from AI, and like the flip side of that, which ones seem to be calcified and resistant to AI-related change?

Yeah, so I think if you asked any Chief AI Officer this question, they probably say AI scribes is where we're seeing the most rapid adoption.

And I think that this is an area that has existed for a while.

The companies that are around today, some of them are really well entrenched companies that have been doing dictation and kind of other related tasks and even the intersection between dictation and then a little bit of note preparation.

I think the thing that is really interesting to see is that every tool that's starting off its life as an AI scribing tool is also morphing into a clinical decision support tool, morphing into a revenue cycle billing tool.

And so, I think that the categories that you look at and you say, oh yeah, these were clinical decision support tools.

These were the billing kind of vendors that we had, these were the scribing vendors.

And what we're finding is that the generation of the documentation is a bridge into the rest of health care operations.

And so, you're starting to see this real convergence between what started off in one domain area and now is suddenly a platform that is cross health system and touching many different areas independently.

I wanna ask one question that's a little bit less serious, and then one that's a little bit more.

So, the less serious one and this is at the, a very deliberate risk of potentially setting off Andy here.

You're the Chief Health AI Officer, so presumably you have to define what AI is and what within the hospital counts as AI and what doesn't.

And so, Andy and I have had many, many, many discussions about like where, what family of statistical models is AI versus is not.

And so, the classic one is logistic regression being rebranded, some risk score being rebranded as AI in today's time.

Um, you must have to actually.

This is the sort of non-serious question or less serious question, but you must have to actually deal with this, right?

Like, oh, Karandeep, we have a new MD calc risk score that we wanna put into the HR as like a dot phrase.

Is this safe for us to use?

So, like it's just some, it's a logistic regression risk score.

Is that something that you would be sometimes or routinely asked to sort of opine on as part of the Chief Health AI Officer job?

Yeah, so I would say what I will tell people is I am a person who does AI.

I'm not the only person who does AI in our health system, right?

And so, there's a lot of people who are thinking about AI and particularly AI in clinical specialties.

There's a lot of situations where the AI is actually directly coming from a clinical guideline, and then there's times where it's something that someone's built that's gonna be used to drive clinical care decisions, but there's no kind of official clinical guideline kind of guiding it.

On one hand, we have a fairly broad definition of AI.

It's, I can pull it up exactly.

I don't have it exactly off the top of my head, but I think it's AI is a tool that makes information available to support decision making, and that's driven by data.

I think that predictive AI kind of very cleanly falls into that.

And I think we also might expand it to say and makes recommendations 'cause generative AI can often be used well beyond just getting information and actually generating a bunch of information that you're gonna directly share with someone.

So we have some definitions laid out of what is AI in general, what is predictive AI?

What's generative AI?

There's a lot of convergence happening where people are using generative AI models to do prediction, et cetera.

But just to say, I think it, it does help to have some kind of framework in mind.

One of the things I think that's been, that I get wrapped up in is making sure what things actually require a full committee review from our health AI governance.

And what we've actually decided is a lot of things don't actually require a full committee review.

It requires eyes on it, but it lets us be nimble and move fast without trying to bring every logistic regression model to full committee.

And so, some of the things we look at to decide, does it really require all hands on deck to look at it.

Is, is it high impact or is it high risk?

High impact are things that affect multiple different service lines or multiple different roles.

We have tools that affect potentially physicians, nursing, physical therapists, like a sepsis model, right?

Might touch on a bunch of different type of roles within a health system, or we have things that are in the area of revenue cycle that are clinician facing that might affect a lot of different clinicians across surgical, medical specialties, et cetera.

So, those things I think are high impact.

Get a full committee review.

The high-risk things are things that we use without a human in the loop, things where we're using it to actually drive clinical care decisions.

And there are things that affect the way that our work is organized.

So, think about things that actually change people's roles in a system.

And therefore, it could have implications on jobs, job descriptions, and things like that.

So, anything that falls into those categories, we take those extremely seriously and we wanna make sure that we are very thoughtful about those and we get a lot of input from a lot of different people.

Because our AI governance committee actually consists largely of our health care leaders.

It's actually got a handful of folks with expertise in AI methods.

But it's largely leaders now.

Leaders includes experts in health equity, experts in ethics.

So, it's not just leaders from a business operation standpoint, but it's leaders with different areas of domain expertise who understand the gravity of, like, we're gonna use AI to do X, Y, Z.

So, I would say, to answer your question, most of those things would be single service line, single clinician.

So, they may come to us and if they came to us, they'll reach a pretty quick resolution, usually.

But I think that, you know, we, we do look at those.

It's just that, you know, we, yeah, if you look at all the things that are happening.

The fires are not generally in those situations.

The fires are in situations where we really need to spend a lot more of our time and, yeah, so we spend a lot time on things where there's no human in the loop, where we're like, okay, we wanna make sure that is as safe as possible and as restricted to use cases where we think it's appropriate as possible.

So, I'm gonna quote you on logistic regression.

Is AI officially in 2025?

No, that was great.

And that was, uh, that was also a very helpful way to think about how you partition effort, time, and precious resources to sort of address different AI applications.

One last question then I think we're gonna jump to the lightning round.

So, you have to decide when an AI tool is ready for frontline use and then presumably also monitor.

And I think this has been a big part of your both academic research career and then also I think your job now monitor the tools that are being used, that they're being used safely and they're still functioning as intended in your health system.

And so, my first question is, do you have sort of a go, uh, and this is kind of a, maybe a, give us a quick answer for this one.

Do you have a go/no-go checklist for thinking about whether an AI tool is ready for frontline use or something similar?

I would say no, we don't have a go/no-go checklist.

We do what I would say is a holistic review.

Okay.

And that is to measure it with what the implication is on a patient's safety, like on the health system.

And I would say there are things where we really do need to go deep and do a lot of double, triple, quadruple checking.

And then there are things that really, it's not gonna take us there.

I would love for us to have a standardized set of tools that we could use to evaluate all of the AI models that we use.

I think that would be amazing.

The reality is, is that we have models that sit in our electronic health record.

We have models that sit in imaging vendor data.

We have models that sit completely outside in various clouds that are securely connected into our data sources, but are running completely elsewhere.

There's really not an easy way in 2025 to actually have a standardized evaluation toolkit that you can use across models that are filing their scores and relying on totally disparate types of data across the range of the sort of vendors that we work with and the range of the in-house, uh, kind of efforts to, to build things.

So, I think that I've shifted my thinking a little bit on this where I would've said, you know, maybe a year ago, two years ago, we should be evaluating every single AI that we use.

What I would say in monitoring every single AI that we use, I would say in 2025, my pragmatic thinking is we are not resourced to do that.

So, we need to make sure that our contracts with our vendors really hold them accountable to helping surface issues to us that may not lead to them to make different decisions, but may lead us to make more informed decisions about whether a tool's working.

There's also a whole open science in model monitoring that I think, um, maybe, you know, wasn't appreciated a year or two ago.

People had kind of talked obliquely about feedback loops and things like, oh, this is kinda some hypothetical thing.

But it's very true.

The moment you start using an AI model to change an outcome that that model was designed to predict, it becomes extremely difficult to actually say that that model is now not working well.

Because if your model was predicting an outcome that you are trying to avoid and you have a really effective intervention.

If your model's working, you will have less of that outcome and your model's AUC will get worse.

Your performance measures will look worse.

And in fact you might say, I even need to like my, it's even mis-calibrated, I need to recalibrate it.

If you recalibrate it, you will cause more people to experience the outcome, 'cause fewer people will receive the intervention.

So, actually, I would say if you have a really effective intervention, you want the model.

If the model is designed to avoid the outcome, to actually get worse and be less well-calibrated.

'Cause that's a sign of success.

On the flip side, if you are driving an outcome, that's a process outcome where when you see that tool, it leads you to do something.

Well then, if the tool is working or the intervention's working, you better be doing more of that thing in response to that model.

So, you would expect the AUC to get better, you'd expect it to get mis-calibrated in the other direction, and those would be good things and signs of success.

So, I think that, you know, everyone in their heart feels like we need to monitor models, especially predictive models in this way.

Um, but I think the reality is, is it's just not that simple.

Probably most health systems don't have the expertise to unpack what's real and what's actual miscalibration as a result of dataset shift.

And so, I think that that's why a partnership with vendors is required.

Generative AI evaluation is an open book in 2025.

I think we've kind of come to terms with how we internally plan to evaluate those tools.

But I think that that's something where, um, you ask different people and they will give you totally different answers.

Some people are extremely metrics driven, some people are vibe driven.

And I think I probably have shifted more towards being vibe driven mainly 'cause if you look at, um, you know, things like, uh, uh, LMSys Arena.

Yeah, right.

Or LMArena, where you find that, you know, a lot of the evaluations are really vibe-based and you, you can actually differentiate models with if enough people use it for enough different tasks.

Very generally, I think that, uh, a mix of vibe based evaluation and kind of the, uh—.

Did you read Rick Rubin's book?

Rick Rubin's book The Way of Code?

Yeah.

That, and I think, uh, and I think it's like HealthBench actually do help.

Yeah.

Or MedHELM where you have a specific, um, scenario where you want the LLM to say something in response to something that a patient said, and you can test it to see does it actually do what I want it to do and judge it on like a rubric.

So, the, I think that the larger answer to the question was, yeah, we have ways to evaluate it, but I think that it's not as simple as we will do all the evaluation in a standardized way across all the different modalities of AI that we have and all the different ways that scores get calculated and filed in different places.

We are just not resourced and I, I doubt any health system actually is truly resourced to do that at operation scale.

So, I think we have to be smart about how we approach that.

I think you made a lot of subtle points there and uh, you also just illustrated really well, I think why you're, uh, a Chief Health AI Officer 'cause it's very thoughtful and subtle points.

So, so thank you, Karandeep and I think, Andy, are we ready for the lightning round?

Yeah, just one quick point and then we'll go to lightning round.

Like, I also, one of my big memories of you is, like, working with Andrew Vickers and advocating for decision-curve analysis, and the fact that Karandeep is now advocating for vibe-based analysis is probably like one of the biggest moves—.

System two to system one.

I don't know.

Yeah, I think it's gen.

I think it's for generative AI.

I think for predictive AI I really do think vibe-based is not enough.

Um, and that's because for predictive AI, the vibes are always positive.

Got it.

And I think that for generative AI, what I think is actually the smartest thing to do is rubric-based analysis.

Give people a, what would you want the language models to do?

So, when I say we do vibe-based analysis, what I really mean by that is we look at does the thing do what it's supposed to?

Mm-hmm.

Does it not do what it's not supposed to do?

And can we drill, break it?

Yep.

And those are like three of the areas we look at for journey AI model.

But a lot of that is through you typing and, and playing with it.

I say that you can subtly figure out like, oh, this is just not gonna work because even though it meets our rubric, it does it in a weird way.

Cool.

And so that, that's the kind of part where it's that open-endedness that's left.

But I, yeah, I, I don't think vibes are sufficient, but I think that absence of vibes leaves a big gap.

Got it.

Cool.

Alright, we're gonna hop to the lightning round now.

The answers are a mix of serious, non-serious.

The goal is, uh, short answers, but we'll ask for elaboration perhaps where warranted.

So, the first question is near and dear to my heart.

As a father of two young kids, and I think having followed you on Twitter for a long time, you've been extremely successful at this.

So my question is, what tips do you have for parents to get their kids into coding?

I don't think there's any tips.

I think your kids naturally have to be into it, but I think that Scratch is a great entry point into coding because I think when people say, oh, I can make like this character walk two steps to the right and then do this, if that resonates with a kid, I think that kid will love coding.

Some kids that resonates.

For some kids it doesn't.

And so, I think that you can't forcibly make your kids love it, but I do think that for a kid that gets that, then they're like, wait.

At some point they'll get to, how do I do this without blocks?

Mm-hmm.

And I think that was the moment where we kind of switched away from blocks, but the blocks make the kid understand what you can actually do with it.

And it actually lets it so that for the first time as a parent you can step away and say, why don't you go play with this?

And then really let kids build their creativity in the same that they would way that they would kind of fall in love with something like art.

Cool.

Awesome.

Thanks.

All right.

Our next lightning round question.

What is more therapeutic: going on vacation or spending an afternoon coding in Julia?

Oh man.

So, I don't think you guys know this, but last week I was on clinical service working in the hospital and I got one day of coverage and in that one day I flew from San Diego to Atlanta to Pittsburgh to give a keynote at JuliaCon.

And then flew to Minneapolis, back to San Diego so that I could go back on service the next day, uh, and take care of patients.

Wow.

Uh, and so I would say that they are both therapeutic.

There's a time when creative energy builds up and you need to code?

I don't know that it has to be Julia.

I think it's whatever problem you're solving and whatever tool that is that that's in front of you.

Um, I spent, um, the last couple weeks building a, a simulation model of our hospital in R using the simmer package.

And so, it's one of these things where, you know, when you have that creative energy built up and you can see the clear way that you would solve it, and you just haven't had the time to sit and write code, I would much rather be coding than be on vacation.

But I think that once I've done that, then I would be like, okay, now I go on vacation.

Awesome.

Nice.

Cool.

Um, next question.

What was your first job?

I tried to get a job at McDonald's, but they said, you can have this job as long as you're willing to shave your beard, which at that time was very small.

But as a Sikh, I don't shave, and so I had said, I would love to work here.

I just can't deal with the one restriction that you have.

So, actually it was not McDonald's, but it would've been 'cause I think I applied to, um, McDonald's as a job in high school.

So, I think my first actual job, um, and I think I did like some paperboy stuff, was um, believe it or not, being a resident.

Nice.

I would not have had McDonald's on my Bingo card for first job.

So, that's a super interesting response.

Alright, our next question.

My son wants to work at Chipotle 'cause he loves Chipotle.

He is like, all the people work there, get free meals.

Um, and so he is like, that's my goal is to just work there through college and then just not have to like, ever order any food.

Amazing.

Alright, I think you mentioned this earlier, uh, so you are a nephrologist and so, uh, I'm just gonna give our listeners that context and then ask, uh, this question.

Was it a coincidence, uh, or was it destiny that Bud Rose, creator of UpToDate was also a nephrologist?

I think it was destiny.

I think that a lot of the way that nephrology is both taught and experienced is through a bunch of math and a bunch of mathematical relationships between things like electrolytes, things like kidney function, different types of cells you have throughout the kidneys that really keep you in homeostasis.

Homeostasis is the product in a urine is the, like, pollutant is I think what Joel Topf, uh, who goes by Kidney Boy online, uh, had kind of created.

So, I think, uh, it's all about homeostasis.

And so, I think, you understand a lot of these mathematical things and uh, even now, I mean there are things where.

You know, someone comes in with a just completely electrolytes that are off and you're mathematically thinking, what am I gonna correct first?

How am I gonna do it?

How does their clinical presentation relate to their labs?

Because it's not just about fixing labs.

You can't fix labs without knowing what's going on with the person that created those labs in the first place.

So, there's a real connection there.

And actually one of the most viewed UpToDate pages from what I know, is actually the page on low sodium or hyponatremia.

And that's because that's one of those situations where, you know, when people come with extremely low sodium, there's a lot that you have to do to unpack why it's there and how to fix it and to work up why it happened in the first place.

So, it's one of these things where I think math comes together beautifully with kind of human biology and medicine, and that's honestly what probably led me to nephrology myself before I was really able to channel that energy into informatics.

Cool.

Thanks.

Um, so we mentioned at the top of the episode how you have taught yourself somewhere between 11 and 12 different, uh, programming languages.

So, this question is about skill transfer.

So, how has being a programming polyglot helped you outside of programming?

I think you see problems inherently as solvable, um, because I think, you know, look, when I set out to make a, uh, Windows app, uh, that could download your email and put it on your iPod, the user interface was written in Visual Basic, which was something that I knew at the time, the actual.

All of the interfacing with the web was written in PHP that was running locally on your machine, which was something I actually did not know.

And so, I remember being like, when I came across this problem, I, I didn't assume that just because it wasn't there in Visual Basic that it wasn't doable.

I said, there's gotta be a way to do it.

And so, the first thing I did was I learned PHP.

The next thing I did was, well, how do I run this securely?

I can't run this on the web.

I don't wanna send all my emails to the web, um, and have like, you know, my, my entire inbox coming through the web.

So, I figured out how to run it locally and then I figured out how to get PHP to talk to Visual Basic using standard out and standard in.

So, it's like you would file it here and this would be monitoring that stream.

And it was something that I, no one had I'd ever come across had made these two languages talk to each other in that way.

But coming and looking at how people had done it for other tools, it made me understand that it was doable.

So, I would say I am someone who is way happier reading documentation than probably all of my non-programming peers.

Um, and I assume that if I can read enough about it, that it's learnable.

And I would say that, you know, many non-programmers will kind of need to learn it in a lot of different ways.

And for me, if I can read something kind of front to back.

I can feel like I can get the beginnings of trying to figure out how it works.

Um, and it's something that I think is visceral, that you just have to have faith in yourself and have faith in what's out there.

And, you know, then that's how I learned Julia was I literally one day just sat and, like, read a book and then read a bunch of the manual, just like almost cover-to-cover.

And then I sat down to write my first lines of code.

And then of course it was an error.

And then you're like, okay, why is it there?

And then you kind of play with it.

But when you conceptually understand something, you can do the things.

And so that's what, that's I think, a thing that really makes you different as a programmer.

Awesome.

Alright.

Our last lightning round question, and this is sticking with, uh, the nephrology theme, uh, from the last one that I asked.

Um, currently Cystatin C

Um, currently Cystatin C: underused or overhyped.

Oh man.

I think underused, I think creatinine as, as everyone knows, is a breakdown product of creatine which comes through your muscle.

And I think that we have a growing population of people who are chronically ill and their muscle mass is, I mean, depleted.

And so I think that any way that we can kind of get at better estimates of how injured your kidneys are, is helpful.

That said, what do you do with chronic kidney disease that is intermediate?

It's not super severe.

That, I think, is where the real question lies.

And I think if someone has diabetes, we now actually have medicines to treat that and prevent that.

Um, so I would say underused from a standpoint of understanding kidney function maybe overused from a standpoint of what we can do about it, independent of knowing other things about someone that we have medicines to treat now.

Awesome.

So, I think we're gonna, um, zoom out and ask you one or two big picture questions before we wrap up.

Um, and so, we kind of like to end on a positive note.

And so, the first question I'll ask you is, uh, we're gonna get the pessimistic stuff out of the way first.

So, outside of health care, things are changing super-fast in AI as we've discussed, like, in some areas of health care, they're starting to change.

As you're watching this change happen in the health care system, what gives you most concern?

What do you, what are you most worried about going wrong?

I think the two things that worry me most, one is that I think funding models are gonna take a long time to catch up to new care models.

So, I think that we can come up with the best way to do remote patient monitoring that actually keeps people outta the hospital.

If we don't have a way to make that revenue neutral, revenue positive, it's just not gonna happen.

And I think that when you combine that with the deep cuts that are likely coming to Medicare, Medicaid, I'm really worried that all the AI in the world won't be able to fix bad health policy.

And so I think it's one of these things where we need the funding metals to really catch up so that we are incentivized to do all we can do to keep people home and to keep people healthy.

Yeah, I mean, I totally agree.

Like, we pay for treatments, we don't pay for prevention.

It seems like that coupled with the cuts that you mentioned could be a perfect storm for, um, lots of bad stuff in health care.

And a lot of the AI that's coming is patient-facing AI.

It's things that inherently have the capability potentially to keep people healthy at home or to keep people who are chronically ill getting care at home without having to come into a clinic or to another care environment.

So, I think that really the more we can do to help there, will really help us build the capacity we need to take care of the new generation of people who are really kind of surviving because of the wonders of modern medicine.

Yep.

Totally.

Okay, so now let's turn that around what are you most excited about?

What gives you most, the most cause for optimism that you see happening?

I think integrations, um, between different genres of technology.

It's always been stuck in my head that the EHR is just outta bounds and does not connect with other things and that your productivity tools just don't connect with your other things.

You can work in those playgrounds, but you can't work outside.

And I think that model context, protocol and some of these other kind of ways that you can plug-and-play things to work completely across genres, I think has the capability to actually enable patients to build things that would be really useful that patients could never have built before.

So, I think we kind of undervalue the value that our patients and our general populace at hand can do to actually improve their own health.

And I think that if you don't give them API access into things, they can't do anything.

And so, I think what I'm encouraged by is that we're seeing more and more things be able to talk to each other.

I just hope that it's able to talk to each other in an easier and easier way, such that we're able to get new classes of products that don't exist today because those integrations would never have been possible before.

I think that's a great note to wrap on.

Thank you so much for being on AI Grand Rounds, Karandeep Singh.

This was a pleasure.

Thanks guys.

Yeah, thanks for coming.

This copyrighted podcast from the Massachusetts Medical Society may not be reproduced, distributed, or used for commercial purposes without prior written permission of the Massachusetts Medical Society.

For information on reusing NEJM Group podcasts, please visit the permissions and licensing page at the NEJM website.

Never lose your place, on any device

Create a free account to sync, back up, and get personal recommendations.