Episode Transcript
Welcome to episode 407 of the Microsoft Cloud IT Pro podcast recorded live on 07/25/2025.
This is a show about Microsoft three sixty five and Azure from the perspective of IT pros and end users, where we discuss topics from recent news and how they relate to you.
In this episode, we dive deep into the USB port of AI communications as we discuss model context protocol or MCP.
This is a game changing specification that's extending the capabilities of large language models or LLMs to help you bring your data together into a central LLM of your choice.
We'll talk about what they are, where you can use them, and things you may want to think about when it comes to using these MCPs.
So let's dive into the show.
Scott, we're between vacations.
I'll pretend I'm surprised.
Yeah.
I know.
You just got back from vacation.
I'm about ready to leave for vacation.
Summer recording is always interesting.
I had a good vacation.
We did Oregon and Northern California and just did a ton of driving.
So we did about 2,000 miles in ten days.
How many kilometers for our non American friends?
What is it?
You multiply by, what, three?
Something like that.
I don't know.
I don't know.
It's a bunch of kilometers.
But yeah, good trip, lots of hiking.
We went and saw like the redwoods and did, Crater Lake and all sorts of stuff.
3,218 kilometers is what you drove.
Not too shabby.
Thank goodness for rental cars and somebody else's mileage.
That sounds good.
I'm kinda I'm jealous.
Yeah.
I have never been out, like, Northern California, Oregon area.
Have you ever seen The Goonies?
I don't you're gonna make fun of me again.
I don't know that I have.
Ben.
Another one of those.
If it's been a while if I have, it's been a while.
I'll make you set up the projector outside.
I'll come over.
You can cook me dinner, and we'll watch it at least.
The Goonies?
So that was kind of our one of our last stops on that trip was staying up in Cannon Beach where Haystack Rock is, and it's like the big rock that the pirate ship floats out of at the end of The Goonies and things like that, so that was kind of fun too.
The other really nice thing about that part of California, like the very tip Northern California up through Oregon, you're in the Pacific Northwest in the summer, it is not like Florida in the summer.
So Florida in the summer, like I'm looking right now, it's 88 degrees outside with 82% humidity and feels like 106.
It is miserable and you do not want to be outside.
In that part of the United States, because you're right on the coast, you get that coastal fog in the morning.
Some days you can't even see the sunrise or the sunset because the fog is so thick.
70 degrees, 60 degrees, it all just hits different.
It was absolutely beautiful.
I got super sad when I got there because I forgot to pack a pair of shorts and I was like, Oh my gosh, it's going to be like seventy, eighty degrees, it's going to be hot.
It was perfectly comfortable every day in a sweatshirt and long pants and all that stuff.
So it was really great, good little change of pace.
I'm ready to go back, I think, on vacation.
On vacation?
See, I'm looking forward to it too.
We're going up north to Michigan, and then we're going out to Denver.
And same thing, I'm like, it's not Florida in the summer.
It'll be beautiful.
With that, can you use an MCP something or other to help you plan your vacation?
To plan my vacation?
That was quite a stretch there, Scott.
For a segue, you were looking for one?
Yeah.
It all went sideways.
You mean a Microsoft certified professional?
Do Microsoft certified professionals?
We like acronyms.
Right?
It's what an MCP used to be.
I guess it still is.
I was thinking more like model context protocol and some of the new specs and things have been coming out of anthropic and then the really cool nifty functionality that's been coming out along the way.
Yeah.
I mean, I suppose you could.
Right?
Like, with MCPs and kind of the goal, like you said, model context protocol, is that this kind of extends LLMs to other data sources.
So I suppose maybe if there's MCPs out there to connect to some type of data that you might want to use to plan your vacation or something that you're planning your vacation from, you probably could use an MCP to plan your vacation, Scott.
Probably.
What is an MCP?
If it hooked up to the right set of data sources.
Absolutely.
Yeah.
With vacations, maybe we should give people some context?
We are doing a longer episode that we're gonna break into two parts today because of vacations.
So we're gonna kinda start off with what is an MCP followed by how do you use MCPs?
We're jumping ahead of ourselves.
So what is an MCP, Scott?
What is the model context protocol from Anthropic, also the creators of Claude?
Model context protocol is a specification, like first of all, right?
It's a framework and architecture for developing servers and clients that can interact with large language models, or LLMs, and become part of agentic workflows for LLMs.
So they can surface very specific pieces of functionality.
So for example, you might have OpenAI out there and you're using ChatGPT with GPT-4o, and you're chatting with that, and that thing was trained on a specific set of data.
It could be that you're maybe grounding it in some additional data.
Maybe you're doing RAG, retrieval-augmented generation, or things like that.
But maybe you need, like, additional functionality or more system or service specific functionality to do things.
For example, let's do like a pie in the sky thing.
So I have a Notion database, and one of my Notion databases is recipes, where I capture recipes that I find on YouTube.
There's a sub database that actually has, like, the recipe itself.
Here's the steps.
Here's the ingredients, all those kinds of things along the way.
So one of the things you might be able to do is go out and ask an LLM, hey.
Can you give me a new recipe for roasted chicken, roast potatoes, asparagus, and make it a little spicy?
And what would be good if it had, like, these flavor properties?
Well, if you had it tied into a tool with an MCP agent, you could also extend that prompt and you could say, once you have that recipe and I've iterated on it, great.
Hey, that recipe looks good.
Can you take it and save it in my Notion database over here?
And then because now you have the ability to have that Notion specific functionality tied into your client that you're chatting with that LLM, now you can start to chain that together and you can, like, push that information down from the LLM out to an external tool.
Or you can do this in other ways.
Like, let's say, for example, in Azure land, there's an Azure MCP server, and that server lets you do things like list the resources in my subscription.
Great.
Like, list all the resources.
Now, once you've listed the resources, go find the properties for those resources.
Now, once you've found those resources with those properties, maybe take them and write them out to this other thing.
Like, send them to Slack, send them to Notion, send them to OneNote, something like that, so that you can continue to have it go further.
So you end up in this world where you can extend LLMs with domain specific knowledge and domain specific functionality that's built by developers and by these services, and then you can chain it all together because now the clients that are interacting with these LLMs and have the ability to integrate with MCP servers, they can also know, like, oh, hey.
I'm looking at an agent or a tool that surfaces this piece of functionality.
Therefore, go talk to this, chain it into this, things like that.
So that's all really cool because you can do that just through a singular client and kind of like a real time, like, back and forth kind of interaction.
The other way that these things can be used is as part of agentic workflows.
So rather than me being like, hey.
I'm gonna sit here and prompt you in real time, maybe I just have a prompt that kicks off overnight and does something in the background for me.
And it's like one agent talks to another agent, talks to another agent, talks to another agent, talks to another agent.
So if anybody's maybe been playing around with some of the agents or tools, like, they're called different things all over the place.
Like, in in Microsoft three sixty five Copilot, they're called agents.
If you go into like Copilot Studio, you've got these things called tools and other bits and pieces.
Like this is a more rich interaction surface on top of those things that have been there before.
I think one of the weird things about MCP in general is when I think about it, I think about more from like the end user side, like how do I as a user get started with it?
Well, there's a whole bunch of LLM clients out there, like Copilot or heck, ChatGPT itself, that don't allow you to actually interact with MCPs, but others do.
So if you have maybe like Claude or Perplexity on the desktop, then those let you run and integrate with MCP servers, both local servers and remote servers, which is a distinction that we should talk through, and tools like AI-driven IDEs like Cursor, Windsurf, VS Code as well, right, if you're just looking to pick this up and get going and get free stuff along the way.
So I think they are a weird new enabler, Like, they're definitely, like, cutting edge and we're on some, like, precipice or cliff, and it's not clear, like, does the ground continue to extend in front of us?
Do we fall off the edge, and what does that look like?
But it's definitely, like, exciting just to, like, be able to get hands on with some of this stuff and leverage it and find ways to use it in your day to day workflows, be it like, hey, you're a consumer.
Like I said, maybe you're sitting here and you're doing, like maybe you're like a content creator and you're doing video creation and you say, okay, hey, ChatGPT, create my script.
Well, great.
Go into VS Code, have a chat with GPT-4o, create my script, now take my script and push it out to here, and let me leverage it that way or iterate on it, things like that.
Or just in my day to day professional job, I'm always looking at documentation, code, samples, all those kinds of things.
So there are very specific MCP server implementations that I can integrate with a client and be able to really kind of just, like, accelerate and augment those workflows within my day to day, which is very exciting.
Right?
Like, I think it's, like, I like cutting edge and raw, and it's always fun to get hands on with a new tool or a new toolset.
A lot of these and where I've started playing with MCPs too is you mentioned like Claude or OpenAI or Copilot.
You're limited in what data you can access.
You can go build an agent in Copilot, but in that case, Microsoft has very defined connectors.
They're starting to get more with some of the custom connectors and different things there, but like OpenAI.
OpenAI, the data sources, kind of before MCPs, were a lot more limited.
OpenAI is set to go look at the web or look at this data or use this model.
Claude, same thing.
This is really opening up that ability to say, now I want Claude to go look at all this other data that's more specific to me.
Like you mentioned, I want Claude to go be able to look at my data in Notion, or I want Claude to be able to go look at the data in a SQL database.
I want Claude to be able to go look at data over in Microsoft three sixty five in SharePoint land.
It's allowing you to bring your own it's almost like bring your own data.
Do we have a BYO instead of BYOD for devices?
It's BYOD for data.
Bring your own data into these different LLMs so that as you're querying, you can get different responses, better responses, more accurate responses, or responses very specific to the data that you want these LLMs to be able to parse over and pull these results or responses from.
It's that domain specificity and the ability to plug and play domain specificity.
So Anthropic, the folks who make Claude and Sonnet and all that stuff, they are the ones who came up with this specification and pushed it out there into the world, and one of the ways that they frame it is MCP is the USB port of AI communications and kinda AI interaction.
So if you think about that, like, if you think about, hey, I have this LLM that's sitting here, and now it's got an infinite number of USB ports on it.
Just think about, like, all the things that you plug into your computer to help you do your job better.
You plug in the dongle for the mouse.
You plug in your monitor.
You might plug in a dock.
I've got an audio interface.
I've got a camera.
All these different things sitting in front of me.
Well, those abstractions also apply in this world of MCP, where you've got an LLM that's now kinda sitting there, and it's got, like I said, infinite USB ports, and you just start plugging in that domain specific knowledge. And the cool thing about the spec and the way it's laid out is it's kind of built around this core set of building blocks that they see and that sit out there.
So clients can have a whole specific set of functionality where they can plug into a couple underlying building blocks within the MCP protocol itself.
So you've got resources, prompts, tools, there's some other more esoteric things that I don't think I've seen involved in too many MCP implementations like sampling and roots and elicitation and things like that.
But there's kind of like these core building blocks in tools, resources, and prompts.
So tools are these things that are specifically for AI actions and AI interactions.
They are controlled by the model itself, so the LLM is doing things.
So that might be like I have an MCP server maybe that searches for flight information.
Right?
Maybe it integrates with, like, kayak.com or something like that.
Well, that could be controlled by the model where I could have a tool in an MCP server that's directly integrated with the Kayak API where I could say, hey.
Go search I just tell my LLM.
Go search for flights on this date with this carrier from a to b.
And based on the context of, oh, I see the user is searching for flights, and I see I have this MCP server over here that I can attach to, and it's offered me an agent with this capability.
Let me go use that.
Let me go get that domain specific knowledge.
So that could be like, like I said, something corny.
You're searching for flights.
You're working on your scheduling, calendaring, simple things like that.
Maybe you're using it as a way to send messages back and forth in Slack or things like that.
So that's one of your building blocks: tools, which are model controlled.
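To make that "model controlled" idea a little more concrete, here's a rough Python sketch of the shape of a tool call. MCP messages are JSON-RPC 2.0 under the hood, and a `tools/call` request names a tool plus its arguments. This is not the official MCP SDK, just a minimal stand-in, and the `search_flights` tool and its fields are invented for illustration, not a real Kayak integration:

```python
import json

# Hypothetical tool registry; a real MCP server SDK manages this for you.
TOOLS = {}

def tool(name, description):
    """Register a function as a callable tool the model can invoke."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("search_flights", "Search for flights between two airports on a date")
def search_flights(origin, destination, date):
    # Stand-in result; a real server would call an external API here.
    return [{"carrier": "Delta", "from": origin, "to": destination, "date": date}]

def handle(request_json):
    """Dispatch a JSON-RPC 2.0 'tools/call'-style request to a registered tool."""
    req = json.loads(request_json)
    name = req["params"]["name"]
    args = req["params"]["arguments"]
    result = TOOLS[name]["fn"](**args)
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# The LLM decides to call the tool and the client sends something shaped like:
request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "search_flights",
               "arguments": {"origin": "MCO", "destination": "PDX",
                             "date": "2025-07-25"}},
})
response = json.loads(handle(request))
```

The point is just the division of labor: the model picks the tool and fills in the arguments; the server owns the actual implementation behind it.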
Do you feel overwhelmed by trying to manage your Office three sixty five environment?
Are you facing unexpected issues that disrupt your company's productivity?
Intelligink is here to help.
Much like you take your car to the mechanic that has specialized knowledge on how to best keep your car running, Intelligink helps you with your Microsoft cloud environment because that's their expertise.
Intelligink keeps up with the latest updates in the Microsoft cloud to help keep your business running smoothly and ahead of the curve.
Whether you are a small organization with just a few users up to an organization of several thousand employees, they want to partner with you to implement and administer your Microsoft Cloud technology.
Visit them at intelligink.com/podcast.
That's intelligink.com/podcast for more information or to schedule a 30-minute call to get started with them today.
And then you have these things called resources, which are application controlled, or controlled a little bit more by the server implementation, the MCP server implementation itself.
So those are all for contextual data.
So those would be things like documents, calendar events, could be emails, could be other domain-specific data, things like that. Anthropic, I think, in one of their examples uses weather data as a thing that's there. In that first one, I had a tool and I said, go search for the flight and go do these things for me. The other thing that I could do is then I could have a resource that goes and actually reads out of my calendar.
So don't just go search for flights.
Maybe I don't say search for flights between these dates where I say search for flights between these dates that don't conflict with my existing calendar.
And if there's an integration with, like, Google Calendar or Outlook or things like that, then you've got all that.
And then finally, a third building block that kinda comes into play with these that you'll see in a bunch of client implementations is prompts.
So these are for, like, interactive templates basically.
So it's all user controlled.
So you can basically ground these things and give them a set of, like, base instructions that help them understand where you want them to go with it.
So if I'm back to, like, that thing of kayak and search for flights, well, maybe I'm planning to go on a vacation or a business trip.
So I could provide some very specific grounding that says, I'm planning a family vacation for a family of four.
My wife likes to do this.
My kids like to do this.
And now I don't have to put all that in as I'm prompting the LLM as I go along.
It's just like a base set of instructions that it knows.
Maybe you've got, like, multiple MCP servers that you're talking to.
Maybe you've got one for, like, Kayak.
Maybe you got one for Google flights.
Maybe you've got one for, like, a carrier like Delta.
Maybe you've got one for, like, hotels like Marriott, things like that.
And you can actually tell it in the instructions, like, hey, I really prefer you to book and look for hotels with Marriott, or, like me, I tend to fly with Delta a lot.
Like, please only pick Delta flights with no more than two stops and that go through these hubs kind of thing.
So they just enable and open up this really kinda interesting world.
So you've got a little bit of a graphic up on the screen now that kinda talks about and speaks to clients and servers and how they come together.
So with MCP, obviously, you need an endpoint, a thing that you can talk to, and you need the LLM and the IDE or the client to be able to recognize MCP servers that it knows how to talk to.
So a server and the implementation of a server can be local or it can be remote.
So for example, there's a Microsoft Learn MCP server.
And what the folks at Microsoft Learn did was they actually stood up an MCP endpoint, and it's an endpoint that's compliant with the MCP protocol.
And you can install an MCP server for Microsoft Learn into your local client.
But when you're installing that server, all it's really doing is talking to a remote HTTP endpoint.
It's doing, like, SSE and streamable HTTP and things like that.
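Whether the server is remote or local, the wire format is the same: the client opens with a JSON-RPC `initialize` handshake before it ever lists or calls tools. Here's a rough sketch of what that first message looks like; the field names follow the published MCP spec, but the protocol version string, client name, and URL in the comment are placeholders you'd check against the current spec and your client:

```python
import json

def make_initialize_request(client_name, client_version):
    """Build the JSON-RPC 'initialize' handshake an MCP client sends first.

    The protocolVersion value is an example of the date-based versions the
    MCP spec uses; check the current spec revision before relying on it.
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",
            "capabilities": {},  # client advertises optional features here
            "clientInfo": {"name": client_name, "version": client_version},
        },
    }

# A remote server (like the Microsoft Learn endpoint) would receive this body
# over streamable HTTP; a local server reads the same JSON from stdin instead.
req = make_initialize_request("example-client", "0.1.0")
body = json.dumps(req)
```

Once the server responds with its own capabilities, the client knows which building blocks, tools, resources, prompts, it can use from that server.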
The alternative way you could do it is and how some of these have manifested is they've manifested as local server implementations.
So literally you go in and you say, okay.
Here's the configuration for an MCP server.
And quite often these things will be spun up inside of, like, containers or just little npm servers that are running locally, where it's running a web endpoint locally on the same machine where you're executing the client with the prompt and things like that, and you've got a little bit of a different flow there.
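For a sense of what those configs look like, here's a sketch in the style of Claude Desktop's `claude_desktop_config.json`. The `mcpServers` / `command` / `args` shape follows its documented format, but the server names and the npm package here are illustrative, the URL is a placeholder, and other clients (VS Code, Cursor, Windsurf) each use their own variant of this layout, so check your client's docs:

```json
{
  "mcpServers": {
    "notion-local": {
      "command": "npx",
      "args": ["-y", "@example/notion-mcp-server"],
      "env": { "NOTION_API_KEY": "<your key>" }
    },
    "learn-remote": {
      "url": "https://example.com/mcp"
    }
  }
}
```

The first entry spawns a local npm-based server as a child process; the second points at a remote streamable HTTP endpoint (some clients want an extra `type` field for remote servers).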
But you can always mix and match all these things.
Right?
Like, this is all just, like, how many of these things do I chain together?
What are they, and how do they all manifest?
So the biggest thing you gotta figure out is, like, hey.
Like, is there a use case for me with MCP, you know, given your persona, your work identity, whatever it is you happen to do and how you wanna use it?
And if yes, then what's the client I wanna use to get there?
Because there's a pretty broad set of clients that can leverage MCP integrations.
But that said, you gotta kinda, like, pick the right tool for the right job, and that's a little confusing right now because, like I said, things like Copilot desktop and Microsoft three sixty five Copilot don't have the ability to integrate with an MCP server, nor does, I believe, the OpenAI desktop client.
But that said, the Claude desktop client does, the Perplexity client does, VS Code does, Cursor does, Windsurf does, like, all these different things that are out there.
And then, like, I'm missing a whole list.
Like, if you go on the Anthropic documentation, they've got gobs and gobs of clients and which ones work with resources, which ones work with prompts, which ones work with tools, all those kinds of things along the way.
I would imagine for most of the folks that probably listen to our podcast who are living in the Microsoft ecosystem, you probably work with an MCP capable client already.
I know, like, I live in VS Code, like, 90% of my day.
That's all just there and ready to go and available to me.
This stuff looks a little geeky, like, when you start to get into it, but, you know, you need to know, like, hey.
How do I define an MCP server that already exists out there?
And with some of the clients, you just have to copy and paste, like, raw JSON into these things to get them going for the configs.
So that can be, like, a little intimidating.
But once you've done all that, like, really, it just becomes chatting with an LLM, and now you've extended the capabilities of that LLM.
And by extending the capabilities of that LLM, I think really what we're saying is we're extending the capabilities of ourselves, right, because we're using those LLMs to enable our jobs and enable our workflows and move those things forward.
So that's kind of an exciting thing.
For sure.
And like you said, there's a whole list of clients.
I think OpenAI is getting closer to having more.
Like, with some of the connectors they've put in there, their desktop client is getting close, and I've started playing with a few of these clients.
Because of that, with Claude, with OpenAI, with Versus Code, all of those.
But one thing that's interesting, and this is something again, we're talking about using these clients, hooking them up to other data sources, pulling them into an LLM, kinda like this architecture diagram you were talking about where you have your MCP host or your AI application, OpenAI, Claude.
You're using the clients, and you're going out and connecting it to all this data.
Tying that into another concept you mentioned, where Anthropic described this as like a USB-C port where you can have unlimited USB-C ports plugged in.
What is the first thing you're taught with data security, Scott, with USB ports?
You plug in any random stuff you can.
Plug them in.
Right.
When you see a USB port, just plug it in.
I think that's one interesting aspect too when you start thinking about these MCPs is to that extent, what are you plugging your data into?
If I go in and you mentioned you give it access to your calendar, you give it access to your Notion database, maybe you're giving it access to a SQL server.
You're giving it access to your file server.
It's actually now pulling data from all these different data sources into an LLM.
I know SharePoint is one that has started coming up more and more as well from that security side. In Microsoft three sixty five, you can do sensitivity labels, you can do security.
They've built a bunch of different controls into SharePoint so that you can control how Copilot and the LLMs behind Microsoft three sixty five Copilot interact with your data.
What data are they allowed to interact with?
How does that handle sensitive information?
How does it handle Social Security numbers that it may come across?
How can you define different controls?
But now what happens if you have a connector, an MCP, that maybe goes and looks at your Microsoft three sixty five data and starts pulling all these files and documents in because there's an API for them.
You can get them.
But that MCP or that connector may not have all those same controls that Copilot has in place for, especially, I think, of things like sensitivity labels and how you can filter things out there.
Permissions, the ACLs that are maybe on some of these files are a little bit easier.
But I think you really also need to start thinking about now from a security perspective, whether it's at a corporate data level or even your personal data, what's happening to your data as you're bringing it into these different LLMs?
For sure.
And I'll throw you one more on top of there.
So, if you think about a local MCP server, like I said, often you can spin these things up just like a simple npm server or Docker container on your machine.
Yep.
Do you do it in Docker, where it's isolated and you have maybe some more of those operational controls, but then you might have to contend with things like container networking and routing and other constructs?
Do you end up in a place where you have just a bunch of random web servers running, like, locally on your machine?
Right?
Like, how many npm servers do you wanna spin up in the background for API endpoints for these specific pieces of functionality?
Because really, like, in some cases, what we're talking about sometimes is even, like, little helpers to do things.
Right?
Because you could have, like, an MCP interaction where the MCP is interacting with, like, a local file on your desktop, and then it's taking information from that file and pushing it to a cloud service, a SaaS service, whatever that is, or vice versa.
So there are a whole bunch of considerations there.
You know, these things are not super mature in the sense that, you know, they've been out for years and years and years, and we understand all the edge cases and the flows and, you know, do they have the right operational controls, things like that.
So it's like, yeah, I think you need to be a little bit, like, careful with them, but I don't know that, like, where things are today that it's as big of a concern as it might be.
Like, I see a lot of these things as accelerators for developer workflows, accelerators for, like, no-code, low-code kind of workflows, those kinds of things.
And, you know, if you're scared of the destructive nature of something, like, just don't use it.
Keep doing it the way you've been doing it.
Right?
Like, nobody's saying, like, hey.
You have to do these things.
But certainly, yeah, like, treat them safely, right, if you can.
Run it in a sandbox the first time.
Figure out, like, hey, is this the right thing for me, my workflow, and what I'm trying to accomplish right now?
Yeah.
And I think there are some scenarios I think about when we get into some of the examples and start talking about maybe how we've used them that I think about a little bit.
There's an aspect to some of these too, depending on the MCP you're using.
Like, I would love to see these, and I'm hoping they will eventually come to something like Copilot, so I can tie some of these MCPs that I wanna use into my Microsoft three sixty five environment, so that I can start maybe supplementing Microsoft three sixty five Copilot with some of the data that would come from an MCP.
Right now, I feel like I'm in the opposite boat, where things like Claude and OpenAI are building all these connectors and allowing me to pull data in from all of those, including from SharePoint. I'm starting to sometimes even find myself thinking it's almost easier to use OpenAI or Claude or some of these other LLMs, just because the openness is there to get some of this other data in from other tools that I use, where, I mean, personally, I would love to just have it in Copilot.
Some of it, again, is due to the nature of the data, and some of it is, like, a lot of these MCPs are still paywalled.
I'm paying for Claude right now.
I'm paying for OpenAI right now, and I'm paying for Copilot right now.
Just so I can test all this out, where if there started to be some of that feature parity, I wouldn't have to pay for all three of them.
Yeah.
I think it is tough to get there, right?
Like, there's certainly, like, an ecosystem kind of thing.
I will say, like, you can get by with a lot of the stuff just, like, on the free side.
So, you know, like, if you're using, like, Claude desktop, Perplexity, things like that, you don't need to be on, like, the paid plan to integrate an MCP server with Claude.
I think most of these, even all the ones that I can think of, like, they do have some free version.
Even, like, GitHub, like, hey.
I'm gonna turn on GitHub Copilot because I need GitHub Copilot to be able to do the chat with the LLMs, and then I'm, like, adding these agentic flows to it.
That stuff's available and ready to go, and and you can do that pretty turnkey.
So you might be limited in some functionality and some other things that are out there.
I don't think that stops folks from trying to get, you know, hands on with it and see where some of that value is.
And as we get into our later conversation about, like, maybe some specific MCP servers, at least everything I plan to talk about, like, it's all free or it's already integrated into the ecosystems that you're probably in anyway.
You're paying for, like, the underlying thing.
Right?
Like, if you use, like, the Azure MCP server, then you're probably already an Azure customer.
Yeah.
It'll be interesting to see where this goes, see where the architecture goes, see how these different clients start bringing them in, incorporating all of it together.
Yeah.
So why don't we wrap this one, and then we'll come back for a part two, and we'll talk about some specific MCP servers and kinda like how we're using them in our day to day, and kinda how our journey has been here a little bit, and maybe that informs some others.
You know, if anybody's out there and you're using an MCP server, like, we'd love to hear from you, like, hey, what are you using it for?
What's going on?
Like, what are your specific use cases?
You know, I think Ben and I are, like, pretty big on just experimentation.
Ben always talks about his list that is ever growing.
I've got a similar one.
Like, we'd love to hear how others are enabling, like, their workflows.
For sure.
So reach out.
Let us know.
Again, LinkedIn is probably the best.
I am finding that it's becoming my social network of choice lately. Mhmm.
Or the contact page on the website, msclouditpro.com.
We'd love to hear from you.
Alright.
Sounds good.
Hopefully, this was enthralling and folks come back for part two.
Absolutely.
Alright.
Thanks, Ben.
Yep.
Thanks, Scott.
If you enjoyed the podcast, go leave us a five star rating in iTunes.
It helps to get the word out so more IT pros can learn about Office three sixty five and Azure.
If you have any questions you want us to address on the show, or feedback about the show, feel free to reach out via our website, Twitter, or Facebook.
Thanks again for listening, and have a great day.
