
Episode 411 – Securing AI with Microsoft Purview: Data Security Posture Management for AI

Episode Transcript

Welcome to episode 411 of the Microsoft Cloud IT Pro podcast recorded live on 09/24/2025.

This is a show about Microsoft 365 and Azure from the perspective of IT pros and end users, where we discuss a topic or recent news and how it relates to you.

In this episode, we circle back to our conversation around preparing for Microsoft 365 Copilot and, frankly, all AI, in a conversation about data security posture management.

We discuss what's available to you within DSPM for AI and Microsoft Purview, what you need to do in order to get started with it, and how you can leverage this tool within your company to keep your sensitive data in Microsoft 365 safe and secure from AI.

Let's dive in.

Scott, it feels like forever since we've talked.

I think this might be the longest we've gone in a while without talking.

Well, that's because you went to a conference, and you let Jay and Joy take over the podcast Saturday.

That was a mistake.

That's fine.

Nah.

It was all good.

Oh.

Go back and listen to that one if you haven't.

Joy and Jay take over and talk change management, episode 410.

Of the Microsoft Cloud IT Pro podcast.

Yes.

That was a good one.

We need to get you back to a conference, Scott.

I was at a conference last week.

I was just at a different conference.

Which conference were you at?

I was at the Storage Developer Conference out in Santa Clara, California.

Oh, how are storage developers these days?

They keep on developing things.

I had a talk on retrieval augmented generation and object storage.

So where does object storage fit into the AI pipeline when you think about an end-to-end spectrum of AI training all the way through to AI inference and actively serving these workloads, LLMs, and things like that.

But my talk was a little bit kind of about, like, the three quarters of the way through the AI pipeline with fine tuning and retrieval augmented generation.

Got it.

Interesting.

I was gonna say, because you probably need storage, like, at every step of the pipeline.

I mean, AI kinda depends on data a little bit, which kinda depends on storage.

Yeah.

If you're training a foundational model, you require petabytes and petabytes or exabytes of storage.

If you're just doing fine tuning, you might be in the petabyte range, or you could be smaller.

You could be down in the terabyte range and taking a very specific dataset to teach a model the new set of skills along the way.

But it turns out that object storage platforms are super important when it comes to retrieval augmented generation as well, because rather than teaching new skills like you would with fine-tuning, you have to teach new facts.

Like, hey.

Here's something that is going to augment your knowledge.

So if you think about, like, Copilot notebook, for example.

So we've talked about those in the past.

So you could go into a Copilot notebook, and you can put whatever it is, ten, twenty documents in there.

When you put those documents into the notebook, Copilot and the underlying LLM that's serving the response back to you, they're not actually reading that document, like, as a document.

Like, they don't open up a Word document or a PDF and read it page by page.

They have to go ahead and generate embeddings.

So effectively, they take that PDF or that Word document, and they turn it into a bunch of chunks and then embeddings.

Those embeddings go to a vector database and get vectorized, which means you then need a vector search mechanism that can augment the LLM response.

But object storage is kind of important if you think about, like, putting a new document into a Copilot notebook, something like that. Yep.

Because it's gotta be chunked, embeddings need to be generated, and those embeddings need to be pushed over to a vector database.
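As a rough sketch of that chunk, embed, and store flow: the `embed` function here is a toy hashed bag-of-words stand-in, not a real embedding model, and an in-memory list stands in for the vector database.

```python
import math

def chunk(text: str, size: int = 50) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str, dims: int = 8) -> list[float]:
    """Toy embedding: bag-of-words hashed into a normalized vector.
    A real pipeline would call an embedding model here instead."""
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[hash(word) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity between two unit vectors."""
    return sum(x * y for x, y in zip(a, b))

# "Vector database": each chunk stored alongside its embedding.
document = "the quick brown fox jumps over the lazy dog " * 20
store = [(c, embed(c)) for c in chunk(document)]

# Retrieval: embed the query and hand the closest chunk to the LLM.
query = embed("quick fox")
best_chunk, _ = max(store, key=lambda entry: cosine(query, entry[1]))
```

A real pipeline would swap `embed` for calls to an embedding model deployment and `store` for a vector database with approximate-nearest-neighbor search.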

Well, that means anytime I upload a new document, put a new document in, or take a document out of my Copilot notebook, anything like that, you actually need to kick off a back-end process for data freshness.

So having things like at least in our platform on the Azure storage side or the object storage side, we have things like change feed, which can be leveraged by these systems to kinda give them a proactive notification and say, hey.

There's a new object here.

Go and run whatever you need to run on top of this.

For example, maybe you integrate Azure AI search into your workflow, so you pump a bunch of documents into a storage account.

AI search can actually respond to the change feed notifications that are coming out, and it can automatically do the chunking, embedding, and store those embeddings in the vector database just based on the virtue of having the ability to notify in, like, a durable ordered list with something like a change feed along the way.
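The change-feed pattern Scott describes can be sketched like this; the in-memory event list is a stand-in for the Blob Storage change feed, and `VectorIndex` is a stand-in for whatever the indexer maintains on the back end.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeEvent:
    blob_name: str
    event_type: str  # "created" or "deleted"

@dataclass
class VectorIndex:
    entries: dict = field(default_factory=dict)

    def upsert(self, name: str) -> None:
        # Placeholder for the real work: chunk the object, generate
        # embeddings, and store them in the vector database.
        self.entries[name] = f"embeddings-for-{name}"

    def delete(self, name: str) -> None:
        # Removing a document must also remove its embeddings,
        # otherwise stale facts keep augmenting responses.
        self.entries.pop(name, None)

def drain(feed: list, index: VectorIndex) -> None:
    """Process change-feed events in order so the index mirrors storage."""
    for event in feed:
        if event.event_type == "created":
            index.upsert(event.blob_name)
        elif event.event_type == "deleted":
            index.delete(event.blob_name)

feed = [
    ChangeEvent("contract.pdf", "created"),
    ChangeEvent("notes.docx", "created"),
    ChangeEvent("contract.pdf", "deleted"),  # pulled out of the notebook
]
index = VectorIndex()
drain(feed, index)
```

The durable, ordered nature of the feed is what makes this work: replaying events in order always converges the index to what is actually in storage.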

It's kinda super interesting because I don't think folks really think about the back end of these things too much.

Right?

If you're playing with a Copilot notebook and you're putting documents in there, it's all just kinda like magic that all of a sudden it can reason over my document in really near real time.

It doesn't it doesn't take too long to generate those embeddings, especially with smaller documents and smaller datasets, which is what most people are using in things like Copilot notebooks today.

But there's a lot of moving pieces on the back end to get that all going and get it to where it needs to be.

Yeah.

So anyway, I had a session on that.

That'll be up on YouTube.

I can share it in a couple of months once those come out for public consumption.

And then I also spent a couple days at a plugfest.

So SNIA and the broader storage developer conference, they organize effectively these hackathons that bring a bunch of vendors together under NDAs so that we can go and do interop testing with each other and kinda do debugging in real time.

So there's a bunch of those plugfests that SNIA hosts.

They host one for Swordfish.

They host one for SMB for, like, vendors to test their SMB implementations that might live outside of Windows or things like that.

And then the one that I did was an object storage plug fest.

So we were focused on, like, RESTful API surfaces for object storage vendors, of which Microsoft is one with Azure Storage, and then you've got the S3 protocol, which is kind of the elephant in the room, as S3 is not only Amazon S3; you know, Google supports S3 as a protocol.

And they're a cloud object storage platform. Alibaba, Backblaze, and then a whole ton of on-prem vendors; IBM Cloud Object Storage; Oracle, both their on-prem offering and their cloud infrastructure, OCI, offer an S3 protocol.

So I spent a couple days kinda hacking away at that and seeing if we could get some of our products that talk to S3 working.

Like, while Azure Storage does not have a native S3 service, we definitely have products that talk S3, especially for data management, data movement scenarios.

So things like AzCopy, Storage Mover, or you pick up one of our SDKs; we absolutely allow you, and we've got some great enabling APIs to help you move your data from S3 over to us.

Got it.

Very cool.

Well, hopefully we'll get you at another conference yet this year, Scott, because we have Ignite coming up in November.

So hopefully doing podcast stuff there again.

I actually have a couple I have one before that too.

I'm going down to Orlando too.

It was DEVintersection, and now they're also co-locating DEVintersection with Cybersecurity Intersection and the Next Gen AI conference.

So I have a couple talks, presentations Oh, nice.

At cybersecurity intersections.

And then I'm actually doing a couple more, two in December in Dallas at Workplace Ninjas.

So Cool.

I've got two more coming up.

Well, three more coming up, hopefully Ignite, and hopefully we can get you out to Ignite too.

Yeah.

We'll see.

Pump it out on the socials.

You can always find Ben on LinkedIn.

You can find the podcast on LinkedIn.

Can you find me on LinkedIn as well?

Yeah.

I think it's becoming one of our preferred communication mechanisms for the podcast here.

I would say LinkedIn is absolutely becoming the preferred one.

All of the other ones.

This is not just a any social media.

Like, all of the other ones I have had to turn into looking at them all as just pure entertainment.

There is nothing of the state of the world.

Yeah.

Not much of value that I have found on many of the other social platforms.

So LinkedIn is absolutely becoming that preferred social media for the podcast, although we are still on all the other ones.

But speaking of AI and storage and data, we should probably dive in. This is like the second part of episode 409, when we started talking about preparing for Copilot in Microsoft 365, getting ready for it.

And we didn't quite get to the last point that I covered in the presentation I gave, wherever I was for that one, Branson, Missouri: diving into one of the, I would say, very quickly growing products in Purview, or technically a solution in Purview, for doing a lot of the management, controlling, and monitoring of Copilot and really other AI usage, which is the data security posture management, or DSPM, for AI that exists in Purview.

We should kind of talk through this one.

It was a little bit of an interesting new area for me.

Yeah.

So if you haven't looked at it, I definitely recommend it if you're doing anything with Copilot, or really anything at all, and you care about your data, keeping it safe from AI, because there's aspects of this that don't apply to just Copilot.

You should go check this out.

So it's in Purview.

And once you go into Purview, you'll just start on, like, a general overview, typical of a lot of the different admin centers, that gives you some things to get started with.

So there's kind of four big aspects to this whole data security posture management for AI that you need to enable, that are required to really get going in terms of monitoring your data, because this is all about discovering what people are doing and securing AI activity.

So in order to secure it, discover it, know what people are doing, you kinda have to be able to look at the data.

So there's Turns out.

Yeah.

So first, just activate Microsoft Purview Audit.

This is really just that audit log that we've had for years.

That's been in both the Security Center and the Compliance Center.

It used to be the unified audit log.

Now it's just Audit. But then there's also, and I don't even remember when this came out.

I don't remember ever talking about it, seeing it in the news.

There's a Purview browser extension that you can deploy now, and, I would say, it doesn't do much.

There's nothing really for end users to interact with.

It's one of those that you can install and just kinda keep it hidden in the background.

There's no reason to show it in your task bar, but it is really just watching browser activity.

Install it there so it can see what you are doing within your Edge browser.

I think there's I think the extension also exists for Chrome.

And if I remember right, the Edge one may actually just be in the Chrome store.

They have it for Firefox too.

So Edge, Chrome, Firefox, to really just help watch signals of what people are doing in the browser with data.

Again, this kind of ties into, are people going to other AI sites, going to ChatGPT, going to Claude, going to some of those.

So that's the next step.

Just get that browser extension deployed.

There's guides on this; you can go do this with Intune or whatever solutions you use to manage end-user devices, and leverage those to deploy the extension.

And then after that, or along with that, is also onboarding devices to Purview.

So this is one too that I've actually seen some people miss, Scott, which is that you may have onboarded the device into Intune, enrolled it, and joined it to Entra ID.

There's a whole nother aspect of actually onboarding devices into Purview as well.

So you can have a device in Intune, have a device in Entra, and not have it onboarded into Purview.

Yeah.

I wonder can we take a step back?

So Yeah.

I'm I'm just kinda, like, watching your screen as it's up.

So when you go look at Microsoft documentation and you say, hey, these two folks are talking about data security posture management, and maybe you just go Google for that, or Bing or DuckDuckGo, or you search for DSPM, you're going to end up in one section of the Purview documentation.

So you're gonna end up in a section around data security solutions.

But as you've got the portal up, and if you kinda keep going with Google, Bing, DuckDuckGo, search engine of choice, you end up on a different page, which is DSPM for AI, and that starts talking about the specific considerations for that: onboarding to the additional set of capabilities, the Copilots, and things that come with that.

So can you help me, like, disambiguate the two?

DSPM, DSPM for AI?

AI.

So DSPM, this is one I haven't done a lot with yet.

I actually click over to data.

I don't know that I have full access into it.

I do have some of it in mine.

Where I would say DSPM is just really overall data security.

Right?

You're gonna go in and look at and if you go into the portal and look at it, it's risky user investigation.

Are users performing risky behaviors with their data?

Or where is that sensitive data?

So this ties a little bit more into what I would say of information protection or even a little bit maybe it's not even information protection, it's a little bit higher.

So you just have, like, what is my overall posture when it comes to keeping my data safe?

It could include some of the information protection.

It could include insider risk management.

It could include, like, one of the other related solutions that shows up in Purview is data security investigations.

It's broader, I would say, is how you look at data security posture management.

The DSPM for AI really drills into then taking that data security posture management.

How am I securing my data?

Now let's look at it just through the lens of how am I securing it when it comes to interacting with AI.

So that's things like your Copilots, maybe your agents that are out there, like your declarative agents, knowledge agents. Yep.

The Copilot portal or Teams.

It's basically like a superset of capabilities or a double click to go and monitor AI workloads specifically?

Yeah.

That's exactly it and guidance around it.

Like, even if you go into the data security posture management portal in Purview, not the AI one, really it just talks about, like, there's some stuff here around risky user investigation, sensitive data, but it's really just an overview.

You've got some reports around users performing risk related activities on unprotected sensitive assets.

So I'm going into SharePoint files where I have credit card numbers, I have PII, I have IP addresses, I have connection strings, and there's people interacting with that sensitive data in some way.

It doesn't necessarily have to be AI.

It could be creating a sharing link to it.

It could be emailing it to external users, some of that information protection type stuff.

And it just kind of gives you an overview and some reports there, a few recommendations, but in DSPM, I would say, there's really not much there.

It's more of a reporting and a little bit more monitoring.

It doesn't have many features in it, quite honestly.

This is where, like, when you mentioned the monitoring and some of those things, this is the other area where I think it's hard.

If you don't find specifically the DSPM for AI section in the docs and you just end up on DSPM, you're gonna get, like, two different sets of guidance for onboarding. Yep.

Or, like, how far you should go and what you should turn on.

So some of those things that you mentioned, like, if you're trying to specifically monitor AI usage within your organization, and maybe that includes things like third-party gen AI sites, that could be OpenAI, Claude, Perplexity, Google Gemini, things like that that are out there, then you do have to make sure that your devices are onboarded to Purview.

So you gotta go through, like, that whole flow.

You might need DLP in place.

That browser extension becomes super important Yep.

To put out there and make sure that's all good.

The auditing piece, like, yeah, you gotta go turn all of that stuff on.

And then there's always that pesky, like, little licensing consideration.

So it looks like for some reason for, like, Copilot in Fabric or Security Copilot, you need the enterprise version of Microsoft Purview data governance, which then lights up a new set of APIs and a new set of capabilities behind it.

So there's definitely, like, a lot of reading with this one, I think, just to figure out how it composes.

And then what are you gonna go turn on?

And then like you said, because it's primarily a reporting thing.

And to a certain degree, like, remediation.

Right?

You're gonna use these reports to drive new behaviors in your organization, things like that.

But you kinda have to decide what you wanna report on out there.

Like, if you wanna report on usage of Claude in your organization, then you've gotta go down, like, this wild path of kinda turning everything on.

Yes.

And you'll find and we can talk about some of this too.

There is even some functionality in here that is pay-as-you-go, where you end up having to go in and connect your Purview instance to an Azure subscription for some of the pay-as-you-go functionality as well.

I did see that.

So there's this little note right here, like a little bullet that says, hey.

For AI apps other than Copilot and Facilitator, you have to set up pay-as-you-go billing for your organization.

And they do guide you into kinda, like, where in the UI to turn it on and things.

But it's a little weird because Purview has multiple billing models.

It's got the per-user billing model, and then you're blending the pay-as-you-go model on top of it.

It's not like one or the other.

You've gotta get out there and kinda figure out which uses what.

Yeah.

Yeah.

And you might be there already.

Like, that's the other thing I noticed, remembering, like, my days with Purview.

Like, we already had customers who were doing things like monitoring AWS usage, and we were going out and deploying the proxies and the monitoring solutions and things like that.

Those are all out there.

So if you already have, say, AWS monitoring in place, you're probably a pay-as-you-go customer already, and you've probably already configured the pay-as-you-go billing.

It was a little surprising to me to see, like, Azure services fall into the pay-as-you-go thing as well.

I can understand why for, like, Amazon or Dropbox or Google, things like that, but there's stuff like Microsoft Fabric that requires pay-as-you-go; monitoring Azure Data Lake Storage or Azure SQL data sources, that all requires pay-as-you-go.

It was kind of an interesting list to look at.

I'll make sure to put a link to the billing docs and the billing considerations in the show notes.

Yeah.

Do you feel overwhelmed by trying to manage your Office 365 environment?

Are you facing unexpected issues that disrupt your company's productivity?

Intelligink is here to help.

Much like you take your car to the mechanic that has specialized knowledge on how to best keep your car running, Intelligink helps you with your Microsoft cloud environment because that's their expertise.

Intelligink keeps up with the latest updates in the Microsoft cloud to help keep your business running smoothly and ahead of the curve.

Whether you are a small organization with just a few users, up to an organization of several thousand employees, they want to partner with you to implement and administer your Microsoft cloud technology.

Visit them at intelligink.com/podcast.

That's intelligink.com/podcast for more information or to schedule a thirty minute call to get started with them today.

Remember, Intelligink focuses on the Microsoft cloud so you can focus on your business.

The one thing I would say when I started doing this, they do make it fairly straightforward to get going.

So what I did, like, I went through that overview, turned on those four required things: the auditing, the onboarding devices, extending insights was one of them, and then the browser extension.

But even some of these, like when you click on, okay, we need to discover sensitive information, a lot of what DSPM is, is policy-driven off of other areas within Purview.

So when you click to start extending your insights for data discovery, what it gives you is what the three policies are.

And I already have them turned on in mine, but if they're not turned on, it tells you, this is the pay-as-you-go policy or this is free, depending on those interactions, and there's just a one-click button to deploy it.

And then once you deploy it, you can always go in and look at policy details.

So for extending that data discovery, again, the first part of this, what are people doing, is like: are people putting sensitive information into AI prompts in Edge or in your browser?

And this policy is actually a data loss prevention policy that sits in the collection policies in DLP.

So when you click it, it's just going out and creating this.

And when you go into data loss prevention and your collection policies, you'll see all those DSPM policies that you've created, and you can go in and look at what those policies are doing.

You can tweak them, turn them on or off.

So while it is, like, a one-click, just-deploy-it thing, I would also say go in and look at some of these policies, because it might also highlight some additional functionality that's available within these different platforms that, I would say, they've almost kind of snuck in there.

Like these collection policies. I mean, you're paying for it.

Right.

It's out there.

And these collection policies, some of the newer ones that are out here now are like network and browser based.

So this one was sensitive information shared in the browser, but now it's also detecting sensitive information shared via AI over the network.

So not only just what am I doing in the browser, but watching my browsers, my applications, API calls, add ins, watching my different network traffic from my devices for sensitive information going to other AI sites.

You brought up licensing.

It does require the A little bit.

Global secure access, the secure edge licensing as well.

So you will start running into, Hey, now I want to start capturing just network data.

Guess what?

You need some solution to actually capture that network data.

So you have some policies there just around data collection, but then this is where you also have a bunch of policies if you go into recommendations after you start doing data collection, and you can go in and start creating different policies around then detecting that risk information.

If you're using ChatGPT Enterprise, there's built in policies here for discovering and governing interactions with ChatGPT Enterprise, using Copilot and agents to improve your data security, and a lot of different functionality around creating these different policies to prevent, monitor, alert on where's my sensitive information going when it relates, again, specifically to AI in this particular solution.

Yeah.

Yeah.

We should probably spend the last couple of minutes here and kinda talk about once you've got the insights, what do you do with them?

Yeah.

So say you're out there.

You've got, like you said, your DLP in place.

You've got this solution that's doing this monitoring for you.

And then what are you gonna do with the information?

And how do you leverage those rest of the tools for remediation?

You know, I I think about, like, that proxy that gets deployed, like you said, like, the global secure access thing.

Well, you can block stuff through that.

Yep.

You wanna take this report and maybe go turn those kinds of things on.

And I think the biggest one here when I start talking to clients and working on it is once you have the detection to your point, what are you gonna do about it, is going in and building out those data loss prevention policies.

One of them that kind of comes out of this, you don't need DSPM to do it, but it walks you through getting everything in place for it, is just protecting sensitive information in your own environment from Copilot.

And this is an example I give to clients: up until deploying this with DLP, the only option to, like, prevent data from going into Copilot was to disable the index, which doesn't impact just Copilot but also impacts your search index, or to remove permissions to it.

There may be things, Scott, like, I need to get to all of the data in my tenant.

I need to work with that.

I have W-9s out there.

I have bank statements out there. Like, this is my data.

There is going to be sensitive information out there that I'm gonna need access to, and that I may wanna search, but I may not want Copilot necessarily processing that returning it.

So, by going in and starting to apply sensitivity labels to it and then using this protect-sensitive-information-from-Copilot policy, I still have access to it. Yeah.

Yeah.

I think over time, that granularity is gonna continue to change, because you're gonna have subsets of people who might need to work on things, and they might need gen AI and Copilot for them.

And then you might have people who don't.

So you're back to kind of just the regular, hey, let me make sure this stuff is secure.

But it's also funny, I think, how things do leak in Copilot, just because organizations largely haven't kept up, I think, with having a big corpus of data that lives in SharePoint.

And they never really figured out the whole security trimmed search thing, let alone the security trimmed AI search thing and how that comes together.

Yeah.

And then I would say some of the other ones is like going back to the other AI sites.

Once you're starting to capture it and depending on your licenses and what you have turned on, it's not just, yeah, I don't want Copilot to process this, but you know what?

I can't have, or don't want, however you wanna phrase it, my users going in and using sensitive information in other sites.

So some of these DLP policies then that you can apply to devices is auditing that sensitive data that is shared to other sites.

And I'm pulling up what these are, like uploaded over the network or one of these policies.

This one that I just pulled up is looking specifically for people adding it to Bard or to ChatGPT.

Because now that you are auditing it, you can see people are pasting PII or pasting social security numbers into these other third party Gen AI sites.

And frankly, we just can't have that, and we need to prevent it from even happening.

Yeah.

I think it's one of those rabbit holes.

Yeah.

You start going down.

I mean, DLP policies in general are their own rabbit hole.

I do like having kind of the enabler of the policies are canned and kind of pre baked, and then you can tweak them from there.

But there's a whole bunch, I think, to go figure out.

And then products like these are so easy to just get caught in the charts and the reports.

But I think figuring out how you actually get it wired up in your environment and get it to where it needs to be, that's another story and another thing that needs to be taken care of.

Yeah.

Then the other thing I would say, in the last few minutes, is just the Activity Explorer.

You can see, Scott, even some of the stuff that I've done: AI interactions start showing up now in your Activity Explorer alongside emails, alongside working with files in SharePoint, and it allows you to start monitoring and alerting on what users are doing.

So if you wanna get super Big Brother, Scott.

Like, your Microsoft 365 tenant, your employer, they can spy on you all they want to, and the ability is here.

It starts pulling in.

I can go in and look at any prompt that I've done in Copilot, what that response was, any additional responses to it, going and looking through, again, just that audit log, having that audit history of AI interaction, which, I mean, I think this does become more and more important as people are using AI for their jobs, for generating documents, for generating emails.

Let's face it, we all do it.

There are going to start being instances, cases that arise, where you need to go figure out whether a certain employee misbehaved with AI.

And having that audit log, that Activity Explorer, well, yes, it does feel big brotherish by an employer.

From a legal perspective, compliance perspective, I think this is absolutely necessary as well.

So, yeah, all of that gets built in there.
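As a toy illustration of what querying that audit history might look like, with an entirely hypothetical record shape (a real Purview Audit export has its own schema and many more fields):

```python
from datetime import datetime

# Hypothetical audit records for illustration only.
records = [
    {"user": "ben", "operation": "AIInteraction",
     "time": datetime(2025, 9, 20), "detail": "Copilot prompt"},
    {"user": "scott", "operation": "SendMail",
     "time": datetime(2025, 9, 21), "detail": "email sent"},
    {"user": "ben", "operation": "AIInteraction",
     "time": datetime(2025, 6, 1), "detail": "Copilot prompt"},
]

def ai_interactions(records: list, since: datetime) -> list:
    """Filter the audit trail down to recent AI interaction events."""
    return [r for r in records
            if r["operation"] == "AIInteraction" and r["time"] >= since]

# Which AI interactions happened since the start of September?
recent = ai_interactions(records, since=datetime(2025, 9, 1))
```

Activity Explorer does this kind of filtering for you in the UI; the point is that AI prompts and responses are just more events in the same audit trail.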

And then there's some data risk assessments.

If you want to go run some assessments around the amount of sensitive data, links that have gotten shared, how many links have been created in the last 100 days or the last 30 or so.

There's also some of that.

It's similar to the data access governance reports in SharePoint.

Some of these data risk assessments around what types of links and sensitive data have been shared across my environment so that I can go in and respond, maybe have user training because it's highlighting some bad behaviors of employees.

But being able to run some of that reporting and assessment around where are people sharing data and how much sensitive data is out there?

Where are they sharing data? More than you think.

How much is out there? Also more than you think.

100%.
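A toy version of that kind of link assessment, again with a hypothetical record shape; the real numbers come from Purview and the SharePoint data access governance reports:

```python
from collections import Counter

# Hypothetical sharing-link records for illustration only.
links = [
    {"scope": "anyone", "has_sensitive": True},
    {"scope": "organization", "has_sensitive": False},
    {"scope": "anyone", "has_sensitive": False},
    {"scope": "specific-people", "has_sensitive": True},
]

# How many links exist per sharing scope?
links_by_scope = Counter(link["scope"] for link in links)

# How many links point at content with sensitive information?
sensitive_links = sum(1 for link in links if link["has_sensitive"])
```

Even this simple tally shows why the reports surprise people: anonymous "anyone" links and sensitive-content links both tend to be more numerous than expected.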

The other interesting thing, Ben, I wonder if you've run into this, just, like, with any of your customers, is some of the gen AI platforms don't actually run their own things.

So, like, if you're a Perplexity customer coming through a Perplexity API, then you're actually using an underlying model from, like, an Anthropic or OpenAI or something on the back end.

So there's all these weird little chains.

I would also encourage folks, like, if you are looking at this, and some of the stuff we talked about, like, hey, this thing can monitor AI sites.

That's, like, a fixed list of sites that come out.

So the list is very comprehensive.

Like, I will say, if you go through the list of sites that it does cover, it's a lot.

Like, I hadn't even heard of, like, 90% of them easy.

But, you know, if there's something out there where you're like, oh, this is a new Gen AI tool or AI products that my organization uses, it might not be on the list.

You gotta go figure that stuff out too.

Yes.

Microsoft does work to keep it updated, but, yeah, I mean, you or I could go stand up a site on Azure, give it some random domain name tied into some LLM on the back end, and all of a sudden, there's a new Gen AI site out there hosted by the podcast.

Is that what you wanna do?

No.

That is not what I wanna do.

But we could.

Funny enough, it does monitor.

So it does monitor azure.microsoft.com, and specifically the OpenAI Service product page.

So, like, the page that would tell you about it, not the Learn docs and how to use it, but more like the marketing page with the pricing and things like that on there.

So that's on the list, but, azure.com is not.

So like you said, you can potentially spin these things up, because you can absolutely get into the portal.

But it had a lot. Like, I was kind of impressed.

It had, like, Devin; a lot of the smaller stuff was baked into there as well.

Along with that, like, I think the more US-centric or European offerings, but also stuff out of Asia Pacific and China and things like that.

So it had broad coverage, but, you know, you gotta check the list to make sure your thing is there.

Yep.

Yeah.

They've done a good job with it.

So I think that's kind of the last aspect to it.

And that's what I talked about in that presentation: not only do you need to go in and be, let's say, preventative about where this data is going, cleaning up security, cleaning up permissions, starting to get labels on your content, knowing what content is out there, but then leverage something like DSPM, where you can deploy those DLP policies; a lot of this is just baked in to help you kinda get there, I would say, quickly.

But then also that kinda ongoing monitoring, like, going in and looking at the activity explorer in here and some of the reports in here to just see, like, I've had 540 activities in the last thirty days with different interactions with Copilot and agents.

Going and looking at my total visits to other AI apps, you can see I do tend to favor Copilot, but like Notion and Perplexity and Suno and some of those other AI tools that I have used have been showing up in here.

So just being aware of where are your users potentially exposing your data to places they shouldn't be.

Yeah.

I I think it's, it's an important one.

Absolutely.

But with that, are we supposed to go back to work now?

Is that what's supposed to happen?

We are, we are supposed to go back to work.

Oh.

Alright.

Go back to work.

Before you go back to work, if you've made it this far and you're still listening.

Feel free to reach out.

We have a contact us page on the website.

Let us know what you'd like to hear about, what topics are interesting to you.

Give us a ping on LinkedIn.

You can find Ben there.

You can find myself there.

You can find the podcast there.

And let us know what topics, people, interviews, anything like that would be interesting to you.

Absolutely.

We'd love to hear from you.

So thanks.

Alright.

Thanks, Ben.

As always, I appreciate it.

Definitely.

And we'll talk to you later.

If you enjoyed the podcast, go leave us a five star rating in iTunes.

It helps to get the word out so more IT pros can learn about Office 365 and Azure.

If you have any questions you want us to address on the show, or feedback about the show, feel free to reach out via our website, Twitter, or Facebook.

Thanks again for listening, and have a great day.
