Artificial Intelligence (AI) and Large Language Models (LLMs) are now hot topics in every company and organization. How can we use them to increase our efficiency and become more competitive? The challenge is that the train has already left the station: our users have already started using these tools, and we as an organization need to understand how they are being used.
For instance, the data that users submit to AI services offered by various cloud providers can contain sensitive or personal information that must be protected under laws and regulations. If the organization does not know which AI services are being used, by whom, and for what purpose, the result can be data breaches, fines, loss of trust, and damage to the brand.
How Can You Get an Overview of Your Organization’s AI Usage?
Microsoft Defender for Cloud Apps is a solution that gives you visibility into and control over your cloud apps and services. With Defender for Cloud Apps, you can discover which cloud apps and services are being used in your organization. Over 31,000 applications have been security-reviewed and categorized to provide insight into what each app is and the risks it might pose, such as where data is stored and who owns the service. One of these categories covers generative AI services such as Microsoft Bing Chat, Google Bard, ChatGPT, and Amazon AI. You can also monitor and analyze user behavior, data flows, and security incidents in your cloud apps and services.
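If you want to work with the discovery data outside the portal, the Defender for Cloud Apps REST API can be queried from a script. Below is a minimal, hypothetical sketch: the tenant URL, the discovery endpoint path, and the category filter are my assumptions, not confirmed API details, so check the API reference for your tenant before relying on it. What I do know is that the API authenticates with a token created under Settings > Security extensions.

```python
# Hypothetical sketch: pulling discovered generative AI apps from the
# Defender for Cloud Apps API. The tenant URL, endpoint path, and the
# "Generative AI" category filter are assumptions; verify them against
# your tenant's API reference.
import requests

TENANT_URL = "https://mytenant.portal.cloudappsecurity.com"  # placeholder tenant URL
API_TOKEN = "<your-api-token>"  # created under Settings > Security extensions

def list_discovered_ai_apps() -> dict:
    # The Defender for Cloud Apps REST API uses a "Token" authorization header.
    headers = {"Authorization": f"Token {API_TOKEN}"}
    response = requests.get(
        f"{TENANT_URL}/api/v1/discovery/discovered_apps/",  # hypothetical path
        headers=headers,
        params={"category": "Generative AI"},  # hypothetical filter name
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for app in list_discovered_ai_apps().get("data", []):
        print(app.get("name"), app.get("riskScore"))
```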
With the integration with Defender for Endpoint, you can even block access to services that should not be used.
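In the portal this happens when you mark an app as unsanctioned, which pushes custom network indicators to Defender for Endpoint so that onboarded devices block the app's domains. As an illustration, here is a minimal sketch of submitting such a block indicator yourself via the Defender for Endpoint indicators API; the token and domain are placeholders, and the exact fields should be checked against the API documentation.

```python
# A minimal sketch: submitting a custom network indicator to the
# Microsoft Defender for Endpoint indicators API to block a domain on
# onboarded devices. The access token and domain are placeholders.
import requests

ACCESS_TOKEN = "<azure-ad-app-token>"  # app registration with the Ti.ReadWrite permission

def block_domain(domain: str) -> dict:
    # POST a custom indicator with action "Block" for the given domain.
    response = requests.post(
        "https://api.securitycenter.microsoft.com/api/indicators",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        json={
            "indicatorValue": domain,
            "indicatorType": "DomainName",
            "action": "Block",
            "title": f"Block unsanctioned AI service {domain}",
            "description": "Blocked pending review against the organization's AI guidelines.",
            "severity": "Medium",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    block_domain("unapproved-ai-tool.example")  # hypothetical domain
```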
The biggest advantage is gaining an understanding of what is currently being used. This can serve as a basis for developing guidelines on how, and in which services, AI may be used within the organization.
Purview AI Hub
When it comes to gaining insight into risky usage, such as sensitive information being shared with generative AI services, Microsoft recently launched AI Hub as a feature within Microsoft Purview.
You can read more about Microsoft’s AI Hub itself here: Microsoft Purview data security and compliance protections for Microsoft Copilot and other generative AI apps | Microsoft Learn
In this article, I aim to give you tips on how to get the most out of the tool.
The feature builds on built-in DLP (Data Loss Prevention) functionality specifically designed to monitor AI services. The monitored services are listed here: Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
When an organization activates AI Hub, an endpoint DLP rule is set up in the background. This rule is specifically tailored to monitor AI services and ensure that sensitive information is audited.
By default, it monitors ALL predefined Sensitive Information Types (SITs) if they appear on any of the listed LLM sites.
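Conceptually, the auto-created rule behaves like the schematic below. This is not the actual Purview rule schema; the field names are invented purely to illustrate the logic (audit every predefined SIT on every monitored AI site, and block nothing by default).

```python
# Schematic illustration only, NOT the actual Purview rule schema.
# It captures the logic of the auto-created endpoint DLP rule: audit any
# predefined sensitive information type sent to a monitored AI site.
auto_created_rule = {
    "name": "AI Hub - detect sensitive info in AI sites",
    "scope": "endpoint",
    "monitored_sites": "all AI sites on Microsoft's supported list",
    "conditions": {"sensitive_info_types": "ALL predefined SITs"},
    "actions": ["audit"],  # events are recorded; nothing is blocked by default
}
```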
This is a good starting point for inventorying the data used in these AI services. However, my advice is to undertake a review to determine which data is truly sensitive and critical within the organization, then customize this rule accordingly to include the data that is most relevant to monitor. If you haven't already done this work, it involves initiating a Data Discovery process that engages information owners and other stakeholders in the organization to:
- Inventory the results of built-in Trainable Classifiers and SITs and ensure that they correctly identify critical data.
- Create custom SITs based on input from stakeholders/information owners and verify that they work as intended (a quick way to sanity-check a pattern is sketched below).
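Before a custom SIT goes into Purview, it pays off to validate its pattern against known true and false positives. Here is a minimal sketch, assuming a hypothetical employee-ID format ("EMP" followed by six digits); replace the pattern and samples with whatever your information owners describe.

```python
# A quick sanity check of a custom SIT regex before it is created in Purview.
# The employee-ID format (EMP + six digits) is a hypothetical example.
import re

EMPLOYEE_ID = re.compile(r"\bEMP\d{6}\b")

samples = {
    "Ticket raised by EMP123456 yesterday": True,   # should match
    "Temperature was 123456 degrees": False,        # digits alone should not match
    "EMP12345 is one digit short": False,           # wrong length should not match
}

for text, expected in samples.items():
    matched = bool(EMPLOYEE_ID.search(text))
    status = "OK" if matched == expected else "UNEXPECTED RESULT"
    print(f"{status}: {text!r} -> matched={matched}")
```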
This approach allows us to tailor the DLP rule for generative AI to the requirements of these information owners. A further advantage is that we can then leverage even more features within Purview, namely Communication Compliance and eDiscovery.
With these services, we can create policies with predefined incident flows that notify the information owners if their sensitive information appears in Copilot interactions or in other connected services such as Slack or WhatsApp.
Furthermore, we can use eDiscovery to compile evidence in the event of an incident and assign it to the correct information owner or legal department.
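Such a case can also be created programmatically through the Microsoft Graph eDiscovery API, so that evidence collection starts as soon as an incident is confirmed. A minimal sketch, assuming an app registration with the eDiscovery.ReadWrite.All permission (the access token and case details are placeholders):

```python
# A minimal sketch: creating an eDiscovery (Premium) case via Microsoft Graph.
# The access token and case details are placeholders.
import requests

ACCESS_TOKEN = "<graph-token>"  # app registration with eDiscovery.ReadWrite.All

def create_ediscovery_case(display_name: str, description: str) -> dict:
    response = requests.post(
        "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        json={"displayName": display_name, "description": description},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    case = create_ediscovery_case(
        "AI data exposure - HR employee IDs",  # hypothetical case name
        "Evidence collection for sensitive data shared with a generative AI service.",
    )
    print(case["id"])
```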
I hope this article has provided you with greater insight into how to gain better control over the use of generative AI, as well as the importance of involving your business in this process to make the most of Purview’s solutions.
