For the most part, Data Security Posture Management for AI is easy to use and self-explanatory, guiding you through prerequisites and preconfigured reports and policies. Use this section to complement that information and provide additional details that you might need.
Prerequisites for Data Security Posture Management for AI
To use Data Security Posture Management for AI from the Microsoft Purview portal, you must have the following prerequisites:
You have the right permissions.
Required for monitoring interactions with Copilot and agents:
Microsoft Purview auditing is enabled for your organization. Although this is the default, you might want to check the instructions for Turn auditing on or off.
For Microsoft 365 Copilot and agents, users are assigned a license for Microsoft 365 Copilot.
For Copilot in Fabric and Security Copilot:
- The enterprise version of Microsoft Purview data governance, to support the required APIs.
- A collection policy, such as the one created from the recommendation Secure interactions for Microsoft Copilot experiences.
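As a quick check of the auditing prerequisite above, you can query the unified audit log configuration from Exchange Online PowerShell. This is a sketch, assuming you have the ExchangeOnlineManagement module installed and permission to connect to your tenant:

```powershell
# Connect to Exchange Online (requires the ExchangeOnlineManagement module)
Connect-ExchangeOnline

# Check whether unified audit log ingestion is enabled for the organization
Get-AdminAuditLogConfig | Format-List UnifiedAuditLogIngestionEnabled

# If the value is False, turn auditing on
Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true
```

It can take some time after enabling auditing before audit records start to appear.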
Required for monitoring interactions and applying DLP policies to other AI apps in Edge:
- An Edge configuration policy is required to activate the Microsoft Purview integration in Edge. For configuration information, see Activate your DLP policy in Microsoft Edge.
Required for monitoring interactions with third-party generative AI sites:
Devices are onboarded to Microsoft Purview, required for:
- Gaining visibility into sensitive information that's shared with third-party generative AI sites. For example, a user pastes credit card numbers into ChatGPT.
- Applying endpoint DLP policies to warn or block users from sharing sensitive information with third-party generative AI sites. For example, a user identified as elevated risk in Adaptive Protection is blocked with the option to override when they paste credit card numbers into ChatGPT.
The Microsoft Purview browser extension is deployed to Windows users and required to discover visits to third-party generative AI sites by using an Insider Risk Management policy. The browser extension is also required for endpoint DLP policies on Windows when you use Chrome.
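To spot-check whether a Windows device is onboarded, you can inspect the onboarding state in the registry on the device. This is a sketch, assuming device onboarding through the Defender for Endpoint client that Microsoft Purview device onboarding uses; the registry path shown is the commonly documented status location:

```powershell
# Check device onboarding status on a Windows device
# OnboardingState = 1 indicates the device is onboarded
Get-ItemProperty `
  -Path 'HKLM:\SOFTWARE\Microsoft\Windows Advanced Threat Protection\Status' `
  -Name OnboardingState
```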
For AI apps other than Microsoft 365 Copilot and Microsoft Facilitator, you've set up pay-as-you-go billing for your organization. When this billing model is applicable for specific configurations, you'll see notifications and instructions in the UI.
You'll find more information about the prerequisites for auditing, device onboarding, and the browser extension in Data Security Posture Management for AI: navigate to the Overview > Get started section.
For a list of currently supported third-party AI apps, see Supported AI sites by Microsoft Purview for data security and compliance protections.
Note
If you're using administrative units, restricted administrators can't create the one-click policies that apply to all users. You must be an unrestricted administrator to create these policies. Restricted administrators see results only for users in their assigned administrative unit, both on the Policies page and in the reports and activity explorer in Microsoft Purview Data Security Posture Management for AI.
One-click policies from Data Security Posture Management for AI
After the default policies are created, you can view and edit them at any time from their respective solution areas in the portal. For example, you might want to scope the policies to specific users during testing or for business requirements, or add or remove the classifiers that are used to detect sensitive information. Use the Policies page to quickly navigate to the right place in the portal.
Some policies, such as DSPM for AI - Capture interactions for Copilot experiences and DSPM for AI - Detect sensitive info shared with AI via network are collection policies that you can edit, and if necessary, delete like any other collection policy. For more information, see Accessing collection policies.
If you delete any of the policies, their status on the Policies page displays PendingDeletion and continues to show as created in their respective recommendation cards until the deletion process is complete.
For sensitivity labels and their policies, view and edit these independently from Data Security Posture Management for AI, by navigating to Information Protection in the portal. For more information, use the configuration links in Default labels and policies to protect your data.
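To confirm whether sensitivity labels already exist in your tenant, which determines whether the default labels recommendation is skipped, you can list them from Security & Compliance PowerShell. A sketch, assuming the ExchangeOnlineManagement module is installed:

```powershell
# Connect to Security & Compliance PowerShell
Connect-IPPSSession

# List existing sensitivity labels with their priority order
Get-Label | Format-Table DisplayName, Priority, Guid
```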
For more information about the supported DLP actions and which platforms support them, see the first two rows in the table from Endpoint activities you can monitor and take action on.
For the default policies that use Adaptive Protection, this capability is turned on if it's not already on, using default risk levels for all users and groups to dynamically enforce protection actions. For more information, see Quick setup.
Note
Any default policies created while Data Security Posture Management for AI was in preview and named Microsoft Purview AI Hub won't be changed. For example, policy names will retain their Microsoft AI Hub - prefix.
Default policies for data discovery using Data Security Posture Management for AI
DLP policy: DSPM for AI: Detect sensitive info added to AI sites
Source: Extend your insights for data discovery
This policy discovers sensitive content pasted or uploaded to AI sites in Microsoft Edge, Chrome, and Firefox. This policy covers all users and groups in your org in audit mode only.
Insider risk management policy: DSPM for AI - Detect when users visit AI sites
Source: Extend your insights for data discovery
This policy detects when users use a browser to visit AI sites.
Insider risk management policy: DSPM for AI - Detect risky AI usage
Source: Recommendation Detect risky interactions in AI apps
This policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot, agents, and other generative AI apps.
Communication Compliance: DSPM for AI - Unethical behavior in AI apps
Source: Recommendation Detect unethical behavior in AI apps
This policy detects sensitive information in prompts and responses in Microsoft 365 Copilot, agents, and other generative AI apps. This policy covers all users and groups in your organization.
Collection policy: DSPM for AI - Capture interactions for Copilot experiences
Source: Recommendation Secure interactions in Microsoft Copilot experiences
This policy captures prompts and responses for data security posture and regulatory compliance from Copilot in Fabric and Security Copilot. Manage them in Microsoft Purview solutions like eDiscovery, Data Lifecycle Management, and more.
Collection policy: DSPM for AI - Detect sensitive info shared with AI via network
Source: Recommendation Extend insights into sensitive data in AI app interactions
This policy detects sensitive information shared with AI apps in browsers, applications, APIs, add-ins, and more, using a Secure Access Service Edge (SASE) or Security Service Edge (SSE) integration.
Important
This policy requires that you manually add one or more Secure Access Service Edge (SASE) or Security Service Edge (SSE) integrations in Data Loss Prevention settings. The detection of AI interactions is dependent on the network partner implementation.
If you want to capture prompts and responses in addition to detecting sensitive information, you must edit the collection policy and select the option to capture content.
Collection policy: DSPM for AI - Capture interactions for enterprise AI apps
Source: Recommendation Secure interactions from enterprise apps
This policy captures prompts and responses for regulatory compliance from enterprise AI apps, such as ChatGPT Enterprise and AI apps connected through Microsoft Entra or Azure AI services, so they can be managed in Microsoft Purview solutions like eDiscovery, Data Lifecycle Management, and more.
Collection policy: DSPM for AI - Detect sensitive info shared in AI prompts in Edge
Source: Extend your insights for data discovery
This policy detects prompts sent to generative AI apps in Microsoft Edge and discovers sensitive information shared in prompt contents. This policy covers all users and groups in your organization in audit mode only.
Default policies from data security to help you protect sensitive data used in generative AI
DLP policy: DSPM for AI - Block sensitive info from AI sites
Source: Recommendation Fortify your data security
This policy uses Adaptive Protection to block, with the option to override, users at elevated risk who attempt to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode.
DLP policy: DSPM for AI - Block elevated risk users from submitting prompts to AI apps in Microsoft Edge
Source: Recommendation Fortify your data security
This policy uses Adaptive Protection to block users at elevated, moderate, and minor risk levels from submitting information to AI apps while using Microsoft Edge.
DLP policy: DSPM for AI - Block sensitive info from AI apps in Edge
Source: Recommendation Fortify your data security
This policy uses inline detection for a selection of common sensitive information types and blocks prompts from being sent to AI apps while using Microsoft Edge.
Information Protection - Sensitivity labels and policies
Source: Recommendation Protect your data with sensitivity labels
This recommendation creates default sensitivity labels and sensitivity label policies. If you've already configured sensitivity labels and their policies, this configuration is skipped.
Activity explorer events
Use the following information to help you understand the events you might see in the activity explorer from Data Security Posture Management for AI. References to a generative AI site can include Microsoft 365 Copilot, Microsoft 365 Copilot Chat, agents, other Microsoft copilots, and third-party AI sites.
| Event | Description |
|---|---|
| AI interaction | User interacted with a generative AI site. Details include the prompts and responses. For Microsoft 365 Copilot and Microsoft 365 Copilot Chat, this event requires auditing to be turned on. For Copilot in Fabric and Security Copilot, and for non-Copilot AI apps, prompts and responses require a collection policy with content capture selected to capture these interactions. |
| AI website visit | User browsed to a generative AI site. |
| DLP rule match | A data loss prevention rule was matched when a user interacted with a generative AI site. Includes DLP for Microsoft 365 Copilot. |
| Sensitive info types | Sensitive information types were found while a user interacted with a generative AI site. For Microsoft 365 Copilot and Microsoft 365 Copilot Chat, this event requires auditing to be turned on but doesn't require any active policies. |
The AI interaction event doesn't always display text for the Copilot prompt and response. Sometimes the prompt and response span consecutive entries. Other scenarios include:
- For Microsoft Facilitator AI-generated notes, no prompt or response is displayed
- When a user doesn't have a mailbox hosted in Exchange Online, no prompt or response is displayed
The Sensitive info types detected event doesn't display the user risk level.
For Microsoft Facilitator AI-generated notes, AI interaction events can't be linked to Sensitive info types detected events.
For collection policies, no prompt or response is displayed if the option to capture content isn't selected in the policy. For example, the one-click policy DSPM for AI - Detect sensitive info shared with AI via network doesn't select this option when the policy is automatically created, but you can manually edit the policy and select this option after the policy is created.
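To look at the underlying audit records behind AI interaction events outside of activity explorer, you can search the unified audit log from Exchange Online PowerShell. A sketch, assuming auditing is enabled and you're connected with Connect-ExchangeOnline; the record type shown is the one used for Copilot interaction records:

```powershell
# Search the last 7 days of Copilot interaction audit records
Search-UnifiedAuditLog `
  -StartDate (Get-Date).AddDays(-7) `
  -EndDate (Get-Date) `
  -RecordType CopilotInteraction `
  -ResultSize 100 |
  Select-Object CreationDate, UserIds, Operations
```

The AuditData property of each returned record contains the event detail as JSON if you need to inspect individual interactions.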