Hello Stephanie Frenel,
Welcome to the Microsoft Q&A and thank you for posting your questions here.
I understand that you are concerned about the Azure AI Foundry data retention issue.
Azure OpenAI, when used through Azure AI Foundry, applies a built-in 30-day abuse-monitoring retention window for prompts and completions, and customers cannot disable it on their own. The only way to fully stop this retention is to apply for Modified Abuse Monitoring, an approval workflow available only to eligible managed organizations. Microsoft documents both the approval process and the retention behavior in official references: the Azure OpenAI abuse-monitoring overview and the Microsoft Community Hub data-storage article, which explains what data is and is not persisted in your tenant. For more insight, see:
- https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/data-storage-in-azure-openai-service/4382502
- https://learn.microsoft.com/en-us/azure/foundry/openai/concepts/abuse-monitoring
If your organization does not qualify for that approval, the correct compliance posture shifts to preventing sensitive data from ever reaching the service. You can do this by enforcing pre-processing redaction or tokenization at an API gateway such as Azure API Management, and by avoiding stateful Foundry features such as stored threads or memory, which create additional persistence. These practices are consistent with Microsoft's published abuse-monitoring documentation, which clarifies that model safety systems operate independently of your own storage and that persisted data stems only from optional features you enable. See:
- https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/data-storage-in-azure-openai-service/4382502
- https://learn.microsoft.com/en-us/azure/foundry/openai/concepts/abuse-monitoring
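To make the redaction step concrete, here is a minimal Python sketch of the idea. The regex patterns and placeholder labels are illustrative assumptions, not an official Azure API Management policy; in production you would implement the equivalent logic as an APIM policy or gateway middleware that runs before the request reaches the Azure OpenAI endpoint.

```python
import re

# Illustrative patterns only; extend or replace with your organization's
# PII detection rules (or a dedicated PII service) as needed.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders before the
    prompt is ever sent to the model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@contoso.com, SSN 123-45-6789."))
# prints: Contact [EMAIL], SSN [SSN].
```

Because the redaction happens upstream of the service, nothing sensitive can be captured by abuse monitoring in the first place, regardless of your approval status.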
To complete SOC 2-aligned governance, capture telemetry without storing user content by routing only metadata into Log Analytics and by enabling Microsoft Purview's built-in audit and lifecycle controls for Azure AI services. The Microsoft Community Hub guidance confirms that user-controlled feature storage can be deleted at any time, and Purview's governance pipeline provides compliant oversight without retaining sensitive payloads. Together, these measures form a validated, policy-aligned way to operate Azure AI Foundry securely when a full abuse-monitoring opt-out cannot be granted. Reference:
- https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/data-storage-in-azure-openai-service/4382502
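The metadata-only telemetry pattern can be sketched as follows. The field names here are assumptions for illustration; you would route the resulting record to Log Analytics via the Azure Monitor Logs Ingestion API or an exporter of your choice.

```python
import hashlib
import json
import time

def telemetry_record(prompt: str, completion: str, model: str) -> dict:
    """Build a metadata-only log record: no prompt or completion text is
    retained, only sizes and a one-way hash for correlation."""
    return {
        "timestamp": time.time(),
        "model": model,
        "prompt_chars": len(prompt),
        "completion_chars": len(completion),
        # A one-way hash lets you correlate repeated prompts in your logs
        # without ever storing the prompt text itself.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }

record = telemetry_record("my sensitive prompt", "model answer", "gpt-4o")
assert "my sensitive prompt" not in json.dumps(record)  # content never logged
```

This keeps your observability pipeline useful for audits (volumes, models, timing) while guaranteeing that sensitive payloads never land in Log Analytics.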
I hope this is helpful! Do not hesitate to let me know if you have any other questions or clarifications.
Please don't forget to close the thread by upvoting and accepting this answer if it was helpful.