83 questions with the Azure AI Content Safety tag
Azure AI Content Safety: An Azure service that enables users to identify content that is potentially offensive, risky, or otherwise undesirable. Previously known as Azure Content Moderator.

Sort by: Updated
1 answer

Unsupported File Type Extension while Following Azure Exercise for RAG-based Solutions

Hey, so I'm following along with the exercise: https://learn.microsoft.com/en-us/training/modules/build-copilot-ai-studio/5-exercise I downloaded the provided brochure.zip file, which only contains PDF files. I was able to successfully upload the…
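
If it helps to rule out the archive itself, here is a quick local check (a sketch using only Python's standard library, assuming the downloaded file is named brochure.zip in the working directory) that lists every entry and flags anything that is not a .pdf; hidden metadata entries such as __MACOSX/ or ._ files are a common cause of unsupported-file-type errors:

```python
import zipfile

# List every entry in the downloaded archive and flag anything that is not a .pdf.
# Hidden metadata entries (e.g. __MACOSX/ or ._* files added by macOS) often cause
# "unsupported file type" errors when the extracted contents are uploaded.
with zipfile.ZipFile("brochure.zip") as archive:
    for name in archive.namelist():
        suspicious = not name.lower().endswith(".pdf")
        print(f"{'!! ' if suspicious else '   '}{name}")
```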

Azure AI Content Safety
asked 2025-12-05 by Jonathan Nguyen (0 Reputation points)
commented 2025-12-25 by Sridhar M (3,320 Reputation points, Microsoft External Staff Moderator)
1 answer

Become a managed customer and reduce prompt filtering

Hi! I'm trying to create a ticket, but I don't see an option to do it on our Azure dashboard (although we should have access to it). I'm trying to become a managed user and thus reduce the filtering applied to our prompts. We are a health tech company,…

Azure AI Content Safety
asked 2025-12-18 by Usawa Care (0 Reputation points)
commented 2025-12-25 by Sridhar M (3,320 Reputation points, Microsoft External Staff Moderator)
1 answer

Internal error when training a custom category

I'm trying to use the Content Safety Studio to train the AI on a custom category. I uploaded my .jsonl file to blob storage and have 64 examples in the following format in the file: {"text": "Message: Temporary Access Token One-Time…
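
Since a single malformed line in the .jsonl file can surface as a generic internal error, it may be worth validating the file locally before retraining; this is a minimal sketch using only Python's standard library, assuming a local copy named examples.jsonl and the one-JSON-object-per-line format shown above:

```python
import json

# Verify that each line of the custom-category training file is a standalone JSON
# object containing a non-empty "text" string, matching the format quoted above.
with open("examples.jsonl", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        line = line.strip()
        if not line:
            print(f"line {lineno}: empty line")
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError as exc:
            print(f"line {lineno}: invalid JSON ({exc})")
            continue
        if not isinstance(record.get("text"), str) or not record["text"].strip():
            print(f"line {lineno}: missing or empty 'text' field")
```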

Azure AI Content Safety
asked 2025-12-05 by Tony Williams (5 Reputation points)
answered 2025-12-23 by Manas Mohanty (13,425 Reputation points, Moderator)
2 answers One of the answers was accepted by the question author.

Handling Content Filters for generating SOAP Notes for a Medical Use Case in Agents Service

Hey everyone, I am using the Azure OpenAI API with the gpt-5-mini model, and I have a medical use case: transcribe a doctor-patient appointment and generate SOAP Notes from it. But since in our case the patient might mention violence or self-harm to the doctor…
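
For what it's worth, a filtered transcript comes back from the service as an HTTP 400 with error code content_filter, which the openai Python package raises as BadRequestError; below is a minimal sketch of catching it and routing the case for manual review (assuming the v1 openai package configured for Azure, an example api-version, and a placeholder deployment name):

```python
import os
from openai import AzureOpenAI, BadRequestError

# Assumes the v1 "openai" package pointed at an Azure OpenAI resource.
# The api-version is an example; the deployment name is a placeholder.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

def generate_soap_note(transcript: str) -> str:
    try:
        response = client.chat.completions.create(
            model="YOUR_DEPLOYMENT_NAME",  # placeholder deployment name
            messages=[
                {"role": "system", "content": "Summarize the visit as a SOAP note."},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content
    except BadRequestError as err:
        # A filtered prompt is returned as HTTP 400 with error code "content_filter".
        if "content_filter" in str(err):
            return "Transcript was blocked by the content filter; route for manual review."
        raise
```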

Azure AI Content Safety
asked 2025-12-11 by Omar Elhanafy (40 Reputation points)
commented 2025-12-19 by Omar Elhanafy (40 Reputation points)
1 answer One of the answers was accepted by the question author.

Azure AI Foundry (Classic) Content Filters not working

Greetings! I tried adding a content filter to Azure AI Foundry (Classic experience) and it doesn't work. Steps to reproduce: created an AI agent in Azure AI Foundry using a pre-created gpt-4.1 model deployment; created a content filter (both…

Azure AI Content Safety
asked 2025-12-05 by Nazarii Klymok (20 Reputation points)
accepted 2025-12-18 by Nazarii Klymok (20 Reputation points)
1 answer

Difference between HTTP 400 content filter errors and data in Azure portal Metrics

We are getting HTTP 400 content filter errors quite often, but our Azure portal metrics graph shows only 1 instance in the last month. This makes it hard to keep track of the issue. We've also hit rate limits for certain deployments and faced…

Azure AI Content Safety
asked 2025-12-17 by Josh Ferry Woodard (0 Reputation points)
answered 2025-12-17 by Anshika Varshney (5,055 Reputation points, Microsoft External Staff Moderator)
2 answers One of the answers was accepted by the question author.

Azure Content Safety API not working

I'm getting {"blocklistsMatch":[],"categoriesAnalysis":[]} for all text moderation calls I'm making. It used to work before…
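
To narrow down whether the service or the calling code is at fault, a minimal reproduction with the azure-ai-contentsafety Python package can help; this is a sketch assuming key-based auth, with the environment variable names chosen only for illustration:

```python
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Assumes CONTENT_SAFETY_ENDPOINT and CONTENT_SAFETY_KEY hold the resource's
# endpoint URL and key (names chosen for illustration).
client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Analyze a sample string and print the per-category severities the service returns;
# an unchanged empty list here points at the resource/endpoint rather than the caller.
result = client.analyze_text(AnalyzeTextOptions(text="I want to hurt someone."))
for item in result.categories_analysis:
    print(item.category, item.severity)
```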

Azure AI Content Safety
asked 2025-12-03 by Susie Park (20 Reputation points)
accepted 2025-12-10 by Susie Park (20 Reputation points)
1 answer

Enable Zero Data Retention on an OpenAI resource

I am trying to enable Zero Data Retention on an OpenAI resource in my org. How should I do this?

Azure AI Content Safety
asked 2025-12-03 by Ashwin (0 Reputation points)
commented 2025-12-09 by Anshika Varshney (5,055 Reputation points, Microsoft External Staff Moderator)
2 answers One of the answers was accepted by the question author.

AI-Driven Cloud Cost Optimization

Hi experts! I want to: Implement a system that uses Azure Cost Management API + ML models (Azure ML) to predict future spend. Auto-trigger rightsizing (scale sets, reserved instances) based on prediction. Deliverable: Self-adjusting cost governance…
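
As a starting point for the prediction piece, daily cost history can be pulled from the Cost Management query endpoint; the sketch below uses the requests package and assumes an already-acquired ARM bearer token, a subscription-level scope, and an example api-version that should be checked against the current Cost Management documentation:

```python
import os
import requests

# Assumes a valid ARM access token (scope https://management.azure.com/.default)
# and the subscription ID are provided via environment variables (names chosen
# for illustration). The api-version is an example to verify against the docs.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
token = os.environ["ARM_ACCESS_TOKEN"]

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.CostManagement/query?api-version=2023-03-01"
)
body = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "Daily",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
    },
}
response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
result = response.json()["properties"]
print(result["columns"])
for row in result["rows"]:
    # Each row follows the column order reported in result["columns"].
    print(row)
```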

Azure AI Content Safety
asked 2025-09-08 by Nidhi Priya (571 Reputation points)
commented 2025-12-05 by Ash Katie (0 Reputation points)
1 answer

Modified Content Filters / Managed Customer access.

[URLs modified to not qualify as a backlink] For our service https://crimeowl(.)ai we use managed AI instances of various LLM models. Since we investigate crime and cold cases, we get material that describes crimes as well. We already set…

Azure AI Content Safety
asked 2025-11-26 by Martijn van Halen (0 Reputation points)
answered 2025-11-26 by SRILAKSHMI C (11,765 Reputation points, Microsoft External Staff Moderator)
1 answer One of the answers was accepted by the question author.

Validity of Scan Reports Despite Logged Errors

We ran the baseline scan and attack strategy scan against our model using the Red Team SDK. Although the output was generated, the logs show an error. Should I consider the report valid or invalid?

Azure AI Content Safety
asked 2025-11-23 by Karthickumar Karuppiah (80 Reputation points)
accepted 2025-11-24 by Karthickumar Karuppiah (80 Reputation points)
1 answer One of the answers was accepted by the question author.

RedTeam SDK: ProtectedMaterial, CodeVulnerability, and UngroundedAttributes Testing

In the RedTeam SDK, I can see that the following categories have been added: ProtectedMaterial 200, CodeVulnerability 389, UngroundedAttributes 200. How can I test these against my model? What do ProtectedMaterial 200, CodeVulnerability 389, and…

Azure AI Content Safety
asked 2025-11-23 by Karthickumar Karuppiah (80 Reputation points)
accepted 2025-11-24 by Karthickumar Karuppiah (80 Reputation points)
1 answer One of the answers was accepted by the question author.

Handling or Disabling Content Filters for a Medical Use Case in Agents Service

Hey everyone, I am using Azure AI Agent Service with the OpenAI gpt-4.1 model, and I have a medical use case that requires the agent to detect self-harm responses and act appropriately by calling our custom tools. Approaches used: configuring…

Azure AI Content Safety
asked 2025-11-14 by Omar Elhanafy (40 Reputation points)
commented 2025-11-19 by Sridhar M (3,320 Reputation points, Microsoft External Staff Moderator)
2 answers One of the answers was accepted by the question author.

With the Red Team SDK, can we test only safety risks, or can we also test the OWASP Top 10 for LLMs?

The Red Team Agent SDK is working fine for testing risk categories and attack strategies. I want to know whether it is possible to test prompt injection, indirect prompt injection, and data poisoning attacks using the Red Team SDK.

Azure AI Content Safety
asked 2025-11-14 by Karthickumar Karuppiah (80 Reputation points)
edited an answer 2025-11-17 by SRILAKSHMI C (11,765 Reputation points, Microsoft External Staff Moderator)
1 answer One of the answers was accepted by the question author.

AI Red Team Agent unable to get output

Prerequisites: Red Team SDK and other libraries should be installed, and environment variables for CLIENT_ID, CLIENT_SECRET, TENANT_ID, and AZURE_AI_PROJECT should be set. When I run the source code, I don’t get any output.
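
One frequent cause of a silent run is the credential never resolving: azure-identity reads AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_CLIENT_SECRET (not CLIENT_ID, TENANT_ID, CLIENT_SECRET), so a quick sketch like the following, assuming the azure-identity package, confirms a token can be acquired before the scan is attempted:

```python
from azure.identity import DefaultAzureCredential

# DefaultAzureCredential's environment path expects AZURE_CLIENT_ID,
# AZURE_TENANT_ID, and AZURE_CLIENT_SECRET to be set (note the AZURE_ prefix).
credential = DefaultAzureCredential()

# If this call succeeds, authentication is configured; if it raises, the
# Red Team scan will not be able to reach the project either.
token = credential.get_token("https://management.azure.com/.default")
print("Token acquired, expires on:", token.expires_on)
```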

Azure AI Content Safety
asked 2025-11-07 by Karthickumar Karuppiah (80 Reputation points)
accepted 2025-11-11 by Karthickumar Karuppiah (80 Reputation points)
1 answer One of the answers was accepted by the question author.

Getting a 404 error when trying Azure AI Foundry AI Red Teaming

I am trying to work on AI red teaming in AI Foundry and retrieved a sample from the documentation for that, but I get an error even though the complete setup is there (AZURE_SUBSCRIPTION_ID, AZURE_RESOURCE_GROUP, AZURE_PROJECT_NAME). Here is my code and the error: import…

Azure AI Content Safety
asked 2025-07-30 by Abhishek Ramdhan Handibag (25 Reputation points)
commented 2025-11-07 by Aryan Parashar (3,690 Reputation points, Microsoft External Staff Moderator)
1 answer

Groundedness detection through Content Safety is not working

Hi, Content Safety's groundedness detection is not working; when I run my code I constantly receive the following output: { "ungroundedDetected": false, "ungroundedPercentage": 0, "ungroundedDetails": [] } Also when…
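
Note that an all-false result is also what the service returns when the grounding sources fully support the text, so it can help to re-test with a deliberately ungrounded claim; below is a minimal sketch of the preview detectGroundedness call using the requests package, where the api-version and field names are assumptions to verify against the current documentation:

```python
import os
import requests

# Assumes a Content Safety resource endpoint and key in environment variables
# (names chosen for illustration). The api-version and request fields follow the
# preview groundedness detection shape and should be verified against the docs.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"].rstrip("/")
key = os.environ["CONTENT_SAFETY_KEY"]

url = f"{endpoint}/contentsafety/text:detectGroundedness?api-version=2024-02-15-preview"
body = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "How many days of annual leave do employees get?"},
    # A claim deliberately NOT supported by the grounding source, so
    # ungroundedDetected should come back true if the call is wired correctly.
    "text": "Employees get 45 days of annual leave.",
    "groundingSources": ["Employees are entitled to 20 days of annual leave per year."],
    "reasoning": False,
}
response = requests.post(url, json=body, headers={"Ocp-Apim-Subscription-Key": key})
response.raise_for_status()
print(response.json())
```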

Azure AI Content Safety
asked 2025-09-20 by Gebru, J (Jonathan) (0 Reputation points)
answered 2025-10-31 by Manas Mohanty (13,425 Reputation points, Moderator)
1 answer

Got an email saying ImageGen resource is throttled but couldn't get more details on the problem

Hi, I just received an email saying my image generation resource is throttled, for the Model Name ImageGen. I don't know which model is actually causing the issue or what issue it is causing. I couldn't find any details on the issue other than the email I…

Azure AI Content Safety
asked 2025-10-27 by John Zhang (0 Reputation points)
answered 2025-10-28 by SRILAKSHMI C (11,765 Reputation points, Microsoft External Staff Moderator)
1 answer One of the answers was accepted by the question author.

Turn off content filtering

How do I turn off content filtering for Azure OpenAI and Cognitive Services? My app relies on those services to assess messages, some of which contain harmful or violent messaging.

Azure AI Content Safety
asked 2025-10-21 by Michael Varga (25 Reputation points)
edited a comment 2025-10-27 by Anshika Varshney (5,055 Reputation points, Microsoft External Staff Moderator)
1 answer One of the answers was accepted by the question author.

Model deployment succeeded, but endpoint responses are inconsistent across regions

I managed to resolve the earlier “ModelNotFound” issue when deploying my custom text classification model in Azure AI Foundry — turned out it was a region mismatch between the training and endpoint resources. Now I’m facing a new problem: while the model…

Azure AI Content Safety
asked 2025-10-25 by zhuzin zhuzin (40 Reputation points)
accepted 2025-10-25 by zhuzin zhuzin (40 Reputation points)