Hello Roland Bair,
Welcome to Microsoft Q&A, and thank you for reaching out.
I completely understand the concern, given that this is impacting a production service.
Based on the details you’ve shared, the AUP warning received for your Image-1.5 deployment indicates that Azure’s built-in content safety systems have detected repeated requests that may fall under restricted content categories. Detection is typically pattern-based and not always tied to a single request.
What this means
- The service has identified potential Acceptable Use Policy (AUP) violations (e.g., related to violence, hate, or other sensitive content categories).
- A 24-hour suspension warning is triggered when such patterns cross a threshold.
- If similar traffic continues, it may lead to a temporary suspension of the resource.
- At this stage, there is no direct way to remove the warning, but you can take immediate corrective action to prevent escalation.
Recommended actions
- Inspect content filter triggers
  - Enable logging of content_filter_results / annotations for each request.
  - Identify which category was triggered (e.g., violence, sexual, hate) and the severity level (low / medium / high).
  - This will help pinpoint exactly what is causing the warnings.
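To make the logging step concrete, here is a minimal Python sketch that walks a content-filter error payload and records which categories were actually flagged. The payload shape assumed here (error → innererror → content_filter_results) mirrors the common Azure OpenAI content-filter error format, but treat it as an assumption and adjust the keys to whatever your Image-1.5 deployment actually returns.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("content-filter-audit")

def extract_filter_hits(error_payload: dict) -> list:
    """Return the content-filter categories that were flagged, with severity.

    Assumed payload shape (verify against your deployment's responses):
    {"error": {"code": "content_filter",
               "innererror": {"content_filter_results": {
                   "violence": {"filtered": True, "severity": "medium"}, ...}}}}
    """
    results = (
        error_payload.get("error", {})
        .get("innererror", {})
        .get("content_filter_results", {})
    )
    return [
        {"category": category, "severity": detail.get("severity", "unknown")}
        for category, detail in results.items()
        if detail.get("filtered")
    ]

def log_request_outcome(request_id: str, error_payload: dict) -> None:
    """Log every blocked request so trigger patterns become visible over time."""
    hits = extract_filter_hits(error_payload)
    if hits:
        logger.warning("request %s blocked by content filter: %s",
                       request_id, json.dumps(hits))
```

Aggregating these log lines over a day or two should reveal which category/severity combination is driving the AUP warning.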
- Strengthen prompt pre-filtering (primary mitigation)
  - Add validation before sending requests to Image-1.5: block high-risk keywords and patterns, and rewrite or reject ambiguous prompts.
  - This can be implemented via a backend validation layer or Azure AI Content Safety.
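As a sketch of the backend validation layer, the snippet below rejects prompts that match a regex blocklist before they ever reach the deployment. The patterns shown are hypothetical placeholders; in practice you would derive them from the categories showing up in your logs, or replace this local check with a call to Azure AI Content Safety.

```python
import re
from typing import Optional, Tuple

# Placeholder blocklist -- replace with patterns derived from the
# categories actually flagged in your content-filter logs.
BLOCKED_PATTERNS = [
    re.compile(r"\b(gore|beheading|massacre)\b", re.IGNORECASE),
    re.compile(r"\b(nude|explicit)\b", re.IGNORECASE),
]

def validate_prompt(prompt: str) -> Tuple[bool, Optional[str]]:
    """Return (allowed, reason). Call this before sending the prompt
    to the image deployment; reject or rewrite on failure."""
    for pattern in BLOCKED_PATTERNS:
        match = pattern.search(prompt)
        if match:
            return False, f"blocked term: {match.group(0)}"
    return True, None
```

Prompts that fail validation can be routed to a rewrite step or returned to the user with the reason string, rather than being forwarded to the model.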
- Add output moderation
  - Review generated images (or metadata signals, if available).
  - Prevent displaying, storing, or reusing flagged outputs.
  - Feed these results back into your filtering logic.
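To feed flagged results back into your filtering logic, one simple pattern is a per-user, per-category ledger that escalates once a threshold is crossed. The threshold of 3 and the keying scheme below are illustrative assumptions, not part of any Azure API:

```python
from collections import Counter

class ModerationLedger:
    """Track flagged generations so repeated trigger patterns can be
    fed back into prompt pre-filtering (e.g., tighten rules per user)."""

    def __init__(self, escalation_threshold: int = 3):
        # Counts keyed by (user_id, category); the threshold is illustrative.
        self.flag_counts = Counter()
        self.threshold = escalation_threshold

    def record_flag(self, user_id: str, category: str) -> None:
        """Call this whenever an output is flagged; never store or
        display the flagged output itself."""
        self.flag_counts[(user_id, category)] += 1

    def should_block_user(self, user_id: str, category: str) -> bool:
        """True once a user has repeatedly triggered the same category."""
        return self.flag_counts[(user_id, category)] >= self.threshold
```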
- Handle false positives
  - In some edge cases, cropping, resizing, or simplifying inputs can reduce false positives.
  - This is particularly useful if your pipeline includes image-to-image scenarios.
- Configure content filtering policies
  - In Azure AI Foundry, you can associate a custom content filter policy with your deployment and adjust thresholds (e.g., control how medium/high severity content is handled).
  - If your use case requires more flexibility, you can apply for a Limited Access review (modified content filters), which allows more granular control, subject to approval.
- Implement abuse protection & rate limiting
  - Add user-level throttling and request pattern monitoring.
  - This helps prevent repeated triggering patterns from automated or unintended usage.
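User-level throttling can be as simple as a sliding-window limiter in front of the deployment. The limits below (10 requests per 60 seconds) are placeholder values to tune against your real traffic:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class SlidingWindowLimiter:
    """Allow at most `max_requests` per user within a rolling window.
    Stops automated retry loops from repeatedly hitting the content filter."""

    def __init__(self, max_requests: int = 10, window_s: float = 60.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self._hits = defaultdict(deque)  # user_id -> timestamps of recent requests

    def allow(self, user_id: str, now: Optional[float] = None) -> bool:
        """Return True if the request may proceed; record it if so."""
        now = time.monotonic() if now is None else now
        hits = self._hits[user_id]
        while hits and now - hits[0] > self.window_s:
            hits.popleft()  # drop requests that fell out of the window
        if len(hits) >= self.max_requests:
            return False
        hits.append(now)
        return True
```

Requests rejected here never reach the deployment, so a misbehaving client cannot keep retrying its way into further content-filter triggers.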
Steps to avoid suspension
- Temporarily tighten filtering rules.
- Restrict or block high-risk prompt categories.
- Limit access to controlled or trusted users until mitigations are stable.
The warning is due to repeated policy-triggering patterns, not necessarily a single request. There is no manual way to remove it, but escalation can be avoided.
Focus should be on:
- Prompt filtering
- Output moderation
- Monitoring & logging
Once mitigations are in place, you can proceed with an appeal for review.
Please refer to these resources:
Azure Content Filtering Documentation (image content) https://learn.microsoft.com/azure/ai-services/openai/concepts/content-filter?tabs=definitions%2Cuser-prompt%2Cpython-new#image-content
Configure Content Filters in Azure AI Foundry https://learn.microsoft.com/azure/ai-services/openai/how-to/content-filters
I hope this helps. Do let me know if you have any further queries.
If this answers your query, please click "Accept Answer" and "Yes" for "Was this answer helpful".
Thank you!