
Azure OpenAI returns "I cannot assist" despite content filters set to Lowest blocking - false positives on legitimate business prompts

Mujtaba 5 Reputation points
2026-04-14T12:18:30.81+00:00

My Azure OpenAI resource is returning "I'm sorry, but I cannot assist with that request" for legitimate business requests (medical appointment booking, job portal features, and many other simple requests). I have already configured both the input and output content filters to "Lowest blocking" (block High severity only) for the Violence and Self-harm categories. The issue persists even for benign prompts. I need Microsoft to investigate these false positives and unblock my use case.

Azure OpenAI Service

An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.


2 answers

  1. Manas Mohanty 16,190 Reputation points Microsoft External Staff Moderator
    2026-04-20T01:41:29.9266667+00:00

    Hi Mujtaba,

    Good day.

    I agree with the community member's suggestion below.

    Yes, the recommended path is to apply for Modified Content Filters (guardrails) access, which can disable content filtering and avoid these false positives (especially for a production environment).

    (This access is currently limited to enterprise customers and is approved by the gating team based on the use case; please provide your organizational email and a justification on the application form.)

    Overall recommendations:

    1. Test GPT 5.1 or 5.2; restrictions appear to be somewhat stricter in the GPT 5.4 models.
    2. Create a new content filter with the lowest thresholds to isolate any regression issue.
    3. Apply for Modified Content Filters (guardrails) access.
    4. If you still want an investigation into the false-positive content filtering, please share the prompts and the model used in a private message.
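    To narrow down which of the steps above applies, it helps to check whether the configurable filter was involved at all. The sketch below parses the per-category filter annotations Azure OpenAI attaches to responses (`prompt_filter_results` / `content_filter_results`); the sample payload is illustrative, not captured from a real deployment:

    ```python
    # Sketch: list which content-filter categories flagged a request, given
    # the per-category annotation dict Azure OpenAI returns. The sample
    # payload below is an assumed illustration of that shape.

    def blocked_categories(filter_results: dict) -> list[str]:
        """Return names of categories whose 'filtered' flag is True."""
        return [name for name, result in filter_results.items()
                if isinstance(result, dict) and result.get("filtered")]

    sample = {
        "hate": {"filtered": False, "severity": "safe"},
        "self_harm": {"filtered": False, "severity": "safe"},
        "violence": {"filtered": True, "severity": "medium"},
        "jailbreak": {"filtered": True, "detected": True},
    }

    print(blocked_categories(sample))  # ['violence', 'jailbreak']
    ```

    If this list is empty for a blocked prompt, the refusal is likely coming from the model itself rather than the filter policy, which changes which remediation step to pursue.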

    Sorry for the inconvenience.

    Thank you for your inputs.


  2. Ghulam Muhayyu Din 0 Reputation points
    2026-04-19T17:24:04.6633333+00:00

    Hello Mujtaba,

    Dealing with false positives is a frequent challenge when analyzing model robustness, and an issue I often tackle in my own NLP and machine learning research.

    In Azure OpenAI, the "I cannot assist" message can trigger independently of your configurable content filter policy. The models have a built-in, non-configurable safety layer that can aggressively block business or medical prompts. First, navigate to your content filter policy and disable "Prompt Shields" (specifically Jailbreak and Indirect Prompt Attack detection), which are notorious for over-triggering on legitimate requests. If the false positives persist, you will need to apply for modified content filtering access through Microsoft's official form (aka.ms/oai/modifiedaccess) to grant deployment-level exemptions for your specific business use case.
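    A practical way to apply this distinction: a prompt blocked by the configurable filter surfaces as an HTTP 400 with error code `content_filter`, a truncated completion carries `finish_reason == "content_filter"`, while the built-in refusal completes normally with `finish_reason == "stop"`. The triage helper below is a hypothetical sketch built on that documented response shape, not part of any Azure SDK:

    ```python
    # Sketch of a triage helper: decide whether a "cannot assist" response
    # came from the configurable content filter or from the model's
    # built-in safety behavior. The field names follow Azure OpenAI's
    # documented response shape; the helper itself is hypothetical.

    def classify_refusal(finish_reason: str | None,
                         error_code: str | None,
                         text: str) -> str:
        if error_code == "content_filter":
            return "prompt blocked by input filter (HTTP 400)"
        if finish_reason == "content_filter":
            return "completion truncated by output filter"
        if finish_reason == "stop" and "cannot assist" in text.lower():
            return "model refusal (built-in safety layer, not the filter policy)"
        return "no filter involvement detected"

    # A refusal that completes normally points at the built-in layer:
    print(classify_refusal(
        "stop", None,
        "I'm sorry, but I cannot assist with that request."))
    ```

    Only the first two outcomes are affected by your content filter policy settings; the third is the case where modified-access approval is the relevant fix.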

