
Responses API with MCP tool attached to o3-deep-research fails with status "incomplete" and reason "content_filter"

Muru S 0 Reputation points
2026-03-17T17:44:53.83+00:00

Objective: Perform deep research on internal data using the o3-deep-research model.

Design: An MCP server deployed to Azure Container Apps (ACA) with search and fetch tools whose signatures are compliant with the specification (https://developers.openai.com/apps-sdk/build/mcp-server#company-knowledge-compatibility). An OpenAI client is created with the o3-deep-research model and the MCP tool attached, and the response status is checked in a loop. (https://learn.microsoft.com/en-us/azure/foundry/openai/how-to/deep-research#remote-mcp-server-with-deep-research)

Problem:

Deep research runs for some time. I can see in the logs that the handshake is made, ListTools is invoked, the search tool is called, and then fetch is called for the queries framed by the model. But sometimes the response status becomes "incomplete" with the incomplete reason "content_filter".
Data:

The uploaded file data on which the search and fetch operations are carried out does not contain any words or phrases that could plausibly trigger the content filter. A research topic is given as input, together with additional instructions.

Other config: the background and store parameters were both set to True.

Behavior:

Strangely, I had a file whose content was only a few company names, and for it the response status was "incomplete" with reason "content_filter". For some files, deep research worked well and the final report was generated properly. For other files with proper content it threw the content-filter reason.
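For context, the setup described above can be sketched roughly as follows. This is a minimal sketch, not the poster's actual code: the endpoint, server label, and MCP URL are placeholders, and `run_deep_research`/`is_terminal` are my own helper names.

```python
import time

try:
    from openai import AzureOpenAI  # pip install openai; guarded so the sketch loads without the SDK
except ImportError:
    AzureOpenAI = None

# Statuses after which a background response will not progress further.
TERMINAL_STATUSES = {"completed", "failed", "incomplete", "cancelled"}


def is_terminal(status: str) -> bool:
    """True once the background response has stopped, successfully or not."""
    return status in TERMINAL_STATUSES


def run_deep_research(client, topic: str):
    """Create a background deep-research response with an MCP tool and poll it."""
    response = client.responses.create(
        model="o3-deep-research",        # deployment name from the post
        input=topic,
        background=True,                 # as in the post
        store=True,                      # as in the post
        tools=[{
            "type": "mcp",
            "server_label": "internal-kb",                                # placeholder
            "server_url": "https://<aca-app>.azurecontainerapps.io/mcp",  # ACA-hosted MCP server
            "require_approval": "never",
        }],
    )
    # The status-checking loop the post describes; the failing runs end with
    # status == "incomplete" and incomplete_details.reason == "content_filter".
    while not is_terminal(response.status):
        time.sleep(5)
        response = client.responses.retrieve(response.id)
    return response
```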

Questions :

  1. How can I find out what triggered the content filter: the prompt, a reasoning step, or the search/fetch results?
  2. How can I inspect intermediate results or the chain-of-thought (COT) summary, which might give a clue?
Azure OpenAI Service

An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.

2 answers

  1. Karnam Venkata Rajeswari 565 Reputation points Microsoft External Staff Moderator
    2026-03-25T17:28:25.64+00:00

    Hello Muru S,

    Welcome to Microsoft Q&A and Thank you for reaching out.

In addition to the inputs given by Anish Raj, please see if the following helps.

The behaviour you are seeing is expected for the o3‑deep‑research model when used with the Responses API and MCP tools.

    Deep research runs as a multi‑step background process. During this process, the model continuously plans, searches, fetches data, and reasons internally. These intermediate reasoning steps are not streamed or exposed while the task is still running. Reasoning summaries are only produced after the task completes successfully.

    If the request is stopped early with the response status incomplete and reason content_filter, the deep‑research session is terminated immediately. When this happens, the platform does not finalize or return any reasoning summaries or intermediate chain‑of‑thought. This is why reasoning summaries are visible only when the final response status is complete, and not when the status is incomplete.

    This behavior is by design and applies even when:

    • The original prompt looks safe
    • The uploaded files appear clean
    • The MCP search and fetch steps are functioning correctly

    The content filter can trigger unexpectedly as it is applied not only to the input prompt or final answer, but also to:

    • Internal reasoning steps generated during research
    • Queries automatically formed by the model during search
    • Text returned by MCP search or fetch tools

    When the content filter triggers at any internal step, the entire research task is stopped and marked as incomplete, without exposing partial reasoning.

Intermediate reasoning cannot be viewed while research is in progress. At this time:

    • Intermediate reasoning or chain‑of‑thought cannot be accessed during execution
    • Reasoning summaries are generated only after successful completion
    • When filtering occurs mid‑execution, reasoning artifacts are discarded

    There is currently no supported mechanism to retrieve live reasoning, partial reasoning, or intermediate summaries while deep research is still running or when it ends due to content filtering.

    Please consider the following steps to reduce content‑filter interruptions

    While the exact trigger cannot be pinpointed after the fact, the following actions have been shown to reduce false positives. These steps help prevent the model from generating ambiguous intermediate text that may trigger safety systems.

1. Add context to sparse data – avoid files that contain only short names or isolated terms. Adding a one‑line description or background text helps the model interpret the content safely.
2. Return richer MCP responses – search/fetch responses that contain only short fragments or minimal text increase the chance of misclassification. Returning fuller, contextual passages is safer.
3. Review the content‑filter configuration – a custom content‑filter policy with adjusted sensitivity (within approved usage guidelines) can reduce false positives for enterprise data.

    References:

    Deep research with the Responses API - Microsoft Foundry | Microsoft Learn

    Use the Azure OpenAI Responses API - Microsoft Foundry | Microsoft Learn

     

Thank you!

Please 'Upvote' (thumbs up) and 'Accept as answer' if the reply was helpful. This will benefit other community members who face the same issue.

  2. Anish Raj 0 Reputation points
    2026-03-18T20:32:51.7933333+00:00

    Fix: Responses API failing with incomplete + content_filter reason

This is a known intermittent behavior with o3-deep-research when the model's internal reasoning steps or search results (not just your input data) trigger the content filter, even when your uploaded files look clean.


    Answer to Question 1: What triggered the content filter?

The content filter in Azure OpenAI can be triggered at three points:

    1. Your input prompt — unlikely here since some files work fine
    2. The model's internal reasoning/COT steps — most likely cause in your case
    3. The fetched search results — company names alone can sometimes match flagged entity patterns in Azure's content filter

To identify which one, add the include parameter to capture the full response object:

    response = client.responses.create(
        model="o3-deep-research",
        input=your_input,
        include=["reasoning.encrypted_content"]  # captures COT steps
    )

    # Check the incomplete reason details
    if response.status == "incomplete":
        print(response.incomplete_details)  # shows the filter trigger point
        print(response.output)              # check which output block failed

Answer to Question 2: How to check intermediate COT results

Use the output array in the response; each step is a separate block:

    for item in response.output:
        print(f"Type: {item.type}")
        if item.type == "reasoning":
            print(f"Reasoning summary: {item.summary}")
        if item.type == "message":
            print(f"Content: {item.content}")

    The block where content stops appearing is where the filter triggered.
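That manual inspection could be automated with a small helper. This is a sketch under assumptions: `last_content_index` is my own name, and the plain dicts below merely mirror the shape of `response.output` items rather than real SDK objects.

```python
def last_content_index(output_items) -> int:
    """Index of the last output item that actually carries content;
    generation stopped at the step right after it. Returns -1 if none."""
    last = -1
    for i, item in enumerate(output_items):
        kind = item.get("type")
        has_content = (
            (kind == "reasoning" and item.get("summary"))
            or (kind == "message" and item.get("content"))
        )
        if has_content:
            last = i
    return last


# Plain dicts standing in for response.output items:
items = [
    {"type": "reasoning", "summary": "planned search queries"},
    {"type": "mcp_call", "name": "search"},
    {"type": "message", "content": ""},   # empty: cut off before completion
]
print(last_content_index(items))  # 0, so output stopped after the first reasoning step
```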


    Practical fix for company-name-only files

Azure's content filter can misclassify sparse files (few words, company names only) as potentially sensitive. Two options:

1. Add context to sparse files — add a one-line description alongside each company name so the model has richer context
2. Adjust content filter settings — go to Azure AI Foundry → your deployment → Content filters → create a custom filter policy with reduced sensitivity for the hate/violence categories

I encountered similar content-filter interruptions when building a RAG pipeline with LangChain + OpenAI: sparse chunks with minimal context triggered false positives. Adding semantic context to each chunk resolved it.

    Reference: Azure OpenAI content filtering

Hope this helps; let me know if checking the output array identifies the trigger point!

