Hello Muru S,
Welcome to Microsoft Q&A, and thank you for reaching out.
In addition to the inputs given by Anish Raj, please see if the following helps.
The behavior you are seeing is expected for the o3‑deep‑research model when used with the Responses API and MCP tools.
Deep research runs as a multi‑step background process. During this process, the model continuously plans, searches, fetches data, and reasons internally. These intermediate reasoning steps are not streamed or exposed while the task is still running. Reasoning summaries are only produced after the task completes successfully.
If the request is stopped early with the response status incomplete and reason content_filter, the deep‑research session is terminated immediately. When this happens, the platform does not finalize or return any reasoning summaries or intermediate chain‑of‑thought. This is why reasoning summaries are visible only when the final response status is complete, and not when the status is incomplete.
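The status check described above can be sketched in Python. This is a minimal illustration, not SDK code: it assumes the Responses API response object has been deserialized into a dict, and that the terminal success status is reported as "completed" with reasoning summaries appearing as "reasoning" items in the output array (both helper names are hypothetical).

```python
def reasoning_summary_available(response: dict) -> bool:
    """Return True only when the deep-research run finished cleanly.

    Reasoning summaries are produced only for a successful terminal
    status; an incomplete run (e.g. stopped by the content filter)
    discards them. Assumed field names: status, output, type.
    """
    if response.get("status") != "completed":
        return False
    # Summaries surface as "reasoning" items in the output array.
    return any(item.get("type") == "reasoning"
               for item in response.get("output", []))


def explain_incomplete(response: dict) -> str:
    """Describe why a run ended without a summary (illustrative helper)."""
    details = response.get("incomplete_details") or {}
    if details.get("reason") == "content_filter":
        return "stopped by content filter; reasoning artifacts discarded"
    return details.get("reason", "unknown")
```

For example, a filtered run would report `explain_incomplete(...)` as "stopped by content filter; reasoning artifacts discarded", while `reasoning_summary_available(...)` returns False.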
This behavior is by design and applies even when:
- The original prompt looks safe
- The uploaded files appear clean
- The MCP search and fetch steps are functioning correctly
The content filter can trigger unexpectedly as it is applied not only to the input prompt or final answer, but also to:
- Internal reasoning steps generated during research
- Queries automatically formed by the model during search
- Text returned by MCP search or fetch tools
When the content filter triggers at any internal step, the entire research task is stopped and marked as incomplete, without exposing partial reasoning.
Intermediate reasoning cannot be viewed while research is in progress. At this time:
- Intermediate reasoning or chain‑of‑thought cannot be accessed during execution
- Reasoning summaries are generated only after successful completion
- When filtering occurs mid‑execution, reasoning artifacts are discarded
There is currently no supported mechanism to retrieve live reasoning, partial reasoning, or intermediate summaries while deep research is still running or when it ends due to content filtering.
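Since no live reasoning is exposed, the practical pattern is to poll the background run until it reaches a terminal state and only then read the result. A minimal sketch, assuming the status values used by the Responses API ("queued"/"in_progress" while running); `fetch` stands in for whatever retrieves the current response as a dict (e.g. a thin wrapper around the SDK's retrieve call):

```python
import time

# Assumed terminal states for a background Responses API run.
TERMINAL_STATES = {"completed", "incomplete", "failed", "cancelled"}


def wait_for_deep_research(fetch, response_id: str, interval: float = 0.0) -> dict:
    """Poll a background run until it reaches a terminal state.

    `fetch(response_id)` must return the current response as a dict.
    While the status is still "queued" or "in_progress", no reasoning
    or summaries are available, so there is nothing to read mid-run.
    """
    while True:
        response = fetch(response_id)
        if response.get("status") in TERMINAL_STATES:
            return response
        time.sleep(interval)
```

After the loop returns, check whether the status is "completed" (summaries present) or "incomplete" with reason content_filter (summaries discarded).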
Please consider the following steps to reduce content‑filter interruptions:
While the exact trigger cannot be pinpointed after the fact, the following actions have been shown to reduce false positives. These steps help prevent the model from generating ambiguous intermediate text that may trigger safety systems.
- Add context to sparse data – avoid files that contain only short names or isolated terms; a one‑line description or background text helps the model interpret the content safely.
- Return richer MCP responses – search/fetch results that contain only short fragments or minimal text increase the chance of misclassification, so returning fuller, contextual passages is safer.
- Review the content‑filter configuration – a custom content‑filter policy with adjusted sensitivity (within approved usage guidelines) can reduce false positives for enterprise data.
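The first point above can be sketched as a small preprocessing step. This is an illustrative helper (not part of any SDK) that wraps a list of bare terms in a one-line description before the file is uploaded:

```python
def enrich_sparse_records(terms: list[str], description: str) -> str:
    """Turn a list of bare terms into a document with context.

    Files containing only short names or isolated terms are more likely
    to be misread during internal research steps; a leading one-line
    description gives the model the context to interpret them safely.
    """
    lines = [f"Context: {description}", ""]
    lines += [f"- {term.strip()}" for term in terms if term.strip()]
    return "\n".join(lines)
```

For example, `enrich_sparse_records(["ACME-4411", "Contoso-North"], "Internal project code names for supplier audits")` produces a short document whose first line states what the terms are, instead of an uncontextualized name list.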
References:
Deep research with the Responses API - Microsoft Foundry | Microsoft Learn
Use the Azure OpenAI Responses API - Microsoft Foundry | Microsoft Learn
Thank you!
Please 'Upvote' (Thumbs-up) and 'Accept as answer' if the reply was helpful. This will benefit other community members who face the same issue.