Connect a Foundry IQ knowledge base to Foundry Agent Service

Important

Items marked (preview) in this article are currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

In this article, you learn how to connect an agent in Microsoft Foundry to a knowledge base in Foundry IQ, which is powered by Azure AI Search. The connection uses Model Context Protocol (MCP) to facilitate tool calls. When invoked by the agent, the knowledge base orchestrates the following retrieval operations:

  • Plans and decomposes a user query into subqueries.
  • Processes the subqueries simultaneously using keyword, vector, or hybrid techniques.
  • Applies semantic reranking to identify the most relevant results.
  • Synthesizes the results into a unified response with source references.

The agent uses the response to ground its answers in enterprise data or web sources, ensuring factual accuracy and transparency through source attribution.

For an end-to-end example of integrating Azure AI Search and Foundry Agent Service for knowledge retrieval, see the agentic-retrieval-pipeline-example Python sample on GitHub.

Prerequisites

Authentication and permissions

We recommend role-based access control for production deployments. If roles aren't feasible, skip this section and use key-based authentication instead.

  • On the parent resource of your project, you must have the Azure AI User role to access model deployments and create agents. Owners receive this role automatically when they create the resource; other users need an explicit role assignment. For more information, see Role-based access control in Foundry portal.

  • On the parent resource of your project, you must have the Azure AI Project Manager role to create a project connection for MCP authentication, and either the Azure AI User or Azure AI Project Manager role to use the MCP tool in agents.

  • On your project, create a system-assigned managed identity for interactions with Azure AI Search.

Understand knowledge

Knowledge enables agent grounding and reasoning over enterprise content by integrating the following services:

  • Azure AI Search provides knowledge sources (what to retrieve) and knowledge bases (how to retrieve). The knowledge base plans and executes subqueries and outputs formatted results with citations.

    Although knowledge bases support answer synthesis, we recommend the extractive data output mode for integration with Foundry Agent Service. This mode ensures that the agent receives verbatim content instead of pre-generated answers, providing full control over the response format and quality. See the sketch after this list for where this setting fits in the knowledge base definition.

  • Foundry Agent Service orchestrates calls to the knowledge base via the MCP tool and synthesizes the final answer. At runtime, the agent calls only the knowledge base, not the data platform (such as Azure Blob Storage or Microsoft OneLake) that underlies the knowledge source. The knowledge base handles all retrieval operations.
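
If you create or update the knowledge base yourself, the output mode is part of the knowledge base definition in Azure AI Search. The following sketch shows roughly where that setting fits. The request shape and property names (such as outputConfiguration, modality, and extractiveData) are assumptions carried over from earlier preview versions of the agentic retrieval API, so confirm them against the Azure AI Search REST reference for your API version before using them.

import requests
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# Illustrative sketch only: property names are assumptions based on earlier preview versions
# of the agentic retrieval API and might differ in 2025-11-01-preview.
search_endpoint = "{search_service_endpoint}"
knowledge_base_name = "{knowledge_base_name}"
token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://search.azure.com/.default")

knowledge_base_definition = {
    "name": knowledge_base_name,
    "knowledgeSources": [{"name": "{knowledge_source_name}"}],  # assumed shape; other required properties are omitted
    "outputConfiguration": {"modality": "extractiveData"}       # extractive output instead of answer synthesis
}

response = requests.put(
    f"{search_endpoint}/knowledgebases/{knowledge_base_name}?api-version=2025-11-01-preview",
    headers = {"Authorization": f"Bearer {token_provider()}"},
    json = knowledge_base_definition
)
response.raise_for_status()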

Create a project connection

Start by creating a RemoteTool connection on your Microsoft Foundry project. This connection uses the project's managed identity to target the MCP endpoint of the knowledge base, allowing the agent to securely communicate with Azure AI Search for retrieval operations.

import requests
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# Provide connection details
credential = DefaultAzureCredential()
project_resource_id = "{project_resource_id}" # e.g. /subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{account_name}/projects/{project_name}
project_connection_name = "{project_connection_name}"
mcp_endpoint = "{search_service_endpoint}/knowledgebases/{knowledge_base_name}/mcp?api-version=2025-11-01-preview" # This endpoint enables the MCP connection between the agent and knowledge base

# Get bearer token for authentication
bearer_token_provider = get_bearer_token_provider(credential, "https://management.azure.com/.default")
headers = {
  "Authorization": f"Bearer {bearer_token_provider()}",
}

# Create project connection
response = requests.put(
  f"https://management.azure.com{project_resource_id}/connections/{project_connection_name}?api-version=2025-10-01-preview",
  headers = headers,
  json = {
    "name": "project_connection_name",
    "type": "Microsoft.MachineLearningServices/workspaces/connections",
    "properties": {
      "authType": "ProjectManagedIdentity",
      "category": "RemoteTool",
      "target": mcp_endpoint,
      "isSharedToAll": True,
      "audience": "https://search.azure.com/",
      "metadata": { "ApiType": "Azure" }
    }
  }
)

response.raise_for_status()
print(f"Connection '{project_connection_name}' created or updated successfully.")

Optimize agent instructions for knowledge base retrieval

To maximize the accuracy of knowledge base invocations and ensure proper citation formatting, use optimized agent instructions. Based on our experiments, we recommend the following instruction template as a starting point:

You are a helpful assistant that must use the knowledge base to answer all the questions from user. You must never answer from your own knowledge under any circumstances.
Every answer must always provide annotations for using the MCP knowledge base tool and render them as: `【message_idx:search_idx†source_name】`
If you cannot find the answer in the provided knowledge base you must respond with "I don't know".
"""

This instruction template optimizes for:

  • Higher MCP tool invocation rates: Explicit directives ensure the agent consistently calls the knowledge base tool rather than relying on its training data.
  • Proper annotation formatting: The specified citation format ensures the agent includes provenance information in responses, making it clear which knowledge sources were used.

Tip

While this template provides a strong foundation, we recommend evaluating and iterating on the instructions based on your specific use case and the tasks you're trying to accomplish. Test different variations to find what works best for your scenarios.

Create an agent with the MCP tool

The next step is to create an agent that integrates the knowledge base as an MCP tool. The agent's system prompt tells it when and how to call the knowledge base and how to answer questions, and the agent automatically maintains its tool configuration and settings across conversation sessions.

Add the knowledge base MCP tool with the project connection you previously created. This tool orchestrates query planning, decomposition, and retrieval across configured knowledge sources. The agent uses this tool to answer queries.

Note

Azure AI Search knowledge bases expose the knowledge_base_retrieve MCP tool for agent integration. This is the only tool currently supported for use with Foundry Agent Service.

from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import PromptAgentDefinition, MCPTool
from azure.identity import DefaultAzureCredential

# Provide agent configuration details
credential = DefaultAzureCredential()
mcp_endpoint = "{search_service_endpoint}/knowledgebases/{knowledge_base_name}/mcp?api-version=2025-11-01-preview"
project_endpoint = "{project_endpoint}" # e.g. https://your-foundry-resource.services.ai.azure.com/api/projects/your-foundry-project
project_connection_name = "{project_connection_name}"
agent_name = "{agent_name}"
agent_model = "{deployed_LLM}" # e.g. gpt-4.1-mini

# Create project client
project_client = AIProjectClient(endpoint = project_endpoint, credential = credential)

# Define agent instructions (see "Optimize agent instructions" section for guidance)
instructions = """
You are a helpful assistant that must use the knowledge base to answer all the questions from user. You must never answer from your own knowledge under any circumstances.
Every answer must always provide annotations for using the MCP knowledge base tool and render them as: `【message_idx:search_idx†source_name】`
If you cannot find the answer in the provided knowledge base you must respond with "I don't know".
"""

# Create MCP tool with knowledge base connection
mcp_kb_tool = MCPTool(
    server_label = "knowledge-base",
    server_url = mcp_endpoint,
    require_approval = "never",
    allowed_tools = ["knowledge_base_retrieve"],
    project_connection_id = project_connection_name
)

# Create agent with MCP tool
agent = project_client.agents.create_version(
    agent_name = agent_name,
    definition = PromptAgentDefinition(
        model = agent_model,
        instructions = instructions,
        tools = [mcp_kb_tool]
    )
)

print(f"Agent '{agent_name}' created or updated successfully.")

Connect to a remote SharePoint knowledge source

If your knowledge base includes a remote SharePoint knowledge source, you must also include the x-ms-query-source-authorization header in the MCP tool connection.

mcp_kb_tool = MCPTool(
    server_label = "knowledge-base",
    server_url = mcp_endpoint,
    require_approval = "never",
    allowed_tools = ["knowledge_base_retrieve"],
    project_connection_id = project_connection_name,
    headers = {
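        # The token below is resolved once when the tool is defined. Microsoft Entra access tokens expire,
        # so refresh this header value (for example, by re-creating the tool) for long-running agents.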
        "x-ms-query-source-authorization": get_bearer_token_provider(credential, "https://search.azure.com/.default")()
    }
)

Invoke the agent with a query

Create a conversation session and send a user query to the agent. When appropriate, the agent orchestrates calls to the MCP tool to retrieve relevant content from the knowledge base. The agent then synthesizes this content into a natural-language response that cites the source documents.

# Get the OpenAI client for responses and conversations
openai_client = project_client.get_openai_client()

# Create conversation
conversation = openai_client.conversations.create()

# Send request to trigger the MCP tool
response = openai_client.responses.create(
    conversation = conversation.id,
    input = """
        Why do suburban belts display larger December brightening than urban cores even though absolute light levels are higher downtown?
        Why is the Phoenix nighttime street grid so sharply visible from space, whereas large stretches of the interstate between midwestern cities remain comparatively dim?
    """,
    extra_body = {"agent": {"name": agent.name, "type": "agent_reference"}},
)

print(f"Response: {response.output_text}")

The output should be similar to the following:

Response: Suburban belts display larger December brightening than urban cores, even though absolute light levels are higher downtown, primarily because holiday lights increase most dramatically in the suburbs and outskirts of major cities. This is due to more yard space and a prevalence of single-family homes in suburban areas, which results in greater use of decorative holiday lighting. By contrast, central urban areas experience a smaller increase in lighting during the holidays, typically 20 to 30 percent brightening, because of their different building structures and possibly less outdoor space for such decorations. This pattern holds true across the United States as part of the nationally shared tradition of increased holiday lighting in December (Sources: earth_at_night_508_page_174, earth_at_night_508_page_176, earth_at_night_508_page_175).

The Phoenix nighttime street grid is sharply visible from space due to the city's layout along a regular grid of city blocks and streets with extensive street lighting. The major street grid is oriented mostly north-south, with notable diagonal thoroughfares like Grand Avenue that are also brightly lit. The illuminated grid reflects the widespread suburban and residential development fueled by automobile use in the 20th century, which led to optimal access routes to new real estate on the city's borders. Large shopping centers, strip malls, gas stations, and other commercial properties at major intersections also contribute to the brightness. Additionally, parts of the Phoenix metropolitan area remain dark where there are parks, recreational land, and agricultural fields, providing contrast that highlights the lit urban grid (Sources: earth_at_night_508_page_104, earth_at_night_508_page_105).

In contrast, large stretches of the interstate between Midwestern cities remain comparatively dim because although the transportation corridors are well-established, many rural and agricultural areas lack widespread nighttime lighting. The interstate highways are visible but do not have the same continuous bright lighting found in the dense urban grids and commercial suburban zones. The transportation network is extensive, but many roadways running through less populated regions have limited illumination, which renders them less visible in nighttime satellite imagery (Sources: earth_at_night_508_page_124, earth_at_night_508_page_125).

References:
- earth_at_night_508_page_174, earth_at_night_508_page_176, earth_at_night_508_page_175 (Holiday lighting and suburban December brightening)
- earth_at_night_508_page_104, earth_at_night_508_page_105 (Phoenix urban grid visibility)
- earth_at_night_508_page_124, earth_at_night_508_page_125 (Interstate lighting and Midwestern dim stretches)
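
To confirm that the agent actually called the knowledge base tool rather than answering from its own knowledge, you can inspect the output items on the response object. This is a minimal sketch; the exact item types depend on the SDK and service versions you're using:

# Print the type of each output item; tool-related items indicate the MCP knowledge base tool was invoked
for item in response.output:
    print(item.type)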

Delete the agent and project connection

# Delete the agent
project_client.agents.delete_version(agent.name, agent.version)
print(f"Agent '{agent.name}' version '{agent.version}' deleted successfully.")

# Delete the project connection by using the same management endpoint and headers from the connection creation step
delete_response = requests.delete(
    f"https://management.azure.com{project_resource_id}/connections/{project_connection_name}?api-version=2025-10-01-preview",
    headers = headers
)
delete_response.raise_for_status()
print(f"Project connection '{project_connection_name}' deleted successfully.")

Note

Deleting your agent and project connection doesn't delete your knowledge base or its constituent knowledge sources, which must be deleted separately on your Azure AI Search service. For more information, see Delete a knowledge base and Delete a knowledge source.