Issue Updating Agent with Non-ChatGPT Models in Azure AI Foundry

khushal 0 Reputation points
2026-03-09T05:25:41.7933333+00:00

I'm unable to update agents with non-ChatGPT models in Azure AI Foundry. While I can create agents using those models, updating an existing agent to use a different model (for example, Llama-4-Maverick-17B-128E-Instruct-FP8) results in an error. Could you help me understand why this is happening? If I use the Assistant API, I can create an agent using the Llama-4-Maverick-17B-128E-Instruct-FP8 model, but I’m unable to update the agent with this model afterward.

Foundry Tools

Formerly known as Azure AI Services or Azure Cognitive Services, Foundry Tools is a unified collection of prebuilt AI capabilities within the Microsoft Foundry platform.


2 answers

  1. Karnam Venkata Rajeswari 565 Reputation points Microsoft External Staff Moderator
    2026-03-20T13:58:45.8833333+00:00

     Hello khushal,

    In Azure AI Foundry, agents can be created successfully using non‑ChatGPT models, including Llama‑4‑Maverick‑17B‑128E‑Instruct‑FP8. However, updating an existing agent in place to switch its underlying model is not supported.

    This behavior is expected and is part of the current design of the Foundry Agent Service and Assistant‑style APIs. Once an agent version is created, the model selection becomes immutable for that version. As a result, attempts to update an existing agent definition to swap from a ChatGPT‑style model to a non‑ChatGPT model (such as Llama‑4) can return an error.

    This is not a defect in the Llama‑4 model. The limitation applies to how agent updates are handled across the platform.

    The supported way to move an agent to a different model is through agent versioning, not through an update call.

    Using the Azure AI Foundry Portal

    1. Open Azure AI Foundry
    2. Navigate to Agents
    3. Select the existing agent
    4. Open the Versions tab
    5. Select New version
    6. Choose Llama‑4‑Maverick‑17B‑128E‑Instruct‑FP8 as the model
    7. Save the new version
    8. Mark the new version as Default / Active

    Using SDK or API

    1. Create a new agent version using the desired Llama‑4 model ID
    2. Avoid using update operations to change the model
    3. Promote the new version after validation
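
    The versioned flow above can be sketched as a small illustration (the classes and method names below are hypothetical, invented to mirror the pattern; they are not the actual Foundry SDK surface): each version pins its model immutably, and switching models means minting a new version and promoting it, never mutating an existing one.

    ```python
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class AgentVersion:
        """A single agent version; `model` is fixed once the version exists."""
        version: int
        model: str

    @dataclass
    class Agent:
        """Hypothetical wrapper mirroring the create-version-then-promote flow."""
        name: str
        versions: list = field(default_factory=list)
        default_version: int = 0

        def create_version(self, model: str) -> AgentVersion:
            # Step 1: create a new version carrying the desired model ID.
            v = AgentVersion(version=len(self.versions) + 1, model=model)
            self.versions.append(v)
            return v

        def promote(self, version: int) -> None:
            # Step 3: after validation, mark the new version as default/active.
            self.default_version = version

        @property
        def active_model(self) -> str:
            return self.versions[self.default_version - 1].model

    agent = Agent(name="my-agent")
    agent.promote(agent.create_version("gpt-4o").version)  # original model
    # Step 2: do NOT update the existing version; create a new one instead.
    v2 = agent.create_version("Llama-4-Maverick-17B-128E-Instruct-FP8")
    agent.promote(v2.version)
    print(agent.active_model)  # Llama-4-Maverick-17B-128E-Instruct-FP8
    ```

    The `frozen=True` dataclass captures the key design point: once a version exists, its model cannot be reassigned, which is why an in-place update call fails on the real service.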

     

    References:

    Migrate to the new Foundry Agent Service - Microsoft Foundry | Microsoft Learn

    Quotas and limits for Microsoft Foundry Agent Service - Microsoft Foundry | Microsoft Learn

     

    Thank you! 


  2. Q&A Assist
    2026-03-09T05:25:54.75+00:00
    AI generated content: this answer was created with AI from Q&A Assist.

    Non-ChatGPT models such as Llama-4-Maverick-17B-128E-Instruct-FP8 are supported in Microsoft Foundry, but not all models are supported uniformly across all agent experiences and operations.

    From the documented behavior:

    1. Agent-supported model requirement
      Only models marked as Agent supported in the Foundry Model catalog can be reliably used for agents, including when creating and updating them. The non-OpenAI models (for example, Llama-4-Maverick-17B-128E-Instruct-FP8, Llama-3.3-70B-Instruct, DeepSeek-*, and grok-*) are “models sold directly by Azure,” but whether they can be attached to an agent (and set on an existing agent) depends on them being flagged as Agent supported. To verify this:
      1. Sign in to Microsoft Foundry (classic) with the New Foundry toggle off.
      2. Open Model catalog.
      3. Filter by Capabilities → Agent supported.
      4. Confirm that the target model (for example, Llama-4-Maverick-17B-128E-Instruct-FP8) appears in this filtered list.
      If the model does not appear under Agent supported, it can often still be deployed and used directly (for example, via SDKs), but it is not supported for agents or for updating an existing agent’s model, which leads to errors when attempting to switch.
    2. Classic vs new agents behavior
      The context explicitly distinguishes Foundry (classic) agents from the new Microsoft Foundry Agents Service, and notes that classic agents are deprecated. Some tooling and model support differ between these two experiences. For classic agents, only the subset of models marked as Agent supported in the classic portal can be used. If the agent was created in classic and the target model is only supported (or only configured) in the new experience, updating will fail.
    3. Model availability and region constraints
      Model usage (including for agents) is also constrained by region and availability. If a model is not available or not enabled in the project’s region as an agent-supported model, attempts to update an existing agent to that model will fail, even if the model can be deployed separately.
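
    The checks in points 1–3 amount to a pre-flight test before attempting a model switch. A minimal sketch (the catalog entries and capability flags below are invented for illustration; in practice these come from the Foundry Model catalog, not a hard-coded dictionary):

    ```python
    # Hypothetical catalog data; real capability and region information must be
    # read from the Foundry Model catalog (portal filter: Capabilities -> Agent supported).
    CATALOG = {
        "Llama-4-Maverick-17B-128E-Instruct-FP8": {
            "agent_supported": True, "regions": {"eastus", "westus3"}},
        "some-chat-only-model": {
            "agent_supported": False, "regions": {"eastus"}},
    }

    def can_attach_to_agent(model: str, region: str) -> tuple[bool, str]:
        """Return (ok, reason) for attaching `model` to an agent in `region`."""
        entry = CATALOG.get(model)
        if entry is None:
            return False, "model not found in catalog"
        if not entry["agent_supported"]:
            return False, "model is not flagged as Agent supported"
        if region not in entry["regions"]:
            return False, f"model not available in region {region!r}"
        return True, "ok"

    print(can_attach_to_agent("Llama-4-Maverick-17B-128E-Instruct-FP8", "eastus"))
    # (True, 'ok')
    ```

    Running the same check with a model that is deployable but not agent-supported, or with a region outside the model's availability, returns the failure reason that corresponds to the update error described above.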

    In practice, this means:

    • Creating an agent with a given non-ChatGPT model may succeed if that model is agent-supported and correctly deployed in the project.
    • Updating an existing agent to a different non-ChatGPT model will fail if that target model is not in the Agent supported subset for the project/region or is only available in a different agents experience (classic vs new).

    To resolve the issue:

    1. Confirm that the target model appears in the Model catalog with the Agent supported capability for the same Foundry experience (classic vs new) where the agent lives.
    2. If it does not, choose a model from the Agent supported list instead, or migrate to the new Agents Service if the model is only supported there.
    3. If the model is listed as agent-supported but still fails, verify region/model availability using the model region support documentation and ensure the agent and model are in a compatible region.
