
Azure OpenAI in region Sweden Central seems not to be working.

Truong, David L SITI-ITY/BE 0 Reputation points
2026-04-15T21:22:18.38+00:00

Error:

Traceback (most recent call last):
  File "/...../AI_Foundry/test_ai_foundry_gpt54.py", line 17, in <module>
    response = client.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/...../.pyenv/versions/3.12.11/lib/python3.12/site-packages/openai/_utils/_utils.py", line 287, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/...../.pyenv/versions/3.12.11/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 925, in create
    return self._post(
           ^^^^^^^^^^^
  File "/...../.pyenv/versions/3.12.11/lib/python3.12/site-packages/openai/_base_client.py", line 1249, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/...../.pyenv/versions/3.12.11/lib/python3.12/site-packages/openai/_base_client.py", line 1037, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'The requested operation is unsupported.'}}


AI Foundry-provided test code for gpt-5.4-pro:

from openai import OpenAI

endpoint = "https://xxxxxxxxxxxxx-resource.openai.azure.com/openai/v1/"
deployment_name = "gpt-5.4-pro"
api_key = "xxxxxxxxxxx"

client = OpenAI(
    base_url=endpoint,
    api_key=api_key
)

completion = client.chat.completions.create(
    model=deployment_name,
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
)

print(completion.choices[0].message)
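To isolate whether the SDK or the service is rejecting the call, it helps to know exactly which URL the v1 client targets. A minimal stdlib sketch (the URL shape follows the documented chat-completions route; the resource name is a placeholder):

```python
import json
from urllib.parse import urlparse

def build_chat_request(endpoint: str, deployment: str, prompt: str) -> tuple[str, bytes]:
    """Approximate the URL and JSON body the v1 chat.completions call sends."""
    parsed = urlparse(endpoint)
    # Sanity-check the documented endpoint shape before building the URL.
    assert parsed.scheme == "https" and parsed.path == "/openai/v1/", "unexpected endpoint shape"
    url = endpoint.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": deployment,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

url, body = build_chat_request(
    "https://myresource.openai.azure.com/openai/v1/", "gpt-5.4-pro", "ping"
)
print(url)  # https://myresource.openai.azure.com/openai/v1/chat/completions
```

Replaying that URL and body with curl and the api-key header shows whether the 400 comes back independently of the SDK.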
Azure OpenAI Service

An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.


1 answer

  1. Q&A Assist
    2026-04-15T21:22:32.7033333+00:00
    AI-generated content. This question contains an answer created with AI from Q&A Assist.

    The error openai.BadRequestError: ... 'The requested operation is unsupported.' with status code 400 indicates that the request shape or operation is not supported for the target Azure OpenAI endpoint/deployment, rather than a pure network reachability issue from Sweden Central.

    From the provided code and the available information, the following points are relevant:

    1. Error meaning and class
      • HTTP 400 maps to BadRequestError in the Python client.
      • This is raised when the service receives the request but rejects it as invalid or unsupported.
      • It is not a 401/403 (authentication/permission) error, a 404 (wrong path or model), or a connectivity failure.
    2. General troubleshooting guidance for 4xx errors
      • For 401/403: verify the identity or API key and that it has access to the Azure OpenAI resource.
      • For 404: verify that the endpoint uses the ...openai.azure.com/openai/v1/ path and that the model value is a valid deployment name.
      • For unexpected failures: check for proxy/firewall issues and try a smaller prompt.
      In this case, the endpoint format in the code matches the documented pattern:
         https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/
      
      and the call is using chat.completions.create, which is the correct operation type for chat models.
    3. What to verify in this scenario. Based on the documented troubleshooting steps, the following should be checked for this specific deployment:
      1. Endpoint and path
      • Confirm that the endpoint string exactly matches the Azure OpenAI resource endpoint from the Azure portal, including the /openai/v1/ suffix.
      2. Model deployment name
      • Confirm that deployment_name = "gpt-5.4-pro" exactly matches the deployment name configured in Azure OpenAI / AI Foundry, not just the base model name.
      • If the deployment name differs (for example, a custom name like gpt54prod), that exact name must be used in model=.
      3. Region/model support
      • Ensure that the specific model version deployed (for example, a gpt-5.x variant) is actually available and supported in the region where the Azure OpenAI resource is created.
      • If the model or version is not supported in that region, calls can fail even though the deployment appears to exist.
      4. Operation type vs. model capability
      • Verify that the deployed model supports the chat.completions API. If the deployment only supports another API surface (for example, the Responses API or a different task type), calling it via chat.completions.create can result in an "unsupported operation" error.
    4. Error-handling and diagnostics
      • Since this is a 400 BadRequestError, retries will not resolve it; the request must be corrected.
      • Retrieve and log the request ID for support/troubleshooting:
             print(completion._request_id)
        
        This corresponds to the x-request-id header and can be used with Azure support.
    5. Authentication and client configuration
      • The client instantiation pattern in the snippet matches the documented usage when using an API key:
             from openai import OpenAI
             
             client = OpenAI(
                 api_key=api_key,
                 base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
             )
        
      • Alternatively, Microsoft Entra ID (DefaultAzureCredential + bearer token provider) can be used, but that does not change the semantics of a 400 error.
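The request-ID advice in point 4 also applies when the call raises rather than returns: in the openai-python SDK, status errors expose .status_code and .request_id attributes. A minimal offline sketch using a stub in place of the real exception (FakeBadRequestError is illustrative only, not part of the SDK):

```python
def diagnostic_summary(err: Exception) -> str:
    """Summarize an SDK error for a support ticket, tolerating missing fields."""
    status = getattr(err, "status_code", "unknown")
    request_id = getattr(err, "request_id", None) or "unavailable"
    return f"{type(err).__name__}: HTTP {status}, x-request-id={request_id}"

# Stub mirroring the attribute shape of openai.BadRequestError, for offline illustration.
class FakeBadRequestError(Exception):
    status_code = 400
    request_id = "req_abc123"

print(diagnostic_summary(FakeBadRequestError("unsupported")))
# → FakeBadRequestError: HTTP 400, x-request-id=req_abc123
```

In real code, the same helper can be called from an except openai.BadRequestError block, and the printed x-request-id passed to Azure support.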

    If, after verifying the endpoint, deployment name, model support in the region, and that the model is intended for chat.completions, the error persists, capturing the _request_id and engaging Azure support with that ID is the next step, as the service is clearly reachable but rejecting the specific operation as unsupported.
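The point that a 400 must be corrected rather than retried reduces to a simple policy; a sketch of that rule (the retryable set here, 408, 429 and 5xx, is a common convention, not an SDK guarantee):

```python
def should_retry(status_code: int) -> bool:
    """Transient statuses (timeout, throttling, server errors) are retryable;
    other 4xx client errors mean the request itself must change."""
    return status_code in (408, 429) or 500 <= status_code < 600

print(should_retry(400))  # False: fix the request instead of retrying
print(should_retry(429))  # True: back off and retry
```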


