The error openai.BadRequestError: ... 'The requested operation is unsupported.' with status code 400 indicates that the request shape or operation is not supported for the target Azure OpenAI endpoint/deployment, rather than a pure network reachability issue from Sweden Central.
From the provided code and the available information, the following points are relevant:
- Error meaning and class
  - HTTP 400 maps to `BadRequestError` in the Python client.
  - This is raised when the service receives the request but rejects it as invalid or unsupported.
  - It is not a 401/403 (auth/permission), a 404 (wrong path/model), or a connectivity error.
- General troubleshooting guidance for 4xx errors
  - For 401/403: verify the identity or API key and that it has access to the Azure OpenAI resource.
  - For 404: verify that the endpoint uses the `...openai.azure.com/openai/v1/` path and that the `model` value is a valid deployment name.
  - For unexpected failures: check for proxy/firewall issues and try a smaller prompt.
  - In this scenario, the base URL is `https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/` and the call uses `chat.completions.create`, which is the correct operation type for chat models.
- What to verify in this scenario
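The 4xx triage above can be sketched as a small stdlib-only helper. The function name and messages here are illustrative, not part of any SDK:

```python
# Hedged sketch: map an HTTP status code from an Azure OpenAI call to the
# troubleshooting action described above. Names and wording are illustrative.

def triage_status(status_code: int) -> str:
    """Return a suggested next step for a failed Azure OpenAI call."""
    if status_code in (401, 403):
        return "Verify the API key / identity and its access to the resource."
    if status_code == 404:
        return "Check the /openai/v1/ path and that model= is a valid deployment name."
    if status_code == 400:
        return "Fix the request: endpoint, deployment name, or unsupported operation."
    return "Check proxy/firewall issues and retry with a smaller prompt."
```

A 400 deliberately maps to "fix the request" rather than "retry", matching the non-retryable nature of `BadRequestError`.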
Based on the documented troubleshooting steps, the following should be checked for this specific deployment:
- Endpoint and path
  - Confirm that the `endpoint` string exactly matches the Azure OpenAI resource endpoint from the Azure portal, including the `/openai/v1/` suffix.
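A quick offline sanity check of the base URL can catch the most common path mistakes before any request is sent. This checker is a sketch (the function name and problem strings are assumptions, and the hostname check assumes the default `*.openai.azure.com` domain):

```python
from urllib.parse import urlparse

def check_endpoint(base_url: str) -> list:
    """Return a list of problems found in an Azure OpenAI v1 base URL.

    Hypothetical helper for local validation; not part of any SDK.
    """
    problems = []
    parsed = urlparse(base_url)
    if parsed.scheme != "https":
        problems.append("endpoint must use https")
    if not parsed.hostname or not parsed.hostname.endswith(".openai.azure.com"):
        problems.append("hostname should look like <resource>.openai.azure.com")
    if not parsed.path.rstrip("/").endswith("/openai/v1"):
        problems.append("path should end with /openai/v1/")
    return problems
```

An empty result means the URL at least has the expected shape; it does not prove the resource exists.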
- Model deployment name
  - Confirm that `deployment_name = "gpt-5.4-pro"` exactly matches the deployment name configured in Azure OpenAI / AI Foundry, not just the base model name.
  - If the deployment name differs (for example, a custom name like `gpt54prod`), that exact name must be used in `model=`.
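To make the point concrete: on the wire, the `model` field of the chat-completions payload carries the Azure deployment name, not the base model name. A minimal sketch (the deployment name below is hypothetical):

```python
# Hedged sketch: the request body sent to /openai/v1/ chat completions.
# "gpt54prod" is a hypothetical custom deployment name from the example above.
deployment_name = "gpt54prod"

payload = {
    "model": deployment_name,  # must match the Azure deployment exactly
    "messages": [{"role": "user", "content": "ping"}],
}
```

If `model` were set to a base model name that does not match any deployment, the service would reject the request rather than fall back to a similarly named model.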
- Region/model support
  - Ensure that the specific model version deployed (for example, a `gpt-5.x` variant) is actually available and supported in the region where the Azure OpenAI resource is created.
  - If the model or version is not supported in that region, calls can fail even if the deployment appears present.
- Operation type vs. model capability
  - Verify that the deployed model supports the `chat.completions` API. If the deployment is of a type that only supports another API surface (for example, the Responses API or a different task type), calling it via `chat.completions.create` can result in an "unsupported operation" error.
- Error handling and diagnostics
  - Since this is a 400 `BadRequestError`, retries will not resolve it; the request must be corrected.
  - Retrieve and log the request ID for support/troubleshooting: `print(completion._request_id)`. This corresponds to the `x-request-id` header and can be used with Azure support.
- Authentication and client configuration
  - The client instantiation pattern in the snippet matches the documented usage when using an API key:

    ```python
    from openai import OpenAI

    client = OpenAI(
        api_key=api_key,
        base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    )
    ```

  - Alternatively, Microsoft Entra ID (`DefaultAzureCredential` + a bearer token provider) can be used, but that does not change the semantics of a 400 error.
If the error persists after verifying the endpoint, the deployment name, model support in the region, and that the model is intended for `chat.completions`, the next step is to capture the `_request_id` and engage Azure support with that ID, since the service is clearly reachable but rejects the specific operation as unsupported.