Here are a few possible reasons behind the error below:
```
HTTP 400 error {'error': {'message': 'Failed to download image from [URL].', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
- Your Azure OpenAI resource might be behind a virtual network with no outbound access to public URLs allowed.
- The image URL is blocked due to a policy restriction.
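Before changing anything, it can help to confirm whether the image URL is publicly downloadable at all. Below is a minimal stdlib sketch (the function name `check_image_url` is ours, not part of any Azure SDK) that sends a HEAD request and reports a short diagnostic; note it checks reachability from *your* machine, which may differ from what the Azure OpenAI service can reach inside a VNet.

```python
import urllib.error
import urllib.request


def check_image_url(url: str, timeout: float = 10.0) -> str:
    """Return a short diagnostic on whether the image URL looks fetchable."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            ctype = resp.headers.get("Content-Type", "")
            if not ctype.startswith("image/"):
                # Reachable, but the server is not returning an image payload.
                return f"Reachable, but Content-Type is {ctype!r}, not an image"
            return f"OK: HTTP {resp.status}, {ctype}"
    except urllib.error.HTTPError as e:
        # 401/403 here usually means the URL needs auth or is policy-blocked.
        return f"HTTP {e.code}: the URL requires auth or is blocked"
    except urllib.error.URLError as e:
        # DNS failure, firewall, or no outbound route.
        return f"Unreachable: {e.reason}"


# Example: check_image_url("https://example.com/photo.png")
```

If this reports the URL as reachable from your machine but the service still returns HTTP 400, the restriction is most likely on the Azure side (VNet egress or policy).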
Recommendation
- You can convert the image to a base64 data URL and send it inline instead of a public link.
```python
import base64
import os

from openai import AzureOpenAI  # Azure OpenAI client from the openai Python SDK

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="preview",  # use the API version required by your setup
)


def encode_image_to_data_url(image_path: str) -> str:
    """
    Encode a local image to a data URL: data:image/<ext>;base64,<...>
    The Azure OpenAI Responses API supports base64 images in vision prompts.
    [1](https://learn.microsoft.com/en-us/azure/foundry/openai/how-to/responses)
    [2](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses?view=foundry-classic)
    """
    ext = os.path.splitext(image_path)[1].lower()
    if ext in (".jpg", ".jpeg"):
        mime = "image/jpeg"
    elif ext == ".png":
        mime = "image/png"
    elif ext == ".webp":
        mime = "image/webp"
    else:
        raise ValueError("Use PNG, JPEG/JPG, or WEBP for vision image input.")
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{b64}"


if __name__ == "__main__":
    image_path = "path_to_your_image.jpg"
    data_url = encode_image_to_data_url(image_path)

    # For vision-enabled models, the content array combines input_text and
    # input_image, where image_url accepts the base64 data URL. [1][2]
    response = client.responses.create(
        model="gpt-4o",  # use your deployed model name if required in your setup
        input=[
            {
                "role": "user",
                "content": [
                    {"type": "input_text", "text": "Describe what is in this image."},
                    {"type": "input_image", "image_url": data_url},
                ],
            }
        ],
    )

    # Print the full response JSON
    print(response)
```

- Alternatively, you can copy the image to Azure Storage and reference it from there.
Follow-up questions
- Please share your network settings and the image link so we can replicate the issue.
- Which model are you using?
Thank you.