Azure OpenAI returns 400 error for public image URLs

Leonardo Buffoni 0 Reputation points
2026-03-20T06:57:32.3866667+00:00

Calls to the Azure OpenAI completion endpoint with image_url inputs return HTTP 400: {'error': {'message': 'Failed to download image from [URL].', 'type': 'invalid_request_error', 'param': None, 'code': None}} for any publicly accessible URL (e.g., Wikipedia images reachable without authentication). The JSON payload and parameters are valid, and the issue occurs consistently across multiple requests and URLs, which suggests the service is failing to fetch the external image rather than a client-side error. Please confirm whether there are network restrictions, allowlist requirements, or known issues affecting external image retrieval.
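For reference, a minimal sketch of the kind of request body that triggers the error, assuming the standard chat-completions vision format (the URL and prompt text here are placeholders, not the actual failing request):

```python
def build_image_request(image_url: str) -> dict:
    """Build a chat-completions payload asking a vision model about an image URL."""
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image."},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Any publicly reachable image URL produces the same 400 "Failed to download image" error.
payload = build_image_request("https://example.com/image.jpg")
```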

Azure OpenAI Service

An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.

1 answer

  1. Manas Mohanty 15,795 Reputation points Microsoft External Staff Moderator
    2026-03-23T22:32:16.7133333+00:00

    Hi Leonardo Buffoni

    Here are a few possible reasons behind the error:

     HTTP 400 error {'error': {'message': 'Failed to download image from [URL].', 'type': 'invalid_request_error', 'param': None, 'code': None}} 

    Your Azure OpenAI resource might be behind a virtual network with no outbound access to public links allowed.

    The download may be blocked by a policy restriction.

    Recommendation

    1. You can convert the image to a base64 data URL and send the encoded bytes instead of the public link.
         import base64
         import os

         from openai import AzureOpenAI  # requires the openai Python SDK (v1+)

         # Replace with your own endpoint, key, and API version.
         client = AzureOpenAI(
             azure_endpoint="https://<your-resource>.openai.azure.com/",
             api_key="<your-api-key>",
             api_version="2025-03-01-preview",
         )

         def encode_image_to_data_url(image_path: str) -> str:
             """
             Encode a local image to a data URL: data:image/<ext>;base64,<...>
             Azure OpenAI Responses API supports base64 images in vision prompts. [1](https://learn.microsoft.com/en-us/azure/foundry/openai/how-to/responses)[2](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses?view=foundry-classic)
             """
             ext = os.path.splitext(image_path)[1].lower()
             if ext in (".jpg", ".jpeg"):
                 mime = "image/jpeg"
             elif ext == ".png":
                 mime = "image/png"
             elif ext == ".webp":
                 mime = "image/webp"
             else:
                 raise ValueError("Use PNG, JPEG/JPG, or WEBP for vision image input.")
             with open(image_path, "rb") as f:
                 b64 = base64.b64encode(f.read()).decode("utf-8")
             return f"data:{mime};base64,{b64}"

         if __name__ == "__main__":
             image_path = "path_to_your_image.jpg"
             data_url = encode_image_to_data_url(image_path)
             # Vision input per the Responses API doc: input_text + input_image,
             # where image_url can be a base64 data URL. [1](https://learn.microsoft.com/en-us/azure/foundry/openai/how-to/responses)[2](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses?view=foundry-classic)
             response = client.responses.create(
                 model="gpt-4o",  # use your deployed model name if required in your setup
                 input=[
                     {
                         "role": "user",
                         "content": [
                             {"type": "input_text", "text": "Describe what is in this image."},
                             {"type": "input_image", "image_url": data_url},
                         ],
                     }
                 ],
             )
             # Print the full response
             print(response)
    2. You can copy the image to Azure Blob Storage and reference it via a URL the service is allowed to reach (for example, a SAS URL).
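    Alternatively, if the image sits on a public site, you can fetch it yourself from a machine with outbound internet access and pass the bytes as a base64 data URL, so the service never has to download the link. A stdlib-only sketch (the MIME fallback logic is an assumption; adapt it to your inputs):

```python
import base64
import mimetypes
import urllib.request

def bytes_to_data_url(data: bytes, mime: str) -> str:
    """Wrap raw image bytes in a data URL the API accepts in place of a link."""
    return f"data:{mime};base64,{base64.b64encode(data).decode('utf-8')}"

def url_to_data_url(image_url: str) -> str:
    """Download the image client-side, then convert it to a base64 data URL."""
    with urllib.request.urlopen(image_url) as resp:
        mime = resp.headers.get_content_type()
        data = resp.read()
    if not mime.startswith("image/"):
        # Fall back to guessing the type from the URL extension.
        mime = mimetypes.guess_type(image_url)[0] or "image/jpeg"
    return bytes_to_data_url(data, mime)
```

    The resulting data URL can be passed as the image_url value in the vision request exactly like the local-file approach above.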

    Follow-up queries

    Please share your network settings and the image link so we can replicate the issue.

    Which model are you using?

    Thank you.

