This article demonstrates how to call the Image Analysis version 3.2 API to return information about an image's visual features. It also shows you how to parse the returned information using the client SDKs or REST API.
The code in this guide uses remote images referenced by URL. You might want to try different images on your own to see the full capability of the Image Analysis features.
When analyzing a remote image, you specify the image's URL by formatting the request body like this: {"url":"http://example.com/images/test.jpg"}.
To analyze a local image, put the binary image data in the HTTP request body.
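If you prefer to call the REST API directly rather than use an SDK, the following minimal sketch shows both request shapes. The endpoint, key, and image paths are placeholders, and the use of the Python requests library here is an illustration, not one of the article's SDK samples.

```python
# Minimal sketch (placeholders, not the article's sample code): call the v3.2 Analyze
# REST endpoint with the "requests" library.
import requests

endpoint = "https://<endpoint>.cognitiveservices.azure.com"  # assumed resource endpoint
key = "<your-key>"                                           # assumed subscription key
analyze_url = f"{endpoint}/vision/v3.2/analyze"

# Remote image: send the image URL as a JSON body.
response = requests.post(
    analyze_url,
    params={"visualFeatures": "Tags"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "http://example.com/images/test.jpg"},
)
print(response.json())

# Local image: send the raw bytes instead, with Content-Type application/octet-stream.
# with open("test.jpg", "rb") as f:
#     response = requests.post(
#         analyze_url,
#         params={"visualFeatures": "Tags"},
#         headers={"Ocp-Apim-Subscription-Key": key,
#                  "Content-Type": "application/octet-stream"},
#         data=f.read(),
#     )
```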
In your main class, save a reference to the URL of the image you want to analyze.
// URL image used for analyzing an image (image of puppy)
private const string ANALYZE_URL_IMAGE = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/refs/heads/master/ComputerVision/Images/dog.jpg";
To analyze a local image, see the ComputerVisionClient methods, such as AnalyzeImageInStreamAsync. Or, see the sample code on GitHub for scenarios involving local images.
In your main class, save a reference to the URL of the image you want to analyze.
String pathToRemoteImage = "https://github.com/Azure-Samples/cognitive-services-sample-data-files/raw/master/ComputerVision/Images/faces.jpg";
To analyze a local image, see the ComputerVision methods, such as AnalyzeImage. Or, see the sample code on GitHub for scenarios involving local images.
In your main function, save a reference to the URL of the image you want to analyze.
const describeURL = 'https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/celebrities.jpg';
To analyze a local image, see the ComputerVisionClient methods, such as describeImageInStream. Or, see the sample code on GitHub for scenarios involving local images.
Save a reference to the URL of the image you want to analyze.
remote_image_url = "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png"
To analyze a local image, see the ComputerVisionClientOperationsMixin methods, such as analyze_image_in_stream. Or, see the sample code on GitHub for scenarios involving local images.
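As a rough illustration of the local-image path, a call to analyze_image_in_stream might look like the following sketch. The file path, endpoint, and key are placeholders; see the GitHub samples for the complete pattern.

```python
# Sketch: analyze a local image by passing an open binary stream to
# analyze_image_in_stream (file path and client construction are placeholders).
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

computervision_client = ComputerVisionClient(
    "https://<endpoint>.cognitiveservices.azure.com",
    CognitiveServicesCredentials("<your-key>"))

with open("path/to/local/image.jpg", "rb") as image_stream:
    local_results = computervision_client.analyze_image_in_stream(
        image_stream, visual_features=[VisualFeatureTypes.tags])

for tag in local_results.tags:
    print(tag.name, tag.confidence)
```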
The Analyze API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. For a description of each feature, see the Azure AI Vision overview. The examples in the following sections add all of the available visual features, but for practical usage you likely only need one or two.
You can specify which features you want to use by setting the URL query parameters of the Analyze API. A parameter can have multiple values, separated by commas. Each feature you specify requires more computation time, so only specify what you need.
| URL parameter | Value | Description |
|---|---|---|
| features | Read | Reads the visible text in the image and outputs it as structured JSON data. |
| features | Description | Describes the image content with a complete sentence in supported languages. |
| features | SmartCrops | Finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest. |
| features | Objects | Detects various objects within an image, including the approximate location. The Objects argument is only available in English. |
| features | Tags | Tags the image with a detailed list of words related to the image content. |
A populated URL might look like this:
<endpoint>/vision/v3.2/analyze?visualFeatures=Tags
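To request several features at once, join them with commas in the visualFeatures query parameter. The following short sketch builds such a URL (the endpoint is a placeholder):

```python
# Sketch: request several visual features in one call by joining them with commas.
features = ["Tags", "Objects", "Description"]
analyze_url = (
    "https://<endpoint>.cognitiveservices.azure.com/vision/v3.2/analyze"
    "?visualFeatures=" + ",".join(features)
)
# -> .../vision/v3.2/analyze?visualFeatures=Tags,Objects,Description
```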
Define your new method for image analysis. Add the following code, which specifies visual features you'd like to extract in your analysis. See the VisualFeatureTypes enum for a complete list.
/*
* ANALYZE IMAGE - URL IMAGE
* Analyze URL image. Extracts captions, categories, tags, objects, faces, racy/adult/gory content,
* brands, celebrities, landmarks, color scheme, and image types.
*/
public static async Task AnalyzeImageUrl(ComputerVisionClient client, string imageUrl)
{
Console.WriteLine("----------------------------------------------------------");
Console.WriteLine("ANALYZE IMAGE - URL");
Console.WriteLine();
// Creating a list that defines the features to be extracted from the image.
List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>()
{
VisualFeatureTypes.Categories, VisualFeatureTypes.Description,
VisualFeatureTypes.Faces, VisualFeatureTypes.ImageType,
VisualFeatureTypes.Tags, VisualFeatureTypes.Adult,
VisualFeatureTypes.Color, VisualFeatureTypes.Brands,
VisualFeatureTypes.Objects
};
Specify which visual features you'd like to extract in your analysis. See the VisualFeatureTypes enum for a complete list.
// This list defines the features to be extracted from the image.
List<VisualFeatureTypes> featuresToExtractFromRemoteImage = new ArrayList<>();
featuresToExtractFromRemoteImage.add(VisualFeatureTypes.DESCRIPTION);
featuresToExtractFromRemoteImage.add(VisualFeatureTypes.CATEGORIES);
featuresToExtractFromRemoteImage.add(VisualFeatureTypes.TAGS);
featuresToExtractFromRemoteImage.add(VisualFeatureTypes.FACES);
featuresToExtractFromRemoteImage.add(VisualFeatureTypes.ADULT);
featuresToExtractFromRemoteImage.add(VisualFeatureTypes.COLOR);
featuresToExtractFromRemoteImage.add(VisualFeatureTypes.IMAGE_TYPE);
Specify which visual features you'd like to extract in your analysis. See the VisualFeatureTypes enum for a complete list.
// Get the visual feature for analysis
const features = ['Categories','Brands','Adult','Color','Description','Faces','ImageType','Objects','Tags'];
const domainDetails = ['Celebrities','Landmarks'];
Specify which visual features you'd like to extract in your analysis. See the VisualFeatureTypes enum for a complete list.
print("===== Analyze an image - remote =====")
# Select the visual feature(s) you want.
remote_image_features = [VisualFeatureTypes.categories,VisualFeatureTypes.brands,VisualFeatureTypes.adult,VisualFeatureTypes.color,VisualFeatureTypes.description,VisualFeatureTypes.faces,VisualFeatureTypes.image_type,VisualFeatureTypes.objects,VisualFeatureTypes.tags]
remote_image_details = [Details.celebrities,Details.landmarks]
You can also specify the language of the returned data. The following URL query parameter specifies the language. The default value is en.
| URL parameter | Value | Description |
|---|---|---|
| language | en | English |
| language | es | Spanish |
| language | ja | Japanese |
| language | pt | Portuguese |
| language | zh | Simplified Chinese |
A populated URL might look like this:
<endpoint>/vision/v3.2/analyze?visualFeatures=Tags&language=en
Use the language parameter of the AnalyzeImageAsync call to specify a language.
| Language | Value |
|---|---|
| English | en |
| Spanish | es |
| Japanese | ja |
| Portuguese | pt |
| Simplified Chinese | zh |
A method call that specifies a language might look like the following.
ImageAnalysis results = await client.AnalyzeImageAsync(imageUrl, visualFeatures: features, language: "en");
Use the AnalyzeImageOptionalParameter input in your Analyze call to specify a language.
| Language | Value |
|---|---|
| English | en |
| Spanish | es |
| Japanese | ja |
| Portuguese | pt |
| Simplified Chinese | zh |
A method call that specifies a language might look like the following.
ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(pathToRemoteImage)
.withVisualFeatures(featuresToExtractFromRemoteImage)
.language("en")
.execute();
Use the language property of the ComputerVisionClientAnalyzeImageOptionalParams input in your Analyze call to specify a language.
| Language | Value |
|---|---|
| English | en |
| Spanish | es |
| Japanese | ja |
| Portuguese | pt |
| Simplified Chinese | zh |
A method call that specifies a language might look like the following.
const result = (await computerVisionClient.analyzeImage(describeURL, { visualFeatures: features, language: 'en' }));
Use the language parameter of your analyze_image call to specify a language.
| Language | Value |
|---|---|
| English | en |
| Spanish | es |
| Japanese | ja |
| Portuguese | pt |
| Simplified Chinese | zh |
A method call that specifies a language might look like the following.
results_remote = computervision_client.analyze_image(remote_image_url , remote_image_features, remote_image_details, 'en')
This section shows you how to parse the results of the API call. It includes the API call itself.
The service returns a 200 HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response.
{
"metadata":
{
"width": 300,
"height": 200
},
"tagsResult":
{
"values":
[
{
"name": "grass",
"confidence": 0.9960499405860901
},
{
"name": "outdoor",
"confidence": 0.9956876635551453
},
{
"name": "building",
"confidence": 0.9893627166748047
},
{
"name": "property",
"confidence": 0.9853052496910095
},
{
"name": "plant",
"confidence": 0.9791355729103088
}
]
}
}
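If you call the REST endpoint directly, you can walk this JSON yourself. The following sketch parses the example response shown above; it assumes response is the requests response object from the earlier placeholder sketch.

```python
# Sketch: pull the image size and the tag names/confidences out of the
# example JSON response shown above.
analysis = response.json()

width = analysis["metadata"]["width"]
height = analysis["metadata"]["height"]
print(f"Image size: {width}x{height}")

for tag in analysis.get("tagsResult", {}).get("values", []):
    print(f"{tag['name']} with confidence {tag['confidence']:.2f}")
```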
Error codes
See the following list of possible errors and their causes:
- 400
  - InvalidImageUrl - Image URL is badly formatted or not accessible
  - InvalidImageFormat - Input data isn't a valid image
  - InvalidImageSize - Input image is too large
  - NotSupportedVisualFeature - Specified feature type isn't valid
  - NotSupportedImage - Unsupported image, for example child pornography
  - InvalidDetails - Unsupported detail parameter value
  - NotSupportedLanguage - The requested operation isn't supported in the language specified
  - BadArgument - More details are provided in the error message
- 415 - Unsupported media type error. The Content-Type isn't in the allowed types:
  - For an image URL, Content-Type should be application/json
  - For binary image data, Content-Type should be application/octet-stream or multipart/form-data
- 500
  - FailedToProcess
  - Timeout - Image processing timed out
  - InternalServerError
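When you call the REST endpoint directly, check the status code and error body before parsing the result. The following small sketch continues the earlier placeholder requests call; the error body shape shown is an assumption based on the standard Cognitive Services error format.

```python
# Sketch: basic error handling around the REST call (response comes from an earlier sketch).
if response.status_code != 200:
    error = response.json().get("error", {})
    # e.g. code "InvalidImageUrl" with a human-readable message
    print(f"Request failed ({response.status_code}): "
          f"{error.get('code')} - {error.get('message')}")
else:
    analysis = response.json()
```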
The following code calls the Image Analysis API and prints the results to the console.
// Analyze the URL image
ImageAnalysis results = await client.AnalyzeImageAsync(imageUrl, visualFeatures: features);
// Summarizes the image content.
Console.WriteLine("Summary:");
foreach (var caption in results.Description.Captions)
{
Console.WriteLine($"{caption.Text} with confidence {caption.Confidence}");
}
Console.WriteLine();
// Display categories the image is divided into.
Console.WriteLine("Categories:");
foreach (var category in results.Categories)
{
Console.WriteLine($"{category.Name} with confidence {category.Score}");
}
Console.WriteLine();
// Image tags and their confidence score
Console.WriteLine("Tags:");
foreach (var tag in results.Tags)
{
Console.WriteLine($"{tag.Name} {tag.Confidence}");
}
Console.WriteLine();
// Objects
Console.WriteLine("Objects:");
foreach (var obj in results.Objects)
{
Console.WriteLine($"{obj.ObjectProperty} with confidence {obj.Confidence} at location {obj.Rectangle.X}, " +
$"{obj.Rectangle.X + obj.Rectangle.W}, {obj.Rectangle.Y}, {obj.Rectangle.Y + obj.Rectangle.H}");
}
Console.WriteLine();
// Faces
Console.WriteLine("Faces:");
foreach (var face in results.Faces)
{
Console.WriteLine($"A {face.Gender} of age {face.Age} at location {face.FaceRectangle.Left}, " +
$"{face.FaceRectangle.Left}, {face.FaceRectangle.Top + face.FaceRectangle.Width}, " +
$"{face.FaceRectangle.Top + face.FaceRectangle.Height}");
}
Console.WriteLine();
// Adult or racy content, if any.
Console.WriteLine("Adult:");
Console.WriteLine($"Has adult content: {results.Adult.IsAdultContent} with confidence {results.Adult.AdultScore}");
Console.WriteLine($"Has racy content: {results.Adult.IsRacyContent} with confidence {results.Adult.RacyScore}");
Console.WriteLine($"Has gory content: {results.Adult.IsGoryContent} with confidence {results.Adult.GoreScore}");
Console.WriteLine();
// Well-known (or custom, if set) brands.
Console.WriteLine("Brands:");
foreach (var brand in results.Brands)
{
Console.WriteLine($"Logo of {brand.Name} with confidence {brand.Confidence} at location {brand.Rectangle.X}, " +
$"{brand.Rectangle.X + brand.Rectangle.W}, {brand.Rectangle.Y}, {brand.Rectangle.Y + brand.Rectangle.H}");
}
Console.WriteLine();
// Celebrities in image, if any.
Console.WriteLine("Celebrities:");
foreach (var category in results.Categories)
{
if (category.Detail?.Celebrities != null)
{
foreach (var celeb in category.Detail.Celebrities)
{
Console.WriteLine($"{celeb.Name} with confidence {celeb.Confidence} at location {celeb.FaceRectangle.Left}, " +
$"{celeb.FaceRectangle.Top}, {celeb.FaceRectangle.Height}, {celeb.FaceRectangle.Width}");
}
}
}
Console.WriteLine();
// Popular landmarks in image, if any.
Console.WriteLine("Landmarks:");
foreach (var category in results.Categories)
{
if (category.Detail?.Landmarks != null)
{
foreach (var landmark in category.Detail.Landmarks)
{
Console.WriteLine($"{landmark.Name} with confidence {landmark.Confidence}");
}
}
}
Console.WriteLine();
// Identifies the color scheme.
Console.WriteLine("Color Scheme:");
Console.WriteLine("Is black and white?: " + results.Color.IsBWImg);
Console.WriteLine("Accent color: " + results.Color.AccentColor);
Console.WriteLine("Dominant background color: " + results.Color.DominantColorBackground);
Console.WriteLine("Dominant foreground color: " + results.Color.DominantColorForeground);
Console.WriteLine("Dominant colors: " + string.Join(",", results.Color.DominantColors));
Console.WriteLine();
// Detects the image types.
Console.WriteLine("Image Type:");
Console.WriteLine("Clip Art Type: " + results.ImageType.ClipArtType);
Console.WriteLine("Line Drawing Type: " + results.ImageType.LineDrawingType);
Console.WriteLine();
The following code calls the Image Analysis API and prints the results to the console.
// Call the Computer Vision service and tell it to analyze the loaded image.
ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(pathToRemoteImage)
.withVisualFeatures(featuresToExtractFromRemoteImage).execute();
// Display image captions and confidence values.
System.out.println("\nCaptions: ");
for (ImageCaption caption : analysis.description().captions()) {
System.out.printf("\'%s\' with confidence %f\n", caption.text(), caption.confidence());
}
// Display image category names and confidence values.
System.out.println("\nCategories: ");
for (Category category : analysis.categories()) {
System.out.printf("\'%s\' with confidence %f\n", category.name(), category.score());
}
// Display image tags and confidence values.
System.out.println("\nTags: ");
for (ImageTag tag : analysis.tags()) {
System.out.printf("\'%s\' with confidence %f\n", tag.name(), tag.confidence());
}
// Display any faces found in the image and their location.
System.out.println("\nFaces: ");
for (FaceDescription face : analysis.faces()) {
System.out.printf("\'%s\' of age %d at location (%d, %d), (%d, %d)\n", face.gender(), face.age(),
face.faceRectangle().left(), face.faceRectangle().top(),
face.faceRectangle().left() + face.faceRectangle().width(),
face.faceRectangle().top() + face.faceRectangle().height());
}
// Display whether any adult or racy content was detected and the confidence
// values.
System.out.println("\nAdult: ");
System.out.printf("Is adult content: %b with confidence %f\n", analysis.adult().isAdultContent(),
analysis.adult().adultScore());
System.out.printf("Has racy content: %b with confidence %f\n", analysis.adult().isRacyContent(),
analysis.adult().racyScore());
// Display the image color scheme.
System.out.println("\nColor scheme: ");
System.out.println("Is black and white: " + analysis.color().isBWImg());
System.out.println("Accent color: " + analysis.color().accentColor());
System.out.println("Dominant background color: " + analysis.color().dominantColorBackground());
System.out.println("Dominant foreground color: " + analysis.color().dominantColorForeground());
System.out.println("Dominant colors: " + String.join(", ", analysis.color().dominantColors()));
// Display any celebrities detected in the image and their locations.
System.out.println("\nCelebrities: ");
for (Category category : analysis.categories()) {
if (category.detail() != null && category.detail().celebrities() != null) {
for (CelebritiesModel celeb : category.detail().celebrities()) {
System.out.printf("\'%s\' with confidence %f at location (%d, %d), (%d, %d)\n", celeb.name(),
celeb.confidence(), celeb.faceRectangle().left(), celeb.faceRectangle().top(),
celeb.faceRectangle().left() + celeb.faceRectangle().width(),
celeb.faceRectangle().top() + celeb.faceRectangle().height());
}
}
}
// Display any landmarks detected in the image and their locations.
System.out.println("\nLandmarks: ");
for (Category category : analysis.categories()) {
if (category.detail() != null && category.detail().landmarks() != null) {
for (LandmarksModel landmark : category.detail().landmarks()) {
System.out.printf("\'%s\' with confidence %f\n", landmark.name(), landmark.confidence());
}
}
}
// Display what type of clip art or line drawing the image is.
System.out.println("\nImage type:");
System.out.println("Clip art type: " + analysis.imageType().clipArtType());
System.out.println("Line drawing type: " + analysis.imageType().lineDrawingType());
The following code calls the Image Analysis API and prints the results to the console.
const result = (await computerVisionClient.analyzeImage(describeURL, { visualFeatures: features, details: domainDetails }));
// Detect faces
// Print the bounding box, gender, and age from the faces.
const faces = result.faces
if (faces.length) {
console.log(`${faces.length} face${faces.length == 1 ? '' : 's'} found:`);
for (const face of faces) {
console.log(` Gender: ${face.gender}`.padEnd(20)
+ ` Age: ${face.age}`.padEnd(10) + `at ${formatRectFaces(face.faceRectangle)}`);
}
} else { console.log('No faces found.'); }
// Formats the bounding box
function formatRectFaces(rect) {
return `top=${rect.top}`.padEnd(10) + `left=${rect.left}`.padEnd(10) + `bottom=${rect.top + rect.height}`.padEnd(12)
+ `right=${rect.left + rect.width}`.padEnd(10) + `(${rect.width}x${rect.height})`;
}
// Detect Objects
const objects = result.objects;
console.log();
// Print objects bounding box and confidence
if (objects.length) {
console.log(`${objects.length} object${objects.length == 1 ? '' : 's'} found:`);
for (const obj of objects) { console.log(` ${obj.object} (${obj.confidence.toFixed(2)}) at ${formatRectObjects(obj.rectangle)}`); }
} else { console.log('No objects found.'); }
// Formats the bounding box
function formatRectObjects(rect) {
return `top=${rect.y}`.padEnd(10) + `left=${rect.x}`.padEnd(10) + `bottom=${rect.y + rect.h}`.padEnd(12)
+ `right=${rect.x + rect.w}`.padEnd(10) + `(${rect.w}x${rect.h})`;
}
console.log();
// Detect tags
const tags = result.tags;
console.log(`Tags: ${formatTags(tags)}`);
// Format tags for display
function formatTags(tags) {
return tags.map(tag => (`${tag.name} (${tag.confidence.toFixed(2)})`)).join(', ');
}
console.log();
// Detect image type
const types = result.imageType;
console.log(`Image appears to be ${describeType(types)}`);
function describeType(imageType) {
if (imageType.clipArtType && imageType.clipArtType > imageType.lineDrawingType) return 'clip art';
if (imageType.lineDrawingType && imageType.clipArtType < imageType.lineDrawingType) return 'a line drawing';
return 'a photograph';
}
console.log();
// Detect Category
const categories = result.categories;
console.log(`Categories: ${formatCategories(categories)}`);
// Formats the image categories
function formatCategories(categories) {
categories.sort((a, b) => b.score - a.score);
return categories.map(cat => `${cat.name} (${cat.score.toFixed(2)})`).join(', ');
}
console.log();
// Detect Brands
const brands = result.brands;
// Print the brands found
if (brands.length) {
console.log(`${brands.length} brand${brands.length != 1 ? 's' : ''} found:`);
for (const brand of brands) {
console.log(` ${brand.name} (${brand.confidence.toFixed(2)} confidence)`);
}
} else { console.log(`No brands found.`); }
console.log();
// Detect Colors
const color = result.color;
printColorScheme(color);
// Print a detected color scheme
function printColorScheme(colors) {
console.log(`Image is in ${colors.isBWImg ? 'black and white' : 'color'}`);
console.log(`Dominant colors: ${colors.dominantColors.join(', ')}`);
console.log(`Dominant foreground color: ${colors.dominantColorForeground}`);
console.log(`Dominant background color: ${colors.dominantColorBackground}`);
console.log(`Suggested accent color: #${colors.accentColor}`);
}
console.log();
// Detect landmarks
// Landmarks requested through the 'details' option are returned in each category's detail.
const domain = [];
for (const category of result.categories) {
if (category.detail && category.detail.landmarks) { domain.push(...category.detail.landmarks); }
}
// Prints domain-specific, recognized objects
if (domain.length) {
console.log(`${domain.length} ${domain.length == 1 ? 'landmark' : 'landmarks'} found:`);
for (const obj of domain) {
console.log(` ${obj.name}`.padEnd(20) + `(${obj.confidence.toFixed(2)} confidence)`.padEnd(20) + `${formatRectDomain(obj.faceRectangle)}`);
}
} else {
console.log('No landmarks found.');
}
// Formats bounding box
function formatRectDomain(rect) {
if (!rect) return '';
return `top=${rect.top}`.padEnd(10) + `left=${rect.left}`.padEnd(10) + `bottom=${rect.top + rect.height}`.padEnd(12) +
`right=${rect.left + rect.width}`.padEnd(10) + `(${rect.width}x${rect.height})`;
}
console.log();
// Detect Adult content
// Function to confirm racy or not
const isIt = flag => flag ? 'is' : "isn't";
const adult = result.adult;
console.log(`This probably ${isIt(adult.isAdultContent)} adult content (${adult.adultScore.toFixed(4)} score)`);
console.log(`This probably ${isIt(adult.isRacyContent)} racy content (${adult.racyScore.toFixed(4)} score)`);
console.log();
The following code calls the Image Analysis API and prints the results to the console.
# Call API with URL and features
results_remote = computervision_client.analyze_image(remote_image_url , remote_image_features, remote_image_details)
# Print results with confidence score
print("Categories from remote image: ")
if (len(results_remote.categories) == 0):
print("No categories detected.")
else:
for category in results_remote.categories:
print("'{}' with confidence {:.2f}%".format(category.name, category.score * 100))
print()
# Detect faces
# Print the results with gender, age, and bounding box
print("Faces in the remote image: ")
if (len(results_remote.faces) == 0):
print("No faces detected.")
else:
for face in results_remote.faces:
print("'{}' of age {} at location {}, {}, {}, {}".format(face.gender, face.age, \
face.face_rectangle.left, face.face_rectangle.top, \
face.face_rectangle.left + face.face_rectangle.width, \
face.face_rectangle.top + face.face_rectangle.height))
# Adult content
# Print results with adult/racy score
print("Analyzing remote image for adult or racy content ... ")
print("Is adult content: {} with confidence {:.2f}".format(results_remote.adult.is_adult_content, results_remote.adult.adult_score * 100))
print("Has racy content: {} with confidence {:.2f}".format(results_remote.adult.is_racy_content, results_remote.adult.racy_score * 100))
print()
# Detect colors
# Print results of color scheme
print("Getting color scheme of the remote image: ")
print("Is black and white: {}".format(results_remote.color.is_bw_img))
print("Accent color: {}".format(results_remote.color.accent_color))
print("Dominant background color: {}".format(results_remote.color.dominant_color_background))
print("Dominant foreground color: {}".format(results_remote.color.dominant_color_foreground))
print("Dominant colors: {}".format(results_remote.color.dominant_colors))
print()
# Detect image type
# Prints type results with degree of accuracy
print("Type of remote image:")
if results_remote.image_type.clip_art_type == 0:
print("Image is not clip art.")
elif results_remote.image_type.clip_art_type == 1:
print("Image is ambiguously clip art.")
elif results_remote.image_type.clip_art_type == 2:
print("Image is normal clip art.")
else:
print("Image is good clip art.")
if results_remote.image_type.line_drawing_type == 0:
print("Image is not a line drawing.")
else:
print("Image is a line drawing")
# Detect brands
print("Detecting brands in remote image: ")
if len(results_remote.brands) == 0:
print("No brands detected.")
else:
for brand in results_remote.brands:
print("'{}' brand detected with confidence {:.1f}% at location {}, {}, {}, {}".format( \
brand.name, brand.confidence * 100, brand.rectangle.x, brand.rectangle.x + brand.rectangle.w, \
brand.rectangle.y, brand.rectangle.y + brand.rectangle.h))
# Detect objects
# Print detected objects results with bounding boxes
print("Detecting objects in remote image:")
if len(results_remote.objects) == 0:
print("No objects detected.")
else:
    for object in results_remote.objects:
print("object at location {}, {}, {}, {}".format( \
object.rectangle.x, object.rectangle.x + object.rectangle.w, \
object.rectangle.y, object.rectangle.y + object.rectangle.h))
# Describe image
# Get the captions (descriptions) from the response, with confidence level
print("Description of remote image: ")
if (len(results_remote.description.captions) == 0):
print("No description detected.")
else:
for caption in results_remote.description.captions:
print("'{}' with confidence {:.2f}%".format(caption.text, caption.confidence * 100))
print()
# Return tags
# Print results with confidence score
print("Tags in the remote image: ")
if (len(results_remote.tags) == 0):
print("No tags detected.")
else:
for tag in results_remote.tags:
print("'{}' with confidence {:.2f}%".format(tag.name, tag.confidence * 100))
# Detect celebrities
# Celebrities are returned in the detail of each matching category.
print("Celebrities in the remote image:")
celebrities = [celeb for category in results_remote.categories
               if category.detail and category.detail.celebrities
               for celeb in category.detail.celebrities]
if len(celebrities) == 0:
    print("No celebrities detected.")
else:
    for celeb in celebrities:
        print(celeb.name)
# Detect landmarks
# Landmarks are also returned in the category details.
print("Landmarks in the remote image:")
landmarks = [landmark for category in results_remote.categories
             if category.detail and category.detail.landmarks
             for landmark in category.detail.landmarks]
if len(landmarks) == 0:
    print("No landmarks detected.")
else:
    for landmark in landmarks:
        print(landmark.name)