
Computer Vision with Microsoft Azure

Douglas Starnes

  • Jul 8, 2020
  • 10 Min read
Data Analytics
Machine Learning


Computer vision is one of the most popular uses of artificial intelligence. It allows computers to analyze visual images and find features more accurately and efficiently than a human would. It's also very difficult to implement correctly. Computer vision models require tremendous resources in computing, time, and knowledge. As computer vision becomes more accepted and even expected in today's modern apps, how can developers at small companies without the resources of a software giant keep up?

That's where Microsoft Azure Cognitive Services come in. Using the Computer Vision service, developers of any app can add computer vision with little or no knowledge of machine learning. For example, the classic "Hot Dog or No Hot Dog" app could easily be implemented with the image analysis service. And adult content detection could be used to make sure that age-appropriate images are uploaded to an app.

The Azure Cognitive Services are implemented as REST APIs. These APIs expose the power of models that Microsoft has spent the time and resources to train. You can send image data to these APIs, and Microsoft will perform some computer vision magic, charge you a little bit of money (though it's free to get started), and send the results back to you.


Before using the APIs, you'll need to create an instance of the Computer Vision service. In the Azure Portal, click the Create a resource link and then search for Computer Vision. Click the Create button. Give the resource a Name and select a Subscription and Location. Select F0 for the Pricing tier to get a limited quota to use the service for free. This is adequate for experimentation, but a paid tier should be used for production. Click the Create button and the resource will take a minute to deploy.

In the overview for the resource, click the link to manage the keys. You'll see two keys and an endpoint. The keys will authenticate you as a user of the service. Treat them like passwords.
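Rather than pasting a key directly into source code, you can keep it out of version control by reading it from environment variables. A minimal sketch follows; the variable names are a convention assumed for this guide, not something the SDK requires.

```python
import os

def load_credentials():
    """Read the Computer Vision key and endpoint from environment
    variables so they never end up hard-coded in source control.
    COMPUTER_VISION_KEY / COMPUTER_VISION_ENDPOINT are names chosen
    for this guide, not names the SDK looks for."""
    key = os.environ['COMPUTER_VISION_KEY']
    endpoint = os.environ['COMPUTER_VISION_ENDPOINT']
    return key, endpoint
```

Raising a KeyError on a missing variable is deliberate here: failing fast beats silently calling the service with an empty key.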


Image Analysis

This guide will use Python to demonstrate how to use the Computer Vision service. SDKs are also available for other languages, including C#, Java, and JavaScript, or you can send data directly to the API endpoints using a library like requests. To use the Python SDK, you'll need to install it.

$ pip install azure-cognitiveservices-vision-computervision
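If you'd rather skip the SDK and call the REST API directly, the request shape looks roughly like the sketch below. It only builds the pieces of the request so nothing is sent; the `v3.1` API version in the path is an assumption and may differ for your resource.

```python
def build_analyze_request(endpoint, key, image_url, features):
    """Assemble the URL, headers, query parameters, and JSON body for
    a raw call to the analyze endpoint. The v3.1 path segment is an
    assumption; check your resource's documentation for the current
    API version."""
    url = endpoint.rstrip('/') + '/vision/v3.1/analyze'
    headers = {
        'Ocp-Apim-Subscription-Key': key,   # authenticates the request
        'Content-Type': 'application/json',
    }
    params = {'visualFeatures': ','.join(features)}
    body = {'url': image_url}
    return url, headers, params, body

# Sending it with requests would then be:
# response = requests.post(url, headers=headers, params=params, json=body)
```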

Next, create a ComputerVisionClient using the key and endpoint from the Azure Portal.

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(ENDPOINT, CognitiveServicesCredentials(KEY))

Use the client to describe an image by passing the URL of an image to the describe_image method.

description = client.describe_image(IMAGE_URL)

If the Computer Vision service was able to describe the image, it will return a list of captions.

for caption in description.captions:
    print(caption.text)

Azure describes this picture as "a room filled with furniture and a large window."
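Each caption also carries a confidence score alongside its text. When the service returns several captions, a small helper with no service call involved can pick the most confident one; it works on any objects with `text` and `confidence` attributes, like the SDK's caption type.

```python
def best_caption(captions):
    """Return the text of the highest-confidence caption, or None if
    the list is empty. Assumes each caption exposes `text` and
    `confidence` attributes, as the SDK's caption objects do."""
    if not captions:
        return None
    return max(captions, key=lambda c: c.confidence).text
```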


And that's all there is to it. You can use the results in any app with no knowledge of computer vision!

Face Detection

The same client is capable of detecting faces in an image. Simply provide an image URL to the analyze_image method. The second parameter is a list of features to detect. To detect faces, provide a list containing a single string, faces.

face_results = client.analyze_image(FACE_IMAGE_URL, ['faces'])

The results include a list named faces. Each face detected, if any, will include the age, gender, and face_rectangle or bounds.

for face in face_results.faces:
    print('{} year old {}'.format(face.age, face.gender))
    print(face.face_rectangle.as_dict())
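The face_rectangle gives the bounding box as left, top, width, and height. If you want to crop a detected face out of the image with a library like Pillow, that needs converting to a (left, upper, right, lower) tuple. A quick sketch working on the `as_dict()` form shown above:

```python
def to_crop_box(face_rectangle):
    """Convert the service's face_rectangle dict ({'left', 'top',
    'width', 'height'}) into the (left, upper, right, lower) box
    tuple that Pillow's Image.crop expects."""
    left = face_rectangle['left']
    top = face_rectangle['top']
    return (left, top,
            left + face_rectangle['width'],
            top + face_rectangle['height'])
```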

Azure identifies this picture as a 21-year-old female.


Keep in mind that Azure Cognitive Services also includes a dedicated Face service. It does everything the Computer Vision service does and can also detect emotions, facial hair, and glasses. It can even identify people detected in images.

Other Image Analysis Features

In a perfect world, people would not upload inappropriate content to apps. In the real world, the Azure Computer Vision service can detect and score adult, racy, and gory content in images. Use the adult feature with the analyze_image method.

adult_results = client.analyze_image(ADULT_IMAGE_URL, ['adult'])

The results include boolean fields indicating whether the content is considered adult, racy, or gory. They also include a confidence score between 0.0 and 1.0 for each, based on how likely the content is to be adult, racy, or gory.

print('Image is{} adult with a confidence of {}'.format(
    '' if adult_results.adult.is_adult_content else ' not',
    adult_results.adult.adult_score))

I'm not going to demo this one. You'll just have to trust that it works.
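If you do use it, the result's adult field carries separate scores (adult_score, racy_score, gore_score), so you can apply your own thresholds rather than relying only on the boolean flags. A sketch over a plain dict of those scores; the 0.5 cutoff is an arbitrary starting point, not a service default:

```python
def flag_content(scores, threshold=0.5):
    """Return the names of the categories whose score exceeds the
    threshold. `scores` mirrors the adult result's adult_score /
    racy_score / gore_score values keyed by category name; 0.5 is
    an arbitrary cutoff you should tune for your app."""
    return [name for name, score in scores.items() if score > threshold]
```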

The categories feature will return a category from one of 86 in an image taxonomy. Examples include "building_stair" and "plant_tree". A complete list can be found in the Azure Computer Vision documentation.

category_results = client.analyze_image(CATEGORY_IMAGE_URL, ['categories'])

Each entry in the categories field, if any, has the name of a category the image belongs to. As with the adult content detector, a score indicates the confidence that the category applies to the image.

for category in category_results.categories:
    print('Image is in the {} category with a confidence of {}'
          .format(category.name, category.score))

Azure placed this picture of a panda bear in the "animal_panda" category with a confidence level of 0.99609375, which is almost certain.
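Category names like "animal_panda" encode a simple hierarchy: a parent group and a member joined by an underscore, and some top-level names (such as "abstract_") end in a bare underscore. A small helper to split them apart:

```python
def split_category(name):
    """Split a taxonomy name like 'animal_panda' into its parent
    group and member. Top-level names such as 'abstract_' yield an
    empty member."""
    parent, _, member = name.partition('_')
    return parent, member
```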


The pattern continues for brands.

brand_results = client.analyze_image(BRAND_IMAGE_URL, ['brands'])

Each brand in the brands list has a name and a confidence score. This time, the bounds of the brand are included in rectangle.

for brand in brand_results.brands:
    print('The brand {} was found in the image with a confidence of {}'
          .format(brand.name, brand.confidence))

Azure finds a brand in this picture.


The brand is "Coca-Cola" with a confidence of 85.7%.

The service can also detect dominant and accent colors in an image with the color feature.

color_results = client.analyze_image(COLOR_IMAGE_URL, ['color'])

The is_bw_img field is a bool determining whether the image is black and white or full color. The dominant_colors and accent_color fields hold the dominant and accent colors, respectively. The dominant colors are from a set of twelve named colors while the accent color is a hexadecimal value. All color information is stored in the color field of the results.

print('The image is {}'
      .format('black and white' if color_results.color.is_bw_img else 'color'))
print('Dominant colors: {}'.format(color_results.color.dominant_colors))
print('Accent color: {}'.format(color_results.color.accent_color))
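The accent color comes back as a hex string without a leading "#". If your app needs it as numeric RGB values, a small conversion helper does the trick:

```python
def hex_to_rgb(accent_color):
    """Convert a hex color string, with or without a leading '#',
    into an (r, g, b) tuple of integers."""
    value = accent_color.lstrip('#')
    return tuple(int(value[i:i + 2], 16) for i in range(0, 6, 2))
```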

According to Azure, the dominant colors in this image are purple, black, and white.


Text Recognition

The Computer Vision service can also recognize text, both printed and handwritten. This one works differently from the other features as text recognition can take more time. Thus the final results are not returned by the read method.

read_operation = client.read(TEXT_IMAGE_URL, raw=True)

The headers of the response contain a resource with an ID for the operation at the end.

id_ = read_operation.headers['Operation-Location'].split('/')[-1]

Pass the id_ to the get_read_result method to retrieve the status of the text recognition operation.

read_results = client.get_read_result(id_)

If the status is Succeeded, then you can go on to the next step. The read results will contain a set of lines, and each line will have its text.

first_result = read_results.analyze_result.read_results[0]
for line in first_result.lines:
    print(line.text)

And from this image, Azure detected the text 'Azure Cognitive Services'.

The bounding box of each line can also be found in bounding_box. In the real world, you would poll the status of the read results. This could take some time depending on the amount of text to be read.
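That polling loop might look like the sketch below. It takes the status check as a zero-argument callable so it isn't tied to any particular client; the one-second interval and retry cap are arbitrary choices for this sketch, and the lambda in the docstring is just one way to wire it up.

```python
import time

def wait_for_read(get_status, interval=1.0, max_tries=30):
    """Poll until the read operation finishes. `get_status` is any
    zero-argument callable returning the operation's status string,
    for example: lambda: client.get_read_result(id_).status.
    The interval and retry cap are arbitrary defaults."""
    for _ in range(max_tries):
        status = get_status()
        if status.lower() in ('succeeded', 'failed'):
            return status
        time.sleep(interval)
    raise TimeoutError('read operation did not finish in time')
```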


The Computer Vision service in Azure Cognitive Services adds computer vision to almost any app. It offers SDKs for several languages, or your app can call the API endpoints directly. Best of all, you need little if any knowledge or experience with computer vision and machine learning to use it. If you can write code that calls a REST API, you're all set. And it's cost-efficient, often costing fractions of a penny per transaction, with a free quota to experiment. Thanks for reading!