Course info
Jun 21, 2018
1h 31m

Artificial Intelligence is quickly becoming one of the most important technologies in today's fast-moving world. In this course, Microsoft Azure Cognitive Services: Face API, you will learn how to extract metadata from faces, such as emotion, age, gender, and more. First, you will see how face detection works using direct HTTP calls as well as with the SDK. Next, you will get familiar with the face identification workflow and how to use face identification in a Postman environment. Finally, you will see how to use the Face API Explorer on your local machine. By the end of this course, you will feel confident fully leveraging the Face API over HTTP, which will translate easily into your preferred programming language. Software required: Microsoft Azure Cognitive Services Face API 1.0.

About the author

Steve is a Program Manager with Microsoft on the Azure Global Engineering team. Prior to joining Microsoft, he was a 7-time Microsoft ASP.NET MVP.

Section Introduction Transcripts

Course Overview
Hi everyone, my name is Steve Michelotti. Welcome to my course, Microsoft Cognitive Services: Face API. I am a software engineer and technologist, and I work at Microsoft. Our world is currently in an amazing period of exploding innovation in the areas of artificial intelligence and machine learning. The Cognitive Services Face API is a great example of that innovation. This course is a quick introduction to the Face API, and you just need some basic knowledge of how HTTP works in order to follow all the examples in this course. Some of the major topics that we will cover include getting started with the Face API by provisioning an account and making your first call, face detection including emotion detection, face identification, and other advanced features such as face verification and grouping of similar faces. By the end of this course, you'll know all the basics of working with the Face API, and you'll be able to actually incorporate this functionality into your own apps. I hope you'll join me on this journey to learn the Face API with the Microsoft Cognitive Services Face API course at Pluralsight.

Hi, this is Steve Michelotti. Welcome to the course on Microsoft Cognitive Services: Face API. Over the last several years, the world has seen an explosion in artificial intelligence-based technologies that would have seemed like science fiction just a short time ago. The Cognitive Services Face API is one example of that. The Face API enables developers to easily perform face and emotion detection on demand with a simple HTTP call, and embed this functionality directly into apps. In this course, I'll show you how easy it is to quickly get up and running with the Face API. From provisioning a Face API account to making your first call, you'll be up and running in no time. You'll see that the Face API is simply an HTTP Web API, so we'll leverage Postman heavily throughout this course to enable us to easily make direct calls to the Face API. In addition to showing direct HTTP calls, I'll show you how to incorporate the Face API in an application. For the demo app in this course, I'll be using Angular, but you can use any technology you want. Here's a look at the app we'll be building to perform face detection, which can provide metadata about a face such as age estimates, gender, emotion, and more. Once we get past face detection, I'll show many advanced face analysis operations such as face identification. We'll also look at other face analysis operations like face grouping and finding similar faces.
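Since the Face API is simply an HTTP Web API, a direct call is easy to sketch. The snippet below builds (but does not send) a detect request using only the Python standard library; the region in the endpoint, the subscription key, and the image URL are all placeholders you would replace with your own values.

```python
import json
import urllib.parse
import urllib.request

# Placeholders -- substitute your provisioned region and key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def build_detect_request(image_url, attributes="age,gender,emotion"):
    """Prepare a Face API detect call for an image at a public URL."""
    query = urllib.parse.urlencode({"returnFaceAttributes": attributes})
    return urllib.request.Request(
        f"{ENDPOINT}/detect?{query}",
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_detect_request("https://example.com/face.jpg")
print(req.get_method(), req.full_url)
```

Sending the prepared request (for example with `urllib.request.urlopen`) returns a JSON array with one entry per detected face, which is exactly what we will inspect with Postman throughout the course.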

Getting Started
In this module, we'll get up and running with the Face API by provisioning accounts. We'll first provision a Face API account with the Azure portal. I'll then show you how to provision an account with the Azure command line, also known as the Azure CLI. Let's get started.
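As a rough command-line sketch of the provisioning steps, the commands below create a resource group, provision a Face account, and list its keys. The group name, account name, and region are placeholders, and the S0 SKU is just one possible pricing tier.

```shell
# Create a resource group to hold the account (names/region are placeholders)
az group create --name face-demo-rg --location eastus

# Provision a Face API account in that group
az cognitiveservices account create \
    --name face-demo-account \
    --resource-group face-demo-rg \
    --kind Face \
    --sku S0 \
    --location eastus \
    --yes

# Retrieve the subscription keys used to authenticate API calls
az cognitiveservices account keys list \
    --name face-demo-account \
    --resource-group face-demo-rg
```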

Face Detection
In this module, we'll go deeper into face detection. We already made our first call to face detection in the last demo, but now we'll look at more options. First, we'll dive deeper into the various options we have for face detection using the direct HTTP Web API call with the Postman tool. I'll then show you various SDK choices provided by the Face API, and then I'll implement an example using C#, which will give you a flavor of what it looks like to use one of the SDKs. Face detection provides a wide range of metadata about each face. First, it provides the location of the face inside a given image. This location is what we refer to as a face rectangle, which is simply the width, height, top, and left. The landmarks are an optional piece of metadata you can use to get x, y coordinates for other information about the face. Examples include coordinates for the eyes, nose, mouth, eyebrows, and lips. A unique ID for each face is provided if you optionally want to be able to keep track of the same face across calls. We'll see this used later in the course. Various attributes can optionally be provided. These include age, gender, emotions, head pose, smile, facial hair, whether the person is wearing glasses, and more. Let's see this in action.
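To make that metadata concrete, here is a small Python sketch that walks a detect-style response. The JSON values are made up for illustration; they only mirror the fields described above (face ID, face rectangle, landmarks, and attributes).

```python
import json

# An illustrative (made-up) detect response: one entry per face found.
sample_response = json.loads("""
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": {"width": 78, "height": 78, "top": 118, "left": 394},
    "faceLandmarks": {
      "pupilLeft": {"x": 412.7, "y": 78.4},
      "noseTip": {"x": 418.5, "y": 74.2}
    },
    "faceAttributes": {
      "age": 31.0,
      "gender": "female",
      "smile": 0.82,
      "emotion": {"happiness": 0.82, "neutral": 0.18}
    }
  }
]
""")

for face in sample_response:
    rect = face["faceRectangle"]
    attrs = face["faceAttributes"]
    # Pick the strongest emotion from the scores dictionary.
    top_emotion = max(attrs["emotion"], key=attrs["emotion"].get)
    print(f"Face {face['faceId'][:8]}: {rect['width']}x{rect['height']} "
          f"at ({rect['left']}, {rect['top']}), age ~{attrs['age']}, "
          f"emotion: {top_emotion}")
```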

Face Identification
In this module, we're going to walk through a complete workflow for face identification. We're going to do this with Postman so you have total visibility into each API call that is being made. In the next module, we'll bring in a web app to do this in a more visual way. Before we start executing calls, it's very important to understand the overall workflow of how the face identification process works. This is extremely valuable context. We'll be examining each construct in depth as we walk through the workflow. I'm going to set up our Postman environment by showing you some Postman tips and tricks that will make our development experience more efficient. We'll then look at person group operations, which is the foundation for face identification. We'll explore various methods for working with person entities and attaching them to person groups. We'll add faces to person objects with person face operations. Finally, we'll look at the various API calls needed to train our model and invoke face identification.
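The workflow above can be summarized as an ordered list of Face API v1.0 calls. This Python sketch lists them with hypothetical IDs and names; in a real run, the person ID comes back from the create-person call and the face ID comes back from an earlier detect call.

```python
# Placeholders -- real values come from your own choices and API responses.
GROUP_ID = "my-group"                # chosen by you when creating the group
PERSON_ID = "<returned-person-id>"   # returned by the create-person call
FACE_ID = "<returned-face-id>"       # returned by an earlier detect call

workflow = [
    # 1. Create a person group -- the container for identification.
    ("PUT",  f"/persongroups/{GROUP_ID}", {"name": "My Group"}),
    # 2. Create a person inside the group.
    ("POST", f"/persongroups/{GROUP_ID}/persons", {"name": "Anna"}),
    # 3. Attach one or more face images to that person.
    ("POST", f"/persongroups/{GROUP_ID}/persons/{PERSON_ID}/persistedFaces",
     {"url": "https://example.com/anna.jpg"}),
    # 4. Train the group so its faces become identifiable.
    ("POST", f"/persongroups/{GROUP_ID}/train", None),
    # 5. Identify: map detected faceIds to trained persons.
    ("POST", "/identify", {"personGroupId": GROUP_ID, "faceIds": [FACE_ID]}),
]

for method, path, body in workflow:
    print(f"{method:4} {path}" + (f"  body={body}" if body else ""))
```

We'll execute each of these calls individually in Postman so you can inspect every request and response along the way.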

Face API Explorer and Face Analysis
In this module, we're going to do face detection and identification, but this time we'll use a web app so things are more visual. I'll also show you some of the other face analysis features that come out-of-the-box with the Face API. I'll be demonstrating an open source sample I created called the Face API Explorer. I'll start out by showing you how to download the sample onto your local machine, and how to get everything set up locally. I'll then use the Face API Explorer to create person groups, persons, and faces. This will be conceptually similar to what we did before with Postman, but this time we'll be using a Web UI to do it visually. Once we have our person groups and persons established, we'll do face detection and verification with our Web UI. I'll then show some of the other face analysis operations available with the Face API. I'll show Verify, which verifies if two faces belong to the same person. I'll show Face Grouping, which can group together similar faces given a face list. Finally, I'll show the Find Similar operation, which can either match a face or match a person. As you might expect, Match Person can be used to find the same person, matching a face can return similar faces, even if they're not the same person. This type of functionality could be used to find your celebrity look-alike, for example. Now it's time to get the Face API Explorer up and running.