How To Learn AI With Facial Recognition Using C#

There is no doubt that artificial intelligence (AI) and robotics are the future of computing. One major milestone we have seen with AI and the internet of things (IoT) is using facial recognition to interact with and recognize humans. Even though recognizing a face is a simple task that humans perform every day, it is considerably more complex for computers to recognize a human face and assign attributes to it.

As artificial intelligence matures, we are finding that biometrics such as facial recognition can help computers learn faster and have more meaningful interactions with the “actors” around them. To be honest, interpreting facial patterns and attributes is not something I would try to code from scratch for any given piece of software, because of the complexity and deep understanding of biometrics such a feature would require.

Luckily for you and me, Microsoft has already created API services with the logic needed to run biometric evaluations, under the umbrella of Cognitive Services. We’ll walk through a C# example of using Cognitive Services and discuss how and why these services can be used in your own coding endeavors. If you are interested in additional tutorials on facial recognition services, feel free to sign up for a free trial at Pluralsight, take this C# facial recognition course on Udemy or read Learning Microsoft Cognitive Services by Leif Larsen.

Who Is Using Facial Recognition?

If AI is the future of computing and facial recognition is an important aspect of it, then obviously there must be companies already using facial recognition in their software, right? That’s right! As a matter of fact, the industry that has used it the most over the years is the security industry.

Biometrics has long been viewed as the ultimate authentication measurement for verifying identity. That is why, when watching movies, you almost always see anything overly valuable protected behind one or more biometric scanning sensors programmed to unlock vaults and doors only for the owner of the valuable asset.

According to TechTarget, biometric authentication is a security process that relies on the unique biological characteristics of an individual to verify the identity of a specific person. Types of biometric identification include retinal scanning, fingerprint scanning, voice recognition and many more. Cameras paired with facial recognition software are among the most commonly used security tools, appearing in products such as the Ring Video Doorbell, the Nest Security Video Camera, and the Motorola Baby Monitor.

Other common places you will see facial recognition software are platforms such as Facebook, which tags your friends in a picture based on the faces it finds, and Snapchat, which creates filters that give you and your child dog ears, a dog nose, and a dog mouth (arguably the biggest misuse of facial recognition, but I’m not judging).

These examples barely scratch the surface of how facial recognition is used in today’s software, but hopefully they provide enough background to understand some of the ways it is applied. Now that you have that background, we can get into writing some C# code that uses Microsoft’s Cognitive Services for facial recognition.

The Code Behind The Face… Or Facial Recognition

[code lang="csharp"]

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

namespace FacialRecognitionApp
{
    static class Program
    {
        // Replace with your valid subscription key.
        const string subscriptionKey = "";

        // NOTE: You must use the same region in your REST call as you used to
        // obtain your subscription keys. For example, if you obtained your
        // subscription keys from westus, replace "westcentralus" in the URL
        // below with "westus".
        //
        // Free trial subscription keys are generated in the westcentralus region.
        // If you use a free trial subscription key, you shouldn't need to change
        // this region.
        const string uriBase =
            "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect";

        static void Main()
        {
            // Get the path and filename to process from the user.
            Console.WriteLine("Detect faces:");
            Console.Write(
                "Enter the path to an image with faces that you wish to analyze: ");
            string imageFilePath = Console.ReadLine();

            if (File.Exists(imageFilePath))
            {
                // Execute the REST API call.
                try
                {
                    MakeAnalysisRequest(imageFilePath);
                    Console.WriteLine("\nWait a moment for the results to appear.\n");
                }
                catch (Exception e)
                {
                    Console.WriteLine("\n" + e.Message + "\nPress Enter to exit...\n");
                }
            }
            else
            {
                Console.WriteLine("\nInvalid file path.\nPress Enter to exit...\n");
            }
            Console.ReadLine();
        }

        /// <summary>
        /// Gets the analysis of the specified image by using the Face REST API.
        /// </summary>
        /// <param name="imageFilePath">The image file.</param>
        // Note: async void mirrors the original sample, but exceptions thrown
        // after the first await will not be caught by Main's try-catch block.
        static async void MakeAnalysisRequest(string imageFilePath)
        {
            HttpClient client = new HttpClient();

            // Request headers.
            client.DefaultRequestHeaders.Add(
                "Ocp-Apim-Subscription-Key", subscriptionKey);

            // Request parameters. A third optional parameter is "details".
            string requestParameters = "returnFaceId=true&returnFaceLandmarks=true" +
                "&returnFaceAttributes=age,gender,headPose,smile,facialHair,glasses," +
                "emotion,hair,makeup,occlusion,accessories,blur,exposure,noise";

            // Assemble the URI for the REST API call.
            string uri = uriBase + "?" + requestParameters;

            HttpResponseMessage response;

            // Request body. Posts a locally stored image.
            byte[] byteData = GetImageAsByteArray(imageFilePath);

            using (ByteArrayContent content = new ByteArrayContent(byteData))
            {
                // This example uses content type "application/octet-stream".
                // The other content types you can use are "application/json"
                // and "multipart/form-data".
                content.Headers.ContentType =
                    new MediaTypeHeaderValue("application/octet-stream");

                // Execute the REST API call.
                response = await client.PostAsync(uri, content);

                // Get the JSON response.
                string contentString = await response.Content.ReadAsStringAsync();

                // Display the JSON response.
                Console.WriteLine("\nResponse:\n");
                Console.WriteLine(JsonPrettyPrint(contentString));
                Console.WriteLine("\nPress Enter to exit...");
            }
        }

        /// <summary>
        /// Returns the contents of the specified file as a byte array.
        /// </summary>
        /// <param name="imageFilePath">The image file to read.</param>
        /// <returns>The byte array of the image data.</returns>
        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            using (FileStream fileStream =
                new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            using (BinaryReader binaryReader = new BinaryReader(fileStream))
            {
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }

        /// <summary>
        /// Formats the given JSON string by adding line breaks and indents.
        /// </summary>
        /// <param name="json">The raw JSON string to format.</param>
        /// <returns>The formatted JSON string.</returns>
        static string JsonPrettyPrint(string json)
        {
            if (string.IsNullOrEmpty(json))
                return string.Empty;

            json = json.Replace(Environment.NewLine, "").Replace("\t", "");

            StringBuilder sb = new StringBuilder();
            bool quote = false;
            bool ignore = false;
            int offset = 0;
            int indentLength = 3;

            foreach (char ch in json)
            {
                switch (ch)
                {
                    case '"':
                        if (!ignore) quote = !quote;
                        break;
                    case '\\':
                        if (quote) ignore = !ignore;
                        break;
                }

                if (quote)
                    sb.Append(ch);
                else
                {
                    switch (ch)
                    {
                        case '{':
                        case '[':
                            sb.Append(ch);
                            sb.Append(Environment.NewLine);
                            sb.Append(new string(' ', ++offset * indentLength));
                            break;
                        case '}':
                        case ']':
                            sb.Append(Environment.NewLine);
                            sb.Append(new string(' ', --offset * indentLength));
                            sb.Append(ch);
                            break;
                        case ',':
                            sb.Append(ch);
                            sb.Append(Environment.NewLine);
                            sb.Append(new string(' ', offset * indentLength));
                            break;
                        case ':':
                            sb.Append(ch);
                            sb.Append(' ');
                            break;
                        default:
                            if (ch != ' ') sb.Append(ch);
                            break;
                    }
                }
            }

            return sb.ToString().Trim();
        }
    }
}

[/code]

Above is the code we will be using. Instead of just leaving you with it, I want to walk through it so that you have a better understanding of what it’s doing. If you already understand the code and Cognitive Services, feel free to take it and play with it.

Starting at the top of the code, there are two constants. The first is the subscription key you are provided when you sign up for Cognitive Services. The second is the base URL you will use to access the API; Microsoft assigns you a URL based on your region. For trial keys, the default will always be the URL for the westcentralus region.

The Main Method

In the Main method, the console app starts by asking the user for the path of the image to evaluate on the local machine. This will be the same machine the code is hosted on, so if the code is deployed to an external server, it’s going to search that server’s file system.

After getting the image’s location, we use the File.Exists() method to check that there actually is a file in that location. Lastly, we put the logic in a try-catch block and call the method that does the real work, MakeAnalysisRequest().
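
One caveat: MakeAnalysisRequest() is declared async void, so any exception thrown after its first await will bypass the try-catch in Main. Below is a minimal sketch of a safer shape, assuming C# 7.1 or later (which allows an async entry point) and a hypothetical Task-returning variant of the method:

[code lang="csharp"]
using System;
using System.Net.Http;
using System.Threading.Tasks;

static class SaferProgram
{
    // Sketch: awaiting a Task-returning method lets Main actually
    // observe failures from the HTTP call.
    static async Task Main()
    {
        try
        {
            // MakeAnalysisRequestAsync is a hypothetical variant of the
            // sample's MakeAnalysisRequest that returns Task instead of void.
            await MakeAnalysisRequestAsync("photo.jpg");
        }
        catch (HttpRequestException e)
        {
            Console.WriteLine("Request failed: " + e.Message);
        }
    }

    static async Task MakeAnalysisRequestAsync(string imageFilePath)
    {
        // Same body as MakeAnalysisRequest in the sample, declared as Task.
        await Task.CompletedTask; // placeholder
    }
}
[/code]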

The Facial Analysis Request Method

The method starts by initializing the HttpClient class used to create an HTTP request. The method then adds a header to the request with the name Ocp-Apim-Subscription-Key and the value of the subscription key we were given. If this header is missing or does not contain a valid subscription key, you will receive an authorization error (a 401 or 403 HTTP response) instead of the analysis.
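
The sample prints whatever body comes back, even on failure, so a rejected key can be confusing. Here is a small illustrative helper (not part of the original sample) that surfaces authorization problems explicitly:

[code lang="csharp"]
using System;
using System.Net;
using System.Net.Http;

static class ResponseCheck
{
    // Sketch: call this after PostAsync to flag a bad key or region
    // before trying to interpret the JSON body.
    static void WarnIfUnauthorized(HttpResponseMessage response)
    {
        if (response.StatusCode == HttpStatusCode.Unauthorized ||
            response.StatusCode == HttpStatusCode.Forbidden)
        {
            Console.WriteLine(
                "Request rejected: check that the Ocp-Apim-Subscription-Key " +
                "value is valid and that the URL region matches the key's region.");
        }
    }
}
[/code]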

The next variable, requestParameters, is one I want to spend a little time on, since it’s the most configurable portion of Cognitive Services and has a direct impact on how the response will look. This variable declares the query string that will be passed to the API.
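
For example, if you only care about a couple of attributes, a trimmed-down parameter string keeps the response small. A sketch, reusing the same uriBase constant from the sample:

[code lang="csharp"]
// Sketch: request only age and emotion instead of the full attribute set.
// Fewer attributes mean less processing and a smaller JSON payload.
const string uriBase =
    "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect";

string requestParameters = "returnFaceId=true&returnFaceAttributes=age,emotion";
string uri = uriBase + "?" + requestParameters;
[/code]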

returnFaceId

The first parameter, returnFaceId, is a boolean property that returns the arbitrary GUID the service assigns to each face in the image. The default for this property is true. It’s also important to note that this unique ID expires 24 hours after the face is detected.

returnFaceLandmarks

The second parameter, returnFaceLandmarks, is another boolean property; when true, the service returns an array of 27 face landmarks marking the important positions of face components. By default, this property is set to false.

returnFaceAttributes

This is a comma-delimited list of attributes the JSON should return. There are computational and time costs to processing these attributes; we’ll get into the price a little later. Below is a list of the possible attributes you can select for the faces, followed by a short sketch that reads a few of them out of the response.

  • glasses – This attribute returns either NoGlasses, ReadingGlasses, Sunglasses, or SwimmingGoggles.
  • headPose – This uses three measurements (roll, yaw, and pitch) to determine the face direction. Pitch is the up-or-down direction of the face. Roll is the tilt of the head left or right. Yaw is the turn of the face away from the direct camera angle.
  • facialHair – This attribute returns lengths for three facial hair areas: mustache, beard, and sideburns. Each length is a number between 0 and 1.
  • smile – This measures the smile intensity, a number between 0 and 1.
  • gender – This returns either male or female.
  • age – This is an estimated “visual age” in years: how old the person looks rather than their actual biological age.
  • noise – Wait, how does an image make a sound? Image noise is actually random variation of brightness or color information in an image. The level value returns Low, Medium, or High, and the value attribute returns a number between 0 and 1; the larger the value, the noisier the image.
  • exposure – This is the face exposure level: GoodExposure, OverExposure, or UnderExposure.
  • blur – This is how blurry the face in the image is, with a level of Low, Medium, or High and a value ranging from 0 to 1. The higher the value, the blurrier the face.
  • accessories – This lists the accessories around the face, such as headWear, glasses, mask and more.
  • makeup – This returns whether or not the face has makeup.
  • hair – This returns whether or not there is hair and, if so, its color.
  • emotion – This returns the emotional intensity and which emotion the face is exuding.
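
Once the response comes back, reading these attributes is straightforward. Here is a minimal sketch, assuming the Newtonsoft.Json NuGet package is installed (the original sample only pretty-prints the raw string):

[code lang="csharp"]
using System;
using Newtonsoft.Json.Linq; // assumes the Newtonsoft.Json NuGet package

static class AttributePrinter
{
    // Sketch: contentString is the raw JSON array returned by the detect call.
    static void PrintBasicAttributes(string contentString)
    {
        JArray faces = JArray.Parse(contentString);
        foreach (JToken face in faces)
        {
            JToken attributes = face["faceAttributes"];
            Console.WriteLine("Age: " + attributes["age"]);
            Console.WriteLine("Smile: " + attributes["smile"]);
            Console.WriteLine("Glasses: " + attributes["glasses"]);
        }
    }
}
[/code]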

After concatenating the base URL and the query string, an HttpResponseMessage variable is declared and the image is turned into a byte array through a BinaryReader. Next, the method uses a ByteArrayContent to set the content type of application/octet-stream on the request’s content so the server knows the format of the request body.

The code then posts the request to the Cognitive Services URL to get the response. Last, we take the response, turn it into a string and print it to the screen. Obviously, this is not all you would do with the response in a real application, but it will help you see what the JSON looks like when it is returned. Below is an example of the JSON.

[code lang="csharp"]
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": {
      "width": 78,
      "height": 78,
      "left": 394,
      "top": 54
    },
    "faceLandmarks": {
      "pupilLeft": { "x": 412.7, "y": 78.4 },
      "pupilRight": { "x": 446.8, "y": 74.2 },
      "noseTip": { "x": 437.7, "y": 92.4 },
      "mouthLeft": { "x": 417.8, "y": 114.4 },
      "mouthRight": { "x": 451.3, "y": 109.3 },
      "eyebrowLeftOuter": { "x": 397.9, "y": 78.5 },
      "eyebrowLeftInner": { "x": 425.4, "y": 70.5 },
      "eyeLeftOuter": { "x": 406.7, "y": 80.6 },
      "eyeLeftTop": { "x": 412.2, "y": 76.2 },
      "eyeLeftBottom": { "x": 413.0, "y": 80.1 },
      "eyeLeftInner": { "x": 418.9, "y": 78.0 },
      "eyebrowRightInner": { "x": 4.8, "y": 69.7 },
      "eyebrowRightOuter": { "x": 5.5, "y": 68.5 },
      "eyeRightInner": { "x": 441.5, "y": 75.0 },
      "eyeRightTop": { "x": 446.4, "y": 71.7 },
      "eyeRightBottom": { "x": 447.0, "y": 75.3 },
      "eyeRightOuter": { "x": 451.7, "y": 73.4 },
      "noseRootLeft": { "x": 428.0, "y": 77.1 },
      "noseRootRight": { "x": 435.8, "y": 75.6 },
      "noseLeftAlarTop": { "x": 428.3, "y": 89.7 },
      "noseRightAlarTop": { "x": 442.2, "y": 87.0 },
      "noseLeftAlarOutTip": { "x": 424.3, "y": 96.4 },
      "noseRightAlarOutTip": { "x": 446.6, "y": 92.5 },
      "upperLipTop": { "x": 437.6, "y": 105.9 },
      "upperLipBottom": { "x": 437.6, "y": 108.2 },
      "underLipTop": { "x": 436.8, "y": 111.4 },
      "underLipBottom": { "x": 437.3, "y": 114.5 }
    },
    "faceAttributes": {
      "age": 71.0,
      "gender": "male",
      "smile": 0.88,
      "facialHair": {
        "moustache": 0.8,
        "beard": 0.1,
        "sideburns": 0.02
      },
      "glasses": "sunglasses",
      "headPose": {
        "roll": 2.1,
        "yaw": 3,
        "pitch": 0
      },
      "emotion": {
        "anger": 0.575,
        "contempt": 0,
        "disgust": 0.006,
        "fear": 0.008,
        "happiness": 0.394,
        "neutral": 0.013,
        "sadness": 0,
        "surprise": 0.004
      },
      "hair": {
        "bald": 0.0,
        "invisible": false,
        "hairColor": [
          { "color": "brown", "confidence": 1.0 },
          { "color": "blond", "confidence": 0.88 },
          { "color": "black", "confidence": 0.48 },
          { "color": "other", "confidence": 0.11 },
          { "color": "gray", "confidence": 0.07 },
          { "color": "red", "confidence": 0.03 }
        ]
      },
      "makeup": {
        "eyeMakeup": true,
        "lipMakeup": false
      },
      "occlusion": {
        "foreheadOccluded": false,
        "eyeOccluded": false,
        "mouthOccluded": false
      },
      "accessories": [
        { "type": "headWear", "confidence": 0.99 },
        { "type": "glasses", "confidence": 1.0 },
        { "type": "mask", "confidence": 0.87 }
      ],
      "blur": {
        "blurLevel": "Medium",
        "value": 0.51
      },
      "exposure": {
        "exposureLevel": "GoodExposure",
        "value": 0.55
      },
      "noise": {
        "noiseLevel": "Low",
        "value": 0.12
      }
    }
  }
]
[/code]
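
One practical use of the emotion block above: its eight scores are confidence values, so the strongest one is a reasonable guess at the face’s expression. A sketch (again assuming the Newtonsoft.Json package) of picking it out:

[code lang="csharp"]
using System.Linq;
using Newtonsoft.Json.Linq; // assumes the Newtonsoft.Json NuGet package

static class EmotionHelper
{
    // Sketch: given the "emotion" object from one face's attributes,
    // return the name of the emotion with the highest confidence.
    static string DominantEmotion(JObject emotion)
    {
        return emotion.Properties()
            .OrderByDescending(p => (double)p.Value)
            .First()
            .Name; // "anger" (0.575) for the sample response above
    }
}
[/code]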

How Is The Detect Service Useful?

This service is only one of five services in the group of services called Face. This particular service simply detects faces. Now that you’ve seen what the response looks like, let’s talk about what this detection service could be used for. Below are some ways to use it.

  • Use it to create a face filter that alters the look of faces within a video or image
  • Use it on a social networking website to tag faces with names
  • Use it within security software to detect when someone is in sight of a camera

How Much Does It Cost?

Just as with Azure Functions, Microsoft provides a free allowance: 20 transactions per minute and 30,000 transactions per month. If you go past this allowance, you fall under the standard plan, which allows up to 10 transactions per second and uses the following pricing model.

  • 0–1,000,000 transactions – $1 per 1,000 transactions
  • 1,000,001–5,000,000 transactions – $0.80 per 1,000 transactions
  • 5,000,001–100,000,000 transactions – $0.60 per 1,000 transactions
  • Over 100,000,000 transactions – $0.40 per 1,000 transactions
  • Face Storage – $0.25 per 1,000 faces per month
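
To put that in perspective, a month with 2,500,000 transactions on the standard plan would cost $1,000 for the first 1,000,000 transactions ($1 × 1,000) plus $1,200 for the remaining 1,500,000 ($0.80 × 1,500), for a total of $2,200.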

Final Thoughts

Now that you know what Cognitive Services are and how they fit within AI, you are ready to use the detect service above and create your own applications with it. As mentioned above, AI is moving forward, and if you’re not using services like these, you risk falling behind by the time AI has consumed the market. It’s worth noting that if you are not a fan of C#, or not comfortable with it, Cognitive Services supports many other languages, since it is simply a web service.

If you would like more information on Microsoft Cognitive Services, feel free to sign up for a free trial at Pluralsight, take this C# facial recognition course on Udemy or read Learning Microsoft Cognitive Services by Leif Larsen. Please feel free to comment on this article below, and if you have a topic you would like to write about or would like me to write about, please contact us.

