Microsoft Azure Cognitive Services make your applications smarter by adding an element of artificial intelligence. When we say "intelligence," we're talking about things like extracting the emotion behind a piece of text, identifying people in photos, returning search results that become more relevant to a user over time, or processing text the way humans speak it.
You can do all of this without knowing anything about machine learning.
Microsoft Azure Cognitive Services consist of five categories: Vision, Speech, Search, Decision, and Language. It’s this last category, Language, that this guide will explore. While reading this guide, you’ll come to understand what the Language category of Cognitive Services is capable of, the different APIs or services that compose it, and you’ll see the unique features of each API, along with an example use of it.
Five individual services compose the Microsoft Azure Cognitive Services Language Services group.
Used alone or in combination, these Cognitive Services APIs let you add to your applications a deep understanding of language as humans speak and write it.
In the sections that follow, we’ll explore each of the Language APIs to find out more about what they can do and how they would be applied to a real-world application.
Have you ever wanted to hold a conversation with an application - a real conversation, where you enter free-form text and the app understands your intention? Then the Language Understanding (LUIS) Cognitive Service is for you.
LUIS takes advantage of several pre-built Natural Language Processing machine-learning models provided by Cognitive Services. These models allow LUIS, and thus your app, to predict the meaning of and identify relevant information from the text entered by the users of your application.
The best way to understand the features of LUIS is to understand its building blocks:
Your application is built upon endpoints, models, intents, utterances, and entities.
An intent is the action your user wants the application to perform.
An utterance is the phrase the user types to invoke an intent.
An entity is any important item found within the utterance. Generally, the entity will be useful to whatever action is defined by the intent.
Your application queries a LUIS endpoint to get information about conversational data.
That endpoint is the result of a published LUIS model.
The model is composed of three parts: intents, utterances, and entities.
The process of defining intents, mapping example utterances to those intents, and then identifying key phrases within those utterances is part of creating a LUIS model.
LUIS comes with several pre-built models or you can create your own.
As your models get used, LUIS provides you the tools to make them smarter, based on usage patterns.
Altogether, LUIS provides a powerful, customizable, and extensible way to create conversation-enabled applications.
The classic example of a conversational application is a chat bot.
Imagine an application that allows you to order food from any number of restaurants. You could add a chat bot to that app to enable ordering without the user having to go through and pick menu items manually.
Example intents could be LookupMenu or PlaceOrder.
An utterance for LookupMenu could be “Look up the menu for Jack’s Pizza Shop”. And here, an entity would be “Jack’s Pizza Shop”.
When your code sends the text "Look up the menu for Jack's Pizza Shop", LUIS will return information indicating that it thinks the intent of the user is LookupMenu and the entity is "Jack's Pizza Shop". And, because LUIS is built upon machine learning, it will eventually recognize that "Find the menu for Jack's Pizza" has the same intent.
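To make this concrete, here is a minimal sketch of how your code might pull the top intent and entities out of a LUIS-style prediction response. The JSON shape loosely follows the LUIS v3 prediction API, but the sample payload, the entity name RestaurantName, and the scores below are hand-written for illustration, not real API output.

```python
# Illustrative LUIS-style prediction response for the utterance above.
sample_response = {
    "query": "Look up the menu for Jack's Pizza Shop",
    "prediction": {
        "topIntent": "LookupMenu",
        "intents": {
            "LookupMenu": {"score": 0.92},
            "PlaceOrder": {"score": 0.05},
        },
        "entities": {"RestaurantName": ["Jack's Pizza Shop"]},
    },
}

def parse_luis_prediction(response):
    """Return (top intent, its score, entities) from a LUIS-style response."""
    prediction = response["prediction"]
    intent = prediction["topIntent"]
    score = prediction["intents"][intent]["score"]
    return intent, score, prediction.get("entities", {})

intent, score, entities = parse_luis_prediction(sample_response)
# intent is "LookupMenu"; entities["RestaurantName"] holds the shop name.
```

Your application would then branch on the intent (look up a menu, place an order) and use the entities as the action's parameters.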
The Cognitive Services QnA Maker takes semi-structured data, like an FAQ or product manual, and allows your users to ask questions of it.
When your app queries a QnA Maker endpoint with a user's question, it responds with the best answer from its dataset. The response from QnA Maker even includes text phrased as if a bot answered the question.
Creating a QnA knowledge base for your semi-structured data is done completely through the Azure portal. You can create a knowledge base without having any programming experience.
When creating the knowledge base, you point QnA Maker to the semi-structured data source. That source can be web pages, PDF, DOC, Excel, or TXT files, and each should be accessible at a public-facing URL.
If necessary, QnA Maker will crawl any links found within the data source to create the most complete dataset possible. The more structured the source, the better the results will be.
Once the knowledge base has been created, you can see what types of questions would lead to certain answers and apply any fixes as necessary. This includes manually adding new questions and answers.
Imagine your company just launched a brand-new product that includes an instruction manual for setup. You could create a website with an integrated QnA Maker that allows users to ask questions about the setup process of your product.
QnA Maker would first create a knowledge base out of the product’s instruction manual and then, after you tune the questions and answers through the Azure portal, your website will include a search box.
Instead of having to search through the instruction manual, users can now simply type, "How do I change the batteries?" and QnA Maker will issue the appropriate response.
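Under the hood, QnA Maker returns a list of candidate answers with confidence scores, and your app picks the best one. The sketch below assumes that response shape; the answers, scores, and the 50-point threshold are illustrative, not values prescribed by the service.

```python
# Illustrative QnA Maker-style candidate answers for the battery question.
sample_answers = [
    {"answer": "Open the battery cover on the back and insert two AA batteries.",
     "score": 87.5},
    {"answer": "See page 4 of the manual for setup instructions.",
     "score": 32.1},
]

def best_answer(answers, threshold=50.0):
    """Return the highest-scoring answer, or None if nothing is confident enough."""
    top = max(answers, key=lambda a: a["score"])
    return top["answer"] if top["score"] >= threshold else None
```

Falling back to None when no answer clears the threshold lets your bot reply "Sorry, I don't know that one" instead of guessing.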
Cognitive Services Text Analytics determines if a text phrase has a positive or negative sentiment. It also identifies key talking points within that same phrase, determines the language the phrase is using, and picks out named entities, such as people or organizations.
One call to the Text Analytics API gets your app a lot of information about the passed-in phrase.
Text Analytics will determine the sentiment, key talking points, language, and named entities within the phrase - all from a single call to the endpoint. You can query each function separately, if you don’t need all the information.
The Text Analytics API is built upon an existing Natural Language Processing machine-learning model; you cannot change or customize it.
The Sentiment portion of the Text Analytics API returns a score from 0 to 1. Scores close to 0 indicate a negative sentiment, while those close to 1 indicate positive. The analysis works best on short pieces of text, one or two sentences, rather than long paragraphs.
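In practice, your app will want to turn that raw 0-to-1 score into a label. A minimal sketch follows; the 0.4 and 0.6 cutoffs are our own illustrative choices, not thresholds defined by the API.

```python
def sentiment_label(score, neg_cutoff=0.4, pos_cutoff=0.6):
    """Map a Text Analytics-style sentiment score (0.0 to 1.0) to a label.

    The cutoffs are illustrative; tune them for your own application.
    """
    if score < neg_cutoff:
        return "negative"
    if score > pos_cutoff:
        return "positive"
    return "neutral"
```

For example, a score of 0.13 would be labeled "negative" and 0.91 would be labeled "positive", with the middle band treated as neutral.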
The Key Phrase Extraction API returns a list of important terms that it finds within a given text phrase. It works best with larger amounts of text passed in.
The Language Detection API evaluates the phrases passed in and returns the language it believes each phrase is written in, along with a confidence score. If a phrase contains more than one language, the Language Detection API returns only the language it is most confident about.
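Selecting that most-confident language from a detection-style response might look like the sketch below. The field names mirror the Text Analytics language-detection output, but the sample payload is hand-written for illustration.

```python
# Illustrative language-detection candidates for a mostly-English phrase.
detected_languages = [
    {"name": "English", "iso6391Name": "en", "confidenceScore": 0.75},
    {"name": "Spanish", "iso6391Name": "es", "confidenceScore": 0.25},
]

def most_confident_language(languages):
    """Return the ISO 639-1 code of the highest-confidence language."""
    return max(languages, key=lambda lang: lang["confidenceScore"])["iso6391Name"]
```

Here `most_confident_language(detected_languages)` picks "en", since English carries the higher confidence score.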
Finally, the Named Entity Recognition API parses the text for people, places, and things. It can identify entities as varied as people, organizations, dates, and quantities such as temperature.
Imagine a company using the Text Analytics API to monitor tweets that mention the company’s name.
By running each tweet through the various APIs, the company could easily build a repository of whether the general public's sentiment was positive or negative. They could also pull out key phrases from the tweets to get context around why people are mentioning them, and even build a word cloud from those terms. It would be an easy matter to determine the language people are tweeting in, again building context. And finally, with named entity recognition, the company could track which other entities they are mentioned alongside, such as another company or a holiday.
The Translator Text API of Cognitive Services allows your app to translate one language into another in near real-time.
This API is also capable of transliterating text, which means converting text from one script (alphabet) to another.
And, of course, it can detect which language a passed-in phrase is written in. This way, you do not have to use the Text Analytics API’s language detection feature if that’s all you need to do.
The Translator Text API works in over 60 different languages and translates text in near real time. It can also transliterate words from one alphabet to another for display purposes, identify the language of an incoming phrase, and look up words in a bilingual dictionary. This way, your app can present your users with alternate translations and usage examples for a word.
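A translate call to this API boils down to a small JSON request. The sketch below builds one in the shape the Translator Text v3 REST API expects, a JSON array of objects with a "Text" property plus target languages as query parameters; the phrase and language codes are illustrative, and no network call is made here.

```python
def build_translate_request(texts, to_languages):
    """Build query parameters and a JSON body for a Translator Text-style
    translate call. Returns (params, body); sending the request is up to
    your HTTP client of choice.
    """
    params = {"api-version": "3.0", "to": to_languages}
    body = [{"Text": text} for text in texts]
    return params, body

params, body = build_translate_request(
    ["How do I change the batteries?"], ["es", "de"]
)
# body is [{"Text": "How do I change the batteries?"}], to be POSTed as JSON.
```

One request can carry several texts and several target languages, so a single round trip can fan a phrase out to every language your app supports.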
Think back to the example used for the QnA Maker where the company created a bot that allowed anyone to enter natural language questions about setting up a product and the bot would return relevant sections from the instruction manual.
Adding in the Translator Text API would take that QnA bot to the next level. Now, the natural language text input could be translated into the native language of the instruction manual. Upon searching, the answer from the instruction manual could then be translated into the language the user queried in.
The Bing Spell Check API is more than a service that determines if a word is spelled incorrectly. It can also parse a given phrase to determine if the spelling is correct based on the context. In other words, it can flag words that are spelled correctly but are not used correctly for the situation, such as mixing up two and to.
The Bing Spell Check API allows your apps to perform simple spell checking. Pass in a phrase and it will check each word to determine if it is misspelled.
However, there is also a much more powerful feature, that performs spell checking within a context.
It can identify if a company or well-known person's name is misspelled. It can also catch a mix-up between "two" and "to": words that are spelled correctly but used in the wrong context, known as homophones. Finally, it can automatically capitalize proper nouns and recognize slang words that it won't flag for correction.
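The service reports its findings as a list of flagged tokens with suggested replacements, which your app applies to the original text. The sketch below assumes a flaggedTokens shape like the Bing Spell Check JSON response; the sample payload is hand-written for illustration.

```python
# Illustrative Spell Check-style result: "two" flagged at offset 7.
text = "I went two the store"
flagged_tokens = [
    {"offset": 7, "token": "two",
     "suggestions": [{"suggestion": "to", "score": 0.9}]},
]

def apply_corrections(text, flagged_tokens):
    """Replace each flagged token with its best suggestion.

    Tokens are applied right to left so earlier offsets stay valid
    after each replacement.
    """
    for item in sorted(flagged_tokens, key=lambda t: t["offset"], reverse=True):
        best = max(item["suggestions"], key=lambda s: s["score"])["suggestion"]
        start = item["offset"]
        end = start + len(item["token"])
        text = text[:start] + best + text[end:]
    return text
```

Running `apply_corrections(text, flagged_tokens)` turns "I went two the store" into "I went to the store", exactly the homophone fix described above.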
Any text entry application would benefit from both basic and contextually-aware spell checking.
The five APIs that compose the Language Services group of Cognitive Services are powerful. Based on machine learning, these services provide a strong foundation on which you can build any application that needs natural language processing. And you can do it without having to know machine learning yourself.
The APIs include: