Microsoft Cognitive Services updates - Bing Entity Search API and Project Prague

Posted on 12 July, 2017

This blog post was authored by the Microsoft Cognitive Services Team.

Microsoft Cognitive Services enables developers to augment the next generation of applications with the ability to see, hear, speak, understand, and interpret needs using natural methods of communication.

Today, we are excited to announce several service updates:

  • We are launching the Bing Entity Search API, a new service available in Free Preview that makes it easy for developers to build more engaging, contextual experiences powered by the Bing knowledge graph. Tap into the power of the web to search for the most relevant entities, such as movies, books, famous people, and US local businesses, and easily surface primary details and information sources about them.
  • Microsoft Cognitive Services Lab’s Project Prague is now available. Project Prague lets you control and interact with devices using gestures to have a more intuitive and natural experience.
  • Presentation Translator, a Microsoft Garage project, is now available for download. It gives presenters the ability to add subtitles to their presentations in real time, in the same language for accessibility scenarios or in another language for multi-language situations. With customized speech recognition, presenters have the option to customize the speech recognition engine (English or Chinese) using the vocabulary within the slides and slide notes, adapting it to jargon, technical terms, and product or place names. Presentation Translator is powered by the Microsoft Translator live feature, built on the Translator APIs of Microsoft Cognitive Services.

Let’s take a closer look at what these new APIs and services can do for you.

Bring rich knowledge of people, places, things and local businesses to your apps with Bing Entity Search API

As announced today, the Bing Entity Search API is a new addition to our existing set of Microsoft Cognitive Services Search APIs, which includes Bing Web Search, Image Search, Video Search, News Search, Bing Autosuggest, and Bing Custom Search. This API lets you search the Bing knowledge graph and retrieve the most relevant entities, along with primary details and information sources about them. It also supports searching for local businesses in the US. It helps developers easily build apps that harness the power of the web and delight users with more engaging, contextual experiences.

Get started

  • To get started today, grab a free preview subscription key on the Try Cognitive Services webpage.
  • After getting the key, you can start sending entity search queries to Bing. It’s as simple as sending the following request:
GET https://api.cognitive.microsoft.com/bing/v7.0/entities?q=mount+rainier&mkt=en-US HTTP/1.1  
Ocp-Apim-Subscription-Key: 123456789ABCDE  
X-MSEdge-ClientIP: 999.999.999.999  
X-Search-Location: lat:47.60357;long:-122.3295;re:100  

The request must specify the q query parameter, which contains the user's search term, and the Ocp-Apim-Subscription-Key header. For location-aware queries like "restaurants near me", it’s important to also include the X-Search-Location and X-MSEdge-ClientIP headers.
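As an illustration, the same request can be issued from Python using only the standard library. The endpoint URL below mirrors the request above; the subscription key is a placeholder, and the coordinates are the sample values from the X-Search-Location header:

```python
import json
import urllib.parse
import urllib.request

# Free Preview endpoint, as in the sample request above; replace the
# placeholder key with your own subscription key.
ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/entities"
SUBSCRIPTION_KEY = "123456789ABCDE"

def search_entities(query, lat=None, lon=None, radius=100):
    """Send an entity search query; q and the key header are required."""
    params = {"q": query, "mkt": "en-US"}
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    if lat is not None and lon is not None:
        # The location header improves results for location-aware
        # queries like "restaurants near me".
        headers["X-Search-Location"] = f"lat:{lat};long:{lon};re:{radius}"
    url = ENDPOINT + "?" + urllib.parse.urlencode(params)
    request = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example call:
# search_entities("mount rainier", lat=47.60357, lon=-122.3295)
```

This is only a sketch of the wire format; in production you would also set the client-IP header and handle HTTP errors and rate limiting.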

For more information about getting started, see the documentation page Making your first entities request.

The response

The following shows the response to the Mount Rainier query.

    {
        "_type" : "SearchResponse",
        "queryContext" : {
            "originalQuery" : "mount rainier"
        },
        "entities" : {
            "queryScenario" : "DominantEntity",
            "value" : [{
                "contractualRules" : [{
                    "_type" : "ContractualRules\/LicenseAttribution",
                    "targetPropertyName" : "description",
                    "mustBeCloseToContent" : true,
                    "license" : {
                        "name" : "CC-BY-SA",
                        "url" : "http:\/\/creativecommons.org\/licenses\/by-sa\/3.0\/"
                    },
                    "licenseNotice" : "Text under CC-BY-SA license"
                },
                {
                    "_type" : "ContractualRules\/LinkAttribution",
                    "targetPropertyName" : "description",
                    "mustBeCloseToContent" : true,
                    "text" : "en.wikipedia.org",
                    "url" : "http:\/\/en.wikipedia.org\/wiki\/Mount_Rainier"
                },
                {
                    "_type" : "ContractualRules\/MediaAttribution",
                    "targetPropertyName" : "image",
                    "mustBeCloseToContent" : true,
                    "url" : "http:\/\/en.wikipedia.org\/wiki\/Mount_Rainier"
                }],
                "webSearchUrl" : "https:\/\/www.bing.com\/search?q=Mount%20Rainier...",
                "name" : "Mount Rainier",
                "image" : {
                    "name" : "Mount Rainier",
                    "thumbnailUrl" : "https:\/\/www.bing.com\/th?id=A21890c0e1f...",
                    "provider" : [{
                        "_type" : "Organization",
                        "url" : "http:\/\/en.wikipedia.org\/wiki\/Mount_Rainier"
                    }],
                    "hostPageUrl" : "http:\/\/upload.wikimedia.org\/wikipedia...",
                    "width" : 110,
                    "height" : 110
                },
                "description" : "Mount Rainier, Mount Tacoma, or Mount Tahoma is the highest...",
                "entityPresentationInfo" : {
                    "entityScenario" : "DominantEntity",
                    "entityTypeHints" : ["Attraction"],
                    "entityTypeDisplayHint" : "Mountain"
                },
                "bingId" : "9ae3e6ca-81ea-6fa1-ffa0-42e1d78906"
            }]
        }
    }

For more information about consuming the response, please refer to the documentation page Searching the Web for entities and places.
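As a sketch of consuming such a response, the snippet below walks the structure shown above to pull out the entity's name, description, and the attributions that must be displayed alongside licensed content. The field names follow the sample response; `summarize_entity` and the inline sample are illustrative, not part of the API:

```python
def summarize_entity(response):
    """Extract the dominant entity's name, description, and attributions
    from a Bing Entity Search response (structure as in the sample above)."""
    entities = response.get("entities", {}).get("value", [])
    if not entities:
        return None
    entity = entities[0]
    # contractualRules carry the attributions (links, license notices)
    # that must be displayed close to the content they apply to.
    attributions = [
        rule.get("url") or rule.get("licenseNotice")
        for rule in entity.get("contractualRules", [])
    ]
    return {
        "name": entity["name"],
        "description": entity.get("description", ""),
        "attributions": attributions,
    }

# Abbreviated sample response, shaped like the JSON shown above.
sample = {
    "entities": {"value": [{
        "name": "Mount Rainier",
        "description": "Mount Rainier ... is the highest...",
        "contractualRules": [
            {"_type": "ContractualRules/LinkAttribution",
             "url": "http://en.wikipedia.org/wiki/Mount_Rainier"},
        ],
    }]}
}
print(summarize_entity(sample)["name"])  # Mount Rainier
```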

Try it now

Don’t hesitate to try it yourself in the Entities Search API Testing Console.

Create more natural user experiences with gestures - Project Prague

Project Prague is a cutting-edge, easy-to-use SDK that helps developers and UX designers incorporate gesture-based controls into their apps. It enables you to quickly define and implement customized hand gestures, creating a more natural user experience.

The SDK enables you to define your desired hand poses using simple constraints expressed in plain language. Once a gesture is defined and registered in your code, you will receive a notification whenever your user performs the gesture, and you can assign an action in response.

Using Project Prague, you can enable your users to intuitively control videos, bookmark webpages, play music, send emojis, or summon a digital assistant.

Let’s say that I want to create a new gesture called "RotateRight" to control my app. First, I need to ensure that I meet the hardware and software requirements; please refer to the requirements section for more information. Intuitively, when performing the "RotateRight" gesture, a user would expect some object in the foreground application to be rotated right by 90°. We have used this gesture to trigger the rotation of an image in a PowerPoint slideshow in the video above.

The following code demonstrates one possible way to define the "RotateRight" gesture:

var rotateSet = new HandPose("RotateSet", new FingerPose(new[] { Finger.Thumb, Finger.Index }, FingerFlexion.Open, PoseDirection.Forward),
                                          new FingertipPlacementRelation(Finger.Index, RelativePlacement.Above, Finger.Thumb),
                                          new FingertipDistanceRelation(Finger.Index, RelativeDistance.NotTouching, Finger.Thumb));

var rotateGo = new HandPose("RotateGo", new FingerPose(new[] { Finger.Thumb, Finger.Index }, FingerFlexion.Open, PoseDirection.Forward),
                                        new FingertipPlacementRelation(Finger.Index, RelativePlacement.Right, Finger.Thumb),
                                        new FingertipDistanceRelation(Finger.Index, RelativeDistance.NotTouching, Finger.Thumb));

var rotateRight = new Gesture("RotateRight", rotateSet, rotateGo);

The "RotateRight" gesture is a sequence of two hand poses, "RotateSet" and "RotateGo". Both poses require the thumb and index finger to be open, pointing forward, and not touching each other. The difference between the poses is that "RotateSet" specifies that the index finger should be above the thumb, while "RotateGo" specifies that it should be to the right of the thumb. The transition from "RotateSet" to "RotateGo", therefore, corresponds to a rotation of the hand to the right.

Note that the middle, ring, and pinky fingers do not participate in the definition of the "RotateRight" gesture. This makes sense because we do not wish to constrain the state of these fingers in any way. In other words, these fingers are free to assume any pose during the execution of the "RotateRight" gesture.

Having defined the gesture, I need to hook up the event indicating gesture detection to the appropriate handler in my target application:

rotateRight.Triggered += (sender, args) => { /* This is called when the user performs the "RotateRight" gesture */ };

The detection itself is performed in the Microsoft.Gestures.Service.exe process. This is the process associated with the "Microsoft Gestures Service" window discussed above. This process runs in the background and acts as a service for gesture detection. I will need to create a GesturesServiceEndpoint instance in order to communicate with this service. The following code snippet instantiates a GesturesServiceEndpoint and registers the "RotateRight" gesture for detection:

var gesturesService = GesturesServiceEndpointFactory.Create();
await gesturesService.ConnectAsync();
await gesturesService.RegisterGesture(rotateRight);

When I wish to stop the detection of the "RotateRight" gesture, I can unregister it as follows:
await gesturesService.UnregisterGesture(rotateRight);

The handler will no longer be triggered when the user performs the "RotateRight" gesture. When I am finished working with gestures, I should remember to dispose of the GesturesServiceEndpoint object:

gesturesService.Dispose();

Please note that in order for the above code to compile, you will need to reference the following assemblies, located in the directory indicated by the MicrosoftGesturesInstallDir environment variable:

  • Microsoft.Gestures.dll
  • Microsoft.Gestures.Endpoint.dll
  • Microsoft.Gestures.Protocol.dll

For more information, please refer to the Getting Started guide in the documentation.

Thank you again and happy coding!