Language Understanding Intelligent Service

Understand language contextually, so your app communicates with people in the way they speak

Build custom language models

One of the key problems in human-computer interaction is the ability of computers to understand what a person wants and to find the pieces of information that are relevant to their intention. Our Language Understanding Intelligent Service, LUIS, provides simple tools that enable you to build your own language models (intents/entities), which allow any application or bot to understand your commands and act accordingly (see the sketch below). Now, try our demo to visualize some of the usage scenarios relying on LUIS.
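To make the intents/entities idea concrete, here is a minimal sketch of how an utterance from the smart-light demo scenario could be labeled. The intent and entity names (`TurnOn`, `Room`) are hypothetical labels chosen for this illustration, not part of any shipped model.

```python
# Hypothetical labeling for the smart-light scenario: one utterance maps to a
# single intent plus the entities (parameter values) it contains.
utterance = "Turn on the lights in the living room"

labeled_example = {
    "text": utterance,
    "intent": "TurnOn",  # hypothetical intent name
    "entities": [
        {"entity": "Room", "value": "living room"},  # hypothetical entity type
    ],
}

print(labeled_example["intent"])       # -> TurnOn
print(labeled_example["entities"][0])  # -> {'entity': 'Room', 'value': 'living room'}
```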

See it in action

Smart light application in action

LUIS application response

By uploading data for this demo, you agree that Microsoft may store it and use it to improve Microsoft services, including this API. To help protect your privacy, we take steps to de-identify your data and keep it secure. We won’t publish your data or let other people use it.

Want to build this?

Why LUIS?

It is fast and easy

LUIS is designed to enable you to quickly deploy an HTTP endpoint that will take the sentences you send it and interpret them in terms of the intention they convey and the key entities that are present.
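As an illustration, a minimal sketch of querying a published LUIS endpoint over HTTP might look like the following. The region, app ID, and subscription key are placeholders you receive when you publish your own application, and the exact URL and response fields may differ depending on the API version you use.

```python
import json
import urllib.parse
import urllib.request

# Placeholder values -- replace with your own published app's region, ID, and key.
REGION = "westus"
APP_ID = "YOUR_APP_ID"
SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"


def query_luis(utterance: str) -> dict:
    """Send one utterance to the LUIS endpoint and return the parsed JSON reply."""
    params = urllib.parse.urlencode({
        "subscription-key": SUBSCRIPTION_KEY,
        "q": utterance,
    })
    url = f"https://{REGION}.api.cognitive.microsoft.com/luis/v2.0/apps/{APP_ID}?{params}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)


if __name__ == "__main__":
    result = query_luis("Turn on the lights in the living room")
    # A typical reply includes the top-scoring intent and any detected entities.
    print(result.get("topScoringIntent"))
    print(result.get("entities"))
```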

It learns and adapts

After your endpoint has processed a few dozen interactions, LUIS begins active learning. LUIS examines all the utterances that have been sent to it, and calls to your attention the ones that it would like you to label.

It offers pre-built applications

In addition to allowing you to build your own applications, LUIS helps you jump-start development by providing a selected set of ready-made language models that you can use directly in your application.

It is a powerful developer tool

The overall experience of LUIS focuses on boosting developers' productivity by providing a set of powerful tools, offered through a simple user experience and a comprehensive set of APIs.

Pivothead

"Using the Cognitive Services APIs, it took us three months to develop a test pair of glasses that can translate text and images into speech, identify emotions, and describe scenery. If we had been working full time, we could have done it in two weeks"

Benoit Chirouter: R&D Director | Pivothead

Explore the Cognitive Services APIs

Computer Vision API

Distill actionable information from images

Face API

Detect, identify, analyze, organize, and tag faces in photos

Content Moderator

Automated image, text, and video moderation

Emotion API PREVIEW

Personalize user experiences with emotion recognition

Video API PREVIEW

Intelligent video processing

Custom Vision Service PREVIEW

Easily customize your own state-of-the-art computer vision models for your unique use case

Video Indexer PREVIEW

Unlock video insights

Language Understanding Intelligent Service PREVIEW

Teach your apps to understand commands from your users

Text Analytics API PREVIEW

Easily evaluate sentiment and topics to understand what users want

Bing Spell Check API

Detect and correct spelling mistakes in your app

Translator Text API

Easily conduct machine translation with a simple REST API call

Web Language Model API PREVIEW

Use the power of predictive language models trained on web-scale data

Linguistic Analysis API PREVIEW

Simplify complex language concepts and parse text with the Linguistic Analysis API

Translator Speech API

Easily conduct real-time speech translation with a simple REST API call

Speaker Recognition API PREVIEW

Use speech to identify and authenticate individual speakers

Bing Speech API

Convert speech to text and back again to understand user intent

Custom Speech Service PREVIEW

Overcome speech recognition barriers like speaking style, background noise, and vocabulary

Recommendations API PREVIEW

Predict and recommend items your customers want

Academic Knowledge API PREVIEW

Tap into the wealth of academic content in the Microsoft Academic Graph

Knowledge Exploration Service PREVIEW

Enable interactive search experiences over structured data via natural language inputs

QnA Maker API PREVIEW

Distill information into conversational, easy-to-navigate answers

Entity Linking Intelligence Service API PREVIEW

Power your app's data links with named entity recognition and disambiguation

Custom Decision Service PREVIEW

A cloud-based, contextual decision-making API that sharpens with experience

Project Prague

Gesture-based controls

Project Cuzco

Events associated with Wikipedia entries

Project Nanjing

Isochrone calculations

Project Abu Dhabi

Distance matrix

Project Johannesburg

Route logistics

Project Wollongong

Location insights

Ready to supercharge your app?