
Computer Vision API

Extract rich information from images to categorize and process visual data, and use machine-assisted moderation of images to help curate your services.

Analyze an image

This feature returns information about visual content found in an image. Use tagging, descriptions, and domain-specific models to identify content and label it with confidence. Apply the adult/racy settings to enable automated restriction of adult content. Identify image types and color schemes in pictures.
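For a sense of the wire format, here is a minimal Python sketch of an analyze call, assuming a resource in the westus region; YOUR_SUBSCRIPTION_KEY and the image URL are placeholders, not values from this page:

import requests

# Placeholder values - substitute your own key and your resource's region.
SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"
ANALYZE_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"

def analyze_image(image_url):
    """Request tags, a caption, faces, adult/racy scores, image type, and colors."""
    params = {"visualFeatures": "Categories,Tags,Description,Faces,ImageType,Color,Adult"}
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    }
    response = requests.post(ANALYZE_URL, params=params, headers=headers,
                             json={"url": image_url})
    response.raise_for_status()
    return response.json()

analysis = analyze_image("https://example.com/train-station.jpg")  # hypothetical URL
print(analysis["description"]["captions"][0]["text"])

The JSON that comes back carries the same fields shown in the demo output below.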

See it in action

Description: { "tags": [ "train", "platform", "station", "building", "indoor", "subway", "track", "walking", "waiting", "pulling", "board", "people", "man", "luggage", "standing", "holding", "large", "woman", "yellow", "suitcase" ], "captions": [ { "text": "people waiting at a train station", "confidence": 0.833099365 } ] }
Tags: [ { "name": "train", "confidence": 0.9975446 }, { "name": "platform", "confidence": 0.995543063 }, { "name": "station", "confidence": 0.9798007 }, { "name": "indoor", "confidence": 0.927719653 }, { "name": "subway", "confidence": 0.838939846 }, { "name": "pulling", "confidence": 0.431715637 } ]
Image format: "Jpeg"
Image dimensions: 462 x 600
Clip art type: 0
Line drawing type: 0
Black and white: false
Adult content: false
Adult score: 0.0147124995
Racy: false
Racy score: 0.0162802152
Categories: [ { "name": "trans_trainstation", "score": 0.98828125 } ]
Faces: []
Dominant color background: "Black"
Dominant color foreground: "Black"
Accent color: #484C83

Want to build this?

Read text in images

Optical character recognition (OCR) detects text in an image and extracts the recognized words into a machine-readable character stream. Analyze images to detect embedded text, generate character streams, and enable searching. Take photos of text instead of copying it by hand to save time and effort.
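As a rough illustration, the printed-text OCR endpoint can be called like this; a sketch assuming a westus endpoint, with YOUR_SUBSCRIPTION_KEY and the image URL as placeholders:

import requests

SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder
OCR_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr"

def read_printed_text(image_url):
    """Detect printed text and return it region by region, line by line."""
    params = {"language": "unk", "detectOrientation": "true"}  # "unk" = auto-detect
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    response = requests.post(OCR_URL, params=params, headers=headers,
                             json={"url": image_url})
    response.raise_for_status()
    result = response.json()
    # Flatten the regions -> lines -> words hierarchy into plain lines of text.
    return [" ".join(word["text"] for word in line["words"])
            for region in result["regions"] for line in region["lines"]]

for line in read_printed_text("https://example.com/poster.jpg"):  # hypothetical URL
    print(line)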

See it in action

Preview:

IF WE DID

ALL

THE THINGS

WE ARE

CAPABLÉ•

OF DOING,

WE WOULD

LITERALLY

ASTOUND

QURSELV*S.

JSON:

{
  "textAngle": 0.0,
  "orientation": "NotDetected",
  "language": "en",
  "regions": [
    {
      "boundingBox": "316,47,284,340",
      "lines": [
        {
          "boundingBox": "319,47,182,24",
          "words": [
            {
              "boundingBox": "319,47,42,24",
              "text": "IF"
            },
            {
              "boundingBox": "375,47,44,24",
              "text": "WE"
            },
            {
              "boundingBox": "435,47,66,23",
              "text": "DID"
            }
          ]
        },
        {
          "boundingBox": "316,74,204,69",
          "words": [
            {
              "boundingBox": "316,74,204,69",
              "text": "ALL"
            }
          ]
        },
        {
          "boundingBox": "318,147,207,24",
          "words": [
            {
              "boundingBox": "318,147,63,24",
              "text": "THE"
            },
            {
              "boundingBox": "397,147,128,24",
              "text": "THINGS"
            }
          ]
        },
        {
          "boundingBox": "316,176,125,23",
          "words": [
            {
              "boundingBox": "316,176,44,23",
              "text": "WE"
            },
            {
              "boundingBox": "375,176,66,23",
              "text": "ARE"
            }
          ]
        },
        {
          "boundingBox": "319,194,281,44",
          "words": [
            {
              "boundingBox": "319,194,281,44",
              "text": "CAPABLÉ•"
            }
          ]
        },
        {
          "boundingBox": "318,243,181,29",
          "words": [
            {
              "boundingBox": "318,243,43,23",
              "text": "OF"
            },
            {
              "boundingBox": "376,243,123,29",
              "text": "DOING,"
            }
          ]
        },
        {
          "boundingBox": "316,271,170,24",
          "words": [
            {
              "boundingBox": "316,272,44,23",
              "text": "WE"
            },
            {
              "boundingBox": "375,271,111,24",
              "text": "WOULD"
            }
          ]
        },
        {
          "boundingBox": "317,300,200,24",
          "words": [
            {
              "boundingBox": "317,300,200,24",
              "text": "LITERALLY"
            }
          ]
        },
        {
          "boundingBox": "316,328,157,24",
          "words": [
            {
              "boundingBox": "316,328,157,24",
              "text": "ASTOUND"
            }
          ]
        },
        {
          "boundingBox": "318,357,214,30",
          "words": [
            {
              "boundingBox": "318,357,214,30",
              "text": "QURSELV*S."
            }
          ]
        }
      ]
    }
  ]
}

By uploading data for this demo, you agree that Microsoft may store it and use it to improve Microsoft services, including this API. To help protect your privacy, we take steps to de-identify your data and keep it secure. We won’t publish your data or let other people use it.

Want to build this?

Preview: Read handwritten text from images

This technology (handwritten OCR) allows you to detect and extract handwritten text from notes, letters, essays, whiteboards, forms, etc. It works with different surfaces and backgrounds, such as white paper, yellow sticky notes, and whiteboards.

Handwritten text recognition saves time and effort by letting you photograph text rather than transcribe it. It makes it possible to digitize notes, which in turn enables quick and easy search. It also reduces paper clutter.

Note: This technology is currently in preview and is only available for English text.

To try this optical character recognition demo, upload a locally stored image or provide an image URL. We don’t store the images you supply for this demo unless you give us permission.
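Because handwriting recognition runs asynchronously, the call pattern differs from plain OCR: you submit the image, then poll the URL returned in the Operation-Location header until the status reports Succeeded. A minimal Python sketch, with the westus endpoint, YOUR_SUBSCRIPTION_KEY, and the image URL as placeholder assumptions:

import time
import requests

SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder
RECOGNIZE_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/recognizeText"

def read_handwritten_text(image_url):
    """Submit an image, then poll until the async recognition finishes."""
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    submit = requests.post(RECOGNIZE_URL, params={"handwriting": "true"},
                           headers=headers, json={"url": image_url})
    submit.raise_for_status()
    operation_url = submit.headers["Operation-Location"]  # where to poll for the result

    while True:
        poll = requests.get(operation_url, headers=headers)
        poll.raise_for_status()
        result = poll.json()
        if result["status"] in ("Succeeded", "Failed"):
            return result
        time.sleep(1)  # recognition usually takes a few seconds

result = read_handwritten_text("https://example.com/note.jpg")  # hypothetical URL
for line in result["recognitionResult"]["lines"]:
    print(line["text"])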

See it in action

Preview:

Our greatest glory is not

in never failing ,

but in rising every

time we fall

JSON:

{
  "status": "Succeeded",
  "succeeded": true,
  "failed": false,
  "finished": true,
  "recognitionResult": {
    "lines": [
      {
        "boundingBox": [
          67,
          204,
          668,
          210,
          667,
          272,
          66,
          267
        ],
        "text": "Our greatest glory is not",
        "words": [
          {
            "boundingBox": [
              47,
              206,
              161,
              205,
              157,
              274,
              43,
              275
            ],
            "text": "Our"
          },
          {
            "boundingBox": [
              179,
              205,
              350,
              204,
              346,
              273,
              175,
              274
            ],
            "text": "greatest"
          },
          {
            "boundingBox": [
              381,
              204,
              509,
              203,
              505,
              272,
              377,
              273
            ],
            "text": "glory"
          },
          {
            "boundingBox": [
              526,
              203,
              588,
              203,
              584,
              272,
              522,
              272
            ],
            "text": "is"
          },
          {
            "boundingBox": [
              588,
              203,
              680,
              202,
              676,
              271,
              584,
              272
            ],
            "text": "not"
          }
        ]
      },
      {
        "boundingBox": [
          540,
          289,
          900,
          302,
          897,
          374,
          538,
          360
        ],
        "text": "in never failing ,",
        "words": [
          {
            "boundingBox": [
              507,
              300,
              553,
              300,
              564,
              376,
              518,
              376
            ],
            "text": "in"
          },
          {
            "boundingBox": [
              579,
              300,
              693,
              300,
              704,
              376,
              590,
              376
            ],
            "text": "never"
          },
          {
            "boundingBox": [
              712,
              300,
              872,
              300,
              883,
              376,
              723,
              376
            ],
            "text": "failing"
          },
          {
            "boundingBox": [
              864,
              300,
              902,
              300,
              913,
              376,
              875,
              376
            ],
            "text": ","
          }
        ]
      },
      {
        "boundingBox": [
          139,
          416,
          572,
          433,
          570,
          491,
          136,
          474
        ],
        "text": "but in rising every",
        "words": [
          {
            "boundingBox": [
              125,
              417,
              213,
              418,
              200,
              491,
              112,
              490
            ],
            "text": "but"
          },
          {
            "boundingBox": [
              217,
              418,
              273,
              418,
              260,
              491,
              204,
              491
            ],
            "text": "in"
          },
          {
            "boundingBox": [
              297,
              418,
              433,
              419,
              420,
              492,
              284,
              491
            ],
            "text": "rising"
          },
          {
            "boundingBox": [
              461,
              419,
              589,
              420,
              576,
              492,
              448,
              492
            ],
            "text": "every"
          }
        ]
      },
      {
        "boundingBox": [
          622,
          413,
          967,
          410,
          968,
          470,
          623,
          472
        ],
        "text": "time we fall",
        "words": [
          {
            "boundingBox": [
              612,
              407,
              718,
              409,
              709,
              470,
              603,
              468
            ],
            "text": "time"
          },
          {
            "boundingBox": [
              753,
              409,
              825,
              410,
              815,
              471,
              743,
              470
            ],
            "text": "we"
          },
          {
            "boundingBox": [
              863,
              410,
              973,
              412,
              964,
              472,
              853,
              471
            ],
            "text": "fall"
          }
        ]
      }
    ]
  }
}

Want to build this?

Recognize celebrities and landmarks

The celebrity and landmark models are examples of domain-specific models. Our celebrity recognition model recognizes 200,000 celebrities from business, politics, sports, and entertainment. Our landmark recognition model recognizes 9,000 natural and man-made landmarks from around the world. Domain-specific models are a continuously evolving feature within the Computer Vision API.
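Under the hood this is the same analyze call with a details parameter; the matches appear under each category's detail field, as in the response below. A sketch with a placeholder westus endpoint and YOUR_SUBSCRIPTION_KEY:

import requests

SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder
ANALYZE_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"

def recognize_celebrities(image_url):
    """Analyze an image with the celebrity domain-specific model enabled."""
    params = {"visualFeatures": "Categories,Description", "details": "Celebrities"}
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    response = requests.post(ANALYZE_URL, params=params, headers=headers,
                             json={"url": image_url})
    response.raise_for_status()
    celebs = []
    for category in response.json().get("categories", []):
        detail = category.get("detail") or {}  # present only when a model matched
        celebs.extend(detail.get("celebrities") or [])
    return celebs

for celeb in recognize_celebrities("https://example.com/keynote.jpg"):  # hypothetical URL
    print(celeb["name"], celeb["confidence"])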

See it in action

{
  "categories": [
    {
      "name": "people_",
      "score": 0.86328125,
      "detail": {
        "celebrities": [
          {
            "name": "Satya Nadella",
            "faceRectangle": {
              "left": 240,
              "top": 294,
              "width": 135,
              "height": 135
            },
            "confidence": 0.99999558925628662
          }
        ],
        "landmarks": null
      }
    }
  ],
  "adult": null,
  "tags": [
    {
      "name": "person",
      "confidence": 0.99956613779067993
    },
    {
      "name": "suit",
      "confidence": 0.98934584856033325
    },
    {
      "name": "man",
      "confidence": 0.98844343423843384
    },
    {
      "name": "outdoor",
      "confidence": 0.860062301158905
    }
  ],
  "description": {
    "tags": [
      "person",
      "suit",
      "man",
      "necktie",
      "outdoor",
      "building",
      "clothing",
      "standing",
      "wearing",
      "business",
      "looking",
      "holding",
      "black",
      "front",
      "hand",
      "dressed",
      "phone",
      "field"
    ],
    "captions": [
      {
        "text": "Satya Nadella wearing a suit and tie",
        "confidence": 0.9903275009959599
      }
    ]
  },
  "requestId": "52b40152-d3c6-464c-986b-b58597d1b697",
  "metadata": {
    "width": 600,
    "height": 900,
    "format": "Jpeg"
  },
  "faces": [
    {
      "age": 48,
      "gender": "Male",
      "faceRectangle": {
        "left": 240,
        "top": 294,
        "width": 135,
        "height": 135
      }
    }
  ],
  "color": {
    "dominantColorForeground": "Black",
    "dominantColorBackground": "Black",
    "dominantColors": [
      "Black",
      "Grey"
    ],
    "accentColor": "7B5E50",
    "isBWImg": false
  },
  "imageType": {
    "clipArtType": 0,
    "lineDrawingType": 0
  }
}

Want to build this?

Analyze video in near real-time

Use any of the Computer Vision APIs with your video files by extracting frames of the video on your device and then sending those frames to the API calls of your choice. Get results from your videos faster.

Use our sample on GitHub to get started and build your own app.
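As a rough illustration of the frame-grab approach, here is a Python sketch using OpenCV to sample about one frame per second from a local file and send each frame to the analyze endpoint as binary data. The endpoint region and YOUR_SUBSCRIPTION_KEY are placeholder assumptions; the GitHub sample above is the fuller implementation, with rate limiting and overlapped calls.

import cv2  # pip install opencv-python
import requests

SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder
ANALYZE_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"

def analyze_video(path):
    """Sample roughly one frame per second from a video and caption each frame."""
    capture = cv2.VideoCapture(path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % int(fps) == 0:  # roughly one frame per second
            ok, jpeg = cv2.imencode(".jpg", frame)
            response = requests.post(
                ANALYZE_URL,
                params={"visualFeatures": "Description"},
                headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                         "Content-Type": "application/octet-stream"},
                data=jpeg.tobytes())
            response.raise_for_status()
            captions = response.json()["description"]["captions"]
            print(frame_index, captions[0]["text"] if captions else "(no caption)")
        frame_index += 1
    capture.release()

analyze_video("sample.mp4")  # hypothetical local file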

Learn more

See it in action

Want to build this?

Generate a thumbnail

Generate a high-quality, storage-efficient thumbnail from any input image. Use thumbnail generation to modify images to best suit your needs for size, shape, and style. Apply smart cropping to generate thumbnails that differ from the aspect ratio of your original image, yet preserve the region of interest.
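Unlike the other calls on this page, the generateThumbnail endpoint returns the cropped image itself as binary data rather than JSON. A minimal sketch, with the westus endpoint, YOUR_SUBSCRIPTION_KEY, and the URLs as placeholders:

import requests

SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder
THUMBNAIL_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/generateThumbnail"

def generate_thumbnail(image_url, width, height, out_path):
    """Smart-crop an image to the given size and save the resulting JPEG."""
    params = {"width": width, "height": height, "smartCropping": "true"}
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    response = requests.post(THUMBNAIL_URL, params=params, headers=headers,
                             json={"url": image_url})
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # the response body is the thumbnail image itself

# e.g. produce a square thumbnail from a portrait photo (hypothetical URL):
generate_thumbnail("https://example.com/photo.jpg", 100, 100, "thumb.jpg")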

See it in action


Want to build this?

Explore the Cognitive Services APIs

Computer Vision API

Distill actionable information from images

Content Moderator

Automated image, text, and video moderation

Emotion API PREVIEW

Personalize user experiences with emotion recognition

Face API

Detect, identify, analyze, organize, and tag faces in photos

Video Indexer PREVIEW

Unlock video insights

Custom Vision Service PREVIEW

Easily customize your own state-of-the-art computer vision models for your unique use case

Text Analytics API

Easily evaluate sentiment and topics to understand what users want

Web Language Model API PREVIEW

Use the power of predictive language models trained on web-scale data

Language Understanding (LUIS)

Teach your apps to understand commands from your users

Bing Spell Check API

Detect and correct spelling mistakes in your app

Translator Text API

Easily conduct machine translation with a simple REST API call

Linguistic Analysis API PREVIEW

Simplify complex language concepts and parse text with the Linguistic Analysis API

Bing Speech API

Convert speech to text and back again to understand user intent

Speaker Recognition API PREVIEW

Use speech to identify and authenticate individual speakers

Custom Speech Service PREVIEW

Overcome speech recognition barriers like speaking style, background noise, and vocabulary

Translator Speech API

Easily conduct real-time speech translation with a simple REST API call

QnA Maker API

Distill information into conversational, easy-to-navigate answers

Custom Decision Service

A cloud-based, contextual decision-making API that sharpens with experience

Project Gesture

Gesture-based controls

Project Knowledge Exploration

Previously called Knowledge Exploration Service API

Project Event Tracking

Events associated with Wikipedia entries

Project Academic Knowledge

Previously called Academic Knowledge

Project Local Insights

Location insights

Project Entity Linking

Previously called Entity Linking Intelligence Service API

Ready to supercharge your app?