{"id":3954,"date":"2017-04-19T00:00:00","date_gmt":"2017-04-19T00:00:00","guid":{"rendered":"https:\/\/azure.microsoft.com\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator"},"modified":"2025-06-18T07:19:40","modified_gmt":"2025-06-18T14:19:40","slug":"microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator","status":"publish","type":"post","link":"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/","title":{"rendered":"Microsoft Cognitive Services \u2013 General availability for Face API, Computer Vision API and Content Moderator"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\"><em>This post was authored by the Cognitive Services Team\u00e2\u20ac\u2039.<\/em><\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Microsoft Cognitive Services enables developers to create the next generation of applications that can see, hear, speak, understand, and interpret needs using natural methods of communication. We have made adding intelligent features to your platforms easier.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Today, at the first ever <a href=\"https:\/\/www.microsoft.com\/dataamp\">Microsoft Data Amp<\/a> online event, <strong>we\u2019re excited to announce the general availability of Face API, Computer Vision API and Content Moderator API from Microsoft Cognitive Services<\/strong>.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><strong>Face API<\/strong> detects human faces and compares similar ones, organizes people into groups according to visual similarity, and identifies previously tagged people and their emotions in images.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>Computer Vision API<\/strong> gives you the tools to understand the contents of any image. 
It creates tags that identify objects, living beings such as celebrities, or actions in an image, and crafts coherent sentences to describe it. You can now detect landmarks and handwriting in images. Handwriting detection remains in preview.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>Content Moderator<\/strong> provides machine-assisted moderation of text and images, augmented with human review tools. Video moderation is available in preview as part of Azure Media Services.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Let\u2019s take a closer look at what these APIs can do for you.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><em>Anna presents the latest updates to Cognitive Services.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"bring-vision-to-your-app\">Bring vision to your app<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Previously, users of Face API could obtain attributes such as <em>age, gender, facial points,<\/em> and <em>headpose<\/em>. Now, it\u2019s also possible to obtain <strong>emotions<\/strong> in the same Face API call. This addresses user scenarios in which both age and emotion were requested simultaneously. <a href=\"https:\/\/www.microsoft.com\/cognitive-services\/en-us\/face-api\/documentation\/overview\">Learn more about Face API<\/a> in our guides.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"recognizing-landmarks\">Recognizing landmarks<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">We\u2019ve added more richness to Computer Vision API by integrating <strong>landmark recognition<\/strong>. Landmark models, as well as Celebrity Recognition, are examples of <a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/cognitive-services\/computer-vision\/home#Domain-Specific\">Domain Specific Models<\/a>. Our landmark recognition model recognizes 9,000 natural and man-made landmarks from around the world. 
Domain Specific Models is a continuously evolving feature within Computer Vision API.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Let\u2019s say I want my app to recognize this picture I took while traveling:<\/p>\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2017\/04\/85ba8d75-4f75-4882-b446-ec64cf5747ab.webp\" alt=\"a castle on top of Colosseum\" class=\"wp-image-10568 webp-format\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2017\/04\/85ba8d75-4f75-4882-b446-ec64cf5747ab.webp\"><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\"><em>You might have an idea of where this was taken, but how could a machine easily know?<\/em><\/p>\n\n\n\n<p class=\"wp-block-paragraph\">In C#, we can leverage these capabilities with a simple REST API call like the following. <em>By the way, other languages are at the bottom of this post.<\/em><\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; auto-links: false; gutter: false; title: ; quick-code: false; notranslate\" title=\"\">\nusing System;\nusing System.IO;\nusing System.Net.Http;\nusing System.Net.Http.Headers;\n\nnamespace CSHttpClientSample\n{\n    static class Program\n    {\n        static void Main()\n        {\n            Console.Write(\"Enter image file path: \");\n            string imageFilePath = Console.ReadLine();\n\n            MakeAnalysisRequest(imageFilePath);\n\n            Console.WriteLine(\"\\n\\nHit ENTER to exit...\\n\");\n            Console.ReadLine();\n        }\n\n        static byte[] GetImageAsByteArray(string imageFilePath)\n        {\n            FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);\n            BinaryReader binaryReader = new BinaryReader(fileStream);\n            return binaryReader.ReadBytes((int)fileStream.Length);\n        }\n\n        static async void 
MakeAnalysisRequest(string imageFilePath)\n        {\n            var client = new HttpClient();\n\n            \/\/ Request headers. Replace the second parameter with a valid subscription key.\n            client.DefaultRequestHeaders.Add(\"Ocp-Apim-Subscription-Key\", \"putyourkeyhere\");\n\n            \/\/ Request parameters. You can change \"landmarks\" to \"celebrities\" on requestParameters and uri to use the Celebrities model.\n            string requestParameters = \"model=landmarks\";\n            string uri = \"https:\/\/westus.api.cognitive.microsoft.com\/vision\/v1.0\/models\/landmarks\/analyze?\" + requestParameters;\n            Console.WriteLine(uri);\n\n            HttpResponseMessage response;\n\n            \/\/ Request body. Try this sample with a locally stored JPEG image.\n            byte[] byteData = GetImageAsByteArray(imageFilePath);\n\n            using (var content = new ByteArrayContent(byteData))\n            {\n                \/\/ This example uses content type \"application\/octet-stream\".\n                \/\/ The other content types you can use are \"application\/json\" and \"multipart\/form-data\".\n                content.Headers.ContentType = new MediaTypeHeaderValue(\"application\/octet-stream\");\n                response = await client.PostAsync(uri, content);\n                string contentString = await response.Content.ReadAsStringAsync();\n                Console.WriteLine(\"Response:\\n\");\n                Console.WriteLine(contentString);\n            }\n        }\n    }\n}\n<\/pre><\/div>\n\n\n<p class=\"wp-block-paragraph\">The successful response, returned in JSON, would be the following:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; auto-links: false; gutter: false; title: ; quick-code: false; notranslate\" title=\"\">\n{\n  \"requestId\": \"b15f13a4-77d9-4fab-a701-7ad65bcdcaed\",\n  \"metadata\": {\n    \"width\": 1024,\n    \"height\": 680,\n    \"format\": \"Jpeg\"\n  },\n  \"result\": {\n    \"landmarks\": [\n      {\n        \"name\": \"Colosseum\",\n        \"confidence\": 0.9448209\n      }\n    ]\n  }\n}\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\" id=\"recognizing-handwriting\">Recognizing handwriting<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Handwriting OCR<\/strong>&nbsp;is also available in preview in Computer Vision API. This feature detects text in a handwritten image and extracts the recognized characters into a machine-usable character stream.<br>It detects and extracts handwritten text from notes, letters, essays, whiteboards, forms, etc. It works with different surfaces and backgrounds such as white paper, sticky notes, and whiteboards. No need to transcribe those handwritten notes anymore; you can snap an image instead and use Handwriting OCR to digitize your notes, saving time, effort, and paper clutter. You can even run a quick search when you want to pull the notes up again.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">You can try this out yourself by&nbsp;<a href=\"https:\/\/www.microsoft.com\/cognitive-services\/en-us\/computer-vision-api\">uploading your sample in the interactive demonstration<\/a>.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Let\u2019s say that I want to recognize the handwriting on the whiteboard:<\/p>\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2017\/04\/657052cd-d157-4d2a-85dc-1692c737dc82.webp\" alt=\"text, letter\" class=\"wp-image-10570 webp-format\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2017\/04\/657052cd-d157-4d2a-85dc-1692c737dc82.webp\"><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\"><em>An inspirational quote I\u2019d like to keep.<\/em><\/p>\n\n\n\n<p class=\"wp-block-paragraph\">In C#, I would use the following:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; 
auto-links: false; gutter: false; title: ; quick-code: false; notranslate\" title=\"\">\nusing System;\nusing System.IO;\nusing System.Collections;\nusing System.Collections.Generic;\nusing System.Net.Http;\nusing System.Net.Http.Headers;\n\nnamespace CSHttpClientSample\n{\n    static class Program\n    {\n        static void Main()\n        {\n            Console.Write(\"Enter image file path: \");\n            string imageFilePath = Console.ReadLine();\n\n            ReadHandwrittenText(imageFilePath);\n\n            Console.WriteLine(\"\\n\\n\\nHit ENTER to exit...\");\n            Console.ReadLine();\n        }\n\n        static byte[] GetImageAsByteArray(string imageFilePath)\n        {\n            FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);\n            BinaryReader binaryReader = new BinaryReader(fileStream);\n            return binaryReader.ReadBytes((int)fileStream.Length);\n        }\n\n        static async void ReadHandwrittenText(string imageFilePath)\n        {\n            var client = new HttpClient();\n\n            \/\/ Request headers. Replace this example key with your valid subscription key.\n            client.DefaultRequestHeaders.Add(\"Ocp-Apim-Subscription-Key\", \"putyourkeyhere\");\n\n            \/\/ Request parameters and URI. Set \"handwriting\" to false for printed text.\n            string requestParameter = \"handwriting=true\";\n            string uri = \"https:\/\/westus.api.cognitive.microsoft.com\/vision\/v1.0\/recognizeText?\" + requestParameter;\n\n            HttpResponseMessage response = null;\n            IEnumerable&lt;string&gt; responseValues = null;\n            string operationLocation = null;\n\n            \/\/ Request body. 
Try this sample with a locally stored JPEG image.\n            byte[] byteData = GetImageAsByteArray(imageFilePath);\n            var content = new ByteArrayContent(byteData);\n\n            \/\/ This example uses content type \"application\/octet-stream\".\n            \/\/ You can also use \"application\/json\" and specify an image URL.\n            content.Headers.ContentType = new MediaTypeHeaderValue(\"application\/octet-stream\");\n\n            try {\n                response = await client.PostAsync(uri, content);\n                responseValues = response.Headers.GetValues(\"Operation-Location\");\n            }\n            catch (Exception e)\n            {\n                Console.WriteLine(e.Message);\n            }\n\n            foreach (var value in responseValues)\n            {\n                \/\/ This value is the URI where you can get the text recognition operation result.\n                operationLocation = value;\n                Console.WriteLine(operationLocation);\n                break;\n            }\n\n            try\n            {\n                \/\/ Note: The response may not be immediately available. Handwriting recognition is an\n                \/\/ async operation that can take a variable amount of time depending on the length\n                \/\/ of the text you want to recognize. 
You may need to wait or retry this operation.\n                response = await client.GetAsync(operationLocation);\n\n                \/\/ And now you can see the response in JSON:\n                Console.WriteLine(await response.Content.ReadAsStringAsync());\n            }\n            catch (Exception e)\n            {\n                Console.WriteLine(e.Message);\n            }\n        }\n    }\n}\n\n<\/pre><\/div>\n\n\n<p class=\"wp-block-paragraph\">Upon success, the OCR results include the recognized text, along with bounding boxes for regions, lines, and words, in the following JSON:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; auto-links: false; gutter: false; title: ; quick-code: false; notranslate\" title=\"\">\n{\n  \"status\": \"Succeeded\",\n  \"recognitionResult\": {\n    \"lines\": [\n      {\n        \"boundingBox\": [\n          542,\n          724,\n          1404,\n          722,\n          1406,\n          819,\n          544,\n          820\n        ],\n        \"text\": \"You must be the change\",\n        \"words\": [\n          {\n            \"boundingBox\": [\n              535,\n              725,\n              678,\n              721,\n              698,\n              841,\n              555,\n              845\n            ],\n            \"text\": \"You\"\n          },\n          {\n            \"boundingBox\": [\n              713,\n              720,\n              886,\n              715,\n              906,\n              835,\n              734,\n              840\n            ],\n            \"text\": \"must\"\n          },\n          {\n            \"boundingBox\": [\n              891,\n              715,\n              982,\n              713,\n              1002,\n              833,\n              911,\n              835\n            ],\n            \"text\": \"be\"\n          },\n          {\n            \"boundingBox\": [\n              1002,\n              712,\n              1129,\n       
       708,\n              1149,\n              829,\n              1022,\n              832\n            ],\n            \"text\": \"the\"\n          },\n          {\n            \"boundingBox\": [\n              1159,\n              708,\n              1427,\n              700,\n              1448,\n              820,\n              1179,\n              828\n            ],\n            \"text\": \"change\"\n          }\n        ]\n      },\n      {\n        \"boundingBox\": [\n          667,\n          905,\n          1766,\n          868,\n          1771,\n          976,\n          672,\n          1015\n        ],\n        \"text\": \"you want to see in the world !\",\n        \"words\": [\n          {\n            \"boundingBox\": [\n              665,\n              901,\n              758,\n              899,\n              768,\n              1015,\n              675,\n              1017\n            ],\n            \"text\": \"you\"\n          },\n          {\n            \"boundingBox\": [\n              752,\n              900,\n              941,\n              896,\n              951,\n              1012,\n              762,\n              1015\n            ],\n            \"text\": \"want\"\n          },\n          {\n            \"boundingBox\": [\n              960,\n              896,\n              1058,\n              895,\n              1068,\n              1010,\n              970,\n              1012\n            ],\n            \"text\": \"to\"\n          },\n          {\n            \"boundingBox\": [\n              1077,\n              894,\n              1227,\n              892,\n              1237,\n              1007,\n              1087,\n              1010\n            ],\n            \"text\": \"see\"\n          },\n          {\n            \"boundingBox\": [\n              1253,\n              891,\n              1338,\n              890,\n              1348,\n              1006,\n              1263,\n              1007\n            
],\n            \"text\": \"in\"\n          },\n          {\n            \"boundingBox\": [\n              1344,\n              890,\n              1488,\n              887,\n              1498,\n              1003,\n              1354,\n              1005\n            ],\n            \"text\": \"the\"\n          },\n          {\n            \"boundingBox\": [\n              1494,\n              887,\n              1755,\n              883,\n              1765,\n              999,\n              1504,\n              1003\n            ],\n            \"text\": \"world\"\n          },\n          {\n            \"boundingBox\": [\n              1735,\n              883,\n              1813,\n              882,\n              1823,\n              998,\n              1745,\n              999\n            ],\n            \"text\": \"!\"\n          }\n        ]\n      }\n    ]\n  }\n}\n<\/pre><\/div>\n\n\n<p class=\"wp-block-paragraph\">To easily get started in your preferred language, please refer to the following:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">The\u00a0<a href=\"https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/face\/\">Face API page<\/a>\u00a0and quick-start guides on\u00a0<a href=\"https:\/\/www.microsoft.com\/cognitive-services\/en-us\/Face\/documentation\/QuickStarts\/CSharp\">C#<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/cognitive-services\/en-us\/Face\/documentation\/QuickStarts\/Java\">Java<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/cognitive-services\/en-us\/Face\/documentation\/QuickStarts\/Python\">Python<\/a>, and many more.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">The\u00a0<a href=\"https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/computer-vision\/\">Computer Vision API page<\/a>\u00a0and quick-start guides on\u00a0<a href=\"https:\/\/www.microsoft.com\/cognitive-services\/en-us\/Computer-Vision-API\/documentation\/QuickStarts\/CSharp\">C#<\/a>,\u00a0<a 
href=\"https:\/\/www.microsoft.com\/cognitive-services\/en-us\/Computer-Vision-API\/documentation\/QuickStarts\/Java\">Java<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/cognitive-services\/en-us\/Computer-Vision-API\/documentation\/QuickStarts\/Python\">Python<\/a>, and more.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">The\u00a0<a href=\"https:\/\/azure.microsoft.com\/en-us\/services\/cognitive-services\/content-moderator\/\">Content Moderator page<\/a>\u00a0and\u00a0<a href=\"https:\/\/contentmoderator.cognitive.microsoft.com\/\">test drive Content Moderator<\/a>\u00a0to learn how we enable a complete, configurable content moderation lifecycle.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">For more information about our use cases, don\u2019t hesitate to take a look at our&nbsp;<a href=\"https:\/\/customers.microsoft.com\/en-us\/search?sq=%22Microsoft%20Cognitive%20Services%22&amp;ff=&amp;p=0&amp;so=story_publish_date%20desc\">customer stories<\/a>, including a great use of our&nbsp;<a href=\"https:\/\/customers.microsoft.com\/en-us\/story\/graymeta-media-cable-cognitive-services\">Vision APIs with GrayMeta<\/a>.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Happy coding!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Bring Vision to Your App  Microsoft Cognitive Services enables developers to create the next generation of applications that can see, hear, speak, understand, and interpret needs using 
natural\u2026.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"ms_queue_id":[],"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","_alt_title":"","footnotes":"","msx_community_cta_settings":[]},"categories":[1454],"tags":[],"audience":[3057,3055,3056],"content-type":[1465],"product":[3164],"tech-community":[],"topic":[],"coauthors":[97],"class_list":["post-3954","post","type-post","status-publish","format-standard","hentry","category-ai-machine-learning","audience-data-professionals","audience-developers","audience-it-implementors","content-type-announcements","product-microsoft-foundry","review-flag-1680286581-295","review-flag-1680286581-364","review-flag-9-1680286581-259","review-flag-gener-1680286584-335","review-flag-lever-1680286579-649","review-flag-man-1680286580-833","review-flag-new-1680286579-546"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Microsoft Cognitive Services \u2013 General availability for Face API, Computer Vision API and Content Moderator | Microsoft Azure Blog<\/title>\n<meta name=\"description\" content=\"Bring Vision to Your App Microsoft Cognitive Services enables developers to create the next generation of applications that can see, hear, speak, understand, and interpret needs using natural\u2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Microsoft Cognitive Services \u2013 General 
availability for Face API, Computer Vision API and Content Moderator | Microsoft Azure Blog\" \/>\n<meta property=\"og:description\" content=\"Bring Vision to Your App Microsoft Cognitive Services enables developers to create the next generation of applications that can see, hear, speak, understand, and interpret needs using natural\u2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/\" \/>\n<meta property=\"og:site_name\" content=\"Microsoft Azure Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/microsoftazure\" \/>\n<meta property=\"article:published_time\" content=\"2017-04-19T00:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-18T14:19:40+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2017\/04\/85ba8d75-4f75-4882-b446-ec64cf5747ab.jpg\" \/>\n<meta name=\"author\" content=\"Microsoft Azure\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@azure\" \/>\n<meta name=\"twitter:site\" content=\"@azure\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Microsoft Azure\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/\"},\"author\":[{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/microsoft-azure\/\",\"@type\":\"Person\",\"@name\":\"Microsoft Azure\"}],\"headline\":\"Microsoft Cognitive Services \u2013 General availability for Face API, Computer Vision API and Content Moderator\",\"datePublished\":\"2017-04-19T00:00:00+00:00\",\"dateModified\":\"2025-06-18T14:19:40+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/\"},\"wordCount\":663,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2017\/04\/85ba8d75-4f75-4882-b446-ec64cf5747ab.jpg\",\"articleSection\":[\"AI + machine 
learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/\",\"name\":\"Microsoft Cognitive Services \u2013 General availability for Face API, Computer Vision API and Content Moderator | Microsoft Azure Blog\",\"isPartOf\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2017\/04\/85ba8d75-4f75-4882-b446-ec64cf5747ab.jpg\",\"datePublished\":\"2017-04-19T00:00:00+00:00\",\"dateModified\":\"2025-06-18T14:19:40+00:00\",\"description\":\"Bring Vision to Your App Microsoft Cognitive Services enables developers to create the next generation of applications that can see, hear, speak, understand, and interpret needs using 
natural\u2026\",\"breadcrumb\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/#primaryimage\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2017\/04\/85ba8d75-4f75-4882-b446-ec64cf5747ab.webp\",\"contentUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2017\/04\/85ba8d75-4f75-4882-b446-ec64cf5747ab.webp\",\"width\":654,\"height\":433,\"caption\":\"a castle on top of Colosseum\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/microsoft-cognitive-services-general-availability-for-face-api-computer-vision-api-and-content-moderator\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Blog home\",\"item\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI + machine learning\",\"item\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/category\/ai-machine-learning\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Microsoft Cognitive Services \u2013 General availability for Face API, Computer Vision API and Content Moderator\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#website\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\",\"name\":\"Microsoft Azure Blog\",\"description\":\"Get the latest Azure news, updates, and announcements from the Azure blog. 
From product updates to hot topics, hear from the Azure experts.\",\"publisher\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\",\"name\":\"Microsoft Azure Blog\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/06\/microsoft_logo.webp\",\"contentUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/06\/microsoft_logo.webp\",\"width\":512,\"height\":512,\"caption\":\"Microsoft Azure 
Blog\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/microsoftazure\",\"https:\/\/x.com\/azure\",\"https:\/\/www.instagram.com\/microsoftdeveloper\/\",\"https:\/\/www.linkedin.com\/company\/16188386\",\"https:\/\/www.youtube.com\/user\/windowsazure\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/person\/c702e5edd662b328b49b7e1180cab117\",\"name\":\"shakir\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/9342c7c05bb16548741bc5cd3a3e3b7ee0c8e746844ad2cc582db5beb5514c6f?s=96&d=mm&r=g7664e653ea371ce16eaf75e9fa8952c4\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/9342c7c05bb16548741bc5cd3a3e3b7ee0c8e746844ad2cc582db5beb5514c6f?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/9342c7c05bb16548741bc5cd3a3e3b7ee0c8e746844ad2cc582db5beb5514c6f?s=96&d=mm&r=g\",\"caption\":\"shakir\"},\"sameAs\":[\"https:\/\/azure.microsoft.com\"],\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/shakir\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->"}