{"id":32790,"date":"2024-03-28T06:00:00","date_gmt":"2024-03-28T13:00:00","guid":{"rendered":"https:\/\/azure.microsoft.com\/en-us\/blog\/?p=32790"},"modified":"2025-01-15T14:34:45","modified_gmt":"2025-01-15T22:34:45","slug":"announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications","status":"publish","type":"post","link":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/","title":{"rendered":"Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\">In the rapidly evolving landscape of generative AI, business leaders are trying to strike the right balance between innovation and risk management. Prompt injection attacks have emerged as a significant challenge, where malicious actors try to manipulate an AI system into doing something outside its intended purpose, such as producing harmful content or exfiltrating confidential data. In addition to mitigating these security risks, organizations are also<strong> <\/strong>concerned about quality and reliability. They want to ensure that their AI systems are not generating errors or adding information that isn\u2019t substantiated in the application\u2019s data sources, which can erode user trust.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">To help customers meet these AI quality and safety challenges, we\u2019re announcing new tools now available or coming soon to <a href=\"https:\/\/azure.microsoft.com\/en-us\/products\/ai-studio\">Azure AI Studio<\/a> for generative AI app developers:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/aka.ms\/promptshields-techblog\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Prompt Shields<\/strong><\/a> to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, coming soon and now available in preview in Azure AI Content Safety.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/aka.ms\/groundednessdetection-techblog\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Groundedness detection<\/strong><\/a> to detect \u201challucinations\u201d in model outputs, coming soon.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-services\/openai\/concepts\/system-message\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Safety system messages<\/strong><\/a><strong> <\/strong>to steer your model\u2019s behavior toward safe, responsible outputs, coming soon.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/aka.ms\/Safety-Evals-Blog\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Safety evaluations<\/strong><\/a> to assess an application\u2019s vulnerability to jailbreak attacks and to generating content risks, now available in preview.&nbsp;&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/aka.ms\/Safety-Monitoring-Blog\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Risk and safety monitoring<\/strong><\/a> to understand what model inputs, outputs, and end users are triggering content filters to inform mitigations, coming soon, and now available in preview in Azure OpenAI Service.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">With these additions, <a href=\"https:\/\/azure.microsoft.com\/en-us\/solutions\/ai\/\">Azure AI<\/a> continues to provide our customers with innovative technologies to safeguard their applications across the generative AI lifecycle.<\/p>\n\n\n\n<figure class=\"wp-block-msx-ump-embed wp-block-msx-ump-embed\" class=\"wp-block-msxcm-ump-embed\">\n\t<div class=\"wp-block-embed__wrapper\">\n\t\t<universal-media-player id=\"ump-69e8d323efa76\"><\/universal-media-player>\n\t\t<script type=\"module\">\n\t\t\tconst currentTheme =\n\t\t\t\tlocalStorage.getItem('msxcmCurrentTheme') ||\n\t\t\t\t(window.matchMedia('(prefers-color-scheme: dark)').matches ? 'dark' : 'light');\n\n\t\t\t\/\/ Modify player theme based on localStorage value.\n\t\t\tlet options = {\"autoplay\":false,\"hideControls\":null,\"language\":\"en-us\",\"loop\":false,\"partnerName\":\"cloud-blogs\",\"poster\":\"https:\\\/\\\/cdn-dynmedia-1.microsoft.com\\\/is\\\/image\\\/microsoftcorp\\\/azure-ai-tools-372189?wid=1280\",\"title\":\"\",\"sources\":[{\"src\":\"https:\\\/\\\/cdn-dynmedia-1.microsoft.com\\\/is\\\/content\\\/microsoftcorp\\\/azure-ai-tools-372189-0x1080-6439k\",\"type\":\"video\\\/mp4\",\"quality\":\"HQ\"},{\"src\":\"https:\\\/\\\/cdn-dynmedia-1.microsoft.com\\\/is\\\/content\\\/microsoftcorp\\\/azure-ai-tools-372189-0x720-3266k\",\"type\":\"video\\\/mp4\",\"quality\":\"HD\"},{\"src\":\"https:\\\/\\\/cdn-dynmedia-1.microsoft.com\\\/is\\\/content\\\/microsoftcorp\\\/azure-ai-tools-372189-0x540-2160k\",\"type\":\"video\\\/mp4\",\"quality\":\"SD\"},{\"src\":\"https:\\\/\\\/cdn-dynmedia-1.microsoft.com\\\/is\\\/content\\\/microsoftcorp\\\/azure-ai-tools-372189-0x360-958k\",\"type\":\"video\\\/mp4\",\"quality\":\"LO\"}],\"ccFiles\":[{\"url\":\"https:\\\/\\\/azure.microsoft.com\\\/en-us\\\/blog\\\/wp-json\\\/msxcm\\\/v1\\\/get-captions?url=https%3A%2F%2Fwww.microsoft.com%2Fcontent%2Fdam%2Fmicrosoft%2Ffinal%2Fen-us%2Fmicrosoft-product-and-services%2Fazure%2Fvideo%2Fazure-ai-tools-372189%2Fazure-ai-tools-372189_cc_en-us.ttml\",\"locale\":\"en-us\",\"ccType\":\"TTML\"}]};\n\n\t\t\tif (currentTheme) {\n\t\t\t\toptions.playButtonTheme = currentTheme;\n\t\t\t}\n\n\t\t\tdocument.addEventListener('DOMContentLoaded', () => {\n\t\t\t\tump(\"ump-69e8d323efa76\", options);\n\t\t\t});\n\t\t<\/script>\n\t<\/div>\n\t<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"safeguard-your-llms-against-prompt-injection-attacks-with-prompt-shields\">Safeguard your LLMs against prompt injection attacks with Prompt Shields<\/h2>\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-3.webp\" alt=\"A colorful illustration of red and blue cubes floating through a purple computer screen.\" class=\"wp-image-32793 webp-format\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-3.webp\"><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">Prompt injection attacks, both direct attacks, known as jailbreaks, and indirect attacks, are emerging as significant threats to foundation model safety and security. Successful attacks that bypass an AI system\u2019s safety mitigations can have severe consequences, such as personally identifiable information (PII) and intellectual property (IP) leakage.&nbsp;<\/p>\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-5.webp\" alt=\"A colorful illustration of red and blue cubes floating towards a purple computer screen. The red cubes are blocked by a shield in front of the computer screen.\" class=\"wp-image-32795 webp-format\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-5.webp\"><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">To combat these threats, Microsoft has introduced Prompt Shields to detect suspicious inputs in real time and block them before they reach the foundation model. This proactive approach safeguards the integrity of large language model (LLM) systems and user interactions.<\/p>\n\n\n<figure class=\"wp-block-image alignleft size-full is-resized is-style-default\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/e418466a-bdb4-4cf5-b398-6515eecc3d9e.webp\" alt=\"A colorful illustration depicting a user sending a red prompt that says &ldquo;do anything now&rdquo; towards a blue computer screen. The prompt is blocked by a blue shield.\" class=\"wp-image-32798 webp-format\" style=\"width:400px\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/e418466a-bdb4-4cf5-b398-6515eecc3d9e.webp\"><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Prompt Shield for Jailbreak Attacks: <\/strong>Jailbreak, direct prompt attacks, or user prompt injection attacks, refer to users manipulating prompts to inject harmful inputs into LLMs to distort actions and outputs. An example of a jailbreak command is a \u2018DAN\u2019 (Do Anything Now) attack, which can trick the LLM into inappropriate content generation or ignoring system-imposed restrictions. Our Prompt Shield for jailbreak attacks, released this past November as \u2018jailbreak risk detection\u2019, detects these attacks by analyzing prompts for malicious instructions and blocks their execution.<\/p>\n\n\n<figure class=\"wp-block-image alignright size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/13ac9ce7-6490-42dd-9533-87f31865bd5f.webp\" alt=\"A colorful illustration of red and blue cubes floating from a yellow folder of files towards a blue rectangular screen. The red cubes are blocked by a blue shield.\" class=\"wp-image-32800 webp-format\" style=\"width:400px\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/13ac9ce7-6490-42dd-9533-87f31865bd5f.webp\"><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Prompt Shield for Indirect Attacks: <\/strong>Indirect prompt injection attacks, although not as well-known as jailbreak attacks, present a unique challenge and threat. In these covert attacks, hackers aim to manipulate AI systems indirectly by altering input data, such as websites, emails, or uploaded documents. This allows hackers to trick the foundation model into performing unauthorized actions without directly tampering with the prompt or LLM. The consequences of which can lead to account takeover, defamatory or harassing content, and other malicious actions. To combat this, we&#8217;re introducing a Prompt Shield for indirect attacks, designed to detect and block these hidden attacks to support the security and integrity of your generative AI applications.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"identify-llm-hallucinations-with-groundedness-detection\">Identify LLM Hallucinations with Groundedness detection<\/h2>\n\n\n<figure class=\"wp-block-image alignleft size-full is-resized\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/Groundednes-Detection.webp\" alt=\"A colorful illustration of a blue computer screen filled with lines of blue text and one squiggly line of red text, with a yellow alert symbol above it. Behind the computer screen are two blue databases with an arrow pointing to the computer screen.\" class=\"wp-image-32871 webp-format\" style=\"width:725px;height:auto\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/Groundednes-Detection.webp\"><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">&#8216;Hallucinations&#8217; in generative AI refer to instances when a model confidently generates outputs that misalign with common sense or lack grounding data. This issue can manifest in different ways, ranging from minor inaccuracies to starkly false outputs. Identifying hallucinations is crucial for enhancing the quality and trustworthiness of generative AI systems. Today, Microsoft is announcing Groundedness detection, a new feature designed to identify text-based hallucinations. This feature detects &#8216;ungrounded material&#8217; in text to support the quality of LLM outputs.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"steer-your-application-with-an-effective-safety-system-message\">Steer your application with an effective safety system message<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">In addition to adding safety systems like <a href=\"https:\/\/azure.microsoft.com\/en-us\/products\/ai-services\/ai-content-safety\">Azure AI Content Safety<\/a>, prompt engineering is one of the most powerful and popular ways to improve the reliability of a generative AI system. Today, Azure AI enables users to ground foundation models on trusted data sources and build system messages that guide the optimal use of that grounding data and overall behavior (do this, not that). At Microsoft, we have found that even small changes to a system message can have a significant impact on an application\u2019s quality and safety. To help customers build effective system messages, we\u2019ll soon provide safety system message templates directly in the Azure AI Studio and <a href=\"https:\/\/azure.microsoft.com\/en-us\/products\/ai-services\/openai-service\">Azure OpenAI Service<\/a> playgrounds by default. Developed by Microsoft Research to mitigate harmful content generation and misuse, these templates can help developers start building high-quality applications in less time.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"evaluate-your-llm-application-for-risks-and-safety\">Evaluate your LLM application for risks and safety<\/h2>\n\n\n<figure class=\"wp-block-image alignleft size-large is-resized\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-4-1024x655.webp\" alt=\"A colorful illustration of a blue screen with three bar charts, each with text descriptions beneath them. One bar chart is highlighted and shows two short bars in a blue color and one tall bar in a red color.\" class=\"wp-image-32794 webp-format\" style=\"width:400px\" srcset=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-4-1024x655.webp 1024w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-4-300x192.webp 300w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-4-768x491.webp 768w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-4.webp 1281w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-4-1024x655.webp\"><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">How do you know if your application and mitigations are working as intended? Today, many organizations lack the resources to stress test their generative AI applications so they can confidently progress from prototype to production. First, it can be challenging to build a high-quality test dataset that reflects a range of new and emerging risks, such as jailbreak attacks. Even with quality data, evaluations can be a complex and manual process, and development teams may find it difficult to interpret the results to inform effective mitigations.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Azure AI Studio provides robust, automated evaluations to help organizations systematically assess and improve their generative AI applications before deploying to production. While we currently support pre-built quality evaluation metrics such as groundedness, relevance, and fluency, today we\u2019re <a href=\"https:\/\/aka.ms\/Safety-Evals-Blog\">announcing automated evaluations<\/a> for new risk and safety metrics. These safety evaluations measure an application\u2019s susceptibility to jailbreak attempts and to producing violent, sexual, self-harm-related, and hateful and unfair content. They also provide natural language explanations for evaluation results to help inform appropriate mitigations. Developers can evaluate an application using their own test dataset or simply generate a high-quality test dataset using adversarial prompt templates developed by Microsoft Research. With this capability, Azure AI Studio can also help augment and accelerate manual red-teaming efforts by enabling red teams to generate and automate adversarial prompts at scale.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"monitor-your-azure-openai-service-deployments-for-risks-and-safety-in-production\">Monitor your Azure OpenAI Service deployments for risks and safety in production<\/h2>\n\n\n<figure class=\"wp-block-image alignright size-large is-resized\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-6-1024x600.webp\" alt=\"A colorful illustration of a blue computer screen with different kinds of charts in a dashboard. A blue line graph is highlighted, with a peak on the y-axis in red.\" class=\"wp-image-32797 webp-format\" style=\"width:400px\" srcset=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-6-1024x600.webp 1024w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-6-300x176.webp 300w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-6-768x450.webp 768w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-6.webp 1171w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/image-6-1024x600.webp\"><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">Monitoring generative AI models in production is an essential part of the AI lifecycle. Today we are pleased to announce risk and safety monitoring in Azure OpenAI Service. Now, developers can visualize the volume, severity, and category of user inputs and model outputs that were blocked by their Azure OpenAI Service content filters and blocklists over time. In addition to content-level monitoring and insights, we are introducing reporting for potential abuse at the user level. Now, enterprise customers have greater visibility into trends where end-users continuously send risky or harmful requests to an Azure OpenAI Service model. If content from a user is flagged as harmful by a customer\u2019s pre-configured content filters or blocklists, the service will use contextual signals to determine whether the user\u2019s behavior qualifies as abuse of the AI system. With these new monitoring capabilities, organizations can better-understand trends in application and user behavior and apply those insights to adjust content filter configurations, blocklists, and overall application design.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"confidently-scale-the-next-generation-of-safe-responsible-ai-applications\">Confidently scale the next generation of safe, responsible AI applications<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Generative AI can be a force multiplier for every department, company, and industry. Azure AI customers are using this technology to <a href=\"https:\/\/customers.microsoft.com\/en-us\/story\/1728829430186194098-nsure-power-platform-insurance-usa\" target=\"_blank\" rel=\"noreferrer noopener\">operate more efficiently<\/a>, <a href=\"https:\/\/customers.microsoft.com\/en-us\/story\/1742331877394146486-iwilltherapy-azure-openai-service-health-provider-en-india\" target=\"_blank\" rel=\"noreferrer noopener\">improve customer experience<\/a>, and <a href=\"https:\/\/customers.microsoft.com\/en-us\/story\/1745242950134216820-schneider-electric-azure-machine-learning-discrete-manufacturing-en-france\" target=\"_blank\" rel=\"noreferrer noopener\">build new pathways for innovation and growth<\/a>. At the same time, foundation models introduce new challenges for security and safety that require novel mitigations and continuous learning.&nbsp;<\/p>\n\n\n<div class=\"wp-block-msxcm-kicker-container\">\n\t<div class=\" wp-block-msxcm-kicker-block wp-block-msxcm-kicker--align-right\" data-bi-an=\"Kicker Right\">\n\t\t<p class=\"wp-block-msxcm-kicker__title small text-neutral-400 text-uppercase\">\n\t\t\tInvest in App Innovation to Stay Ahead of the Curve\t\t<\/p>\n\t\t<a\n\t\t\tclass=\"wp-block-msxcm-kicker__cta btn btn-link p-0 text-decoration-none\"\n\t\t\thref=\"https:\/\/info.microsoft.com\/ww-landing-invest-in-app-innovation-to-stay-ahead-of-the-curve.html?lcid=EN-US\"\n\t\t\ttarget=\"_blank\"\t\t>\n\t\t\t<span>Learn more<\/span> <span class=\"glyph-append glyph-append-xsmall wp-block-msxcm-kicker__glyph glyph-append-go\"><\/span>\n\t\t<\/a>\n\t<\/div>\n<\/div>\n\n\n\n<p class=\"wp-block-paragraph\">At Microsoft, whether we are working on traditional machine learning or cutting-edge AI technologies, we ground our research, policy, and engineering efforts in our <a href=\"https:\/\/www.microsoft.com\/en-us\/ai\/principles-and-approach\" target=\"_blank\" rel=\"noreferrer noopener\">AI principles<\/a>. We\u2019ve built our Azure AI portfolio to help developers embed critical responsible AI practices directly into the AI development lifecycle. In this way, Azure AI provides a consistent, scalable platform for responsible innovation <a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/azure-openai-service-powers-the-microsoft-copilot-ecosystem\/\" target=\"_blank\" rel=\"noreferrer noopener\">for our first-party copilots<\/a> and for the thousands of customers building their own game-changing solutions with Azure AI. We\u2019re excited to continue collaborating with customers and partners on novel ways to mitigate, evaluate, and monitor risks and help every organization realize their goals with generative AI with confidence.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"learn-more-about-today-s-announcements\">Learn more about today\u2019s announcements<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">Get started in <a href=\"https:\/\/ai.azure.com\/\">Azure AI Studio<\/a>.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Dig deeper with technical blogs on Tech Community:\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/aka.ms\/promptshields-techblog\" target=\"_blank\" rel=\"noreferrer noopener\">Prompt Shields<\/a><\/li>\n\n\n\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/aka.ms\/groundednessdetection-techblog\" target=\"_blank\" rel=\"noreferrer noopener\">Groundedness detection<\/a><\/li>\n\n\n\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/aka.ms\/Safety-Evals-Blog\" target=\"_blank\" rel=\"noreferrer noopener\">Safety evaluations<\/a><\/li>\n\n\n\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/aka.ms\/Safety-Monitoring-Blog\" target=\"_blank\" rel=\"noreferrer noopener\">Risk and safety monitoring<\/a><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-msxcm-cta-block\" data-moray data-bi-an=\"CTA Block\">\n\t<div class=\"card d-block mx-ng mx-md-0\">\n\t\t<div class=\"row no-gutters\">\n\n\t\t\t\t\t\t\t<div class=\"col-md-4\">\n\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"603\" height=\"440\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/02\/CTA-block-image.jpg\" class=\"card-img img-object-cover\" alt=\"a person sitting at a desk using a computer\" srcset=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/02\/CTA-block-image.jpg 603w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/02\/CTA-block-image-300x219.jpg 300w\" sizes=\"auto, (max-width: 603px) 100vw, 603px\" \/>\t\t\t\t<\/div>\n\t\t\t\n\t\t\t<div class=\"d-flex col-md\">\n\t\t\t\t<div class=\"card-body align-self-center p-4 p-md-5\">\n\t\t\t\t\t\n\t\t\t\t\t<h2>Azure AI Studio<\/h2>\n\n\t\t\t\t\t<div class=\"mb-3\">\n\t\t\t\t\t\t<p>Build AI solutions faster with prebuilt models or train models using your data to innovate securely and at scale.<\/p>\n\t\t\t\t\t<\/div>\n\n\t\t\t\t\t\t\t\t\t\t\t<div class=\"link-group\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/azure.microsoft.com\/en-us\/products\/ai-studio\" class=\"btn btn-link text-decoration-none p-0\" target=\"_blank\">\n\t\t\t\t\t\t\t\t<span>Try now<\/span>\n\t\t\t\t\t\t\t\t<span class=\"glyph-append glyph-append-chevron-right glyph-append-xsmall\"><\/span>\n\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t<\/div>\n\n\t\t\t\t\t<\/div>\n\t<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>To help customers meet  new AI quality and safety challenges, we\u2019re announcing new tools now available or coming soon to Azure AI Studio for generative AI app developers.<\/p>\n","protected":false},"author":45,"featured_media":32870,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"ms_queue_id":[],"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","_alt_title":"","footnotes":"","msx_community_cta_settings":[]},"categories":[1454],"tags":[2671,2747],"audience":[3057,3055,3053,3056],"content-type":[1465],"product":[1803,2756,2758,1795],"tech-community":[],"topic":[],"coauthors":[579],"class_list":["post-32790","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-machine-learning","tag-ai","tag-generative-ai","audience-data-professionals","audience-developers","audience-it-decision-makers","audience-it-implementors","content-type-announcements","product-azure-ai","product-azure-ai-content-safety","product-azure-ai-studio","product-azure-openai","review-flag-machi-1680286585-314","review-flag-new-1680286579-546"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog<\/title>\n<meta name=\"description\" content=\"Learn more on how Prompt Shields, Groundedness detection, and other responsible AI tools in Azure help prevent, evaluate, and monitor AI risks and attacks.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog\" \/>\n<meta property=\"og:description\" content=\"Learn more on how Prompt Shields, Groundedness detection, and other responsible AI tools in Azure help prevent, evaluate, and monitor AI risks and attacks.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/\" \/>\n<meta property=\"og:site_name\" content=\"Microsoft Azure Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/microsoftazure\" \/>\n<meta property=\"article:published_time\" content=\"2024-03-28T13:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-01-15T22:34:45+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"675\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Sarah Bird\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.png\" \/>\n<meta name=\"twitter:creator\" content=\"@azure\" \/>\n<meta name=\"twitter:site\" content=\"@azure\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sarah Bird\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/\"},\"author\":[{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/sarah-bird\/\",\"@type\":\"Person\",\"@name\":\"Sarah Bird\"}],\"headline\":\"Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications\",\"datePublished\":\"2024-03-28T13:00:00+00:00\",\"dateModified\":\"2025-01-15T22:34:45+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/\"},\"wordCount\":1431,\"publisher\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.webp\",\"keywords\":[\"AI\",\"Generative AI\"],\"articleSection\":[\"AI + machine learning\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/\",\"name\":\"Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog\",\"isPartOf\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.webp\",\"datePublished\":\"2024-03-28T13:00:00+00:00\",\"dateModified\":\"2025-01-15T22:34:45+00:00\",\"description\":\"Learn more on how Prompt Shields, Groundedness detection, and other responsible AI tools in Azure help prevent, evaluate, and monitor AI risks and attacks.\",\"breadcrumb\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#primaryimage\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.webp\",\"contentUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.webp\",\"width\":1200,\"height\":675,\"caption\":\"A colorful illustration of red and blue cubes floating towards a purple computer screen. The red cubes are blocked by a shield in front of the computer screen.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Blog home\",\"item\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI + machine learning\",\"item\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/category\/ai-machine-learning\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#website\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\",\"name\":\"Microsoft Azure Blog\",\"description\":\"Get the latest Azure news, updates, and announcements from the Azure blog. From product updates to hot topics, hear from the Azure experts.\",\"publisher\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\",\"name\":\"Microsoft Azure Blog\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/06\/microsoft_logo.webp\",\"contentUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/06\/microsoft_logo.webp\",\"width\":512,\"height\":512,\"caption\":\"Microsoft Azure Blog\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/microsoftazure\",\"https:\/\/x.com\/azure\",\"https:\/\/www.instagram.com\/microsoftdeveloper\/\",\"https:\/\/www.linkedin.com\/company\/16188386\",\"https:\/\/www.youtube.com\/user\/windowsazure\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/person\/c202d869dd6f3cb29ea80999e19313a9\",\"name\":\"Jordan Davis\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/ec9971e70dcc01d0fb3aee74bf0f300b2dc40f42a228ed523c90f16cae07c017?s=96&d=mm&r=g4accb07cb584a4dd53673b002bf33930\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/ec9971e70dcc01d0fb3aee74bf0f300b2dc40f42a228ed523c90f16cae07c017?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/ec9971e70dcc01d0fb3aee74bf0f300b2dc40f42a228ed523c90f16cae07c017?s=96&d=mm&r=g\",\"caption\":\"Jordan Davis\"},\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/jordandavis\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog","description":"Learn more on how Prompt Shields, Groundedness detection, and other responsible AI tools in Azure help prevent, evaluate, and monitor AI risks and attacks.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/","og_locale":"en_US","og_type":"article","og_title":"Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog","og_description":"Learn more on how Prompt Shields, Groundedness detection, and other responsible AI tools in Azure help prevent, evaluate, and monitor AI risks and attacks.","og_url":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/","og_site_name":"Microsoft Azure Blog","article_publisher":"https:\/\/www.facebook.com\/microsoftazure","article_published_time":"2024-03-28T13:00:00+00:00","article_modified_time":"2025-01-15T22:34:45+00:00","og_image":[{"width":1200,"height":675,"url":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.png","type":"image\/png"}],"author":"Sarah Bird","twitter_card":"summary_large_image","twitter_image":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.png","twitter_creator":"@azure","twitter_site":"@azure","twitter_misc":{"Written by":"Sarah Bird","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#article","isPartOf":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/"},"author":[{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/sarah-bird\/","@type":"Person","@name":"Sarah Bird"}],"headline":"Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications","datePublished":"2024-03-28T13:00:00+00:00","dateModified":"2025-01-15T22:34:45+00:00","mainEntityOfPage":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/"},"wordCount":1431,"publisher":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization"},"image":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#primaryimage"},"thumbnailUrl":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.webp","keywords":["AI","Generative AI"],"articleSection":["AI + machine learning"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/","url":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/","name":"Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog","isPartOf":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#primaryimage"},"image":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#primaryimage"},"thumbnailUrl":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.webp","datePublished":"2024-03-28T13:00:00+00:00","dateModified":"2025-01-15T22:34:45+00:00","description":"Learn more on how Prompt Shields, Groundedness detection, and other responsible AI tools in Azure help prevent, evaluate, and monitor AI risks and attacks.","breadcrumb":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#primaryimage","url":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.webp","contentUrl":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/03\/2_Prompt-Shields-Blue.webp","width":1200,"height":675,"caption":"A colorful illustration of red and blue cubes floating towards a purple computer screen. The red cubes are blocked by a shield in front of the computer screen."},{"@type":"BreadcrumbList","@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-new-tools-in-azure-ai-to-help-you-build-more-secure-and-trustworthy-generative-ai-applications\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Blog home","item":"https:\/\/azure.microsoft.com\/en-us\/blog\/"},{"@type":"ListItem","position":2,"name":"AI + machine learning","item":"https:\/\/azure.microsoft.com\/en-us\/blog\/category\/ai-machine-learning\/"},{"@type":"ListItem","position":3,"name":"Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications"}]},{"@type":"WebSite","@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/#website","url":"https:\/\/azure.microsoft.com\/en-us\/blog\/","name":"Microsoft Azure Blog","description":"Get the latest Azure news, updates, and announcements from the Azure blog. From product updates to hot topics, hear from the Azure experts.","publisher":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/azure.microsoft.com\/en-us\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization","name":"Microsoft Azure Blog","url":"https:\/\/azure.microsoft.com\/en-us\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/06\/microsoft_logo.webp","contentUrl":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/06\/microsoft_logo.webp","width":512,"height":512,"caption":"Microsoft Azure Blog"},"image":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/microsoftazure","https:\/\/x.com\/azure","https:\/\/www.instagram.com\/microsoftdeveloper\/","https:\/\/www.linkedin.com\/company\/16188386","https:\/\/www.youtube.com\/user\/windowsazure"]},{"@type":"Person","@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/person\/c202d869dd6f3cb29ea80999e19313a9","name":"Jordan Davis","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/ec9971e70dcc01d0fb3aee74bf0f300b2dc40f42a228ed523c90f16cae07c017?s=96&d=mm&r=g4accb07cb584a4dd53673b002bf33930","url":"https:\/\/secure.gravatar.com\/avatar\/ec9971e70dcc01d0fb3aee74bf0f300b2dc40f42a228ed523c90f16cae07c017?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/ec9971e70dcc01d0fb3aee74bf0f300b2dc40f42a228ed523c90f16cae07c017?s=96&d=mm&r=g","caption":"Jordan Davis"},"url":"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/jordandavis\/"}]}},"msxcm_display_generated_audio":false,"msxcm_animated_featured_image":null,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"Microsoft Azure Blog","distributor_original_site_url":"https:\/\/azure.microsoft.com\/en-us\/blog","push-errors":false,"_links":{"self":[{"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/posts\/32790","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/users\/45"}],"replies":[{"embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/comments?post=32790"}],"version-history":[{"count":0,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/posts\/32790\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/media\/32870"}],"wp:attachment":[{"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/media?parent=32790"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/categories?post=32790"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/tags?post=32790"},{"taxonomy":"audience","embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/audience?post=32790"},{"taxonomy":"content-type","embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/content-type?post=32790"},{"taxonomy":"product","embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/product?post=32790"},{"taxonomy":"tech-community","embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/tech-community?post=32790"},{"taxonomy":"topic","embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/topic?post=32790"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-json\/wp\/v2\/coauthors?post=32790"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}