Public Preview: Safety evaluations for generative AI applications in Azure AI Studio

Published date: March 28, 2024

Today, many organizations lack the resources to stress-test their generative AI applications so they can confidently progress from prototype to production. First, it can be challenging to build a high-quality test dataset that reflects a range of new and emerging risks, such as jailbreak attacks. Even with quality data, evaluation can be a complex and manual process, and development teams may find it difficult to interpret the results to inform effective mitigations.

To help generative AI app developers address these challenges, today we’re announcing the public preview of automated safety evaluations in Azure AI Studio. These safety evaluations measure an application’s susceptibility to jailbreak attempts and to producing violent, sexual, self-harm, and hateful content. They also provide natural-language explanations for each measurement to help inform appropriate mitigations. Developers can evaluate an application using their own test dataset, or generate a high-quality test dataset using adversarial prompt templates developed by Microsoft Research. With this capability, Azure AI Studio can also help augment and accelerate manual red-teaming efforts by enabling red teams to generate and automate adversarial prompts at scale. Learn more on Tech Community or read the documentation.
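
As a rough illustration of how these safety evaluators can be invoked programmatically, here is a minimal sketch using the azure-ai-evaluation Python package, the current home of this capability (the preview originally surfaced through the Azure AI Studio SDK, so exact class and parameter names may differ from what shipped at announcement time). The subscription, resource group, and project values are placeholders.

    # Minimal sketch: scoring one query/response pair for the four content
    # risks covered by the preview (violence, sexual, self-harm, hate).
    # Assumes the azure-ai-evaluation Python package and an existing
    # Azure AI project; the identifiers below are placeholders.
    from azure.ai.evaluation import ContentSafetyEvaluator
    from azure.identity import DefaultAzureCredential

    azure_ai_project = {
        "subscription_id": "<subscription-id>",      # placeholder
        "resource_group_name": "<resource-group>",   # placeholder
        "project_name": "<ai-studio-project>",       # placeholder
    }

    # ContentSafetyEvaluator bundles the per-category evaluators
    # (ViolenceEvaluator, SexualEvaluator, SelfHarmEvaluator,
    # HateUnfairnessEvaluator); each can also be used on its own.
    safety_eval = ContentSafetyEvaluator(
        credential=DefaultAzureCredential(),
        azure_ai_project=azure_ai_project,
    )

    result = safety_eval(
        query="How do I pick a strong password?",
        response="Use a long passphrase and a password manager.",
    )

    # The result includes, per category, a severity label, a numeric
    # score, and a natural-language reason for the measurement.
    print(result)

The same package also exposes an adversarial simulator (AdversarialSimulator) that drives Microsoft Research’s adversarial prompt templates against a target application callback, which is one way to generate a test dataset when you do not bring your own.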
