Azure AI Content Safety
Safeguard user and AI-generated text and image content.
Build AI applications responsibly with Azure AI Content Safety
- Detect and filter violence, hate, sexual, and self-harm content
- Monitor text, images, and multimodal content
- Analyze human- and AI-generated content
Advanced capabilities to uphold the safety and security of foundation models
Safeguard your generative AI applications
Build an advanced safety system for foundation models to detect and mitigate harmful content and risks in user prompts and AI-generated outputs. Use Prompt Shields to detect and block prompt injection attacks, groundedness detection to pinpoint ungrounded or hallucinated material, and protected material detection to identify copyrighted or owned content.
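As a minimal sketch of what this safety-system plumbing can look like, the snippet below screens text through the Content Safety text-analysis API via the Python SDK (azure-ai-contentsafety). The environment variable names are placeholders for your own resource's endpoint and key, and Prompt Shields, groundedness detection, and protected material detection are separate operations exposed through the same resource.

```python
# Minimal sketch: screen text with Azure AI Content Safety before it
# reaches (or leaves) a foundation model. Assumes the Python SDK
# (pip install azure-ai-contentsafety); the environment variable names
# below are placeholders for your own resource's endpoint and key.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Run a user prompt (or a model's output) through the harm classifiers.
result = client.analyze_text(AnalyzeTextOptions(text="Text to screen"))

# Each category (hate, sexual, violence, self-harm) returns a severity
# score; route anything above your threshold to blocking or review.
for item in result.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```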
Configure your content filters to reflect standards and policies
Adjust content filters to align with your responsible AI policies, community guidelines, or unique application scenarios. Configure severity thresholds across the hate, violence, self-harm, and sexual categories to set your tolerance for each. Further customize and enhance your content filters by creating blocklists with your own terms and keywords.
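A sketch of client-side policy enforcement follows: the hypothetical is_allowed helper rejects text that matches a curated blocklist or exceeds per-category severity thresholds. The threshold values and the blocklist name "my-app-blocklist" are illustrative, and the blocklist_names / halt_on_blocklist_hit options follow the GA Python SDK (verify against the current SDK reference).

```python
# Sketch: enforce your own severity thresholds and a custom blocklist
# on top of the analyze-text call. Thresholds and the blocklist name
# are illustrative, not service defaults.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Maximum severity tolerated per category (0 = strictest).
THRESHOLDS = {"Hate": 2, "Violence": 4, "Sexual": 2, "SelfHarm": 0}

def is_allowed(client: ContentSafetyClient, text: str) -> bool:
    result = client.analyze_text(
        AnalyzeTextOptions(
            text=text,
            blocklist_names=["my-app-blocklist"],  # your curated terms
            halt_on_blocklist_hit=True,            # skip classifiers on a hit
        )
    )
    if result.blocklists_match:  # an exact blocklist term matched
        return False
    # Reject when any category's severity exceeds the configured tolerance.
    return all(
        item.severity <= THRESHOLDS.get(item.category, 0)
        for item in result.categories_analysis
    )
```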
Seamless integration and deployment
Build a safety system that operates uniformly across all foundation models in our AI Studio model catalog, including Azure OpenAI Service, or use the Azure AI Content Safety API. Azure AI Content Safety provides flexibility and compatibility, without ever compromising on safety.
Comprehensive security and compliance, built in
- Microsoft invests more than $1 billion annually in cybersecurity research and development.
- We employ more than 3,500 security experts who are dedicated to data security and privacy.
Flexible consumption-based pricing
Pricing for Azure AI Content Safety is based on a pay-as-you-go consumption model.
Get started with an Azure free account
After your credit, move to pay as you go to keep building with the same free services. Pay only if you use more than your free monthly amounts.
Frequently asked questions
- Azure OpenAI Service uses Azure AI Content Safety as a content management system that works alongside core models to filter content. This system works by running both the input prompt and AI-generated content through an ensemble of classification models aimed at detecting misuse.
- Taking advantage of the Florence foundation model, as well as the content filtering capabilities of Azure AI Content Safety, Azure AI Vision can identify whether an image contains hateful, sexual, violent, or self-harm content (see the image-analysis sketch after this list).
- In the Standard (S1) tier, there are two types of APIs. For the text API, the service is billed for the number of text records submitted to the service. For the image API, the service is billed for the number of images submitted to the service.
- A text record in the S1 tier contains up to 1,000 characters, as measured by Unicode code points. If a text input to the Content Safety API exceeds 1,000 characters, it counts as one text record for each unit of 1,000 characters. For instance, a text input of 7,500 characters counts as eight text records, and a text input of 500 characters counts as one text record (see the record-count sketch after this list).
- Usage is stopped if the free-tier transaction limit is reached. Customers cannot accrue overages on the free tier.
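For the image classification mentioned above, here is a sketch of the Content Safety image API using the same Python SDK and placeholder endpoint/key as the earlier examples; the file name "photo.jpg" is illustrative.

```python
# Sketch: classify a local image with the Content Safety image API.
# Endpoint/key environment variables and the file name are placeholders.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

with open("photo.jpg", "rb") as f:
    result = client.analyze_image(
        AnalyzeImageOptions(image=ImageData(content=f.read()))
    )

# Severity per category for the image (hate, sexual, violence, self-harm).
for item in result.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```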
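The text-record billing rule is simple ceiling arithmetic; this small sketch (the text_records function name is ours, not part of any SDK) shows how to estimate billable records before submitting text.

```python
import math

def text_records(text: str) -> int:
    """Estimate billable S1 text records: one per 1,000 Unicode code points."""
    # Python's len() on a str counts code points, matching the billing rule.
    return max(1, math.ceil(len(text) / 1000))

assert text_records("x" * 7500) == 8  # 7,500 characters -> 8 records
assert text_records("x" * 500) == 1   # 500 characters   -> 1 record
```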