Azure AI Content Safety
Use AI to monitor text and image content for safety.
Monitor content with advanced AI-powered language and vision models
Azure AI Content Safety is a content moderation platform that uses AI to keep your content safe. Create better online experiences for everyone with powerful AI models that detect offensive or inappropriate content in text and images quickly and efficiently.
- Language models analyze multilingual text, in both short and long form, with an understanding of context and semantics
- Vision models perform image recognition and detect objects in images using state-of-the-art Florence technology
- AI content classifiers identify sexual, violent, hate, and self-harm content with high levels of granularity
- Content moderation severity scores indicate the level of content risk on a scale of low to high
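These classifiers and severity scores are surfaced through the Content Safety APIs and client SDKs. As a rough illustration, a text-analysis call with the Python SDK (azure-ai-contentsafety) might look like the sketch below; the endpoint, key, and sample text are placeholders, and the response fields assume the generally available SDK shape, so check the current reference documentation before relying on them.

```python
# Minimal text-analysis sketch with the azure-ai-contentsafety Python SDK.
# Endpoint and key are placeholders; response fields assume the GA (1.0.x) SDK shape.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)

# Submit a piece of user-generated text for classification.
response = client.analyze_text(AnalyzeTextOptions(text="Example user comment to screen"))

# Each category (Hate, SelfHarm, Sexual, Violence) comes back with a severity score.
for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```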
Make confident content moderation decisions
Apply advanced language and vision models to accurately detect unsafe or inappropriate content and automatically assign severity scores in real time. Enable your business to review and prioritize flagged items and to take informed action.
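Taking informed action typically means mapping the returned severity scores to business rules. The sketch below shows one possible triage policy; the numeric scale and thresholds are illustrative assumptions, not service defaults.

```python
# A sketch of a severity-based triage rule for flagged content.
# Assumes numeric severity scores where higher means riskier; the
# thresholds here are illustrative business rules, not service defaults.
def triage(category: str, severity: int) -> str:
    if severity >= 6:
        return f"block: {category} content at severity {severity}"
    if severity >= 2:
        return f"queue for human review: {category} at severity {severity}"
    return "allow"

# Example decisions for a batch of (category, severity) results.
for category, severity in [("Violence", 0), ("Hate", 4), ("Sexual", 6)]:
    print(triage(category, severity))
```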
Improve user and brand safety across languages
Support brand and customer safety globally. Multilingual capabilities in Azure AI Content Safety enable content moderation in English, German, Spanish, French, Portuguese, Italian, and Chinese.
Apply AI responsibly
Establish responsible AI practices by monitoring both user- and AI-generated content. Azure OpenAI Service and GitHub Copilot rely on Azure AI Content Safety to filter content in user requests and responses, ensuring AI models are used responsibly and for their intended purposes.
Comprehensive security and compliance, built in
- Microsoft invests more than USD 1 billion annually in cybersecurity research and development.
- We employ more than 3,500 security experts who are dedicated to data security and privacy.
Flexible consumption-based pricing
Pricing for Azure AI Content Safety is based on a pay-as-you-go consumption model.
Get started with an Azure free account
After your credit, move to pay as you go to keep building with the same free services. Pay only if you use more than your free monthly amounts.
Frequently asked questions
- Azure OpenAI Service uses Azure AI Content Safety as a content management system that works alongside core models to filter content. This system works by running both the input prompt and the AI-generated content through an ensemble of classification models aimed at detecting misuse.
- Taking advantage of the Florence foundation model, as well as the content filtering capabilities of Azure AI Content Safety, Azure AI Vision can identify whether an image triggers the hate, sexual, violent, or self-harm classification.
- In the Standard (S1) tier, there are two types of APIs. For the text API, the service is billed for the number of text records submitted to the service. For the image API, the service is billed for the number of images submitted to the service.
- A text record in the S1 tier contains up to 1,000 characters as measured by Unicode code points. If a text input into the Content Safety API is more than 1,000 characters, it counts as one text record for each unit of 1,000 characters. For instance, a text input of 7,500 characters counts as eight text records, and a text input of 500 characters counts as one text record. (A short record-counting sketch follows these questions.)
- Usage is stopped if the free-tier transaction limit is reached. Customers cannot accrue overages on the free tier.
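For estimating text API usage under the record-counting rule above, billed records reduce to a ceiling division over 1,000-character units. A minimal sketch (Python's len() counts Unicode code points, matching the rule above):

```python
import math

def billed_text_records(text: str) -> int:
    """Estimate S1-tier text records: one record per started 1,000-character unit."""
    return math.ceil(len(text) / 1000)

print(billed_text_records("x" * 7500))  # 8 text records
print(billed_text_records("x" * 500))   # 1 text record
```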