IN PREVIEW

Public Preview: Azure AI Content Safety

Published date: May 24, 2023

Announcing Azure AI Content Safety, a new Azure AI service that helps create secure online spaces. Using cutting-edge AI models, it detects hateful, violent, sexual, and self-harm content in images and text and assigns severity scores, allowing businesses to limit and prioritize what content moderators need to review. Unlike most solutions in use today, Azure AI Content Safety can handle nuance and context, reducing false positives and easing the load on human content moderation teams.
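
As a rough illustration of how such a service might be called, the sketch below submits a piece of text for analysis and prints the returned category and severity results. The endpoint path, API version, and request body shown here are assumptions made for illustration, not the documented contract; see https://aka.ms/contentsafety for the actual API reference.

```python
import os
import requests

# Assumed configuration for illustration; the real endpoint and API version
# come from your Azure AI Content Safety resource and its documentation.
ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
KEY = os.environ["CONTENT_SAFETY_KEY"]
API_VERSION = "2023-04-30-preview"  # assumption: a preview API version

def analyze_text(text: str) -> dict:
    """Submit text for moderation and return the parsed JSON response."""
    url = f"{ENDPOINT}/contentsafety/text:analyze"  # assumed path for text analysis
    headers = {
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    }
    body = {"text": text}  # assumption: minimal request body
    resp = requests.post(
        url,
        params={"api-version": API_VERSION},
        headers=headers,
        json=body,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = analyze_text("Example user comment to screen before publishing.")
    print(result)
```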

 

Unique Azure AI Content Safety capabilities include:

  • Content Classifications: Azure AI Content Safety classifies harmful content into four categories: sexual, violence, self-harm, and hate.
  • Severity Scores: It returns a severity level for each unsafe content category on a scale of 0, 2, 4, and 6 (see the sketch after this list).
  • Semantic Understanding: Using natural language processing, it comprehends the meaning and context of language and can analyze both short-form and long-form text.
  • Multilingual Models: Understands multiple languages.
  • Computer Vision: Powered by Microsoft’s Florence foundation model, which is trained on billions of text-image pairs, to perform advanced image recognition.
  • Customizable Settings: Settings can be tailored to specific business regulations and policies.
  • Real-Time: The platform detects harmful content in real time.
     
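The 0, 2, 4, 6 severity scale lends itself to simple threshold-based routing. The sketch below is a minimal, hypothetical example of how a moderation pipeline might act on per-category severity scores returned by the service: block clearly harmful content, queue borderline content for human review, and allow the rest. The category names, thresholds, and example response are illustrative choices, not service defaults.

```python
# Minimal sketch of threshold-based moderation routing, assuming the service
# returns a severity of 0, 2, 4, or 6 for each of the four harm categories.
CATEGORIES = ("hate", "sexual", "violence", "self_harm")

# Illustrative thresholds; real values should follow your own business
# regulations and policies (the service's customizable settings).
REVIEW_THRESHOLD = 2   # >= 2: send to a human moderator
BLOCK_THRESHOLD = 4    # >= 4: block automatically

def route(severities: dict[str, int]) -> str:
    """Decide what to do with a piece of content given its severity scores."""
    worst = max(severities.get(c, 0) for c in CATEGORIES)
    if worst >= BLOCK_THRESHOLD:
        return "block"
    if worst >= REVIEW_THRESHOLD:
        return "review"
    return "allow"

# Example: a hypothetical result with moderate violence and no other harm.
print(route({"hate": 0, "sexual": 0, "violence": 2, "self_harm": 0}))  # -> "review"
```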

Learn more: https://aka.ms/contentsafety 

Get started in Azure AI Content Safety Studio

  • Azure AI services
  • Services
  • Microsoft Build
