
Announcing Face Redaction for Azure Media Analytics

Published September 12, 2016

Program Manager, Azure Media Services

Azure Media Redactor is part of Azure Media Analytics and offers scalable redaction in the cloud. This Media Processor (MP) performs anonymization by blurring the faces of selected individuals, and is ideal for public safety and news media scenarios. The use of body-worn cameras in policing and public spaces is becoming increasingly commonplace, which places a larger burden on these departments when videos are requested for disclosure through Freedom of Information or Public Records acts. Responding to these requests takes time and money, as the faces of minors or bystanders must be blurred out.

A video with multiple faces can take hours of manual work to redact just a few minutes of footage. This service reduces the labor-intensive task of manual redaction to just a few simple touchups.

Azure Media Analytics

Azure Media Analytics is a collection of speech and vision services offered with enterprise-grade scale, compliance, security, and global reach. For other Media Analytics processors offered by Azure, see Milan Gada's blog post Introducing Azure Media Analytics.

You can access these features in our new Azure portal, through our APIs with the presets below, or using the free Azure Media Services Explorer tool.

Redaction will be a free public preview for a limited time and will be available in all public datacenters starting around mid-September. China and US Gov datacenters will be included in the GA release.

Face Redaction

Facial redaction works by detecting faces in every frame of video and tracking the face object both forwards and backwards in time, so that the same individual can be blurred from other angles as well.

Redaction is still a difficult problem for machines to solve. It is normal to see a few false positives and false negatives, especially with difficult video where there is low light or fast movement.

Since automated redaction may not produce 100% coverage on its own, we provide ways to modify the final output to achieve full redaction.

In addition to a fully automatic mode, there is a two-pass workflow that allows you to select or de-select detected faces via a list of IDs and to make arbitrary per-frame adjustments using a metadata file in JSON format. This workflow is split into 'Analyze' and 'Redact' modes, along with a single-pass 'Combined' mode that runs both in one job.

Combined mode

This mode produces a redacted MP4 automatically, without any manual input.

Media Processor Name: “Azure Media Redactor”

| Stage | File Name | Notes |
| --- | --- | --- |
| Input asset | foo.bar | Video in WMV, MOV, or MP4 format |
| Input config | Job configuration preset | {'version':'1.0', 'options': {'Mode':'combined'}} |
| Output asset | foo_redacted.mp4 | Video with blurring applied |
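The configuration preset is just a small JSON document. As a sketch (assuming the service accepts standard double-quoted JSON, equivalent to the single-quoted form shown in the table), a small helper can build the preset string for any of the three modes. `make_redactor_preset` is an illustrative name, not part of any SDK:

```python
import json

# Illustrative helper (not part of the Azure SDK): build the job
# configuration preset string for the Redactor Media Processor.
def make_redactor_preset(mode):
    if mode not in ("combined", "analyze", "redact"):
        raise ValueError("mode must be 'combined', 'analyze', or 'redact'")
    return json.dumps({"version": "1.0", "options": {"Mode": mode}})

print(make_redactor_preset("combined"))
# → {"version": "1.0", "options": {"Mode": "combined"}}
```

The same helper covers the Analyze and Redact passes described below by swapping the mode string.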

Input example:

Output example:


Analyze mode

The "analyze" pass of the two-pass workflow takes a video input and produces a JSON file of face locations, plus a JPG image of each detected face.

| Stage | File Name | Notes |
| --- | --- | --- |
| Input asset | foo.bar | Video in WMV, MOV, or MP4 format |
| Input config | Job configuration preset | {'version':'1.0', 'options': {'Mode':'analyze'}} |
| Output asset | foo_annotations.json | Annotation data of face locations in JSON format. This can be edited by the user to modify the blurring bounding boxes. See sample below. |
| Output asset | foo_thumb%06d.jpg (e.g. foo_thumb000001.jpg, foo_thumb000002.jpg) | A cropped JPG of each detected face, where the number indicates the labelId of the face |

Output example:


{
  "version": 1,
  "timescale": 50,
  "offset": 0,
  "framerate": 25.0,
  "width": 1280,
  "height": 720,
  "fragments": [
    {
      "start": 0,
      "duration": 2,
      "interval": 2,
      "events": [
        [ 
          {
            "id": 1,
            "x": 0.306415737,
            "y": 0.03199235,
            "width": 0.15357475,
            "height": 0.322126418
          },
          {
            "id": 2,
            "x": 0.5625317,
            "y": 0.0868245438,
            "width": 0.149155334,
            "height": 0.355517566
          }
        ]
      ]
    },

… truncated
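The coordinates in the annotations are normalized to the [0, 1] range, and event times are expressed in ticks of the file's `timescale`. A short sketch of converting them to pixels and seconds — the field names follow the sample above, and the embedded `sample` dict is an abbreviated copy of it:

```python
# Abbreviated copy of the annotations sample above.
sample = {
    "timescale": 50, "width": 1280, "height": 720,
    "fragments": [{
        "start": 0, "duration": 2, "interval": 2,
        "events": [[
            {"id": 1, "x": 0.306415737, "y": 0.03199235,
             "width": 0.15357475, "height": 0.322126418},
            {"id": 2, "x": 0.5625317, "y": 0.0868245438,
             "width": 0.149155334, "height": 0.355517566},
        ]],
    }],
}

# Convert normalized rectangles to pixel boxes and tick times to seconds.
def faces_in_pixels(annotations):
    w, h = annotations["width"], annotations["height"]
    ts = annotations["timescale"]
    for frag in annotations["fragments"]:
        interval = frag.get("interval", frag["duration"])
        for i, event in enumerate(frag.get("events", [])):
            seconds = (frag["start"] + i * interval) / ts
            for face in event:
                yield {"id": face["id"], "time": seconds,
                       "box": (round(face["x"] * w), round(face["y"] * h),
                               round(face["width"] * w), round(face["height"] * h))}

for face in faces_in_pixels(sample):
    print(face)
```

For the sample above this yields face 1 at roughly a 392×23 origin with a 197×232 box at t=0.0s, which is useful when hand-editing the bounding boxes before the redact pass.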

Redact mode

The second pass of the workflow takes multiple inputs that must be combined into a single asset: the original video, the annotations JSON, and an optional list of face IDs to blur. This mode uses the annotations to apply blurring to the input video.

| Stage | File Name | Notes |
| --- | --- | --- |
| Input asset | foo.bar | Video in WMV, MOV, or MP4 format. Same video as in step 1. |
| Input asset | foo_annotations.json | Annotations metadata file from phase one, with optional modifications. |
| Input asset | foo_IDList.txt (Optional) | Optional newline-separated list of face IDs to redact. If left blank, this will blur all faces. |
| Input config | Job configuration preset | {'version':'1.0', 'options': {'Mode':'redact'}} |
| Output asset | foo_redacted.mp4 | Video with blurring applied based on annotations |
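As a sketch, the optional IDList file is just face IDs separated by newlines, so a couple of lines of code can produce it from whichever IDs you decided to blur. The function and file names here are illustrative:

```python
# Illustrative helper: build the contents of an IDList file, a
# newline-separated list of the face IDs to blur in the redact pass.
def idlist_content(ids_to_blur):
    return "\n".join(str(i) for i in sorted(set(ids_to_blur)))

# e.g. blur only faces 1 and 3, leaving all other detected faces visible
with open("foo_IDList.txt", "w") as f:
    f.write(idlist_content([3, 1, 3]))
```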


Example Output

This is the output from an IDList with one ID selected.


Understanding the annotations

The Redaction MP provides high-precision face location detection and tracking that can detect up to 64 human faces in a video frame. Frontal faces provide the best results, while side faces and small faces (less than or equal to 24x24 pixels) are challenging.

The detected and tracked faces are returned with coordinates indicating the locations of faces, as well as a face ID number indicating the tracking of that individual. Face ID numbers may reset when the frontal face is lost or overlapped in the frame, resulting in some individuals being assigned multiple IDs.
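Because of these resets, it can be worth consolidating IDs before building an IDList. A sketch of rewriting the annotations in place — the `alias` mapping is hypothetical, and in practice you would build it by inspecting the foo_thumb*.jpg crops to see which IDs belong to the same person:

```python
# Hypothetical example: faces 4 and 7 turned out to be the same person
# as face 1 after tracking resets, so fold them into one label.
alias = {4: 1, 7: 1}

def remap_ids(annotations, alias):
    """Rewrite face IDs in an analyze-pass annotations dict in place."""
    for frag in annotations["fragments"]:
        for event in frag.get("events", []):
            for face in event:
                face["id"] = alias.get(face["id"], face["id"])
    return annotations

ann = {"fragments": [{"events": [[{"id": 4}, {"id": 2}]]}]}
print(remap_ids(ann, alias))
```

After remapping, a single entry in the IDList covers every appearance of that individual.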

For detailed explanations of each attribute, visit the Face Detector blog.

Getting started

To use this service, simply create a Media Services account within your Azure subscription and use our REST API/SDKs or the free Azure Media Services Explorer (v3.44.0.0 or higher).

For sample code, check out our documentation page, replace the presets with the ones above, and use the Media Processor name "Azure Media Redactor".

Contact us

Keep up with the Azure Media Services blog to hear more updates on the Face Detection Media Processor and the Media Analytics initiative!

Send your feedback and feature requests to our UserVoice page.

If you have any questions about any of the Media Analytics products, send an email to amsanalytics@microsoft.com.