Starting today, the Azure Media Redactor public preview is available in all public Azure regions as well as the US Government and China datacenters. The preview is free of charge for the time being. There is currently a ten-minute limit on processed video length, which will be removed in the next release.
Please see the previous blog post for general information. In this post, we will walk step by step through a full redaction workflow using Azure Media Services Explorer (AMSE), and give an overview of open source sample code to help you get started.
Azure Media Services Explorer workflow
The easiest way to get started with Redactor is to use the open source AMSE tool on GitHub. You can run a simplified workflow using the Combined mode if you don’t need access to the annotation JSON or the face JPG images.
Once you upload an asset, right-click it, find Azure Media Redactor, and run it in Combined, Analyze, or Redact mode. For the specific input and output assets for each mode, please see our documentation page.
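Under the hood, each mode corresponds to a JSON configuration preset passed to the Redactor task. As a rough sketch (field names follow the public documentation at the time of writing and may change; the helper function name is our own, not part of any SDK), building such a preset looks like this:

```python
import json

def make_redactor_preset(mode: str, blur_type: str = "Med") -> str:
    """Build the JSON configuration preset string for an Azure Media Redactor task.

    mode: "analyze", "redact", or "combined".
    blur_type: blur effect for redaction output; documented values include
    "Low", "Med", "High", "Box", and "Black" (assumed here, verify against docs).
    """
    valid_modes = {"analyze", "redact", "combined"}
    mode = mode.lower()
    if mode not in valid_modes:
        raise ValueError(f"mode must be one of {sorted(valid_modes)}")
    options = {"mode": mode}
    # blurType only applies when redacted video is actually produced.
    if mode in ("redact", "combined"):
        options["blurType"] = blur_type
    return json.dumps({"version": "1.0", "options": options})

print(make_redactor_preset("combined"))
print(make_redactor_preset("analyze"))
```

AMSE builds and submits this preset for you; the sketch above is only useful if you drive the processor programmatically through the Media Services job APIs.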
Azure Media Redactor Visualizer open source tool
We have also released an open source visualizer tool designed to help developers who are new to the annotation format parse and use the Redactor output.
After you clone the repo, you will need to download FFmpeg from its official site in order to run the project.
If you are a developer trying to parse the JSON annotation data, look inside Models.MetaData for sample code. Note that you must download a couple of FFmpeg executables and place them in the project's output folder before it will run. See the GitHub page for full details.
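To give a feel for what that parsing involves, here is a minimal, self-contained sketch in Python. The embedded sample is modeled on the documented annotation format (a `timescale` in ticks per second, plus `fragments` containing per-frame `events` with face rectangles normalized to 0..1); treat the exact field names as assumptions and check them against the actual files your job produces:

```python
import json

# Small hand-written sample shaped like the Redactor/Face Detector
# annotation JSON (not real processor output).
SAMPLE = """{
  "version": 1,
  "timescale": 24000,
  "offset": 0,
  "framerate": 23.976,
  "width": 1280,
  "height": 720,
  "fragments": [
    {
      "start": 0,
      "duration": 2002,
      "interval": 1001,
      "events": [
        [{"id": 1, "x": 0.29, "y": 0.69, "width": 0.05, "height": 0.07}],
        [{"id": 1, "x": 0.30, "y": 0.69, "width": 0.05, "height": 0.07}]
      ]
    }
  ]
}"""

def face_rectangles(annotation: dict):
    """Yield (seconds, face_id, pixel_rect) for every face event.

    Times are converted from ticks to seconds via the timescale;
    normalized coordinates are scaled to pixel values using the
    frame width and height.
    """
    scale = annotation["timescale"]
    w, h = annotation["width"], annotation["height"]
    for frag in annotation["fragments"]:
        interval = frag.get("interval", frag["duration"])
        for i, faces in enumerate(frag.get("events", [])):
            seconds = (frag["start"] + i * interval) / scale
            for face in faces:
                rect = (int(face["x"] * w), int(face["y"] * h),
                        int(face["width"] * w), int(face["height"] * h))
                yield seconds, face["id"], rect

for seconds, face_id, rect in face_rectangles(json.loads(SAMPLE)):
    print(f"t={seconds:.3f}s face {face_id}: {rect}")
```

The visualizer's Models.MetaData code does the equivalent work in C#; the sketch above is just the shortest path to seeing where each face is at each timestamp.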
Keep up with Azure Media Services on the Azure blog for more updates on the Face Detection Media Processor and the Media Analytics initiative.
Send your feedback and feature requests to our UserVoice page.
If you have any questions about any of the Media Analytics products, send an email to firstname.lastname@example.org.