Azure Media Services now supports Live Transcription in preview
Updated: 18 November, 2019
Azure Media Services provides a platform that you can use to ingest, transcode, and dynamically package and encrypt your live video for delivery via industry-standard protocols like HLS and MPEG-DASH. Live Transcription is a new feature in the v3 APIs that lets you enhance the streams delivered to your viewers with machine-generated text transcribed from the spoken words in the audio feed.
When you publish your live stream using the HLS or MPEG-DASH streaming protocols, the service delivers the transcribed text in protocol-compliant fragments alongside the video and audio. You can then play back this video, audio, and text stream using a new build (version 2.3.3 or later) of Azure Media Player. The transcription relies on the Speech-To-Text feature of Cognitive Services.
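As a playback illustration, the following is a minimal sketch of embedding Azure Media Player 2.3.3 in a web page and pointing it at a live streaming endpoint. The player and CSS URLs follow Azure Media Player's standard CDN layout; the streaming URL (`myaccount-usw22.streaming.media.azure.net/...`) is a hypothetical placeholder for your own streaming locator, and the exact caption-track behavior in the preview may differ:

```html
<!-- Azure Media Player stylesheet and script (version 2.3.3 or later is
     required for live transcription playback) -->
<link href="https://amp.azure.net/libs/amp/2.3.3/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">
<script src="https://amp.azure.net/libs/amp/2.3.3/azuremediaplayer.min.js"></script>

<video id="livePlayer" class="azuremediaplayer amp-default-skin" tabindex="0"></video>

<script>
  // Initialize the player; options shown are illustrative defaults.
  var player = amp("livePlayer", {
    autoplay: true,
    controls: true,
    width: "640",
    height: "400"
  });

  // Hypothetical MPEG-DASH manifest URL from your streaming locator.
  // The transcribed text travels in the stream itself, so the player can
  // surface it as a selectable text track during playback.
  player.src([{
    src: "https://myaccount-usw22.streaming.media.azure.net/00000000-0000-0000-0000-000000000000/live.ism/manifest(format=mpd-time-csf)",
    type: "application/dash+xml"
  }]);
</script>
```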
Frequently asked questions
- What does the preview support?
The preview is available in the West US 2 region and supports transcription of English speech. For further detail, please refer to this document.
- How can I start using this feature?
You can create an Azure Media Services account in West US 2 and build an application using our preview REST APIs, described in the Media Services v3 OpenAPI Specification document. The CLI, PowerShell, and SDKs are not yet supported for this preview.
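To illustrate what a REST call might look like, below is a sketch of a request body for creating a live event with transcription enabled through the preview API. The resource path, the `api-version` value, and the `transcriptions` property shape are assumptions based on the v3 API conventions for this preview; confirm the exact schema against the OpenAPI Specification document before use:

```
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}
    /providers/Microsoft.Media/mediaservices/{accountName}/liveEvents/{liveEventName}
    ?api-version=2019-05-01-preview
```

```json
{
  "location": "West US 2",
  "properties": {
    "input": {
      "streamingProtocol": "RTMP"
    },
    "transcriptions": [
      {
        "language": "en-US"
      }
    ]
  }
}
```

The `transcriptions` array is what requests machine-generated text for the live event; per the preview limits described above, only English is supported.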