Cognitive Services Pricing - Custom Speech Service PREVIEW

Use intelligence APIs to enable vision, language, and search capabilities.

The Custom Speech Service lets you create custom speech recognition models and deploy them to a speech-to-text endpoint that’s tailored to your application. With Custom Speech Service, you can customize the language model of the speech recognizer, so it learns the vocabulary of your application and the speaking style of your users. You can also customize the acoustic model of the speech recognizer to better match the application’s expected environment and user population.

Pricing details

Model adaptation is free.

Instance   Feature             Price (Preview)
Free       Model Deployments   1 model free per month
           Model Adaptation    3 hours free per month
           Accuracy Tests      2 hours free per month
           Scale Out           N/A
           No Trace            N/A
           Request Pricing     2 hours free per month
S2         Model Deployments   $-/model/month
           Model Adaptation    Unlimited
           Accuracy Tests      2 hours free, then $-/hour
           Scale Out           $-/unit/day (each unit allows you to send five concurrent requests)
           No Trace            $-/model/month
           Request Pricing     2 hours free, then $-/hour

Support & SLA

  • Free billing and subscription management support is included.
  • Need tech support for preview services? Use our forums.
  • We guarantee that Cognitive Services running in the standard tier will be available at least 99.9 percent of the time. No SLA is provided for the free trial. Read the SLA.
  • No SLA during preview period. Learn more.


  • Tier 1 can process up to four pieces of audio (i.e., four transcriptions) at the same time and still respond in real time. If the user sends more than four concurrent pieces of audio, each subsequent piece is rejected and returned with an error code indicating too many concurrent recognitions. The same applies to Tier 2, where 12 simultaneous transcriptions can be processed. The Free tier offers one concurrent transcription. It is assumed that audio is uploaded in real time; if audio is uploaded faster, for concurrency purposes the request is still considered ongoing until the duration of the audio has passed (even though the recognition result might be returned earlier).

    Note: If a higher level of concurrency is required, please contact us.
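The admission-control behaviour described above can be sketched as a simple gate that admits requests up to the tier's cap and rejects the rest. The `ConcurrencyGate` class and its non-blocking rejection semantics are illustrative, not part of the service API:

```python
import threading

class ConcurrencyGate:
    """Admits requests up to a fixed concurrency cap (e.g. 4 for Tier 1)."""

    def __init__(self, limit):
        self._sem = threading.BoundedSemaphore(limit)

    def try_start(self):
        # Non-blocking acquire: returns False (a "too many concurrent
        # recognitions" style rejection) when all slots are in use.
        return self._sem.acquire(blocking=False)

    def finish(self):
        # A request stays "ongoing" until its audio duration has elapsed;
        # only then is its slot released.
        self._sem.release()

gate = ConcurrencyGate(4)  # Tier 1 cap from the text
accepted = [gate.try_start() for _ in range(5)]
# The first four in-flight requests are admitted; the fifth is rejected.
```

A real client would call `finish()` once the audio's duration has passed, freeing the slot for the next request.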

  • The language model is a probability distribution over sequences of words. The language model helps the system decide among sequences of words that sound similar, based on the likelihood of the word sequences themselves. For example, “recognize speech” and “wreck a nice beach” sound alike but the first hypothesis is far more likely to occur, and therefore will be assigned a higher score by the language model. If you expect voice queries to your application to contain particular vocabulary items, such as product names or jargon that rarely occur in typical speech, it is likely that you can obtain improved performance by customizing the language model. For example, if you were building an app to search MSDN by voice, it’s likely that terms like “object-oriented” or “namespace” or “dot net” will appear more frequently than in typical voice applications. Customizing the language model will enable the system to learn this.
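The scoring idea above can be illustrated with a toy bigram model. The log-probabilities below are invented purely for the example; a real language model is estimated from large text corpora:

```python
# Toy bigram log-probabilities, invented for illustration only.
bigram_logprob = {
    ("<s>", "recognize"): -2.0, ("recognize", "speech"): -1.0,
    ("<s>", "wreck"): -6.0, ("wreck", "a"): -2.0,
    ("a", "nice"): -3.0, ("nice", "beach"): -4.0,
}

def score(words, unseen=-10.0):
    """Log-probability of a word sequence under the toy bigram model."""
    total, prev = 0.0, "<s>"
    for w in words:
        total += bigram_logprob.get((prev, w), unseen)
        prev = w
    return total

# "recognize speech" outscores the acoustically similar "wreck a nice beach".
assert score(["recognize", "speech"]) > score(["wreck", "a", "nice", "beach"])
```

Customizing the language model effectively shifts these probabilities toward your application's vocabulary, so in-domain hypotheses win more often.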

  • The acoustic model is a classifier that labels short fragments of audio as one of several phonemes, or sound units, in each language. These phonemes can then be stitched together to form words. For example, the word “speech” comprises four phonemes, “s p iy ch”. These classifications are made on the order of 100 times per second. Customizing the acoustic model can enable the system to learn to do a better job recognizing speech in atypical environments. For example, if you have an app designed to be used by workers in a warehouse or factory, a customized acoustic model can more accurately recognize speech in the presence of the noises found in these environments.
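The figure of ~100 classifications per second corresponds to splitting the audio into 10 ms frames, one per classification. A minimal sketch (the 16 kHz sample rate and frame size here are common illustrative choices, not values stated by the service):

```python
def frames(samples, sample_rate=16000, frame_ms=10):
    """Split raw audio samples into consecutive 10 ms frames — one frame
    per acoustic-model classification, i.e. ~100 classifications/second."""
    step = sample_rate * frame_ms // 1000  # samples per frame (160 at 16 kHz)
    return [samples[i:i + step] for i in range(0, len(samples) - step + 1, step)]

one_second = [0.0] * 16000          # one second of silence at 16 kHz
assert len(frames(one_second)) == 100
```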

  • Short Phrase recognition supports utterances up to 15 seconds long. When used with the Speech Client library, as data is sent to the server, the client receives multiple partial results and one final result containing multiple N-best choices.

  • Long Dictation recognition supports utterances up to two minutes long. When used with the Speech Client library, as data is sent to the server, the client will receive multiple partial results and multiple final results, based on where the server indicates sentence pauses.

  • For instance, if a customer uses the S1 tier to process one million transcriptions, they will be charged the tier price ($-); the first 100,000 transcriptions are billed at $- per 1,000 transcriptions and the remaining 900,000 transcriptions are billed at $- per 1,000 transcriptions. So, in effect, the customer is billed $- + 100,000 × ($- / 1,000) + 900,000 × ($- / 1,000) = $4500.
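The tiered arithmetic in this example follows a fixed pattern that can be expressed as a small function. The actual prices are elided as “$-” on this page, so the parameter values in the usage comment are hypothetical placeholders, not real rates:

```python
def s1_monthly_bill(transcriptions, base, rate_first, rate_rest,
                    first_band=100_000):
    """Tiered bill: base tier price, plus a per-1,000-transcription rate for
    the first 100,000 transcriptions and a second rate for the remainder."""
    first = min(transcriptions, first_band)
    rest = max(transcriptions - first_band, 0)
    return base + first * rate_first / 1000 + rest * rate_rest / 1000

# Hypothetical rates for illustration: base $500, $5 then $4 per 1,000.
# 500 + 100,000 * (5/1000) + 900,000 * (4/1000) = 4600.0
example = s1_monthly_bill(1_000_000, base=500.0, rate_first=5.0, rate_rest=4.0)
```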

  • Please see the Custom Speech Service information on the Microsoft Cognitive Services webpage and on the Custom Speech Service website.

  • Custom model deployment is the process of wrapping a custom model and exposing it as a service. The resulting deployed custom model exposes an endpoint via which it can be accessed. Users can choose to deploy as many models as they require.

  • Custom Speech Service enables users to adapt baseline models based on their own acoustic and language data. We call this process model customization.

  • When a custom model is created, users have the option to upload test data to evaluate the newly created model. Users can test the new custom models with as much data as they require, i.e., execute unlimited accuracy tests.

  • When a custom model has been deployed, its URI can process one audio request at a time. For scenarios that send more than one audio request simultaneously to that URI, users can opt to scale out in increments of five concurrent requests. This is achieved by purchasing scale units. Each scale unit guarantees up to five simultaneous audio requests at a cost of $200 per scale unit. For example, if a user expects to hit that endpoint with 23 audio requests at the same time, the user would need to purchase five scale units to guarantee up to 25 concurrent requests.
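The scale-unit sizing above is a simple ceiling division, sketched below (the function name is ours, not part of the service):

```python
import math

def scale_units_needed(peak_concurrent_requests, per_unit=5):
    """Each scale unit guarantees up to five simultaneous audio requests,
    so round the expected peak concurrency up to a whole number of units."""
    return math.ceil(peak_concurrent_requests / per_unit)

# The example from the text: 23 concurrent requests need 5 units (25 slots).
units = scale_units_needed(23)
```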

  • Log management enables users to switch off logging for their deployed models. Users concerned about privacy can opt to switch off logging for a deployed model at a rate of $20 per month.

  • Request pricing refers to the cost of processing audio requests by the endpoint of a deployed custom model.


Estimate your monthly costs for Azure services

Review Azure pricing frequently asked questions

Learn more about Cognitive Services

Review technical tutorials, videos, and more resources
