Cognitive Services Pricing – Custom Speech Service PREVIEW

Use intelligence APIs to enable vision, speech, language and knowledge capabilities

The Custom Speech Service lets you create custom speech recognition models and deploy them to a speech-to-text endpoint that’s tailored to your application. With Custom Speech Service, you can customise the language model of the speech recogniser, so it learns the vocabulary of your application and the speaking style of your users. You can also customise the acoustic model of the speech recogniser to better match the application’s expected environment and user population.

Pricing details

Model adaptation is free.

                   Free           S2
Model deployments  1 model        $-/model/month
Model adaptation   3 hours/month  Unlimited
Accuracy tests     2 hours/month  2 hours free, then $-/hour
Scale out          N/A            $-/unit/day (each unit allows five concurrent requests)
No trace           N/A            $-/model/month
Request pricing    2 hours/month  2 hours free, then $-/hour

Support and SLA

  • Free billing and subscription management support are included.
  • Need tech support for preview services? Use our forums.
  • We guarantee that Cognitive Services running in the standard tier will be available at least 99.9 per cent of the time. No SLA is provided for the free trial. Read the SLA.
  • No SLA is provided during the preview period. Learn more.

FAQs

Custom Speech Service

  • Tier 1 can process up to four pieces of audio (i.e. four transcriptions) at the same time and still respond in real time. If a user sends more than four concurrent pieces of audio, each additional piece is rejected with an error code indicating too many concurrent recognitions. The same applies to Tier 2, where 12 simultaneous transcriptions can be processed; the Free tier offers one concurrent transcription. Audio is assumed to be uploaded in real time. If audio is uploaded faster, the request is still treated, for concurrency purposes, as ongoing until the duration of the audio has passed (even though the recognition result might be returned earlier). A client-side throttling sketch based on these rules follows this FAQ list.

    Note: If a higher level of concurrency is required, please contact us.

  • The language model is a probability distribution over sequences of words. It helps the system decide among sequences of words that sound similar, based on the likelihood of the word sequences themselves. For example, “recognize speech” and “wreck a nice beach” sound alike, but the first hypothesis is far more likely to occur and is therefore assigned a higher score by the language model. If you expect voice queries to your application to contain particular vocabulary items, such as product names or jargon that rarely occurs in typical speech, you can likely obtain improved performance by customising the language model. For example, if you were building an app to search MSDN by voice, terms like “object-oriented”, “namespace” or “dot net” are likely to appear more frequently than in typical voice applications. Customising the language model enables the system to learn this. A toy scoring sketch follows this list.

  • The acoustic model is a classifier that labels short fragments of audio as one of several phonemes (sound units) in each language, which can then be stitched together to form words. For example, the word “speech” comprises the four phonemes “s p iy ch”. These classifications are made on the order of 100 times per second. Customising the acoustic model enables the system to learn to recognise speech better in atypical environments. For example, for an app designed to be used by workers in a warehouse or factory, a customised acoustic model can more accurately recognise speech in the presence of the noises found in those environments. A frame-level classification sketch follows this list.

  • Short phrase recognition supports utterances up to 15 seconds long. When used with the Speech Client library, as data is sent to the server, the client receives multiple partial results and one final result containing multiple N-best choices.

  • Long dictation recognition supports utterances up to two minutes long. When used with the Speech Client library, as data is sent to the server, the client receives multiple partial results and multiple final results, based on where the server detects sentence pauses. A schematic sketch of this result stream follows this list.

  • For instance, if a customer uses the S1 tier to process one million transcriptions, they are charged the tier price ($-); the first 100,000 transcriptions are billed at $- per 1,000 transcriptions and the remaining 900,000 transcriptions at $- per 1,000 transcriptions. In effect, the customer is billed $- + 100,000 * ($- / 1,000) + 900,000 * ($- / 1,000) = $4,500. A worked billing sketch follows this list.

  • Please see the Custom Speech Service information on the Microsoft Cognitive Services web page and on the Custom Speech Service website, www.cris.ai.

  • Custom model deployment is the process of wrapping a custom model and exposing it as a service. The deployed custom model exposes an endpoint via which it can be accessed. Users can deploy as many models as they require.

  • Custom Speech Service enables users to adapt baseline models based on their own acoustic and language data. We call this process model customisation.

  • When a custom model is created, users have the option to upload test data to evaluate the newly created model. Users can test new custom models with as much data as they require and can, for example, execute unlimited accuracy tests.

  • When a custom model has been deployed, its URI can process one audio request at a time. For scenarios that send more than one audio request simultaneously to that URI, users can scale out by purchasing scale units. Each scale unit guarantees up to five concurrent audio requests at a cost of $200 per scale unit. For example, a user who expects to hit the endpoint with 23 simultaneous audio requests would need to purchase five scale units, guaranteeing up to 25 concurrent requests (see the scale-unit arithmetic in the billing sketch after this list).

  • Log management enables users to switch off logging for a deployed model. Users concerned about privacy can do so at a rate of $20 per model per month.

  • Request pricing refers to the cost of processing audio requests by the endpoint of a deployed custom model.
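
The concurrency rules above can be respected client-side. Below is a minimal sketch in Python; send_audio is a hypothetical stand-in for posting audio to the deployed endpoint, and the slot is held for at least the audio's real-time duration, matching the accounting rule described in the first FAQ item.

    import threading
    import time

    MAX_CONCURRENT = 4  # Tier 1 limit; use 12 for Tier 2 or 1 for the Free tier

    slots = threading.BoundedSemaphore(MAX_CONCURRENT)

    def send_audio(audio_bytes):
        # Hypothetical stand-in for posting audio to the deployed endpoint.
        return "transcription"

    def transcribe(audio_bytes, audio_duration_s):
        # Hold a concurrency slot for at least the audio's duration: the
        # service treats the request as ongoing until that much real time has
        # passed, even if the recognition result is returned earlier.
        with slots:
            start = time.monotonic()
            result = send_audio(audio_bytes)
            elapsed = time.monotonic() - start
            if elapsed < audio_duration_s:
                time.sleep(audio_duration_s - elapsed)
        return result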
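
The language model item above can be made concrete with a toy bigram model. The corpus and counts below are invented purely for illustration; a real language model is trained on far more data.

    # Toy bigram language model showing why "recognize speech" outscores
    # "wreck a nice beach": seen word pairs get higher smoothed probabilities.
    from collections import defaultdict

    corpus = "please recognize speech . systems recognize speech well .".split()

    bigrams = defaultdict(int)
    unigrams = defaultdict(int)
    for w1, w2 in zip(corpus, corpus[1:]):
        bigrams[(w1, w2)] += 1
    for w in corpus:
        unigrams[w] += 1

    def bigram_prob(w1, w2, vocab_size, alpha=1.0):
        # Add-alpha smoothing so unseen bigrams keep a small, non-zero probability.
        return (bigrams[(w1, w2)] + alpha) / (unigrams[w1] + alpha * vocab_size)

    def score(sentence):
        words = sentence.split()
        v = len(unigrams)
        p = 1.0
        for w1, w2 in zip(words, words[1:]):
            p *= bigram_prob(w1, w2, v)
        return p

    print(score("recognize speech"))    # higher: the bigram appears in the corpus
    print(score("wreck a nice beach"))  # lower: every bigram is unseen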
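
The acoustic model item can likewise be illustrated with shapes alone. A sketch assuming 100 classifications per second over a tiny phoneme set; the probabilities come from a random stand-in, not a trained model.

    import numpy as np

    PHONEMES = ["s", "p", "iy", "ch", "sil"]  # tiny illustrative set
    FRAMES_PER_SECOND = 100                   # ~100 classifications per second

    def classify_frames(duration_s, rng=np.random.default_rng(0)):
        # Stand-in for an acoustic model: one probability distribution over
        # phonemes per 10 ms frame. A real model is a trained classifier.
        n_frames = int(duration_s * FRAMES_PER_SECOND)
        logits = rng.normal(size=(n_frames, len(PHONEMES)))
        probs = np.exp(logits)
        return probs / probs.sum(axis=1, keepdims=True)

    # One second of audio yields 100 frame-level distributions; taking the best
    # phoneme per frame and collapsing repeats gives a phoneme sequence that a
    # decoder would stitch into words, e.g. "s p iy ch" -> "speech".
    frames = classify_frames(1.0)
    best = [PHONEMES[i] for i in frames.argmax(axis=1)]
    collapsed = [p for i, p in enumerate(best) if i == 0 or p != best[i - 1]]
    print(len(frames), collapsed[:10])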
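
The short phrase and long dictation items differ mainly in how many final results the client sees. The sketch below uses a hypothetical callback interface, not the actual Speech Client library API, to show the shape of the result stream.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Result:
        text: str
        is_final: bool
        n_best: List[str] = field(default_factory=list)  # final results may carry N-best choices

    def on_result(result):
        # Partial results arrive while audio streams in; final results arrive
        # once per utterance (short phrase) or at each detected sentence pause
        # (long dictation).
        kind = "FINAL" if result.is_final else "partial"
        print(f"[{kind}] {result.text}")

    # Short phrase mode: several partials, then exactly one final result
    # containing the N-best alternatives.
    on_result(Result("recog", False))
    on_result(Result("recognize spee", False))
    on_result(Result("recognize speech", True,
                     n_best=["recognize speech", "wreck a nice beach"]))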
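
The billing example and the scale-unit arithmetic above reduce to two small formulas, sketched below. The page elides the actual rates ($-), so the prices here are hypothetical placeholders chosen only so the S1 example reproduces the stated $4,500 total.

    import math

    def transcription_bill(total, tier_price, rate_first, rate_rest,
                           first_band=100_000, per=1_000):
        # Tier price, plus one rate for the first 100,000 transcriptions and a
        # lower rate for the remainder, both quoted per 1,000 transcriptions.
        in_band = min(total, first_band)
        overflow = max(total - first_band, 0)
        return tier_price + in_band * rate_first / per + overflow * rate_rest / per

    def scale_units_needed(concurrent_requests, per_unit=5):
        # Each scale unit guarantees up to five concurrent audio requests.
        return math.ceil(concurrent_requests / per_unit)

    # Hypothetical rates: $900 tier price, then $9 and $3 per 1,000 transcriptions.
    print(transcription_bill(1_000_000, tier_price=900, rate_first=9, rate_rest=3))
    # -> 4500.0, matching the worked example above

    print(scale_units_needed(23))  # -> 5 scale units, i.e. up to 25 concurrent requests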

General

  • Bing Search APIs are invoiced based on the number of transactions (also known as API calls). These plans are pay-as-you-go and incur no additional costs for complex queries or for returning more than 10 results (up to 50 results in most cases).

  • If you exceed the stated number of transactions per second (TPS), your usage will be throttled to stay within that limit. If your application needs a higher TPS than the ones listed on this page, please contact the Azure support team.

  • For billing purposes, a transaction is a successful Bing API call request (though there are caveats for DoS attacks). For logging and reporting purposes, such as for the Bing Statistics Add-in, it is any Bing API call, irrespective of whether it is successful.

  • You can change the tier of service at any time. Please make sure that you use the appropriate keys in your API calls. If you have an Enterprise Agreement with Microsoft, please work with your account executive.

Resources

Estimate your monthly costs for Azure services

Review Azure pricing frequently asked questions

Learn more about Cognitive Services

Review technical tutorials, videos and more resources
