Azure Cosmos DB pricing
Globally distributed, multi-model database service
Start your Azure free account and get a $200 credit for 30 days. Plus now get 12 months of free access to Azure Cosmos DB.
Azure Cosmos DB is Microsoft’s globally distributed multi-model database. Azure Cosmos DB was built from the ground up with global distribution and horizontal scale at its core. It offers turnkey global distribution across any number of Azure regions by transparently scaling and replicating your data wherever your users are. Elastically scale throughput and storage worldwide, and pay only for the throughput and storage you need. Azure Cosmos DB guarantees single-digit-millisecond latencies at the 99th percentile anywhere in the world, offers multiple well-defined consistency models to fine-tune performance and guarantees high availability with multi-homing capabilities – all backed by industry-leading service level agreements (SLAs).
Azure Cosmos DB is truly schema-agnostic – it automatically indexes all the data without requiring you to deal with schema and index management. It’s also multi-model, natively supporting document, key-value, graph and column-family data models. With Azure Cosmos DB, you can access your data using APIs of your choice, as DocumentDB SQL (document), MongoDB (document), Azure Table Storage (key-value) and Gremlin (graph) are all natively supported.
US government entities are eligible to purchase Azure Government services from a licensing solution provider with no upfront financial commitment or directly through a pay-as-you-go online subscription.
Important: The price in R$ is merely a reference; this is an international transaction and the final price is subject to exchange rates and the inclusion of IOF taxes. An eNF will not be issued.
Azure Germany is available to customers and partners doing business in the European Union (EU), European Free Trade Association (EFTA) and United Kingdom (UK), and provides data residency in Germany with additional levels of control and data protection. You can also sign up for a free Azure Germany trial.
At any scale, you can store data and provision throughput capacity. Each container is billed hourly based on the amount of data stored (in GB) and the throughput reserved, in units of 100 RUs/second, with a minimum of 400 RUs/second. Unlimited containers have a minimum of 100 RUs/second per partition.
During public preview, there’s no additional charge for using the Gremlin API.
|Meter|Price|
|---|---|
|SSD Storage (per GB)|$-/month|
|Reserved RUs/second (per 100 RUs, 400 RUs minimum)|$-|
For high-throughput and high-storage workloads, you can create unlimited storage containers by defining a partition key at container creation. A partitioned container will seamlessly scale out as the quantity of stored data grows and reserved throughput increases.
Azure Cosmos DB Emulator (free)
Download the free Azure Cosmos DB Emulator to develop and test applications using Azure Cosmos DB from your local machine. Once you’re satisfied with how your application works, you can deploy it by just changing your configuration to point to an Azure Cosmos DB instance.
Planet scale with geo-replication
Azure Cosmos DB containers can be globally distributed, helping you easily build apps at planet scale: all your data is automatically replicated to the regions you specify. Your app continues to work against one logical endpoint while your data is automatically served from the region closest to your users, with an intuitive programming model for data consistency and 99.99% availability. Globally distributed containers are billed based on the storage consumed in each region, and on the throughput reserved for each Azure Cosmos DB container multiplied by the number of regions associated with the Azure Cosmos DB database account. Standard data transfer rates apply to replication traffic between regions. As an example, say that you have a database account spanning three Azure regions and two containers provisioned with 1 million RUs and 2 million RUs respectively. The total RUs provisioned for the first container will be 3 million RUs (1 million RUs x 3 regions), and for the second container 6 million RUs (2 million RUs x 3 regions).
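The multiplication in the example above can be sketched as a quick calculation (the RU figures come from the example; the helper function name is illustrative, not part of any SDK):

```python
# Total provisioned RU/sec for a geo-replicated container is the
# per-container throughput multiplied by the number of regions.
def replicated_rus(provisioned_rus: int, regions: int) -> int:
    return provisioned_rus * regions

regions = 3
container_a = replicated_rus(1_000_000, regions)  # 3,000,000 RU/sec
container_b = replicated_rus(2_000_000, regions)  # 6,000,000 RU/sec
print(container_a, container_b)
```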
High throughput and low-latency queries
With Azure Cosmos DB, data is synchronously indexed as you write it, even at a sustained volume, so it can serve consistent SQL queries using a write-optimised, latch-free database engine designed for solid-state drives (SSDs) and low-latency access. Read and write requests are always served from your local region, while data is distributed globally. You can further optimise performance by customising the automatic indexing behaviour.
Collections with pre-defined performance and size
Until 1 October 2017, current customers on S1, S2 or S3-sized collections can continue using them with a pre-defined 10 GB of storage and a throughput that varies with the instance size: an S1 instance provides 250 RU/sec. and is billed at $-/hour, an S2 instance provides 1,000 RU/sec. and is billed at $-/hour, and an S3 instance provides 2,500 RU/sec. and is billed at $-/hour. To reconfigure throughput for these collections, see Changing performance levels using the Azure Portal. To take advantage of unlimited storage containers, convert your previously created S1, S2 or S3 collections to the limitless throughput and storage scale described above, as explained in Partitioning and scaling in Azure Cosmos DB.
Support and SLA
- We provide technical support for all Azure services released to General Availability, including Azure Cosmos DB, through Azure Support, starting at $29/month. Billing and subscription management support is provided at no cost.
- SLA – We guarantee that at least 99.99% of the time we will successfully process requests to perform operations against Azure Cosmos DB resources. To learn more about our SLA, please visit the SLA page.
A request unit (RU) is the measure of throughput in Azure Cosmos DB. 1 RU corresponds to the throughput of a GET of a 1 KB item. Every operation in Azure Cosmos DB, including reads, writes, SQL queries and stored procedure executions, has a deterministic request unit value based on the throughput required to complete the operation. Instead of thinking about CPU, IO and memory, and how they each affect your application throughput, you can think in terms of a single Request Unit measure.
A Request Unit is the same unit of work regardless of how it is consumed – whether against throughput provisioned in RUs per second or against a one-minute bucket.
For more information about Request Units and for help determining your container needs, please go here.
You’re billed with a flat, predictable hourly rate based on the overall capacity (RU/sec) that has been provisioned under your Azure Cosmos DB account during that period.
If you create an account in East US 2 with two single-partition containers at 500 RU/sec. and 700 RU/sec. respectively, you would have a total provisioned capacity of 1,200 RU/sec. (12 units of 100 RU/sec.). You would thus be charged 12 x $- = $-/hour.
If your throughput needs changed and you increased each partition’s capacity by 500 RU/sec while also creating a new unlimited storage container that has a rate of 20,000 RU/sec., your overall provisioned capacity would be 22,200 RU/sec. (1,000 RU/sec. + 1,200 RU/sec. + 20,000 RU/sec.). Your bill would then change to: $- x 222 = $-/hour.
In a month of 720 hours, if 500 hours are provisioned at 1,200 RU/sec and 220 hours are provisioned at 22,200 RU/sec, your monthly bill will show 500 x $-/hour + 220 x $-/hour = $- for the month.
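Since the page shows placeholder prices ($-), the unit rate below is a made-up stand-in; the structure of the calculation, though, follows the examples above: divide total RU/sec by 100 to get billable units, multiply by the unit rate, and sum across the hours each configuration was in effect.

```python
# Hypothetical unit rate: the real per-100-RU/sec hourly price is
# shown as $- on this page and varies by region.
RATE_PER_100_RUS_HOUR = 0.008  # assumption, not the actual price

def hourly_charge(total_rus: float) -> float:
    units = total_rus / 100  # billed in units of 100 RU/sec
    return units * RATE_PER_100_RUS_HOUR

# 500 hours at 1,200 RU/sec plus 220 hours at 22,200 RU/sec
monthly = 500 * hourly_charge(1_200) + 220 * hourly_charge(22_200)
```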
Storage capacity is billed in units of the maximum hourly amount of data stored, in GB, over a monthly period. For example, if you utilised 100 GB of storage for half of the month and 50 GB for the second half of the month, you would be billed for an equivalent of 75 GB of storage during that month.
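A sketch of that weighted average over a 720-hour month, using the figures from the example:

```python
# Storage billing averages the hourly GB figures over the month.
hours_in_month = 720
hourly_gb = [100] * 360 + [50] * 360  # half the month at each level
billed_gb = sum(hourly_gb) / hours_in_month
print(billed_gb)  # 75.0
```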
You’re billed the flat rate for each hour the container exists, regardless of usage, even if the container is active for less than an hour. For example, if you create a container and delete it five minutes later, your bill will reflect a charge for one unit hour.
If you define your own performance for a container and you upgrade at 9:30 AM from 400 RUs to 1,000 RUs and downgrade at 10:45 AM back to 400 RUs, you will be charged for two hours of 1,000 RUs.
If you select a pre-defined collection performance level, and you upgrade at 9:30 AM from an S1 collection to an S3 collection, and downgrade at 10:45 AM back to S1, you will be charged for two hours of S3.
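Both examples follow the same rule: you are billed for every clock hour the higher setting overlaps, even partially. A minimal sketch of that hour counting (the function is illustrative, not part of any SDK):

```python
# Count the distinct clock hours an interval touches.
# Times are minutes since midnight; end must be after start.
def billed_hours(start_min: int, end_min: int) -> int:
    first_hour = start_min // 60
    # An end that falls exactly on the hour does not start a new billed hour.
    last_hour = (end_min - 1) // 60
    return last_hour - first_hour + 1

# Upgrade at 9:30, downgrade at 10:45 -> 2 billed hours at the higher rate
print(billed_hours(9 * 60 + 30, 10 * 60 + 45))  # 2
```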
You can scale up or scale down the number of Request Units for each container within your Azure Cosmos DB account by using the Azure Portal, one of the supported SDKs or the REST API.
To move a collection from the S1, S2 or S3 performance tier to a single-partition container with the same storage size, see Changing performance levels using the Azure Portal.
To move an existing single collection to an unlimited storage container, see Partitioning and Scaling in Azure Cosmos DB.
At the entry point, a single-partition container offers more throughput than S1 (400 RU/sec. versus 250 RU/sec.) at a lower price, and you can scale up to 10,000 RU/sec. versus 2,500 RU/sec. with S3. The great thing about the new provisioning model is that you can scale in increments of 100 RU/sec., so you don’t need to pay for S3 at 2,500 RU/sec. when you only need 1,200 RU/sec.
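The 100 RU/sec granularity described above can be sketched as rounding the needed throughput up to the next increment, with the 400 RU/sec single-partition minimum applied (the helper is illustrative):

```python
import math

MIN_RUS = 400    # single-partition container minimum
INCREMENT = 100  # provisioning granularity

def provisioned_rus(needed: int) -> int:
    # Round up to the next 100 RU/sec increment, never below the minimum.
    return max(MIN_RUS, math.ceil(needed / INCREMENT) * INCREMENT)

print(provisioned_rus(1_150))  # 1200 - no need to jump to S3's 2,500
print(provisioned_rus(250))    # 400 - the entry-point minimum
```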