Azure Cosmos DB is Microsoft’s globally distributed, multi-model database, built from the ground up with global distribution and horizontal scale at its core. It offers turnkey global distribution across any number of Azure regions by transparently scaling and replicating your data wherever your users are. You can elastically scale throughput and storage worldwide, and pay only for the throughput and storage you need. Azure Cosmos DB guarantees single-digit-millisecond latencies at the 99th percentile anywhere in the world, offers multiple well-defined consistency models to fine-tune performance, and guarantees high availability with multi-homing capabilities, all backed by industry-leading service level agreements (SLAs).
Azure Cosmos DB is truly schema-agnostic; it automatically indexes all your data without requiring you to deal with schema and index management. It is also multi-model, natively supporting document, key-value, graph and column-family data models. With Azure Cosmos DB, you can access your data using the API of your choice: DocumentDB SQL (document), MongoDB (document), Azure Table Storage (key-value) and Gremlin (graph) are all natively supported.
Azure Government is available to US government entities, which can purchase physically and network-isolated instances of Azure Government from a Licensed Azure Government Service Provider or Partner with no up-front financial commitment or fee. Alternatively, you can sign up for a free Azure Government Trial.
Important: The price in R$ is for reference only; this is an international transaction and the final price is subject to exchange rates and the inclusion of IOF taxes. An eNF will not be issued.
Azure Germany is available to customers and partners in the European Union (EU) and European Free Trade Association (EFTA) and provides data residency in Germany with additional levels of control and data protection with a modest price uplift over global cloud offerings (% varies per service).
At any scale, you can store data and provision throughput capacity. Each collection is billed hourly based on the amount of data stored (in GBs) and throughput reserved in units of 100 RUs/second, with a minimum of 400 RUs/second.
During public preview, there is no additional charge for using the Gremlin API.
| Meter | Price |
| --- | --- |
| SSD storage (per GB) | $- per GB/month |
| Reserved RUs/second (per 100 RUs, 400-RU minimum) | $- |
Add-on provisioning: request units per minute (preview)
You can now complement your provisioned throughput with an opt-in provisioned request units per minute feature. Provisioned request units per minute lets you consume a bucket of requests on a per minute basis (UTC). Request units per minute are capped at 1,000 request units per minute for every 100 provisioned throughput units per second. The price below reflects a 50% preview discount.
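The cap described above is straightforward to compute. A minimal sketch in Python, using only the figures stated in the text (1,000 RU/minute per 100 provisioned RU/sec):

```python
# Illustrative only: the per-minute bucket is capped at 1,000 RU/minute
# for every 100 RU/sec of provisioned throughput (per the text above).

def rum_cap(provisioned_ru_per_sec):
    """Maximum add-on request units available per UTC minute window."""
    return (provisioned_ru_per_sec // 100) * 1000

print(rum_cap(400))  # 4000 RU/minute for a 400 RU/sec collection
```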
| Meter | Price |
| --- | --- |
| Reserved request units/minute (per 1,000 RUs) | $- |
For high-throughput and high-storage workloads, you can create partitioned collections by defining a partition key at collection creation. A partitioned collection will seamlessly scale out as the quantity of stored data grows and reserved throughput increases.
Azure Cosmos DB Emulator (Free)
Download the free Azure Cosmos DB Emulator to develop and test applications using Azure Cosmos DB from your local machine. Once you’re satisfied with how your application works, you can deploy it by just changing your configuration to point to an Azure Cosmos DB instance.
Planet scale with geo-replication
Azure Cosmos DB collections can be globally distributed to help you easily build planet-scale apps: all your data is automatically replicated to the regions you specify. Your app continues to work against one logical endpoint, while your data is automatically served from the region closest to your users, with an intuitive programming model for data consistency and 99.99% availability. Globally distributed collections are billed based on the storage consumed in each region and the throughput reserved for each collection multiplied by the number of regions associated with the Azure Cosmos DB database account. Standard data transfer rates apply for replication traffic between regions. As an example, say you have a database account spanning three Azure regions and two collections provisioned with 1 million RUs and 2 million RUs respectively. The total RUs billed for the first collection will be 3 million RUs (1 million RUs x 3 regions) and for the second, 6 million RUs (2 million RUs x 3 regions).
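The multi-region billing multiplication in the example above can be sketched in a few lines:

```python
# Sketch of the billing math described above: each collection's
# provisioned RUs are multiplied by the number of regions in the
# database account.

regions = 3
collections_ru = [1_000_000, 2_000_000]  # provisioned RUs per collection

billed = [ru * regions for ru in collections_ru]
print(billed)       # [3000000, 6000000]
print(sum(billed))  # 9000000 total RUs billed across the account
```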
High throughput and low-latency queries
With Azure Cosmos DB, you can write a sustained volume of data that is synchronously indexed to serve consistent SQL queries, using a write-optimised, latch-free database engine designed for solid-state drives (SSDs) and low-latency access. Read and write requests are always served from your local region, while data is distributed globally. You can further optimise performance by customising automatic index behaviour.
Collections with pre-defined performance and size
Until 1 August 2017, existing customers on S1, S2 or S3 collections can continue using them with a pre-defined 10 GB of storage and throughput that varies by size: an S1 collection provides 250 RU/sec. and is billed at $-/hour; an S2 provides 1,000 RU/sec. at $-/hour; an S3 provides 2,500 RU/sec. at $-/hour. To reconfigure throughput for these collections, see Changing performance levels using the Azure Portal. To take advantage of partitioned collections, convert your existing S1, S2 or S3 collections to the limitless throughput and storage scale described above, as explained in Partitioning and scaling in Azure Cosmos DB.
Support and SLA
- We provide technical support for all Azure services released to General Availability, including Azure Cosmos DB, through Azure Support, starting at $29/month. Billing and subscription management support is provided at no cost.
- SLA: We guarantee that at least 99.99% of the time we will successfully process requests to perform operations against Azure Cosmos DB Resources. To learn more about our SLA, please visit the SLA page.
- What is a request unit?
A request unit (RU) is the measure of throughput in Azure Cosmos DB. 1 RU corresponds to the throughput of a GET of a 1 KB document. Every operation in Azure Cosmos DB, including reads, writes, SQL queries and stored procedure executions, has a deterministic request unit value based on the throughput required to complete the operation. Instead of thinking about CPU, IO and memory, and how they each affect your application throughput, you can think in terms of a single request unit measure.
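A back-of-the-envelope capacity estimate follows directly from this model. In the sketch below, the 1 RU cost of a 1 KB read comes from the definition above; the per-write RU cost is a made-up illustration, not a published figure:

```python
# Hypothetical sizing estimate. Only the 1-RU cost of a 1 KB read is
# from the text above; the write cost is an assumed illustration.

ops_per_sec = {"reads_1kb": 500, "writes_1kb": 100}
ru_per_op = {"reads_1kb": 1, "writes_1kb": 5}  # write cost is assumed

required_ru = sum(ops_per_sec[k] * ru_per_op[k] for k in ops_per_sec)
print(required_ru)  # 1000 RU/sec to provision for this workload
```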
A request unit consumed through provisioned RUs per second is the same as one consumed from a one-minute bucket.
For more information about request units, and for help determining your collection’s needs, see the request units documentation.
- How does request unit usage appear on my bill?
You are billed with a flat, predictable hourly rate based on the overall capacity (RU/sec) that has been provisioned under your Azure Cosmos DB account during that period.
If you create an account in East US 2 with two single-partition collections at 500 RU/sec. and 700 RU/sec. respectively, your total provisioned capacity would be 1,200 RU/sec. You would therefore be charged 12 x $- = $-/hour (12 units of 100 RU/sec.).
If your throughput needs changed and you increased each collection’s capacity by 500 RU/sec. while also creating a new partitioned collection with 20,000 RU/sec., your overall provisioned capacity would be 22,200 RU/sec. (1,000 RU/sec. + 1,200 RU/sec. + 20,000 RU/sec.). Your bill would then change to 222 x $- = $-/hour.
In a 720-hour month, if 500 hours are provisioned at 1,200 RU/sec. and 220 hours at 22,200 RU/sec., your monthly bill will show: 500 x $-/hour + 220 x $-/hour = $-.
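The month's arithmetic can be reproduced with a hypothetical unit price, since the real per-100-RU/sec. hourly rate is redacted as "$-" above:

```python
# Reproducing the monthly-bill arithmetic above with a hypothetical
# unit price (the actual rate is redacted as "$-" in the text).

unit_price_per_hour = 0.008  # assumed $ per 100 RU/sec per hour

def hourly_charge(ru_per_sec):
    """Hourly charge for a given provisioned capacity."""
    return (ru_per_sec / 100) * unit_price_per_hour

# 500 hours at 1,200 RU/sec plus 220 hours at 22,200 RU/sec:
monthly = 500 * hourly_charge(1200) + 220 * hourly_charge(22200)
print(round(monthly, 2))  # 438.72 at the assumed rate
```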
- How does request units per minute work?
You can now provision add-on request units per minute in addition to regular provisioned throughput, and consume these add-on throughput units within a UTC minute window. If request units per minute is enabled, then for every 100 RUs/second provisioned in your collection, you can consume an additional 1,000 request units per minute.
For example, provisioning 400 RUs/second allows you to consume an add-on of 4,000 request units per minute. Say that at 12:00:00 PM your application needs more than 400 RUs/second. From 12:00:01 PM to 12:01:00 PM, it can consume up to 4,000 additional request units while continuing to consume its provisioned throughput of 400 RU/s. If it consumes all 4,000 request units before 12:01:00 PM, it cannot consume any more additional request units until the next UTC minute (starting at 12:01:01 PM). If it does not consume all 4,000 in a given minute window, the left-over request units do not roll over to the next minute window.
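The bucket semantics just described can be modelled as a toy simulation, assuming only what the text states (refill to the cap at each UTC minute boundary, no rollover):

```python
# Toy model of the per-minute bucket described above: the bucket
# refills to its cap at each new UTC minute and unused units do not
# roll over.

class MinuteBucket:
    def __init__(self, provisioned_ru_per_sec):
        self.cap = (provisioned_ru_per_sec // 100) * 1000
        self.remaining = self.cap
        self.minute = None

    def try_consume(self, ru, minute):
        if minute != self.minute:      # new UTC minute: reset, no rollover
            self.minute = minute
            self.remaining = self.cap
        if ru <= self.remaining:
            self.remaining -= ru
            return True
        return False                   # throttled until the next minute

bucket = MinuteBucket(400)                 # 4,000 add-on RUs per minute
print(bucket.try_consume(4000, minute=0))  # True: bucket drained
print(bucket.try_consume(1, minute=0))     # False: wait for next minute
print(bucket.try_consume(1, minute=1))     # True: bucket refilled
```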
For more information, see our documentation page about request units per minute.
- If I specify my own performance for a collection how is storage billed?
Storage capacity is billed in units of the maximum hourly amount of data stored, in GB, over a monthly period. For example, if you utilised 100 GB of storage for half of the month and 50 GB for the second half of the month, you would be billed for an equivalent of 75 GB of storage during that month.
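As arithmetic, the example above works out to the average of the hourly storage figures over the month:

```python
# The storage-billing example above: metered hourly, the monthly
# charge equals the average of the hourly GB figures.

hours_in_month = 720
hourly_gb = [100] * 360 + [50] * 360  # 100 GB first half, 50 GB second half

billed_gb = sum(hourly_gb) / hours_in_month
print(billed_gb)  # 75.0 GB billed for the month
```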
- What if my collection is active for less than an hour?
You are billed the flat rate for each hour the collection exists, regardless of usage and even if the collection is active for less than an hour. For example, if you create a collection and delete it 5 minutes later, your bill will reflect a charge for 1 unit hour.
- When does the billing rate change after I upgrade a collection?
If you define your own performance for a collection and you upgrade at 9:30 a.m. from 400 RUs to 1,000 RUs and downgrade at 10:45 a.m. back to 400 RUs, you will be charged for two hours of 1,000 RUs.
If you select a pre-defined collection performance level and you upgrade at 9:30 a.m. from an S1 collection to an S3 collection, and downgrade at 10:45 a.m. back to S1, you will be charged for two hours of S3.
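The rule in both examples above is that each clock hour is billed at the highest throughput (or tier) in effect at any point during that hour. A minimal sketch of that rule, with change times as hour-of-day floats:

```python
# Sketch of the per-hour billing rule above: each clock hour is billed
# at the peak throughput in effect at any point during that hour.
# `changes` is a list of (hour_as_float, ru) events sorted by time.

def peak_ru_in_hour(changes, h):
    in_effect = [r for t, r in changes if t <= h][-1]   # RU at hour start
    during = [r for t, r in changes if h <= t < h + 1]  # changes in-hour
    return max([in_effect] + during)

# 400 RU from 9:00, upgraded to 1,000 at 9:30, back to 400 at 10:45:
events = [(9.0, 400), (9.5, 1000), (10.75, 400)]
print([peak_ru_in_hour(events, h) for h in (9, 10)])  # [1000, 1000]
```

This matches the text: both the 9:00 and 10:00 hours are billed at 1,000 RUs.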
- How do I scale throughput per collection up or down?
You can scale up or scale down the number of request units for each collection within your Azure Cosmos DB account by using the Azure Portal, one of the supported SDKs or the REST API.
- How can I move from an S1/S2/S3 collection to a single partition?
To move a collection of S1, S2 or S3 performance tier to a single partition with the same storage size, see Changing performance levels using the Azure Portal.
To move an existing single-partition collection to a partitioned collection, see Partitioning and scaling in Azure Cosmos DB.
- What do I get when leveraging single partition versus S1/S2/S3 collection?
At the entry level, a single-partition collection offers more throughput than S1 (400 RU/sec. versus 250 RU/sec.) at a lower price, and you can scale up to 10,000 RU/sec. versus 2,500 RU/sec. with S3. The great thing about the new provisioning model is that you can scale in increments of 100 RU/sec., so you don’t need to pay for S3 at 2,500 RU/sec. when you only need 1,200 RU/sec.