Azure Cosmos DB pricing
Globally distributed, multi-model database service
Start your Azure free account and get a $200 credit for 30 days. Plus now get 12 months of free access to Azure Cosmos DB.
Azure Cosmos DB is Microsoft’s globally distributed multi-model database. Azure Cosmos DB was built from the ground up with global distribution and horizontal scale at its core. It offers turnkey global distribution across any number of Azure regions by transparently scaling and replicating your data wherever your users are. Elastically scale throughput and storage worldwide and pay only for the throughput and storage you need. Azure Cosmos DB guarantees single-digit-millisecond latencies at the 99th percentile anywhere in the world, offers multiple well-defined consistency models to fine-tune performance and guarantees high availability with multi-homing capabilities—all backed by industry leading service level agreements (SLAs).
Azure Cosmos DB is truly schema-agnostic—it automatically indexes all the data without requiring you to deal with schema and index management. It is also multi-model, natively supporting document, key-value, graph and column-family data models. With Azure Cosmos DB, you can access your data using APIs of your choice, as DocumentDB SQL (document), MongoDB (document), Azure Table Storage (key-value) and Gremlin (graph) are all natively supported.
US government entities are eligible to purchase Azure Government services from a licensing solution provider with no upfront financial commitment or directly through a pay-as-you-go online subscription.
Azure Germany is available to customers and partners doing business in the European Union (EU), the European Free Trade Association (EFTA), and in the United Kingdom (UK). It provides data residency in Germany with additional levels of control and data protection. You can also sign up for a free Azure Germany trial.
With Azure Cosmos DB, you pay only for the reserved throughput you provision and the data you store in containers (a collection of documents, a table, or a graph). Reserved throughput, billed in Request Units per second (RU/s), lets you read from and write to containers. Each container is billed hourly for the throughput provisioned, in units of 100 RU/s with a minimum of 400 RU/s, plus the data stored (in GB). Unlimited containers have a minimum of 100 RU/s per partition.
During public preview, there's no additional charge for using the Gremlin API.
| SSD Storage | $- per GB/month |
| Reserved throughput (per 100 RU/s, 400 RU/s minimum) | $- |
For high-throughput and high-storage workloads you can create unlimited storage containers by defining a partition key at container creation. A partitioned container will seamlessly scale out as the quantity of stored data grows and reserved throughput increases.
Azure Cosmos DB Emulator (free)
Download the free Azure Cosmos DB Emulator to develop and test applications using Azure Cosmos DB from your local machine. Once you are satisfied with how your application works, you can deploy it by just changing your configuration to point to an Azure Cosmos DB instance.
Planet scale with geo-replication
Azure Cosmos DB containers can be globally distributed to help you easily build apps with planet scale, which means all your data is automatically replicated to the regions you specify. Your app continues to work with a logical endpoint, while your data is automatically served from the region closest to your users with an intuitive programming model for data consistency and 99.99% availability. Globally distributed containers are billed based on the storage consumed in each region and throughput reserved for each Azure Cosmos DB container times the number of regions associated with an Azure Cosmos DB database account. Standard data transfer rates apply for replication data transfer between regions.
High throughput and low latency queries
With Azure Cosmos DB, data is synchronously indexed as you write it, even at sustained volume, so it can serve consistent SQL queries through a write-optimised, latch-free database engine designed for solid-state drives (SSDs) and low-latency access. Read and write requests are always served from your local region while your data is distributed globally. You can further optimise performance by customising automatic indexing behaviour.
Collections with pre-defined performance and size
Pre-defined collections are not available to new customers. Current customers with S1-, S2- or S3-sized collections can continue using them; each provides a pre-defined 10 GB of storage with throughput that varies by instance size: an S1 instance provides 250 RU/s and is billed at $-/hour, an S2 instance provides 1,000 RU/s at $-/hour, and an S3 instance provides 2,500 RU/s at $-/hour. To reconfigure throughput for these collections, see Changing performance levels using the Azure Portal. To take advantage of unlimited storage containers, convert your previously created S1, S2 or S3 collections to the limitless throughput and storage scale described above, as explained in Partitioning and scaling in Azure Cosmos DB.
Support & SLA
- We provide technical support for all Azure services released to General Availability, including Azure Cosmos DB, through Azure Support, starting at $29/month. Billing and subscription management support is provided for free.
- SLA—We guarantee at least 99.99% of the time we will successfully process requests to perform operations against Azure Cosmos DB Resources. To learn more about our SLA, please visit the SLA page.
A Request Unit (RU) is the measure of throughput in Azure Cosmos DB. 1 RU corresponds to the throughput of a GET of a 1 KB item. Every operation in Azure Cosmos DB, including reads, writes, SQL queries and stored procedure executions, has a deterministic Request Unit value based on the throughput required to complete it. Instead of reasoning about CPU, I/O and memory and how each affects your application throughput, you can think in terms of a single Request Unit measure.
A Request Unit consumed from provisioned RU/s is the same as one consumed from a one-minute bucket.
For more information about Request Units and for help determining your container needs, please go here.
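As a rough sketch of how the provisioning rules above combine, the function below sums the RU cost of an operation mix and rounds up to the 100 RU/s billing increment with the 400 RU/s minimum. The per-operation RU costs in the example are placeholders, not published figures.

```python
import math

def provisioned_rus(per_op_rus, ops_per_sec):
    """Estimate the RU/s to provision for a mix of operations.

    per_op_rus: RU cost of each operation type (deterministic per the docs;
    a GET of a 1 KB item costs 1 RU). The write cost of 5 RU below is a
    placeholder for illustration only.
    """
    needed = sum(ru * rate for ru, rate in zip(per_op_rus, ops_per_sec))
    # Throughput is provisioned in units of 100 RU/s, 400 RU/s minimum.
    return max(400, math.ceil(needed / 100) * 100)

# 300 reads/s at 1 RU each + 50 writes/s at an assumed 5 RU each = 550 RU/s
print(provisioned_rus([1, 5], [300, 50]))  # -> 600 (rounded up to 100s)
```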
You are billed with a flat, predictable hourly rate based on the overall capacity (RU/sec) that has been provisioned under your Azure Cosmos DB account during that period.
If you create an account in East US 2 with two single-partition containers provisioned at 500 RU/s and 700 RU/s respectively, your total provisioned capacity is 1,200 RU/s. You would therefore be charged 12 x $- = $-/hour.
If your throughput needs change and you increase each container's capacity by 500 RU/s while also creating a new unlimited storage container at 20,000 RU/s, your overall provisioned capacity becomes 22,200 RU/s (1,000 RU/s + 1,200 RU/s + 20,000 RU/s). Your bill then changes to 222 x $- = $-/hour.
In a 720-hour month, if 500 hours are provisioned at 1,200 RU/s and 220 hours at 22,200 RU/s, your monthly bill shows: 500 x $-/hour + 220 x $-/hour = $- for the month.
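The arithmetic behind that example can be sketched as follows. The hourly rate per 100 RU/s is elided on this page ("$-"), so the figure below is a hypothetical placeholder, not a real price:

```python
RATE_PER_100RU_HOUR = 0.008  # hypothetical rate; the page shows the real figure as $-

def hourly_charge(total_ru_per_sec):
    # Billing is a flat rate per unit of 100 RU/s provisioned, per hour.
    units = total_ru_per_sec / 100
    return units * RATE_PER_100RU_HOUR

# Two single-partition containers: 500 + 700 = 1,200 RU/s -> 12 units/hour
base = hourly_charge(500 + 700)
# After adding 500 RU/s to each and a new 20,000 RU/s container -> 222 units/hour
scaled = hourly_charge(1000 + 1200 + 20000)
# A 720-hour month split between the two configurations:
monthly = 500 * base + 220 * scaled
print(round(monthly, 2))  # 438.72 at the placeholder rate
```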
When you choose to span containers across geographic regions, you are billed for the throughput and storage of each container in every region, plus the data transfer between regions. For example, assume you have a container in West US provisioned with 10K RU/s of throughput that stores 1 TB of data this month, and that you add three regions (East US, North Europe and East Asia), each with the same storage and throughput. Assuming a 31-day month, your total monthly bill would be:
| Item | Usage (month) | Rate | Monthly cost |
| --- | --- | --- | --- |
| Throughput for container in West US | 10K RU/s * 24 hours * 31 days | $- per 100 RU/s per hour | $- |
| Throughput for 3 additional regions (East US, North Europe, East Asia) | 3 * 10K RU/s * 24 hours * 31 days | $- per 100 RU/s per hour | $- |
| Storage for container in West US | 1 TB | $- per GB | $- |
| Storage for 3 additional regions (East US, North Europe, East Asia) | 3 * 1 TB | $- per GB | $- |
| Total | | | $- |
Let’s also assume you egress 100 GB of data every month from the container in West US to replicate it into East US, North Europe and East Asia. You are billed for this egress at standard data transfer rates.
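A sketch of the multi-region formula described above: throughput and storage are each multiplied by the number of regions. The two rates are hypothetical placeholders, since this page elides them as "$-":

```python
RU_RATE = 0.008  # hypothetical $ per 100 RU/s per hour (shown as $- above)
GB_RATE = 0.25   # hypothetical $ per GB per month (shown as $- above)

def monthly_bill(ru_per_sec, storage_gb, regions, hours=744):
    """Throughput and storage are both billed per region; 744 hours = 31 days.
    Cross-region egress is billed separately at data transfer rates."""
    throughput = (ru_per_sec / 100) * RU_RATE * hours * regions
    storage = storage_gb * GB_RATE * regions
    return throughput + storage

# 10K RU/s and 1 TB (1,024 GB) in West US plus 3 more regions (4 in total):
print(round(monthly_bill(10_000, 1024, 4), 2))
```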
Storage capacity is billed in units of the maximum hourly amount of data stored, in GB, over a monthly period. For example, if you utilised 100 GB of storage for half of the month and 50 GB for the second half of the month, you would be billed for an equivalent of 75 GB of storage during that month.
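The storage averaging rule above can be expressed directly, reproducing the 100 GB / 50 GB example from the text (assuming a 720-hour month):

```python
def billed_storage_gb(hourly_max_gb):
    """Billed GB = average of the maximum data stored in each hour of the month."""
    return sum(hourly_max_gb) / len(hourly_max_gb)

# 100 GB for the first half of a 720-hour month, 50 GB for the second half:
month = [100] * 360 + [50] * 360
print(billed_storage_gb(month))  # 75.0 GB billed for the month
```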
You are billed the flat rate for each hour the container exists, regardless of usage or if the container is active for less than an hour. For example, if you create a container and delete it 5 minutes later, your bill will reflect a charge for 1 unit hour.
If you define your own performance for a container and you upgrade at 9:30 AM from 400 RUs to 1,000 RUs and downgrade at 10:45 AM back to 400 RUs, you will be charged for two hours of 1,000 RUs.
If you select a pre-defined collection performance level, and you upgrade at 9:30 AM from an S1 collection to an S3 collection, and downgrade at 10:45 AM back to S1, you will be charged for two hours of S3.
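The behaviour in the three examples above (each clock hour billed at the highest level in effect during it, even for partial hours) can be sketched like this:

```python
def billed_per_hour(events, start_hour, end_hour):
    """events: sorted list of (time_in_hours, ru_level), with the first event
    at or before start_hour. Each clock hour [h, h+1) is billed at the highest
    RU level in effect at any point during that hour."""
    bills = []
    for h in range(start_hour, end_hour):
        peak = 0
        for i, (t, ru) in enumerate(events):
            nxt = events[i + 1][0] if i + 1 < len(events) else float("inf")
            # Does the interval [t, nxt) overlap the clock hour [h, h+1)?
            if t < h + 1 and nxt > h:
                peak = max(peak, ru)
        bills.append(peak)
    return bills

# 400 RU/s initially; upgrade to 1,000 at 9:30, back down to 400 at 10:45:
events = [(0, 400), (9.5, 1000), (10.75, 400)]
print(billed_per_hour(events, 9, 11))  # [1000, 1000] -> two hours at 1,000 RU
```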
You can scale up or scale down the number of Request Units for each container within your Azure Cosmos DB account by using the Azure Portal, one of the supported SDKs, or the REST API.
To move a collection of S1, S2 or S3 performance tier to a single partition with the same storage size, see Changing performance levels using the Azure Portal.
To move an existing single collection to an unlimited storage container, see Partitioning and Scaling in Azure Cosmos DB.
At the entry point, a single-partition container offers more throughput than S1 (400 RU/s versus 250 RU/s) at a lower price, and it scales up to 10,000 RU/s versus 2,500 RU/s for S3. The advantage of the new provisioning model is that you can scale in increments of 100 RU/s, so you do not need to pay for an S3 at 2,500 RU/s when you only need 1,200 RU/s.
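Since the dollar rates are elided on this page, the comparison is easiest to see in billed units of 100 RU/s; the 1,200 RU/s figure comes from the example above:

```python
# RU/s can be provisioned in 100 RU/s increments, so a 1,200 RU/s workload
# no longer has to round up to an S3 collection's fixed 2,500 RU/s.
needed_units = 1200 // 100  # 12 units of 100 RU/s
s3_units = 2500 // 100      # 25 units' worth of throughput in an S3
savings = 1 - needed_units / s3_units
print(f"{savings:.0%}")  # 52% fewer billed throughput units
```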