What is caching?
Developers and IT professionals use caching to save and access key-value data in temporary memory faster, and with less effort, than they can with data stored in conventional data storage. Caches are useful in multiple scenarios with multiple technologies, such as computer caching with random access memory (RAM), network caching on a content delivery network, a web cache for web multimedia data, or a cloud cache to help make cloud apps more resilient. Developers often design applications to cache processed data and then repurpose it to serve requests faster than standard database queries can.
You can use caching to reduce database costs, deliver higher throughput and lower latency than most databases can offer, and boost the performance of cloud and web applications.
How does caching work for databases?
Developers can supplement a primary database with a database cache, which they can place within the database or application, or set up as a standalone layer. While they typically rely on a conventional database to store large, durable, complete datasets, they use a cache to store transient subsets of data for quick retrieval.
You can use caching with all types of data stores, including NoSQL databases as well as relational databases such as SQL Server, MySQL, or MariaDB. Caching also works well with many specific data platforms such as Azure Database for PostgreSQL, Azure SQL Database, or Azure SQL Managed Instance. We recommend researching what type of data store will best meet your requirements before you start to configure a data architecture. For example, you would want to understand what PostgreSQL is before you use it to combine relational and unstructured data stores.
The benefits of cache layers, and what is Redis, anyway?
Developers use multi-level caches called cache layers to store different types of data in separate caches according to demand. By adding a cache layer, or several, you can significantly improve the throughput and latency performance of a data layer.
Redis is a popular open-source, in-memory data structure store used to build high-performing cache layers and other data stores. A recent study showed that adding Azure Cache for Redis to a sample application increased data throughput by over 800 percent and improved latency performance by over 1,000 percent.
Caches can also reduce the total cost of ownership (TCO) for a data layer. By using caches to serve the most common queries and reduce database load, you can decrease the need to overprovision database instances, resulting in significant cost savings and lower TCO.
Types of caching
Your caching strategy depends on how your application reads and writes data. Is your application write-heavy, or is data written once and read frequently? Is the data that's returned always unique? Different data access patterns will influence how you configure a cache. Common caching types include cache-aside, read-through/write-through, and write-behind/write-back.
Cache-aside
For applications with read-heavy workloads, developers often use a cache-aside programming pattern, or "side-cache." The side-cache sits outside the application, which queries the cache first to retrieve data and falls back to the database directly when the data isn't in the cache. As the application retrieves data from the database, it copies it into the cache for future queries.
You can use a side-cache to help improve application performance, maintain consistency between the cache and the data store, and keep data in the cache from getting stale.
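As a minimal sketch of the pattern, here's a cache-aside read in Python using the redis-py client. The connection details, key naming, TTL, and the query_database helper are illustrative assumptions, not part of any particular product:

```python
import json
import redis

# Hypothetical connection details; point these at your own cache instance.
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

CACHE_TTL_SECONDS = 300  # expire entries so stale data ages out

def get_product(product_id: str) -> dict:
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit

    product = query_database(product_id)   # cache miss: go to the database
    cache.set(key, json.dumps(product), ex=CACHE_TTL_SECONDS)
    return product

def query_database(product_id: str) -> dict:
    # Stand-in for a real database query.
    return {"id": product_id, "name": "Widget", "price": 9.99}
```

Setting an expiration on each entry, as above, is one simple way to keep data in the cache from getting stale.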
Read-through/write-through cache
Read-through caches keep themselves up to date by automatically loading missing data from the database on a cache miss, while with write-through caching, the application writes data to the cache and then to the database. Both caches sit in line with the database, and the application treats them as the main data store.
Read-through caches help simplify applications where the same data is requested over and over, but the cache itself adds complexity, and the two-step write-through process can add latency. Developers pair the two to help ensure data consistency between the cache and the database, reduce write-through latency, and make it easier to keep the read-through cache up to date.
With read-through/write-through caching, developers can simplify application code, increase cache scalability, and minimize database load.
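Because Redis doesn't load data from your database on its own, a sketch of this pattern wraps both sides in application code: writes go through the cache to the database, and reads populate the cache on a miss. The key scheme and database helpers below are stand-in assumptions:

```python
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def save_product(product: dict) -> None:
    """Write-through: every write updates the cache and then the database."""
    key = f"product:{product['id']}"
    cache.set(key, json.dumps(product))  # step 1: write to the cache
    save_to_database(product)            # step 2: write to the database

def get_product(product_id: str) -> dict:
    """Read-through-style read: on a miss, the wrapper loads the data
    from the database and populates the cache before returning it."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    product = load_from_database(product_id)
    cache.set(key, json.dumps(product))
    return product

def save_to_database(product: dict) -> None:
    pass  # stand-in for the real database write

def load_from_database(product_id: str) -> dict:
    return {"id": product_id, "name": "Widget"}  # stand-in for a real query
```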
Write-behind/write-back cache
In this scenario, the application writes data to the cache, which is immediately acknowledged, and then the cache itself writes the data back to the database in the background. Write-behind caches, sometimes known as write-back caches, are best for write-heavy workloads, and they improve write performance because the application doesn't need to wait for the write to complete before moving to the next task.
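A minimal write-behind sketch, assuming a Redis cache plus a background worker that drains queued writes to the database; production implementations add batching, retries, and durability so that acknowledged writes can't be lost:

```python
import json
import queue
import threading
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
write_queue: "queue.Queue[dict]" = queue.Queue()

def save_product(product: dict) -> None:
    """Write-behind: update the cache and return immediately;
    the database write happens later, in the background."""
    cache.set(f"product:{product['id']}", json.dumps(product))
    write_queue.put(product)  # acknowledged without waiting for the database

def database_writer() -> None:
    """Background worker that drains queued writes to the database."""
    while True:
        product = write_queue.get()
        save_to_database(product)  # stand-in for the real persistence call
        write_queue.task_done()

def save_to_database(product: dict) -> None:
    print(f"persisted {product['id']} to the database")

threading.Thread(target=database_writer, daemon=True).start()
```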
Distributed cache vs. session store
What is a session store?
Session-oriented applications track actions that users take while they're signed into the applications. To preserve that data when the user signs out, you can keep it in a session store, which improves session management, reduces costs, and speeds application performance.
How is using a session store different from caching a database?
In a session store, data isn't shared between different users, while with caching, different users can access the same cache. Developers use caching to improve the performance of a database or storage instance, while they use session stores to boost application performance by writing data to the in-memory store, eliminating the need to access a database at all.
Data that's written to a session store is typically short-lived, while data that’s cached with a primary database is usually meant to last much longer. A session store requires replication, high availability, and data durability to ensure that transactional data doesn’t get lost and users remain engaged. On the other hand, if the data in a side-cache gets lost, there’s always a copy of it in the permanent database.
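A sketch of a Redis-backed session store with a sliding expiration, so idle sessions age out on their own; the key format and TTL below are illustrative assumptions:

```python
import json
import redis

store = redis.Redis(host="localhost", port=6379, decode_responses=True)
SESSION_TTL_SECONDS = 1800  # expire idle sessions after 30 minutes

def save_session(session_id: str, data: dict) -> None:
    # setex writes the value and its expiration in one call.
    store.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))

def load_session(session_id: str) -> dict | None:
    key = f"session:{session_id}"
    raw = store.get(key)
    if raw is None:
        return None                         # session expired or never existed
    store.expire(key, SESSION_TTL_SECONDS)  # sliding expiration: reset the TTL
    return json.loads(raw)
```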
Benefits of caching
Improved application performance
Reading data from an in-memory cache is much faster than accessing data from a disk-based data store. And with faster access to data, the overall application experience significantly improves.
Reduced database usage and costs
Caching leads to fewer database queries, improving performance and reducing costs by limiting the need to scale database infrastructure and decreasing throughput charges.
Scalable and predictable performance
A single cache instance can handle millions of requests per second, offering a level of throughput and scalability that databases can't match. Caching also offers the flexibility you need whether you're scaling out or scaling up your applications and data stores. Your application can then let many users access the same cached data simultaneously, without increasing the load on back-end databases. And if an application often experiences spikes in usage and high throughput, in-memory caches can mitigate latency.
What is caching used for?
Output caching
Output caching helps increase webpage performance by storing the full output that a server sends to browsers for rendering, such as HTML and client scripts. The first time a user views a page, the server caches the output in the application's memory; subsequent requests are served from the cache, without running page code or communicating with other servers.
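A minimal output-caching sketch: the rendered output for a path is stored on first request and served from memory afterward. The render_page helper is a stand-in; in practice, web frameworks usually provide output caching as configuration or middleware:

```python
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
PAGE_TTL_SECONDS = 60  # re-render a page at most once a minute

def get_page(path: str) -> str:
    key = f"page:{path}"
    html = cache.get(key)
    if html is None:
        html = render_page(path)  # expensive: runs page code, calls services
        cache.set(key, html, ex=PAGE_TTL_SECONDS)
    return html  # served without re-running page code

def render_page(path: str) -> str:
    # Stand-in for a real template or page renderer.
    return f"<html><body>Rendered output for {path}</body></html>"
```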
Data caching and database caching
Database speed and throughput can be key factors in overall application performance. Database caching is used for frequent calls to data that doesn't change often, such as pricing or inventory data. It helps websites and applications load faster while increasing throughput and lowering data retrieval latency from back-end databases.
Storing user session data
Application users often generate data that must be stored for short periods. An in-memory data store like Redis is perfect for efficiently and reliably storing high volumes of session data like user input, shopping cart entries, or personalization preferences at a lower cost than storage or databases.
Message brokers and publish/subscribe architectures
Cloud applications often need to exchange data between services, and they can use caching to implement publish/subscribe or message broker architectures that reduce latency and accelerate data management.
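Redis includes publish/subscribe primitives that services can use to exchange messages. A minimal redis-py sketch, with a hypothetical channel name; a real service would listen in a long-running loop:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Subscriber: listen for messages on a channel.
pubsub = r.pubsub()
pubsub.subscribe("orders")  # hypothetical channel name

# Publisher: any service can push a message to every subscriber.
r.publish("orders", "order-1234 created")

# Drain pending messages (a real service would loop over pubsub.listen()).
message = pubsub.get_message(timeout=1.0)
while message is not None:
    if message["type"] == "message":  # skip the subscribe confirmation
        print("received:", message["data"])
    message = pubsub.get_message(timeout=1.0)
```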
Applications and APIs
Like browsers, applications save important files and data to quickly reload that information when needed. Caching API responses reduces the load on application servers and databases, delivering faster response times and better performance.
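A sketch of caching API responses keyed by endpoint and parameters; the TTL and the fetch_from_upstream helper are illustrative assumptions:

```python
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
API_TTL_SECONDS = 30  # short TTL keeps cached responses fresh

def get_api_response(endpoint: str, params: dict) -> dict:
    # Key the cache entry by endpoint plus sorted parameters,
    # so identical requests map to the same entry.
    key = f"api:{endpoint}:{json.dumps(params, sort_keys=True)}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # served without touching the backend
    response = fetch_from_upstream(endpoint, params)
    cache.set(key, json.dumps(response), ex=API_TTL_SECONDS)
    return response

def fetch_from_upstream(endpoint: str, params: dict) -> dict:
    # Stand-in for the real backend or upstream API call.
    return {"endpoint": endpoint, "params": params, "status": "ok"}
```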
Add a nimble caching layer to your application with a fully managed Redis service. Learn how to get started with Azure Cache for Redis.
If you want to run flexible, file-based caching for high-performance applications, read about Azure HPC Cache.