Updated 25 October 2014 – The Redis Cache is now GA.
The new Azure Redis Cache is really easy to plug into your Azure web app. I had it plugged into my MVC Movie sample app, deployed to Azure, and running in under 17 minutes (15 minutes to plug it in and test locally).


The cache is about 100 times faster than banging on a database. By fetching hot data from the cache, you not only speed up your app, you also reduce the load on the database and increase its responsiveness for other queries.

You can download my completed sample here.

This is what I did to plug the Redis cache into my MVC Movie sample:

  1. Log on to the Azure portal and select create a new cache.

    This step can take up to 15 minutes, but I'm not counting that in my time. For complete instructions, see How to Use Azure Redis Cache. It's critical that you create the cache in the same location (data center) where you create your web site. I tested this by moving my web site to a different location, and cache latency increased by a factor of 25. For detailed instructions, see Create a Redis Cache. You can download my MvcMovie as a starter sample. Alternatively, you can download my completed sample, update the cache endpoint (URL) and credentials, and follow along.
  2. Copy the cache name (it ends in .redis.cache.windows.net) and the password (hit the Keys button in the properties blade of the portal to see the cache name and password).
  3. Add the StackExchange.Redis NuGet package. You'll also need to restore the NuGet packages in my sample if you're using that.
  4. From the Package Manager Console, run Update-Database. You might have to exit and restart Visual Studio after restoring NuGet packages to see the Update-Database command.
  5. Plug the connection info into your controller:
public class MoviesController : Controller
{
   private MovieDBContext db = new MovieDBContext();
   private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
   {
      return ConnectionMultiplexer.Connect(Keys.conStr);
   });

   public static ConnectionMultiplexer Connection
   {
      get
      {
         return lazyConnection.Value;
      }
   }

 

Warning: Never store credentials in source code. To keep this sample simple, I'm showing them in the source code. See Windows Azure Web Sites: How Application Strings and Connection Strings Work for information on how to store credentials.

Note that the connection is stored in a static variable so you don't have to create a new connection on each request. A get accessor is used so you can check that the connection is valid; if the connection has been dropped, it is reestablished.
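The code above references a Keys class for the connection information. The sample simply hard-codes the values; the snippet below is a minimal sketch of such a helper that reads them from appSettings instead (the appSettings key names are hypothetical, and the conStr format follows the standard StackExchange.Redis connection string):

// Hypothetical helper; requires a reference to System.Configuration.
// The property names (URL, passwd, conStr) match the ones used in this post,
// but the appSettings keys are placeholders.
public static class Keys
{
   // The cache host name copied from the portal, e.g. "mycache.redis.cache.windows.net".
   public static string URL
   {
      get { return ConfigurationManager.AppSettings["RedisCacheHost"]; }
   }

   // The cache access key (password) from the Keys blade.
   public static string passwd
   {
      get { return ConfigurationManager.AppSettings["RedisCacheKey"]; }
   }

   // Standard StackExchange.Redis connection string: host,ssl=true,password=key
   public static string conStr
   {
      get { return URL + ",ssl=true,password=" + passwd; }
   }
}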

Create a new class file containing the SampleStackExchangeRedisExtensions class:

public static class SampleStackExchangeRedisExtensions
{
   public static T Get<T>(this IDatabase cache, string key)
   {
      return Deserialize<T>(cache.StringGet(key));
   }

   public static object Get(this IDatabase cache, string key)
   {
      return Deserialize<object>(cache.StringGet(key));
   }

   public static void Set(this IDatabase cache, string key, object value)
   {
      cache.StringSet(key, Serialize(value));
   }

   static byte[] Serialize(object o)
   {
      if (o == null)
      {
         return null;
      }
      BinaryFormatter binaryFormatter = new BinaryFormatter();
      using (MemoryStream memoryStream = new MemoryStream())
      {
         binaryFormatter.Serialize(memoryStream, o);
         byte[] objectDataAsStream = memoryStream.ToArray();
         return objectDataAsStream;
      }
   }

   static T Deserialize<T>(byte[] stream)
   {
      BinaryFormatter binaryFormatter = new BinaryFormatter();
      if (stream == null)
         return default(T);

      using (MemoryStream memoryStream = new MemoryStream(stream))
      {
         T result = (T)binaryFormatter.Deserialize(memoryStream);
         return result;
      }
   }
}

 

The SampleStackExchangeRedisExtensions class makes it easy to cache any serializable type. You'll need to add the [Serializable] attribute to your model:

[Serializable]
public class Movie
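For reference, the Movie model from the MVC Movie tutorial looks roughly like the following (your property names may differ slightly); the only change needed for caching is the [Serializable] attribute:

[Serializable]
public class Movie
{
   public int ID { get; set; }
   public string Title { get; set; }
   public DateTime ReleaseDate { get; set; }
   public string Genre { get; set; }
   public decimal Price { get; set; }
}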

Find all the instances of Movie movie = db.Movies.Find(id);

and replace them with:

//Movie movie = db.Movies.Find(id);
Movie movie = getMovie((int)id);

In the POST Edit and Delete methods, evict the cache entry with the following call:

ClearMovieCache(movie.ID);

Add the following code to the Movies controller. The getMovie method uses the standard on-demand cache-aside approach.

Movie getMovie(int id)
{
   Stopwatch sw = Stopwatch.StartNew();
   IDatabase cache = Connection.GetDatabase();
   Movie m = (Movie)cache.Get(id.ToString());

   if (m == null)
   {
      Movie movie = db.Movies.Find(id);
      cache.Set(id.ToString(), movie);
      StopWatchMiss(sw);
      return movie;
   }
   StopWatchHit(sw);

   return m;
}

private void ClearMovieCache(int p)
{
   IDatabase cache = Connection.GetDatabase();
   if (cache.KeyExists(p.ToString()))
      cache.KeyDelete(p.ToString());
}

void StopWatchEnd(Stopwatch sw, string msg)
{
   sw.Stop();
   double ms = sw.ElapsedTicks / (Stopwatch.Frequency / 1000.0);
   ViewBag.cacheMsg = msg + ms.ToString() +
       " PID: " + Process.GetCurrentProcess().Id.ToString();
}

void StopWatchMiss(Stopwatch sw)
{
   StopWatchEnd(sw, "Miss – MS: ");
}

void StopWatchHit(Stopwatch sw)
{
   StopWatchEnd(sw, "Hit – MS: ");
}

Add the ViewBag.cacheMsg code to the Views\Shared\_Layout.cshtml file so you get timing information on every page:

@RenderBody()

@ViewBag.cacheMsg

@Scripts.Render("~/bundles/jquery")
@Scripts.Render("~/bundles/bootstrap")
@RenderSection("scripts", required: false)

You can now test locally and get timing information. Once a movie is cached, it stays cached (my DB is too small to ever evict data because of memory pressure). Performance from your desktop to the cloud cache won't be great. In the sample download, you can click the ClearCache button to evict each cache entry.

Monitoring the cache from the portal

From the portal you can get cache hit/miss statistics.


You can only set alerts on items you check above. Double-click the Add Alert button and you can set up alerts for any item you're monitoring. For example, I set up monitoring of key evictions over a 15-minute period. A high rate of eviction indicates you'll probably benefit from a bigger cache.


Visual Studio makes it easy to publish to Azure: right-click the web app and select Publish. It's critical that you select the same web site region where you created the cache; otherwise, latency is much higher, as you would expect. A cache located in a different region than your cache clients (in this case, a web app) can also incur data transfer costs. On the Settings tab, make sure you check Execute Code First Migrations.


You can now test the app in the cloud and get much lower cache latency (assuming your cache and web site are in the same data center).

Stress testing the cache

The default timeout for cache operations is 1000 ms (one second). You can use the following code to verify that your code correctly handles timeout exceptions. When #define NotTestingTimeOut is commented out, the timeout is lowered to 150 ms to make it easier to hit timeout exceptions under high load.

#else
   #region StressTest
   private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
   {
      var config = new ConfigurationOptions();
      config.EndPoints.Add(Keys.URL);
      config.Password = Keys.passwd;
      config.Ssl = true;
      config.SyncTimeout = 150;

      return ConnectionMultiplexer.Connect(config);
   });
   #endregion
#endif
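For context, here is a sketch of how the full conditional could be laid out; the #if branch is just the Lazy connection shown earlier, and only the #define line and the symbol name come from the text above:

// Sketch: comment out the #define to drop the timeout to 150 ms for stress testing.
#define NotTestingTimeOut

#if NotTestingTimeOut
   private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
   {
      return ConnectionMultiplexer.Connect(Keys.conStr);
   });
#else
   // ... the StressTest region shown above, with SyncTimeout = 150 ...
#endif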

I recommend you disable session state while stress testing. You can do so for the entire app with the following element in the Web.config file:
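A minimal sketch of that element, assuming the standard ASP.NET sessionState setting under system.web:

<!-- In Web.config, inside <system.web>: turn off session state for the whole app -->
<sessionState mode="Off" />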

Or you can use [SessionState(SessionStateBehavior.Disabled)] on your controller. The updated getMovie method is more robust and will retry up to three times on timeout exceptions:

Movie getMovie(int id, int retryAttempts = 0)
{
   IDatabase cache = Connection.GetDatabase();
   if (retryAttempts > 3)
   {
      string error = "getMovie timeout with " + retryAttempts.ToString()
         + " retry attempts. Movie ID = " + id.ToString();
      Logger(error);

      ViewBag.cacheMsg = error + " Fetch from DB.";
      // Cache unavailable, get data from DB
      return db.Movies.Find(id);
   }
   Stopwatch sw = Stopwatch.StartNew();
   Movie m;

   try
   {
      m = (Movie)cache.Get(id.ToString());
   }
   catch (TimeoutException tx)
   {
      Logger("getMovie fail, ID = " + id.ToString(), tx);
      return getMovie(id, ++retryAttempts);
   }

   if (m == null)
   {
      Movie movie = db.Movies.Find(id);
      cache.Set(id.ToString(), movie);
      StopWatchMiss(sw);
      return movie;
   }
   StopWatchHit(sw);

   return m;
}

The sample app has several methods you can call to load test the cache.


The WriteCache and ReadCache methods write or read 1K items. You can optionally append "/n" to the URL to write or read n * 1K items. For example, https://<yoursite>.azurewebsites.net/Movies/ReadCache/3 will read 3K cached items.
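The sample's implementation isn't reproduced here, but a minimal sketch of what such load-test actions could look like follows (the action names match the ones above; the key prefix, loop bounds, and return values are assumptions):

// Hypothetical load-test actions: write or read id * 1,000 cache entries.
public ActionResult WriteCache(int id = 1)
{
   IDatabase cache = Connection.GetDatabase();
   Stopwatch sw = Stopwatch.StartNew();
   for (int i = 0; i < id * 1000; i++)
   {
      cache.Set("stress" + i.ToString(), i);
   }
   StopWatchEnd(sw, "WriteCache – MS: ");
   return View("Index", db.Movies.ToList());
}

public ActionResult ReadCache(int id = 1)
{
   IDatabase cache = Connection.GetDatabase();
   Stopwatch sw = Stopwatch.StartNew();
   for (int i = 0; i < id * 1000; i++)
   {
      object o = cache.Get("stress" + i.ToString());
   }
   StopWatchEnd(sw, "ReadCache – MS: ");
   return View("Index", db.Movies.ToList());
}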

With a 150 ms timeout and hitting the cache hard, I was able to force timeout exceptions. The getMovie method correctly handles these cache failures: it returns the movie from the DB and writes a warning message to the log, such as "getMovie timeout with 4 retry attempts. Movie ID = 3. Fetch from DB".

Production apps should be ready to handle cache failures. If you've selected the Basic cache (which doesn't have a slave or failover), you're guaranteed the cache will be unavailable for several minutes once a month while the hosting VM is patched. While a Standard cache has a master and a slave (that is, a failover cache) with a very fast, non-blocking first synchronization and auto-reconnection, you should still write your code to correctly handle a cache failure.

 

Azure Redis Cache ASP.NET Session State Provider

While it's considered a best practice to avoid using session state, some applications actually get a performance/complexity benefit from using session data, while other apps outright require it. The default in-memory provider for session state does not allow scale out (running multiple instances of the web site). The ASP.NET SQL Server session state provider lets multiple web sites share session state, but it incurs a high latency cost compared to an in-memory provider. The Redis session state provider is a low-latency alternative that is very easy to configure and set up. If your app uses only a limited amount of session state, you can use most of the cache for caching data and a small amount for session state.

Add the RedisSessionStateProvider NuGet package to your web app (specify prerelease; see this article for full instructions). Edit the markup added to your root Web.config file with your host URL and keys, and be sure to set ssl to true.
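The markup the package adds looks roughly like the following sketch (host and accessKey values are placeholders, and attribute names can vary slightly by package version):

<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
    <add name="MySessionStateStore"
         type="Microsoft.Web.Redis.RedisSessionStateProvider"
         host="<yourcache>.redis.cache.windows.net"
         accessKey="<your access key>"
         ssl="true" />
  </providers>
</sessionState>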

You can now use session state in your web app. The sample provides the WriteCache and ReadCache action menus (and UI). The write action accepts optional string route data; for example, https://<yoursite>.azurewebsites.net/SessionTest/WriteSession/Hello_joe will write "Hello_joe" to session state. All your instances of the app will use the same Redis session cache, so you won't have to use sticky sessions.
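As a rough sketch (the controller and action names follow the URL above; the sample's actual implementation may differ), writing and reading session state needs nothing Redis-specific once the provider is configured:

public class SessionTestController : Controller
{
   // Writes the route value (e.g. "Hello_joe") to session state.
   // With the Redis provider configured, this round-trips through the cache.
   public ActionResult WriteSession(string id)
   {
      Session["msg"] = id ?? "Hello";
      return Content("Wrote to session: " + Session["msg"]);
   }

   public ActionResult ReadSession()
   {
      return Content("Session value: " + (Session["msg"] ?? "(empty)"));
   }
}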

Let me know how you like this topic and what you'd like me to cover next on the Redis cache.

Follow me (@RickAndMSFT) on Twitter, where I have a no-spam guarantee of quality tweets.
