
Johan Björnfot
Jan 27, 2010

Dynamic Data Store caching – The internals

This post gives you some background on how caching works in the Dynamic Data Store (DDS) and what to expect when it comes to object instances when working with DDS.

Problem with caching

The classic problem with caching in multithreaded environments is how to deal with object instances.

The easiest approach is to simply store the object instance somewhere and, the next time the object is requested, return the same instance. A problem with this approach is that if the object is not immutable and one thread changes the state of the instance, then, since the instance is “shared”, all other requests for the same object are affected by the change. Therefore, caching object instances directly should only be done for immutable objects. This is, for example, how PageData instances work: the cached PageData instances are read-only, and to get a writable instance you call CreateWritableClone, which returns a new, non-shared instance.
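As a minimal sketch of that pattern (assuming an EPiServer CMS context; DataFactory, the page link and the new page name here are illustrative):

// Cached PageData instances are read-only; to change a page you create a
// writable, non-shared clone and save that instead.
PageData page = DataFactory.Instance.GetPage(pageLink);   // shared, read-only instance
PageData writable = page.CreateWritableClone();           // new, non-shared instance
writable.PageName = "Updated name";
DataFactory.Instance.Save(writable, SaveAction.Publish);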

For DDS we cache all objects regardless of whether they are immutable or not (we don’t have that information). To be able to do this we could not cache object instances directly. Instead we cache the object data in an intermediate format, and each time the object is requested a new instance is created and its state is set from the intermediate format. The consequence is that for two subsequent calls for the same object, the DDS cache will return two separate object instances (they will, however, have the same “state”).
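As a conceptual sketch only (this is not the actual DDS implementation), a cache that stores state in an intermediate format rather than instances could look something like this, using the TestObject class from the test further down:

public class IntermediateFormatCache
{
    private readonly Dictionary<Guid, Dictionary<string, object>> _cache =
        new Dictionary<Guid, Dictionary<string, object>>();

    public void Put(Guid key, TestObject obj)
    {
        // Only the state is stored, never the instance itself.
        _cache[key] = new Dictionary<string, object>
        {
            { "Prop1", obj.Prop1 },
            { "Prop2", obj.Prop2 },
            { "Prop3", obj.Prop3 }
        };
    }

    public TestObject Get(Guid key)
    {
        Dictionary<string, object> state = _cache[key];
        // A new instance is materialized on every call; callers never share it.
        return new TestObject
        {
            Prop1 = (string)state["Prop1"],
            Prop2 = (Guid)state["Prop2"],
            Prop3 = (DateTime)state["Prop3"]
        };
    }
}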

Two levels of caching

An instance of DynamicDataStore contains an IdentityMap, which can be seen as a first-level cache. This first-level cache stores objects in their “real” format, so two subsequent Load calls on the same store instance will return the same object instance. The algorithm for loading an object from DDS is: first the IdentityMap is checked for the object; if it is not present there, the “shared” cache (described above) is checked; and if it is not found there either, the database is queried. You can disable the IdentityMap on a store instance by setting the KeepObjectsInContext property to false.
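A minimal sketch of the observable difference between the two levels (the store name is illustrative, and id is assumed to be an Identity returned from an earlier Save):

DynamicDataStore store = DynamicDataStoreFactory.Instance.GetStore("testStore");

// Level one: the IdentityMap returns the same instance for repeated loads.
TestObject a = store.Load<TestObject>(id);
TestObject b = store.Load<TestObject>(id);
Debug.WriteLine(Object.ReferenceEquals(a, b)); // True

// With the IdentityMap disabled, only the "shared" cache (or the database) is
// used, so every Load materializes a new instance from the intermediate format.
store.KeepObjectsInContext = false;
TestObject c = store.Load<TestObject>(id);
TestObject d = store.Load<TestObject>(id);
Debug.WriteLine(Object.ReferenceEquals(c, d)); // False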

Performance

Since the DDS cache does not store fully instantiated object instances that can be delivered directly, the “cost” of delivering an object is higher than, for example, for the PageData cache. However, the cost is much lower than loading the object from the database. Below is a simple test that measures how long it takes for DDS to deliver an object ten times. The first measurement hits the IdentityMap inside the store instance, meaning the same instance is returned for every call to Load. The second measurement disables the IdentityMap (which can be seen as creating a new store instance for each Load), meaning every Load hits the “shared” cache. The third measurement runs without the IdentityMap and the “shared” cache, meaning every call loads from the database.

public class TestObject
{
    public Identity Id { get; set; }
    public string Prop1 { get; set; }
    public Guid Prop2 { get; set; }
    public DateTime Prop3 { get; set; }
}
[TestMethod]
public void Cache_Measurement()
{
    string testStoreName = "testStore";
    try
    {
        DynamicDataStore store = DynamicDataStoreFactory.Instance.CreateStore(testStoreName, typeof(TestObject));
        CacheProvider.Instance = new HttpRuntimeCacheProvider();

        Stopwatch watch = new Stopwatch();

        //Note: Save will put the object in the cache and in the IdentityMap
        Identity id = store.Save(new TestObject() { Prop1 = "a string", Prop2 = Guid.NewGuid(), Prop3 = DateTime.Now });

        //First measure loading from the IdentityMap
        watch.Start();
        for (int i = 0; i < 10; i++)
        {
            TestObject test = store.Load<TestObject>(id);
        }
        watch.Stop();
        Debug.WriteLine(String.Format("Load from IdentityMap took {0} ms", watch.ElapsedMilliseconds));

        //Now load from the "shared" cache
        store.KeepObjectsInContext = false;
        store.Refresh();
        watch.Reset(); //reset so each measurement is independent
        watch.Start();
        for (int i = 0; i < 10; i++)
        {
            TestObject test = store.Load<TestObject>(id);
        }
        watch.Stop();
        Debug.WriteLine(String.Format("Load from Shared cache took {0} ms", watch.ElapsedMilliseconds));

        //Now load from the database
        CacheProvider.Instance = new NullCacheProvider();
        watch.Reset();
        watch.Start();
        for (int i = 0; i < 10; i++)
        {
            TestObject test = store.Load<TestObject>(id);
        }
        watch.Stop();
        Debug.WriteLine(String.Format("Load from database took {0} ms", watch.ElapsedMilliseconds));
    }
    finally
    {
        DynamicDataStoreFactory.Instance.DeleteStore(testStoreName, true);
    }
}

Running the above unit test gives output like the following:

Load from IdentityMap took 2 ms
Load from Shared cache took 6 ms
Load from database took 53 ms

Immutable objects

We have plans to make it possible to decorate a type (e.g. with an interface or attribute) as immutable/read-only or “notcachable”. Objects marked as immutable could then be stored in the “shared” cache in their final format, which would improve performance. Other objects that are either large or loaded very rarely could be marked as “notcachable” to reduce the application’s memory consumption. How and when this will be implemented is not yet decided.
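A purely hypothetical illustration of the idea (these attributes do not exist in DDS; they are defined below only to show what such decorations could look like):

// Hypothetical marker attributes, defined only for illustration.
[AttributeUsage(AttributeTargets.Class)]
public class ImmutableDataAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Class)]
public class NotCachableAttribute : Attribute { }

[ImmutableData]   // could be stored in the "shared" cache in its final format
public class CountryName
{
    public Identity Id { get; set; }
    public string Code { get; set; }
    public string Name { get; set; }
}

[NotCachable]     // could be skipped by the "shared" cache to save memory
public class LargeLogEntry
{
    public Identity Id { get; set; }
    public string Payload { get; set; }
}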


Comments

Bjørn Egil Hansen Sep 19, 2012 12:39 PM

Good article!

I have a comment on the performance measures:

In the second case, "Load from 'Shared cache'", the first call will hit the database and you will get a penalty of roughly 5 ms; hence loading from the IdentityMap and from the shared cache doesn't seem to be that different.

-Bjørn Egil Hansen

joel.williams@auros.co.uk Jul 14, 2013 01:38 PM

Did you ever get around to implementing the notcachable attribute?

fatso83 Nov 30, 2016 12:58 PM

Your example shows how to disable the cache in code, but it sets the static CacheProvider.Instance, which I assume is used by all EPiServer stores. That doesn't work if we want to keep caching for most stores but disable it for some. I have now browsed the entire API without really getting anywhere on how to do this. All I have found is configuring the data store via XML, which sets a default cacheProvider, and your code above - both of which seem to apply globally.

Is there any way of setting a cache provider per store, say disabling cache for some but not others?
