Johan Björnfot
Jan 27, 2010

Dynamic Data Store caching – The internals

This post gives you some background on how caching works in the Dynamic Data Store (DDS) and explains what to expect when it comes to object instances when working with DDS.

Problem with caching

The classic problem with caching in multithreaded environments is how to deal with object instances.

The easiest approach is to simply store the object instance somewhere and return that same instance the next time the object is requested. A problem with this approach is that if the object is not immutable and one thread changes the state of the instance, then, since the instance is “shared”, all other requests for the same object are affected by the change. Therefore, caching object instances directly should only be done for immutable objects. This is, for example, how PageData instances work: the PageData instances that are cached are read-only, and to get a writable instance you need to call CreateWritableClone, which returns a new, non-shared instance.
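As a brief illustration (a sketch, not code from the post; the page reference used here is a placeholder), the read-only/writable split for pages looks roughly like this:

//Cached PageData instances are read-only; CreateWritableClone returns a new,
//non-shared instance that is safe to modify
PageReference pageLink = new PageReference(42);                //placeholder reference
PageData cachedPage = DataFactory.Instance.GetPage(pageLink);  //read-only, shared instance
PageData writablePage = cachedPage.CreateWritableClone();      //new, non-shared instance
writablePage.PageName = "New name";                            //does not affect the cached instance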

For DDS we cache all objects regardless of whether they are immutable or not (we don’t have that information). Because of this we could not use the approach of caching object instances directly. Instead we cache the object data in an intermediate format, and each time the object is requested a new instance is created and its state is set from the intermediate format. The consequence is that two subsequent calls to the DDS cache for the same object will return two separate object instances (they will, however, have the same “state”). The sketch after the next section demonstrates this.

Two levels of caching

An instance of DynamicDataStore contains an IdentityMap, which can be seen as a first-level cache. This first-level cache stores the objects in their “real” format, and hence two subsequent Load calls on the same store instance will return the same object instance. The algorithm for loading an object from DDS is: first the IdentityMap is checked for the object; if it is not present there, the “shared” cache (described above) is checked; and if it is not found there either, the database is queried for the object. You can disable the IdentityMap on a store instance by setting the property KeepObjectsInContext to false.
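The following is a minimal sketch (not from the original post) illustrating both levels. It assumes a store named “MyStore” already exists and that an object has previously been saved with its Identity held in the variable id; both names are placeholders.

DynamicDataStore store = DynamicDataStoreFactory.Instance.GetStore("MyStore");

//With the IdentityMap enabled (the default), two Loads return the same instance
TestObject first = store.Load<TestObject>(id);
TestObject second = store.Load<TestObject>(id);
bool sameInstance = Object.ReferenceEquals(first, second);        //true

//Disable the IdentityMap and clear the context; Loads are now served from the
//"shared" cache, so each call materializes a new instance with the same state
store.KeepObjectsInContext = false;
store.Refresh();
TestObject third = store.Load<TestObject>(id);
TestObject fourth = store.Load<TestObject>(id);
bool separateInstances = !Object.ReferenceEquals(third, fourth);  //true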

Performance

Since the DDS cache does not store fully instantiated object instances that can be delivered directly, the “cost” of delivering an object is higher than for the PageData cache, for example. However, the cost is much lower than the cost of loading the object from the database. Below is a simple test that measures how long it takes for DDS to deliver an object ten times. The first measurement hits the IdentityMap inside the store instance, meaning the same instance is returned for every call to Load. The second disables the IdentityMap (this can be seen as creating a new instance of the store for each Load), meaning each Load hits the “shared” cache. The third runs without the IdentityMap and the “shared” cache, meaning each call loads from the database.

//Using directives needed by the test (namespaces assumed for the CMS 6 era DDS APIs)
using System;
using System.Diagnostics;
using EPiServer.Data;
using EPiServer.Data.Cache;
using EPiServer.Data.Dynamic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TestObject
{
    public Identity Id { get; set; }
    public string Prop1 { get; set; }
    public Guid Prop2 { get; set; }
    public DateTime Prop3 { get; set; }
}
[TestMethod]
public void Cache_Measurement()
{
    string testStoreName = "testStore";
    try
    {
        DynamicDataStore store = DynamicDataStoreFactory.Instance.CreateStore(testStoreName, typeof(TestObject));
        CacheProvider.Instance = new HttpRuntimeCacheProvider();

        Stopwatch watch = new Stopwatch();

        //Note: Save will put the object in the cache and in the IdentityMap
        Identity id = store.Save(new TestObject() { Prop1 = "a string", Prop2 = Guid.NewGuid(), Prop3 = DateTime.Now });

        //First, measure loading from the IdentityMap
        watch.Start();
        for (int i = 0; i < 10; i++)
        {
            TestObject test = store.Load<TestObject>(id);
        }
        watch.Stop();
        Debug.WriteLine(String.Format("Load from IdentityMap took {0} ms", watch.ElapsedMilliseconds));

        //Now load from the "shared" cache
        store.KeepObjectsInContext = false;
        store.Refresh();
        watch.Reset();
        watch.Start();
        for (int i = 0; i < 10; i++)
        {
            TestObject test = store.Load<TestObject>(id);
        }
        watch.Stop();
        Debug.WriteLine(String.Format("Load from Shared cache took {0} ms", watch.ElapsedMilliseconds));

        //Now load from the database
        CacheProvider.Instance = new NullCacheProvider();
        watch.Reset();
        watch.Start();
        for (int i = 0; i < 10; i++)
        {
            TestObject test = store.Load<TestObject>(id);
        }
        watch.Stop();
        Debug.WriteLine(String.Format("Load from database took {0} ms", watch.ElapsedMilliseconds));
    }
    finally
    {
        DynamicDataStoreFactory.Instance.DeleteStore(testStoreName, true);
    }
}

Running the above unit test gives output like the following:

Load from IdentityMap took 2 ms
Load from Shared cache took 6 ms
Load from database took 53 ms

Immutable objects

We have plans to make it possible to decorate a type (e.g. with an interface or attribute) as immutable/read-only or “notcachable”. Objects marked as immutable could then be stored in the “shared” cache in their final format, which would improve performance. Other objects that are either big in size or loaded very rarely could be marked as “notcachable” to reduce the memory consumption of the application. How and when this will be implemented is not yet decided.


Comments

Sep 19, 2012 12:39 PM

Good article!

I have a comment on the performance measures:

In the second case, "Load from 'Shared cache'", the first call will hit the database and you will get a penalty of roughly 5 ms, hence load from IdentityMap and Shared cache doesn't seem to be that much different.

-Bjørn Egil Hansen

joel.williams@auros.co.uk Jul 14, 2013 01:38 PM

Did you ever get around to implementing the notcachable attribute?

fatso83 Nov 30, 2016 12:58 PM

Your example shows how to disable the cache using code, but it sets the static CacheProvider.Instance, which I would assume is used by all episerver stores. That doesn't fly if we would like to keep caching for most stores, but disable it for some. I have now browsed the entire API without really getting anywhere on how to do this. All I have found is to configure the datastore via xml, which would set some default cacheProvider and your code - both of which seem to apply globally.

Is there any way of setting a cache provider per store, say disabling cache for some but not others?
