
Johan Björnfot
Jan 27, 2010

Dynamic Data Store caching – The internals

This post gives you some background on how caching works in the Dynamic Data Store (DDS) and explains what to expect regarding object instances when working with DDS.

Problem with caching

The classic problem with caching in multithreaded environments is how to deal with object instances.

The easiest approach is to store the object instance somewhere and return that same instance the next time the object is requested. The problem with this approach arises when the object is not immutable and one thread changes the state of the instance: since the instance is “shared”, all other requests for the same object are affected by the change. Caching object instances directly should therefore only be done for immutable objects. This is, for example, how PageData instances work. The PageData instances that are cached are read-only, and to get a writable instance you call CreateWritableClone, which returns a new, non-shared instance.
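
As a minimal sketch of that pattern (assuming the EPiServer CMS API and a valid PageReference at hand), the cached PageData instance is read-only and CreateWritableClone gives you a private copy that is safe to modify:

public void RenamePage(PageReference pageLink)
{
    //The cached PageData instance is shared and read-only
    PageData cachedPage = DataFactory.Instance.GetPage(pageLink);

    //CreateWritableClone returns a new, non-shared instance that is safe to modify
    PageData writablePage = cachedPage.CreateWritableClone();
    writablePage.PageName = "New name";

    DataFactory.Instance.Save(writablePage, SaveAction.Publish);
}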

For DDS we cache all objects regardless of whether they are immutable or not (we don’t have that information). Because of this we cannot cache object instances directly. Instead we cache the object data in an intermediate format, and each time the object is requested a new instance is created and its state is populated from that intermediate format. The consequence is that for two subsequent calls for the same object, the DDS cache returns two separate object instances (though they have the same “state”).
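
A small sketch of that behavior (assuming a store named "testStore" and an Identity id as in the test further down, and that each GetStore call hands back its own store instance with its own IdentityMap): the “shared” cache returns two distinct instances that carry the same state.

//Sketch only: "testStore" and id are assumed to exist (see the test below)
DynamicDataStore storeA = DynamicDataStoreFactory.Instance.GetStore("testStore");
DynamicDataStore storeB = DynamicDataStoreFactory.Instance.GetStore("testStore");

TestObject a = storeA.Load<TestObject>(id);
TestObject b = storeB.Load<TestObject>(id);

//Different instances, each re-created from the intermediate cache format...
bool sameInstance = Object.ReferenceEquals(a, b);    //false
//...but with the same state
bool sameState = a.Prop1 == b.Prop1;                 //true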

Two levels of caching

An instance of DynamicDataStore contains an IdentityMap, which can be seen as a first-level cache. This first-level cache stores objects in their “real” format, so two subsequent Load calls on the same store instance return the same object instance. The algorithm for loading an object from DDS is: first the IdentityMap is checked for the object; if it is not present there, the “shared” cache (described above) is checked; and if it is not found there either, the database is queried. You can disable the IdentityMap on a store instance by setting the property KeepObjectsInContext to false.
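
The effect of the two levels can be sketched like this (again assuming the store and id from the test below; this is an illustration, not the actual DDS implementation):

DynamicDataStore store = DynamicDataStoreFactory.Instance.GetStore("testStore");

//Default: the IdentityMap is enabled, so the same instance comes back from this store
TestObject first = store.Load<TestObject>(id);
TestObject second = store.Load<TestObject>(id);
bool same = Object.ReferenceEquals(first, second);       //true

//Disable the first level cache for this store instance
store.KeepObjectsInContext = false;
store.Refresh();

//Loads now skip the IdentityMap: the "shared" cache is checked first, then the database
TestObject third = store.Load<TestObject>(id);
bool stillSame = Object.ReferenceEquals(first, third);   //false - a new instance each time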

Performance

Since the DDS cache does not store fully instantiated object instances that can be delivered directly, the “cost” of delivering an object is higher than for, say, the PageData cache. It is, however, much lower than the cost of loading the object from the database. Below is a simple test that measures how long it takes for DDS to deliver an object ten times. The first measurement hits the IdentityMap inside the store instance, meaning the same instance is returned for every call to Load. The second measurement disables the IdentityMap (this can be seen as creating a new instance of the store for each Load), meaning every Load hits the “shared” cache. The third measurement runs with both the IdentityMap and the “shared” cache disabled, meaning every call loads from the database.

public class TestObject
{
    public Identity Id { get; set; }
    public string Prop1 { get; set; }
    public Guid Prop2 { get; set; }
    public DateTime Prop3 { get; set; }
}
[TestMethod]
public void Cache_Measurement()
{
  string testStoreName = "testStore";
  try
  {
      DynamicDataStore store = DynamicDataStoreFactory.Instance.CreateStore(testStoreName, typeof(TestObject));
      CacheProvider.Instance = new HttpRuntimeCacheProvider();

      Stopwatch watch = new Stopwatch();
      
      //Note: Save will put object in cache and IdentityMap
      Identity id = store.Save(new TestObject() { Prop1 = "a string", Prop2 = Guid.NewGuid(), Prop3 = DateTime.Now });
      
      //First measure to load from IdentityMap
      watch.Start();
      for (int i = 0; i < 10; i++)
      {
          TestObject test = store.Load<TestObject>(id);
      }
      watch.Stop();
      Debug.WriteLine(String.Format("Load from IdentityMap took {0} ms", watch.ElapsedMilliseconds.ToString()));

      //Now Load from "shared" cache
      store.KeepObjectsInContext = false;
      store.Refresh();
      watch.Reset();
      watch.Start();
      for (int i = 0; i < 10; i++)
      {
          TestObject test = store.Load<TestObject>(id);
      }
      watch.Stop();
      Debug.WriteLine(String.Format("Load from Shared cache took {0} ms", watch.ElapsedMilliseconds.ToString()));

      //Now Load from database
      CacheProvider.Instance = new NullCacheProvider();
      watch.Reset();
      watch.Start();
      for (int i = 0; i < 10; i++)
      {
          TestObject test = store.Load<TestObject>(id);
      }
      watch.Stop();
      Debug.WriteLine(String.Format("Load from database took {0} ms", watch.ElapsedMilliseconds.ToString()));

  }
  finally
  {
      DynamicDataStoreFactory.Instance.DeleteStore(testStoreName, true);
  }
}

Running the above unit test gives output like the following:

Load from IdentityMap took 2 ms
Load from Shared cache took 6 ms
Load from database took 53 ms

Immutable objects

We have plans to make it possible to decorate a type (e.g. with an interface or attribute) as immutable/read-only or as “not cacheable”. Objects marked as immutable could then be stored in the “shared” cache in their final format, which would improve performance. Other objects that are either large or loaded very rarely could be marked as “not cacheable” to reduce the memory consumption of the application. How and when this will be implemented is not yet decided.
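
Purely to illustrate the idea (none of these attributes exist in the API; the names are invented for this sketch), such a decoration could look something like this:

//Hypothetical attributes - invented names, only to illustrate the idea above
[ImmutableStoreObject]      //could be cached in its final format in the "shared" cache
public class CountryCode
{
    public Identity Id { get; set; }
    public string Code { get; set; }
}

[NotCacheableStoreObject]   //large or rarely loaded - skip the "shared" cache entirely
public class ArchivedReport
{
    public Identity Id { get; set; }
    public byte[] Content { get; set; }
}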


Comments

Bjørn Egil Hansen Sep 19, 2012 12:39 PM

Good article!

I have a comment on the performance measures:

In the second case, "Load from 'Shared cache'", the first call will hit the database and you will get a penalty of roughly 5 ms, hence load from IdentityMap and Shared cache doesn't seem to be that much different.

-Bjørn Egil Hansen

joel.williams@auros.co.uk Jul 14, 2013 01:38 PM

Did you ever get around to implementing the notcachable attribute?

fatso83 Nov 30, 2016 12:58 PM

Your example shows how to disable the cache using code, but it sets the static CacheProvider.Instance, which I would assume is used by all episerver stores. That doesn't fly if we would like to keep caching for most stores, but disable it for some. I have now browsed the entire API without really getting anywhere on how to do this. All I have found is to configure the datastore via xml, which would set some default cacheProvider and your code - both of which seem to apply globally.

Is there any way of setting a cache provider per store, say disabling cache for some but not others?
