
Anders Hattestad
May 25, 2012

Cache Dynamic Data Store items

When I use DDS, I always use a pattern where I have a public static Items property that I can run my queries against. This logic is placed inside a common base class that all my DDS tables inherit from:

Code Snippet

public class BaseData<T> : IDynamicData, ISaveMe where T : BaseData<T>, new()
{
    public EPiServer.Data.Identity Id { get; set; }

    // All queries go through this static property instead of hitting the store directly.
    public static IOrderedQueryable<T> Items
    {
        get
        {
            return Store.Items<T>();
        }
    }

So when I needed to speed things up a bit, I changed my Items implementation to instead return an in-memory list of all my items:

Code Snippet

public static IQueryable<T> Items
{
    get
    {
        // Serve from the in-memory cache when it is enabled for this type,
        // otherwise fall back to querying the store directly.
        if (Cache())
            return ItemsMemory;

        return Store.Items<T>();
    }
}

public static bool Cache()
{
    // The cache is switched on per table with an appSettings key named "Cache_<TypeName>".
    string tmp = System.Web.Configuration.WebConfigurationManager.AppSettings.Get("Cache_" + typeof(T).Name);
    if (!string.IsNullOrEmpty(tmp))
        return true;
    return false;
}

static IQueryable<T> ItemsMemory
{
    get
    {
        // Lazily populate the cache on first access.
        if (_itemsMemory == null)
        {
            WriteNewCacheData();
        }
        return _itemsMemory.AsQueryable();
    }
}
static List<T> _itemsMemory = null;
static object lockObject = new object();
static DateTime _lastCleared = DateTime.MinValue;

public static void WriteNewCacheData()
{
    if (Cache())
    {
        // Read all items from the store and replace the cached list.
        WriteNewCacheData((from item2 in Store.Items<T>() select item2).ToList());
    }
}
static void WriteNewCacheData(List<T> data)
{
    lock (lockObject)
    {
        _lastCleared = DateTime.Now;
        _itemsMemory = data;
    }
}

Since all my DDS classes inherit from the same base class, I made the change so that the memory cache can be turned on per table using appSettings.
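
Enabling the cache is then just a matter of adding an appSettings key named "Cache_" followed by the class name; any non-empty value works, since Cache() only checks that the key is set. A minimal web.config sketch, where MyItem is just a placeholder class name:

Code Snippet

<appSettings>
  <!-- "MyItem" is a hypothetical DDS class inheriting from BaseData<MyItem> -->
  <add key="Cache_MyItem" value="true" />
</appSettings>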

When I retrieve an object and want to make a change to it, I just make the change and save it using my LazyDDSSave class. Even before the item is saved, all new queries will see the changed object, since the cache holds the same object instance. One could make a CreateWritableClone implementation and update the cache once the save is done, but I didn't.
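
A rough usage sketch of that flow, assuming a hypothetical DDS class MyItem with a Title property (neither is part of the base class, and FirstOrDefault requires System.Linq):

Code Snippet

// MyItem is a placeholder class inheriting from BaseData<MyItem>.
var item = MyItem.Items.FirstOrDefault(x => x.Title == "Old title");
if (item != null)
{
    item.Title = "New title"; // visible to subsequent queries right away, since the cache holds the same instance
    item.Save();              // queued through LazyDDSSave and persisted later
}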

It's only when we create a new object or delete an object that we need to change the number of elements in the memory cache.

Code Snippet

public virtual void Save()
{
    LazyDDSSave.Current.AddToSave(this);
}
public static void Save(T item)
{
    LazyDDSSave.Current.AddToSave(item);
}
public static void Delete(T item)
{
    Store.Delete(item);
    if (Cache())
    {
        // A delete changes the number of items, so re-read the whole table into the cache.
        WriteNewCacheData((from item2 in Store.Items<T>() select item2).ToList());
    }
}
public void SaveMe()
{
    Store.Save(this);
    if (Cache())
    {
        var list = _itemsMemory;
        // Only a brand new item (not yet in the cached list) requires a full re-read.
        if (list != null)
            if (list.IndexOf(this as T) == -1)
            {
                WriteNewCacheData((from item in Store.Items<T>() select item).ToList());
            }
    }
}

I have chosen to do a full reread from the data store when I delete or add a new item, but this could also be changed to just add or remove the object in the memory list.
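
A minimal sketch of that incremental alternative could look like the helpers below, added to the same base class (they are not part of my original implementation):

Code Snippet

// Add a newly created item to the cached list instead of re-reading the whole table.
static void AddToCache(T item)
{
    lock (lockObject)
    {
        if (_itemsMemory != null && !_itemsMemory.Contains(item))
            _itemsMemory.Add(item);
    }
}

// Remove a deleted item from the cached list.
static void RemoveFromCache(T item)
{
    lock (lockObject)
    {
        if (_itemsMemory != null)
            _itemsMemory.Remove(item);
    }
}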

If you are in an enterprise, load-balanced server situation, you could either implement an event-based reload or reload the memory list after a fixed amount of time.
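
A time-based reload could, for example, reuse the existing _lastCleared field and change the ItemsMemory getter along these lines (the five-minute interval is just an arbitrary example value):

Code Snippet

static readonly TimeSpan ReloadInterval = TimeSpan.FromMinutes(5);

static IQueryable<T> ItemsMemory
{
    get
    {
        // Reload from the store if the cache is empty or older than the interval.
        if (_itemsMemory == null || DateTime.Now - _lastCleared > ReloadInterval)
        {
            WriteNewCacheData();
        }
        return _itemsMemory.AsQueryable();
    }
}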

I gained a lot of performance just by using the implementation above in a project I worked on. It had many updates to objects, and not very many deletes or new ones.

Since I then cache all my DDS tables (or most of them), I don't need to cache the results of queries against them. So when updates are done, I don't need to invalidate any aggregate cache. That saves me a lot of worries :)
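
For example, a query like the one below runs against the in-memory list when the cache is enabled, so there is no separate result cache to keep in sync (MyItem, IsActive and SortIndex are placeholder names):

Code Snippet

// Runs against the cached list when Cache() is true, so the result needs no extra caching.
var active = MyItem.Items
    .Where(x => x.IsActive)
    .OrderBy(x => x.SortIndex)
    .ToList();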

I have uploaded the base class to the code section here.


Comments

May 28, 2012 08:12 AM

I have a similar setup when working with DDS.
Great inspiration how to improve it - Thanks!
