Hi George,
Have you tried the solution below, suggested by Episerver?
[HttpPost]
public ActionResult Remap()
{
    var errors = new StringBuilder();

    // Walk every store definition registered in the Dynamic Data Store.
    foreach (var storeDefinition in StoreDefinition.GetAll())
    {
        try
        {
            // Try to resolve the .NET type the store was originally mapped to.
            var type = TypeResolver.GetType(storeDefinition.StoreName);
            if (type != null)
            {
                // The type still exists: remap the store against it and persist the new mapping.
                storeDefinition.Remap(type);
                storeDefinition.CommitChanges();
            }
            else
            {
                // No matching type found: re-save the existing definition as-is.
                // _dataStoreProviderFactory is an injected IDataStoreProviderFactory (see the linked post).
                var provider = _dataStoreProviderFactory.Create();
                provider.SaveStoreDefinition(storeDefinition);
            }
        }
        catch (Exception ex)
        {
            errors.AppendLine($"Error remapping '{storeDefinition.StoreName}': {ex.Message} {ex.StackTrace}");
        }
    }

    ViewBag.Error = errors.ToString();
    ViewBag.Success = string.IsNullOrEmpty(ViewBag.Error);
    return View("Index");
}
For more info, please refer to
https://marisks.net/2017/10/19/fixing-dds-mapping-issue/
Thanks
Ravindra
Hi Ravindra,
Interestingly, the issue seems to appear without any intervention or code changes. We had a production deploy on the 18th that went smoothly, then the issue started appearing 3-4 days later on some servers but not on others. All of the deployments are automated and the environments are identical.
I believe the remapping code you posted will do the same thing (on calling that controller action) as the suggested `autoRemapStores = "true"` setting in the web.config (on startup).
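For reference, and from memory (so please double-check against the Episerver documentation), that setting sits under episerver.dataStore in web.config, roughly like this:

    <episerver.dataStore>
      <dataStore>
        <dataSettings autoResolveTypes="true" autoRemapStores="true" />
      </dataStore>
    </episerver.dataStore>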
While it might be a step in the right direction, I don't think it addresses the root cause of why we are running into this issue.
Also, I'm hesitant to suggest remapping the stores when the same shared CMS DB (and therefore the same DDS) works fine on some server instances, works briefly on others, or plainly refuses to run at all.
From the looks of it, `EPiServer.ApplicationModules.Security.SiteSecret` is an internal value that somehow goes off the rails and takes the DDS with it.
Any further advice would be appreciated.
Thanks,
Jacob.
Hi Hassan,
The dynamic data store allows us, as developers, to store compile-time and run-time data types. This can be handy when you don't know for sure the type of class you might need to store.
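For anyone new to it, a minimal sketch of typical DDS usage looks roughly like this (the class and property names are made up for illustration):

    using EPiServer.Data;
    using EPiServer.Data.Dynamic;

    // Hypothetical POCO stored in the DDS. AutomaticallyRemapStore asks DDS to
    // remap the store when the class definition changes between deployments.
    [EPiServerDataStore(AutomaticallyCreateStore = true, AutomaticallyRemapStore = true)]
    public class VisitorNote
    {
        public Identity Id { get; set; }
        public string Text { get; set; }
    }

    // Saving and reading items:
    var store = typeof(VisitorNote).GetOrCreateStore();
    var id = store.Save(new VisitorNote { Text = "Hello DDS" });
    var note = store.Load<VisitorNote>(id);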
Hi,
Did you find a solution to this problem?
We are having an issue where we cannot reach the /episerver/cms interface (500 error) and are seeing the same error as you in our logs. We tried adding autoRemapStores = "true" in web.config, but it did not solve the problem.
For us it also happened a week after our latest deploy and only in our production environment; we cannot reproduce it in our test environments.
We are also seeing this error in our logs (not sure if it's a cause or a symptom of another problem):
Module 'EPiServer.Events.EventsInitialization, EPiServer.Events, Version=11.10.4.0, Culture=neutral, PublicKeyToken=8fe83dea738b45b7' throw exception during Uninitialize due to shutdown
System.NullReferenceException: Object reference not set to an instance of an object.
   at EPiServer.Azure.Events.AzureEventProvider.Uninitialize()
   at EPiServer.Events.Providers.Internal.EventProviderService.Uninitialize()
   at EPiServer.Events.EventsInitialization.Uninitialize(InitializationEngine context)
   at EPiServer.Framework.Initialization.Internal.ShutdownTracker.<Stop>b__8_1(IInitializableModule m)
Thanks! /Mia
Hi Mia,
We have run into the issue three times since the original post: twice on production and once on our QA servers. I managed to save a copy of the "broken" QA database and set up a standalone environment on QA to try to fix it.
We are currently investigating further and we have an open support case with Episerver. I'll keep this thread updated with a fix if we find one.
Jacob.
A quick update to this thread,
We've found a SQL index on tblSystemBigTable, called nci_wi_tblSystemBigTable_4FD780B5ECF451D08C02458505EE0B76 in our case. Deleting this index brings the sites back up; adding it back in brings back the remapping issue.
We're trying to identify where this index comes from, as we don't have any code that manually adds indexes. Currently, we're investigating whether it comes from Azure somewhere as part of monitoring/performance optimization.
EDIT: We've now confirmed this index is created by Azure SQL auto-tuning. Its creation timestamp matches when the DDS mapping issue started appearing on the servers. Removing the index and disabling that auto-tuning action seems to be working.
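If anyone else wants to check for this, something along these lines should list and remove auto-created indexes on the DDS table and switch off automatic index creation (written from memory and untested here, so verify against the Azure SQL docs before running it in production):

    -- List indexes on tblSystemBigTable that Azure SQL auto-tuning created
    SELECT name, auto_created
    FROM sys.indexes
    WHERE object_id = OBJECT_ID('dbo.tblSystemBigTable') AND auto_created = 1;

    -- Drop the offending index (the name will differ per database)
    DROP INDEX nci_wi_tblSystemBigTable_4FD780B5ECF451D08C02458505EE0B76 ON dbo.tblSystemBigTable;

    -- Stop auto-tuning from creating new indexes
    ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = OFF);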
Mia, are you using Azure SQL Databases?
Cheers,
Jacob.
Hi Jacob,
We found the same index in our db and deleting it also solved our problem :)
Thank you so much for your help!
/Mia
We are receiving the following error:
We have tried the suggested actions, but there is no change. Frustratingly, it occasionally works after multiple application pool recycles, only to fail again about 15 minutes later.
Has anyone experienced similar issues and can offer a resolution?
Thanks