Hello everyone! We are currently experiencing an issue that has been happening more and more frequently recently, and we don't really know how to track it down...
For some reason our application pool decides to restart about 1-2 times a week nowadays, and once it has successfully restarted once it only runs for 5-10 minutes before restarting again, in a restart loop. It keeps doing this until we perform an IIS reset or restart the entire web server. Once we do this the problem always stops.
In order to try and figure out why the application is restarting I have added some code to catch the reason in the "Application_End" event in Global.asax.cs:
protected void Application_End()
{
    // Reflect into HttpRuntime's private singleton to read the shutdown reason
    HttpRuntime runtime = (HttpRuntime) typeof(HttpRuntime).InvokeMember("_theRuntime",
        BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.GetField, null, null, null);
    if (runtime == null)
        return;
    string shutDownMessage = (string) runtime.GetType().InvokeMember("_shutDownMessage",
        BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null);
    string shutDownStack = (string) runtime.GetType().InvokeMember("_shutDownStack",
        BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null);
    log.Fatal(" ==== APPLICATION END EVENT ====" + Environment.NewLine + shutDownMessage + Environment.NewLine
        + shutDownStack + " ===============================");
}
With this code in place we see this message in our logs each time the application restarts:
FATAL Piab.CMS.Global: ==== APPLICATION END EVENT ====
BuildResult change, cache key=cindexingservice.svc.1f180bcd
HostingEnvironment initiated shutdown
HostingEnvironment caused shutdown

It's always the same message each time; even the "1f180bcd" part seems to be static. The only file called "IndexingService.svc" in our project comes from the EPiServer.Search package and is located at the \IndexingService\IndexingService.svc path. I'm not sure if this means our problem is somehow related to indexing/search and the EPiServer.Search package, or if the restarts would still happen with a different message if we removed it.
We have a testing/staging server but unfortunately we've never seen this error there and we've never been able to reproduce it ourselves.
I have googled all I can for similar error messages, but the only thing I find is the Microsoft .NET reference source code that returns the "BuildResult change, cache key=" part of the message I now see in the logs...
Has anyone of you seen this type of error, and/or does anyone have any idea about how to proceed troubleshooting something like this?

No modified dates are changing on this file itself or any other file in the "bin" folder... We've also tried to disable the IIS file system watcher completely (so dynamic recompilations should not be happening?):
<httpRuntime fcnMode="Disabled" ... />
Since we disabled this, the application no longer restarts automatically if we manually edit a .cshtml file, for example, so IIS should not be monitoring any file modified/created dates and kicking off dynamic recompilations because of them...
Were you able to resolve this?
We are experiencing the same issue in our EpiServer CMS Admin environment, which is hosted as an Azure App Service. Our Admin environment uses EpiServer's Search (Lucene index, not Epi Find) for indexing and searching of Media content. Coincidentally we had also added shutdown logging on Application_End event:
2019-08-29 14:42:46,681  FATAL : Application is shutting down. Host: RD0003FF6F3B8B Process: 3884 Reason: BuildManagerChange
2019-08-29 14:42:46,692  FATAL : Application is shutting down continued. Host: RD0003FF6F3B8B Message: BuildResult change, cache key=cindexingservice.svc.1f180bcd
HostingEnvironment initiated shutdown
HostingEnvironment caused shutdown
Similarly, this happens once or twice a week - but in our case, it doesn't seem to get into a restart loop as you describe.
It does, however, often fail to shut down the existing .NET AppDomain, so we often find multiple AppDomains running, all processing the EpiServer scheduled jobs and the queued Epi Find index tasks, which causes us real headaches with old versions of content appearing in our Epi Find index.
So far, we haven't tried much to address this; similarly, we've found very little information online about BuildManagerChange events and how they work.
But it's interesting that disabling the IIS file system watcher didn't work. I had suspected the cause might be the shared disk on the Azure App Service falsely raising a file system change event on the compiled output of the IndexingService.svc file. I also suspected that '1f180bcd' is a hash of the IndexingService.svc contents.
So my next step in troubleshooting this (when I find time) was to dive into HttpRuntime.Cache, see if the key 'cindexingservice.svc.1f180bcd' is present with a file cache dependency, and get the path to that file.
If I find anything I will post here.
Hello!

No, this is very much still an ongoing issue for us. It feels like it's been happening for almost a year or more, but it's been happening more and more as of late. I don't think this was causing application restart loops at first for us either; 1-2 restarts a week would just pause the website for 1-2 minutes and wasn't necessarily noticed much or reported back to us. But something we updated, or something we changed while trying to track this issue down, has made it much worse for us with the restart loop. As I explained in my first post, once this issue happens once for us, the application only lives for 5-10 more minutes before it happens again. Only a full IISReset will allow the application to run fine for a couple more days before the issue occurs again.

I have already tried to search for the cache key 'cindexingservice.svc.1f180bcd' inside HttpRuntime.Cache, but I have never found it there. I suspect that this key is stored inside "HttpRuntime.Cache.InternalCache", but that is an internal member of the .NET Framework which I don't think we can access :-/. That member is used a lot in this file at least: https://referencesource.microsoft.com/#system.web/Compilation/BuildResultCache.cs,7b600c15d3b0aebb and on line 274 there you can see the code that decides to print the "BuildResult change, cache key=" message.

I'm glad, though, that we're not the only ones with this issue. Perhaps it is related to EPiServer or EPiServer.Search, since we have both of those components in common. The cache key mentions a file used by the EPiServer.Search Lucene indexer, after all.

Any updates if you find more information are very welcome, and I promise to keep this thread updated as well. We have also registered this bug with our EPiServer partner, which is currently trying to investigate it, but we've had no success so far.
I wrote some utility code that uses reflection to pull out details of the cached item stored in "HttpRuntime.Cache.InternalCache", which may help you debug:

var internalCacheMember = typeof(System.Web.Caching.Cache).GetProperty("InternalCache",
    BindingFlags.Instance | BindingFlags.NonPublic);
var internalCache = internalCacheMember.GetValue(System.Web.HttpRuntime.Cache) as CacheStoreProvider;
// Look up the BuildResult entry named in the shutdown message
var buildResult = internalCache.Get("cindexingservice.svc.1f180bcd");
// BuildResultCompiledAssembly is internal to System.Web, so reflect to read its ResultAssembly property
var resultAssemblyMember = typeof(System.Web.Compilation.AssemblyBuilder).Assembly
    .GetType("System.Web.Compilation.BuildResultCompiledAssembly")
    .GetProperty("ResultAssembly", BindingFlags.Instance | BindingFlags.NonPublic);
var tempAssembly = resultAssemblyMember.GetValue(buildResult) as System.Reflection.Assembly;
logger.Log("Location: " + tempAssembly.Location);
The result I got was a path to a generated assembly stored in the Temporary ASP.NET Files folder, which is difficult to monitor in our environment:
Location: D:\local\Temporary ASP.NET Files\root\b7011411\71ab0411\App_Web_3nj3gt4f.dll
In our case I think it might be something to do with our hosting provider (Azure) falsely reporting a disk change to this file, which triggers the cache dependency to evict the cached item "cindexingservice.svc.1f180bcd", likely because of some quirk in how Azure App Service handles storage of local temporary files. Out of interest, are you hosting in Azure as well?
Did you make any progress with your EpiServer partner?
Cool! I didn't realize we could access items from the InternalCache. We received a suggestion from our partner to delete all files in this folder (Temporary ASP.NET Files), as that sometimes solves weird errors. Each time we publish new code through our continuous integration server, it stops IIS, runs a script to delete everything there, publishes, and then restarts IIS. So unless those steps or the script aren't fully working, that should already be happening pretty frequently.
We are not hosting the site on Azure but on a Windows 2016 server we are managing ourselves. The location for us is:
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\root\c28f5259\e22ef605
We don't have an App_Web_3nj3gt4f.dll file in there, though (but tons of other App_Web_xxxxxxxx.dll files). I suspect that the c28f5259\e22ef605\ and 3nj3gt4f parts of these paths are not static.
It looks like most files in that folder have a created and modified date equal to the last iisreset I had to do this Tuesday. Some of them have newer dates, though.
We are still waiting for our partner to do further investigation, but this is good stuff. Perhaps we need to monitor whether some other process is accessing or changing files in the Temporary ASP.NET Files folder or something... It seems Azure is NOT something we have in common, though, but perhaps we can find something else...
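If we do go down the monitoring route, a minimal console sketch along these lines could log file activity under that folder. The path is the one from my earlier post; the class name and filter choices are just my own guesses, and on a machine with lots of churn this will be noisy:

```csharp
using System;
using System.IO;

// Sketch: watch the Temporary ASP.NET Files folder and log every change,
// to see whether some other process is touching the generated assemblies.
class TempAspNetWatcher
{
    static void Main()
    {
        var watcher = new FileSystemWatcher(
            @"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files")
        {
            IncludeSubdirectories = true,
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite | NotifyFilters.CreationTime
        };

        FileSystemEventHandler log = (s, e) =>
            Console.WriteLine($"{DateTime.Now:o} {e.ChangeType}: {e.FullPath}");
        watcher.Changed += log;
        watcher.Created += log;
        watcher.Deleted += log;
        watcher.Renamed += (s, e) =>
            Console.WriteLine($"{DateTime.Now:o} Renamed: {e.OldFullPath} -> {e.FullPath}");

        watcher.EnableRaisingEvents = true;
        Console.WriteLine("Watching... press Enter to stop.");
        Console.ReadLine();
    }
}
```

Correlating the timestamps from this log with the Application_End log entries would at least tell us whether a disk change precedes each restart.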
Were you able to solve this? We're experiencing the same issue, often multiple times a day. We're not seeing the restart loop, but we are seeing frequent restarts that interrupt scheduled jobs etc.
_shutDownMessage=BuildResult change, cache key=cindexingservice.svc.1f180bcd
HostingEnvironment initiated shutdown
HostingEnvironment caused shutdown
_shutDownStack= at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
We're running Episerver on self-hosted Windows Server.
Now that's a weird bug. I'll keep an eye out for a solution...
Load balanced solution? Do you have a shared disk for temporary files?
Yes - we're doing load balancing between two public servers. This issue only occurs on a private server, used only by editors and for running scheduled jobs.
As far as I know, we do not have a shared disk for temporary files. Do you mean for instance
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\
As mentioned in the posts above?
It might be a long shot, but I suggest trying:
.NET stores the built assembly in an in-memory cache in the background. It does this for IndexingService.svc too (which is compiled into an assembly in the background).
Looks like the cache entry was cleared because a dependency changed: either the file itself or the path to it.
The key used is "c" (for build result) + the assembly name (indexingservice.svc) + a hash of the path to that assembly.
It also adds a dependency in the background on the virtual path of that assembly, so if that changes for some reason, the cache item is cleared, which triggers a shutdown. If you evict that cache key manually, it will also trigger a shutdown with that message. Not sure why it's being triggered in your case, though.
to the CMS Web.config file did not help, unfortunately. I'll try to add it to the Commerce Web.config also, just in case.
We're suspecting (but not certain) that this problem started a while ago when our hosting provider migrated our virtual machines to another virtualization platform. They created snapshots of the machines running Episerver and moved them, so the web servers should be identical.
Load balancing might be a common factor, maybe. The documentation says:
"In a load-balanced environment, the supported scenario is to install the search service on one of the machines, and configure that machine in the search settings for other machines".
That might be worth double checking...
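For reference, that single-indexing-service setup is wired up in web.config. This is a sketch from memory, so treat the service name, URI, and access key below as placeholders rather than the exact schema:

```xml
<episerver.search active="true">
  <namedIndexingServices defaultService="serverA">
    <services>
      <!-- All load-balanced machines point at the ONE machine hosting the indexing service -->
      <add name="serverA"
           baseUri="https://server-a.example.com/IndexingService/IndexingService.svc"
           accessKey="local" />
    </services>
  </namedIndexingServices>
</episerver.search>
```

If each machine were running (and recompiling) its own copy of IndexingService.svc, that would be one more variable to rule out.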
We are trying a solution where we moved away from a run-time-compiled IndexingService.svc to a pre-compiled service host. So far it seems to be working.
This github gist shows the changes we made:
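In broad strokes, the idea is to make the .svc file nothing but a pointer to a class compiled into the site's own assembly, so there is nothing left for ASP.NET to build dynamically at runtime. The namespace and type name below are illustrative only; see the gist for the actual ones:

```
<%@ ServiceHost Language="C#" Service="MySite.Search.PrecompiledIndexingService" %>
```

Since the Service attribute now references a type in an already-compiled DLL, the "cindexingservice.svc" BuildResult cache entry (and its file dependency) should never be created.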
Thank you for writing up your solution! I have made and deployed the changes, so we'll see in a day or two if this solved our issues as well.
In addition to following the steps in your gist, I also had to ignore the route to IndexingService using:
I suspect that this is because we have aspNetCompatibilityEnabled = True in our config file:
<serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true">
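The ignore-route call looks roughly like this in our Global.asax.cs. The route pattern here is illustrative; it needs to match wherever IndexingService.svc actually sits in your project:

```csharp
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Stop MVC routing from intercepting requests to the WCF service,
        // which is needed when aspNetCompatibilityEnabled="true"
        RouteTable.Routes.Ignore("IndexingService/{*pathInfo}");
    }
}
```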
This seems to have solved all our issues with unexpected application restarts.
Thanks a lot!
Great! Yeah, the issue seems to be that the code inside the .svc file is compiled when it is hit at runtime. Moving it to a code-behind file puts it in a normal DLL that isn't dynamically compiled. That is probably the best workaround.
Another setting to play around with, if you are seeing app pool recycling, is numRecompilesBeforeAppRestart:
<compilation targetFramework="4.6.2" numRecompilesBeforeAppRestart="50" />
This defaults to 15, I think, so if you have a large site with a lot of dynamically compiled files (.cshtml, .ascx etc.) this is a good setting to increase a bit, or you will get some unnecessary app pool restarts.