November Happy Hour will be moved to Thursday December 5th.
If you can export locally, running a profiler like dotTrace would reveal a lot of information, including why it stalls.
I'd be happy to take a look at the trace.
Unfortunately I can't do it locally, as we don't have much content there.
In theory you could grab a production DB and run the test locally.
But the stall is likely caused by a garbage collection, maybe Gen 2. Do you have something like a profiler trace enabled, or can you capture a memory dump?
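To check the garbage-collection theory, a memory dump or live GC counters would show whether Gen 2 collections line up with the stall. A sketch of the usual tooling, assuming an IIS worker process (`w3wp.exe`) for .NET Framework sites, or the cross-platform `dotnet-dump`/`dotnet-counters` tools for .NET (Core) sites; `<pid>` is a placeholder for the actual process id:

```
:: .NET Framework under IIS: capture a full memory dump of the worker process
procdump -ma w3wp.exe export-stall.dmp

:: .NET (Core): watch GC counters live, then collect a dump when it stalls
dotnet-counters monitor --process-id <pid> System.Runtime
dotnet-dump collect --process-id <pid>
```

The resulting dump can be opened in dotMemory, Visual Studio, or WinDbg to inspect heap generations at the moment of the stall.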
All in all, this should be handled via our support channel.
Hey,
For a client I need to do a larger content export of 100k+ content items, and it stalls after ~80k items every time (the exact same count on every run).
There are no log entries about this at all, and I already have the following set:
maxAllowedContentLength="154857600"
maxRequestLength="302400"
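For reference, these two settings normally live in different sections of web.config, and their units differ: maxRequestLength is in kilobytes, while maxAllowedContentLength is in bytes, so the values above allow roughly 295 MB and 147 MB respectively. A minimal sketch of where they sit:

```
<configuration>
  <system.web>
    <!-- maxRequestLength is measured in KB (302400 KB ≈ 295 MB) -->
    <httpRuntime maxRequestLength="302400" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is measured in bytes (≈ 147 MB) -->
        <requestLimits maxAllowedContentLength="154857600" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```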
Any ideas on how to do this? And where is the package stored on the server while it is being generated? I see a massive amount of disk space being used, but I can't seem to find the temp files.
Regards,
Klaus