Probably because the streams are not handled correctly and every image is being read into an in-memory byte array somewhere. Might be in the SqlBlobProvider, I guess...
This will fill up the heap with many images, and since the byte arrays for images are large, the garbage collector will be less effective. Just guessing, mind you :)
It would seem that you're working with pretty large files in order to get an OOM exception. Is that something you could check?
/Steve
Running it in batches will probably work as a workaround. Copy 100 at a time...
It's not writing everything at once, but each file in a loop. It could be that the GC doesn't release the memory in time, so maybe force a GC once in a while during the import (if this is a one-time thing).
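Something along these lines, as a rough sketch (the batch size of 50 and the itemsToImport/ImportImage names are just placeholders for whatever the job actually does):

    var processed = 0;
    foreach (var item in itemsToImport)
    {
        ImportImage(item); // whatever the job does for a single image

        // Force a full collection every 50 images so the large blob buffers get reclaimed.
        if (++processed % 50 == 0)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
        }
    }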
/Steve
Yes, but maybe limit the scheduled job to run 100 items at a time in a loop... wait 10 minutes to let the GC work... then run it again to handle 101-200, etc.
Ugly but will probably work :)
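Roughly like this (purely illustrative - the item list, batch size and ImportImage call are placeholders, not your actual code):

    const int batchSize = 100;
    for (var offset = 0; offset < items.Count; offset += batchSize)
    {
        foreach (var item in items.Skip(offset).Take(batchSize))
        {
            ImportImage(item);
        }

        // Give the GC time to reclaim memory before starting the next batch.
        Thread.Sleep(TimeSpan.FromMinutes(10));
    }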
Can you show the relevant code in the scheduled job? Maybe you are keeping references to the objects, which prevents them from being GCed?
You can force the GC, so you don't have to wait for so long :-) Ugly, but sometimes ugly is what it takes!
Hi guys,
Thank you all for the answers!
Sorry for the late reply.
I was working on other issues, but now I'm back to this one.
So, here you can see part of our job code:
if (item.File == null)
{
    var overview = item.Detail.Overview;
    var imageFolder = overview.ImagesFolder.ContentLink;
    var imageFile = _contentRepository.GetDefault<ImageFile>(imageFolder);
    var extension = Path.GetExtension(item.Url);
    var resolutions = item.Variants
        .GroupBy(variant => new { variant.Width, variant.Height })
        .Select(group => group.Key);
    imageFile.Name = string.Join("_", resolutions.Select(resolution => resolution.Width + "-" + resolution.Height)) + extension;

    try
    {
        using (var client = new WebClient())
        {
            var myUri = new Uri(item.Url, UriKind.Absolute);
            var blob = _blobFactory.CreateBlob(imageFile.BinaryDataContainer, extension);

            using (var stream = client.OpenRead(myUri))
            {
                try
                {
                    // Number of times called: 846
                    // The exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll
                    blob.Write(stream);
                }
                catch (OutOfMemoryException ex)
                {
                }
            }

            imageFile.BinaryData = blob;
            _contentRepository.Save(imageFile, SaveAction.Publish);
            item.File = imageFile;
        }
    }
blob.Write(stream); is the key place
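For reference, one variation I might try (just a sketch, assuming Blob.OpenWrite() behaves the same with the SqlBlobProvider as it does with the file provider) is to copy the response stream straight into the blob's write stream instead of going through blob.Write():

    using (var client = new WebClient())
    using (var source = client.OpenRead(new Uri(item.Url, UriKind.Absolute)))
    using (var target = blob.OpenWrite())
    {
        // Copy the download directly into the blob's write stream
        // instead of calling blob.Write(stream).
        source.CopyTo(target);
    }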
The size of the image files is around 3 MB (maximum).
Regarding leaving more time for the GC, I tried Thread.Sleep(1000) after every single image - without success...
Splitting the work into batches will definitely do the job, but it'll cost more refactoring time. The strange thing here is that it works fine with the default provider (EPiServer.Framework.Blobs.FileBlobProvider), which makes me think that maybe the DB connection is the reason for the OOM - fewer connections to the DB, more data in the heap...
Here you can see some memory measurements - http://prntscr.com/d8dap5
Hello,
I have a scheduled job that tries to save several thousand images into the EPi CMS, and I want to use the SqlBlobProvider to achieve this because of the many benefits it provides.
When I use the default EPi provider (EPiServer.Framework.Blobs.FileBlobProvider), the scheduled job completes without any problems - around 4000 images are saved to local file storage.
But when I switch to the SqlBlobProvider, an OutOfMemoryException occurs and the job fails.
(EPiServer version - 9.12.3)
Do you have any suggestions?