<h1>Blog posts by Magnus Rahl on Optimizely World</h1>
<h1><a href="/blogs/Magnus-Rahl/Dates/2023/11/support-for--net-8/">Support for .NET 8</a></h1>
<p><em>2023-11-14</em></p>
<p>As .NET 8 is released at <a href="https://www.dotnetconf.net/">.NET Conf</a> today, the teams and I are happy to announce support for .NET 8 in Optimizely products!</p>
<p>We have tested the latest CMS, Commerce and Search &amp; Navigation packages, as well as the most common Optimizely-provided add-ons, using the .NET 8 release candidates. After a quick verification with the RTM version of .NET 8, I can now officially say that we support .NET 8.</p>
<p>Our DXP cloud service also supports running .NET 8 applications. Which runtime version is used is determined by the version you compile your project for, through the metadata the build writes to the [ProjectName].runtimeconfig.json file. If you want to, you can also update the transitive dependencies from our packages to the Microsoft.Extensions.* packages. We have tested running in .NET 8 both with the lowest dependency versions we require (6.0) and with the 8.0 versions of those packages.</p>
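<p>Concretely, the runtime selection follows from the target framework in your project file. A minimal sketch (the project name "MySite" is hypothetical, not from this post):</p>
<pre class="language-markup"><code><!-- MySite.csproj: compile the project for .NET 8 -->
<PropertyGroup>
  <TargetFramework>net8.0</TargetFramework>
</PropertyGroup></code></pre>
<p>Building this emits a MySite.runtimeconfig.json with a framework entry along the lines of "name": "Microsoft.NETCore.App", "version": "8.0.0", which is the metadata the hosting environment reads to pick the .NET 8 runtime.</p>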
<p>As <a href="/link/325febcc52684a61b2c0982caf39890a.aspx">mentioned before</a>, we continue to release assemblies compiled for .NET 6, since, now that we have verified compatibility, we as library authors do not need to change our compilation target for you to run on .NET 8. We will change our compilation target too in the future, most likely as part of a future major version release.</p>
<p><span>If you run into any issues, as usual </span><a href="https://www.optimizely.com/support/">contact support</a><span>.</span></p>
<p>Now, go <a href="https://get.dot.net/8">get .NET 8</a> and build even better applications with Optimizely!</p>
<h1><a href="/blogs/Magnus-Rahl/Dates/2023/10/update-on--net-8-support/">Update on .NET 8 support</a></h1>
<p><em>2023-10-03</em></p>
<p>With .NET 8 now in release candidate stage, I want to share an update on our current thinking about .NET 8 support in Optimizely CMS and Customized Commerce and our DXP cloud platform.</p>
<p>With the .NET 8 RC1 we have done basic verification of the compatibility of CMS 12 and Commerce 14 in site projects compiled against .NET 8 and running in a .NET 8 runtime in our DXP platform. So far so good.</p>
<p>We will continue to do more thorough testing of our packages running in .NET 8 before we officially announce .NET 8 compatibility. This might not be in time for the .NET 8 release in November, but keep in mind that .NET 6 is fully supported by Microsoft until November 2024. .NET 7 as an STS release is supported until May 2024. Our goal is to provide full .NET 8 support in time for that, so that any customers currently running on .NET 7 can continue to be fully supported. </p>
<p>As you probably know, and as also described in <a href="/link/c6bc57aaf7fb4b8b929e167534282e47.aspx">the post announcing .NET 7 support</a>, modern .NET has a high level of backward compatibility. A project can compile and run against a newer .NET version than libraries it references are compiled against. Supporting .NET 8 does not require a library assembly changing its compilation target to .NET 8, unless it is affected by breaking changes in .NET 8.</p>
<p>For these reasons, we will continue to ship CMS 12 and Commerce 14 with assemblies compiled for .NET 6. We will not add .NET 8 compiled assemblies unless that is the only way to resolve a regression bug resulting from running in .NET 8. We will eventually switch our compilation target to .NET 8, but most likely as part of a future major version release.</p>
<h1><a href="/blogs/Magnus-Rahl/Dates/2023/9/vulnerability-in-cms-12-shell-module-configuration/">Vulnerability in CMS 12 shell module configuration</a></h1>
<p><em>2023-09-28</em></p>
<h2>Introduction</h2>
<p>A potential security vulnerability has been identified in Optimizely CMS 12, triggered by a certain shell module configuration. To be vulnerable the application needs to fulfil these conditions:</p>
<ul>
<li>Use EPiServer.CMS and/or EPiServer.CMS.UI.Core packages of version 12.0.0 or higher.</li>
<li>Have a module.config file in <span style="text-decoration: underline;">the root folder</span> of the deployed application. A module.config file inside a separate module directory/zip is <strong>not</strong> affected; only a file in the application root is.</li>
<li>Have the clientResourceRelativePath attribute present on the &lt;module&gt; element in the module.config file, set to an empty string.</li>
</ul>
<p>If <span style="text-decoration: underline;">all</span> these conditions are met there is a potential vulnerability allowing an attacker using an <span style="text-decoration: underline;">authenticated account</span> to access sensitive data in the application.</p>
<h2>Risk</h2>
<p>The risk of this vulnerability is high. Mitigation is in place for all DXP service customers.</p>
<h2>Affected versions</h2>
<p>All versions of EPiServer.CMS.UI.Core from 12.0.0 and higher. Note that the project might not reference this package directly; it may instead be a transitive dependency of, for example, the EPiServer.CMS package of the same version.</p>
<h2>Remediation</h2>
<p>The underlying issue has been fixed in <a href="/link/1933ba72787346df9003b7a4c7d1cff8.aspx?epsremainingpath=bug/CMS-30098">CMS-30098</a>. Update to version 12.22.8 or later of the EPiServer.CMS (or EPiServer.CMS.UI or EPiServer.CMS.UI.Core, depending on which package you reference directly) to receive this fix.</p>
<p>As an alternative workaround, if you cannot take an update right now, edit the module.config to remove the clientResourceRelativePath attribute completely instead of leaving it set to an empty string.</p>
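<p>A simplified before/after sketch of the workaround (all other module.config elements and attributes are omitted here; your file will contain more):</p>
<pre class="language-markup"><code><!-- Vulnerable: the attribute is present but set to an empty string -->
<module clientResourceRelativePath="">
  <!-- assemblies, client resources etc. omitted -->
</module>

<!-- Workaround: the attribute is removed entirely -->
<module>
  <!-- assemblies, client resources etc. omitted -->
</module></code></pre>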
<p>Depending on the setup of the specific shell module and module.config, either of these changes might mean you also need to change the location of files in the paths referenced by the module.config, and/or change how paths to those files are resolved in the application to match.</p>
<p>These module files are related to editor customizations, scripts and stylesheets used in shell modules, so make sure to test such customizations. Please contact support if you run into a scenario you cannot solve.</p>
<h2>Questions</h2>
<p>Please contact application support at <a href="mailto:support@optimizely.com">support@optimizely.com</a></p>
<h2>Risk definitions</h2>
<p>Low – little to no potential impact on Optimizely or customer environments/data. Vulnerability has low exploitability, for example: requirement for local or physical system access, zero reachability to/executability within Optimizely products/code.</p>
<p>Medium – some potential impact on Optimizely or customer environments/data. Vulnerability has medium exploitability, for example: requirement to be located on the same local network as the target, requirement for an individual to be manipulated via social engineering, requirement for user privileges, vulnerability achieves limited access to Optimizely products/code.</p>
<p>High – high potential impact on Optimizely or customer environments/data. Vulnerability has high exploitability, for example: achieves high level access to Optimizely products/code, could elevate privileges, could result in a significant data loss or downtime.</p>
<p>Critical – very significant potential impact on Optimizely or customer environments/data. Vulnerability has very high exploitability, for example: achieves admin/root-level access to Optimizely products/code. Vulnerability does not require any special authentication credentials/knowledge of Optimizely products/environments.</p>
<h1><a href="/blogs/Magnus-Rahl/Dates/2023/8/xss-vulnerability-in-cms-11-and-12/">XSS vulnerability in CMS 11 and 12</a></h1>
<p><em>2023-08-16</em></p>
<h2>Introduction</h2>
<p>A potential security vulnerability was detected for Optimizely CMS that could affect CMS 11 installations before v11.37.1 and CMS 12 installations before v12.16.0. </p>
<ul>
<li>In a CMS 11 installation where request validation has been disabled, the vulnerability allows execution of JavaScript included in a manipulated URL, making it possible to run arbitrary JavaScript code in the context of the logged-in user.</li>
<li>In a CMS 12 installation, the vulnerability allows execution of JavaScript included in a manipulated URL.</li>
</ul>
<h2>Example attack</h2>
<p>In CMS 11, when request validation has been either completely or partially disabled by configuring requestValidationMode in the application's web.config file, harmful requests are allowed to reach the application.</p>
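<p>For illustration, this is one common way request validation gets partially disabled in a classic ASP.NET web.config (an example configuration, not taken from this post; check your own file for the actual value):</p>
<pre class="language-markup"><code><configuration>
  <system.web>
    <!-- Reverts to ASP.NET 2.0 validation behavior: validation only runs
         for .aspx pages, and pages can opt out with ValidateRequest="false" -->
    <httpRuntime requestValidationMode="2.0" />
  </system.web>
</configuration></code></pre>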
<p>An attacker provides a manipulated URL that includes harmful JavaScript code that a user can interact with. After successful authentication and authorization, the supplied JavaScript is executed in the context of the browser, the web application, and the permissions of the user. The attack is only possible for authenticated users and in installations where request validation is completely or partially disabled.</p>
<h2>Risk</h2>
<p>Overall, the risk of the vulnerability is <strong>low-medium</strong>. The attack is only possible for authenticated users and requires user interaction to execute. The issue was fixed in CMS v11.37.1 (<a href="/link/1933ba72787346df9003b7a4c7d1cff8.aspx?epsremainingpath=bug/CMS-28190">CMS-28190</a>) and CMS v12.16.0 (<a href="/link/1933ba72787346df9003b7a4c7d1cff8.aspx?epsremainingpath=bug/CMS-26236">CMS-26236</a>). Mitigation is in place for all DXP service customers.</p>
<h2>Remediation</h2>
<ul>
<li><span>If using CMS 11, please update Optimizely CMS to the latest <a href="https://nuget.optimizely.com/package/?id=EPiServer.CMS.UI&v=11.37.1">version</a></span><span>.</span></li>
<li><span>If using CMS 12, please update to the latest <a href="https://nuget.optimizely.com/package/?id=EPiServer.CMS&v=12.16.0">version</a>.</span></li>
<li><span>As a general best practice, it is recommended to restrict the number of users with admin privileges.</span></li>
</ul>
<h2><span>Questions</span></h2>
<p><span>Please contact the security engineering team at <a href="mailto:securityeng@optimizely.com">securityeng@optimizely.com</a>.</span></p>
<h2><span>Risk definitions</span></h2>
<p>Low – little to no potential impact on Optimizely or customer environments/data. Vulnerability has low exploitability, for example: requirement for local or physical system access, zero reachability to/executability within Optimizely products/code.</p>
<p>Medium – some potential impact on Optimizely or customer environments/data. Vulnerability has medium exploitability, for example: requirement to be located on the same local network as the target, requirement for an individual to be manipulated via social engineering, requirement for user privileges, vulnerability achieves limited access to Optimizely products/code.</p>
<p>High – high potential impact on Optimizely or customer environments/data. Vulnerability has high exploitability, for example: achieves high level access to Optimizely products/code, could elevate privileges, could result in a significant data loss or downtime.</p>
<p>Critical – very significant potential impact on Optimizely or customer environments/data. Vulnerability has very high exploitability, for example: achieves admin/root-level access to Optimizely products/code. Vulnerability does not require any special authentication credentials/knowledge of Optimizely products/environments.</p>
<h1><a href="/blogs/Magnus-Rahl/Dates/2023/5/demystifying-cms--commerce-cache-memory-usage/">Demystifying CMS &amp; Commerce Cache Memory Usage</a></h1>
<p><em>2023-05-15</em></p>
<p>We occasionally receive questions or escalations to support regarding the memory usage of Optimizely CMS and Customizable Commerce applications. These are questions along the lines of "why is it using so much memory" or "do you have a memory leak".</p>
<p><strong>TL;DR:</strong> Your application pushing the server up to 90% memory utilization isn't necessarily a problem. It is likely <em>by design - we put that memory to work!</em></p>
<h2>In-memory Caching and its Benefits</h2>
<p>We use SQL Server to store the structured data in CMS and Commerce (and Blob storage for assets/binaries). SQL Server is a very high-performance system for this type of application, and we have developed quite efficient usage of it over the years. But like any other out-of-process system, it cannot compete with in-process memory access, which can be several orders of magnitude faster. On the other hand, while memory is cheaper than ever, it is still more expensive than persistent storage (which ultimately backs a database). So we don't have unlimited amounts of it.</p>
<p>But assume that, in a given period, a small subset of all data (pages, products...) is accessed much more frequently than the rest, following an 80/20-like or exponential drop-off distribution. Given that characteristic, we can dramatically decrease the number of calls to the database by keeping just that smaller subset of the total data in memory instead of fetching it from the database every time it is requested. This is how in-memory caching can improve performance quite dramatically even though it only holds a subset of the total data.</p>
<h2>Cache Memory Usage</h2>
<p>So we basically want to cache as much of the frequently accessed data as the available memory allows. But we cannot let the cache grow without bounds, or the application will run out of memory. The cache is also less important than the memory allocated for actual processing in the application (like serving requests), so we want to make sure there is always some headroom for that. So we <em>trim</em> (or <em>compact</em>) the cache to keep memory usage under control. We aim to keep memory usage at or below a certain percentage of the total available memory; currently the <a href="https://docs.developers.optimizely.com/content-management-system/docs/monitoring-memorycache">default is 90%</a>.</p>
<p>If that limit is hit, the cache is trimmed to evict a fraction of its items, starting with the least recently used (LRU) items, until the trim target is met. The most frequently accessed items remain in the cache (unless, of course, they expire or are evicted because they change).</p>
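<p>The trim-to-target behaviour described above can be sketched in a few lines. This is a language-agnostic illustration only, not the actual CMS implementation, which triggers on GC-reported memory pressure rather than item counts:</p>

```python
from collections import OrderedDict

class TrimmableCache:
    """Minimal LRU cache that trims down to a target size when a limit
    is hit. Illustrative sketch only; the real cache measures memory
    pressure, not item counts."""

    def __init__(self, limit, trim_target):
        self.limit = limit              # eviction trigger (item count here)
        self.trim_target = trim_target  # size to trim down to
        self._items = OrderedDict()     # least recently used items first

    def get(self, key):
        value = self._items.get(key)
        if value is not None:
            self._items.move_to_end(key)  # mark as most recently used
        return value

    def set(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self.limit:
            self._trim()

    def _trim(self):
        # Evict least recently used items until the trim target is met.
        while len(self._items) > self.trim_target:
            self._items.popitem(last=False)

cache = TrimmableCache(limit=4, trim_target=2)
for k in "abcd":
    cache.set(k, k.upper())
cache.get("a")        # touch "a" so it becomes most recently used
cache.set("e", "E")   # exceeds the limit and triggers a trim
print(sorted(cache._items))  # → ['a', 'e']
```

Note that after the trim only the most recently used items survive, which is why the cache grows back with a potentially different working set, as described below.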
<h2>Interaction with Garbage Collection</h2>
<p>However, for the memory held by cached objects to actually be freed, they also have to be collected by the .NET Garbage Collector (GC). For this reason the cache trimmer always waits for a full GC run before it trims again. We leave the timing of the GC to the .NET runtime, which means you might not see an immediate reduction in memory usage after the cache has been trimmed. But generally it will happen soon after, because the GC also sees that the application is under memory pressure (and we actually piggyback on the GC's memory metrics to determine the memory load).</p>
<h2>Cache Growth and Trim Cycle</h2>
<p>After the cache is trimmed there will eventually be requests for items not in the cache (<em>cache misses</em>), and those items are put into the cache after being read from the database. So the cache starts growing again, but not necessarily with the same items (LRU, remember). What happens over time depends on the application and the data. Most applications will probably go through periodic cache trims with periods of cache growth in between, looking something like the image below (with three separate instances):<br /><br /><img src="/link/9e039167ff1140a7bc6fe977cfd25d94.aspx" width="1295" alt="Graph of memory consumption over time in an application undergoing cycles of cache growth and trims." height="612" /></p>
<h2>Conclusion and Next Steps</h2>
<p>As you have probably gathered by now, high memory consumption in a CMS/Commerce application isn't a bad thing in itself. It is probably doing its job and making the best use of the memory you have made available to it. </p>
<p>That said, it shouldn't run out of memory. Now that you know more about cache trimming, the configuration of memory thresholds and the interaction with the GC, you are also better equipped to troubleshoot other memory issues. If memory grows beyond the configured threshold for the in-memory cache, there is likely something else in the application leaking memory. A few additional hints that may help you troubleshoot (applicable to CMS 12 and above):</p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-7.0#configure-logging">Configure logging</a> on the <em>Information </em>level for the namespace <em>EPiServer.Framework.Cache</em>. You will be able to see messages for what memory pressure the cache trimmer sees, and what actions it determines from it.</li>
<li>You can adjust the <a href="https://docs.developers.optimizely.com/content-management-system/docs/monitoring-memorycache">configured limit</a> for the cache down to use less memory. But remember that it is still dependent on Garbage collections to actually free up memory, and the runtime may not do a full GC until it is under memory pressure.</li>
<li>Using a memory profiler and/or analyzing memory dumps is of course a good way to figure out what is using up your memory.</li>
</ul>
<h1><a href="/blogs/Magnus-Rahl/Dates/2023/2/cms-core-12-12-0-delisted-from-nuget-feed/">CMS Core 12.12.0 delisted from Nuget feed</a></h1>
<p><em>2023-02-03</em></p>
<p>We have decided to delist version 12.12.0 of the CMS Core packages from our Nuget feed. As a consequence, we are also temporarily delisting version 12.16.0 of the CMS UI packages, since those depend on having CMS Core 12.12.0 or higher, a dependency that can no longer be fulfilled. We will republish CMS UI 12.16.0 as soon as we have a patched 12.12.1 version of CMS Core out, which is estimated to be in a few days.</p>
<p>We have made this decision because of two issues in 12.12.0:</p>
<ul>
<li><a href="/link/1933ba72787346df9003b7a4c7d1cff8.aspx?epsremainingpath=bug/CMS-26554">CMS-26554</a>: The way database upgrade scripts are generated requires the names of updated objects to be [dbo] qualified when the update script is executed from our new CLI tool. Some objects turned out to lack this qualifier, which triggered this issue when they were updated, i.e. part of the update script. This slipped through our testing since automatic database upgrades done at site startup do not require this strict qualifier, only the CLI tool does.</li>
<li><a href="/link/1933ba72787346df9003b7a4c7d1cff8.aspx?epsremainingpath=bug/CMS-26514">CMS-26514</a>: There has been a change in lazy loading of properties that in certain scenarios causes an exception. It requires a certain combination of property size, configuration and executed operation to be triggered, which is why it wasn't caught in tests.</li>
</ul>
<p>We have seen a number of support tickets, primarily around the first issue, and to prevent further spread of the issue until we have a fix out we have decided to delist the affected versions.</p>
<p>If you are in the process of updating your solution to these versions, hold your updates for now until you can update to the patched version.</p>
<p>Our sincerest apologies for any inconveniences this might cause.</p>
<h3>Update 2023-02-06</h3>
<p>CMS Core 12.12.1 has been released and CMS UI 12.16.0 has been re-listed. However, installing EPiServer.CMS 12.16.0 still resolves version 12.12.0 of the CMS Core packages. To force it to use the patched version, you can add a reference to EPiServer.Hosting 12.12.1 to your project:</p>
<pre class="language-markup"><code><PackageReference Include="EPiServer.CMS" Version="12.16.0" />
<!-- EPiServer.Hosting is a dependency of EPiServer.CMS, but by default
version 12.12.0 will be resolved. This forces it to use version 12.12.1
of the Core CMS packages -->
<PackageReference Include="EPiServer.Hosting" Version="12.12.1" /></code></pre>
<p>We will release new versions of e.g. EPiServer.CMS that update the version requirements and we are also evaluating removing 12.12.0 completely to avoid issues.</p>
<h3>Update 2023-02-06</h3>
<p>We have removed Core 12.12.0 from the Nuget feed, which means 12.12.1 should be resolved instead. But 12.12.0 could still be in your local Nuget cache, in which case it will still be resolved. So if you see 12.12.0 getting resolved, clear your Nuget cache. You can do this from the Package Manager options in Visual Studio, or by calling:</p>
<pre>dotnet nuget locals all --clear</pre>
<p>We will also soon release a version 12.16.1 of the EPiServer.CMS package as well as the CMS UI packages with updated dependency ranges so that only Core 12.12.1 and above will be resolved, regardless of caches.</p>
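<p>For reference, Nuget expresses such dependency ranges using interval notation in the package manifest. A hypothetical entry (not the actual nuspec) that can never resolve the delisted version could look like:</p>
<pre class="language-markup"><code><!-- Hypothetical .nuspec dependency entry: "[12.12.1, )" means
     12.12.1 or higher, so the delisted 12.12.0 can never be resolved -->
<dependency id="EPiServer.CMS.Core" version="[12.12.1, )" /></code></pre>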
<h3>Update 2023-02-08</h3>
<p>EPiServer.CMS 12.16.1 and the same version of the CMS UI packages, with updated dependencies to use Core 12.12.1, have been released. Prefer this version over 12.16.0.</p>
<h1><a href="/blogs/Magnus-Rahl/Dates/2022/10/five-six-seven-eight---out-with-the-old-in-with-the-new--net/">Five, Six, Seven, (Eight) - Out With the Old, In With the New .NET</a></h1>
<p><em>2022-10-28</em></p>
<p>As you all probably know, Microsoft plans to <a href="https://devblogs.microsoft.com/dotnet/announcing-dotnet-7-preview-7/">release .NET 7 in November</a>, including <a href="https://devblogs.microsoft.com/dotnet/performance_improvements_in_net_7/">even more performance improvements</a>. Although we will continue developing the Optimizely CMS and Commerce products in .NET 6, the latest LTS version, we will support running these platforms on .NET 7.</p>
<h2>.NET LTS Versions and Backward Compatibility</h2>
<p>Optimizely CMS 12 and Commerce 14 currently ship with assemblies compiled for .NET 6, the current “Long Term Support” (LTS) version of .NET. In contrast, .NET 7 is a “Standard Term Support” release (previously known as a “Current” release), like .NET 5 was. If you want to learn more about the difference, check the <a href="https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core">.NET support policy</a> for details.</p>
<p>There is also a very high level of backward compatibility between versions of .NET, so assemblies compiled targeting an earlier version of .NET in most cases run without changes in the newer version of the runtime. In fact, not even Microsoft themselves add new compilation targets to all packages for every version of .NET. For example the latest version of the official SQL client, <a href="https://www.nuget.org/packages/Microsoft.Data.SqlClient/5.0.1">Microsoft.Data.SqlClient</a>, doesn’t ship with assemblies compiled for anything newer than .NET Core 3.1, the LTS version prior to .NET 6. And it runs in all of .NET 5, 6 and 7.</p>
<p>For these reasons, we will continue to ship CMS 12 and Commerce 14 with assemblies compiled for .NET 6. We will not add .NET 7 compiled assemblies unless that is the only way to solve a compatibility problem.</p>
<h2>.NET 7 Supported by Optimizely Packages and in DXP Cloud Services</h2>
<p>We will however make some adjustments to the next versions of our packages. We have had a very cautious approach to dependencies, where we declare an upper bound to dependency version ranges to not allow using new major versions of the dependencies. While this acts as an extra layer of protection to avoid breaking changes in those dependencies (it can be overridden however), it is not considered <a href="https://learn.microsoft.com/en-us/dotnet/standard/library-guidance/dependencies#nuget-dependency-version-ranges">best practice</a>.</p>
<p>Not by coincidence, the change of removing the upper bound on external dependencies will allow you to run CMS 12 and Commerce 14 on .NET 7, because it will allow the 7.0 versions of many Microsoft.* and System.* packages that we depend on. We have verified these versions are compatible using the .NET 7 release candidates. These changes will be released starting with <a href="https://nuget.optimizely.com/package/?id=EPiServer.CMS.UI&amp;v=12.14.0">EPiServer.CMS.UI 12.14.0</a> and <a href="https://nuget.optimizely.com/package/?id=EPiServer.CMS.Core&amp;v=12.11.0">EPiServer.CMS.Core 12.11.0</a>.</p>
<p>Optimizely Cloud Services also supports running .NET 7 solutions, so once .NET 7 itself and our packages with updated version ranges are out, nothing should prevent you from running on .NET 7. You are of course also free to continue running on .NET 6 as both 6 and 7 will continue to be supported until after .NET 8 arrives. If you run into any issues, as usual <a href="https://www.optimizely.com/support/">contact support</a>.</p>
<h2>Caveat: Patch awaiting .NET 7</h2>
<p>Our testing on .NET 7 uncovered an unintended breaking change in .NET itself. This has been <a href="https://github.com/dotnet/runtime/issues/77173">reported and fixed</a>, but the fix will not be included in the initial release of .NET 7. It is <a href="https://github.com/dotnet/runtime/pull/77388">currently targeting 7.0.1</a>. If patches follow the same <a href="https://github.com/dotnet/core/blob/main/release-notes/6.0/README.md">cadence as for .NET 6</a>, this should be released about a month after the initial .NET 7 release. We will include a workaround for the problems it causes in CMS/Commerce, but it can cause issues with your code or 3rd party components. For that reason, we advise you hold off running production applications on .NET 7 until Microsoft releases a patched version of .NET 7.</p>
<h2>Thank You and Good-Bye to .NET 5</h2>
<p>Running applications in .NET 5 has been unsupported by Microsoft since May 2022, and if such an application ran into a problem, Microsoft would point to upgrading to .NET 6 as the first step of solving it. As long as your application is running fine it can stay on .NET 5, but the next time you deploy you should switch to .NET 6 anyway. Therefore, there is no reason for us to continue to include .NET 5 compiled assemblies in new versions of our packages. We consider this a change in dependency requirements, which is not a breaking change in Semantic Versioning. Since we have not made any breaking changes in our APIs between the .NET 5 and 6 versions, we will not mark this with a new major version. The change will be released in a minor version of CMS 12 and Commerce 14.</p>
<h2>ASP.NET Core is the Name of the Game</h2>
<p>Finally, a sidenote on naming things. As .NET Core became .NET 5 it became more difficult to contrast with .NET Framework 4.x without being explicit with versions, which eventually makes the information look stale as new versions are released. To avoid this, we’re adopting ASP.NET Core, a brand that continues to be used in .NET 5+, as a collective name for modern .NET versions when we want to contrast with .NET Framework/ASP.NET.</p>
<h1><a href="/blogs/Magnus-Rahl/Dates/2022/7/addressing-vulnerability-in-newtonsoft-json/">Addressing vulnerability in Newtonsoft.Json</a></h1>
<p><em>2022-07-01</em></p>
<p>We have received questions about the recently <a href="https://github.com/advisories/GHSA-5crp-9r3c-p9vr">disclosed vulnerability</a> in Newtonsoft.Json prior to version 13.0.1. Having the dependency doesn't mean you're automatically vulnerable, but since several of our packages depend on Newtonsoft.Json, DXP solutions (including custom code) are theoretically vulnerable.</p>
<p>The vulnerability was disclosed after 5 PM June 22, and came to our knowledge the next day, June 23. We started investigating immediately and had verified remediation steps early June 24, placing them in the hands of our support teams to respond to customers/partners reaching out about this.</p>
<p>Publishing this information more broadly now is of course a tradeoff between reaching more of our customers and partners, and drawing attention to the vulnerability. However, because Newtonsoft.Json is the #1 used .NET library, it is well known that we are a .NET solution, and the dependency can be seen in public information on Nuget, we decided to go ahead and publish this information together with the remediation.</p>
<h2>Remediation</h2>
<p>Newer versions like CMS 12, Commerce 14 and Find 14 are not vulnerable since they require Newtonsoft.Json 13.0.1. </p>
<p>On slightly older solutions, you can simply update Newtonsoft.Json in your solution to version 13.0.1 or later, for example <a href="https://docs.microsoft.com/en-us/nuget/consume-packages/install-use-packages-visual-studio">using Nuget Package Manager.</a></p>
<p>On yet older versions (earlier versions of CMS 11, Commerce 13, Find 13) you may run into a version restriction. You can override this version restriction by updating the package using the <a href="https://docs.microsoft.com/en-us/nuget/consume-packages/install-use-packages-powershell">Package Manager Console</a> and supplying the -IgnoreDependencies flag:</p>
<pre>Update-Package Newtonsoft.Json -Version 13.0.1 -IgnoreDependencies</pre>
<p>Or simply edit the packages.config file to set the version of Newtonsoft.Json to 13.0.1.</p>
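<p>For example, the relevant entry in packages.config would look like this (the targetFramework value is illustrative and will vary with your project):</p>
<pre class="language-markup"><code><?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- Other package entries omitted; pin the patched version -->
  <package id="Newtonsoft.Json" version="13.0.1" targetFramework="net472" />
</packages></code></pre>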
<p>We have gone back quite a few versions and verified that overriding the version restriction does not have any negative side-effects. We have done this as far back as EPiServer.Commerce 13.14.0, EPiServer.CMS 11.14.0 (EPiServer.CMS.UI 11.23.1), EPiServer.Find 13.2.6, EPiServer.Find.Commerce 11.2.0 and EPiServer.ContentDeliveryApi 2.21.0.</p>
<p>If you have any questions, please <a href="https://support.optimizely.com/hc/en-us">reach out to support</a>.</p>
<h1><a href="/blogs/Magnus-Rahl/Dates/2022/3/resolving-nuget-dependency-conflicts-in-project-sdk-packagereference-model/">Resolving Nuget dependency conflicts in Project SDK (PackageReference) model</a></h1>
<p><em>2022-03-15</em></p>
<p>After the release of CMS 12 and Commerce 14 we have received reports from partners about package incompatibility issues when installing and updating our Nuget packages. In this post I'll try to address both the background for why this is happening, as well as give you methods for working around these issues.</p>
<p><strong>I do recommend you read this whole post, but for the impatient here is the TL;DR:</strong></p>
<ul>
<li>The dependency issues aren't specific to Optimizely/Episerver packages, but are made worse by some practices we adopted years ago to balance several aspects of dependencies and breaking changes.</li>
<li>The dependency issues aren't actually related to CMS 12 or .NET 5/6, but to the change from old-style csproj+packages.config to SDK-style projects with PackageReference (which doesn't support classic ASP.NET projects and hence hasn't been viable until ASP.NET Core / .NET 5), which resolves transitive dependencies dynamically according to <a href="https://docs.microsoft.com/en-us/nuget/concepts/dependency-resolution">these rules</a>.</li>
<li>You can work around the issues by following the hints in the errors and warnings (NU1107 and NU1608). It may take multiple iterations and adding multiple packages before you get to a compatible set.</li>
<li>Getting to the latest versions of all packages will often be more complex than staying with the top-most "umbrella" packages like EPiServer.CMS and EPiServer.Commerce which may give you slightly older versions.</li>
<li>Multi-project solutions will be more complex, especially if you reference lower level dependencies rather than the "umbrella" packages.</li>
<li>Studying the dependency hierarchies and version restrictions of EPiServer.* packages, as well as understanding the <a href="https://docs.microsoft.com/en-us/nuget/concepts/dependency-resolution">Nuget dependency resolution rules</a>, can help you gain a better understanding of which dependencies you need to add (and which ones you may be able to remove). Some hints about version relationships are in the last section (appendix) of this post.</li>
</ul>
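<p>As an illustration of the tiebreak technique mentioned above (the package names are real, but the versions are hypothetical for your solution): if restore fails with NU1107 because two packages pull different versions of a shared dependency, promoting that dependency to a direct PackageReference pins the version Nuget resolves.</p>
<pre class="language-markup"><code><ItemGroup>
  <!-- Top-level "umbrella" package -->
  <PackageReference Include="EPiServer.CMS" Version="12.16.1" />
  <!-- Hypothetical tiebreak: promote a transitive dependency to a direct
       reference so its version is resolved unambiguously -->
  <PackageReference Include="EPiServer.CMS.Core" Version="12.12.1" />
</ItemGroup></code></pre>
<p>Because a direct reference takes precedence over transitive ones in Nuget's resolution, this acts as a manual, partial lock file, as described in the background sections below.</p>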
<h2>Background I: Shift from packages.config to PackageReference</h2>
<p>In the old world, nuget packages and their versions would be listed in packages.config. In this scenario, all packages would be listed, including transitive dependencies of packages you install (installed packages' dependencies, dependencies of those dependencies and so on). Because of this, the set of packages and versions is unambiguous.</p>
<p>If you want to install another package that shares a dependency with a package you already have installed, you have to make sure to install a version of that common dependency that is compatible with both. But once you've done that (which is often automatic, e.g. the new package requires a newer version of the dependency, so the dependency is upgraded), it is once again unambiguous which version should be used.</p>
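<p>As a sketch (the package versions here are hypothetical), a packages.config therefore pins every package explicitly, transitive dependencies included:</p>
<pre><?xml version="1.0" encoding="utf-8"?><br /><packages><br />  <package id="EPiServer.CMS" version="11.20.0" targetFramework="net472" /><br />  <package id="EPiServer.Framework" version="11.20.0" targetFramework="net472" /><br />  <package id="Newtonsoft.Json" version="11.0.2" targetFramework="net472" /><br /></packages></pre>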
<p>Enter SDK style projects and PackageReference. PackageReference allows you to specify only a top level dependency; all the transitive dependencies of that package are implicit. To figure out which versions of those packages are actually used when packages are restored, <a href="https://docs.microsoft.com/en-us/nuget/concepts/dependency-resolution">NuGet has a set of rules it follows</a>.</p>
<p>These rules are somewhat similar to what happened in the old packages.config world when installing, but the result of a dependency resolution is not (automatically) documented or written to a lock file in the same way. The resolution happens by the rules each time there is a restore (e.g. when opening or building the project). So the dependencies have to be <em>unambiguous by the resolution rules</em>, which is a stricter and more difficult requirement to fulfill. You may have to resort to adding additional dependencies to act as tiebreakers on the resolution rules, manually creating a (partial) lock file.</p>
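<p>As a side note, if you want the resolution result persisted the way the old model effectively gave you, NuGet does support opting in to a real lock file. This is a sketch: the RestorePackagesWithLockFile property makes restore write a packages.lock.json next to the project, which pins the resolved versions on subsequent restores:</p>
<pre><!-- Opt in to a real lock file; restore writes packages.lock.json --><br /><PropertyGroup><br />  <RestorePackagesWithLockFile>true</RestorePackagesWithLockFile><br /></PropertyGroup></pre>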
<h2>Background II: Optimizely/Episerver dependency structure</h2>
<p>We maintain a large set of packages across many teams. Some packages are more closely related, maintained by the same team and on the same release cycle. Others are on independent release cycles. Overall we are balancing a number of different perspectives through our dependencies. To pick three:</p>
<ol>
<li>Decoupling and autonomy of teams.</li>
<li>Minimizing the number of breaking changes (major versions), for example by using "pubternal" APIs between packages we control.</li>
<li>Modularization and decoupling on the package level, depending on the simplest possible packages (no UI where not required, no web stack where not required).</li>
</ol>
<p>1 and 3 drive towards splitting packages smaller, creating chains of dependencies. 2 requires closer control of dependency ranges between packages that use pubternal APIs that may break between major versions - we tie versions 1:1 so we know that we can break pubternal APIs without risking incompatibilities. 3 creates a complex dependency graph where there are multiple paths to the same dependency.</p>
<p>Take for example the EPiServer.CMS package, often referred to as the "umbrella" package. Its purpose is to get you the dependencies you need to run a CMS website, with UI. If you don't want the UI, don't run a website (e.g. when building a library) etc., you would reference something else. But this is basically the "top" level for a plain CMS project. At the other end is our foundational EPiServer.Framework package. Let's take a look at three different paths between these:</p>
<pre>EPiServer.CMS 12.1.0 -> EPiServer.Hosting [12.0.3, 13) -> EPiServer.Framework 12.0.3<br />EPiServer.CMS 12.1.0 -> EPiServer.CMS.AspNetCore.HtmlHelpers [12.0.3, 13) -> EPiServer.CMS.AspNetCore.Mvc 12.0.3 -> EPiServer.CMS.AspNetCore.Routing 12.0.3 -> EPiServer.CMS.AspNetCore.Templating 12.0.3 -> EPiServer.CMS.AspNetCore 12.0.3 -> EPiServer.CMS.Core 12.0.3 -> EPiServer.Framework 12.0.3<br />EPiServer.CMS 12.1.0 -> EPiServer.CMS.UI [12.1.0, 13) -> EPiServer.CMS.UI.Core 12.1.0 -> EPiServer.CMS.AspNetCore.Templating [12.0.3, 13) -> EPiServer.CMS.AspNetCore 12.0.3 -> EPiServer.CMS.Core 12.0.3 -> EPiServer.Framework 12.0.3</pre>
<p>Note the semi-open version ranges like [12.0.3, 13), meaning "12.0.3 or any higher version below 13". On restore, NuGet processes these graphs from the top (the project). When it finds a version range, it will by default pick the <em>lowest</em> version in the range, and then move on from there. While traversing these dependency trees it can detect version conflicts between the different paths, but it will always stick to the version it already selected from a range. In the example above, all of the different versions are resolved in a compatible way, using version 12.0.3 of the 1:1 version mapped packages in the "CMS Core" family (see the appendix at the end of this article for more info).</p>
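<p>For reference, these ranges come from the dependency declarations inside the packages themselves. A semi-open range like the one above would be declared something like this in a package's nuspec (a sketch, not the actual EPiServer.CMS nuspec):</p>
<pre><dependency id="EPiServer.Hosting" version="[12.0.3,13)" /></pre>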
<p>The combination of the complex dependency graph and the 1:1 version requirements is what creates issues with the NuGet algorithm. This is best illustrated with an example.</p>
<h2>Example: Adding DXP support to plain CMS project</h2>
<p>So the EPiServer.CMS package gives you what you need for a plain CMS site. But in the interest of modularity it does not automatically pull in the requirements to run on the DXP service (because you could be running elsewhere). To deploy your solution to the DXP service you need to add the EPiServer.CloudPlatform.Cms package.</p>
<p>Let's say you start with a plain project referencing EPiServer.CMS 12.1.0. This is an old version already, but I chose it because it illustrates one of the dependency resolution problems well.</p>
<p>Now you want to be compatible with DXP, so you install EPiServer.CloudPlatform.Cms, picking the latest version, 1.0.3.</p>
<pre>Install EPiServer.CloudPlatform.Cms 1.0.3.</pre>
<p><img src="/link/3b3bf71b8c124c1b91e3c68fdf7be012.aspx" width="1295" alt="" height="166" /></p>
<p>This fails! Why does it fail? Let's look at one specific path of the dependency graph of EPiServer.CloudPlatform.Cms:</p>
<pre>EPiServer.CloudPlatform.Cms 1.0.3 -> EPiServer.CMS.AspNetCore [12.0.4, 13) -> EPiServer.CMS.Core 12.0.4 -> EPiServer.Framework 12.0.4</pre>
<p>Compare this to just one of the paths of EPiServer.CMS 12.1.0 we looked at in the previous section: </p>
<pre>EPiServer.CMS 12.1.0 -> EPiServer.CMS.AspNetCore.HtmlHelpers [12.0.3, 13) -> EPiServer.CMS.AspNetCore.Mvc 12.0.3 -> EPiServer.CMS.AspNetCore.Routing 12.0.3 -> EPiServer.CMS.AspNetCore.Templating 12.0.3 -> EPiServer.CMS.AspNetCore 12.0.3 -> EPiServer.CMS.Core 12.0.3 -> EPiServer.Framework 12.0.3</pre>
<p>As you can see from the first dependency (EPiServer.CMS 12.1.0 -> EPiServer.CMS.AspNetCore.HtmlHelpers [12.0.3, 13)), we're fully compatible with the 12.0.4 version of EPiServer.CMS.AspNetCore.HtmlHelpers (because it accepts a range from 12.0.3 up to 13) and all the other 12.0.4 packages. But since NuGet <em>picks the lowest version in the range and sticks to it</em>, it ends up with a conflict on several packages, including the EPiServer.Framework package the error mentions (the EPiServer.CMS path wants 12.0.3, the EPiServer.CloudPlatform.Cms path wants 12.0.4).</p>
<p>At this stage you have to help. By adding a direct dependency to the project you can break this tie: because of the <a href="https://docs.microsoft.com/en-us/nuget/concepts/dependency-resolution#nearest-wins">"nearest wins" rule</a> in the dependency resolution rules, a direct project dependency always wins over transitive dependencies.</p>
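<p>In csproj terms such a tiebreak is nothing special, just an ordinary direct reference to the version you want (here the 12.0.4 version from this example):</p>
<pre><!-- Direct reference; by the nearest wins rule this overrides the conflicting transitive resolutions --><br /><PackageReference Include="EPiServer.Framework" Version="12.0.4" /></pre>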
<pre>Install EPiServer.Framework 12.0.4, then try again with EPiServer.CloudPlatform.Cms 1.0.3.</pre>
<p><img src="/link/fc27599b052747c594c1635855707d89.aspx" width="1295" alt="" height="177" /></p>
<p>This still fails! But let's continue following the hints:</p>
<pre>Install EPiServer.CMS.AspNetCore.Routing 12.0.4, then try again with EPiServer.CloudPlatform.Cms 1.0.3.</pre>
<p><strong>Success!</strong> But wait, even though the installation succeeded, there are a couple of warnings.</p>
<pre>warning NU1608: Detected package version outside of dependency constraint: EPiServer.Hosting 12.0.3 requires EPiServer.Framework (= 12.0.3) but version EPiServer.Framework 12.0.4 was resolved.<br />warning NU1608: Detected package version outside of dependency constraint: EPiServer.CMS.AspNetCore.Mvc 12.0.3 requires EPiServer.CMS.AspNetCore.Routing (= 12.0.3) but version EPiServer.CMS.AspNetCore.Routing 12.0.4 was resolved.</pre>
<p>Let's look at a couple of the dependency paths again:</p>
<pre>EPiServer.CMS 12.1.0 -> EPiServer.Hosting [12.0.3, 13) -> EPiServer.Framework 12.0.3<br />EPiServer.CMS 12.1.0 -> EPiServer.CMS.AspNetCore.HtmlHelpers [12.0.3, 13) -> EPiServer.CMS.AspNetCore.Mvc 12.0.3 -> EPiServer.CMS.AspNetCore.Routing 12.0.3 -> EPiServer.CMS.AspNetCore.Templating 12.0.3 -> EPiServer.CMS.AspNetCore 12.0.3 -> EPiServer.CMS.Core 12.0.3 -> EPiServer.Framework 12.0.3</pre>
<p>So let's install the version of EPiServer.Hosting that depends on EPiServer.Framework 12.0.4, which by no coincidence is EPiServer.Hosting 12.0.4.</p>
<pre>Install EPiServer.Hosting 12.0.4.</pre>
<p>One warning down, one to go!</p>
<pre>warning NU1608: Detected package version outside of dependency constraint: EPiServer.CMS.AspNetCore.Mvc 12.0.3 requires EPiServer.CMS.AspNetCore.Routing (= 12.0.3) but version EPiServer.CMS.AspNetCore.Routing 12.0.4 was resolved.</pre>
<pre>Install EPiServer.CMS.AspNetCore.Mvc 12.0.4.</pre>
<p>All is good now, right?</p>
<pre>warning NU1608: Detected package version outside of dependency constraint: EPiServer.CMS.AspNetCore.HtmlHelpers 12.0.3 requires EPiServer.CMS.AspNetCore.Mvc (= 12.0.3) but version EPiServer.CMS.AspNetCore.Mvc 12.0.4 was resolved.</pre>
<p>Wait, what? There's a new warning? Ok, let's be persistent and install EPiServer.CMS.AspNetCore.HtmlHelpers 12.0.4 too:</p>
<pre>Install EPiServer.CMS.AspNetCore.HtmlHelpers 12.0.4.</pre>
<p><strong>Success! No more warnings!</strong></p>
<p>You now have this set of compatible packages:</p>
<pre><PackageReference Include="EPiServer.CloudPlatform.Cms" Version="1.0.3" /><br /><PackageReference Include="EPiServer.CMS" Version="12.1.0" /><br /><PackageReference Include="EPiServer.CMS.AspNetCore.HtmlHelpers" Version="12.0.4" /><br /><PackageReference Include="EPiServer.CMS.AspNetCore.Mvc" Version="12.0.4" /><br /><PackageReference Include="EPiServer.CMS.AspNetCore.Routing" Version="12.0.4" /><br /><PackageReference Include="EPiServer.Framework" Version="12.0.4" /><br /><PackageReference Include="EPiServer.Hosting" Version="12.0.4" /></pre>
<p>Optionally, as you perhaps have already figured out from the dependency chains you've seen (or from reading the appendix): because EPiServer.CMS.AspNetCore.Mvc and EPiServer.CMS.AspNetCore.Routing are transitive dependencies of (and in a 1:1 version relationship with) EPiServer.CMS.AspNetCore.HtmlHelpers, you may actually be able to remove these two dependencies to clean up a bit. Indeed that works, leaving you with this set of compatible packages:</p>
<pre><PackageReference Include="EPiServer.CloudPlatform.Cms" Version="1.0.3" /><br /><PackageReference Include="EPiServer.CMS" Version="12.1.0" /><br /><PackageReference Include="EPiServer.CMS.AspNetCore.HtmlHelpers" Version="12.0.4" /><br /><PackageReference Include="EPiServer.Framework" Version="12.0.4" /><br /><PackageReference Include="EPiServer.Hosting" Version="12.0.4" /></pre>
<h2>Updating with additional dependencies</h2>
<p>When you used to update something like the umbrella package EPiServer.CMS in the old packages.config world, NuGet would also update (overwrite) all its dependencies, e.g. EPiServer.Framework. This is not the case with PackageReference. While transitive dependencies are implicit, direct dependencies are honored exactly as you have specified them. Sadly this can cause problems when updating a higher level package like EPiServer.CMS. Let's start from the state in the previous step, and try to update EPiServer.CMS to 12.3.2, the latest version at the time of writing.</p>
<pre>Update EPiServer.CMS to 12.3.2</pre>
<p><img src="/link/a802e750356642bf9a30d391380c1468.aspx" width="1295" alt="" height="201" /></p>
<p>This fails, because EPiServer.CMS 12.3.2 requires 12.3.0 or higher of the CMS Core family packages, which is incompatible with the 12.0.4 versions you now have explicitly installed. The way this shows up is that NuGet recommends adding EPiServer.CMS.AspNetCore.Templating 12.3.0, so let's try that:</p>
<pre>Install EPiServer.CMS.AspNetCore.Templating 12.3.0</pre>
<p><img src="/link/bc15ea54101c4b28bef09112696b72b7.aspx" width="1295" alt="" height="265" /></p>
<p>But that is in conflict with the currently installed EPiServer.Framework 12.0.4. So let's try to update EPiServer.Framework first.</p>
<pre>Update EPiServer.Framework to 12.3.0, then add EPiServer.CMS.AspNetCore.Templating 12.3.0.</pre>
<p>This succeeds, but unsurprisingly you get new warnings, because we still have the older versions of EPiServer.Hosting and EPiServer.CMS.AspNetCore.HtmlHelpers expecting an older version of EPiServer.Framework. So update those too:</p>
<pre>Update EPiServer.Hosting and EPiServer.CMS.AspNetCore.HtmlHelpers to 12.3.0.</pre>
<p>Now we can finally update EPiServer.CMS:</p>
<pre>Update EPiServer.CMS to 12.3.2</pre>
<p>You now have this set of compatible packages without warnings:</p>
<pre><PackageReference Include="EPiServer.CloudPlatform.Cms" Version="1.0.3" /><br /><PackageReference Include="EPiServer.CMS" Version="12.3.2" /><br /><PackageReference Include="EPiServer.CMS.AspNetCore.HtmlHelpers" Version="12.3.0" /><br /><PackageReference Include="EPiServer.CMS.AspNetCore.Templating" Version="12.3.0" /><br /><PackageReference Include="EPiServer.Framework" Version="12.3.0" /><br /><PackageReference Include="EPiServer.Hosting" Version="12.3.0" /> </pre>
<p>But that update of EPiServer.CMS was a bit messy. Let's take a step back and think about where it all started. It was EPiServer.CloudPlatform.Cms that needed version 12.0.4 of EPiServer.Framework. But now you have 12.3.0. And EPiServer.CMS 12.3.2 has EPiServer.Framework 12.3.0 as a transitive dependency. So what if you try to remove the extra packages you added, leaving only EPiServer.CMS and EPiServer.CloudPlatform.Cms? Would it still work? Indeed it does! Here's a minimal set of compatible packages:</p>
<pre><PackageReference Include="EPiServer.CloudPlatform.Cms" Version="1.0.3" /><br /><PackageReference Include="EPiServer.CMS" Version="12.3.2" /></pre>
<h2>Updating specific packages</h2>
<p>Ok, so let's again use the previous end state as a starting point for the next scenario. We know EPiServer.CMS 12.3.2 implicitly gives us version 12.3.0 of the CMS Core packages. At the time of writing the newest version of, for example, EPiServer.CMS.Core is 12.4.1. What if that contains a bugfix you want? You then have two options. Either you wait until there is an EPiServer.CMS package that has EPiServer.CMS.Core 12.4.1 (or higher) as a dependency. It will eventually get there, but it might take some time.</p>
<p>Or you reference the package you want directly. Let's try that and install EPiServer.CMS.Core 12.4.1. It fails, once again with an error similar to the one you saw in the beginning:</p>
<p><img src="/link/2cc5dc0c52e148d5b165e9cb66017eb7.aspx" width="1295" alt="" height="124" /></p>
<p>So you could go through the same steps again, starting by adding EPiServer.Framework and walking through errors and warnings. Or you shortcut this based on what you learned earlier.</p>
<p>You ended up having to reference EPiServer.Framework, EPiServer.CMS.AspNetCore.HtmlHelpers and EPiServer.Hosting. Why those specifically? HtmlHelpers and Hosting because they are at the top of the dependency chains from EPiServer.CMS, where NuGet would pick the lowest compatible version. Framework because it is at the bottom, where different resolution paths could come to conflicting resolutions, and by bringing it up to a top level dependency you act as a tiebreaker. So what about adding version 12.4.1 of those instead? <strong>If you do it manually in the csproj you can just add them all at once</strong>. If you do it using the tools, you have to preemptively break the tie, and install EPiServer.Framework first:</p>
<pre>Install EPiServer.Framework 12.4.1, EPiServer.Hosting 12.4.1 and EPiServer.CMS.AspNetCore.HtmlHelpers 12.4.1.</pre>
<p>You now have this set of compatible packages:</p>
<pre><PackageReference Include="EPiServer.CloudPlatform.Cms" Version="1.0.3" /><br /><PackageReference Include="EPiServer.CMS" Version="12.3.2" /><br /><PackageReference Include="EPiServer.CMS.AspNetCore.HtmlHelpers" Version="12.4.1" /><br /><PackageReference Include="EPiServer.Framework" Version="12.4.1" /><br /><PackageReference Include="EPiServer.Hosting" Version="12.4.1" /></pre>
<h2>More packages or projects, more problems, but same solutions</h2>
<p>If you add Commerce (or Forms, or...) you risk running into more issues like this, especially if you are trying to stay current on the main packages' dependencies like in the last example. But the method for solving those issues is the same: follow the hints in errors and warnings and work your way through. Look at the dependencies of packages and you can work out how to simplify the set of packages you reference directly. We will look into whether we can simplify this (without sacrificing something more valuable), but at least you now know in principle how to work around the issues you run into.</p>
<p>Similarly, if you have a multi-project solution, your lower layer projects behave the same as packages when your higher level projects are built, i.e. the <a href="https://docs.microsoft.com/en-us/nuget/concepts/dependency-resolution#nearest-wins">nearest wins rule</a> will take those projects into account. This may produce slightly different results than if you have only one web project, but in principle it is the same. The key thing to remember is to keep the same (or compatible) dependency versions across the projects.</p>
<h2>Appendix: Package "families"</h2>
<p>This section describes packages that are versioned together and in most cases are dependent 1:1 (at the time of writing, there may be future changes). You can use this as a reference to know which packages to update to resolve a conflict in versions on one or more of the packages. If you have PackageReference elements to several packages in the same family, you can just manually update all of them at once to reference the same version.</p>
<p>You may even want to declare a PropertyGroup with a named property holding the version, and reference this property in multiple places in your csproj. If you have a multi-project solution you can extract the PropertyGroup to a separate file (the convention is to use the ".props" suffix) and use the Import element in each of your projects to have access to the property across projects and keep a consistent dependency version throughout.</p>
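<p>As a sketch (the file and property names are just examples), such a shared version property could look like this:</p>
<pre><!-- Versions.props (shared file, name is arbitrary) --><br /><Project><br />  <PropertyGroup><br />    <CmsCoreVersion>12.3.0</CmsCoreVersion><br />  </PropertyGroup><br /></Project><br /><br /><!-- In each csproj --><br /><Import Project="..\Versions.props" /><br /><ItemGroup><br />  <PackageReference Include="EPiServer.Framework" Version="$(CmsCoreVersion)" /><br />  <PackageReference Include="EPiServer.Hosting" Version="$(CmsCoreVersion)" /><br /></ItemGroup></pre>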
<h3>The CMS Core family</h3>
<p>EPiServer.CMS.AspNetCore.*<br />EPiServer.CMS.Core<br />EPiServer.Framework<br />EPiServer.Framework.AspNetCore<br />EPiServer.Hosting</p>
<h3>The CMS UI family</h3>
<p>EPiServer.CMS.UI*</p>
<h3>The Commerce family</h3>
<p>EPiServer.Commerce.Core<br />EPiServer.Commerce.UI*</p>
<h3>The Find family</h3>
<p>EPiServer.Find*<br />(Except EPiServer.Find.Commerce)</p>
<h3>The Content API family</h3>
<p>EPiServer.ContentDeliveryApi.*<br />EPiServer.ContentManagementApi<br />(Note: Not all packages in this family are updated to .NET 5)</p>
<h3>The Product Recs family</h3>
<p>EPiServer.Tracking.Commerce<br />EPiServer.Personalization.Commerce</p>
<h1>Rolling out support for .NET 6 (2022-02-22)</h1>
<p>In September last year, we released the Optimizely DXP platform for the modern version of the Microsoft .NET platform - .NET 5 (formerly known as .NET Core). This release wave included CMS version 12 and Commerce version 14 as well as platform support in our DXP cloud service.</p>
<p>In November Microsoft released the new version of this modern track of .NET: .NET 6. We have of course also been getting questions about .NET 6 support in Optimizely products. And specifically, whether projects on older versions (CMS 11, Commerce 13) using .NET Framework 4.x should upgrade to .NET 5 or wait for .NET 6 (spoiler: No need to wait, but also the wait is soon over).</p>
<h2>Backwards compatibility in .NET (Core)</h2>
<p>To understand what to expect from upgrades to .NET 5 and 6 it helps to think about their origins: .NET 5 and 6 are iterations of .NET Core, with .NET 5 following .NET Core 3.1. You can use .NET Core libraries in .NET 5 applications as long as they are not affected by any of the breaking changes in .NET 5, which are relatively few from .NET Core 3.1.</p>
<p>The same goes for .NET 6, it has relatively few breaking changes from .NET 5 and can use .NET 5 (and .NET Core) libraries in the same way. For example, the <a href="https://www.nuget.org/packages/Microsoft.Data.SqlClient">Microsoft.Data.SqlClient</a> package does not contain assemblies compiled for .NET 5 or 6, since its .NET Core 3.1 assembly runs without modification in .NET 5 and 6.</p>
<p>In contrast, the changes from .NET Framework 4.x to the modern .NET are more significant, but there is hardly any difference between going from 4.x to 5 and going directly from 4.x to 6.</p>
<h2>Preparing for .NET 6, prioritizing .NET 5</h2>
<p>Even before we released our .NET 5 support, we did test upgrades of our core packages to the .NET 6 preview version. And we confirmed our expectation that there were no major changes required. Reassured by this, we focused our efforts on tooling to support upgrades of .NET Framework solutions (CMS 11 / Commerce 13), including the newly released <a href="/link/6c6be0e699f3452da121d82cc7c6a2d5.aspx">preview of self-service migration tooling for our cloud platform</a>, as well as continuing to convert additional packages and addons to .NET 5.</p>
<h2>.NET 6 wave starts today!</h2>
<p>Today we are also announcing the first wave of support for .NET 6. This consists of two pieces:</p>
<ul>
<li>Version 12.4.0 of the CMS Core packages. Note that this is a minor version release, i.e. there are no breaking changes in our APIs. The package supports both .NET 5 and .NET 6 solutions through multi-targeting (including both .NET 5 and .NET 6 compiled assemblies).</li>
<li>DXP cloud service support for .NET 6 solutions. Similar to the software, the platform also supports both .NET 5 and .NET 6 applications, and will automatically choose the correct runtime for your deployment package.</li>
</ul>
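<p>Multi-targeting in this sense means the library project declares both target frameworks, the build produces (and the package contains) one assembly per framework, and NuGet picks the matching one for the consuming application. A minimal sketch of what this looks like in a library csproj:</p>
<pre class="language-markup"><code><!-- Library multi-targeting: assemblies are built for both .NET 5 and .NET 6 -->
<PropertyGroup>
  <TargetFrameworks>net5.0;net6.0</TargetFrameworks>
</PropertyGroup></code></pre>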
<p>There is a reason I call this the <em>first</em> wave, and I’ll be completely transparent: We haven’t yet updated (or otherwise verified compatibility of) all Optimizely packages. We are updating more packages as dependencies become available. Quite a few are under way and confirmed not to have any incompatibilities (but some require updated dependencies, e.g. on Microsoft packages). We will continue to release .NET 5/6 multi-targeted packages for CMS UI, Commerce, Find and select other packages in the coming weeks.</p>
<h2>Should I choose .NET 5 or .NET 6?</h2>
<p>The CMS Core packages we release today are the only ones that we know are affected by a breaking change in .NET 6 and hence <em>need</em> a .NET 6 compiled assembly to run properly in all scenarios. As far as we know all other Optimizely packages that today support .NET 5 also support .NET 6 without changes other than dependencies.</p>
<p>Because of this, we chose to release the first packages now so that you, depending on what packages you use, can potentially get started with .NET 6. If you are using a specific Optimizely addon or even a 3rd party package that is only compiled for .NET 5 (or even .NET Core, and certainly .NET Standard) it will most likely run fine in .NET 6 already.</p>
<p>If you are working on a project, whether new or an upgrade from .NET Framework, you can continue to target .NET 5. A later upgrade to .NET 6 is likely going to be minimal effort.</p>
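<p>For an application, the upgrade itself is typically a one-line change of the target framework in the csproj (followed by updating any packages that require it):</p>
<pre class="language-markup"><code><PropertyGroup>
  <!-- Change net5.0 to net6.0 when you are ready to move -->
  <TargetFramework>net6.0</TargetFramework>
</PropertyGroup></code></pre>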
<h2>Using .NET 6 today</h2>
<p>Or if you want to, you can target .NET 6 today. Depending on what packages you use it will be more or less tricky. This is a setup I used to run a CMS site in .NET 6 on DXP today:</p>
<pre class="language-markup"><code><PackageReference Include="EPiServer.CMS" Version="12.3.1" />
<PackageReference Include="EPiServer.CloudPlatform.Cms" Version="1.0.3" />
<!-- Extra top level dependencies needed to force .NET 6 compatible versions of the CMS Core packages -->
<PackageReference Include="EPiServer.CMS.AspNetCore.HtmlHelpers" Version="12.4.0" />
<PackageReference Include="EPiServer.Hosting" Version="12.4.0" />
<!-- Extra top level dependencies needed until our CloudPlatform and AspnetIdentity packages are updated to declare .NET 6 dependencies -->
<PackageReference Include="Microsoft.AspNetCore.Mvc.Razor.Extensions" Version="6.0.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="6.0.0" />
<PackageReference Include="Microsoft.Extensions.Hosting.Abstractions" Version="6.0.0" />
<PackageReference Include="Microsoft.Extensions.Configuration.Abstractions" Version="6.0.0" />
<PackageReference Include="Microsoft.Extensions.Options" Version="6.0.0" /></code></pre>
<p>As you can see, a few extra explicit top level dependencies are required to force the right versions of everything. Fewer and fewer of these will be necessary as we release packages with updated dependencies, so waiting a few weeks will make this simpler.</p>
<h2>Minimal APIs Limitation</h2>
<p>[Added 2022-03-28]</p>
<p><span>Note that the new hosting API using `WebApplication` that was introduced together with minimal APIs in ASP.NET Core 6 is not supported at this time. This is due to a change in setup order that comes with this model, which causes some issues with the way the CMS Initialization Modules currently work. So for the time being you are limited to using the standard hosting model that relies on a separate Startup class when building your CMS applications.</span></p>
<h2>Related: Resolving dependency conflicts</h2>
<p>[Added 2022-03-30]</p>
<p>I also recommend reading this blog post on <a href="/link/d8767e6e7fdf43aabd8ba8c3ae8ef28d.aspx">Resolving Nuget dependency conflicts in Project SDK (PackageReference) model</a>.</p>
<h2>Related: Detailed .NET 6 support status</h2>
<p>[Added 2022-09-05]</p>
<p>I have heard some misconceptions about our .NET 6 support, probably relating to this gradual rollout method we chose. To clarify: All packages that include a .NET 6 target are of course tested in .NET 6 and officially support .NET 6. By now this includes all of the core platform plus the absolute majority of Optimizely maintained modules and addons. You can check the <a href="/link/42b7913c321d4886b468000231d9baa4.aspx">status of a specific package here</a>.</p>
<h1>EPiServer CMS 11.18.0 delisted from Nuget feed (2020-08-28)</h1>
<p>We are investigating a compatibility issue where EPiServer.CMS.Core 11.18.0 can cause problems with other parts of the platform, e.g. Search and Navigation (Find).</p>
<p>For the time being we have de-listed this version from the Episerver nuget feed while we continue to investigate the issue.</p>
<p>If you have already deployed a solution using this version, please <a href="https://www.episerver.com/service-support/contact-support">reach out to Episerver application support</a> to determine if you can be affected by this and what steps you should take to resolve any issues you might have from it.</p>
<p>If you are in the process of updating your solution to this version, hold off on the upgrade for now.</p>
<p>Our sincerest apologies for any inconvenience this might cause.</p>
<p>Originally published 2020-08-28 11:32 CEST</p>
<p><strong>Update 2020-08-28 15:33 CEST:</strong></p>
<p>We took the decision not only to delist, but to delete the 11.18.0 version from the Nuget feed. This will cause issues restoring packages for anyone who has already updated, but that is intentional: it is a way to draw attention to the issue with that version.</p>
<p>We are working on a fixed 11.18.1 version that we hope to have QA'd and released early next week. Anyone updating from a pre-11.18 version to 11.18.1 will be safe from this issue. Anyone who has deployed with 11.18.0 is still advised to reach out to Application support to determine what steps to take.</p>
<p><strong>Update 2020-09-01 09:16 CEST:</strong></p>
<p>The patched 11.18.1 version has been released.</p>
<p><strong>Update 2020-09-02:</strong></p>
<p>There are reports of issues when updating from 11.18.0 to 11.18.1 since the former has been removed from the feed. If you run into such issues, you can <a href="/link/63af44d995d34d7081fefb3f6f9abac0.aspx" title="Episerver CMS 11.18.0 (NOT for production use)">download CMS 11.18.0 (NOT for production use) here</a>.</p>
<p><strong>Update 2020-09-29:</strong></p>
<p>CMS 11.19.0 was released 2020-09-15. This version automatically handles compatibility with Find. If Find 13.2.8 or earlier is used, CMS automatically falls back to the pre-11.18.0 behavior to ensure there are no compatibility issues. If Find 13.2.9 or later is used, CMS can use the new memory optimized Content proxies. You can still force it to use the old Castle proxies by using this appSetting:</p>
<p><span><add key="episerver:setoption:EPiServer.Core.ContentOptions.ProxyType,EPiServer" value="Castle" /></span></p>
<p>In the absence of any setting, the default is to use the new optimized proxies (equal to setting the above appSetting to "Optimized").</p>
<h1>Multi-site support in the Personalization Native Integration for Commerce (2018-07-25)</h1>
<html>
<head>
</head>
<body>
<p>This week we released <strong>version 2.0 of the Commerce native integration package for Personalization</strong> (Perform/Reach), EPiServer.Personalization.Commerce.</p>
<p>The reason for the new major version is that we have had to make breaking changes in some public APIs to enable the new <strong>long awaited feature in this release: Multi-site support</strong>. For current implementations the required changes are very limited.</p>
<h2>Introduction to the scope concept</h2>
<p>Central to the multi-site support is the concept of a <em>scope</em>. Tracking requests in the Personalization native integration APIs as well as the catalog feed export are now, explicitly or implicitly, associated with a scope. Each scope has its own Personalization service account configuration. Tracking, recommendations and catalog feeds are therefore completely siloed for each site even if the sites run the same code in an Episerver multi-site solution.</p>
<p>We have strived to build the multi-site support around the scope concept in such a way that it is open and extensible but with default behavior that will allow many solutions to use it without, or with only minimal, customization.</p>
<p>I will try to illustrate the different new capabilities using a number of scenarios. These scenarios are based on actual customer cases that we used in framing and designing the new feature. The scenarios are somewhat ordered by increasing complexity, but the list is neither complete in the sense of listing every imaginable scenario nor in using every possible customization.</p>
<h2>Scenario 1: Single site solution</h2>
<p>This scenario hasn't really changed since the previous version. You can keep the same API calls for tracking and recommendations. The configuration does not need to define any scopes, you can keep the old configuration keys, e.g.</p>
<pre class="language-xml"><code><add key="episerver:personalization.BaseApiUrl" value="myhost.peerius.com"/>
<add key="episerver:personalization.Site" value="mysite"/>
<add key="episerver:personalization.ClientToken" value="myClientToken"/>
<add key="episerver:personalization.AdminToken" value="myAdminToken"/></code></pre>
<p>Note however that the old configuration keys from the beta stage are no longer supported, only the keys with "episerver:personalization." prefix.</p>
<p>If you have used the extension points in namespace EPiServer.Personalization.Commerce.CatalogFeed to customize the catalog feed export you will notice that the interface definitions have changed and you need to add a string scope parameter to your implementations. You will however not need to use that new parameter for anything in this scenario.</p>
<h2>Scenario 2: Multi-site solution with separate catalogs and SiteDefinition scope</h2>
<p>In this scenario, each scope corresponds to a <em>site definition</em> (a site defined in "Manage Websites" in Episerver CMS Admin mode), i.e. there is a one-to-one mapping between a site definition and a Personalization service account. Furthermore, each site displays products from its own distinct catalog.</p>
<p>The site definition scope is what we consider the default, and this is reflected in the tracking and recommendation API (CommerceTrackingAttribute, ITrackingService.Track extension methods) in that you do not need to specify a scope; it defaults to the site definition of the current request.</p>
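<p>As a hedged sketch, a default-scope tracking call could then look like this (this mirrors the tracking code used in Scenario 5 below, just without the scope argument; the injected _trackingDataFactory and _trackingService services and the surrounding variables are assumptions based on a typical setup):</p>
<pre class="language-csharp"><code>// No scope argument: the scope defaults to the current request's site definition.
// _trackingDataFactory and _trackingService are assumed to be injected services.
var trackingData = _trackingDataFactory.CreateProductTrackingData(productCode, httpContext);
var responseData = await _trackingService.TrackAsync(trackingData, httpContext, currentContent);</code></pre>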
<p>You do however need to provide configuration for the different Personalization service accounts, mapped to the correct scope. Here's an (incomplete) example configuration:</p>
<pre class="language-xml"><code><add key="episerver:personalization.ScopeAliasMapping.Alias1" value="eec884ae-4272-4710-99d0-392cce987422"/>
<add key="episerver:personalization.ScopeAliasMapping.Alias2" value="8ac1c69d-e03f-4831-8c79-07c850ca67ec"/>
<add key="episerver:personalization.BaseApiUrl.Alias1" value="host1.peerius.com"/>
<add key="episerver:personalization.BaseApiUrl.Alias2" value="host2.peerius.com"/>
<add key="episerver:personalization.CatalogNameForFeed.Alias1" value="Site1Catalog"/>
<add key="episerver:personalization.CatalogNameForFeed.Alias2" value="Site2Catalog"/></code></pre>
<p>As mentioned, the default scope corresponds to the site definition, represented by its Guid ID (from EPiServer.CMS.UI 11.5.0 this is visible in admin mode, but it can be found in earlier versions as well). To make the rest of the configuration simpler, you are first required to define an alias for each scope. The catalog feed export job will also iterate over these defined scopes to create the catalog feed for each Personalization service endpoint.</p>
<p>Each scope alias is then used to define the required endpoint settings (BaseApiUrl, but also Site, ClientToken and AdminToken, omitted in this example for brevity) and the name of the catalog to export for this site.</p>
<h2>Scenario 3: Multi-site solution with shared catalog and SiteDefinition scope</h2>
<p>The configuration in this scenario is the same as Scenario 2, except it will define the same catalog name for both scopes (or omit this setting, in which case the first catalog is used for export).</p>
<p>For this to work, the products in the catalog need to have distinct URLs in the exported catalog feed, and the URLs should be the display URL for the product in each site where it exists. There are several sub-scenarios (again, this is not an exhaustive list):</p>
<h3>Scenario 3A: Product URLs differ only in site domain</h3>
<p>In this scenario, all products in the catalog are available in all sites, and their relative URLs are the same in all sites, e.g. www.site1.com/products/productX and www.site2.com/products/productX. This scenario normally does not require any additional steps, as the catalog feed export by default will try to resolve the scope as a site definition ID and then use that site definition's URL to convert the resolved relative URL to an absolute URL.</p>
<h3>Scenario 3B: The relative part of product URLs differs between sites</h3>
<p>In this scenario, the relative URL is constructed in different ways for different sites. By default the relative URL is built by the default URL resolver, which uses the primary parent category of the product to build the URL. This can be customized by registering a custom implementation of the IEntryUrlService interface (namespace EPiServer.Personalization.Commerce.CatalogFeed):</p>
<pre class="language-csharp"><code>public class CustomEntryUrlService : IEntryUrlService
{
public string GetExternalUrl(EntryContentBase entry, string scope)
{
// Construct the relative URL of the provided entry when viewed in
// the provided scope.
}
}</code></pre>
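<p>How you register the implementation depends on your solution, but a minimal sketch using a standard initialization module could look like this (the module name and singleton lifetime are my assumptions; adjust to however your solution configures services):</p>
<pre class="language-csharp"><code>// Hedged sketch: replaces the default IEntryUrlService with the custom one at startup.
[InitializableModule]
public class PersonalizationCustomizationsModule : IConfigurableModule
{
    public void ConfigureContainer(ServiceConfigurationContext context)
    {
        context.Services.AddSingleton<IEntryUrlService, CustomEntryUrlService>();
    }

    public void Initialize(InitializationEngine context) { }
    public void Uninitialize(InitializationEngine context) { }
}</code></pre>
<p>The same pattern would apply to the other interfaces mentioned below (ICatalogItemFilter, IFeedUrlConverter, ICookieService).</p>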
<h3>Scenario 3C: Distinct set of products displayed in each site</h3>
<p>In this scenario, only a subset of the catalog, e.g. a specific root category or products with a certain "tag", is displayed in each site. To support this you need to register a custom implementation of the ICatalogItemFilter interface (namespace EPiServer.Personalization.Commerce.CatalogFeed):</p>
<pre class="language-csharp"><code>public class CustomCatalogItemFilter : ICatalogItemFilter
{
public bool ShouldFilter(CatalogContentBase content, string scope)
{
// Determine if the provided content is to be included in the provided scope,
// e.g. by looking at its categories or a custom property.
}
}</code></pre>
<p>Depending on your URL structure you may also need to couple this with the approaches from 3A or 3B.</p>
<h2>Scenario 4: Multi-site solution with products shared between catalogs</h2>
<p>I list this scenario separately to highlight an important point. It might look like Scenario 2 since there are multiple catalogs, but the requirements are more like Scenario 3. Products are cross-linked between catalogs, for example by using a central resource catalog and then linking products into the site-specific catalogs. This means that you need to make sure that the URLs are correct when exporting the catalog feed for each site. Depending on your (primary) category structure you could otherwise end up with the wrong URLs in one, several or even all of your catalog feeds, and probably also duplicated URLs across feeds, which is not allowed.</p>
<h2>Scenario 5: Single or multi-site solution with custom sub-site scope definition</h2>
<p>In this scenario, the scope does not map one-to-one with a site definition, but instead the current scope is set by the implementation. For example it could be a subsection of the site. This requires the implementation to pass the scope name to the tracking method calls:</p>
<pre class="language-csharp"><code>var scope = "customScope1";
var trackingData = _trackingDataFactory.CreateProductTrackingData(productCode, httpContext);
var responseData = await _trackingService.TrackAsync(trackingData, httpContext, currentContent, scope);</code></pre>
<p>The configuration would again be similar to Scenario 2, but define settings for the custom scopes instead of (or in addition to) the default site definition scopes:</p>
<pre class="language-xml"><code><add key="episerver:personalization.ScopeAliasMapping.Alias1" value="customScope1"/>
<add key="episerver:personalization.ScopeAliasMapping.Alias2" value="customScope2"/></code></pre>
<p>In this scenario you have to register a custom implementation of the IFeedUrlConverter interface (namespace EPiServer.Personalization.Commerce.CatalogFeed), which is used to convert relative URLs in a scope to absolute URLs. As mentioned in Scenario 3A, the default converter will try to interpret the scope as a site ID and use the site URL, so with a custom scope definition you will need to do this conversion yourself:</p>
<pre class="language-csharp"><code>public class CustomFeedUrlConverter : IFeedUrlConverter
{
public string GetExternalUrl(string relativePath, string scope)
{
// Determine the base URL for the scope and use it to convert
// the provided relative path to an absolute URL.
}
}</code></pre>
<p>Note that the IFeedUrlConverter implementation will be used both to make product URLs absolute in the catalog feed data and to create the URL for the feed download endpoint (used in the callback from the Personalization service). Normally this conversion should be the same, but it is worth pointing out that the feed download URL should also be rewritten to one that is accessible from the internet so that the Personalization service can reach it.</p>
<p>You also have to register a custom implementation of the ICookieService interface (namespace EPiServer.Personalization.Common) and store the session information for the different scopes in different cookies, as you will be keeping several Personalization sessions alive on the same domain.</p>
<p>Depending on your catalog setup you may also need to implement IEntryUrlService as in Scenario 3B and/or ICatalogItemFilter as in Scenario 3C to make sure each scope contains a distinct set of products with the correct URLs used to view them.</p>
<h2>Scenario 6: Multi-site solution with custom cross-site scope</h2>
<p>In this scenario, the same scope is used on several sites. It could be one global scope, or it could be multiple cross-cutting sub-site scopes, i.e. products under brand1.company.com/accessories and brand2.company.com/accessories are considered in the same scope, while brand1.company.com/spareparts is in a different scope.</p>
<p>This scenario basically has the same requirements as Scenario 5. An addition is that since the Personalization session information is stored in cookies and the cookies are set per domain, the visiting users will by default get separate sessions and identities for each of the domains, even if they are in the same scope. If your sites run on subdomains of the same main domain you can however use your ICookieService implementation to set the domain of the cookies to the main domain and through that get Personalization sessions that flow across the domains.</p>
<h2>Final words</h2>
<p>Other scenarios than the ones listed above are certainly possible, but I hope these serve to illustrate the most common ones and some advanced ones, as well as introducing some of the new and updated extension points that allow customization for multi-site solutions.</p>
<p>For more information, consult the <a href="/link/876f8c0a171648a9881b1a14c3ba5967.aspx">breaking changes article for Personalization 2.0</a> and the developer guide article about the <a href="/link/6ca2c31616d74cf0a9e4952cc772f854.aspx">personalization multi site feature</a>.</p>
</body>
</html>ASP.NET Cache memory bug fixes released/blogs/Magnus-Rahl/Dates/2018/4/asp-net-cache-memory-bug-fixes-released/2018-04-30T21:09:16.0000000Z<!DOCTYPE html>
<html>
<head>
</head>
<body>
<p>Fixes for <a href="/link/070e092007214dfbb8357ae517c9cf5a.aspx">two bugs I found in ASP.NET Cache memory management</a> are included in .NET Framework 4.7.2 which was released today (<a href="https://support.microsoft.com/en-us/help/4054530/microsoft-net-framework-4-7-2-offline-installer-for-windows">download here</a>). The two bugs would under most conditions allow the ASP.NET Runtime Cache, used heavily in Episerver applications, to use up almost all memory. In Azure Web Apps (and therefore in DXC Service) this would in turn cause unnecessary application recycles because of how the Proactive Auto Heal feature (enabled by default) reacts to high memory usage. You can read more in my <a href="/link/070e092007214dfbb8357ae517c9cf5a.aspx">previous blog post</a>.</p>
<p>While you can download and install the update today on servers you control yourself, it will be some time before the update is deployed where it is needed the most, on Azure Web Apps. There is no definitive timetable for this, but there is a GitHub <a href="https://github.com/Azure/app-service-announcements/issues/89">announcement</a> and <a href="https://github.com/Azure/app-service-announcements-discussions/issues/37">discussion</a>. A possible estimate might be drawn from the previous deployment: .NET Framework 4.7.1 was released 2017-10-17 (<a href="https://blogs.msdn.microsoft.com/dotnet/2017/10/17/announcing-the-net-framework-4-7-1/">source</a>), Azure deployments started around 2017-12-04 (<a href="https://github.com/Azure/app-service-announcements-discussions/issues/17#issuecomment-348404389">source</a>) and were complete by 2018-01-05 (<a href="https://github.com/Azure/app-service-announcements/issues/51#issuecomment-355614963">source</a>).</p>
<p>In the meantime I strongly advice sites to continue to use the workaround (explained in my previous blog post) of setting the cache's privateBytesPollTime setting to 29 seconds and explicitly configure a <span>percentagePhysicalMemoryUsedLimit, 90 % or below to stay clear of Proactive Auto Heal, e.g.</span></p>
<pre class="language-xml"><code><cache percentagePhysicalMemoryUsedLimit="80" privateBytesPollTime="00:00:29" /></code></pre>
<h2>Update 2018-05-01</h2>
<p>If possible, also consider updating to CMS 11.1 or later. From that version, a much shorter cache timeout is used for items inserted through <a href="/link/a93fdf30496042669df2edd7a960d94e.aspx">scheduled jobs</a>. There is also a new API making it possible to control the content cache timeout for a specific call, e.g. when doing a batch read of large amounts of content that you don't expect to be accessed very often.</p>
<p>For any CMS version, you can also consider configuring the pageCacheSlidingExpiration option of the <a href="/link/53ad09cc5bd946fc83a787d8bad50afc.aspx">episerver config section</a>. The default for this option is 12 hours, which, considering it is a sliding expiration, is probably overkill for most applications.</p>
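<p>For reference, a hedged sketch of what such a configuration could look like (the exact placement of the attribute depends on your episerver section setup, so verify against the linked article; one hour is just an example value):</p>
<pre class="language-xml"><code><episerver>
  <applicationSettings pageCacheSlidingExpiration="1:0:0" />
</episerver></code></pre>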
</body>
</html>Increased Flexibility in Commerce Catalog URLs/blogs/Magnus-Rahl/Dates/2018/1/increased-flexibility-in-commerce-catalog-urls/2018-01-22T16:40:37.0000000Z<!DOCTYPE html>
<html>
<head>
</head>
<body>
<p>As all of you probably know, there are two built-in ways to route to catalog items in Episerver Commerce: the hierarchical route composed of the catalog/category/entry hierarchy, and the "SEO" route which uses a single URL segment that for obvious reasons needs to be globally unique in the site. The hierarchical route is a different story, and we are now improving it in line with partner and customer feedback.</p>
<h3>Default: Require Unique URL Segments to Avoid Conflicts</h3>
<p>The default hierarchical URL/route is basically going to be /{catalog name}/{category segment}/{entry segment} (this varies a bit depending on setup, but assume this for the sake of argument) where there can of course be multiple categories nested in the URL. But since an entry can be linked to multiple categories there can also be multiple routes to the same entry. This is where the uniqueness gets tricky.</p>
<p>To make entries routable in all categories, Commerce requires the entry segments to be globally unique. That way there is no risk that an entry is linked to a category where another entry is already using the same segment, causing the two to have the same URL in that category. However, in some catalogs it is clear that you would ideally want to use the same URL segment for different entries in different categories, and this constraint does more harm than good.</p>
<h3>New Option: Avoid Conflicts when Publishing and Monitor Conflicts Later</h3>
<p>In Commerce 11.7.1 (soon to be released) we are introducing the AppSetting <strong>episerver:commerce.UseLessStrictEntryUriSegmentValidation</strong>, which when set to true will drop the global uniqueness constraint for entry segments. Instead, it will ensure uniqueness only among entries/categories in the same category.</p>
<p>However, this validation only happens when publishing the entry and only for its main parent category. This means that if you link entries to multiple categories you risk creating conflicts. For that reason, we are also introducing a new scheduled job <strong>Find Catalog Uri Conflicts</strong>. The job will find conflicts and write information about them to three places:</p>
<ul>
<li>Write WARN messages to the log.</li>
<li>Send emails to addresses specified in the <strong>episerver:commerce.UriSegmentConflictsEmailRecipients</strong> AppSetting (semicolon separated list of email addresses).</li>
<li>Write to the scheduled job output.</li>
</ul>
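<p>Putting the two new AppSettings together, the configuration could look something like this (the email addresses are of course placeholders):</p>
<pre class="language-xml"><code><add key="episerver:commerce.UseLessStrictEntryUriSegmentValidation" value="true"/>
<add key="episerver:commerce.UriSegmentConflictsEmailRecipients" value="merchandiser@example.com;webmaster@example.com"/></code></pre>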
<p>Here is some example output:</p>
<p><img src="/link/f4ffbe96780c4e96a9015f59c5170e2d.aspx" alt="Image uri-conflict-mail.png" /></p>
</body>
</html>Two bugs in ASPNET that break Cache memory management/blogs/Magnus-Rahl/Dates/2017/11/two-bugs-in-aspnet-that-break-cache-memory-management/2017-11-08T14:37:23.0000000Z<!DOCTYPE html>
<html>
<head>
</head>
<body>
<p><strong>TL;DR: A bug in ASPNET memory monitoring prevents the cache monitoring / scavenging thread from running if the memory usage in the host at the time of application startup happens to be in a specific range in relation to the configured cache thresholds. Another bug causes the same behavior if no cache threshold is explicitly configured (falling back to the default). This can further cause problems in Azure Web Apps because it triggers the Proactive Auto Heal feature, repeatedly recycling the application for no reason. The workaround is to set the privateBytesPollTime option to less than 30 seconds and also provide a value for percentagePhysicalMemoryUsedLimit (90 % or less to not conflict with Proactive Auto Heal).</strong></p>
<h3>Intro and previous investigations</h3>
<p>In my <a href="/link/ff7f511e2b474a5898e58308a6cf207f.aspx">previous blog post</a> I started to investigate a problem with the ASPNET HTTP runtime cache and <a href="https://blogs.msdn.microsoft.com/appserviceteam/2017/08/17/proactive-auto-heal/">Azure Proactive Auto Heal</a> and the <strong>buggy behavior of the configured memory usage threshold</strong>. I found an apparent correlation between certain settings for <a href="https://msdn.microsoft.com/en-us/library/system.web.configuration.cachesection.percentagephysicalmemoryusedlimit(v=vs.110).aspx">percentagePhysicalMemoryUsedLimit</a> and the application failing to keep the cache memory usage under control, causing Proactive Auto Heal to recycle the application. But I also acknowledged that <strong>the behavior was intermittent, and I now know why</strong>.</p>
<h3>A quick overview of ASPNET memory management internals</h3>
<p>ASPNET (this refers to .NET Framework 4.7.1) uses the <strong>AspNetMemoryMonitor</strong> class to monitor the application's memory usage. Internally it uses the <strong>LowPhysicalMemoryMonitor</strong> class to <strong>periodically</strong> compare the current memory pressure to the configured (or default) thresholds, and <strong>trigger cache scavenging/trimming</strong> if needed. Or, at least it is supposed to do this...</p>
<h3>Bug 1: Periodic checks fail to start under certain conditions</h3>
<p>For the periodic memory checks, LowPhysicalMemoryMonitor uses a <a href="https://msdn.microsoft.com/en-us/library/system.threading.timer(v=vs.110).aspx">System.Threading.Timer</a> instance that is initialized like this in its constructor:</p>
<pre class="language-csharp"><code>this._timer = new Timer(new TimerCallback(this.MonitorThread), (object) null, -1, this._currentPollInterval);</code></pre>
<p>Note the third argument, -1, which means that the timer is initialized with a due time (the next time it will trigger the callback) "infinitely" far off in the future. So essentially the<strong> timer is disabled from the start</strong>.</p>
<p>The framework will then call the <strong>LowPhysicalMemoryMonitor.Start</strong> method which calls <strong>AdjustTimer(false)</strong>. AdjustTimer is actually called within the timer callback, to adjust the time for the next callback according to the current memory pressure. The callback will run more often if the memory pressure is high. So AdjustTimer is designed to do more than just start the timer for the Start method. It looks like this (reflected code):</p>
<pre class="language-csharp"><code>internal void AdjustTimer(bool disable = false)
{
    lock (this._timerLock)
    {
        if (this._timer == null)
            return;
        if (disable)
            this._timer.Change(-1, -1);
        else if (this.PressureLast >= this.PressureHigh)
        {
            if (this._currentPollInterval <= 5000)
                return;
            this._currentPollInterval = 5000;
            this._timer.Change(this._currentPollInterval, this._currentPollInterval);
        }
        else if (this.PressureLast > this.PressureLow / 2)
        {
            int num = Math.Min(LowPhysicalMemoryMonitor.s_pollInterval, 30000);
            if (this._currentPollInterval == num)
                return;
            this._currentPollInterval = num;
            this._timer.Change(this._currentPollInterval, this._currentPollInterval);
        }
        else
        {
            if (this._currentPollInterval == CacheSizeMonitor.PollInterval)
                return;
            this._currentPollInterval = CacheSizeMonitor.PollInterval;
            this._timer.Change(this._currentPollInterval, this._currentPollInterval);
        }
    }
}</code></pre>
<p><strong>PressureLast</strong> is the last <strong>observed memory pressure</strong> (% physical memory used).<br /><strong>PressureHigh</strong> is the configured (or default) <strong>percentagePhysicalMemoryUsedLimit</strong>.<br /><strong>PressureLow</strong> is a value calculated from PressureHigh, <strong>lower than PressureHigh</strong> (generally it is PressureHigh - 9).<br /><strong>s_pollInterval</strong> is a default timer interval that is initialized to the value of the <a href="https://msdn.microsoft.com/en-us/library/system.web.configuration.cachesection.privatebytespolltime(v=vs.110).aspx">privateBytesPollTime</a> setting, converted to milliseconds. The <strong>default</strong> for privateBytesPollTime is <strong>2 minutes</strong>.<br /><strong>_currentPollInterval</strong> is the timer interval to use, which is also set to the Timer using the Change method. This field is <strong>initialized to 30 seconds</strong>.</p>
<p>Now, assume AdjustTimer is called by Start to start the timer. The fields will have the values described above, and most importantly, PressureLast is some value it has actually measured. So it can be anything. It can fulfill <strong>PressureHigh > PressureLast > PressureLow / 2</strong>, going into <strong>the second else if.</strong> That will get the minimum of 30000 and s_pollInterval, which by default is 120000, so 30000 is used. Then it <strong>compares 30000 to _currentPollInterval and does an early exit if they are the same, which they are</strong> because that is what _currentPollInterval was initialized with! This means <strong>the timer is never activated</strong>, AdjustTimer is never called again, and the LowPhysicalMemoryMonitor and the <strong>cache scavenging</strong> handled by it<strong> will remain inactive for the lifetime of the application</strong>.</p>
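<p>To make the chain of values concrete, here is a small standalone model of that first AdjustTimer call, using the defaults described above (this is an illustration of the logic, not the framework code; the pressure values are examples):</p>
<pre class="language-csharp"><code>int currentPollInterval = 30000;    // field initialized to 30 seconds
int sPollInterval = 120000;         // default privateBytesPollTime of 2 minutes
int pressureHigh = 97;              // example percentagePhysicalMemoryUsedLimit
int pressureLow = pressureHigh - 9; // = 88
int pressureLast = 60;              // a typical measured memory pressure at startup

bool timerStarted;
if (pressureLast >= pressureHigh)
{
    timerStarted = true; // would set a 5 second interval
}
else if (pressureLast > pressureLow / 2) // 60 > 44, so this branch is taken
{
    int num = Math.Min(sPollInterval, 30000); // = 30000
    timerStarted = num != currentPollInterval; // 30000 == 30000: early exit
}
else
{
    timerStarted = true; // would set CacheSizeMonitor.PollInterval
}
// timerStarted is false: the disabled timer is never changed,
// and the memory monitoring never runs.</code></pre>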
<p>How does this explain the apparent correlation with percentagePhysicalMemoryUsedLimit I described in the previous blog post? Because values in the range identified in that post are likely to arrange PressureLow and PressureHigh in such a way that a typical PressureLast value at application startup would match the scenario described above. At low values for <span>percentagePhysicalMemoryUsedLimit </span>it is much more likely to end up in the PressureLast >= PressureHigh scenario, and at high values it is more likely to end up in the PressureLast <= PressureLow / 2 scenario.</p>
<h3>Bug 2: Periodic checks are stopped if default memory threshold is used</h3>
<p>Let's take a look at <strong>LowPhysicalMemoryMonitor.ReadConfig</strong>, which is called from the constructor:</p>
<pre class="language-csharp"><code>private void ReadConfig(CacheSection cacheSection)
{
    int val2 = cacheSection != null ? cacheSection.PercentagePhysicalMemoryUsedLimit : 0;
    if (val2 == 0)
        return;
    this._pressureHigh = Math.Max(3, val2);
    this._pressureLow = Math.Max(1, this._pressureHigh - 9);
    LowPhysicalMemoryMonitor.s_pollInterval = (int) Math.Min(cacheSection.PrivateBytesPollTime.TotalMilliseconds, (double) int.MaxValue);
}</code></pre>
<p>Note the first line. The<strong> default for percentagePhysicalMemoryUsedLimit</strong> if it is omitted from config is actually <strong>0</strong>. So unless you set that setting, you will end up in the <strong>early exit in line 3</strong>. The most important implication of that is that <strong>the last line never runs, so s_pollInterval retains its default value of 0</strong>.</p>
<p>Now say that Bug 1 was not triggered and the timer is now running, periodically calling into AdjustTimer. Then take a look again at the PressureLast > PressureLow / 2 case. Because<strong> s_pollInterval is now 0, that will be used as the new interval set on the timer</strong>. The call chain from Timer.Change ends up in System.Threading.TimerQueue.UpdateTimer where the interval (period) is used like this:</p>
<pre class="language-csharp"><code>timer.m_period = (period == 0) ? Timeout.UnsignedInfinite : period;</code></pre>
<p><strong>So passing 0 disables the timer</strong>. And again we are in a situation where the memory monitoring and <strong>cache scavenging will be off for the lifetime of the application</strong>.</p>
<h3>Workaround</h3>
<p>Bug 1 can be avoided by specifying a <strong>privateBytesPollTime setting of less than 30 seconds</strong>, forcing the first call to AdjustTimer to adjust the timer and thereby starting it, even in the PressureLast > PressureLow / 2 scenario.<br />Bug 2 can be avoided simply by <strong>specifying a setting for percentagePhysicalMemoryUsedLimit</strong> other than the default of 0.</p>
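<p>In configuration terms, the workaround boils down to something like this (29 seconds and 90 % are example values that satisfy the two constraints):</p>
<pre class="language-xml"><code><cache privateBytesPollTime="00:00:29" percentagePhysicalMemoryUsedLimit="90" /></code></pre>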
</body>
</html>Bug in ASPNET Cache causes problems with Azure Proactive Auto Heal/blogs/Magnus-Rahl/Dates/2017/11/bug-in-aspnet-cache-causes-problems-with-azure-proactive-auto-heal/2017-11-06T13:57:27.0000000Z<!DOCTYPE html>
<html>
<head>
</head>
<body>
<h2><strong>Update 2017-11-08</strong></h2>
<p><strong>I have found the root cause for this buggy behavior, and have posted details and better workarounds in <a href="/link/070e092007214dfbb8357ae517c9cf5a.aspx">another blog post</a>.</strong></p>
<h2>Original post</h2>
<p><strong>TL;DR: Default settings for cache memory usage don't work well with the recently launched Proactive Auto Heal feature for Azure Web Apps. Also, certain custom settings don't have the expected behavior, leaving the problem with Proactive Auto Heal intact. It seems you should avoid setting percentagePhysicalMemoryUsedLimit to around 50 %; the application will use less memory with a higher setting, e.g. 75 %.</strong></p>
<h3>Conflicting thresholds in Proactive Auto Heal and Cache scavenging</h3>
<p>In August, <a href="https://blogs.msdn.microsoft.com/appserviceteam/2017/08/17/proactive-auto-heal/">Microsoft introduced the Proactive Auto Heal feature</a>, which will <strong>recycle Azure Web Apps</strong> under certain conditions. One such condition is <strong>memory usage above 90 %</strong> for more than 30 seconds. This feature is enabled by default. It is also completely external to the hosting environment, so unlike the application pool memory limit in IIS it is unknown to the application.</p>
<p>This creates a <strong>conflict with the in-memory HTTP runtime cache</strong>. Cache is of course used to enable fast access to frequently used data, so if a large amount of memory is used for cache it is generally not a problem (actually quite the opposite). Using up <em>all</em> memory is of course a problem, which is why the .NET HTTP runtime cache will remove infrequently used items from the cache when the memory usage reaches a certain level, a process known as cache scavenging or cache trimming.</p>
<p>The problem is that the default maximum level of <strong>memory usage the cache tries to maintain can lie dangerously close to, or even over, the threshold for Proactive Auto Heal</strong>. So the application is suddenly recycled even though it is operating normally.</p>
<h3>How Auto Heal recycles can hurt and how to (supposedly) configure around them</h3>
<p><strong>This is not only wasteful, but also creates problems in some scenarios</strong>. In an Episerver implementation, in particular one using Commerce, the total amount of content is vastly larger than what would fit in the server memory. So only the most recent and frequently used data (Episerver uses sliding cache expirations) will be available in cache; the rest will either have expired or been removed by cache scavenging. Some tasks, like indexing or exporting data, need to read through all the data, meaning (parts of) the cache can be churned multiple times with cache scavenging running intensively. This of course puts strain on the web server (and DB backend) performing the operation, but generally isn't a problem. However it becomes a big problem if the process is killed by Proactive Auto Heal, since there is no way to restart these tasks from where they were stopped.</p>
<p><strong>The thresholds for cache scavenging are configurable</strong>. The system.web/caching/cache configuration element has the <a href="https://msdn.microsoft.com/en-us/library/system.web.configuration.cachesection.percentagephysicalmemoryusedlimit(v=vs.110).aspx">percentagePhysicalMemoryUsedLimit</a> and <a href="https://msdn.microsoft.com/en-us/library/system.web.configuration.cachesection.privatebyteslimit(v=vs.110).aspx">privateBytesLimit</a> attributes for this purpose. If you don't set them, a value is automatically calculated. The used values are available through the <a href="https://msdn.microsoft.com/en-us/library/system.web.caching.cache.effectivepercentagephysicalmemorylimit(v=vs.110).aspx">EffectivePercentagePhysicalMemoryLimit</a> and <a href="https://msdn.microsoft.com/en-us/library/system.web.caching.cache.effectiveprivatebyteslimit(v=vs.110).aspx">EffectivePrivateBytesLimit</a> properties of System.Web.Caching.Cache. A quick test showed that even for a low memory app service plan like S1 (1.75 GB RAM), the <strong>default EffectivePercentagePhysicalMemoryLimit is set at 97 %</strong>, clearly showing the problem with <strong>Proactive Auto Heal recycling the application at 90 %</strong>.</p>
<p>One way to get around this is simply to disable Proactive Auto Heal. It has benefits though, so you would think that a better option is to limit the cache memory usage by specifying the above mentioned config attributes. We tried that for a customer and <strong>set percentagePhysicalMemoryUsedLimit all the way down to 50 %</strong>. And nothing changed. <strong>It still ran over 90 %</strong> and was recycled. We also tried setting privateBytesLimit to less than 50 % of the available memory of the machine. Again, no effect. What is going on?</p>
<h3>Reproducing problems with the Cache configuration</h3>
<p><strong>So I set out to do some experiments</strong>. To rule out any issues in Episerver I started out with a <strong>plain .net 4.7 MVC template site</strong> from Visual studio and altered the Index method of the HomeController so that I can trigger it to insert some data to the cache:</p>
<pre class="language-csharp"><code>public ActionResult Index(int? cache)
{
    for (int i = 0; i < cache; i++)
    {
        HttpContext.Cache.Insert(Guid.NewGuid().ToString(), new byte[40000]);
    }
    return View();
}</code></pre>
<p>I deployed it to an Azure app service on an S1 (1.75 GB RAM) service plan and started a loop in PowerShell which calls the Index method with the cache parameter to <strong>insert 100 items totalling a little under 4 MB into the cache at 500 ms intervals</strong>.</p>
<pre class="language-powershell"><code>while($true) { Invoke-WebRequest http://mytestsite.azurewebsites.net/?cache=100; Start-Sleep -m 500 }</code></pre>
<p>With the <strong>default cache settings this meant the memory usage would grow up to about 90 %</strong> where it seemed it started scavenging the cache to maintain that level (even though EffectivePercentagePhysicalMemoryLimit was 97 %, so it seems to start scavenging a bit below the threshold which is reasonable). But <strong>often it was recycled by Proactive Auto Heal</strong> because it stayed over the 90 % limit for too long.</p>
<p>I then started experimenting with different cache settings. I first tried <strong>percentagePhysicalMemoryUsedLimit setting of 50 and observed the same issue as we had seen before - the setting seemed to be ignored</strong> and it behaved like the default configuration, using up the memory and getting recycled. <strong>I then tried a setting of 40, and the behavior was completely different</strong>, it indeed seemed to limit the memory usage to about 40 %. WTF?</p>
<h3>Results: Certain configured values have surprising effects</h3>
<p>I tried several different settings and got different results, as you can see in these screenshots with the different percentagePhysicalMemoryUsedLimit settings overlayed:</p>
<p><img src="/link/acc007f197e348a19abbc4b6a59d981a.aspx" /></p>
<p><img src="/link/a8ca9efacb87454098b1dc34c0931ec1.aspx" /></p>
<p><img src="/link/a563321632fe45919f4109e5f7c67282.aspx" /></p>
<p>At a setting of 75, it used 75 %. 60 used 60 %, 45 used 45 %. Could 50 somehow be a magic number (not in any good way)? I tried to narrow it down. 47 worked as expected. At 48 it was again using up all the memory. 55 used up all the memory, as did 58. 59 worked as expected. I haven't tried every possible setting, but <strong>it seems that a setting in the range 48-58 makes it lose control of memory usage</strong>. At least for this plan size, but it is also consistent with the 50 % setting not working on the customer plan, which was a larger size.</p>
<h3>More surprises: Intermittent error condition?</h3>
<p>So this seems to be a bug or at least unexpected behavior, so just report it to Microsoft, right? Well... I set the setting back to 50 and suddenly it worked as expected? WTF? I have since gone over the different settings multiple times and the main pattern still seems clear: <strong>Within a range around 50, the setting is ignored and the memory usage goes out of control </strong>(or actually not completely - disabling Proactive Auto Heal to allow the app to live shows that it can stay stable for a long time at about 90 %)<strong>. But sometimes it works as expected.</strong></p>
<h3>Conclusion: Use these config values</h3>
<p>So in the end I still don't have a clean repro for a bug report, but the conclusion is the same: <strong>If you run your site on Azure Web Apps with Proactive Auto Heal switched on, set percentagePhysicalMemoryUsedLimit to a value below 90 % to avoid unnecessary recycles, but above 60 %</strong> to avoid this strange behavior and to make the most use of your server memory. And validate that it behaves as expected for your instance size.</p>
<p>And as a final note: privateBytesLimit does not seem to have any effect at all in .NET 4.7 (or at least not in Azure App Services); it does not even affect the reported <span>EffectivePrivateBytesLimit.</span></p>
</body>
</html>Improved Memory Efficiency in Commerce 11.1/blogs/Magnus-Rahl/Dates/2017/8/improved-memory-efficiency-in-commerce-11-1/2017-08-14T21:11:38.0000000Z<!DOCTYPE html>
<html>
<head>
</head>
<body>
<p><strong>TL;DR:</strong> Episerver Commerce 11.1 contains improved caching strategies for Catalog Content allowing site implementations to make better use of the available memory.</p>
<h3>Caching in the Catalog APIs</h3>
<p>Episerver Commerce uses a Content Provider to make catalog data available through the Content API, which is the recommended API to use in site implementations. The Catalog Content Provider uses the low-level catalog DTO and MetaObject APIs to construct the Content instances.</p>
<p>These low-level APIs are also public and have their own cache, which means several representations of the same base data will be cached when loading Content. However, a site implementation built using the Content APIs is unlikely to, at least frequently, use the low-level APIs to access the same catalog data.</p>
<h3>Improved Caching Strategies for Content, Relations and Associations</h3>
<p>Because of this, in Commerce 11.1 the caching strategies have been changed so that when the low-level APIs are used by the Catalog Content Provider, only the Content is inserted into the cache. Similarly, when using the IRelationRepository and IAssociationRepository APIs, only the high-level Relation/Association objects are cached, not the underlying DTOs. If an implementation uses the low-level APIs directly, the caching strategies are the same as before.</p>
<h3>Memory Used Better, Not Necessarily Less</h3>
<p>As you may have realized, this doesn't really decrease the memory usage at the time of loading Content from the database, as the DTO and MetaObject will still have to be allocated. The difference lies in how quickly that memory can be recovered and reused. Before this change, the DTO and MetaObject wouldn't go out of scope until they were trimmed from cache, and by the time they were trimmed they may have been promoted to the garbage collection generation 1 or 2 making it harder to recover the memory. With this change, they will go out of scope quickly and can easily be garbage collected, allowing the application to make better use of the available memory, for example holding more Content items in cache and reducing the cache churn.</p>
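<p>The generation promotion described above is easy to observe with the standard GC.GetGeneration API. A minimal sketch (plain .NET, not Commerce code; exact generations can vary with GC timing):</p>
<pre class="language-csharp"><code>using System;

class GcGenerationDemo
{
    static void Main()
    {
        // A newly allocated object starts in generation 0,
        // where it is cheap to collect once it goes out of scope.
        var data = new byte[40000];
        Console.WriteLine(GC.GetGeneration(data)); // typically 0

        // An object that survives collections - e.g. because a cached
        // DTO still references it - is promoted to generation 1 and
        // then 2, where reclaiming its memory is much more expensive.
        GC.Collect();
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(data)); // typically 2
    }
}</code></pre>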
</body>
</html>Follow Up on Planned Breaking Changes in Commerce 2017/blogs/Magnus-Rahl/Dates/2017/6/follow-up-on-planned-breaking-changes-in-commerce-2017/2017-06-15T08:43:18.0000000Z<!DOCTYPE html>
<html>
<head>
</head>
<body>
<p>This is a follow up post to <a href="/link/97f214e7f280414fa4c1eceba0cbaaab.aspx">Planned Breaking Changes in Commerce 2017</a>, which promised an update on the scope of the release. We now have the scope set, and are closing in on dev complete for the included features. It is very close to what we planned a few months ago, so most of this is reiteration:</p>
<h2>Improved Entry Sorting</h2>
<p>It is now possible to correctly use the SortOrder of NodeEntryRelations to determine the order in which items are returned e.g. when calling IContentLoader.GetChildren to render a product listing. Marketers can set the order by dragging and dropping items in the Catalog UI. The Content's ParentLink property is independent of the sort order, and has separate interactions in the UI.</p>
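<p>A minimal sketch of such a listing, assuming a constructor-injected IContentLoader and a ContentReference pointing to a catalog node (the ProductListingService class is a made-up example, not a Commerce type):</p>
<pre class="language-csharp"><code>using System.Collections.Generic;
using EPiServer;
using EPiServer.Commerce.Catalog.ContentTypes;
using EPiServer.Core;

public class ProductListingService
{
    private readonly IContentLoader _contentLoader;

    public ProductListingService(IContentLoader contentLoader)
    {
        _contentLoader = contentLoader;
    }

    public IEnumerable<EntryContentBase> GetEntries(ContentReference categoryLink)
    {
        // Entries are returned in the SortOrder of their NodeEntryRelations,
        // i.e. the order set by dragging and dropping in the Catalog UI.
        return _contentLoader.GetChildren<EntryContentBase>(categoryLink);
    }
}</code></pre>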
<h2>Reworked Catalog Import</h2>
<p>Refactoring the catalog import code allowed us to squeeze out better performance, both in terms of import speed and memory consumption.</p>
<h2>Improved API for Relations</h2>
<p>The IRelationRepository and its data classes have been redesigned to use a Parent/Child terminology which we hope is easier to understand and use. In addition, the caching strategies for Relations (and Associations) have been improved and the data classes now implement IReadOnly (requiring cloning before changing).</p>
<h2>Order System API Improvements</h2>
<ul>
<li>Return order form support has been added to the abstractions.</li>
<li>The default order calculators have been corrected to not include order level discounts in the subtotal.</li>
<li>The workflow calculations have been aligned with the order calculators.</li>
<li>Some APIs related to payments have been changed to improve creation of payments from the abstractions, data available for processing and to better support redirecting payment providers.</li>
<li>Shipping APIs have been extended to get rates by market.</li>
</ul>
<h2>Removing Legacy</h2>
<ul>
<li>The WorkflowsVNext feature switch (i.e. use the new Promotion System) is now active by default.</li>
<li>The legacy promotion system has been obsoleted.</li>
<li>The legacy asset system has been removed.</li>
<li>The dependency on nSoftware has been completely removed due to changes in how it is licensed. We will provide source code to help you build the equivalent functionality using your own licensed nSoftware library.</li>
<li>The ApplicationID concept has been removed, affecting a number of APIs as well as tables, indexes etc.</li>
</ul>
<h2>Other</h2>
<ul>
<li>A number of APIs that have been marked as obsolete for a long time have been removed.</li>
<li>The .NET Framework requirement has been raised from 4.5 to 4.5.2.</li>
</ul>
<h2>What's next?</h2>
<p>We have already started our QA process for several of the dev complete features. We will get the rest to dev complete together with compatible versions of Service API and Find for Commerce and get it all to QA. It is unlikely to be ready within Q2 as we previously estimated, but we hope it will be released soon thereafter. When it is eventually released we will of course also publish more detailed documentation on the new features and breaking changes.</p>
</body>
</html>Planned Breaking Changes in Commerce 2017/blogs/Magnus-Rahl/Dates/2017/2/planned-breaking-changes-in-commerce-2017/2017-02-27T08:59:14.2170000Z<p>We are working on a number of changes in Commerce that will change some behavior and APIs in a way that by Semantic Versioning is considered breaking and therefore will be released as a new major version of Commerce. The current estimate for this new major version is Q2 2017.</p>
<p>Details on the breaking changes will be announced at a later stage when we have a more exact feature set defined for this release. Here is a list of some candidates we are considering. <strong>As usual, all this information is subject to change.</strong></p>
<h3>Improving Entry Sort Order on Categories</h3>
<p>The SortOrder of NodeEntryRelations is currently used to determine catalog entries' home category (ParentLink in the Content model for catalog content). This makes it hard to use SortOrder for defining the actual order of entries in a category. We will introduce a separate way of defining the home category, creating a clearly defined use for sort order. This will enable us to add the feature of working with sort orders in the Catalog UI to control the way categories are displayed in the site implementation.</p>
<h3>Reworked Catalog Import</h3>
<p>In addition to the change in behavior of SortOrder and a new element for defining the home category in the catalog XML itself, we will initiate some long overdue code improvements of the Catalog import. While this is mostly under the hood, it requires us to change the public API of the catalog import.</p>
<h3>Improved API usability for IRelationRepository</h3>
<p>The terminology of Source/Target for relations has been a source of much confusion and the target of much criticism. It will be superseded by a Parent/Child model (the old model will likely remain but be marked as obsolete).</p>
<h3>Order Calculator Changes</h3>
<p>Some default calculator implementations don't work as expected, for example they include order level discounts in the subtotal instead of the order total.</p>
<h3>Removing Legacy APIs and Components</h3>
<ul>
<li>Remove the legacy Asset system.</li>
<li>Rework/remove the dependency from payment providers to nSoftware.</li>
<li>Make the "VNext" workflows the default and require configuration to use the old Promotion system, and deprecate it.</li>
<li>Possibly remove the concept of ApplicationId available on many APIs, since it is rarely used and does not have full support throughout the platform anyway.</li>
</ul>
<h3>Clean Up Previously Deprecated APIs</h3>
<p>There are a number of APIs that have been marked as obsolete for a long time, and will now be removed.</p>Planned Breaking Changes in Commerce 2016/blogs/Magnus-Rahl/Dates/2016/10/planned-breaking-changes-in-commerce-2016/2016-10-19T10:16:12.6400000Z<p>As previously announced, new major versions of <a href="/link/04e21d4454e74ce0abcdb448c2a4ffe1.aspx">CMS</a> and <a href="/link/4b536a6f4e1a4376a856ea949dcfca7e.aspx">CMS UI</a> are coming. Compatible versions of Commerce, Find for Commerce and Service Api will be available from day one. In order to create a compatible version of Commerce, we have to do a small number of low-impact breaking changes in Commerce, which means Commerce too will have a new major version.</p>
<p>The breaking changes consist of removing some types that support the Commerce UI (not intended to be used directly) from the EPiServer.Commerce.Core package. The removed types are:</p>
<ul>
<li>EPiServer.Business.Commerce.Providers.ProductSearchProviderBase</li>
<li>EPiServer.Business.Commerce.Providers.ProtectedProductSearchProvider</li>
<li>EPiServer.Business.Commerce.Providers.ProductSearchProvider</li>
<li>EPiServer.Commerce.Catalog.CatalogMetadataExtender</li>
<li>EPiServer.Commerce.Marketing.PromotionDataMetadataExtender</li>
<li>EPiServer.Commerce.SpecializedProperties.DictionarySelectionFactory</li>
</ul>