Blog posts by Daniel Ovaska

Change datacenter for DXP (2023-11-22)

<p>Did you know it is actually possible to switch your DXP datacenter to a new one? I didn't. But a customer of mine really needed to move from a North Europe data center to one in Sweden, both to avoid some geofencing issues and to speed up response times for end customers, and this is actually possible using the <a href="https://docs.developers.optimizely.com/digital-experience-platform/v1.3.0-DXP-for-CMS11-COM13/docs/migration-to-cms-12-commerce-14">migration tool from Optimizely</a>. If you didn't know already, using an Azure data center in Sweden has only been possible for a year or two, so older customers in Sweden have been forced to use the North Europe one instead.</p>
<p>The main intent of the migration tool is to migrate a site from CMS 11 => 12, but it can also be used just to switch data centers if you really need to. </p>
<p>As a tech lead for this project I had a few concerns about this.</p>
<ol>
<li>How long would the downtime be?</li>
<li>How well can content be synced and tested? Can it be done multiple times?</li>
<li>How are certificates and custom domains handled? Do we have to redo those for the new environments from scratch?</li>
<li>Is it as easy as it seems with the migration tool or does it have a million hidden bugs that I will step on when actually doing it?</li>
<li>What about build pipelines? Will they need to be redone?</li>
</ol>
<p>The short answer: it worked great!</p>
<p>The process on a high level is:</p>
<ol>
<li>Let Optimizely set up a new DXP instance and prepare the migration tool.</li>
<li>Click the start button on the migration tool.</li>
<li>Rebuild the deploy pipeline and deploy code to the new site environments (integration, preprod, production) using the new access tokens.</li>
<li>Sync content and files to the new site: integration => new integration, preproduction => new preproduction, and finally production.</li>
<li>Test. At this point you still have temporary hostnames on the new site.</li>
<li>Go live! Move hostnames for integration => new integration, then continue with the other environments. Sync content again if needed.</li>
<li>Be happy!</li>
</ol>
<p>You can read about each step in more detail in the migration tool documentation linked above.</p>
<p>The old environments will still be there if you need to roll back for any reason. We didn't, so I haven't tried that part of the process.</p>
<p>Downtime was short. A handful of minutes for the production environment in total. It will need to be in maintenance mode for the last step when moving hostnames.<br /><br />During the testing phase it was easy to move content and files multiple times between the old and new DXP instances. It took approximately 20 minutes for a large site to sync both the database and blobs. Certificates and custom hostnames were moved by the tool and I didn't have to take any action for that. Custom X509 client certificates used to identify the servers need to be reinstalled in the new environment. We had one of those, which support helped us move. Make sure to get a pfx file that includes the private key in that case. A cer file won't cut it. </p>
<p>The tool to migrate between new and old solution worked flawlessly in our case. I was definitely impressed. </p>
<p>The build pipeline from Azure DevOps => integration had to be redone. You need to generate new access tokens for the new DXP instance and use those to deploy instead. This was done before lunch one day with no major hassle. You can clone the existing pipeline, change a few variables, and that is it.</p>
<p>So there you have it! If you need to switch datacenter to get closer to your end users when Microsoft decides to open a new one closer to your location: do it! Overall it was a surprisingly smooth experience.</p>
<p>Optimizely has you covered for this scenario.</p>
Keeping your website up and running in a hostile environment (2023-08-16)

<p>Unfortunately the world as a whole is a less safe place now than it was a few years ago, and the internet follows this trend. Getting your site hacked or hit by a distributed denial of service (DDoS) attack is becoming more frequent. It's worth considering the current threat level and who you are protecting against:</p>
<ol>
<li>Bored hobby hackers</li>
<li>Organized crime and hacker groups</li>
<li>State actors</li>
</ol>
<p>Unfortunately many sites will now be a target of threat level 3 in this list - state actors. This is worth thinking about when choosing how much effort to spend in this area.</p>
<p>I've compiled two relevant checklists to keep your favorite site sailing smooth in this rough weather:</p>
<ol>
<li><strong>Security checklist</strong> - <a href="/link/16ba2cce8b5d496691d3b0ad9d09adb6.aspx">How to avoid getting hacked </a><br />This will guide you through how to get as much security as possible for whatever budget you have. Give a decent developer a day or two to close as many of these as possible and your site will be much less likely to be hacked.</li>
<li><strong>Performance checklist</strong> - <a href="/link/47a403875cfb4f47ae221c3050138e12.aspx">How to keep that site up n running against a DDoS attack</a><br />Optimizely DXP combined with good programming practices will make your site much less vulnerable. It also improves conversion rates for your end users, so you get double the benefit.</li>
</ol>
<p>Happy coding everyone!</p>

Importing data into CMS with a scheduled job (2022-06-15)

<p>This is a short guide on how to create and update pages in the CMS programmatically, intended for developers who are new to the CMS.</p>
<h2>Why import data and create pages?</h2>
<p>The most common reason is that there is a requirement to import data from an external source and show it on the site, like users, documents from SharePoint, press releases, available positions at the company or similar.<br />This can be done in two ways: either by getting the external data with a scheduled job and creating pages in the CMS for it, or by having a single page in the CMS that loads the relevant data from the external data source on every request. I often prefer creating pages in the CMS because that ensures great performance at all times and keeps the site running even if the external data source is down for a few minutes. This blog post is about that use case.<br />Avoid storing external data in the CMS if you have 10k+ items; in that case I would probably use an Entity Framework database solution instead. </p>
<h2>How?</h2>
<ol>
<li>Create a scheduled job that will run the import.<br />Use the ScheduledPlugIn attribute and inherit from ScheduledJobBase. Set a unique GUID (generate one online) and a name and description for the administrators. The GUID makes it possible to change the class name and namespace later if you need to, so don't forget it.<br />
<pre class="language-csharp"><code>[ScheduledPlugIn(DisplayName = "Import data", Description = "Imports great data about ....", GUID = "9d074410-05c1-4125-a09d-1170dd531234")]
public class ImportJob : ScheduledJobBase
{
...
}</code></pre>
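<p>For completeness, a minimal sketch of what the rest of the job class could look like. IsStoppable, Stop, Execute and OnStatusChanged all come from ScheduledJobBase; the import logic itself is just a placeholder:</p>
<pre class="language-csharp"><code>// members inside ImportJob
private bool _stopSignaled;

public ImportJob()
{
    IsStoppable = true; // lets administrators stop the job from admin mode
}

public override void Stop()
{
    _stopSignaled = true;
}

public override string Execute()
{
    OnStatusChanged("Starting import...");
    var importedItems = 0;

    // TODO: fetch external data and create/update pages here (see the steps below)

    if (_stopSignaled)
    {
        return "Import was stopped manually.";
    }
    return $"Imported {importedItems} items.";
}</code></pre>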
</li>
<li>Create a separate root page in the content tree that will contain the created pages.<br /><br /></li>
<li>Add a setting on start page that points to the root page for the import.<br /><br /></li>
<li>Use this setting in the scheduled job to find the root page where items should be imported.<br /><br /></li>
<li>Get the items from the data source and create a separate page type with the relevant properties that need to be stored. Think of it like modelling a table in the database. Try to avoid storing more than one piece of information per field if possible. One property for first name and one for last name is better than a single Name property that merges these together. If you have more than one type of object, create a second page type to store the second object in instead of adding separate fields to the first. Use inheritance between the page types if it makes sense.<br /><br /></li>
<li>Clear the old import if needed using the .Delete method on the contentRepository. <br />
<pre class="language-csharp"><code> var children = contentRepository.GetChildren<PageData>(siteSettingsParentPage,new System.Globalization.CultureInfo("sv"));
if(children.Any())
{
foreach(var child in children)
{
contentRepository.Delete(child.ContentLink,true);
}
}</code></pre>
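<p>If you would rather let editors recover the removed pages, a hedged alternative is to move them to the wastebasket instead of deleting them permanently:</p>
<pre class="language-csharp"><code>contentRepository.MoveToWastebasket(child.ContentLink);</code></pre>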
<br /><br /></li>
<li>Use the IContentRepository to save the pages:<br />
<pre class="language-csharp"><code> var pageToImport= contentRepository.GetDefault<ImportedPageType>(siteSettings.ParentPage, new System.Globalization.CultureInfo("sv"));
pageToImport.Name = data.Name;
pageToImport.CustomDataToImport= data.ImportantData;
contentRepository.Save(clinicPage, EPiServer.DataAccess.SaveAction.Publish | EPiServer.DataAccess.SaveAction.SkipValidation, EPiServer.Security.AccessLevel.NoAccess);</code></pre>
<p>To get a new page to store the data in, use the GetDefault method and specify the content type you wish to use, the parent page and the language branch. <br />Fill the properties with data from the external data source.<br />Use the .Save method on the contentRepository to store it in the CMS. If you don't specify the NoAccess flag, it's likely the scheduled job won't work when running automatically, since a scheduled job runs as an anonymous user by default. It's also possible to set PrincipalInfo.CurrentPrincipal for this purpose if you need to run the scheduled job as a different user. If it works when running manually but fails when running automatically, this is normally the cure. <br /><br /></p>
<pre class="language-csharp"><code>if (HttpContext.Current == null)
{
PrincipalInfo.CurrentPrincipal = new GenericPrincipal(
new GenericIdentity("Scheduled job service account"),
new[] { "Administrators" });
}</code></pre>
<p>Also notice the SkipValidation flag in the .Save call earlier. This is not mandatory, but often you want to import the data even if it doesn't pass validation, to be sure you get everything. If you need to skip validation for that page type when creating pages automatically, this is the flag to use.<br /><br />SaveAction.Publish will make sure the new pages are visible on the site. If you want an editor to review them first, use SaveAction.CheckIn or SaveAction.RequestApproval instead. The latter is only used if you have approval workflows on that content, which is pretty rare but happens.</p>
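<p>Since imports usually run repeatedly, you may want to update existing pages instead of deleting and recreating them. A minimal sketch of that pattern, assuming you have already looked up existingPage (for instance by a hypothetical external id property):</p>
<pre class="language-csharp"><code>// content returned by the repository is read-only, so clone it before changing it
var writablePage = (ImportedPageType)existingPage.CreateWritableClone();
writablePage.Name = data.Name;
writablePage.CustomDataToImport = data.ImportantData;
contentRepository.Save(writablePage, EPiServer.DataAccess.SaveAction.Publish | EPiServer.DataAccess.SaveAction.SkipValidation, EPiServer.Security.AccessLevel.NoAccess);</code></pre>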
</li>
<li>Try to avoid having more than 100 pages below a single parent. Edit mode doesn't really work well above that. Structure them with additional folders by date, category or alphabetically to avoid this, depending on what type of pages you have. <br /><br /></li>
<li>Log everything! Importing data from an external source can be tricky to debug. Add plenty of logging from the start to save some time bug hunting later.<br />
<pre class="language-csharp"><code> private ILogger importLog = LogManager.GetLogger(typeof(ImportJob));</code></pre>
<br />
<pre class="language-csharp"><code> importLog.Information($"GetAllData webservice returned {instructions.Count()} items");</code></pre>
<p>Make sure the log statement itself can't throw an exception. For instance, instructions in the call to instructions.Count() above must never be null. If it is, not only will the job fail, it fails inside the logging call, which makes the problem difficult to find. This happened to me recently.</p>
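<p>A defensive variant of the log statement above, assuming instructions can legitimately be null at this point, could look like this:</p>
<pre class="language-csharp"><code>importLog.Information($"GetAllData webservice returned {instructions?.Count() ?? 0} items");</code></pre>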
</li>
<li>For larger migrations it's usually good to limit the blast radius. Start with a subset of data, if possible one that affects fewer end users, and launch that first. When that is stable, continue with the rest.</li>
<li>Remember to test performance with realistic amounts of data early. Autogenerating fake content with a scheduled job like the one above is a good idea. Measure, improve, measure again until it's fast enough. </li>
<li>For large amounts of children, remember the method <br />
<pre class="language-csharp"><code> contentRepository.GetBySegment(parentLink, "id-of-item", new System.Globalization.CultureInfo("sv"));</code></pre>
As long as you know the id of the item, it's pretty fast to get it by using the URL segment. If possible, avoid using GetChildren() if there are 100+ children. </li>
<li>Work with Dictionary<> instead of List<> if you are loading all items and doing lookups by id for large collections. Cache it! See the sketch after this list.</li>
<li>Test run it and show off your new stable solution to the customer!</li>
</ol>
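<p>As mentioned in the list above, a dictionary keyed by an external id avoids repeated linear scans when matching incoming items to existing pages. A minimal sketch, where ExternalId, importRoot and importedItems are hypothetical names:</p>
<pre class="language-csharp"><code>// build the lookup once instead of searching the children for every incoming item
var existingByExternalId = contentRepository
    .GetChildren<ImportedPageType>(importRoot, new System.Globalization.CultureInfo("sv"))
    .ToDictionary(p => p.ExternalId, p => p.ContentLink);

foreach (var item in importedItems)
{
    if (existingByExternalId.TryGetValue(item.Id, out var contentLink))
    {
        // update the existing page (CreateWritableClone as shown earlier)
    }
    else
    {
        // create a new page with GetDefault as shown earlier
    }
}</code></pre>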
<p>I hope this post helps someone looking to do their first import job to the CMS. Leave a comment if it does, or if you want me to add something you feel is missing.<br /><br />Happy coding!</p>

Optimizely Meetup in Uppsala 27th of April! (2022-04-12)

<p><strong>Visma invites you to an IRL Optimizely meetup for all developers focusing on Optimizely CMS or Commerce in Uppsala / Sweden. During the evening we will present the new CMS 12 with best practices regarding performance and scalability for large sites and amazing add-ons! There will also be food, friends and some tasty beverages involved. Thanks to Optimizely and Visma for sponsoring this event.</strong></p>
<p><strong>When: <br /></strong>27th of April 17.30-21.00<strong></strong></p>
<p><strong>Where: <br /></strong>Elite Hotel Academia<br />Suttungs gränd 6, 753 19 Uppsala / Sweden</p>
<p><strong>Agenda:</strong></p>
<p>17:30-17:40 - Welcome<br />17:40-18:20 - Building huge enterprise sites with Optimizely CMS, best practices (Daniel Ovaska, Optimizely MVP)<br />18:20-18:35 - Break<br />18:35-19:15 - Optimizely on.net 5, tips n tricks and the journey to DXP (Luc Gosso, Optimizely MVP)<br />19:15-19:30 - Break<br />19:30-20:10 - Adaptive Images for Optimizely CMS and best practice for developing add-ons (Ted Nyberg, Optimizely MVP)<br />20:10-21:00 - Friends, food and beverages</p>
<p>This is a free event but remember to<br /><strong>Sign up here!</strong></p>
<p><a href="https://www.visma.se/it-konsulttjanster/workshops-och-kurser/meetup/">https://www.visma.se/it-konsulttjanster/workshops-och-kurser/meetup/</a></p>
<p>Happy coding!</p>

Performance checklist (2022-02-11)

<p>One of my favorite areas in software development is getting a solution to run really fast. This is pretty easy for small websites but gets exponentially tricky for larger websites under heavy load. I've worked with improving performance on Optimizely CMS and similar .NET sites for close to 20 years now, and below are my top suggestions for frontend, backend, CMS-specific and teamwork improvements. But let's start by asking an important question:</p>
<h2><strong>Why</strong> <strong>is performance vital to a site?</strong></h2>
<p><strong>Conversion</strong><br />Performance might seem technical and boring, but it is essential to understand the underlying business case here. You, as a developer, will not get time to focus on performance if you can't explain why it matters. <br /><br />Without decent performance, a large percentage of the total revenue from the site will be lost. It is even possible to estimate how much, and for major sites we are talking millions of USD.<br />Though content and nice design are good, if the page doesn't respond fast enough, your customer will get bored and never see them. Let's check out some statistics:</p>
<p><strong><em><a href="https://www.forbes.com/sites/steveolenski/2016/11/10/why-brands-are-fighting-over-milliseconds/?sh=7860a2be4ad3">Amazon found out </a>that 0.1s extra load time cost them 1% in sales.</em></strong></p>
<p><a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/mobile-page-speed-new-industry-benchmarks/">Google reports</a> that increasing page load time from<br /><em><strong>1s to 5s - the probability of bounce increases 90%</strong></em></p>
<p><a href="https://www.cloudflare.com/learning/performance/more/website-performance-conversion-rates/">Cloudflare reports that </a><br /><strong><img src="/link/b4decc4d833e494daf465b812ac53b46.aspx" /></strong></p>
<p>Using these numbers for a site that has an annual revenue of 100M USD and a response time of 4.2s, shaving 2s off the response time gives, with the Amazon estimate, 2 / 0.1 * 1% = 20% increased revenue = +20M USD in annual revenue.</p>
<p>Performance of the website is a business critical problem whether the information on the website is generating revenue or saving lives.<br /><br /></p>
<p><strong>Protection vs denial of service attacks (DDoS)</strong></p>
<p>Good performance combined with a nice CDN like Cloudflare also protects against DDoS attacks. The better performance your site has overall, the more effort it takes to bring it down with bots. For best effect strive for fast response times, good firewall that can filter out bots and elastic scaling that can increase computation power when needed. Worth noting that Optimizely DXP solves both elastic scaling and CDN with DDoS protection and this checklist will solve the response times. </p>
<p>I've added levels to indicate in what order I would tackle them. <br /><br /></p>
<ul>
<li><strong>Level A</strong><span> </span>- Most important fixes / low effort that all sites should have yesterday.</li>
<li><strong>Level AA</strong><span> </span>- Important / medium effort</li>
<li><strong>Level AAA</strong><span> </span>- If you complete these you are probably in the top 1% of sites out there</li>
</ul>
<h2><strong>How can performance be improved?</strong></h2>
<h3><strong>Measure performance (A)</strong></h3>
<p>Always start by measuring and proving what is wrong. Avoid randomly improving code in your application. For this, a good tool to measure performance effectively is needed. My personal tool of choice here is Application Insights from Microsoft for server statistics, but there are many others that will work. Visual Studio has a decent tool for checking the CPU cost of methods. dotTrace from JetBrains will also work. I prefer to work with Application Insights because I can keep it up and running in the production environment, and it's also the standard tool used by DXP.<br /><br />For client rendering statistics, Google Chrome developer tools work well enough. A Lighthouse report usually gives a good hint about any issues.</p>
<ul>
<li>The request total response time from server</li>
<li>The request rendering time on client <br />(These two together will tell you if you have a frontend issue or a backend issue)</li>
<li>The number of outgoing calls a page normally makes (SQL and web API calls) and how long these take<br />(Aim for 0-2 outgoing calls per page request on average. If you have many more, the application is likely to be vulnerable when load increases.)<br />This will tell you if you have an n+1 issue with some product list or similar where you do a separate expensive call for each item in the list. This should be avoided.</li>
</ul>
<p>Measure performance (client + server) before and after each deploy to production and use it as a KPI that is tracked by the product owner and stakeholders.</p>
<h3>Checklist of performance problems and solutions</h3>
<h3>Frontend technical checklist</h3>
<ul>
<li><strong>Use heavy javascript frameworks with care (A)</strong><br />Javascript frameworks like React and Vue are fun to use, but if something can be decently built using HTML and CSS only, do it. If you really need heavy JS frameworks for part of your application, use them on separate pages only to maximize performance for the rest of the application. This is also why my recommendation is to stay away from headless for the normal content-heavy parts of the site and only use it for user-interaction-heavy functionality. Below are statistics of average rendering time depending on framework from <a href="https://timkadlec.com/remembers/2020-04-21-the-cost-of-javascript-frameworks/">httparchive</a><br /><img src="/link/5dd5bec904ce4935930366c421f2cad4.aspx" width="229" height="289" /><br />Does React bring enough value to warrant 2.7s extra load time in your solution? That is 27% lost sales revenue if used on public product pages (according to the Amazon study). Do use it, but think really hard about where and when it is needed. It's easy to build a huge monolithic javascript frontend that is hell to optimize / replace. <br /><br /></li>
<li><strong>Minimize number of requests and payload (A)</strong><br />CSS and javascript should be bundled, minified and compressed (gzip or Brotli). Brotli is the best compression and can be enabled in DXP, but still as an experimental feature. I'll personally stick to gzip for now. If your main issue is a slow frontend and large scripts, give Brotli compression a shot. <br /><br /></li>
<li><strong>Set max age and allow public caching (A)</strong><br />by using response headers on CSS and JS. This will also activate the CDN for these resources and help keep server CPU usage down. See the sketch after this checklist for a CMS 12 example.<br /><br /></li>
<li><strong>Set a versioning number or hash on css, js and image filenames (AA)</strong><br />to simplify updates combined with caching. There is a nuget package to create a unique hash path for images for CMS called Episerver.CDNSupport that simplifies this for image blobs.<br /><br /></li>
<li><strong>Use a CDN to serve images (AA)</strong><br />Cloudflare is used for CMS cloud. Check that you <a href="/link/f2555bbb4fd74dff83a6b643536fe570.aspx">send the correct cache headers from the server</a> and get a HIT in the response headers for images from Cloudflare.<br /><br /></li>
<li><strong>Can some things be loaded after the page has rendered (AA)<br /></strong>Can some functionality be hidden by default and rendered only when needed to maximize rendering performance above the fold?<br /><br /></li>
<li><strong>Avoid making the page too heavy - scale images (A)</strong><br />For mobile users especially, the total size in Mb of page is vital to keep to a minimum. Do you really need that heavy image slider functionality at page load if it means increasing your bounce rate on a performance critical page? Scale those images by using <a href="/link/5a228738b5994fe2885fe6c835550f61.aspx">image processor (v11)</a> , or <a href="https://github.com/vnbaaij/ImageProcessor.Web.Episerver">this for CMS 12</a> or similar addon to generate mobile friendly image sizes. Don't trust editors to remember to do it. <br /><br /></li>
<li><strong>Can some html be loaded async? (AA)<br /></strong>I'm looking at you "mobile menu". Often sites have a large part of their site tree with 1000s of html tags here.<br /><br /></li>
<li><strong>Avoid changing layout as things are loaded (AAA)</strong><br />Try to have correct height and width by setting lineheights, image sizes etc correct from start. <br /><br /></li>
<li><strong>Make the initial view of page (above the fold) super fast by using inline scripts and css for this part (AAA)</strong><br />The rest of supportive functionality can be done the normal way with external css and js files. Try to avoid blocking rendering by using defer on js that is not needed for initial view. </li>
</ul>
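<p>As mentioned in the max-age item above, a minimal sketch of setting long-lived cache headers on static files in a CMS 12 / ASP.NET Core site could look like this (inside Startup.Configure; the one-year max-age assumes you also version or hash your file names):</p>
<pre class="language-csharp"><code>app.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = ctx =>
    {
        // allow browsers and the CDN to cache css/js/images for a year
        ctx.Context.Response.Headers["Cache-Control"] = "public, max-age=31536000";
    }
});</code></pre>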
<h3>Optimizely CMS specific checklist</h3>
<ul>
<li><strong>Avoid having more than a hundred content items below a single node (A)</strong><br />Insert a separate container page and divide the content items by date or alphabetically. If you don't, edit mode will not work well and performance of the GetChildren method will be problematic for that node.<br /><br /></li>
<li><strong>Avoid loading 100s of content items for a single page request (A)</strong><br />Often this is done when constructing the menu, especially the mobile one. Avoid loading the entire tree structure at once and load it only when the user expands that part of the tree. Be on the lookout for GetChildren being called recursively over a large part of the content tree, or GetDescendants followed by loading the actual pages sequentially. If any of these returns 100s of pages, try to solve it in another way. Sure, these calls get cached eventually by the CMS, which will save you from crashing the site most of the time, but for larger sites with hundreds of editors this is not something you want to rely on. Caching the end result with the object cache (see the sketch after this checklist), rethinking the design slightly or using Episerver Find are your main tools to avoid this. <br /><br /></li>
<li><strong>Personalization, access rights and caching (AA)</strong><br />I usually stay away from output cache and using a CDN for html (the default setting is not to cache html in DXP for this reason) and instead try to use object caching for expensive calls to avoid issues with this. But if you need to also use a CDN or output caching to get that extra boost of performance, you will need to remember personalization / visitor groups. If you cache and minimize external calls for pages and cache the menu separately for anonymous users, you can normally solve it without the need for a CDN and output cache. <br />If you decide to go with output cache anyway, there is a big advantage if the html sent with the initial response is free from personalization. Then personalized functionality can be rendered later using the content delivery api + javascript, and output cache and CDN can be applied to the server rendered html if needed. <br /><br /></li>
<li><strong>Avoid looking up content in structure with GetChildren (A)</strong><br />Use a content reference on start page if possible. Use Episerver Find if needed. Crawling through the page tree with GetChildren gets expensive.<br /><br /></li>
<li><strong>Try to avoid using nested blocks (AAA)<br /></strong>Blocks are great, but try to avoid unnecessary complexity. Do your editors really need that degree of freedom at the expense of complexity?<br />It can normally be solved in a simpler way. This will impact editor experience more than performance though, so it ranks low in this checklist.<br /><br /></li>
<li><strong>Custom visitor group criteria (AA)<br /></strong>If you build new criteria, ensure they don't create a lookup per request for each user. One request to the database per user is OK. One for every request is usually not. Since these are evaluated per request you need to be extra careful that they are fast.<br /><br /></li>
<li><strong>For simple blocks that don't need a controller, don't use one (AAA)</strong><br />If you are simply passing along the CMS content and rendering it, you don't need a controller. If you need to load additional data via an api and do more complex work, do use a controller.<br /><br /></li>
<li><strong>Check log level (A)<br /></strong>Avoid setting the log level to Info or All in production except during troubleshooting. Log4net set to Info on the root logger will bring down your production site.<br /><br /></li>
<li><strong>Search & Navigation for menus (formerly Episerver Find) (AA)<br /></strong>If you are using Find to build menus, find news articles to display in a list, or something similar per page request, caching the result is needed. Find has a nice extension for that. Only cache things shown to anonymous users to avoid issues with access rights and personalization. If you don't do this, there is a risk of both poor performance and running into request limits for Find.<br />
<pre class="language-csharp"><code>var result = client.Search<BlogPost>()
.StaticallyCacheFor(TimeSpan.FromMinutes(5))
.GetResult();</code></pre>
<p>Check in Application Insights how many requests are made towards Find to see if you have an issue with this. Normal site search is usually not necessary to cache.</p>
</li>
<li><strong>Elastic scaling of servers (AAA)<br /></strong>DXP uses Azure web apps under the hood that can automatically be scaled up if the CPU runs into issues during high load. This is an amazing technology that is needed for large scale sites. If you don't use DXP, it can be set up manually in Azure as well. If you have a well trained, cloud savvy organisation already and have very special demands on infrastructure in the cloud, that is definitely an option. I would recommend DXP as the main option though. On premise is possible, but then you will lose the elastic scaling ability, which increases the likelihood of CPU related performance issues.<br /><br /></li>
<li><strong>Warmup sites (AAA)<br /></strong>Many sites need to run a few requests to populate cache etc before starting to respond well. This is useful both when the site is elastically scaling and when deploying new code to the site to avoid performance drops. The start page is automatically warmed up by default in DXP. If you need other pages to be hit with a request that is possible as well.<br />Read more in detail using this article:<br /><a href="/link/56f8c9b590b04acbb6c1fa4d08bc0b80.aspx">https://world.optimizely.com/documentation/developer-guides/archive/dxp-cloud-services/development-considerations/warm-up-of-sites/</a><br /><br /></li>
</ul>
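<p>To illustrate the object cache advice in the checklist above, here is a minimal sketch using the synchronized object cache. MenuItemViewModel, BuildMenu and the cache key format are hypothetical names; ISynchronizedObjectInstanceCache and CacheEvictionPolicy come from EPiServer.Framework.Cache.</p>
<pre class="language-csharp"><code>private readonly ISynchronizedObjectInstanceCache _cache;

public IEnumerable<MenuItemViewModel> GetMenu(ContentReference menuRoot)
{
    var cacheKey = $"site:menu:{menuRoot.ID}";

    // return the cached menu if it is still there
    if (_cache.Get(cacheKey) is IEnumerable<MenuItemViewModel> cached)
    {
        return cached;
    }

    // the expensive GetChildren / Find work happens in here (hypothetical helper)
    var menu = BuildMenu(menuRoot);

    // absolute 10 minute expiration; the synchronized cache invalidates the entry on all servers
    _cache.Insert(cacheKey, menu, new CacheEvictionPolicy(TimeSpan.FromMinutes(10), CacheTimeoutType.Absolute));
    return menu;
}</code></pre>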
<h3>Backend general checklist</h3>
<ul>
<li><strong>Avoid multiple dependency calls for a page (A)</strong><br />If you get data from an api, cache it if possible. If you are getting lists of items, make sure you are getting enough information to avoid having to call the api again for each item. Use a scheduled job if possible to load expensive information in advance, and store it in Episerver Find / DDS or as content items in the content tree, since these are cached. DDS is pretty slow, so it is only a good idea for very small amounts of data like settings. In Application Insights you can check how many requests are made out from your application to render a page. Keep those few and fast. One quick external call on average is OK. More than that per request and you are normally in trouble. If you have as many as 10 or more per page request, it will be very difficult to keep the site fast and stable. This is probably the number one reason sites go down in my experience. Good to measure after each deploy for key pages.<br /><br /></li>
<li><strong>Don't scale images for every request (A)</strong><br />Make sure they are cached in the CDN afterwards, or cache the scaled images in some other way.<br /><br /></li>
<li><strong>Avoid using locks (A)</strong><br />Locks in code to avoid a method running twice at the same time look easy but can often cause serious deadlocks in production. Try to solve it in another way by making your code run well even if it happens to run twice sometimes (idempotent). If you really need to use locks, make sure to log extensively to be able to find problems quickly. Symptoms of a lock problem are low CPU but high request times from the server while dependencies respond fast. Use memory dumps from production to prove the problem. These performance problems are the most difficult to find. I've seen a lock that wasn't needed bring down quite a few large sites.<br /><br /></li>
<li><strong>Use load balancing (AA)</strong><br />Make sure the servers have the same machine key and avoid storing objects in session state. The CMS does this by default in DXP if you use that. Avoid using session state in your solution to simplify sharing load. If you cache expensive calls, the standard cache to use is the <a href="/link/d8535ea8672442888befed8959dc8b7b.aspx">synchronized object cache</a> that automatically invalidates the cache on all servers when it expires.<br /><br /></li>
<li><strong>Use IHttpClientFactory (AA)</strong><br />HttpClient is the class that is used to connect to external apis. Unfortunately it is really difficult to use correctly. Always instantiate it using the IHttpClientFactory. .NET 5 has good support for this by passing the HttpClient in the constructor if you are using CMS 12. Below that you will need a separate package from Microsoft.Extensions to make it work, but it's also possible for CMS 11 and .NET 4.7. If you see socket exceptions in the logs during heavy load, that is a good indication that you need to check this. This also made my integration calls around 30% faster just by fixing it. See the typed client sketch after this checklist.<br /><br /></li>
<li><strong>Use small cookies (AA)</strong><br />Avoid storing too much data in cookies. Use ids instead of storing entire large objects if needed. I've seen a large intranet get really slow from this single issue. Difficult to find. I remember I got slow response time even for static resources like css and js because of this.<br /><br /></li>
<li><strong>Use scheduled jobs / queue for expensive operations (AA)</strong><br />Some things like import jobs, sending notification emails etc can run during night time to keep server happy during daytime. For expensive operations that can be done without user feedback, using <a href="https://www.hangfire.io/">hangfire </a>or a custom queue in Azure can be a good option.<br /><br /></li>
<li><strong>Use CMS 12 and .NET 5+ if possible (AA)<br /></strong>.NET 5+ has a lot of free performance gains out of the box. Entity Framework Core 5+ for building custom database implementations can be lightning fast. I'm still amazed by an API we built that has a 5-10ms response time with quite a few tables and millions of rows in the database. <br /><br /></li>
<li><strong>Use streams, avoid byte[] (AA)</strong><br />Avoid passing byte[] along. If you see a byte[] somewhere in code, check if it's possible to implement it as a stream. For instance, uploading a profile picture to a file doesn't have to load the entire image into a byte[] first and then store it to disk. .NET memory management will not like you if you pass along a lot of heavy objects like byte[]. <br /><br /></li>
<li><strong>Avoid microservice architecture if you don't need it (AA)<br /></strong>Microservices can be useful for very large solutions, especially if they can be called directly from the frontend, the solution has multiple clients that need the data (like an external system / phone app) and it has multiple development teams. They do add an additional layer / integration call though, and increase the complexity of the solution. Debugging and monitoring will become trickier. For smaller single team solutions without multiple clients, they are often a waste of implementation time and performance. That being said, they have helped me upgrade a huge solution one piece at a time, which I like.<br /><br /></li>
<li><strong>Shut down integrations that don't work (AAA)</strong><br />Polly is a good framework that acts as a circuit breaker. You can also use the blocks functionality in the CMS as a feature switch and manually shut off functionality that isn't working. Integrations that start misbehaving can pull down your solution if you don't do this. Monitor integrations using Application Insights.<br /><br /></li>
<li><strong>Async await (AAA)</strong><br />Can be useful to increase throughput through the application. Do mind the additional complexity though. It is easy to create deadlocks if it is not used all the way. If you need to do several calls that don't depend on each other, start the calls at once and then await them afterwards to get them all to run in parallel (see the sketch after this checklist). This can save quite some time in some cases. Do use it on new sites. It is probably not worth the hassle on older ones that don't already have it. The compiler turns an await into a state machine, and by default the continuation is posted back to the original synchronization context. It also handles exceptions in a nice way. These facts are key to using it well.<br />-Set .ConfigureAwait(false) normally to improve performance and avoid deadlocks. This skips the switch back to the original context that you normally don't need.<br />-Avoid return await. Why create a state machine if you are not going to run any code but return right away? Return the task instead and let the calling method do the await. <br />Task.Run can definitely run async methods but will not have the good exception handling. Use await instead if possible.<br />If you need to run an async method synchronously, use .GetAwaiter().GetResult() as a last resort instead of .Result or .Wait to get better exception handling. If possible, rewrite to use async await the whole way instead of mixing. <br /><a href="https://www.youtube.com/watch?v=My2gAv5Vrkk">Best practice for async await</a></li>
</ul>
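<p>To illustrate the IHttpClientFactory item above, a minimal sketch of a typed client for CMS 12 / .NET 5+. MyApiClient and the base address are hypothetical names:</p>
<pre class="language-csharp"><code>// Startup.ConfigureServices: register a typed client so the underlying handlers are pooled correctly
services.AddHttpClient<MyApiClient>(client =>
{
    client.BaseAddress = new Uri("https://api.example.com/");
    client.Timeout = TimeSpan.FromSeconds(10);
});

// The typed client simply takes HttpClient in the constructor
public class MyApiClient
{
    private readonly HttpClient _httpClient;

    public MyApiClient(HttpClient httpClient) => _httpClient = httpClient;

    public async Task<string> GetDataAsync()
    {
        return await _httpClient.GetStringAsync("data").ConfigureAwait(false);
    }
}</code></pre>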
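<p>And a small sketch of the async advice above: when calls don't depend on each other, start them all first and await them together instead of awaiting one at a time. The services and view model here are hypothetical:</p>
<pre class="language-csharp"><code>public async Task<ProductViewModel> BuildViewModelAsync(string productId)
{
    // start both independent calls before awaiting either of them
    var pricesTask = _priceService.GetPricesAsync(productId);
    var stockTask = _stockService.GetStockAsync(productId);

    // the calls now run in parallel instead of sequentially
    await Task.WhenAll(pricesTask, stockTask).ConfigureAwait(false);

    return new ProductViewModel
    {
        Prices = pricesTask.Result, // safe here since the task has already completed
        Stock = stockTask.Result
    };
}</code></pre>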
<h3>Teamwork checklist</h3>
<p>Almost all websites work well when they are small and new but then slowly start to degrade over the years. How your team works and your development process during this time will affect the performance of the solution in the long run. The actions below are mindset and process improvements to avoid ending up with a huge solution that is slowly dying. As a tech lead for a major solution, keep these in mind:</p>
<ul>
<li><strong>Cognitive limit (A)<br /></strong>It's only possible to improve architecture on a solution if you can understand it. The size and the complexity of the solution is the enemy here. This is related to all other points in this section.<br /><br /></li>
<li><strong>Non-functional requirements (NFRs) should be part of the requirements (A)</strong><br />The problem with performance is that the business is usually focused on feature development: things that are visible and bring instant value to users. Someone needs to be in charge of adding architecture upgrades and performance improvements to the backlog. One way is to set a fixed percentage, like 30% of story points / development budget, to go to improving architecture (tech enablers). The team needs to help the product owner here or it tends to not get done. Only focusing on features and adding time / budget pressure on the team will result in a very slow site in the long run, 100% of the time.<br /><br /></li>
<li><strong>Methodology - fixed price or agile? (A)</strong><br />Fixed price projects are more prone to performance problems due to the above. They often lack requirements for NFRs, and time pressure limiting the team's own initiatives will more likely result in problems in this area. For business critical major sites, use an agile approach and don't ignore the NFR part of the solution. Use the most senior developer to help with user stories for this.<br /><br /></li>
<li><strong>No single part of the solution should be maintained by multiple teams (AA)</strong><br />This doesn't place a limit on the size of the solution. It just means that if you need both a fast car and the ability to dig, you build two different vehicles instead of trying to build a single really fast tractor. <br />If the solution gets bigger than a single team can autonomously maintain, start thinking about how to split it up in vertical slices into <a href="https://martinfowler.com/bliki/BoundedContext.html">multiple loosely coupled solutions</a> that can be deployed separately. Do this early to avoid SEO issues later on. In DXP you can first start by extracting a part of the solution into a separate site, since you can have multiple sites without additional license fees. If that site grows in functionality to the size that it needs a separate team, split it off into a new instance so that it can be upgraded and deployed separately. <br />Improving performance means architecture changes. Architecture changes when you have several teams and a gigantic solution are painful and tend to not get done. Don't underestimate this business risk. I would rate this as A for importance, but I am setting it as AA because it can be problematic to shift to for existing monolithic sites.<br /><br /></li>
<li><strong>Make it easy to phase out functionality (AA)</strong><br />As the site grows in functionality, it will be more difficult to maintain and keep at the same performance. For a large site with an expected lifespan of more than a decade you need to plan how to remove functionality from the start; otherwise this will eventually hurt maintainability and performance. <br />-Use Episerver blocks as feature switches for major functionality. If you have implemented new functionality using a block, it will be easy to gradually test it and phase it in for a subset of your users. Perfect for limiting the blast radius if something goes wrong and for simplifying deploys. They are also easy to phase out when they need to be replaced by new functionality. Don't overuse them though, and try not to nest them, since that makes the site unnecessarily complex.<br />-Use feature folders in .NET to simplify removing functionality<br />-Have a plan to remove functionality from the frontend, including the corresponding js, html and css. Feature folders and some kind of modularization can usually go a long way towards this goal.</li>
</ul>
<p>Happy optimizing! <br />If you have more suggestions on performance improvements, or if some of this advice helps you, drop a comment!</p>

Uploading blobs to DXP (2021-12-06)

<p>I recently went searching for how to upload blobs into DXP and thought it would be good to share my experience, since I didn't find any relevant documentation for this area.</p>
<p>When signing up for the DXP service you will get access to the PaaS portal at <a href="https://paasportal.episerver.net/">https://paasportal.episerver.net/</a>. You might think you could upload blobs there, but you would be wrong. That portal does make it possible to move content and blobs between environments, however. </p>
<p>There is also a <a href="/link/cecd04dddeea47f89a14d7f163913728.aspx">deployment api</a> that you can use to automate deploys from Azure DevOps or similar. Perfect for deploying code and configuration to your new web app, but sadly not blobs. </p>
<p>For integration you will also get access to <a href="https://portal.azure.com">portal.azure.com</a> where you can see your resources in Azure, including the web app and Application Insights. This is normally where you would also see the connected storage account for a regular Azure web site. But alas, no.</p>
<h2>Uploading using Azure Storage Explorer</h2>
<p>The "secret" tool is the Azure Storage Explorer that you can <a href="https://azure.microsoft.com/en-us/features/storage-explorer/#overview">download and install </a>yourself. Ask Optimizely support for an account to your integration environment and you are in business! Add your new account by clicking on the person icon and far far bottom left you can find and "Add an account" option. Choose "Storage account or service"</p>
<p><img src="/link/4dba4dbf6e494fc7ad4803938582333d.aspx" /></p>
<p>and then set the Account name and key you get from Optimizely support. You are in business!</p>
<p><img src="/link/3b92b38375d44552a8422a11c1d6e97e.aspx" /></p>
<p>The Upload folder is great for sharing bacpac files with Optimizely support, and there, finally,</p>
<p><em><strong>/mysitemedia</strong></em></p>
<p>is where you need to upload your precious blobs. </p>
<h3>Configure solution to use the blobs</h3>
<p>While you are at it you might want to configure your solution to actually use those blobs as well. </p>
<p>In your Web.Integration.config you will also need to point out this container to tell the CMS to use your new blobs:</p>
<pre class="language-markup"><code>...
<blob defaultProvider="azureblobs">
<providers>
<add name="azureblobs" type="EPiServer.Azure.Blobs.AzureBlobProvider,EPiServer.Azure"
connectionStringName="EPiServerAzureBlobs" container="mysitemedia"/>
</providers>
</blob>
...</code></pre>
<p>Happy coding!</p>

How many websites can you have in one Optimizely DXP CMS cloud solution? (2021-11-08)

<p>There are multiple considerations here to dive into:</p>
<ul>
<li>License</li>
<li>Content editing</li>
<li>Performance</li>
<li>Team productivity and architecture evolution</li>
</ul>
<p><strong>License</strong></p>
<p>From a license perspective it's possible to have multiple sites in DXP without additional cost. The cost does however depend on page views. For minor sites that aren't used much, sharing a DXP instance can be a good way to reduce costs.<br /><a href="/link/a92390248cf64fc098d582de29c932f0.aspx">https://world.optimizely.com/documentation/developer-guides/digital-experience-platform/development-considerations/</a></p>
<p><strong>Content editing</strong></p>
<p>Optimizely CMS has good support for editing experience with multiple sites. It's possible to restrict access rights easily so that editors can only work with content they should work with. It's possible to share common content across multiple sites if needed using blocks for instance. It's also possible to build news feeds etc that combines content from multiple sites easily. Content editing across multiple sites will not be an issue normally even with hundreds of editors.</p>
<p><strong>Performance</strong></p>
<p>For performance, it's good to understand that the sites are hosted together using the same resources in the background. They use the same SQL server and the same app service in Azure. With good coding there are really no technical limits to performance if you use caching and a CDN correctly. But the larger the codebase is, the more difficult it will become to keep it optimized for performance. This is true for both frontend and backend. <br />A single site with a couple of integrations and a simple frontend is easy. 10 sites with 100s of backend integrations and a shared frontend become very tricky. <br />Generally it's not the amount of content that is the limit, but the additional complexity of the code as the codebase grows. That additional complexity will start hurting the performance and later also the availability of the site.</p>
<p>From a technical perspective it's advised to keep this in mind when expanding the functionality of the site or even the number of sites. Is the solution starting to become too complex? A good measurement for this is given below.</p>
<p><strong>Team productivity and architecture</strong></p>
<p>First it's good to be aware that organization and software architecture are two sides of the same coin. This is known as Conway's law:</p>
<p><em>Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.</em></p>
<div class="templatequotecite"><em>— Melvin E. Conway</em></div>
<div class="templatequotecite">The technical architecture will allow any amount of sites, content types and integrations. Unfortunately this will require a larger standing development team. As long as you can keep the development team size below 10 people this will not become a major issue. DO add more sites to you DXP instance in this case. When you reach this limit however, the advice is to avoid adding more sites to this solution and instead start aiming for a second DXP instance for these new sites. There are valid pros with staying with a single solution even beyond a standing team of 10 developers. It's easier to keep the same design, reduce license cost etc but in my opinion they are not worth the overall business risk with having a too large and complex solution and the performance / availablity risks that entails. <br />The not so uncommon decay process of a single monolithic solution is:<br /><br />Larger solution => more complex => developers can't grasp the whole solution => bad performance => difficult to fix => replace development partner => lose know-how => worse performance => hurts conversion rate => replace solution and build new</div>
<div class="templatequotecite">
<div class="templatequotecite">For further guidance on when and why to split a large solution keep an eye out for articles about strategic domain driven design, bounded contexts and scaling agile.</div>
<div class="templatequotecite"><a href="https://youtu.be/4GK1NDTWbkY">Spotify development model </a></div>
<div class="templatequotecite">is a good start to explain the problem with scaling development to multiple teams and the general direction of the solution. Good resource to explain the issue to management and why this is a wise investment. </div>
</div>
<div class="templatequotecite"><strong>Summary</strong></div>
<div class="templatequotecite">How many websites can you have in one Optimizely DXP CMS cloud solution? Unlimited. But...<br />As long as you are not near a standing devops team of around 10+ developers, feel free to keep adding sites and features to your DXP instance. There are no technical,editorial or license limits with DXP. After 10+ developers, your development organization however is a limiting factor. Above team size of 10+, team work considerations and long term architecture evolution and application lifecycle start becoming more important. Merely dividing the developers into multiple teams on paper will not help since the teams will be as tightly coupled as the solution they work in. This is where breaking off suitable functionality into subsites and similar starts to become a better idea from a business perspective.</div>
Switching language as editor returns 404 Not found (2021-05-26)

<p>Short blog post about an issue I had with a website that might help others with the same problem. For a while I've had the problem that switching language in edit mode using the normal menu doesn't work and returns a 404.</p>
<p><img src="/link/a864124cf29b4b1c9311985a3c3c6551.aspx" width="468" height="188" /></p>
<p>All other functionality seems to work fine however, and changing the language in the querystring also works as a workaround. Today I got bored, started bug hunting and found the solution. It's an https site but was missing the https protocol setting in admin mode. The scheme is missing in the host name setting below.</p>
<p><img src="/link/864c4683498940fdac6229062ef139e1.aspx" /></p>
<p>Editing the protocol and setting the site to https correctly fixed the issue.</p>
<p><img src="/link/05a45d34e5ac4d5498b29aa9d63444e2.aspx" /></p>
<p>Such a simple solution to a weird problem with switching language! Hope it helps someone!</p>
<p>Happy coding!</p>

What countries does your site leak data to? (2021-03-05)

<p>A modern website built using your favorite Episerver CMS has a lot of external script resources that are fetched from all around the world. This is both a good and a bad thing. You can get a lot of value from tools such as Google Analytics, Hotjar, Google Translate etc, but since you are running these scripts in the user's browser you are also potentially leaking user information to these companies. This might be an issue in these GDPR times. </p>
<p>An easy way to check where you are getting your scripts from is to copy / paste this little script into your google chrome browser console:</p>
<p><a href="https://github.com/tomper00/privacy-test-your-site/blob/main/scan-site.js">https://github.com/tomper00/privacy-test-your-site/blob/main/scan-site.js</a><br />(Kudos to Tomas Persson for the script)</p>
<p>This will give you information similar to this for a common Swedish site:</p>
<p><img src="/link/94bd573e8e2c4d02bdbe3c3ddc8502c6.aspx" /></p>
<p>So what information are you sending to the US? Probably more than you think...</p>
<p>Happy coding! Stay safe!</p>
<p>Daniel Ovaska<br />Binary True AB</p>
Security issue with multiple package sources (2021-02-25)

<p><strong>Scenario</strong></p>
<p>You are using a private NuGet feed for a single package v 1.0.0 and a public NuGet feed for the rest of your packages. An attacker can then upload a new package to the public NuGet feed using the same name as your private package but with a higher bug fix version, v 1.0.1. Unless you have thought about this scenario, your build server will look across all package sources and pick the highest version (the faked 1.0.1 version on the public feed). So if you are using a private package source you are still not safe unless that is the only source you are using for your packages.</p>
<p><strong>Solution</strong></p>
<p>More detailed information can be found here about this vulnerability (<a href="https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-24105">https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-24105</a>) along with suggestions for how to mitigate the risks.</p>
<p>For high security scenarios:<br /><br /></p>
<ul>
<li><strong>Use one feed.</strong> If you have a private feed with internal packages, make that the only feed and add secure handling of public packages to that feed if you need. All projects should then use that one safe feed.<br /><a href="https://azure.microsoft.com/sv-se/services/devops/artifacts/">https://azure.microsoft.com/sv-se/services/devops/artifacts/</a> is a good option that can deliver that if you don't want to create your own.</li>
</ul>
<p>Additional mitigation:</p>
<ul>
<li><strong>Lock down your versions and make sure your build server can't update minor versions automatically</strong>. <br />Both npm and NuGet can easily generate a lock file that can then be used to force the build server to use a specific version of all dependencies. Use nuget restore -LockedMode (or dotnet restore --locked-mode) on CI servers to enforce the lock file.</li>
<li><strong>Make sure one package can only be downloaded from one source.</strong> <br />Use scope and npmrc file for npm to specify source per package. Use id prefix for nuget packages to block private packages from being uploaded to public feeds.</li>
</ul>
<p>This is not an Episerver specific vulnerability but good to be aware of if security is important for your site.</p>
<p>I've added it to my <a href="/link/16ba2cce8b5d496691d3b0ad9d09adb6.aspx">security checklist for Episerver solutions</a>. If you haven't gone through that for your site, I would suggest starting at the top and working your way down until you reach a decent level for your security requirements. </p>
<p>Stay safe, don't get hacked! Happy coding!</p>

Episerver page gives 404 (2021-02-09)

<p>Short blog post about an issue I just encountered.</p>
<p>Scenario:</p>
<p>The site works great. After implementing some new functionality (a new page type with a header in our case), the start page just suddenly stops working and responds with 404. Episerver edit mode still works however. What just happened and how to solve it?</p>
<p>Solution:</p>
<ol>
<li>Turn on full logging to get information about routing and details around that.<br /><br /></li>
<li>Logs gives this:<br />2021-02-09 15:30:54,085 [43] DEBUG EPiServer.Web.Routing.Segments.Internal.NodeSegment: Url 'https://dev.customerweb.local/' was routed to content with id '5' and language was set to 'sv'<br />2021-02-09 15:30:54,085 [43] TRACE EPiServer.Framework.Cache.ObjectInstanceCacheExtensions: Trying to Read the cacheKey = 'EP:LanguageBranch'<br />2021-02-09 15:30:54,085 [43] TRACE EPiServer.Framework.Cache.ObjectInstanceCacheExtensions: Trying to Read the cacheKey = 'EP:LanguageBranch'<br />2021-02-09 15:30:54,085 [43] TRACE EPiServer.Framework.Cache.ObjectInstanceCacheExtensions: Trying to Read the cacheKey = 'EPPageData:5'<br />2021-02-09 15:30:54,085 [43] TRACE EPiServer.Framework.Cache.ObjectInstanceCacheExtensions: Trying to Read the cacheKey = 'EPPageData:5:sv'<br />2021-02-09 15:30:54,085 [43] TRACE EPiServer.Framework.Cache.ObjectInstanceCacheExtensions: Trying to Read the cacheKey = 'EPPageData:5'<br />2021-02-09 15:30:54,086 [43] DEBUG EPiServer.Web.TemplateResolver: StartPage: Selected CustomerName.Web.Features.SiteLayout.header.HeaderController. (tag='', channel='', category='MvcController')<br />2021-02-09 15:30:54,063 [134] ERROR EPiServer.Global: Unhandled exception in ASP.NET<br />System.Web.HttpException (0x80004005): The file '/link/43F936C99B234EA397B261C538AD07C9.aspx' does not exist.</li>
<li>The first line in the log tells you that Episerver was successful in routing to the correct content; it has id 5 and the language was set to 'sv'. Yey! So far so good!</li>
<li>The debug line before the error however<br />TemplateResolver: StartPage: Selected CustomerName.Web.Features.SiteLayout.header.HeaderController<br />Wuuut?! It's trying to use the new HeaderController to render the page with!? Aha!</li>
<li>The newly developed HeaderController looks like this:<br />
<pre class="language-csharp"><code>public class HeaderController : PageController<SitePageData></code></pre>
Unfortunately the controller that renders the startpage looks like this:<br />
<pre class="language-csharp"><code>public class DefaultPageController : PageController<SitePageData></code></pre>
So what happens is that Episerver gets confused about which controller it should use to render the start page. Earlier it used the DefaultPageController and everything was fine, but now with the new HeaderController it selected that one instead. This is also visible in the logs: <br /><em>TemplateResolver: StartPage: Selected CustomerName.Web.Features.SiteLayout.header.HeaderController<br /></em></li>
<li>Solution: Make the controllers more specific so Episerver's TemplateResolver never has to guess. The short story is to be careful when using common parent classes like SitePageData, as in our case; it's a bad idea to have two controllers handling the same base type (see the sketch after this list). There is a nice detailed guide here:<br /><a href="/link/f99c83f5bae045e59a09abc4117e7965.aspx">https://world.episerver.com/documentation/developer-guides/CMS/rendering/selecting-templates/ </a></li>
</ol>
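<p>As a sketch of the fix (StartPage and HeaderPage are hypothetical page types, not the actual classes from the project): give each controller its own concrete page type instead of a shared base class, so the TemplateResolver only ever has one candidate per page type.</p>
<pre class="language-csharp"><code>using System.Web.Mvc;
using EPiServer.Web.Mvc;

//Renders only the start page type
public class StartPageController : PageController<StartPage>
{
    public ActionResult Index(StartPage currentPage)
    {
        return View(currentPage);
    }
}

//Renders only the header page type, no longer competing for every SitePageData
public class HeaderPageController : PageController<HeaderPage>
{
    public ActionResult Index(HeaderPage currentPage)
    {
        return View(currentPage);
    }
}</code></pre>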
<p>Hope this helps someone googling 404 issues with Episerver. Check your logs and verify that the right controller is assigned to handle the call to your content, or you can get some funny issues. </p>
<p>Happy coding!</p>Improved url caching in Episerver/blogs/Daniel-Ovaska/Dates/2020/9/improved-url-caching-in-episerver/2020-09-23T12:43:25.0000000Z<p>If you try to get a url from the UrlResolver, Episerver CMS will try to cache it for you, which is great! It works well until you change the url segment on a page. Then it fails to update that cached url and you might end up with 404s for the links to that page and its children for a couple of minutes until the cache clears. I've tried it in both versions 11.12 and 11.19 and the bug is easily reproducible in an Alloy site.<br /><em>Note: This bug was later fixed in 11.20, which makes this workaround unnecessary.</em></p>
<ol>
<li>Just change the url segment (Name in URL) field on the Alloy Track page to alloy-track-2 or similar. Publish. </li>
<li>Go to the start page and refresh it (Ctrl+F5). Try clicking on Alloy Track in the top navigation.</li>
<li>404</li>
</ol>
<p>Restarting the site will of course solve it. Waiting a couple of minutes will too. If you don't like any of those options, or waiting for a bug fix that takes care of this issue, you can use my little workaround below. It's also a nice example of how you can use standard .NET object-oriented programming to extend and tweak Episerver behaviour: find the interface you are interested in, tweak it, register it in the IoC container, use it.</p>
<ol>
<li><strong>New url cache handler</strong><br />Let's make a new url cache! It will be fun! The one Episerver uses under the hood implements the IContentUrlCache interface, so let's make a new one that supports clearing the cached urls when badly needed, e.g. when someone decides to change that "Name in URL" field. I'm adding a RemoveAll() method and a new master key that is common for all urls, to make it easy to clear them all.</li>
</ol>
<pre class="language-csharp"><code>using EPiServer;
using EPiServer.Core;
using EPiServer.Framework;
using EPiServer.Framework.Cache;
using EPiServer.Framework.Web;
using EPiServer.Globalization;
using EPiServer.Logging.Compatibility;
using EPiServer.Web;
using EPiServer.Web.Internal;
using EPiServer.Web.Routing;
using EPiServer.Web.Routing.Internal;
using EPiServer.Web.Routing.Segments;
using EPiServer.Web.Routing.Segments.Internal;
using System;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Globalization;
using System.Linq;
using System.Web;
using System.Web.Routing;
namespace DanielOvaska
{
public class ImprovedContentUrlCache : IContentUrlCache
{
private const string UrlPrefix = "ep:url:";
private const string DependencyPrefix = "ep:url:d:";
private readonly IObjectInstanceCache _cache;
private readonly AncestorReferencesLoader _ancestorLoader;
private readonly TimeSpan _cacheExpirationTime;
public ImprovedContentUrlCache(
IObjectInstanceCache cache,
AncestorReferencesLoader ancestorLoader,
RoutingOptions contentOptions)
{
this._cache = cache;
this._ancestorLoader = ancestorLoader;
this._cacheExpirationTime = contentOptions.UrlCacheExpirationTime;
if (this._cacheExpirationTime <= TimeSpan.Zero)
throw new ArgumentException("The cache expiration time should be greater than zero");
}
public string Get(ContentUrlCacheContext context)
{
var url = this._cache.Get(this.GetCacheKey(context)) as string;
return url;
}
public void Remove(ContentUrlCacheContext context)
{
this._cache.Remove(this.GetDependencyKey(context.ContentLink));
}
public void RemoveAll()
{
this._cache.Insert(_masterKeyForAllUrls, "cleared at " + DateTime.Now.ToLongTimeString(), null);
}
public void Insert(string url, ContentUrlCacheContext context)
{
ContentReference contentLink = context.ContentLink;
IEnumerable<ContentReference> ancestors = this._ancestorLoader.GetAncestors(contentLink, AncestorLoaderRule.ContentAssetAware);
TimeSpan cacheExpirationTime = this._cacheExpirationTime;
this._cache.Insert(this.GetCacheKey(context), (object)url, new CacheEvictionPolicy(cacheExpirationTime, CacheTimeoutType.Sliding, Enumerable.Empty<string>(), this.CreateDependencyKeys(contentLink, ancestors)));
}
private string _masterKeyForAllUrls = "ImprovedContentUrlCache";
internal IEnumerable<string> CreateDependencyKeys(
ContentReference contentLink,
IEnumerable<ContentReference> ancestors)
{
yield return _masterKeyForAllUrls;
yield return this.GetDependencyKey(contentLink);
foreach (ContentReference ancestor in ancestors)
yield return this.GetDependencyKey(ancestor);
}
internal string GetCacheKey(ContentUrlCacheContext context)
{
return "ep:url:" + context.GetHashCode().ToString();
}
internal string GetDependencyKey(ContentReference contentLink)
{
return "ep:url:d:" + contentLink.ToReferenceWithoutVersion().GetHashCode().ToString();
}
}
}</code></pre>
<p>2. <strong>Dependency injection of the new class.</strong> <br />Now we need to tell Episerver to use our improved url cache handler instead. You can do that easily with a few lines when configuring the IoC container:</p>
<pre class="language-csharp"><code> [InitializableModule]
public class DependencyResolverInitialization : IConfigurableModule
{
public void ConfigureContainer(ServiceConfigurationContext context)
{
//Implementations for custom interfaces can be registered here.
context.ConfigurationComplete += (o, e) =>
{
//Register custom implementations that should be used in favour of the default implementations
context.Services.AddSingleton<IContentUrlCache,ImprovedContentUrlCache>();
...</code></pre>
<p>3. <strong>Reacting to the published event</strong><br />Let's hook into the published event and clear the url cache if, and only if, an editor has been tampering with url segments, to avoid those pesky 404s. I'm going to clear them all rather than mess with tricky cache dependencies, since a segment change also affects the children...and language versions etc. Just clear it. Changing urls on an existing page should be a rare event, so it really shouldn't cause any real performance issues. Let's add some code so that the cache is only cleared if the url segment has actually changed; we don't want to kill the cache every time an editor publishes a typo fix to a random page. That would be bad for performance.</p>
<pre class="language-csharp"><code>[ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
public class ChangeEventInitialization : IInitializableModule
{
private ILogger _log = LogManager.GetLogger(typeof(ChangeEventInitialization));
public void Initialize(InitializationEngine context)
{
var events = ServiceLocator.Current.GetInstance<IContentEvents>();
events.PublishedContent += Events_PublishedContent;
}
private void Events_PublishedContent(object sender, EPiServer.ContentEventArgs e)
{
_log.Information($"Published content fired for content {e.ContentLink.ID}");
var urlCache = ServiceLocator.Current.GetInstance<IContentUrlCache>();
var page = e.Content as PageData;
if(page!=null)
{
var contentVersionRepository = ServiceLocator.Current.GetInstance<IContentVersionRepository>();
var versions = contentVersionRepository.List(e.ContentLink);
if(versions.Count()>1)
{
var contentRepository = ServiceLocator.Current.GetInstance<IContentRepository>();
var previousPage = contentRepository.Get<PageData>(versions.ToArray()[1].ContentLink);
if(previousPage.URLSegment!=page.URLSegment)
{
var improvedUrlCache = urlCache as ImprovedContentUrlCache;
if (improvedUrlCache != null)
{
_log.Information($"Removing cached urls due to content update");
improvedUrlCache.RemoveAll();
}
}
}
}
}
public void Uninitialize(InitializationEngine context)
{
var events = ServiceLocator.Current.GetInstance<IContentEvents>();
events.PublishedContent -= Events_PublishedContent;
}
}</code></pre>
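<p>If you ever need to flush the cached urls manually, for example from a scheduled job or an admin tool, you can resolve the cache and call RemoveAll() yourself. A small sketch, assuming the registration from step 2 is in place:</p>
<pre class="language-csharp"><code>//Resolve the registered url cache and clear everything on demand.
//Only works if ImprovedContentUrlCache is the registered IContentUrlCache implementation.
var improvedUrlCache = ServiceLocator.Current.GetInstance<IContentUrlCache>() as ImprovedContentUrlCache;
if (improvedUrlCache != null)
{
    improvedUrlCache.RemoveAll();
}</code></pre>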
<p>Hopefully that helps someone until Episerver fixes the cache invalidation of urls themselves. Then feel free to remove this little workaround! Also remember that changing urls on a page is usually a bad idea due to SEO and incoming links anyway, but that's another discussion. </p>
<p>Happy coding!</p>To use headless or not, that is the question/blogs/Daniel-Ovaska/Dates/2020/9/to-use-your-headless-or-not-use-your-head/2020-09-18T15:40:41.0000000Z<p>No one can have missed the headless wave going through the web developer community lately. Episerver supports building a headless solution very well with the <a href="/link/6ffa5cb8173a414eac25740deeafdbc8.aspx">content delivery api</a> that was added from v11.4. I have personally used it a lot to build SPA (single page application) solutions on top of Episerver using vue, react and angular, and it works great. Later versions of Episerver also make it possible to support editors. But what are the pros and cons of headless, really? When should you use it and when should you stay away from it?</p>
<p><strong>Pros</strong>:</p>
<ul>
<li>Easier to implement complex user experience parts with lots of interactions</li>
<li>Frontend developers get more power and can more easily implement a feature without help</li>
</ul>
<p><strong>Cons</strong>:</p>
<ul>
<li>Performance can be tricky to achieve for the first page load, especially on large enterprise sites, which hurts conversion rate and SEO</li>
<li>Editor capabilities are limited. It's trickier to support more advanced scenarios like A/B testing, previewing, blocks etc</li>
<li>Routing and SEO. For a full SPA that also takes care of routing, SEO and deep linking are more difficult</li>
<li>Implementing standard functionality that is powered by editor-created content takes longer with headless than with a traditional architecture</li>
<li>Increases complexity of overall solution if used too much</li>
<li>Javascript frameworks have a best-before date similar to milk</li>
<li>Can easily grow into a hard-to-manage monolithic solution that may be difficult to replace in a few years</li>
<li>Backend developers get less power and will have more difficulty implementing a feature without help</li>
</ul>
<h3>Headless + powerful javascript framework X is great for user interaction-heavy parts of the site</h3>
<p>To summarize, a headless CMS combined with a rich Javascript framework like react, angular or vue is a great tool to solve a specific problem. Pages or parts of your website that have a lot of user interaction benefit from using a headless approach with a more powerful javascript framework. It can be complex forms, graphs or specific tools (think gmail, google maps, Power BI, an advanced checkout in e-commerce and similar views). Be aware of the potential downside that slow initial load speed can have on conversion rate (especially on mobile) and SEO though. A page that loads in 4.2s will have approximately half the conversion rate of one that loads in 2.4s. Read more <a href="https://www.cloudflare.com/learning/performance/more/website-performance-conversion-rates/">here</a>.</p>
<h3>Traditional site without heavy javascript framework is best for content-heavy site</h3>
<p>For strictly content-heavy parts, headless is not the best match. If your focus is content rather than fancy user interaction, your site benefits more from a traditional architecture without the headache of headless and heavy javascript; focus instead on Episerver pages and blocks powered by standard html, css and backend code to give the editors maximum power. For the end user you will get the absolute best performance and load times, especially on mobile devices. SEO, A/B testing and a rich experience for editors to experiment with content are very easy to support. Adding a content delivery network (CDN) on top of that will give you the best performance anywhere around the globe. </p>
<h3>Summary</h3>
<p>Architecture and design patterns like headless are tools to solve a problem. If you don't have the problem, don't apply the solution.</p>
<p>If you have a part of the site that is focused on content and a part of the site focused on user interaction, use both headless and traditional backend architecture, but avoid mixing them, to get maximum benefits for both users and developers. </p>
<p>Happy coding!</p>Fix problems with edit mode/blogs/Daniel-Ovaska/Dates/2020/2/fix-problems-with-edit-mode/2020-02-14T10:57:56.0000000Z<p>Episerver edit mode normally works great. But sometimes a developer manages to crash it. I'll go through the most common issues you can check for to make it work again.</p>
<ol>
<li><strong>WebSockets not enabled on IIS</strong><br />Easy to fix by adding the feature to IIS. If you forget this you will get a warning in edit mode and some newer functionality like support for projects won't work. <br /><img src="/link/d920ccd330a144ec9ffdc8c77a163004.aspx" width="480" height="284" /><br /><br /></li>
<li><strong>Parts of the nuget package for edit mode were lost along the way</strong><br />Double check that you have the relevant zip files under the modules folder on the server. Sometimes they get lost somewhere in the deploy process.<br /><img src="/link/e84c64a77e184ea7a43b50e1b02b07cb.aspx" /><br />If they don't exist in your development environment, reinstall the relevant nuget packages.</li>
<li><strong>Make sure edit mode hasn't been removed</strong><br />Sometimes you do this if you have a separate editor server. It can look similar to this in webconfig. Notice that the allow tag has been removed. Common symptoms are that you get a 404 or 403 after logging in. <br /><img src="/link/2dc3f5da0210441cbe6f99de2dd01b61.aspx" /></li>
<li><strong>Custom properties<br /></strong>If edit mode doesn't work properly or goes blank for some content types, check in admin mode if there are some custom properties on them. Try removing them in the development environment to see if that is what causes the issue. Remove old properties that aren't used anymore on that content type.<br /><br /></li>
<li><strong>Check edit mode for js errors</strong><br />Use Chrome developer tools or similar and check the console. This can give you a good hint about what isn't working; often it's custom properties like above.<br /><br /></li>
<li><strong>PropertyList and Url properties can't be saved</strong><br />A common problem when you add a Url property to a property list is forgetting to handle the JSON transformation. Add some attributes to handle the Url property correctly; if you don't, saving the property list won't work. <br />
<pre class="language-csharp"><code>[JsonProperty]
[JsonConverter(typeof(UrlConverter))]
[Display(Name = "Link1", Order = 200)]
public virtual Url UrlLink { get; set; }
</code></pre>
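<p>The UrlConverter above is not a built-in Episerver class, so here is a minimal sketch of what such a Newtonsoft.Json converter could look like (it simply serializes the Url as a plain string and back):</p>
<pre class="language-csharp"><code>using System;
using EPiServer;
using Newtonsoft.Json;

public class UrlConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return objectType == typeof(Url);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        //Recreate the Url from the stored string value
        var value = reader.Value as string;
        return string.IsNullOrEmpty(value) ? null : new Url(value);
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        //Store the Url as a plain string in the property list json
        writer.WriteValue(value == null ? null : value.ToString());
    }
}</code></pre>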
<br />
<p>Hope that helps someone with troubleshooting if edit mode gives you a hard time. </p>
<p>Happy coding!<br /><br />PS If you blog and use web[dot]config somewhere in your blog, cloudflare will block you from creating that blog. DS</p>
</li>
</ol>Creating the perfect Episerver integration with HttpClient/blogs/Daniel-Ovaska/Dates/2020/1/creating-the-perfect-httpclient-api-call/2020-01-15T14:31:34.0000000Z<p>Building an integration that keeps working during heavy user load is tricky. </p>
<p>Since Episerver uses .NET as its underlying framework, a lot of integrations involve consuming different web APIs, and the key class here is HttpClient. It's easy to use to build integrations that work during light user load. Unfortunately this class is the worst mess that Microsoft has ever created. It looks easy but don't be fooled. It's like nitroglycerin. If you sneeze in its general direction it will explode in your face.</p>
<p>There are a couple of bottlenecks you will run into. The first one is that you will run out of sockets on your server. Then there is also memory consumption and max free threads to consider underneath the hood.</p>
<p>Here is some advice on how to keep it working during heavy load (and get better performance during light load):</p>
<ol>
<li><strong>Reuse HttpClient instances<br /></strong><br />It is intended to be reused for many calls. Do not wrap the HttpClient in a using statement (<em>even though it weirdly enough has a Dispose())</em>. Do not create a new instance for each call. If you create a new instance every time you will lose a lot of performance at low load and get SocketExceptions that crash the site at high load. A good pattern is to have an IHttpClientFactory that stores instances that can be reused:<br />
<pre class="language-csharp"><code>public class HttpClientFactory : IHttpClientFactory
{
protected static readonly ConcurrentDictionary<string, HttpClient> HttpClientCache = new ConcurrentDictionary<string, HttpClient>();
public HttpClient GetForHost(Uri uri)
{
var key = $"{uri.Scheme}://{uri.DnsSafeHost}:{uri.Port}";
return HttpClientCache.GetOrAdd(key, k =>
{
var client = new HttpClient()
{
/* Other setup */
};
var sp = ServicePointManager.FindServicePoint(uri);
sp.ConnectionLeaseTimeout = 60 * 1000; // 1 minute
return client;
});
}
}</code></pre>
So if you are building a repository class that needs an HttpClient, you can take the IHttpClientFactory as a constructor dependency and grab an instance from it.<br />
<pre class="language-csharp"><code>public class ProductRepository:IProductRepository
{
private readonly HttpClient _client;
public ProductRepository(IHttpClientFactory httpClientFactory)
{
_client = httpClientFactory.GetForHost(new Uri("[product base url]"));
}
}</code></pre>
<p>Besides solving the possible out-of-sockets problem, I actually got 30% faster calls in a project from this improvement alone. Setting up a completely new HttpClient, including the https handshake etc., is an expensive operation. In .NET Core a factory like this is built into the framework, and you should use it to inject named or typed instances into your repositories instead.</p>
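<p>For reference, a rough sketch of what that looks like in ASP.NET Core (names and urls below are made up). The built-in IHttpClientFactory pools handlers and handles DNS changes for you, and injects a configured HttpClient straight into the repository:</p>
<pre class="language-csharp"><code>//In ConfigureServices in an ASP.NET Core application (Microsoft.Extensions.Http).
//The framework manages the handler lifetime and injects HttpClient into ProductRepository.
services.AddHttpClient<IProductRepository, ProductRepository>(client =>
{
    client.BaseAddress = new Uri("https://products.example.com/");
    client.Timeout = TimeSpan.FromSeconds(30);
});</code></pre>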
</li>
<li><strong>Set ServicePoint default connection limit</strong><br /><br />Unfortunately .NET has a very low default limit on how many concurrent connections an HttpClient instance can have. If you use asynchronous programming with async/await, which you should, you really want to increase this value. If you don't, you will get a <span>TaskCanceledException when you run out of connections.</span> You can easily do that in application startup with:<br />
<pre class="language-csharp"><code>protected void Application_Start()
{
...
ServicePointManager.DefaultConnectionLimit = int.MaxValue;
...
}</code></pre>
<p>Mind you, don't hammer an external api too hard with x number of simultaneous calls. They can get angry. With great power comes great responsibility.</p>
</li>
<li><strong>Make sure that HttpClient respects DNS changes</strong><br /><br />Reusing a single HttpClient has a hidden problem that you need to know about. Let's say you have a cloud environment and are swapping slots. This means that in the background the DNS is changing to another IP. If you have a static HttpClient that lives forever, that change won't be picked up until you restart the entire application. That's a little evil. That's why HttpClientFactory above has the obscure setting: <br />
<pre class="language-csharp"><code>var sp = ServicePointManager.FindServicePoint(uri);
sp.ConnectionLeaseTimeout = 60 * 1000; // 1 minute</code></pre>
<p>This will take care of any nasty DNS change that occurs while your super fast Episerver website just keeps on running. Close to light speed. And beyond.</p>
</li>
<li><strong>Dispose of the HttpResponseMessage object</strong><br /><br />
<pre class="language-csharp"><code>HttpResponseMessage response = null;
try
{
    //Create the call with the http client and assign the response...
}
finally
{
    if (response != null)
        response.Dispose();
}</code></pre>
I've seen some strange behaviours when I forgot this one, with <span>TaskCanceledException as a result. In the response there is the content stream, which in some cases can stay open even though you are done with it. So always dispose this object in your favorite way, either with try/catch/finally like above or, even better, with the keyword using(var response ...) {}. It's especially important if the response is a failed call. <br /><br /></span></li>
<li><span><strong>Set Timeout large enough to handle large files (default is 100 seconds)</strong><br /><br /></span><span>Otherwise you will also get a TaskCanceledException weirdly enough.</span><span><span><br /></span></span>
<pre class="language-csharp"><code>//Add to httpclient factory above
var client = new HttpClient()
{
Timeout = TimeSpan.FromMinutes(10);
};</code></pre>
<p>Easy to forget when you are on your superfast local machine and downloading small files. You need to make it work for a large file on a poor network = long download time. HttpClient will close with a TaskCanceledException if the request takes longer than 100s. Only add this one if you need it though.</p>
</li>
<li><strong>Avoid storing large files in memory, use streams all the way with HttpCompletionOption.ResponseHeadersRead</strong><br /><br />Do use streams instead of byte[]. <br />Do use HttpCompletionOption.ResponseHeadersRead. Otherwise it won't start streaming until the entire file is loaded into memory.<br />Do dispose the response object, either by calling Dispose() yourself on the response object or by the <strong>using</strong> keyword below.<br /><br />
<pre class="language-csharp"><code>using (var response = await httpClient.GetAsync(
"https://test.test.com/test/",
HttpCompletionOption.ResponseHeadersRead))
{
if (response.IsSuccessStatusCode)
{
using (var stream = await response.Content.ReadAsStreamAsync())
{
//Save to disc using the stream
}
}
}</code></pre>
<p><br />Stream it directly to the user or to a file or wherever you want it. Using a byte[] will create a very expensive object in memory in the background. During heavy load with many files you will end up spending a lot of memory and CPU just juggling objects on the large object heap in the background. Use streams all the way to the destination.</p>
</li>
<li><strong>Enable support for gzipped response from server</strong><br /><br />
<pre class="language-csharp"><code>HttpClientHandler handler = new HttpClientHandler()
{
AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
};
var client = new HttpClient(handler)
{
/* Other setup */
};</code></pre>
<p>Why send more bytes than you have to?</p>
</li>
</ol>Content events in Episerver/blogs/Daniel-Ovaska/Dates/2019/6/content-events-in-episerver/2019-06-19T13:21:00.0000000Z<p><strong>So you need to do something when content changes?</strong></p>
<p>Knowing when content changes can be important in many use cases. You might need to update a search index with the new information, send an email to some editor or similar.</p>
<p>That is easy to support using content events in Episerver, but there are a few gotchas. Let's start by listening to the most common content event raised in Episerver, PublishedContent, and then examine a few edge cases. You can do this by creating your own initialization module and attaching some event handlers using the IContentEvents interface like this:</p>
<pre class="language-markup"><code>[ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
public class ChangeEventInitialization : IInitializableModule
{
private ILogger _log = LogManager.GetLogger(typeof(ChangeEventInitialization));
public void Initialize(InitializationEngine context)
{
var events = ServiceLocator.Current.GetInstance<IContentEvents>();
events.PublishedContent += Events_PublishedContent;
}
private void Events_PublishedContent(object sender, EPiServer.ContentEventArgs e)
{
_log.Information($"Published content fired for content {e.ContentLink.ID}");
}
public void Uninitialize(InitializationEngine context)
{
var events = ServiceLocator.Current.GetInstance<IContentEvents>();
events.PublishedContent -= Events_PublishedContent;
}
}</code></pre>
<p>Done!</p>
<p>Or, not really. You covered the most obvious change of content but there are a few others you need to be aware of.</p>
<p>So let's dig into some pitfalls you might have forgotten to handle. </p>
<ol>
<li><strong>Wastebasket</strong><br /><br />Throwing things into the trash (or restoring them) will cause a move event, not a delete event.<br />Makes sense really, but it's easy to miss. So if you need to reindex an item that ends up in the wastebasket, this is good to know.<br /><br /></li>
<li><strong>Move event</strong><br /><br />Only the page that is being moved will trigger the move event, not the children. <br />If you cast the event args to the MoveContentEventArgs class you can see it also has a property called Descendents. This contains the affected child content. Remember that you need to handle the descendents if you have a move event.<br />
<pre class="language-markup"><code>private void Events_MovedContent(object sender, EPiServer.ContentEventArgs e)
{
var eventargs = e as MoveContentEventArgs;
if(eventargs!=null)
{
// use eventargs.Descendents to get every content item that is affected...
}
}</code></pre>
</li>
<li><strong>Delete event</strong><br /><br />The Deleted event will send you the id of the wastebasket as contentlink if the user empties the wastebasket. <br />Hmm, ok. It is the wastebasket that triggers the delete, but you might have expected to get the contentlink to the deleted content here. <br />The actual deleted content you need to handle can be retrieved by casting the ContentEventArgs to the DeleteContentEventArgs class and then checking the DeletedDescendents property, like this:<br />
<pre class="language-markup"><code>private void Events_DeletedContent(object sender, EPiServer.DeleteContentEventArgs e)
{
var eventArgs = e as DeleteContentEventArgs;
if(eventArgs!=null)
{
// use eventArgs.DeletedDescendents to get affected content...
}
} </code></pre>
</li>
<li><strong>Url changes</strong><br /><br />Changing the url segment (called Name in Url in edit mode) on a page and publishing it will trigger a publish event. On that page. But not on the page's descendents.<br />The problem is that the url segment is also used for the full urls of the children, so you might need to handle that in the published event. One way to tell if the url has changed is to store the old url in the publishing event in the ContentEventArgs Items collection and then check it in the published event. <br /><br />
<pre class="language-markup"><code>private void Events_PublishingContent(object sender, EPiServer.ContentEventArgs e)
{
var urlResolver = ServiceLocator.Current.GetInstance<IUrlResolver>();
var oldUrl = urlResolver.GetUrl(new ContentReference(e.Content.ContentLink.ID));
e.Items.Add("Url", oldUrl);
}</code></pre>
<br />
<pre class="language-markup"><code>private void Events_PublishedContent(object sender, EPiServer.ContentEventArgs e)
{
var urlResolver = ServiceLocator.Current.GetInstance<IUrlResolver>();
var url = urlResolver.GetUrl(e.ContentLink);
if (e.Items["Url"]!=null)
{
var oldUrl = e.Items["Url"].ToString();
if(url!=oldUrl)
{
//Handle that url for all children has now been changed...reindex them etc...
}
}
}</code></pre>
<p>
</p></li>
<li>
<p><strong>Access rights</strong><br /><br />Changing access rights on content is another thing that you might forget to handle. This creates an event too but you need to use the IContentSecurityRepository to handle it. The published event will not trigger in this case. Remember that changing access rights can also affect children since access rights are normally inherited. You will only get this event for the node that is changed and then you need to handle all descendents yourself if you need to reindex etc.<br /><br /></p>
<pre class="language-markup"><code>//In initialization init:
var contentSecurityRepo = ServiceLocator.Current.GetInstance<IContentSecurityRepository>();
contentSecurityRepo.ContentSecuritySaved += ContentSecurityRepo_ContentSecuritySaved;
//and then create event handler method
private void ContentSecurityRepo_ContentSecuritySaved(object sender, ContentSecurityEventArg e)
{
_log.Information($"ContentSecuritySaved fired for content {e.ContentLink.ID}");
}</code></pre>
</li>
</ol>
<h2>Summary and source code for a new more inclusive ContentChange event</h2>
<p>The basic event handling for content in Episerver is easy to find, but handling all types of content changes is more difficult. Hopefully this post will help you spot a few of the most common pitfalls. In a future version I hope Episerver CMS will also get a simpler event that tells you if a content item has been changed in any way (including access rights, children etc).</p>
<p>I'll finish this post by adding some example code for the backbone of such a new event, called ContentChanged, that you can modify to fit the specific needs of your project. This event will be triggered if the content has been changed by being moved, deleted, published, having the url changed on a parent etc., and it includes an AffectedContent property with all descendents that may have been affected by the action. To save some space I've used a single initialization module to hook up all events.<br /><br />Happy coding!</p>
<h3>Example code for new ContentChange event handling</h3>
<pre class="language-markup"><code> //Initialization module to hook up all events and setup a new event type for ContentChanged
[ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
public class ChangeEventInitialization : IInitializableModule
{
private ILogger _log = LogManager.GetLogger(typeof(ChangeEventInitialization));
public void Initialize(InitializationEngine context)
{
var events = ServiceLocator.Current.GetInstance<IContentEvents>();
var contentSecurityRepo = ServiceLocator.Current.GetInstance<IContentSecurityRepository>();
contentSecurityRepo.ContentSecuritySaved += ContentSecurityRepo_ContentSecuritySaved;
events.MovedContent += Events_MovedContent;
events.PublishingContent += Events_PublishingContent;
events.PublishedContent += Events_PublishedContent;
events.DeletedContent += Events_DeletedContent;
ExtendedContentEvents.Instance.ContentChanged += Instance_ContentChanged;
}
private void Instance_ContentChanged(object sender, ContentChangedEventArgs e)
{
_log.Information($"Events ContentChanged fired for content {JsonConvert.SerializeObject(e)}");
}
private void Events_PublishingContent(object sender, EPiServer.ContentEventArgs e)
{
_log.Information($"Events_PublishingContent fired for content {e.Content.ContentLink.ID}");
var urlResolver = ServiceLocator.Current.GetInstance<IUrlResolver>();
var oldUrl = urlResolver.GetUrl(new ContentReference(e.Content.ContentLink.ID));
e.Items.Add("Url", oldUrl);
_log.Information($"Old url: {oldUrl}");
}
private void ContentSecurityRepo_ContentSecuritySaved(object sender, ContentSecurityEventArg e)
{
_log.Information($"ContentSecuritySaved fired for content {e.ContentLink.ID}");
var action = ContentAction.AccessRightsChanged;
var affectedContent = new List<ContentReference>();
var contentRepository = ServiceLocator.Current.GetInstance<IContentRepository>();
var descendants = contentRepository.GetDescendents(e.ContentLink);
affectedContent.AddRange(descendants);
affectedContent.Add(e.ContentLink);
ExtendedContentEvents.Instance.RaiseContentChangedEvent(new ContentChangedEventArgs(e.ContentLink, action, affectedContent));
}
private void Events_DeletedContent(object sender, EPiServer.DeleteContentEventArgs e)
{
_log.Information($"Deleted content fired for content {e.ContentLink.ID}");
var eventArgs = e as DeleteContentEventArgs;
if(eventArgs!=null)
{
var action = ContentAction.ContentDeleted;
var affectedContent = new List<ContentReference>();
affectedContent.AddRange(eventArgs.DeletedDescendents);
if(e.ContentLink.ID!=ContentReference.WasteBasket.ID)
{
affectedContent.Add(e.ContentLink);
}
ExtendedContentEvents.Instance.RaiseContentChangedEvent(new ContentChangedEventArgs(e.ContentLink, action, affectedContent));
}
}
private void Events_MovedContent(object sender, EPiServer.ContentEventArgs e)
{
_log.Information($"Moved content fired for content {e.ContentLink.ID}");
var eventargs = e as MoveContentEventArgs;
if(eventargs!=null)
{
var action = ContentAction.ContentMoved;
if(eventargs.TargetLink.ID == ContentReference.WasteBasket.ID)
{
action = ContentAction.ContentMovedToWastebasket;
}
if(eventargs.OriginalParent.ID==ContentReference.WasteBasket.ID)
{
action = ContentAction.ContentMovedFromWastebasket;
}
var affectedContent = new List<ContentReference>();
affectedContent.AddRange(eventargs.Descendents);
affectedContent.Add(e.ContentLink);
ExtendedContentEvents.Instance.RaiseContentChangedEvent(new ContentChangedEventArgs(e.ContentLink, action, affectedContent));
}
}
private void Events_PublishedContent(object sender, EPiServer.ContentEventArgs e)
{
_log.Information($"Published content fired for content {e.ContentLink.ID}");
var urlResolver = ServiceLocator.Current.GetInstance<IUrlResolver>();
var url = urlResolver.GetUrl(e.ContentLink);
_log.Information($"New url: {url}");
if (e.Items["Url"]!=null)
{
var oldUrl = e.Items["Url"].ToString();
if(url!=oldUrl)
{
_log.Information($"Url changed for {e.ContentLink.ID}");
var contentRepository = ServiceLocator.Current.GetInstance<IContentRepository>();
var descendants = contentRepository.GetDescendents(e.ContentLink);
var affectedContent = new List<ContentReference>();
affectedContent.AddRange(descendants);
affectedContent.Add(new ContentReference(e.ContentLink.ID));
ExtendedContentEvents.Instance.RaiseContentChangedEvent(new ContentChangedEventArgs(e.ContentLink, ContentAction.UrlChanged, affectedContent));
}
else
{
ExtendedContentEvents.Instance.RaiseContentChangedEvent(new ContentChangedEventArgs(e.ContentLink, ContentAction.ContentPublished, new List<ContentReference>()));
}
}
}
public void Uninitialize(InitializationEngine context)
{
var events = ServiceLocator.Current.GetInstance<IContentEvents>();
var contentSecurityRepo = ServiceLocator.Current.GetInstance<IContentSecurityRepository>();
contentSecurityRepo.ContentSecuritySaved -= ContentSecurityRepo_ContentSecuritySaved;
events.MovedContent -= Events_MovedContent;
events.PublishingContent -= Events_PublishingContent;
events.PublishedContent -= Events_PublishedContent;
events.DeletedContent -= Events_DeletedContent;
ExtendedContentEvents.Instance.ContentChanged -= Instance_ContentChanged;
}
public void Preload(string[] parameters)
{
}
}
//New event args class that can store a list of descendents that were affected
//and the type of source event
public class ContentChangedEventArgs : EventArgs
{
public ContentReference SourceContentLink { get; }
public ContentAction Action { get; }
public ContentChangedEventArgs(ContentReference sourceContentLink, ContentAction action, IEnumerable<ContentReference> affectedContent)
{
SourceContentLink = sourceContentLink;
Action = action;
AffectedContent = affectedContent;
}
/// <summary>
/// Includes references to all affected content including the content that triggered the event
/// </summary>
public IEnumerable<ContentReference> AffectedContent { get; }
}
//New enum to specify the original action that changed the content.
//Can be extended if needed to include the entire source event
public enum ContentAction
{
ContentPublished,
ContentDeleted,
ContentMoved,
AccessRightsChanged,
UrlChanged,
ContentMovedToWastebasket,
ContentMovedFromWastebasket
}
//Some infrastructure to make it possible to listen on the changeevent,
///raise a new event etc.
public class ExtendedContentEvents
{
public const string CreatingLanguageEventKey = "ContentChangedEvent";
private EventHandlerList Events
{
get
{
if (_events == null)
throw new ObjectDisposedException(this.GetType().FullName);
return _events;
}
}
private EventHandlerList _events = new EventHandlerList();
private static object _keyLock = new object();
private static ExtendedContentEvents _instance;
internal const string ChangedEvent = "ChangedEvent";
public static ExtendedContentEvents Instance
{
get
{
if (_instance == null)
{
lock (_keyLock)
{
if (_instance == null)
_instance = new ExtendedContentEvents();
}
}
return _instance;
}
}
private object GetEventKey(string stringKey)
{
object obj;
if (!_eventKeys.TryGetValue(stringKey, out obj))
{
lock (_keyLock)
{
if (!this._eventKeys.TryGetValue(stringKey, out obj))
{
obj = new object();
_eventKeys[stringKey] = obj;
}
} }
return obj;
}
private Dictionary<string, object> _eventKeys = new Dictionary<string, object>();
public event EventHandler<ContentChangedEventArgs> ContentChanged
{
add
{
Events.AddHandler(this.GetEventKey("ContentChangedEvent"), (Delegate)value);
}
remove
{
Events.RemoveHandler(this.GetEventKey("ContentChangedEvent"), (Delegate)value);
}
}
public virtual void RaiseContentChangedEvent(ContentChangedEventArgs eventArgs)
{
var eventHandler = Events[GetEventKey(CreatingLanguageEventKey)] as EventHandler<ContentChangedEventArgs>;
if (eventHandler != null)
{
eventHandler((object)this, eventArgs);
}
}
public void Dispose()
{
this.Dispose(true);
GC.SuppressFinalize((object)this);
}
protected virtual void Dispose(bool disposing)
{
if (!disposing)
return;
if (_events != null)
{
_events.Dispose();
_events = (EventHandlerList)null;
}
if (this != _instance)
return;
_instance = null;
}
}</code></pre>Performance - GetChildren vs GetBySegment/blogs/Daniel-Ovaska/Dates/2019/3/performance---getchildren/2019-03-05T13:34:35.0000000Z<p>GetChildren is a decent method that is also cached in the background. But if you have 1000s of children you will get some performance hits. Another good option if you are only looking for a single content item is to use the GetBySegment method. This one works well with large numbers of children, with excellent performance. </p>
<p>I tried it on a folder that has 13000+ children (yes, bad idea) and these were the results when running on my local machine:</p>
<ol>
<li>GetChildren<br />
<pre class="language-markup"><code>
foreach (var folder in ContentRepository.GetChildren<ContentFolder>(parent))
{
if (folder.Name == customerId)
{
...
}
}</code></pre>
17.857s</li>
<li>GetBySegment<br />
<pre class="language-markup"><code> var customerFolder = ContentRepository.GetBySegment(parent, customerId, LanguageSelector.AutoDetect());</code></pre>
0.147s</li>
</ol>
<p>So that's a pretty huge performance gain, more than a factor of 100. Hope it helps someone out there pick the right tool for the job. </p>
<p>Few children => GetChildren<br />Many children => GetBySegment or Episerver Find...</p>
<p>Happy coding everyone!</p>Testing Episerver content delivery API/blogs/Daniel-Ovaska/Dates/2018/9/trying-out-episerver-content-delivery-api/2018-09-13T10:39:32.0000000Z<p>This will be my shortest blog post yet, but hopefully it will save someone an hour when reading about my gotchas. </p>
<p>The content delivery API is an API you will likely want to use if you need to get Episerver data into a client side application built on React, Angular, Vue or their friends. Installing it is pretty easy but it has a few quirks. It still has a dependency on Episerver Find, for instance, so if you don't have that you are out of luck for now. You can follow this excellent guide that helped me get it up and running:</p>
<p><a href="https://mmols.io/getting-started-with-the-episerver-content-delivery-api/">https://mmols.io/getting-started-with-the-episerver-content-delivery-api/</a></p>
<ul>
<li>Episerver needs to be at 11.4.0 or later</li>
<li>You need to use the new identity authentication (if you haven't upgraded from membership provider, now is the time!) <br />It's fast but you might run into password hashing issues.</li>
<li>Remember that Episerver Find needs to be version 12.x.x or earlier. <br />If you use a brand new Alloy site you can uninstall the version 13 and reinstall the latest version 12. <br />A bit annoying but no biggie since it only takes 5 mins.</li>
<li>For an Alloy site you also need some initialization that has already been done on the Commerce Quicksilver site that Matthew references in step 2. You will find the missing classes for this configuration here, which took me a while to figure out:</li>
</ul>
<p><a href="https://github.com/episerver/Quicksilver/tree/master/Sources/EPiServer.Reference.Commerce.Site/Infrastructure/WebApi">https://github.com/episerver/Quicksilver/tree/master/Sources/EPiServer.Reference.Commerce.Site/Infrastructure/WebApi</a></p>
<ul>
<li>I also noticed that the official documentation has an infrastructure zip file with the files mentioned above. </li>
<li>When trying out the api from Postman or similar, remember that the api is sensitive to language and that the language is sent using a header. If you end up getting 404s, add the relevant language header (sv-SE or similar) to specify the language; see the sketch after this list.</li>
<li>If you get the exception "A route named 'MS_attributerouteWebApi' is already in the route collection. Route names must be unique", it means that you are trying to register attribute routing multiple times. Remove one of the config.MapHttpAttributeRoutes() calls if you have several. The content delivery apis will try to register this too, and you can turn that off with appSettings if you already have it in your project: <br />
<pre class="language-xml"><code><add key="episerver:contentdeliverysearch:maphttpattributeroutes" value="false" /></code></pre>
<pre class="language-xml"><code><add key="episerver:contentdelivery:maphttpattributeroutes" value="false" /></code></pre>
</li>
</ul>
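<p>A small sketch of setting the language header from code (the host and content id below are made up, and the url path depends on which version of the api you have installed):</p>
<pre class="language-csharp"><code>using System.Net.Http;
using System.Threading.Tasks;

//Quick manual test of the content delivery api with an explicit language.
//In real code, reuse the HttpClient instance instead of newing one up per call.
public static async Task<string> GetContentJsonAsync(int contentId)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Accept-Language", "sv-SE");
        return await client.GetStringAsync(
            $"https://mysite.local/api/episerver/v2.0/content/{contentId}");
    }
}</code></pre>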
<p>So far I have an Alloy + Angular 6 site up and running that gets all content from the content delivery api, where all text (and blocks) supports direct on-page editing. Works!</p>Meetup in Uppsala wed June 20th/blogs/Daniel-Ovaska/Dates/2018/6/meetup-in-uppsala-wed-june-20th/2018-06-18T15:35:45.0000000Z
<p>Just a few spots left!</p>
<p>Host this time is <a href="https://www.visma.se/kontakt/">Visma</a> and the event takes place at UKK and starts at 17. See you there!</p>
<p>Register and read more information <a href="https://www.meetup.com/Episerver-Uppsala/events/251011389/">here!</a></p>
Using Troy Hunts Pwned Passwords API/blogs/Daniel-Ovaska/Dates/2018/3/using-troy-hunts-pwned-passwords-api/2018-03-13T02:24:08.0000000Z
<p>Troy Hunt built a great <a href="https://www.troyhunt.com/ive-just-launched-pwned-passwords-version-2/">API </a>to check if a password has been compromised (pwned). </p>
<p>Let's check out how to use it to make sure that your users don't use unsecure passwords!</p>
<h2>Query the API</h2>
<p>The first part is how to query the api. A simple repository with a single "GetOwnedCount" method can then look like:</p>
<pre class="language-csharp"><code>public class OwnedPasswordRepository : IOwnedPasswordRepository
{
static HttpClient client = new HttpClient();
public string BaseUrl { get; set; } = "https://api.pwnedpasswords.com/range/";
public int GetOwnedCount(string password)
{
var hashedPassword = Hash(password);
var searchResultsString = client.GetStringAsync(BaseUrl + hashedPassword.Substring(0, 5)).Result;
var resultsArray = searchResultsString.Split(new[] { "\r\n" }, System.StringSplitOptions.RemoveEmptyEntries);
var key = hashedPassword.Substring(5);
foreach (var resultString in resultsArray)
{
var values = resultString.Split(':');
if (key == values[0])
{
var ownedPasswords = Int32.Parse(values[1]);
return ownedPasswords;
}
}
return 0;
}
public static string Hash(string input)
{
using (var sha1 = new SHA1Managed())
{
var hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(input));
var sb = new StringBuilder(hash.Length * 2);
foreach (byte b in hash)
{
sb.Append(b.ToString("X2"));
}
return sb.ToString();
}
}
}</code></pre>
<h2>Block users without secure passwords</h2>
<p>For ASP.NET Identity this can be done by implementing a new password validator class. Let's inherit the existing one and spice it up:</p>
<pre class="language-csharp"><code>public class OwnedPasswordValidator: PasswordValidator
{
private readonly LocalizationService localizationService;
private readonly IOwnedPasswordRepository _ownedPasswordRepository;
public OwnedPasswordValidator(IOwnedPasswordRepository ownedPasswordRepository) :base()
{
_ownedPasswordRepository = ownedPasswordRepository;
localizationService = ServiceLocator.Current.GetInstance<LocalizationService>();
}
private ILogger _log = LogManager.Instance.GetLogger(typeof(OwnedPasswordValidator).ToString());
public string BaseUrl { get; set; } = "https://api.pwnedpasswords.com/range/";
public const string DefaultErrorMessage = "Your password occurs in hacked databases {0} times. Try another password!";
public int MaxAllowedOwnedPasswords { get; set; } = 0;
public const string OwnedPasswordErrorKey = "/OwnedPasswordError";
static HttpClient client = new HttpClient();
public override Task<IdentityResult> ValidateAsync(string password)
{
IdentityResult resultToReturn = IdentityResult.Success;
var baseResult = base.ValidateAsync(password).Result;
if(baseResult.Succeeded)
{
try
{
var ownedPasswordsCount = _ownedPasswordRepository.GetOwnedCount(password);
if (ownedPasswordsCount > MaxAllowedOwnedPasswords)
{
resultToReturn = IdentityResult.Failed(string.Format(localizationService.GetString(OwnedPasswordErrorKey, DefaultErrorMessage), ownedPasswordsCount));
}
}
catch(Exception ex)
{
_log.Error("Failed to call owned passwords service.",ex);
}
}
else
{
resultToReturn = baseResult;
}
return Task.FromResult(resultToReturn);
}
}</code></pre>
<p>Ok, so far so good. We have our own password validator class. But how do we force an Episerver Identity based site to use it? The easiest way is to take control of the registration in the OWIN startup. Let's create some IAppBuilder extensions for initialization. </p>
<pre class="language-html"><code>/// <summary>
/// Some helper methods to use with Episerver identity based sites.
/// You can simply use it in your owin Startup.cs
///
/// app.AddCustomCmsAspNetIdentity<ApplicationUser>();
/// </summary>
public static class IdentityExtensions
{
public static IAppBuilder AddCustomCmsAspNetIdentity<TUser>(this IAppBuilder app) where TUser : IdentityUser, IUIUser, new()
{
return app.AddCustomCmsAspNetIdentity<TUser>(new ApplicationOptions());
}
public static IAppBuilder AddCustomCmsAspNetIdentity<TUser>(this IAppBuilder app, ApplicationOptions applicationOptions) where TUser : IdentityUser, IUIUser, new()
{
applicationOptions.DataProtectionProvider = app.GetDataProtectionProvider();
app.CreatePerOwinContext<ApplicationOptions>((Func<ApplicationOptions>)(() => applicationOptions));
app.CreatePerOwinContext<ApplicationDbContext<TUser>>(new Func<IdentityFactoryOptions<ApplicationDbContext<TUser>>, IOwinContext, ApplicationDbContext<TUser>>(ApplicationDbContext<TUser>.Create));
app.CreatePerOwinContext<ApplicationRoleManager<TUser>>(new Func<IdentityFactoryOptions<ApplicationRoleManager<TUser>>, IOwinContext, ApplicationRoleManager<TUser>>(ApplicationRoleManager<TUser>.Create));
app.CreatePerOwinContext<ApplicationUserManager<TUser>>(new Func<IdentityFactoryOptions<ApplicationUserManager<TUser>>, IOwinContext, ApplicationUserManager<TUser>>(ApplicationUserManagerInitializer<TUser>.Create));
app.CreatePerOwinContext<ApplicationSignInManager<TUser>>(new Func<IdentityFactoryOptions<ApplicationSignInManager<TUser>>, IOwinContext, ApplicationSignInManager<TUser>>(ApplicationSignInManager<TUser>.Create));
app.CreatePerOwinContext<UIUserProvider>(new Func<IdentityFactoryOptions<UIUserProvider>, IOwinContext, UIUserProvider>(ApplicationUserProvider<TUser>.Create));
app.CreatePerOwinContext<UIRoleProvider>(new Func<IdentityFactoryOptions<UIRoleProvider>, IOwinContext, UIRoleProvider>(ApplicationRoleProvider<TUser>.Create));
app.CreatePerOwinContext<UIUserManager>(new Func<IdentityFactoryOptions<UIUserManager>, IOwinContext, UIUserManager>(ApplicationUIUserManager<TUser>.Create));
app.CreatePerOwinContext<UISignInManager>(new Func<IdentityFactoryOptions<UISignInManager>, IOwinContext, UISignInManager>(ApplicationUISignInManager<TUser>.Create));
ConnectionStringNameResolver.ConnectionStringNameFromOptions = applicationOptions.ConnectionStringName;
return app;
}
}</code></pre>
<p>Ok, this looks tricky, but to be honest it's really exactly what Episerver does under the hood except for one line:<br /><br /></p>
<pre class="language-csharp"><code> app.CreatePerOwinContext<ApplicationUserManager<TUser>>(new Func<IdentityFactoryOptions<ApplicationUserManager<TUser>>, IOwinContext, ApplicationUserManager<TUser>>(ApplicationUserManagerInitializer<TUser>.Create));</code></pre>
<p>If you are observant you can see we have added a custom Create method. Under the hood that Create() method does this:</p>
<pre class="language-html"><code>public static class ApplicationUserManagerInitializer <TUser> where TUser : IdentityUser, IUIUser, new()
{
public static ApplicationUserManager<TUser> Create(IdentityFactoryOptions<ApplicationUserManager<TUser>> options, IOwinContext context)
{
var userManager = ApplicationUserManager<TUser>.Create(options, context);
userManager.PasswordValidator = new OwnedPasswordValidator(new OwnedPasswordRepository())
{
RequiredLength = 6,
RequireNonLetterOrDigit = true,
RequireDigit = true,
RequireLowercase = true,
RequireUppercase = true,
MaxAllowedOwnedPasswords = 0
};
return userManager;
}
}</code></pre>
<p>So you can see above that we switch out the PasswordValidator for our own. Only one step left now: we need to initialize this in Startup.cs with this line to use our new custom password validator:</p>
<pre class="language-html"><code>//Comment out this:
//app.AddCmsAspNetIdentity<ApplicationUser>();
app.AddCustomCmsAspNetIdentity<ApplicationUser>();</code></pre>
<h2>Test drive</h2>
<p>There you go! Take it for a test spin by trying to create a user with a hacked password like P@ssw0rd.</p>
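<p>You can also try the repository from code directly; a quick sketch using the class above:</p>
<pre class="language-csharp"><code>//Quick manual check against the pwned passwords api using the repository above.
var repository = new OwnedPasswordRepository();
var count = repository.GetOwnedCount("P@ssw0rd");
Console.WriteLine($"Password found {count} times in known breaches.");</code></pre>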
<p>Nuget package is available for Episerver 11 with id BinaryTrue.OwnedPassword. </p>
<p>If you want to copy paste the code instead, head over to the <a href="https://github.com/danielovaska/PndedPassword/">github page</a>.</p>
<p><img src="/link/9cc73eb471fb4aafae9800e71a8ad25e.aspx" alt="Image Hacked2.PNG" /></p>