<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom"><title type="text">Blog posts by Antti Alasvuo</title><link href="http://world.optimizely.com" /><updated>2025-11-30T16:04:41.0000000Z</updated><id>https://world.optimizely.com/blogs/Antti-Alasvuo/</id> <generator uri="http://world.optimizely.com" version="2.0">Optimizely World</generator> <entry><title>Mastering Optimizely DXP: How to Download Blobs Like a Pro with PowerShell</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2025/11/how-to-download-optimizely-dxp-blobs-with-a-powershell-script/" /><id>&lt;p&gt;In 2021 I wrote a &lt;a title=&quot;Step by step instructions to download DXP blobs&quot; href=&quot;/link/bfe815302aae4a888f3ee5384a49d322.aspx&quot;&gt;blog post with detailed instructions&lt;/a&gt; on how to download blobs from an Optimizely DXP environment. I have used that blog post ever since as my &quot;what were the steps&quot; notes whenever I&#39;ve needed to download a project&#39;s blobs to my local environment. But every time I&#39;ve downloaded blobs I have thought that I should write a simple PowerShell script that does all the steps for me once I have provided the API credential and environment information. It would also be nice if the script printed out the container names and let me select which container to download.&lt;/p&gt;
&lt;p&gt;Well, it took me four years to find the time to write the PowerShell script, but here it is.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;EpiCloud PowerShell module installed
&lt;ul&gt;
&lt;li&gt;Optimizely &lt;a title=&quot;EpiCloud module&quot; href=&quot;https://docs.developers.optimizely.com/digital-experience-platform/docs/deploy-using-powershell&quot;&gt;EpiCloud module&lt;/a&gt; installation and related documentation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;AzCopy installed
&lt;ul&gt;
&lt;li&gt;Download Microsoft &lt;a title=&quot;Download AzCopy&quot; href=&quot;https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10#download-azcopy&quot;&gt;AzCopy&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Optimizely DXP API credential, or access to the PaaS portal to create one&lt;/li&gt;
&lt;li&gt;&lt;a title=&quot;PowerShell script to download Optimizely DXP blobs&quot; href=&quot;https://gist.github.com/alasvant/f41f8ffec7826b94efdc1b5796ae4d36&quot;&gt;Get my script&lt;/a&gt; from GitHub Gist (might be updated in the future) and save it as download-dxp-blobs.ps1, or use the one on this page&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can check my old blog post for the &lt;a title=&quot;install instructions&quot; href=&quot;/link/bfe815302aae4a888f3ee5384a49d322.aspx&quot;&gt;install instructions&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Using the PowerShell script&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Open command prompt and navigate to the folder where you have the script&lt;/li&gt;
&lt;li&gt;The Optimizely EpiCloud module is not signed, so you have to run PowerShell with the execution policy bypassed; in the command prompt run the following command:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;powershell.exe -ExecutionPolicy Bypass&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Now execute the script with the required arguments
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;&amp;amp; .\download-dxp-blobs.ps1&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;-ApiKey [your API key]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-ApiSecret [your API secret]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-ProjectId [your project id]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-Environment [Optimizely DXP environment string]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-DownloadPath [path where to download in double quotes]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-RetentionHours [how many hours the SAS link is valid]&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Optional; the default is four hours&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Sample command:&lt;/p&gt;
&lt;pre class=&quot;language-powershell&quot;&gt;&lt;code&gt;&amp;amp; .\download-dxp-blobs.ps1 -ApiKey 1234567890123456238937223 -ApiSecret 9874677436746763443434dsaJKDSSJKHD -ProjectId 4D50395E-3805-498E-AD8C-82D7F530A596 -Environment Production -DownloadPath &quot;C:\Downloads\DXP\ProjectXBlobs&quot; -RetentionHours 6&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Sample output from the script:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/1af2ef6f570841a187d140e7d6518680.aspx&quot; alt=&quot;&quot; width=&quot;1100&quot; height=&quot;323&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3&gt;PowerShell script for reference&lt;/h3&gt;
&lt;p&gt;Here is the script for reference, but you can also get it from&amp;nbsp;&lt;a title=&quot;PowerShell script to download Optimizely DXP blobs&quot; href=&quot;https://gist.github.com/alasvant/f41f8ffec7826b94efdc1b5796ae4d36&quot;&gt;this gist&lt;/a&gt;.&lt;/p&gt;
&lt;pre class=&quot;language-powershell&quot;&gt;&lt;code&gt;# Simple script to download Optimizely DXP blobs
# Created by Antti Alasvuo
# https://github.com/alasvant
# https://world.optimizely.com/blogs/Antti-Alasvuo/
#
# Gist: https://gist.github.com/alasvant/f41f8ffec7826b94efdc1b5796ae4d36
#
# You get the ApiKey, ApiSecret and ProjectId from the Optimizely PaaS portal (DXP Management Portal), API tab;
# create a new API credential there if you don&#39;t have one yet
#
# Environment: Integration, Preproduction or Production
#
# DownloadPath, is the path where the selected container is downloaded to
#
# RetentionHours, how many hours the created SAS link is valid (how long this script can access the container&#39;s blobs)
# Default is 4 hours; ensure the value is greater than the time it takes to download all the blobs

# Get needed arguments for Optimizely cmd-lets
param (
 [Parameter(Mandatory)]
 [string]
 $ApiKey,
 [Parameter(Mandatory)]
 [string]
 $ApiSecret,
 [Parameter(Mandatory)]
 [string]
 $ProjectId,
 [Parameter(Mandatory)]
 [string]
 $Environment,
 [Parameter(Mandatory)]
 [string]
 $DownloadPath,
 [int]
 $RetentionHours = 4
)

# Not using Connect-EpiCloud because then, when calling Get-EpiStorageContainer, the &quot;table&quot;
# output is printed to the console wherever I try to redirect the output :D

Write-Host &#39;Getting DXP containers..&#39;
Write-Host &#39;&#39;

$containers = Get-EpiStorageContainer -ClientKey $ApiKey -ClientSecret $ApiSecret -ProjectId $ProjectId -Environment $Environment | Select-Object -ExpandProperty storageContainers

# Counter used in the loop so the selection numbers are one-based instead of zero-based
$foreachCounter = 1

foreach ($containerName in $containers)
{
	Write-Host (&quot;{0}) {1}&quot; -f ($foreachCounter++), $containerName)
}

$selectedIndex = -1

do
{
	$isInputValid = [int]::TryParse((Read-Host &#39;Enter container number (and press enter)&#39;), [ref]$selectedIndex)
	
	if ((-not $isInputValid))
	{
		Write-Host &#39;Invalid value entered.&#39;
	}
	elseif (($selectedIndex -lt 1) -or ($selectedIndex -gt $containers.length))
	{
		Write-Host &#39;Invalid number selected for container.&#39;
		$isInputValid = $false
	}
} while (-not $isInputValid)

$selectedContainerName = $containers[$selectedIndex - 1]

Write-Host &#39;&#39;
Write-Host (&quot;Selected container: {0}&quot; -f $selectedContainerName)

Write-Host &#39;&#39;
Write-Host &#39;Acquiring SAS link..&#39;

$sasLink = Get-EpiStorageContainerSasLink -ClientKey $ApiKey -ClientSecret $ApiSecret -ProjectId $ProjectId -Environment $Environment -StorageContainer $selectedContainerName -RetentionHours $RetentionHours | Select-Object -ExpandProperty sasLink

Write-Host &#39;SAS link acquired.&#39;
Write-Host &#39;&#39;

Write-Host &#39;Starting download using azure copy.&#39;
Write-Host &#39;&#39;

azcopy copy $sasLink $DownloadPath --recursive

Write-Host &quot;Done.&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Footer note: Would you have guessed that the blog post heading was created by Copilot :D&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</id><updated>2025-11-30T16:04:41.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Swapcode.Optimizely.AuditLog updated to v1.4.1</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2023/5/swapcode-auditlog-updated-to-v1-4-1/" /><id>&lt;p&gt;If you are using my audit log add-on Swapcode.Optimizely.AuditLog then I suggest that you update it in your solution.&lt;/p&gt;
&lt;p&gt;I&#39;ve been waiting for a few years now for Optimizely to fix the styles used in the Change Log tool in admin view, but it seems they don&#39;t have time to fix the table styles.&lt;/p&gt;
&lt;p&gt;In the past the log messages have not been visible without using browser developer tools to inspect the HTML source.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/8f8b41f7dda6497f91c5d196b10b3ca7.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;In this new version I&#39;ve changed the structure of the activity log entry, and now you can see usually most of the relevant information.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/0baeaffb018447cf897284e642ab6c7f.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;There is still an issue with the Change Log styling: you need to scroll the table to the right, and the last column&#39;s size depends on your browser window size, but there is nothing I can do about that ;-D&lt;/p&gt;
&lt;p&gt;Another change: I added a new extension method for IServiceCollection, AddAuditLog(), which you should use instead of the old way.&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// replace old code
services.AddEmbeddedLocalization&amp;lt;AuditLogInitializationModule&amp;gt;();

// with the new extension method, which should be done after call to .AddCms()
.AddAuditLog();&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The add-on still uses the initialization module to register the event handlers, but this is preparation for changing the implementation in a coming update, so currently the AddAuditLog method just registers the embedded resources ;-)&lt;/p&gt;
&lt;p&gt;The updated NuGet package should show up in the Optimizely NuGet feed shortly, but in the meantime you can get the updated NuGet package manually from my GitHub &lt;a href=&quot;https://github.com/alasvant/Swapcode.Optimizely.AuditLog/releases/tag/v1.4.1&quot;&gt;repository releases&lt;/a&gt;.&lt;/p&gt;</id><updated>2023-05-20T09:21:07.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Fix failing Remove Unrelated Content Assets Optimizely scheduled job</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2023/2/fix-failing-remove-unrelated-content-assets-optimizely-scheduled-job/" /><id>&lt;p&gt;Our Remove Unrelated Content Assets scheduled job started to fail with the message &quot;&lt;em&gt;The DELETE statement conflicted with the REFERENCE constraint &quot;FK_tblContentProperty_tblContent2&quot;. The conflict occurred in database &quot;epicms&quot;, table &quot;dbo.tblContentProperty&quot;, column &#39;ContentLink&#39;.&lt;/em&gt;&quot;. All we knew was that some heavy content deletion and imports had been done, but that is normal work supported by the CMS (maybe not often done at such magnitude, but anyway). &lt;strong&gt;@Optimizely&lt;/strong&gt; hint: the error message in the scheduled job view is pretty useless - we have no real data, like the offending content id, to easily understand what is wrong.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/970c635497fa4e4caf3ee3ca75b2b022.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;So at this stage we only know that something is failing in the database because of a constraint, but we have no clue what content is causing it.&lt;/p&gt;
&lt;h3&gt;What content is causing the issue&lt;/h3&gt;
&lt;p&gt;The next step is figuring out which content is causing this. I remembered from the past that EPiServer.DataAccess.Internal.ContentSaveDB does actually log content ids, but at DEBUG level.&lt;/p&gt;
&lt;p&gt;So I had the DXP database exported and restored it to my local environment, and then changed the log4net logging level to log everything.&lt;/p&gt;
&lt;p&gt;In your log4net configuration (in Optimizely CMS 11 projects named EPiServerLog.config by default), change the level element value under the root element to All or Debug, and in the appender configuration comment out the threshold element or change its value to Debug, for example.&lt;/p&gt;
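&lt;p&gt;A minimal sketch of the relevant EPiServerLog.config changes (the element names follow standard log4net configuration; the appender name here is illustrative and will differ in your file):&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;&amp;lt;log4net&amp;gt;
  &amp;lt;appender name=&quot;errorFileLogAppender&quot; type=&quot;log4net.Appender.RollingFileAppender&quot;&amp;gt;
    &amp;lt;!-- comment this out, or lower it to Debug, so DEBUG messages pass through --&amp;gt;
    &amp;lt;threshold value=&quot;Debug&quot; /&amp;gt;
    &amp;lt;!-- ...rest of the appender settings... --&amp;gt;
  &amp;lt;/appender&amp;gt;
  &amp;lt;root&amp;gt;
    &amp;lt;!-- change the level from Error (or similar) to All or Debug --&amp;gt;
    &amp;lt;level value=&quot;All&quot; /&amp;gt;
    &amp;lt;appender-ref ref=&quot;errorFileLogAppender&quot; /&amp;gt;
  &amp;lt;/root&amp;gt;
&amp;lt;/log4net&amp;gt;&lt;/code&gt;&lt;/pre&gt;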
&lt;p&gt;Now when executing the job we get a lot more logging; look in the log file just before the exception is logged.&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;DEBUG EPiServer.DataAccess.Internal.ContentSaveDB: Deleting content 65015
ERROR EPiServer.DataAbstraction.ScheduledJob: Job EPiServer.Util.CleanUnusedAssetsFoldersJob failed for the job &#39;Remove Unrelated Content Assets&#39; with jobId =&#39;e652f3bd-f550-40e8-8743-2c39cda651dc&#39; 
System.Data.SqlClient.SqlException (0x80131904): The DELETE statement conflicted with the REFERENCE constraint &quot;FK_tblContentProperty_tblContent2&quot;. The conflict occurred in database &quot;preprod-cms&quot;, table &quot;dbo.tblContentProperty&quot;, column &#39;ContentLink&#39;.&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I&#39;ve cleaned some trace messages out of the log snippet above, but you get the point.&lt;/p&gt;
&lt;p&gt;Now we can see that the system is about to delete content with id 65015 and that is the content id that causes the exception.&lt;/p&gt;
&lt;h3&gt;Manage content&lt;/h3&gt;
&lt;p&gt;Now that we know the content id, we can navigate to CMS edit mode, modify the URL to contain the offending content id, like &lt;em&gt;#context=epi.cms.contentdata:///&lt;strong&gt;65015&lt;/strong&gt;&amp;amp;viewsetting=viewlanguage:///en&lt;/em&gt;, and hit enter to navigate to that content. In our case this was a content assets folder containing some images, but there is no way to delete this content from edit mode.&lt;/p&gt;
&lt;p&gt;But what about the &quot;Manage Content&quot; tool in admin view under Tools? If you simply navigate to the &quot;Manage Content&quot; tool you see the site(s) resources in a hierarchical view, and that&#39;s it - but there is an old trick. Right-click &quot;Manage Content&quot; under Tools and select to open the link in a new tab, switch to the new tab and look at the URL; this should give you the hint that the page accepts a content id as a URL parameter. Edit the URL and enter the offending content id, in our case ManageContent.aspx?id=65015, and hit enter.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/cf8a9283cdec43dfbba2393342308e45.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;It looks like nothing changed, BUT you are now actually editing the content with content id 65015. Our goal was to get rid of this content, so click the &quot;&lt;em&gt;Move to trash&lt;/em&gt;&quot; button (not Delete - there is a reason for that).&lt;/p&gt;
&lt;p&gt;The page will reload and the id in the URL changes to 4 (the content assets root); the id might be different in your case if the content was of some other type.&lt;/p&gt;
&lt;p&gt;Next switch to edit view and go to trash (View Trash). Now you should see the content in trash.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/44cf4e0d4dc44d57a50f16e7ae768120.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Before moving the content to trash I had emptied the trash, so everything now in the trash is content that can be deleted. In our case the content was a content assets folder, so all the content inside it can be seen here too. Hover your mouse cursor over the folder and you can see it has the content id we just moved (a double check). The final step is to delete the content from the trash; you can use the &quot;Empty Trash&quot; button or delete the item(s) one by one.&lt;/p&gt;
&lt;h3&gt;Execute Remove Unrelated Content Assets&lt;/h3&gt;
&lt;p&gt;The final step is to check that we corrected the issue, so navigate to admin view and manually start the Remove Unrelated Content Assets scheduled job.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/e0427d115f6d4d859396d84744b6627e.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Success!&lt;/p&gt;
&lt;h3&gt;Outro&lt;/h3&gt;
&lt;p&gt;In the Manage Content view I instructed you to press the &quot;Move to trash&quot; button instead of &quot;Delete&quot;, and there was a reason for it. I think I initially tried the Delete approach and it didn&#39;t work, but I&#39;m writing this blog post a month after I wrote the steps down, so I really don&#39;t remember for sure anymore and I don&#39;t have time to set this up again :D If I recall correctly it has to do with how the job is implemented: it reads &quot;events&quot; from the &quot;Change log&quot;, and if the Delete button was used in Manage Content some information is still missing and the job will not work, so it really needs to be done this way -&amp;gt; move to trash and then delete.&lt;/p&gt;</id><updated>2023-02-09T10:26:55.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Optimizely Forms uploaded attachments authentication issue with OpenID Connect</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2022/12/optimizely-forms-uploaded-attachments-access-issue/" /><id>&lt;p&gt;The customer is using &lt;a href=&quot;https://docs.developers.optimizely.com/integrations/v1.1.0-apps-and-integrations/docs/optimizely-forms&quot;&gt;Optimizely Forms&lt;/a&gt; so editors can design and configure various forms on the website. In our case one form let the end user upload images, and when the form was successfully submitted an email was sent to the editors, containing direct links to the uploaded images.&lt;/p&gt;
&lt;p&gt;Initially no one complained about the functionality, until one day a bug ticket was raised: &quot;I get an authentication error when I click the uploaded file link.&quot;&lt;/p&gt;
&lt;p&gt;Navigating to the uploaded file link from the email in an incognito browser indeed showed an OpenID Connect authentication error, and from the logs we could see the full reason:&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;// Error message split on multiple lines for easier reading

Microsoft.IdentityModel.Protocols.OpenIdConnect.OpenIdConnectProtocolInvalidNonceException: IDX21323: RequireNonce is &#39;[PII is hidden. For more details, see https://aka.ms/IdentityModel/PII.]&#39;.
OpenIdConnectProtocolValidationContext.Nonce was null, OpenIdConnectProtocol.ValidatedIdToken.Payload.Nonce was not null.
The nonce cannot be validated. If you don&#39;t need to check the nonce, set OpenIdConnectProtocolValidator.RequireNonce to &#39;false&#39;. Note if a &#39;nonce&#39; is found it will be evaluated.&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The second line says what happened: the validation context nonce is null but the payload nonce wasn&#39;t, and therefore the nonce cannot be validated.&lt;/p&gt;
&lt;p&gt;A nonce is used to prevent replay attacks and is on by default in the Microsoft implementation (see the Optimizely documentation &lt;a href=&quot;https://docs.developers.optimizely.com/content-cloud/v11.0.0-content-cloud/docs/integrate-azure-ad-using-openid-connect&quot;&gt;Integrate Azure AD using OpenID Connect&lt;/a&gt; for CMS 11 / .NET Framework 4.x), and you should not turn this feature off - it is on by default for a reason ;-) As a side note, Optimizely Forms uploaded files are access restricted by default; otherwise anyone on the Internet could access the files if they could guess the URLs.&lt;/p&gt;
&lt;h2&gt;Investigation, what goes wrong?&lt;/h2&gt;
&lt;p&gt;In the development environment everything works as expected - even when the user is not authenticated and navigates to the file URL, the authentication flow works correctly: the editor is authenticated and can then access the uploaded files.&lt;/p&gt;
&lt;p&gt;Back to the &quot;&lt;em&gt;nonce&lt;/em&gt;&quot;: when a user is not logged in and the authentication flow is started, the redirect to authentication sets a nonce cookie named &quot;OpenIdConnect.nonce.[generated-characters-here]&quot;, and this cookie is used by the validator when the user returns from authentication. We can confirm that this cookie is correctly set in the development environment, but when checking in DXP, no nonce cookie is set on the response.&lt;/p&gt;
&lt;p&gt;As we know that CloudFlare is used in front of the DXP services, which are basically Azure App Service (with some goodies), could the issue then be in the CloudFlare configuration, so that it doesn&#39;t pass the nonce cookie to the client in this case?&lt;/p&gt;
&lt;p&gt;We contacted Optimizely Support and got confirmation that the default CloudFlare cache rule causes this behavior.&lt;/p&gt;
&lt;h2&gt;Solution&lt;/h2&gt;
&lt;p&gt;Uploaded files are stored under the &quot;File upload&quot; Forms element, under folder &quot;Uploaded Files&quot;. You can check the uploaded files by editing the Optimizely Forms form (Form container) =&amp;gt; edit the &quot;File upload&quot; Forms element and then go to the &quot;Media tab&quot; and scroll to &quot;For This Block&quot; and expand the node and you can see the &quot;Uploaded Files&quot; folder.&lt;/p&gt;
&lt;p&gt;The files get their URL from the content structure; for example, &quot;my-demo.file.png&quot; would get a URL like &quot;/contentassets/ae5ba1d328f2532122679f73da8d1578/uploaded-files/my-demo-file_547166305085297447.png&quot;. Based on that, we could use an ignore pattern like &quot;/contentassets/*/uploaded-files/*&quot; to bypass caching and allow the authentication flow to work correctly.&lt;/p&gt;
&lt;p&gt;We contacted Optimizely support, and they added this new rule to CloudFlare for our customer, and now the authentication flow works correctly. Do note that you should only ask for this if you really need it, as it is not the default configuration (at least it wasn&#39;t at the time of writing).&lt;/p&gt;</id><updated>2022-12-27T13:05:28.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Custom placeholders in Optimizely Forms submission emails</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2022/6/custom-place-holders-in-optimizely-forms-submission-emails/" /><id>&lt;p&gt;In this post I&#39;ll show you how to easily create your own custom placeholders and use them in Optimizely Forms.&lt;/p&gt;
&lt;p&gt;The idea for this blog post came from the requirement of a customer project where they wanted to have the Forms submit timestamp in an email (email sent when a user submits a form).&lt;/p&gt;
&lt;p&gt;By default only the fields the editor has added to the form are available as placeholders (the summary field is a special placeholder), but when a form is submitted by the user there are also system columns like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;hosted page - the page from which the form was submitted&lt;/li&gt;
&lt;li&gt;timestamp - when the form was submitted&lt;/li&gt;
&lt;li&gt;language - the language of the form&#39;s FormContainerBlock&lt;/li&gt;
&lt;li&gt;user - the username, if the user was logged in&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So in this customer case we already have the form submission timestamp, and it would be ideal to just use that instead of creating completely custom email sending functionality just to add a timestamp to the email. The &lt;a href=&quot;https://docs.developers.optimizely.com/content-cloud/v1.2.0-forms/docs/placeholder-api&quot;&gt;Optimizely Placeholder API&lt;/a&gt; to the rescue ;-) ...yes, it is still in beta, but it has been for a long time (&lt;a href=&quot;/link/76974ad8d2a84c1b989ad0ac453ab663.aspx?releaseNoteId=AFORM-854&amp;amp;amp;amp;amp;epsremainingpath=ReleaseNote/&quot;&gt;Improve PlaceHolder API (BETA)&lt;/a&gt;).&lt;/p&gt;
&lt;h2&gt;Custom placeholder provider&lt;/h2&gt;
&lt;p&gt;Implementing a custom placeholder provider is as easy as creating a class that implements the interface &lt;strong&gt;&lt;em&gt;EPiServer.Forms.Core.Internal.IPlaceHolderProvider&lt;/em&gt;&lt;/strong&gt; from &lt;strong&gt;EPiServer.Forms.Core.dll&lt;/strong&gt; (EPiServer.Forms.Core NuGet package).&lt;/p&gt;
&lt;p&gt;The interface defines two properties and one method:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;int Order { get; set; }
&lt;ul&gt;
&lt;li&gt;Priority of the PlaceHolderProvider to process AvailablePlaceHolders&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;IEnumerable&amp;lt;PlaceHolder&amp;gt; ExtraPlaceHolders { get; }
&lt;ul&gt;
&lt;li&gt;Custom PlaceHolders will be merged into AvailablePlaceholders&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;IEnumerable&amp;lt;PlaceHolder&amp;gt; ProcessPlaceHolders(IEnumerable&amp;lt;PlaceHolder&amp;gt; availablePlaceHolders, FormIdentity formIden, HttpRequestBase requestBase = null, Submission submissionData = null, bool performHtmlEncode = true);
&lt;ul&gt;
&lt;li&gt;Called when &quot;replacing&quot; the placeholders with the actual data&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Define your custom placeholders&lt;/h3&gt;
&lt;p&gt;In the ExtraPlaceHolders property we define the custom placeholders this provider supports:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// Note the placeholder key value is what is displayed in placeholder dropdown

/// &amp;lt;summary&amp;gt;
/// Form submitted placeholder.
/// &amp;lt;/summary&amp;gt;
private const string FormSubmittedTimestamp = &quot;Form submitted&quot;;

/// &amp;lt;summary&amp;gt;
/// Form submitted by user placeholder.
/// &amp;lt;/summary&amp;gt;
private const string FormSubmittedBy = &quot;Form submitted by&quot;;

/// &amp;lt;summary&amp;gt;
/// Form submitted from page placeholder.
/// &amp;lt;/summary&amp;gt;
private const string FormSubmittedFromPage = &quot;Form submit page&quot;;

public IEnumerable&amp;lt;PlaceHolder&amp;gt; ExtraPlaceHolders =&amp;gt; new PlaceHolder[] {
    new PlaceHolder(FormSubmittedTimestamp, null),
    new PlaceHolder(FormSubmittedBy, null),
    new PlaceHolder(FormSubmittedFromPage, null)
};&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The PlaceHolder object takes two arguments:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;First the &quot;&lt;em&gt;Key&lt;/em&gt;&quot; which is the placeholder string&lt;/li&gt;
&lt;li&gt;Second is the value&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So here three placeholder strings are created and &quot;&lt;em&gt;null&lt;/em&gt;&quot; is used for the value for all of them.&lt;/p&gt;
&lt;h3&gt;Process custom placeholders&lt;/h3&gt;
&lt;p&gt;The replacement of the placeholders is done in the ProcessPlaceHolders method. The extension methods used in the code are available in a &lt;a href=&quot;https://gist.github.com/alasvant/b114f32c0f991efbbe0a628a6fe6ddee&quot;&gt;GitHub Gist&lt;/a&gt;.&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;public IEnumerable&amp;lt;PlaceHolder&amp;gt; ProcessPlaceHolders(IEnumerable&amp;lt;PlaceHolder&amp;gt; availablePlaceHolders, FormIdentity formIden, HttpRequestBase requestBase = null, Submission submissionData = null, bool performHtmlEncode = true)
{
    if (availablePlaceHolders == null)
    {
        // the DefaultPlaceHolderProvider throws an ArgumentNullException too if availablePlaceHolders is null;
        // the interface documentation doesn&#39;t say anything about which exceptions should be thrown,
        // so do it the same way
        throw new ArgumentNullException(nameof(availablePlaceHolders));
    }

    // get the submission data
    var data = submissionData?.Data;

    // if we don&#39;t have data, then do nothing
    if (data == null)
    {
        return availablePlaceHolders;
    }

    //
    // NOTE! Extension methods used in this code can be seen in gist:
    // https://gist.github.com/alasvant/b114f32c0f991efbbe0a628a6fe6ddee
    //

    foreach (var ph in availablePlaceHolders)
    {
        if (FormSubmittedTimestamp.Equals(ph.Key, StringComparison.Ordinal))
        {
            // get the submitted timestamp
            if (data.TryGetSubmitTime(out DateTime submitted))
            {
                data.TryGetLanguage(out string languageCode);

                try
                {
                    // the timestamp is DateTimeKind.Utc, format the timestamp using
                    // the forms culture or the default culture
                    ph.Value = $&quot;{submitted.ToString(GetCultureInfo(languageCode))} UTC&quot;;
                }
                catch { /* ignore formatting failures and leave the placeholder value unset */ }
            }
        }
        else if(FormSubmittedBy.Equals(ph.Key, StringComparison.Ordinal))
        {
            // get the submitted by user
            if (data.TryGetSubmitUser(out string username))
            {
                ph.Value = string.IsNullOrWhiteSpace(username) ? AnonymousUsername : username;
            }
            else
            {
                ph.Value = AnonymousUsername;
            }
        }
        else if (FormSubmittedFromPage.Equals(ph.Key, StringComparison.Ordinal))
        {
            // get the form hosted page
            if (data.TryGetHostedPage(out ContentReference hostedPage))
            {
                LoaderOptions loadingOptions;

                // try to get the language code
                if (data.TryGetLanguage(out string languageCode))
                {
                    loadingOptions = new LoaderOptions { LanguageLoaderOption.FallbackWithMaster(GetCultureInfo(languageCode)) };
                }
                else
                {
                    loadingOptions = new LoaderOptions { LanguageLoaderOption.MasterLanguage() };
                }

                // try to load the content in the forms language and fallback to masterlanguage
                if (_contentLoader.Service.TryGet(hostedPage, loadingOptions, out PageData page))
                {
                    // and then resolve the url using the language the content was actually loaded in
                    var pageUrl = _urlResolver.Service.GetUrl(page);
                    ph.Value = $&quot;{page.Name}, {pageUrl}&quot;;
                }
            }
        }
    }

    return availablePlaceHolders;
}&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Custom placeholders in action&lt;/h2&gt;
&lt;p&gt;Now the editor can use the custom placeholders in the email template.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/6a9d81aea97448bbb93369222759034e.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;And then in the email we can see the values like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/5f6a758385f24831a0c4decd2496fa12.aspx&quot; /&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Form submitted&lt;/em&gt;, is formatted using the culture of the submitted form, in these samples in Finnish, Swedish and English&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Form submitted by&lt;/em&gt;, anonymous or the logged in user&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Form submitted from page&lt;/em&gt;, contains page name in the correct culture and the url of the page&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Full source code in GitHub gists&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://gist.github.com/alasvant/b114f32c0f991efbbe0a628a6fe6ddee&quot;&gt;FormSubmissionDataExtensions.cs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://gist.github.com/alasvant/5cfe6b643ecef37cdc31d9a85abc7383&quot;&gt;FormsSystemColumnsPlaceholderProvider.cs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</id><updated>2022-06-18T11:54:03.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Using dotnet CLI to create your new Optimizely project</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2021/12/using-dotnet-cli-to-create-your-new-optimizely-project/" /><id>&lt;p&gt;As everyone knows, Optimizely CMS and Commerce entered the .NET 5 era a while back. There are not yet many blog posts or much Optimizely help documentation on how you actually create the solution, with projects and the other needed files, in a real-world project. Let&#39;s face it, we don&#39;t create new solutions daily, and even seasoned developers might forget &quot;what they were supposed to do when starting a new customer project&quot;. So here is my take to add to the existing guides and documentation.&lt;/p&gt;
&lt;h3&gt;What do we want to achieve?&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;we want to have a solution&lt;/li&gt;
&lt;li&gt;which has two projects
&lt;ul&gt;
&lt;li&gt;one for CMS&lt;/li&gt;
&lt;li&gt;one class library (let&#39;s pretend we will place our content models here)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;it has a nuget.config which includes the Optimizely NuGet feed&lt;/li&gt;
&lt;li&gt;(it is a shame that the .editorconfig template is not available with SDK 5.x and only arrives in SDK 6.x; otherwise it would have been added here too)&lt;/li&gt;
&lt;li&gt;added to version control&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And the solution would look like this in Visual Studio (ignore the readme.txt in the image)&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/ad81c1d92fa74385941bd385574e0535.aspx&quot; alt=&quot;Image&amp;#32;vs-solution.png&quot; /&gt;&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;You need to have Visual Studio v16.8 or newer installed (it ships with the .NET 5 SDK 5.0.100)&lt;/li&gt;
&lt;li&gt;Optimizely templates and Episerver CLI tool installed
&lt;ul&gt;
&lt;li&gt;see install instructions from the &quot;&lt;a href=&quot;/link/b8dfde46279b431cab83e33c62168130.aspx&quot;&gt;Setting up development environment&lt;/a&gt;&quot; documentation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;(optional: git; if you don&#39;t have it ;-) then skip the git commands)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;dotnet CLI commands to create the solution&lt;/h2&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// open command prompt and create a new folder which will hold the solution, projects and other files
mkdir MyAwesomeSolution
cd MyAwesomeSolution

// initialize git
git init

// OPTIONAL: this step is needed only if you also have .NET 6 SDK(s) installed,
// as by default the latest installed SDK is used; with global.json we can &quot;lock&quot; to a certain version
dotnet new globaljson --sdk-version 5.0.100

// open the created global.json in a text editor and add &quot;allowPrerelease&quot; and &quot;rollForward&quot; properties (and save it)
// we don&#39;t allow prereleases and we allow latest minor SDK version upgrades, so we stick to the 5.x versions
{
  &quot;sdk&quot;: {
    &quot;allowPrerelease&quot;: false,
    &quot;version&quot;: &quot;5.0.100&quot;,
    &quot;rollForward&quot;: &quot;latestMinor&quot;
  }
}

// next create the solution file
dotnet new sln -n &quot;My Awesome Solution&quot;

// create a folder to hold the sources, it&#39;s common to have the source files/projects under the &#39;src&#39; folder
mkdir src

// next add the empty CMS project to the src folder
// -n is used for the project name
// -o is used to specify where the output from template should be placed in
dotnet new epicmsempty -n MyCompany.Web -o src\MyCompany.Web

// next add the class library
dotnet new classlib -n MyCompany.ContentTypes -o src\MyCompany.ContentTypes

// next the projects need to be added to the solution
// NOTE! The --in-root is used so that the command doesn&#39;t create a &quot;src&quot; solution folder
// but places the projects directly under the solution, remove the --in-root option to create a solution folder
dotnet sln &quot;My Awesome Solution.sln&quot; add --in-root src/MyCompany.Web/MyCompany.Web.csproj
dotnet sln &quot;My Awesome Solution.sln&quot; add --in-root src/MyCompany.ContentTypes/MyCompany.ContentTypes.csproj

// next we need nuget.config
dotnet new nugetconfig

// and then we need to add the Optimizely NuGet feed to it
dotnet nuget add source https://nuget.optimizely.com/feed/packages.svc -n &quot;Optimizely NuGet Feed&quot;

// and then add the gitignore file
dotnet new gitignore

// and then stage and commit our files
git add -A
git commit -m &quot;Initial commit.&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Open the solution in Visual Studio&lt;/h3&gt;
&lt;p&gt;Now it is time to open the solution in Visual Studio, and you should be able to build it and all should be fine.&lt;/p&gt;
&lt;p&gt;But remember the step where the nuget.config was added? I&#39;m used to having the nuget.config visible under &quot;Solution Items&quot;, but we never added that file to the solution using &quot;dotnet sln add...&quot;. There is a reason for that: currently you can only add project files with the &quot;dotnet sln add&quot; command, so if you want the file under &quot;Solution Items&quot; you need to add it in Visual Studio.&lt;/p&gt;
&lt;h3&gt;Running the created solution&lt;/h3&gt;
&lt;p&gt;So we created an empty Optimizely CMS site, but we have no database for it and no edit/admin user.&lt;/p&gt;
&lt;p&gt;As the default empty site is configured to use ASP.NET Identity, we can use the &lt;em&gt;dotnet-episerver&lt;/em&gt; tool to create the database and add an admin user. Minimal documentation for the tool can be found in the &quot;&lt;a href=&quot;/link/893f1ecd15be4522a18bc054c8a59dbd.aspx&quot;&gt;Creating a starter project&lt;/a&gt;&quot; documentation or from the tool itself by executing &quot;&lt;em&gt;dotnet-episerver --help&lt;/em&gt;&quot; for general help; the same pattern gives details on a specific command, like &quot;&lt;em&gt;dotnet-episerver create-cms-database --help&lt;/em&gt;&quot;.&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// create the database for CMS
// -S is the server and instance where the database will be created, in my case the local machine and a SQLEXPRESS named instance
// -E means integrated security aka Windows authentication is used (my credentials) when connecting to the database server
// if the database server uses only SQL authentication, or your identity doesn&#39;t have access, use -U and -P instead of -E to define the SQL credentials
// -dn is the name of the database to create
// -du is the user to create; the website uses this user to access the DB
// -dp is that user&#39;s password
dotnet-episerver create-cms-database src\MyCompany.Web\MyCompany.Web.csproj -S .\sqlexpress -E -dn &quot;MyCompanyWeb&quot; -du &quot;mycompanywebuser&quot; -dp &quot;W3RySeCre7!2021&quot;

// next add the &quot;admin&quot; user
// -u is the username
// -p is the password
// -c is the connection string name (see MyCompany.Web projects appsettings.json)
dotnet-episerver add-admin-user src\MyCompany.Web\MyCompany.Web.csproj -u mycompanyuser -p MySecret!2021 -e mycompanyuser@local.local -c EPiServerDB&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After the above commands have been executed, note that adding the admin user generates a log file (log.txt), which you naturally don&#39;t want in version control, so delete the file (it is placed in the solution root folder, where the command was executed). Don&#39;t forget to commit your changes.&lt;/p&gt;
&lt;p&gt;Now you can run the website and log in to the edit/admin views with the created user, so hit F5. The browser naturally requests the root of the website, and as there is no content yet, you need to manually change the address to the default Optimizely CMS edit URL &quot;/episerver/cms/&quot; and hit enter.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/ed129b04d91a402983432a12f4a0b6d0.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Happy happy, joy joy ;-)&lt;/p&gt;</id><updated>2021-12-12T12:03:53.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Audit logging add-on for netcore-preview</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2021/9/audit-logging-add-on-for-netcore-preview/" /><id>&lt;p&gt;The .NET 5 public preview has now been out for 2+ months, and if you have missed it you definitely need to read the post by Martin Ottosen &lt;a href=&quot;/link/1bc8e0286e4348c8b0d527ca35b5cfe9.aspx&quot;&gt;.Net 5 public preview&lt;/a&gt;, read the &lt;a href=&quot;/link/3ba6f4e13c99477fa35bcab293f3c66c.aspx&quot;&gt;.Net 5 preview documentation&lt;/a&gt; and have a look at the GitHub &lt;a href=&quot;https://github.com/episerver/netcore-preview&quot;&gt;netcore-preview repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Roughly a year ago I created the &lt;a href=&quot;https://nuget.optimizely.com/package/?id=Swapcode.Episerver.AuditLog&quot;&gt;Swapcode.Episerver.AuditLog&lt;/a&gt; package to log access right changes to the built-in activity log (Change log); see my &lt;a href=&quot;https://swapcode.wordpress.com/2020/09/05/swapcode-episerver-auditlog-is-now-available-in-episerver-nuget-feed/&quot;&gt;old post about it&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I had some free time, so I decided: why not prepare the package for the .NET 5 implementation and at the same time rename it to reflect the new Optimizely name. The code can be found in my GitHub repository &lt;a href=&quot;https://github.com/alasvant/Swapcode.Optimizely.AuditLog&quot;&gt;Swapcode.Optimizely.AuditLog&lt;/a&gt; and a pre-built NuGet package from the &lt;a href=&quot;https://github.com/alasvant/Swapcode.Optimizely.AuditLog/releases&quot;&gt;releases&lt;/a&gt; (not in any feed as it is just a preview version; &lt;a href=&quot;https://github.com/alasvant/Swapcode.Optimizely.AuditLog/releases/download/v1.0.0-preview.1/Swapcode.Optimizely.AuditLog.1.0.0-preview.1.nupkg&quot;&gt;first version direct download link&lt;/a&gt;).&lt;/p&gt;
&lt;h3&gt;Installing the package&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;download the NuGet package&lt;/li&gt;
&lt;li&gt;create a local (disk or network share) NuGet source, and add the NuGet package there&lt;/li&gt;
&lt;li&gt;configure the new source in Visual Studio / NuGet.config of your project&lt;/li&gt;
&lt;li&gt;check the &#39;Include prerelease&#39; checkbox in Visual Studio so that you will see the package&lt;/li&gt;
&lt;li&gt;install it to your project&lt;/li&gt;
&lt;li&gt;add configuration for the embedded localizations in your startup class, in the ConfigureServices method after the call to services.AddCms();&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;services.AddEmbeddedLocalization&amp;lt;AuditLogInitializationModule&amp;gt;();&lt;/code&gt;&lt;/pre&gt;
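&lt;p&gt;Going back to the local NuGet source step in the list above: a folder on disk can act as a NuGet source. Here is a hedged example using the dotnet CLI; the folder path and source name are just assumptions, use your own:&lt;/p&gt;
&lt;pre&gt;dotnet nuget add source C:\LocalNuGet -n &quot;Local packages&quot;&lt;/pre&gt;
&lt;p&gt;After this, drop the downloaded .nupkg file into that folder and the package should show up in Visual Studio when the source is selected (and &#39;Include prerelease&#39; is checked).&lt;/p&gt;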
&lt;h3&gt;Usage&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/link/d45855ae69b04d65a9e500b720ec3831.aspx&quot; /&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;login to Optimizely CMS&lt;/li&gt;
&lt;li&gt;navigate to Admin view -&amp;gt; Access Rights -&amp;gt; Set Access Rights
&lt;ul&gt;
&lt;li&gt;change a content items access rights&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;go to Admin view -&amp;gt; Tools -&amp;gt; Change Log
&lt;ul&gt;
&lt;li&gt;in &#39;Category&#39; select &#39;Content security&#39;&lt;/li&gt;
&lt;li&gt;in &#39;Action&#39; select what kind of entries you want to see&lt;/li&gt;
&lt;li&gt;click &#39;read&#39;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following screenshot shows what the results look like. Note there is a small issue with the data column: the content is not completely visible because of the preview version&#39;s CSS styles (but clever users can inspect the column with the browser&#39;s developer tools to see the full message). Hopefully Optimizely will do something about that data column, so I don&#39;t need to format the message with br-tags :D&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/a85b4430f81d40f4931c6ce707712db7.aspx&quot; /&gt;&lt;/p&gt;
&lt;h3&gt;Summary&lt;/h3&gt;
&lt;p&gt;So, if you have been using this add-on, you can be sure it will also be available for the .NET 5 version of Optimizely CMS.&lt;/p&gt;</id><updated>2021-09-12T13:15:51.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>How to download blobs from Optimizely (Episerver) DXP environment</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2021/6/how-to-download-blobs-from-optimizely-episerver-dxp-environment/" /><id>&lt;p&gt;Ever wanted to download the blobs from an Optimizely DXP environment but didn&#39;t know how? In this post I will clarify the process of downloading blobs from a DXP environment and the tools you need to accomplish it.&lt;/p&gt;
&lt;p&gt;As we develop our customer solutions, there quite often comes a situation where, instead of creating some dummy content, we would like to grab for example the latest content and blobs from the DXP production environment and work with that. As the DXP service is built on top of Azure services, you might think you could simply log in to the Azure Portal, navigate to the storage resources and get the blobs from there, but as this is a managed service we don&#39;t get direct access to the underlying Azure resources via the Azure Portal.&lt;/p&gt;
&lt;p&gt;Optimizely DXP has a self-service portal which allows us to do deployments, view application logs, copy content between environments and download content from a DXP environment, but the download is limited to the database BACPAC file, which doesn&#39;t include the blobs we naturally also require. The Optimizely DXP documentation has the information on how to get the blobs, but that information is spread over multiple pages and no single-page how-to exists (at least to my knowledge ;-) so that is the reason for this blog post). Just in case, here is how to &lt;a href=&quot;/link/e857f1af908f40c0840a704ff8b640d8.aspx&quot;&gt;export the database to a BACPAC file in DXP&lt;/a&gt; and how to &lt;a href=&quot;/link/858d3e9c8a7a41c7afab709a3e0558a4.aspx&quot;&gt;get access to the self-service portal&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;EDIT 2025-11-30&lt;/strong&gt;&lt;/em&gt;, I&#39;ve written a new &lt;a title=&quot;Download Optimizely DXP blobs like a pro with PowerShell script&quot; href=&quot;/link/dc53a6a1c6d244e5aa6f4cff43c87371.aspx&quot;&gt;blog post which contains a PowerShell script&lt;/a&gt; to do all this with just a single command, so if you ended up here, you might want to have a look at that.&lt;/p&gt;
&lt;h1&gt;Download blobs from Optimizely DXP environment&lt;/h1&gt;
&lt;p&gt;Before you continue to the actual process, ensure you have access to the DXP self-service portal, or that a teammate can provide you with the environment&#39;s API information (project ID, client secret and key).&lt;/p&gt;
&lt;p&gt;The rest of the post assumes you have access to the DXP self-service portal and that we are setting things up from scratch.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;EpiCloud PowerShell module installed
&lt;ul&gt;
&lt;li&gt;Optimizely &lt;a href=&quot;/link/eb85a23d0c1843aaa36f121668e1059e.aspx#install&quot;&gt;EpiCloud module install and other documentation&lt;/a&gt; related to it&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;AzCopy installed
&lt;ul&gt;
&lt;li&gt;Microsoft &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10#download-azcopy&quot;&gt;AzCopy download&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;access to DXP self-service to create API credentials&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Install EpiCloud PowerShell module&lt;/h3&gt;
&lt;p&gt;If this is the first time you are installing PowerShell modules from &lt;a href=&quot;https://www.powershellgallery.com/&quot;&gt;PSGallery&lt;/a&gt;, you should add it as a trusted source to avoid warnings when installing modules from there.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;open Windows PowerShell
&lt;ul&gt;
&lt;li&gt;check whether PSGallery is already added by running the command
&lt;ul&gt;
&lt;li&gt;
&lt;pre&gt;Get-PSRepository&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;if you get output listing PSGallery with InstallationPolicy &#39;Trusted&#39; you can skip the next command to add it as a trusted source&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;add PSGallery as a trusted source with the following command
&lt;ul&gt;
&lt;li&gt;
&lt;pre&gt;Set-PSRepository -Name &#39;PSGallery&#39; -InstallationPolicy Trusted&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;install EpiCloud module with the following command
&lt;ul&gt;
&lt;li&gt;
&lt;pre&gt;Install-Module -Name EpiCloud -Scope CurrentUser&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NOTE!&lt;/strong&gt; The EpiCloud module is not signed, so when you use it you need to run the PowerShell session with &quot;&lt;strong&gt;&lt;em&gt;powershell.exe -ExecutionPolicy Bypass&lt;/em&gt;&lt;/strong&gt;&quot; (and note the correct capitalization)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Download and install AzCopy&lt;/h3&gt;
&lt;p&gt;Download the latest version from Microsoft AzCopy page: &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10#download-azcopy&quot;&gt;https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10#download-azcopy&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;AzCopy is delivered as a zip file and you can extract it anywhere you like. I would suggest putting it where you keep other tools that don&#39;t have an installer, and adding that location to your system or user PATH (on Windows) so that you can execute it from anywhere, without always using the full path to the executable.&lt;/p&gt;
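&lt;p&gt;If you prefer doing the PATH step from PowerShell, here is a minimal sketch; the &lt;em&gt;C:\Tools\azcopy&lt;/em&gt; folder is only an assumption, use the path you actually extracted AzCopy to:&lt;/p&gt;
&lt;pre&gt;# assumption: AzCopy was extracted to C:\Tools\azcopy
$azCopyDir = &#39;C:\Tools\azcopy&#39;
# read the current user PATH and append the folder only if it is not there yet
$userPath = [Environment]::GetEnvironmentVariable(&#39;Path&#39;, &#39;User&#39;)
if ($userPath -notlike &quot;*$azCopyDir*&quot;) {
    [Environment]::SetEnvironmentVariable(&#39;Path&#39;, &quot;$userPath;$azCopyDir&quot;, &#39;User&#39;)
}
# new PowerShell sessions can now run &#39;azcopy&#39; without the full path&lt;/pre&gt;
&lt;p&gt;Note that the change applies to new sessions; the session where you ran the commands keeps its old PATH.&lt;/p&gt;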
&lt;h3&gt;Create DXP API credential for the environment&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Login to the Optimizely DXP self-service portal&lt;/li&gt;
&lt;li&gt;Select the customer (organization) for which you want to create the API credential&lt;/li&gt;
&lt;li&gt;Switch to the API tab&lt;/li&gt;
&lt;li&gt;Under the &#39;Deployment API credentials&#39; click &#39;Add API Credentials&#39; button
&lt;ul&gt;
&lt;li&gt;Give a name for the API Credential&lt;/li&gt;
&lt;li&gt;Select the environments the API credential can be used for&lt;/li&gt;
&lt;li&gt;&lt;img src=&quot;/link/94a1d204110540a4abea71da35c851ca.aspx&quot; /&gt;&lt;/li&gt;
&lt;li&gt;Click &#39;Save&#39;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;IMPORTANT!&lt;/strong&gt; After you have clicked Save, copy the &#39;API Secret&#39; value, as this is the only time it is shown! If you forget to copy and store it securely in this step, just create a new API credential&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Download blobs from DXP environment&lt;/h2&gt;
&lt;p&gt;The following commands assume we have created an API credential for the production environment and will download blobs from production, but the same applies to any other DXP environment, as long as you use the correct environment&#39;s API credentials and the correct &#39;Environment name&#39; in cmdlets that require you to define it.&lt;/p&gt;
&lt;p&gt;EpiCloud module has these three &lt;strong&gt;environment names&lt;/strong&gt;: Integration, Preproduction and Production&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Open Windows PowerShell&lt;/li&gt;
&lt;li&gt;Start a new PowerShell session that bypasses execution policy (needed because the EpiCloud module is not signed) with the following command (&lt;strong&gt;NOTE!&lt;/strong&gt; you have to use the correct capitalization of ExecutionPolicy and the value Bypass, otherwise the bypass policy is not applied)
&lt;ul&gt;
&lt;li&gt;
&lt;pre&gt;powershell.exe -ExecutionPolicy Bypass&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Set the Optimizely API credential for the session, so that you don&#39;t need to pass the values as arguments to every EpiCloud cmdlet, with the following command
&lt;ul&gt;
&lt;li&gt;
&lt;pre&gt;Connect-EpiCloud -ClientKey Your-ClientKey -ClientSecret Your-ClientSecret -ProjectId Your-ProjectId&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;ClientKey is the &#39;API key&#39; value from API Credential&lt;/li&gt;
&lt;li&gt;ClientSecret is the &#39;API secret&#39; value from API Credential&lt;/li&gt;
&lt;li&gt;ProjectId is the &#39;Project Id&#39; value from API Credential&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;now the API &quot;authentication&quot; is set up for the session and we can use the EpiCloud module&#39;s cmdlets&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Optional&lt;/em&gt;, get the containers for the production environment with the following command (not needed if you already know the container name):
&lt;ul&gt;
&lt;li&gt;
&lt;pre&gt;Get-EpiStorageContainer -Environment &quot;Production&quot;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;the command will return you the columns &quot;projectId&quot;, &quot;environment&quot; and &quot;storageContainers&quot;
&lt;ul&gt;
&lt;li&gt;for example: azure-application-logs, azure-web-logs and name-of-your-blob-container
&lt;ul&gt;
&lt;li&gt;the blob container name is naturally the container name you have defined for that environment in your web.config, &lt;a href=&quot;/link/f8095ba67d6243d5960675015884e355.aspx#blobproviders&quot;&gt;episerver.framework blob&lt;/a&gt; section&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;To be able to download the blobs we need a SAS token (Shared Access Signature) for the blob storage container; execute the following command
&lt;ul&gt;
&lt;li&gt;
&lt;pre&gt;Get-EpiStorageContainerSasLink -Environment &quot;Production&quot; -StorageContainer &quot;your-site-assets-container-name-here&quot; -RetentionHours 1 | Format-Table sasLink -AutoSize -Wrap&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;-Environment, one of the EpiCloud environment strings listed in the start&lt;/li&gt;
&lt;li&gt;-StorageContainer, the container name you have defined for the blobs&lt;/li&gt;
&lt;li&gt;-RetentionHours, integer how many hours the SAS token is valid for (use short time)&lt;/li&gt;
&lt;li&gt;Without piping the cmdlet to Format-Table, command would output 5 &quot;columns&quot;: projectId, environment, containerName, sasLink and expiresOn&lt;/li&gt;
&lt;li&gt;copy the sasLink value as you will need it as input to the AzCopy command; just make sure when you paste the URL for AzCopy that it doesn&#39;t contain any extra whitespace or line feeds&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Next step is to download the blobs
&lt;ul&gt;
&lt;li&gt;AzCopy needs a destination path, so decide where you want to download the blobs
&lt;ul&gt;
&lt;li&gt;do note that you don&#39;t need to create the &quot;container name&quot; folder in your local path as AzCopy will &quot;replicate&quot; the structure from the storage&lt;/li&gt;
&lt;li&gt;for example, you define local path c:\blobs-from-prod\2021-06-13 (this needs to exist)
&lt;ul&gt;
&lt;li&gt;and the container name is &#39;siteassets&#39;, then AzCopy will create that &#39;siteassets&#39; under the c:\blobs-from-prod\2021-06-13 path&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Execute command to copy the blobs
&lt;ul&gt;
&lt;li&gt;
&lt;pre&gt;azcopy copy &quot;sasLink value here inside the double quotes&quot; &quot;your destination path inside double quotes&quot; --recursive&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;note there are two hyphens before the recursive argument, this means that the blobs are copied recursively under the container (read: all blobs are copied to destination)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;How long does it take? It depends on how many blobs there are and how fast your internet connection is
&lt;ul&gt;
&lt;li&gt;for example, with a 1 Gbps connection it took less than a minute to copy 3.6 GB (3331 files) =D&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;img src=&quot;/link/5979eed365554a3ab6c2b92000e33164.aspx&quot; /&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
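&lt;p&gt;The steps above can be collected into one small PowerShell sketch. This is only an illustration under the assumptions already stated (the EpiCloud module and AzCopy installed, a valid API credential); the credential values, container name and destination path are placeholders you need to replace with your own:&lt;/p&gt;
&lt;pre&gt;# sketch only: replace the placeholder values with your own API credential and paths
Connect-EpiCloud -ClientKey &#39;Your-ClientKey&#39; -ClientSecret &#39;Your-ClientSecret&#39; -ProjectId &#39;Your-ProjectId&#39;

# optional: list the storage containers so you can pick the right one
Get-EpiStorageContainer -Environment &#39;Production&#39;

# get a short-lived SAS link for the chosen container
$sas = Get-EpiStorageContainerSasLink -Environment &#39;Production&#39; -StorageContainer &#39;your-container-name&#39; -RetentionHours 1

# download the blobs with AzCopy (the destination folder must already exist)
azcopy copy $sas.sasLink &#39;C:\blobs-from-prod\2021-06-13&#39; --recursive&lt;/pre&gt;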
&lt;h3&gt;Summary&lt;/h3&gt;
&lt;p&gt;I&#39;ve covered the tools and steps you need to download blobs from an Optimizely DXP environment. Easy &amp;amp; fast. Hopefully this short how-to helps you in your projects.&lt;/p&gt;</id><updated>2021-06-13T14:54:01.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Remove Unrelated Content Resources scheduled job failing</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2021/3/remove-unrelated-content-resources-scheduled-job-failing/" /><id>&lt;p&gt;This is a quick post about a situation where the Episerver scheduled job &quot;Remove Unrelated Content Resources&quot; started to fail because of a database constraint. I would say this is a really rare case (I&#39;ve seen it happen in only two projects), but I&#39;m sharing the information just in case you face the same thing in your project, so you can solve it quicker ;-)&lt;/p&gt;
&lt;p&gt;So when this error occurs, the scheduled job records the error message &quot;&lt;em&gt;The DELETE statement conflicted with the REFERENCE constraint &quot;FK_tblContentProperty_tblContent2&quot;. The conflict occurred in database &quot;YOUR-DB-NAME-HERE&quot;, table &quot;dbo.tblContentProperty&quot;, column &#39;ContentLink&#39;.&lt;/em&gt;&quot;. Not that helpful a message :-/ something is wrong in the database, BUT you would be interested to know what content caused the issue, so you might head to your application&#39;s error logs and hope to see the offending content id there; sadly it is not logged (&lt;strong&gt;WINK&lt;/strong&gt;, maybe Episerver could change the scheduled job and/or the ContentDB to log the offending content id?).&lt;/p&gt;
&lt;h2&gt;Remove Unrelated Content Resources scheduled job&lt;/h2&gt;
&lt;p&gt;So the job is defined in &lt;em&gt;EPiServer.dll&lt;/em&gt; and the class is &lt;em&gt;CleanUnusedAssetsFoldersJob&lt;/em&gt; in the &lt;em&gt;EPiServer.Util&lt;/em&gt; namespace. Having a look at the code in &lt;a href=&quot;https://github.com/icsharpcode/ILSpy/releases&quot;&gt;ILSpy&lt;/a&gt; I could track the code path and see whether there are any places where I would get some log messages to help find the offending content id. I was lucky: the code eventually ends up in the &#39;&lt;em&gt;ContentSaveDB.Delete(ContentReference contentLink, bool forceDelete)&lt;/em&gt;&#39; method (in EPiServer.dll, namespace EPiServer.DataAccess.Internal). In that method a debug log message is written before the content is deleted using the stored procedure &#39;editDeletePage&#39;, which takes the content id and a boolean force delete parameter.&lt;/p&gt;
&lt;h2&gt;Log messages from ContentSaveDB.Delete&lt;/h2&gt;
&lt;p&gt;So next we need to modify our logging configuration to log debug messages from the ContentSaveDB class. Here is a sample using the default Episerver log4net logging framework; if you have replaced log4net with something else, you will anyway get the idea of what to log.&lt;/p&gt;
&lt;p&gt;In your Episerver log configuration add a new appender, or ensure that the appender you use allows debug messages to be written. Here is a demo appender I used to log the messages to a separate file (notice there are no thresholds or log level ranges configured):&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;&amp;lt;appender name=&quot;debugLogAppender&quot; type=&quot;log4net.Appender.RollingFileAppender&quot; &amp;gt;
	&amp;lt;file value=&quot;App_Data\logs\DebugMessages.log&quot; /&amp;gt;
	&amp;lt;encoding value=&quot;utf-8&quot; /&amp;gt;
	&amp;lt;staticLogFileName value=&quot;true&quot;/&amp;gt;
	&amp;lt;datePattern value=&quot;.yyyyMMdd.&#39;log&#39;&quot; /&amp;gt;
	&amp;lt;rollingStyle value=&quot;Date&quot; /&amp;gt;
	&amp;lt;appendToFile value=&quot;true&quot; /&amp;gt;
	&amp;lt;layout type=&quot;log4net.Layout.PatternLayout&quot;&amp;gt;
		&amp;lt;conversionPattern value=&quot;%date [%thread] %level %logger: %message%n&quot; /&amp;gt;
	&amp;lt;/layout&amp;gt;
&amp;lt;/appender&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And then we need a logger definition that uses the above appender to actually log the messages (using additivity=&quot;false&quot; here so that messages matching this logger are only logged here and not passed to other loggers):&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;&amp;lt;logger name=&quot;EPiServer.DataAccess.Internal.ContentSaveDB&quot; additivity=&quot;false&quot;&amp;gt;
	&amp;lt;level value=&quot;All&quot; /&amp;gt;
	&amp;lt;appender-ref ref=&quot;debugLogAppender&quot; /&amp;gt;
&amp;lt;/logger&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So the next step is to run the failing job again with the above logging configured; now the log gets debug messages that tell us the offending content id, like this: &quot;DEBUG EPiServer.DataAccess.Internal.ContentSaveDB: Deleting content 115&quot;. Basically, the last &quot;deleting content&quot; debug message before the database error is your offending content id, and you can then use that content id to see what the offending content is (hint: take that id and browse to it in Episerver edit mode; when in edit mode, for example on the site start page, modify your browser URL and replace the content id in &quot;epi.cms.contentdata:///&lt;strong&gt;5&lt;/strong&gt;&quot; with the offending content id value, like this: &quot;epi.cms.contentdata:///115&quot;).&lt;/p&gt;
&lt;p&gt;So this way you can find the offending content and try to troubleshoot and understand what has caused the situation and should you actually care or just get rid of the content the &quot;force&quot; way.&lt;/p&gt;
&lt;h2&gt;Deleting the offending content&lt;/h2&gt;
&lt;p&gt;The first time we faced this issue we contacted Episerver support to find out whether this has happened in other projects and whether there is any &quot;supported&quot; way to fix it. This is not common, and from Episerver support we got the advice to edit the &#39;editDeletePage&#39; stored procedure in the database to temporarily always use force delete (see the @ForceDelete argument in the stored procedure and set it to always be 1 inside the procedure), then run the scheduled job and revert the change to the stored procedure. Naturally there is the disclaimer that you need to test this and take backups, so that if it doesn&#39;t work for some reason or causes new issues you can revert the database to its otherwise working state.&lt;/p&gt;
&lt;p&gt;But now that you know the offending content id, you can skip the stored procedure modification and instead execute the procedure with the offending content id and @ForceDelete value 1, from SQL Server Management Studio or from the command line like this:&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;USE [YOUR-DATABASE-NAME-HERE]
GO
DECLARE	@return_value Int
EXEC	@return_value = [dbo].[editDeletePage]
		@PageID = THE-CONTENT-ID-HERE,
		@ForceDelete = 1
SELECT	@return_value as &#39;Return Value&#39;
GO&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And after executing that you would run the scheduled job again, which could still fail because there may be more than one offending content id; in that case just execute the stored procedure again with the new offending content id.&lt;/p&gt;
&lt;p&gt;I hope you never face this same issue, but if you do, you can use the above to fix the situation.&lt;/p&gt;</id><updated>2021-03-14T11:07:41.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Migrating Episerver content with custom import processing</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2021/1/migrating-episerver-content-with-custom-import-processing/" /><id>&lt;p&gt;In this blog post I tell you how we used custom import processing to solve the customer&#39;s need to migrate a thousand or so pages (plus block and media data instances) from one Episerver site to another Episerver site with different content types.&lt;/p&gt;
&lt;h3&gt;Background&lt;/h3&gt;
&lt;p&gt;First, a quick background to understand a bit of the why and what. Our team had taken over maintenance of a website, let&#39;s call it &#39;&lt;strong&gt;Mango&lt;/strong&gt;&#39;, from another Episerver partner. The site building had started around 2014, so there was some history in the implementation, and one could clearly see that different teams had worked on it, as there seemed to be no one common way of doing things. Our team&#39;s work started with getting the hang of the website implementation while continuing to implement new features and doing some polishing and fixes. Months passed, and we got a request from the project owner: we would need a new website which is a copy-paste of the current website codebase, and we needed it yesterday ;-)&lt;/p&gt;
&lt;p&gt;So naturally we had some discussion with the customer about why they needed a copy-paste of the website, and we explained that we shouldn&#39;t just copy-paste blindly, as we would then also copy some of the not-so-good ideas. We also pointed out that we would end up with technical debt in two sites instead of one. The customer couldn&#39;t tell us everything about why they needed it (and was sad about that), but this copy-paste website was supposed to be just temporary, live for 3 months or 6 months at maximum. We would only change fonts and some colors, and we would get those soon from a design agency.&lt;/p&gt;
&lt;p&gt;Well, you can guess it already - the design changes were not just fonts and colors but quite big layout and grid changes compared to the original layout - and there was time pressure on the implementation of the &quot;copy &amp;amp; paste&quot; website. Anyway, the team did it - we copy-pasted the codebase, but with modifications: renaming the used content types (namespace changes naturally), using feature folders, refactoring configuration and settings, new layouts and styles, and other improvements.&lt;/p&gt;
&lt;p&gt;After this &quot;copy &amp;amp; paste&quot; website was ready and testing with real content had started, we were told the real reason: there had been a merger, and now it was official, all legal approvals were in place and it had been announced to the public. Our company &#39;&lt;strong&gt;Mango&lt;/strong&gt;&#39; merged with another company, let&#39;s call it &#39;&lt;strong&gt;Orange&lt;/strong&gt;&#39;, and together they were called &#39;&lt;strong&gt;Mango Orange group&lt;/strong&gt;&#39;. So we had created a website for this new company, mainly to contain their new joint corporate pages and financial information. All good.&lt;/p&gt;
&lt;p&gt;All of the initial content for this new &#39;Mango Orange group&#39; website was created manually, as the content didn&#39;t exist on either the &#39;Mango&#39; or &#39;Orange&#39; websites. But then came the &#39;but&#39; - the new company decided that this copy &amp;amp; paste website would be the new main website, and we would need to develop new features there AND migrate existing content from the &#39;Orange&#39; website...&lt;/p&gt;
&lt;h3&gt;Content migration options&lt;/h3&gt;
&lt;p&gt;If the source website&#39;s content types are defined in a separate assembly, you could simply reference that assembly in the target website solution and use the Episerver built-in export and import functionality - with a &lt;strong&gt;BUT&lt;/strong&gt;: yes, you would have the content types and the import would just work, but you would then need to convert the pages, blocks and media types to the &#39;Mango&#39; website&#39;s content types, which are the ones actually used. Pages can be converted out of the box using the admin tool, but that is manual work AND IT DOESN&#39;T convert blocks (or used media data)! Fellow EMVP Tomas Hensrud Gulla has blogged about his custom tool which can convert blocks: &lt;a href=&quot;https://blog.novacare.no/convert-episerver-blocks/&quot;&gt;Convert Episerver Blocks&lt;/a&gt;. So even if we did all the manual work and converted page types and block types, the media data content types would still be incorrect (and in reality we had the same content type names with different content guids, so we were already screwed on that part), and it would also take time to do.&lt;/p&gt;
&lt;p&gt;My next idea was: what if we used the Episerver import, hooked into the events it exposes and modified the source content in those events to match the destination website content types? A quick proof of concept using two Alloy websites with modified and differing content types gave good results: we can change the content type in import, we can change a property&#39;s data type, we can filter out languages we don&#39;t have enabled, etc.&lt;/p&gt;
&lt;p&gt;So we decided to go with building a custom import processing &quot;framework&quot; to handle the content in the import and convert it to the content types used on our &quot;Mango Orange group&quot; website.&lt;/p&gt;
&lt;h3&gt;How does the Episerver import flow work?&lt;/h3&gt;
&lt;p&gt;I&#39;ll keep this part intentionally at a high level (I&#39;ll try to find time to blog about it in more detail in the future), but this is roughly how it goes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Episerver import reads the your-export-package.episerverdata package
&lt;ul&gt;
&lt;li&gt;this really is just a zip package (you can open it for example with 7-zip to look at the structure)
&lt;ul&gt;
&lt;li&gt;inside there is a folder &#39;epi.fx.blob:&#39; whose child folder contains the exported blobs&lt;/li&gt;
&lt;li&gt;in the root there is the epix.xml file
&lt;ul&gt;
&lt;li&gt;an xml file containing the exported content (basically pages and blocks with content language versions)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;and the import framework is triggered with the data from your package&lt;/li&gt;
&lt;li&gt;first the blobs are imported
&lt;ul&gt;
&lt;li&gt;this is the binary part of the blobs, so files are copied to your target blobs container&lt;/li&gt;
&lt;li&gt;in the event handling you can inspect the blob&#39;s filename, for example for an allowed extension, and cancel the import for the blob if the file extension is not allowed in the target website&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;next step is to import the content
&lt;ul&gt;
&lt;li&gt;here you can inspect the content to be imported; you get all the data about it (&lt;strong&gt;Note!&lt;/strong&gt; The event handler gets a single item when it is called and doesn&#39;t know about the other content)
&lt;ul&gt;
&lt;li&gt;add or remove properties&lt;/li&gt;
&lt;li&gt;rename properties&lt;/li&gt;
&lt;li&gt;change a property&#39;s data type (like from Episerver Url to ContentReference, naturally only when the url was pointing to Episerver content)&lt;/li&gt;
&lt;li&gt;you have full control over the content about to be imported, so you can do quite many things to the data at this stage&lt;/li&gt;
&lt;li&gt;cancel the import of the content&lt;/li&gt;
&lt;li&gt;change whether &#39;For All Sites&#39; should be used instead of &#39;For This Site&#39; (for example if images are not stored with the content &#39;For This Page&#39; or &#39;For This Block&#39; but in a shared location)&lt;/li&gt;
&lt;li&gt;check that the master language is enabled in the target website&lt;/li&gt;
&lt;li&gt;remove language branches that the target doesn&#39;t support&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;after your code has processed the content and has not canceled the content import
&lt;ul&gt;
&lt;li&gt;Episerver&#39;s default import implementation creates the content instance basically using the content repository &lt;strong&gt;GetDefault&lt;/strong&gt; with the content type id and the content&#39;s master language
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Note&lt;/strong&gt;, when Episerver is resolving the content type to create, it has some fallback logic &lt;strong&gt;if the content type was not found using the content type id&lt;/strong&gt; (property: PageTypeId)
&lt;ul&gt;
&lt;li&gt;try to load the content type using the content type name (property: PageTypeName)&lt;/li&gt;
&lt;li&gt;try to load the content type using the ContentTypesMap&lt;/li&gt;
&lt;li&gt;try to load the content type using the content type&#39;s display name&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;after the instance is created using the IContentRepository.GetDefault method, the import reads the property values for the master language of the content and triggers the property importing event
&lt;ul&gt;
&lt;li&gt;most likely you don&#39;t do anything here other than debug log the values, as you have most likely already processed the property values in the content importing event handler&lt;/li&gt;
&lt;li&gt;and then the Episerver import sets the instance&#39;s property value to the property value from the import process&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;and then the same for language branches if there were any&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;import done&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note!&lt;/strong&gt; When importing content to the target site, you should most likely keep the &quot;Update existing content items with matching ID&quot; checkbox checked. Why? It makes the import keep the same content id for the content, which you will need if you are not importing all content in one go (most likely you will do multiple imports to keep the import package size smaller and to do the migration in parts). This way your content links don&#39;t break across import batches: Episerver uses the original content ids, so the links between content keep working. There is also one cool feature, or side-effect, of having it checked - you could later import a single language branch for the content from the original source and it would be connected to the already imported content, because the same content ids were used in target and source.&lt;/p&gt;
&lt;h2&gt;Building the custom import processing&lt;/h2&gt;
&lt;p&gt;Building blocks for our custom import processing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;custom initializable module that hooks to the import events
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;/link/e8ee864338d24e89b67f0325cc0fbaa1.aspx&quot;&gt;creating initialization module&lt;/a&gt; Episerver documentation&lt;/li&gt;
&lt;li&gt;import events &lt;a href=&quot;/csclasslibraries/cms/EPiServer.Enterprise.IDataImportEvents?version=11&quot;&gt;IDataImportEvents&lt;/a&gt; (assembly: EPiServer.Enterprise, namespace: EPiServer.Enterprise)&lt;/li&gt;
&lt;li&gt;is the entry point for the whole import process&lt;/li&gt;
&lt;li&gt;finds the import content type processor which can handle the content and calls it with the content properties
&lt;ul&gt;
&lt;li&gt;handles language branches too, so if a language branch is not supported in the target it will remove the language branch properties&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;decides whether a blob should be imported or not (based on the blob&#39;s file extension)&lt;/li&gt;
&lt;li&gt;knows about the languages enabled in the target
&lt;ul&gt;
&lt;li&gt;will cancel the import for content which is not in the enabled languages&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;if content uses the &#39;For This Site&#39; folder then that is switched to &#39;For All Sites&#39;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;custom interface for content processors: IImportContentTypeProcessor
&lt;ul&gt;
&lt;li&gt;the implementation is responsible to process the content
&lt;ul&gt;
&lt;li&gt;modify the content to match the target system&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;has a method: bool CanHandle(string contentTypeId);
&lt;ul&gt;
&lt;li&gt;guid string for a content type, like: 9FD1C860-7183-4122-8CD4-FF4C55E096F9&lt;/li&gt;
&lt;li&gt;called by our initialization module when the import raises the ContentImporting event, to find a processor that can handle the content&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;has a method: bool ProcessMasterLanguageProperties(Dictionary&amp;lt;string, RawProperty&amp;gt; rawProperties);&lt;/li&gt;
&lt;li&gt;has a method: bool ProcessLanguageBranchProperties(Dictionary&amp;lt;string, RawProperty&amp;gt; rawProperties);&lt;/li&gt;
&lt;li&gt;(initially I thought we might need language branch specific processing but it turned out there was no need for that method)&lt;/li&gt;
&lt;li&gt;the Dictionary&amp;lt;string, RawProperty&amp;gt; rawProperties is used just to avoid looking up properties multiple times from the RawProperty array in ContentImportingEventArgs (e.TransferContentData.RawContentData.Property) =&amp;gt; the array is read into a dictionary where the key is the property name and the value is the property&#39;s RawProperty data&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
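&lt;p&gt;Put together, the processor contract described above could look roughly like this (a sketch based on the description; the exact namespaces are omitted and the comments paraphrase the behaviour described in this post):&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;public interface IImportContentTypeProcessor
{
	// Called by the initialization module when the import raises the ContentImporting event.
	// contentTypeId is the content type guid string, like: 9FD1C860-7183-4122-8CD4-FF4C55E096F9
	bool CanHandle(string contentTypeId);

	// Process the master language properties, return false if the content could not be processed
	bool ProcessMasterLanguageProperties(Dictionary&amp;lt;string, RawProperty&amp;gt; rawProperties);

	// Process a single language branch, return false to have that language branch removed
	bool ProcessLanguageBranchProperties(Dictionary&amp;lt;string, RawProperty&amp;gt; rawProperties);
}&lt;/code&gt;&lt;/pre&gt;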
&lt;h3&gt;Import handler initialization module implementation&lt;/h3&gt;
&lt;p&gt;So create a new initialization module using the Episerver documentation: &lt;a href=&quot;/link/e8ee864338d24e89b67f0325cc0fbaa1.aspx&quot;&gt;creating initialization module&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;And in the Initialize method, get references to the IDataImportEvents and ILanguageBranchRepository services. We store the ILanguageBranchRepository in the initialization module&#39;s private field for later use. So our Initialize method implementation looks like this:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;public void Initialize(InitializationEngine context)
{
	try
	{
		if (context.HostType == HostType.WebApplication)
		{
			var locator = context.Locate.Advanced;

			// import events
			var events = locator.GetInstance&amp;lt;IDataImportEvents&amp;gt;();

			// language branch repository
			_languageBranchRepository = locator.GetInstance&amp;lt;ILanguageBranchRepository&amp;gt;();

			if (events != null &amp;amp;&amp;amp; _languageBranchRepository != null)
			{
				// Register the processors
				RegisterImportContentTypeProcessors();

				if (_importContentTypeProcessors.Count == 0)
				{
					ContentMigrationLogger.Logger.Warning(&quot;There are no import content type processors registered. Disabling import migration handling.&quot;);
					// pointless to register to import events if we have no processors
					return;
				}

				// subscribe to import events
				events.BlobImporting += EventBlobImporting;
				events.BlobImported += EventBlobImported;
				events.ContentImporting += EventContentImporting;
				events.ContentImported += EventContentImported;
				events.PropertyImporting += EventPropertyImporting;

				events.Starting += EventImportStarting;
				events.Completed += EventImportCompleted;

				// we have registered events handlers which we should unregister in uninitialize
				_eventsInitialized = true;
			}
			else
			{
				ContentMigrationLogger.Logger.Error($&quot;Import migration handling not enabled because required services are not available. IDataImportEvents is null: {events == null}. ILanguageBranchRepository is null: {_languageBranchRepository == null}.&quot;);
			}
		}
	}
	catch (Exception ex)
	{
		ContentMigrationLogger.Logger.Error(&quot;There was an unexpected error during import handler initialization.&quot;, ex);
	}
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;NOTE!&lt;/strong&gt; The method RegisterImportContentTypeProcessors() is in the initialization module and is used to register the IImportContentTypeProcessor services, which are stored in the initialization module&#39;s private field for later use. ContentMigrationLogger is just our own static class that contains the shared logger for all import logging activities. The events BlobImported and ContentImported are only used to write debug messages about the content, to be able to troubleshoot issues related to it (mainly used when developing an IImportContentTypeProcessor implementation and testing locally with content).&lt;/p&gt;
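&lt;p&gt;And as the code comment says, the registered event handlers should be unregistered in the Uninitialize method. A sketch of what that could look like, using the same handler names as above:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;public void Uninitialize(InitializationEngine context)
{
	// only unregister if Initialize actually registered the handlers
	if (_eventsInitialized)
	{
		var events = context.Locate.Advanced.GetInstance&amp;lt;IDataImportEvents&amp;gt;();

		events.BlobImporting -= EventBlobImporting;
		events.BlobImported -= EventBlobImported;
		events.ContentImporting -= EventContentImporting;
		events.ContentImported -= EventContentImported;
		events.PropertyImporting -= EventPropertyImporting;

		events.Starting -= EventImportStarting;
		events.Completed -= EventImportCompleted;

		_eventsInitialized = false;
	}
}&lt;/code&gt;&lt;/pre&gt;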
&lt;h4&gt;Blob import&lt;/h4&gt;
&lt;p&gt;As I mentioned previously, blobs are the first thing the import processes. Our blob importing handler looks like this:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;private static readonly string[] _allowedFileExtensions = new string[] { &quot;mp4&quot;, &quot;jpg&quot;, &quot;jpeg&quot;, &quot;png&quot;, &quot;svg&quot;, &quot;pdf&quot;, &quot;json&quot; };

private void EventBlobImporting(EPiServer.Enterprise.Transfer.ITransferContext transferContext, FileImportingEventArgs e)
{
	if (e == null)
	{
		ContentMigrationLogger.Logger.Error(&quot;EventBlobImporting called with null reference for FileImportingEventArgs.&quot;);
		// without the event arguments there is nothing we can do, and the code below would throw
		return;
	}

	if (ContentMigrationLogger.IsDebugEnabled)
	{
		ContentMigrationLogger.Logger.Debug($&quot;Importing blob, provider name &#39;{e.ProviderName}&#39;, relative path &#39;{e.ProviderRelativePath}&#39; and permanent link virtual path &#39;{e.PermanentLinkVirtualPath}&#39;.&quot;);
	}

	// try to get the file extension
	if (e.TryGetFileExtension(out string fileExtension))
	{
		// check is the extension allowed/supported
		if (!_allowedFileExtensions.Contains(fileExtension, StringComparer.OrdinalIgnoreCase))
		{
			ContentMigrationLogger.Logger.Error($&quot;Cancelling blob import because the file extension &#39;{fileExtension}&#39; is not allowed. Provider name &#39;{e.ProviderName}&#39;, relative path &#39;{e.ProviderRelativePath}&#39; and permanent link virtual path &#39;{e.PermanentLinkVirtualPath}&#39;.&quot;);
			e.Cancel = true;
		}
	}
	else
	{
		// no file extension or it could not be resolved, cancel the blob import
		ContentMigrationLogger.Logger.Error($&quot;Cancelling blob import because the file extension could not be extracted from provider relative path value &#39;{e.ProviderRelativePath}&#39;.&quot;);
		e.Cancel = true;
	}
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And here is a snippet from our extensions and helpers class for the used extension method TryGetFileExtension(out string fileExtension):&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;/// &amp;lt;summary&amp;gt;
/// Gets the file extension from a file name (or path and filename).
/// &amp;lt;/summary&amp;gt;
/// &amp;lt;param name=&quot;fileName&quot;&amp;gt;file name or path and file name&amp;lt;/param&amp;gt;
/// &amp;lt;param name=&quot;fileExtension&quot;&amp;gt;the file extension extracted from the &amp;lt;paramref name=&quot;fileName&quot;/&amp;gt; without the dot, so result will be like: jpg or png etc.&amp;lt;/param&amp;gt;
/// &amp;lt;returns&amp;gt;True if file extension was extracted from &amp;lt;paramref name=&quot;fileName&quot;/&amp;gt; otherwise false&amp;lt;/returns&amp;gt;
/// &amp;lt;remarks&amp;gt;
/// &amp;lt;para&amp;gt;
/// The extension is not normalized (nothing is done to capitalization), meaning if the extension is &#39;PnG&#39; then that will be returned.
/// &amp;lt;/para&amp;gt;
/// &amp;lt;/remarks&amp;gt;
private static bool GetFileExtension(string fileName, out string fileExtension)
{
	fileExtension = null;

	if (!string.IsNullOrWhiteSpace(fileName))
	{
		try
		{
			string tmpFileExtension = Path.GetExtension(fileName);

			if (!string.IsNullOrWhiteSpace(tmpFileExtension) &amp;amp;&amp;amp; tmpFileExtension.Length &amp;gt;= 2)
			{
				// length check so that there is more than the dot

				fileExtension = tmpFileExtension.Substring(1);
				return true;
			}
		}
		catch (Exception ex)
		{
			ContentMigrationLogger.Logger.Error($&quot;Getting file extension from filename &#39;{fileName}&#39; failed.&quot;, ex);
			return false;
		}
	}

	return false;
}

/// &amp;lt;summary&amp;gt;
/// Tries to get the file extension from FileImportingEventArgs property ProviderRelativePath value.
/// &amp;lt;/summary&amp;gt;
/// &amp;lt;param name=&quot;args&quot;&amp;gt;Instance of &amp;lt;see cref=&quot;EPiServer.Enterprise.FileImportingEventArgs&quot;/&amp;gt; or null&amp;lt;/param&amp;gt;
/// &amp;lt;param name=&quot;fileExtension&quot;&amp;gt;Extracted file extension&amp;lt;/param&amp;gt;
/// &amp;lt;returns&amp;gt;True if the file extension was extracted otherwise false&amp;lt;/returns&amp;gt;
public static bool TryGetFileExtension(this EPiServer.Enterprise.FileImportingEventArgs args, out string fileExtension)
{
	fileExtension = null;

	if (args == null)
	{
		return false;
	}

	if (GetFileExtension(args.ProviderRelativePath, out fileExtension))
	{
		return true;
	}

	return false;
}&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Content import&lt;/h4&gt;
&lt;p&gt;The next step in the import process is to import the actual content, and this is the place where we can manipulate the content and/or cancel its import completely. The Episerver import process triggers the ContentImporting event for each content item that is about to be imported (so this event happens before the content item is actually created).&lt;/p&gt;
&lt;p&gt;Here is a picture of the epix.xml file structure (the actual xml file was modified, entries removed, so that I could fit the entries in the picture), showing that there is master language content in English and then language branch content in Finnish:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/47196f840c27403fbd377b2879fc1ed1.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;So from the picture you can see the basic structure of the XML:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;pages element will contain TransferContentData element(s)
&lt;ul&gt;
&lt;li&gt;TransferContentData is a single content item with its language versions (and other possible settings, like language fallback settings in the ContentLanguageSettings element)
&lt;ul&gt;
&lt;li&gt;if the RawContentData element has a type attribute with value RawPage, then the content is a page, otherwise it is some other content type like: block, media, etc.&lt;/li&gt;
&lt;li&gt;RawContentData holds the master language property values&lt;/li&gt;
&lt;li&gt;language branch property values are in the RawLanguageData&lt;/li&gt;
&lt;li&gt;the structure is the same for both RawContentData and RawLanguageData
&lt;ul&gt;
&lt;li&gt;so below those we have the ACL element that can hold access control settings&lt;/li&gt;
&lt;li&gt;the Property element holds all the properties for the content
&lt;ul&gt;
&lt;li&gt;there is a RawProperty per each property on the content type, and these hold all the information about each property, like type, type name, whether it is language specific, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;epix XML file high level structure:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/3c91f0d2481b46559c9a7dacaae9d2e3.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;So back to the ContentImporting event: this is where we get the TransferContentData element with its children nicely parsed into corresponding .NET framework objects. We can get the content from the &lt;a href=&quot;/csclasslibraries/cms/EPiServer.Enterprise.ContentImportingEventArgs?version=11&quot;&gt;ContentImportingEventArgs&lt;/a&gt; TransferContentData property, which is of type &lt;a href=&quot;/csclasslibraries/cms/EPiServer.Core.Transfer.ITransferContentData?version=11&quot;&gt;ITransferContentData&lt;/a&gt;, and that interface defines the properties which hold the data from the epix.xml file. So this is where the &quot;fun&quot; starts with the data manipulation. Unfortunately I cannot share all the code we&#39;ve written for the custom import processing, but I&#39;ll try to share enough information and code snippets to give you a good start in your own project.&lt;/p&gt;
&lt;p&gt;For example, in our case the source content had language fallbacks defined in a sub-structure of the website and we didn&#39;t want to bring any of those settings into our target site, so we created a simple extension method that always removes that information from the content:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;/// &amp;lt;summary&amp;gt;
/// Removes all content language settings from the &amp;lt;see cref=&quot;ITransferContentData.ContentLanguageSettings&quot;/&amp;gt;.
/// &amp;lt;/summary&amp;gt;
/// &amp;lt;param name=&quot;transferContentData&quot;&amp;gt;Instance of &amp;lt;see cref=&quot;ITransferContentData&quot;/&amp;gt; or null&amp;lt;/param&amp;gt;
public static void RemoveContentLanguageSettings(this ITransferContentData transferContentData)
{
	if (transferContentData != null &amp;amp;&amp;amp; transferContentData.ContentLanguageSettings != null)
	{
		transferContentData.ContentLanguageSettings.Clear();
	}
}

// transferContentData is gotten from the ContentImportingEventArgs e, 
// var transferContentData = e?.TransferContentData;
// transferContentData.RemoveContentLanguageSettings();&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We also use an extension method (with some overloads) to load the RawProperty items once into a dictionary where the key is the property name and the value is the original RawProperty instance. This way we get faster lookups for the properties we want, instead of enumerating the RawProperty[] array each time we need to do something with a property during processing. Below is sample code for such an extension:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;/// &amp;lt;summary&amp;gt;
/// Reads the properties to a dictionary where key is the property name and value is the original RawProperty.
/// &amp;lt;/summary&amp;gt;
/// &amp;lt;param name=&quot;rawProperties&quot;&amp;gt;Array of RawProperty instances&amp;lt;/param&amp;gt;
/// &amp;lt;returns&amp;gt;Dictionary containing the original properties&amp;lt;/returns&amp;gt;
/// &amp;lt;remarks&amp;gt;
/// &amp;lt;para&amp;gt;
/// NOTE! The RawProperty instances are not copied from the origin they still point to the original array
///  so any modifications are done to the original properties.
/// &amp;lt;/para&amp;gt;
/// &amp;lt;/remarks&amp;gt;
public static Dictionary&amp;lt;string, RawProperty&amp;gt; GetRawPropertiesAsDictionary(this RawProperty[] rawProperties)
{
	if (rawProperties == null || rawProperties.Length == 0)
	{
		return new Dictionary&amp;lt;string, RawProperty&amp;gt;(0);
	}

	var dict = new Dictionary&amp;lt;string, RawProperty&amp;gt;(rawProperties.Length);

	foreach (var property in rawProperties)
	{
		// if there is a null value in the array then skip it
		if (property == null)
		{
			continue;
		}

		dict.Add(property.Name, property);
	}

	return dict;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;NOTE! As said in the code comments above, the dictionary values still point to the same original objects held in the source array, so any changes you make through the dictionary are actually done to the source array objects too - keep this in mind! In our implementation the idea is not to care about changes to the original array, as we always overwrite the original array objects in our code by calling a Save extension method like this:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;/// &amp;lt;summary&amp;gt;
/// Writes the &amp;lt;paramref name=&quot;rawProperties&quot;/&amp;gt; dictionary &#39;Values&#39; property to the &amp;lt;paramref name=&quot;content&quot;/&amp;gt; instance&#39;s &#39;Property&#39; field.
/// &amp;lt;/summary&amp;gt;
/// &amp;lt;param name=&quot;content&quot;&amp;gt;Instance of RawContent&amp;lt;/param&amp;gt;
/// &amp;lt;param name=&quot;rawProperties&quot;&amp;gt;Dictionary containing the raw properties to be written to the &amp;lt;paramref name=&quot;content&quot;/&amp;gt; instance&#39;s Property field.&amp;lt;/param&amp;gt;
/// &amp;lt;exception cref=&quot;ArgumentNullException&quot;&amp;gt;&amp;lt;paramref name=&quot;content&quot;/&amp;gt; is null&amp;lt;/exception&amp;gt;
/// &amp;lt;remarks&amp;gt;
/// &amp;lt;para&amp;gt;
/// You need to call this method so that all your changes, like removing a property, get persisted to the RawContent (&amp;lt;paramref name=&quot;content&quot;/&amp;gt;) instance&#39;s Property field.
/// &amp;lt;/para&amp;gt;
/// &amp;lt;para&amp;gt;
/// If the &amp;lt;paramref name=&quot;rawProperties&quot;/&amp;gt; is null or has no entries then nothing is done. If your intention is to remove all properties then you need to do it another way.
/// &amp;lt;/para&amp;gt;
/// &amp;lt;/remarks&amp;gt;
public static void Save(this RawContent content, Dictionary&amp;lt;string, RawProperty&amp;gt; rawProperties)
{
	if (content == null)
	{
		throw new ArgumentNullException(nameof(content));
	}

	if (rawProperties.HasEntries())
	{
		content.Property = rawProperties.Values.ToArray();
	}
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And we have a similar Save method for block properties (cases where a local block is used, aka block as a property).&lt;/p&gt;
&lt;p&gt;Here is a link to a gist containing some extension methods to get Episerver known properties from the &quot;rawproperties&quot; dictionary: &lt;a href=&quot;https://gist.github.com/alasvant/c446b68cda607ab734110af73aef317b&quot;&gt;https://gist.github.com/alasvant/c446b68cda607ab734110af73aef317b&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;Import flow in our implementation&lt;/h4&gt;
&lt;p&gt;Instead of copy-pasting all of our project&#39;s code here, I will describe our flow:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;remove the previously mentioned content language settings&lt;/li&gt;
&lt;li&gt;parse the content properties into a dictionary&lt;/li&gt;
&lt;li&gt;get the content&#39;s &#39;content type id&#39; property value from the dictionary
&lt;ul&gt;
&lt;li&gt;if there is no content type id, do nothing with the content (let the Episerver import decide what to do with it)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;find a content type processor for the content type id from the registered content type processors
&lt;ul&gt;
&lt;li&gt;note, in our implementation a content type processor can handle one or many content types, that&#39;s why it has the CanHandle(string) method defined in the interface&lt;/li&gt;
&lt;li&gt;the first processor saying it can handle the content is used&lt;/li&gt;
&lt;li&gt;if no content type processor is found, we check that the content&#39;s master language is enabled in the system
&lt;ul&gt;
&lt;li&gt;if the master language is not enabled, we cancel the import for the content, as the Episerver import would otherwise error on the content and break the import process&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;if a processor was found, we call its ProcessMasterLanguageProperties method with the master language properties
&lt;ul&gt;
&lt;li&gt;if the processor returns true, i.e. the content was processed without errors, we save the modified properties to the RawContentData property (event argument e.TransferContentData.RawContentData)
&lt;ul&gt;
&lt;li&gt;and then we have code that loops over e.TransferContentData.RawLanguageData (the list containing the language branches)
&lt;ul&gt;
&lt;li&gt;check that the language is enabled in the system; if not, add the language to the &quot;to be removed&quot; list&lt;/li&gt;
&lt;li&gt;pass the parsed language properties to the processor&#39;s ProcessLanguageBranchProperties method
&lt;ul&gt;
&lt;li&gt;if the method returns true, save the properties to the original object (RawContent.Property, which is a RawProperty array, so we basically call for the dictionary: content.Property = rawProperties.Values.ToArray();)&lt;/li&gt;
&lt;li&gt;if the method returns false, add the language to the &quot;to be removed&quot; list&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;finally, if there are any entries in the &quot;to be removed&quot; list, we remove those languages from the original list&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
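&lt;p&gt;The flow above boils down to a ContentImporting event handler roughly like this (a simplified sketch; TryGetContentTypeId is a hypothetical helper that reads the content type id property value from the dictionary, and the language branch loop is condensed to a comment):&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;private void EventContentImporting(EPiServer.Enterprise.Transfer.ITransferContext transferContext, ContentImportingEventArgs e)
{
	var transferContentData = e?.TransferContentData;

	if (transferContentData == null)
	{
		return;
	}

	// always remove the source site&#39;s language fallback settings
	transferContentData.RemoveContentLanguageSettings();

	// read the master language RawProperty array once into a dictionary
	var rawProperties = transferContentData.RawContentData.Property.GetRawPropertiesAsDictionary();

	// hypothetical helper, reads the content type id property value from the dictionary
	if (!rawProperties.TryGetContentTypeId(out string contentTypeId))
	{
		// no content type id, let the Episerver import decide what to do with the content
		return;
	}

	// the first registered processor that says it can handle the content type is used
	var processor = _importContentTypeProcessors.FirstOrDefault(p =&amp;gt; p.CanHandle(contentTypeId));

	if (processor == null)
	{
		// no processor: here we still check the master language and cancel the import
		// if it is not enabled in the target, otherwise the Episerver import would
		// error on the content and break the whole import
		return;
	}

	if (processor.ProcessMasterLanguageProperties(rawProperties))
	{
		// persist the possibly modified properties back to the raw content
		transferContentData.RawContentData.Save(rawProperties);

		// then loop transferContentData.RawLanguageData: remove branches that are not
		// enabled in the target or for which ProcessLanguageBranchProperties returns false
	}
	else
	{
		e.Cancel = true;
	}
}&lt;/code&gt;&lt;/pre&gt;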
&lt;h4&gt;Content manipulation&lt;/h4&gt;
&lt;p&gt;So what can we do about the content:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;we can cancel the content import by setting the e.Cancel = true, in the event handler for the ContentImportingEventArgs instance&lt;/li&gt;
&lt;li&gt;we can change the content type id
&lt;ul&gt;
&lt;li&gt;for example, let&#39;s say that you have copied (like copy paste source code) from site A to site B, but you have changed the content type id and the class name and now you would like to import some instance from site A to site B&lt;/li&gt;
&lt;li&gt;if all the properties are still the same, it is enough just to change the content type id value in the import to match the target system content id&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;we can remove properties, simply in the code remove the property from the rawDictionary (referring our processing implementation)&lt;/li&gt;
&lt;li&gt;we can rename properties: simply get the property using the source property name and change the referenced RawProperty object&#39;s property name&lt;/li&gt;
&lt;li&gt;we can convert property types (with some limitations; you need to be able to write the code to do the conversion)
&lt;ul&gt;
&lt;li&gt;for example, in our case we had used the &lt;a href=&quot;/csclasslibraries/cms/EPiServer.Url?version=11&quot;&gt;Episerver Url&lt;/a&gt; type for a property, and in the destination we had changed it to ContentReference, as the properties were used for example for images, so there was no need for the type to be Url
&lt;ul&gt;
&lt;li&gt;so we created a helper method which tries to convert the source Url to a ContentReference
&lt;ul&gt;
&lt;li&gt;the source value is like this in the RawProperty value (string): ~/link/653ebc1fa8184dbdbab2023e3998ce28.aspx&lt;/li&gt;
&lt;li&gt;the code extracts the &quot;guid&quot; string from the source, so it grabs the value: 653ebc1fa8184dbdbab2023e3998ce28&lt;/li&gt;
&lt;li&gt;and then creates a new string with the following format: [653ebc1fa8184dbdbab2023e3998ce28][][]
&lt;ul&gt;
&lt;li&gt;that&#39;s how a ContentReference is serialized to the epix.xml, so we are just mimicking that. NOTE! There can also be language information, but we are leaving those parts empty in our case&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;we can re-map the content to a totally different content type
&lt;ul&gt;
&lt;li&gt;so basically change the content type id&lt;/li&gt;
&lt;li&gt;rename properties&lt;/li&gt;
&lt;li&gt;remove properties&lt;/li&gt;
&lt;li&gt;add new properties and copy values to them from multiple properties
&lt;ul&gt;
&lt;li&gt;for example, when the target uses a local block (block as a property) and the source has the values in multiple properties&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;you can change the content&#39;s publishing status
&lt;ul&gt;
&lt;li&gt;for example, in our code blocks are published but pages are changed to draft&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;you could add new language branches in your code&lt;/li&gt;
&lt;li&gt;you could change the content&#39;s master language
&lt;ul&gt;
&lt;li&gt;for example, if all content should be created in English, and your source content was created with Finnish as the master language and has an English language branch, then in your code you could swap the master language properties with that language branch&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
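&lt;p&gt;As an illustration of the Url to ContentReference conversion mentioned above, a helper along these lines could do the string conversion. This is a hypothetical sketch, not the actual project code - the method name is made up and the error handling is minimal:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// Tries to convert a permanent link style Url value, for example
// &quot;~/link/653ebc1fa8184dbdbab2023e3998ce28.aspx&quot;, to the format used
// for ContentReference values in the export XML: &quot;[guid][][]&quot;.
public static bool TryConvertUrlToContentReferenceValue(string url, out string value)
{
    value = null;

    if (string.IsNullOrWhiteSpace(url))
    {
        return false;
    }

    // grab the file name part without the extension
    int start = url.LastIndexOf(&#39;/&#39;) + 1;
    int end = url.LastIndexOf(&#39;.&#39;);

    if (end &amp;lt;= start)
    {
        return false;
    }

    string guidPart = url.Substring(start, end - start);

    // validate that the extracted value really is a guid (32 digits, no hyphens)
    if (!Guid.TryParseExact(guidPart, &quot;N&quot;, out _))
    {
        return false;
    }

    // NOTE! The two empty bracket pairs could contain language information,
    // we are leaving those empty in our case.
    value = &quot;[&quot; + guidPart + &quot;][][]&quot;;
    return true;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For the example value above, the helper produces the string [653ebc1fa8184dbdbab2023e3998ce28][][].&lt;/p&gt;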
&lt;p&gt;What can&#39;t we do?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;add totally new content that doesn&#39;t exist in the import data - meaning you can&#39;t, for example, create a new additional page, because everything is based on the import data
&lt;ul&gt;
&lt;li&gt;well, you might be able to do some hacks, like storing information about some content during the import process and having something else triggered after the import to create the extra content&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Property import&lt;/h4&gt;
&lt;p&gt;The last step in the import is the PropertyImporting event, which we basically used only for debug logging in development (property information about what is about to be imported).&lt;/p&gt;
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;This time a bit less code than usual in my posts, but I simply can&#39;t copy-paste the code from the project :D Hopefully this post still gives you some ideas if you find yourself in a situation where you need to migrate a lot of content from one or more sites to another site - you might be able to import some if not all content using a custom import processing implementation and cut the costs of the migration project.&lt;/p&gt;
&lt;p&gt;I make no promises, but I might later add a new post with working sample code using two Alloy sites - but as we all know, Episerver .NET Core is coming, and depending on when it is officially released my priorities might change :D&lt;/p&gt;
&lt;p&gt;Please feel free to ask questions in the comments section if something was left unclear or needs more information and I will try to answer.&lt;/p&gt;</id><updated>2021-01-23T13:15:45.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Export selected languages</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2020/11/export-selected-languages/" /><id>&lt;p&gt;Ever needed the functionality to just export the selected languages in Episerver instead of all languages which is the default behavior? If Yes, then this post is for you.&lt;/p&gt;
&lt;h2&gt;Background&lt;/h2&gt;
&lt;p&gt;A little background for the post and the small &lt;a href=&quot;https://nuget.episerver.com/package/?id=Swapcode.EpiExport.LanguagesSelector&quot;&gt;add-on&lt;/a&gt; I&#39;ve added to the Episerver NuGet feed. I&#39;m working on a project where we are going to migrate thousands of pages from an old site to a new site. There are just a few buts... our source site uses tens of languages, where the master language is usually English, but there are single pages here and there where the master language is not English. Our destination has just two languages enabled. So when we export content from the source site, even small selected sections can grow into huge export packages - the worst ones are 1 GB! If we could define which languages to export, we could cut down the size of the export package.&lt;/p&gt;
&lt;p&gt;We&#39;ve built an import processing &#39;framework&#39; in the destination site that manipulates the content during import, as we have different content types in the target site (solution). If there are unsupported languages in the content, we discard those versions (master or language branch). The processing can also transform content types from source to destination content types - so when we do the import there are zero warnings :) Our import processing utilizes &lt;a href=&quot;/csclasslibraries/cms/EPiServer.Enterprise.IDataImportEvents?version=11&quot;&gt;EPiServer.Enterprise.IDataImportEvents&lt;/a&gt; and does the magic there. I have to say I&#39;m happy with how well it has worked, but that is not the point of this blog post - the export languages are; maybe more about the export/import migration in another post :D&lt;/p&gt;
&lt;h2&gt;Export&lt;/h2&gt;
&lt;p&gt;Episerver exposes events during the export which we can hook into, see &lt;a href=&quot;/csclasslibraries/cms/EPiServer.Enterprise.IDataExportEvents?version=11&quot;&gt;EPiServer.Enterprise.IDataExportEvents&lt;/a&gt;. I wanted to know: if I hook up to the &lt;a href=&quot;/csclasslibraries/cms/EPiServer.Enterprise.IDataExportEvents?version=11#EPiServer_Enterprise_IDataExportEvents_Starting&quot;&gt;Starting event&lt;/a&gt;, could I tell the export that it should only export the languages I supply to it? And yes, it is possible to supply a list of languages for the export (see the &lt;a href=&quot;https://github.com/alasvant/Swapcode.EpiExport.LanguagesSelector/blob/master/src/Swapcode.EpiExport.LanguagesSelector/ExportLanguagesInitializationModule.cs&quot;&gt;code in my repo&lt;/a&gt;). In the Starting event of the export we get an &lt;a href=&quot;/csclasslibraries/cms/EPiServer.Enterprise.Transfer.ITransferContext?version=11&quot;&gt;ITransferContext&lt;/a&gt;, which inherits the &lt;a href=&quot;/csclasslibraries/cms/EPiServer.Core.Transfer.IContentTransferContext?version=11#EPiServer_Core_Transfer_IContentTransferContext_ContentLanguages&quot;&gt;ContentLanguages&lt;/a&gt; property from IContentTransferContext - a list of languages. If no languages are supplied then all languages are exported; a language value in the list is any valid culture name like &#39;en&#39; or &#39;fi&#39; (see &lt;a href=&quot;https://docs.microsoft.com/en-us/dotnet/api/system.globalization.cultureinfo.getcultureinfo?view=netframework-4.6.1#System_Globalization_CultureInfo_GetCultureInfo_System_String_&quot;&gt;CultureInfo.GetCultureInfo(string)&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;After the POC we needed a way to configure the languages - hard-coding the values in the event handler was not an option, because we should still be able to export all languages, for example if we need to move a small amount of data from production to a test or development environment. For that I utilized the old but still working &lt;a href=&quot;/csclasslibraries/cms/EPiServer.PlugIn.GuiPlugInAttribute?version=11&quot;&gt;GuiPlugIn&lt;/a&gt; and &lt;a href=&quot;/csclasslibraries/cms/EPiServer.PlugIn.PlugInPropertyAttribute?version=11&quot;&gt;PlugInProperty&lt;/a&gt; attributes together with &lt;a href=&quot;/csclasslibraries/cms/EPiServer.PlugIn.PlugInSettings?version=11&quot;&gt;PlugInSettings&lt;/a&gt; to persist and load the configured settings.&lt;/p&gt;
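&lt;p&gt;The event hookup can be sketched like this. This is a simplified sketch only - see the actual implementation in the GitHub repo; the exact event delegate signature is sketched from memory and GetConfiguredLanguages() is a placeholder for reading the plug-in settings:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// Simplified sketch of supplying export languages in the Starting event.
// GetConfiguredLanguages() is a placeholder for reading the plug-in settings.
[InitializableModule]
[ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
public class ExportLanguagesInitializationModule : IInitializableModule
{
    public void Initialize(InitializationEngine context)
    {
        var exportEvents = context.Locate.Advanced.GetInstance&amp;lt;IDataExportEvents&amp;gt;();
        // hook the export Starting event
        exportEvents.Starting += OnExportStarting;
    }

    private void OnExportStarting(ITransferContext transferContext, EventArgs e)
    {
        // culture names like &quot;en&quot; or &quot;fi&quot;;
        // an empty list means all languages are exported
        foreach (var language in GetConfiguredLanguages())
        {
            transferContext.ContentLanguages.Add(language);
        }
    }

    public void Uninitialize(InitializationEngine context) { }
}&lt;/code&gt;&lt;/pre&gt;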
&lt;h2&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Export languages configuration is done from the Admin view&#39;s &#39;Config&#39; tab using the Plug-in Manager.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;click the &#39;Swapcode.EpiExport.LanguagesSelector&#39;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/link/7deccf17b1b1422bbae3e425250766ce.aspx&quot; /&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;next you will be shown a list of checkboxes for the languages enabled in your system&lt;/li&gt;
&lt;li&gt;select the languages you want to include in the export and click save&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/link/ee0f4e4d3efd46ed9db0baa452e0d59a.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Just remember to uncheck the checkboxes and save after your export, so that the Episerver default export functionality is used again.&lt;/p&gt;
&lt;h2&gt;What it does not do?&lt;/h2&gt;
&lt;p&gt;It just sets the languages, but this actually means language branches. So, for example, you have a page created in English (master language) and then translated into three other languages, like Finnish, Swedish and Latvian. If you now set in the settings that English and Finnish are used in the export, you will only get the en and fi languages in the export package.&lt;/p&gt;
&lt;p&gt;Let&#39;s take the example one step further and add pages under the above sample pages, but create each page in a different language (so the master language is not en but some other language for each page), something like this:&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;img src=&quot;/link/328b07227ec44602b26354fdea4eb202.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Now if you export the &#39;Group masterlanguages&#39; with the export setting &#39;Include sub items&#39; selected, you most likely would expect the export to only contain the container and the &#39;en&#39; and &#39;fi&#39; pages, but you will actually get all the pages in this case - this is Episerver export default behavior; it includes all master language versions, as those contain the non-culture-specific values.&lt;/p&gt;
&lt;p&gt;The same will be true for blocks (or any other localizable content) - so in cases like blocks in the &#39;For This Page&#39; assets folder, all languages are exported even when you have specified that only certain languages should be exported.&lt;/p&gt;
&lt;p&gt;So remember the above when using this add-on (at least for the initial version 1.0.0).&lt;/p&gt;
&lt;p&gt;I did try to exclude content in the &lt;a href=&quot;/csclasslibraries/cms/EPiServer.Enterprise.IDataExportEvents?version=11#EPiServer_Enterprise_IDataExportEvents_ContentExporting&quot;&gt;ContentExporting&lt;/a&gt; event using the selected languages, but when I removed content from the export, the generated export package failed to import. There was an XML element error during import (maybe I will write another blog post about the Episerver *.episerverdata export package ;-) ). In theory I know where the error is in the XML file, but the import reads the XML file in parts, so the reported line number is not accurate against the whole XML file - and in my case that XML file was 25 MB, so I didn&#39;t have time to start splitting and investigating it. As our import processing &quot;framework&quot; already handles not supported (not enabled) languages, it was not that big an issue for us, other than we couldn&#39;t make the package even smaller.&lt;/p&gt;
&lt;h2&gt;NuGet package and sources in GitHub&lt;/h2&gt;
&lt;p&gt;Sources are in GitHub:&amp;nbsp;&lt;a href=&quot;https://github.com/alasvant/Swapcode.EpiExport.LanguagesSelector&quot;&gt;https://github.com/alasvant/Swapcode.EpiExport.LanguagesSelector&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;NuGet package is in Episerver NuGet feed:&amp;nbsp;&lt;a href=&quot;https://nuget.episerver.com/package/?id=Swapcode.EpiExport.LanguagesSelector&quot;&gt;https://nuget.episerver.com/package/?id=Swapcode.EpiExport.LanguagesSelector&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In our project we have already added additional filtering to the export, and I will most likely add that soonish to this package (but then it is no longer just about languages, so the add-on name will no longer be correct :D).&lt;/p&gt;
&lt;p&gt;I will also try to look into the issue of excluding content in the ContentExporting event and what causes the import to fail - it could be the data or a bug in the export.&lt;/p&gt;</id><updated>2020-11-25T18:08:56.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>How to protect your public test or stage environment from outsiders using authorization?</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2020/11/how-to-protect-you-public-test-or-stage-environment-from-outsiders-using-authorization/" /><id>&lt;p&gt;Sometimes we face a situation where we can&#39;t use the more common/better ways to protect the test/stage environment, such as IP restrictions in &lt;strong&gt;web[dot]config&lt;/strong&gt; (fyi, the blog platform&#39;s Cloudflare security feature prevents using the dot in the file name :D), in a load balancer, in a Web Application Firewall (WAF), in a firewall or in some other security feature. So is there no other way? Yes there is - well, at least we can make the resources unavailable if the user can&#39;t log in.&lt;/p&gt;
&lt;p&gt;Background:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;we have a public-facing test and stage environment&lt;/li&gt;
&lt;li&gt;we have people from all over the world with dynamic IP-addresses who should be able to see the content
&lt;ul&gt;
&lt;li&gt;they all have logins and different roles setup&lt;/li&gt;
&lt;li&gt;ASP.NET Identity is used&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;we don&#39;t want &quot;outsiders&quot; to be able to see any resources from the website&lt;/li&gt;
&lt;li&gt;excluding the login and logout pages - these should naturally be accessible to anonymous users ;)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Solution one&lt;/h2&gt;
&lt;p&gt;Use Episerver &#39;Set Access Rights&#39; and remove the &#39;Read&#39; access from &#39;Everyone&#39; group.&lt;/p&gt;
&lt;p&gt;Pros:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;can be done from Episerver Admin UI&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;depending on how access rights are used on the site, this might require quite a lot of clicking in the content tree to find all nodes which are not inheriting access rights from the root&lt;/li&gt;
&lt;li&gt;does not affect any static resources&lt;/li&gt;
&lt;li&gt;does not affect any dynamic resources / programmatic resources not having authorization&lt;/li&gt;
&lt;li&gt;when content is refreshed from production, this needs to be done again&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Solution two&lt;/h2&gt;
&lt;p&gt;Use IIS security in web[dot]config file.&lt;/p&gt;
&lt;p&gt;Pros:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;can be done with a simple configuration transform for the target environment&lt;/li&gt;
&lt;li&gt;content can be refreshed and no actions are needed, same restrictions are still in place&lt;/li&gt;
&lt;li&gt;we can protect all resources from the website&lt;/li&gt;
&lt;li&gt;well excluding the login and logout pages&lt;/li&gt;
&lt;li&gt;no need to change the existing access rights for the content in Episerver&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;p&gt;None - Just kidding :D&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;it will not totally block access to the website for anonymous users, the login and logout pages are exposed&lt;/li&gt;
&lt;li&gt;actually, my configuration below exposes anything below the util path&lt;/li&gt;
&lt;li&gt;because it redirects to login when the user is not logged in, if the Episerver default login is used it will expose to the world the fact that the website is built with Episerver
&lt;ul&gt;
&lt;li&gt;usually this is acceptable&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;web[dot]config configuration sample&lt;/h3&gt;
&lt;p&gt;So how to configure? Find the root &#39;system.webServer&#39; section in your web[dot]config file and add the following, or modify your existing element.&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;&amp;lt;security&amp;gt;
  &amp;lt;authorization&amp;gt;
    &amp;lt;!-- remove any existing users and roles --&amp;gt;
    &amp;lt;remove users=&quot;*&quot; roles=&quot;&quot; verbs=&quot;&quot;/&amp;gt;
    &amp;lt;!-- add the roles that should be allowed to use the website after authentication --&amp;gt;
    &amp;lt;!-- now using the Episerver default groups which you need to change if you use something else --&amp;gt;
    &amp;lt;add accessType=&quot;Allow&quot; users=&quot;&quot; roles=&quot;WebAdmins,WebEditors&quot;/&amp;gt;
  &amp;lt;/authorization&amp;gt;
&amp;lt;/security&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So the above will allow the selected groups/roles to access the website and if the request is not authorized the user will be redirected to the configured login url.&lt;/p&gt;
&lt;p&gt;The above configuration blocks all requests so we need to allow anonymous requests (or requests that don&#39;t have the required roles to access the login page), so we need to add another configuration.&lt;/p&gt;
&lt;p&gt;In your web[dot]config file, find the location element for your login path (or add it if you are not using the Episerver default login page). The following configuration assumes you are using the default path, which is under the &#39;util&#39; path (if you have moved the util path to your-secret-util-path, then apply the following change to that location element). Find the location element with path=&quot;util&quot; and modify its system.webServer section.&lt;/p&gt;
&lt;p&gt;Note! This is needed because we are not using Forms authentication but ASP.NET Identity; the authorization element has the attribute &#39;bypassLoginPages&#39;, which defaults to true and would skip authorization for the login page when using Forms authentication.&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;&amp;lt;security&amp;gt;
  &amp;lt;authorization&amp;gt;
    &amp;lt;!-- remove any previous configurations --&amp;gt;
    &amp;lt;remove roles=&quot;&quot; users=&quot;*&quot; verbs=&quot;&quot;/&amp;gt;
    &amp;lt;!-- allow anonymous users --&amp;gt;
    &amp;lt;!-- and allow get and post verbs only --&amp;gt;
    &amp;lt;!-- get the page and post the login form --&amp;gt;
    &amp;lt;add accessType=&quot;Allow&quot; users=&quot;*&quot; roles=&quot;*&quot; verbs=&quot;GET,POST&quot;/&amp;gt;
  &amp;lt;/authorization&amp;gt;
&amp;lt;/security&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So after the above configurations the website should be accessible only to &#39;authorized&#39; users - users who have logged in and have the WebAdmins and/or WebEditors roles.&lt;/p&gt;
&lt;p&gt;Tip! If you look at the login page request in the browser developer tools network tab, you will notice that some requests ended with a 302 redirect to the login page. The login page works even without these resources and there seem to be no visible artifacts or missing functionality, BUT if you want those requests to succeed you need to add two more location elements with an allow configuration.&lt;/p&gt;
&lt;p&gt;On the login page there are three requests to WebResource.axd and two requests to CSS style files under&amp;nbsp;App_Themes/Default/Styles, so we can also allow those if we want to.&lt;/p&gt;
&lt;p&gt;Add the following location elements to your web[dot]config file.&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;&amp;lt;!-- allow anonymous users to get the resources requested on util/login.aspx and logout.aspx--&amp;gt;
&amp;lt;location path=&quot;WebResource.axd&quot;&amp;gt;
  &amp;lt;system.webServer&amp;gt;
    &amp;lt;security&amp;gt;
      &amp;lt;authorization&amp;gt;
        &amp;lt;remove roles=&quot;&quot; users=&quot;*&quot; verbs=&quot;&quot;/&amp;gt;
        &amp;lt;add accessType=&quot;Allow&quot; users=&quot;*&quot; roles=&quot;*&quot; verbs=&quot;GET&quot;/&amp;gt;
      &amp;lt;/authorization&amp;gt;
    &amp;lt;/security&amp;gt;
  &amp;lt;/system.webServer&amp;gt;
&amp;lt;/location&amp;gt;
&amp;lt;!-- allow anonymous users to get styles --&amp;gt;
&amp;lt;location path=&quot;App_Themes/Default/Styles&quot;&amp;gt;
  &amp;lt;system.webServer&amp;gt;
    &amp;lt;security&amp;gt;
      &amp;lt;authorization&amp;gt;
        &amp;lt;remove roles=&quot;&quot; users=&quot;*&quot; verbs=&quot;&quot;/&amp;gt;
        &amp;lt;add accessType=&quot;Allow&quot; users=&quot;*&quot; roles=&quot;*&quot; verbs=&quot;GET&quot;/&amp;gt;
      &amp;lt;/authorization&amp;gt;
    &amp;lt;/security&amp;gt;
  &amp;lt;/system.webServer&amp;gt;
&amp;lt;/location&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All done!&lt;/p&gt;
&lt;p&gt;Note, you should also be able to use the &quot;system.web&quot; section for the configuration, but that is basically for old IIS and the old Cassini development server ;)&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&quot;https://docs.microsoft.com/en-us/iis/configuration/system.webserver/security/authorization/#configuration&quot;&gt;IIS configuration reference for authorization&lt;/a&gt;.&lt;/p&gt;</id><updated>2020-11-10T18:34:46.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>How to blog on Episerver World?</title><link href="https://world.optimizely.com/blogs/Antti-Alasvuo/Dates/2020/9/how-to-blog-on-episerver-world/" /><id>&lt;p&gt;In this post I will show you how to blog on Episerver World and contribute to the community.&amp;nbsp;Some seasoned developers already know how to blog and might wonder what the point of the post is. In a nutshell there are two options: aggregate your blog feed from an external system or use the Episerver World blog platform. But keep on reading - you might not have used the Episerver World blog platform, and the documentation about it is simply non-existent (or I really couldn&#39;t find it :D) - there is a &lt;a href=&quot;/link/9a1302979f4e4e118306b1054f51bc3c.aspx&quot;&gt;really old instruction&lt;/a&gt; that speaks about using Microsoft/Windows Live Writer, which has been discontinued for quite some time.&lt;/p&gt;
&lt;h2&gt;Background&lt;/h2&gt;
&lt;p&gt;I&#39;ve been using the free &lt;a href=&quot;https://wordpress.com&quot;&gt;wordpress.com&lt;/a&gt; blogging platform for quite some time. For free you get the wordpress platform and a custom cname host (with https) like &lt;a href=&quot;https://swapcode.wordpress.com/&quot;&gt;swapcode.wordpress.com&lt;/a&gt; that I have been using for my blog so far. In that service you get a limited amount of free templates from which to choose how your site looks, and you can change the template whenever you choose. The platform includes commenting and spam filtering, but users naturally need to have a wordpress account to be able to comment - someone might have a good comment but choose not to register with wordpress just to be able to comment. The built-in comment filtering is really needed - usually there is one real comment and 40 spam comments, so you also need to manage the comments and allow the real comments to be visible on the site.&lt;/p&gt;
&lt;p&gt;So I&#39;ve been pretty happy with the wordpress platform, but never really been happy with the templates, because most of them seem to have something that annoys me or that I would like to work a bit differently. Yes, I could pay for the service, get my own fancy domain and have my own templates, but really, I don&#39;t have time for that. I need something out of the box that I can live with.&lt;/p&gt;
&lt;p&gt;So that is the &#39;why&#39;, why I decided that I would give the Episerver World blog platform a try - most likely it has everything I need and as a bonus there is the rating system, getting anonymous feedback on the post quality/usefulness.&lt;/p&gt;
&lt;h2&gt;Aggregating blog feed&lt;/h2&gt;
&lt;p&gt;If you aggregate your blog feed to Episerver World, you simply navigate to your profile settings page:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;go to &#39;Account Settings&#39; tab&lt;/li&gt;
&lt;li&gt;add your Atom/RSS feed url like:&amp;nbsp;&lt;a href=&quot;https://swapcode.wordpress.com/feed/&quot;&gt;https://swapcode.wordpress.com/feed/&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;optionally add some tags to filter what blog posts are aggregated to Episerver World&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&amp;nbsp;and click &#39;Save&#39; button&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;(disclaimer! I can&#39;t remember anymore if you initially have to use the &#39;&lt;a href=&quot;/link/1883e72de62e4947b1e9f57a841b088c.aspx&quot;&gt;Create Blog&lt;/a&gt;&#39; page to get your feed added)&lt;/p&gt;
&lt;h2&gt;Blogging using the Episerver World platform&lt;/h2&gt;
&lt;p&gt;To get started, first you have to have an account at Episerver World (same applies to aggregated feed) and if you are working with Episerver you really already have the account but if not, why not register an account even if you are not starting to blog right away ;)&lt;/p&gt;
&lt;p&gt;From the Episerver World Blogs page there is a link at the top &#39;&lt;a href=&quot;/link/1883e72de62e4947b1e9f57a841b088c.aspx&quot;&gt;Start blogging - create your own blog&lt;/a&gt;&#39; so click that link to create Your new shiny Episerver World Blog.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/fc5b31520ed947ccb0ac126276cf00d2.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Read, understand, check &#39;I Accept&#39; checkbox and click Submit button. Easy.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note!&lt;/strong&gt; &lt;span style=&quot;text-decoration:&amp;#32;underline;&quot;&gt;If you have already created an aggregated blog&lt;/span&gt; then you will not need to do this step as Your blog already exists. You can directly navigate to the &#39;&lt;a href=&quot;/link/5e37b120db614d1b8287a3b664f1a9ac.aspx&quot;&gt;Manage your blog posts on Episerver World&lt;/a&gt;&#39; page.&lt;/p&gt;
&lt;h2&gt;Your first blog post&lt;/h2&gt;
&lt;p&gt;Ready, set, go! Writing your first blog post. You create and manage blog posts from the &quot;&lt;a href=&quot;/link/5e37b120db614d1b8287a3b664f1a9ac.aspx&quot;&gt;Manage your blog posts on Episerver World&lt;/a&gt;&quot; page; you can find the link on the &lt;a href=&quot;/link/bccc323fc15e45259d7a934d1efb5d52.aspx&quot;&gt;Blogs&lt;/a&gt; front page. &lt;strong&gt;@Episerver world team&lt;/strong&gt;, if the user is logged in and has blogs enabled, could the Episerver World &#39;Blogs&#39; main navigation have a child &#39;Manage blogs&#39; entry?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/8834095c4ec6477497f00dfa9fbe06cd.aspx&quot; width=&quot;980&quot; alt=&quot;Manage&amp;#32;Your&amp;#32;blog&amp;#32;posts&amp;#32;or&amp;#32;create&amp;#32;a&amp;#32;new&quot; height=&quot;592&quot; /&gt;&lt;/p&gt;
&lt;p&gt;(&lt;strong&gt;@Episerver World Team&lt;/strong&gt;, can we have a visible image caption? Now we can only enter a description which is not visible, and TinyMCE has the image_caption property that could be used here)&lt;/p&gt;
&lt;p&gt;So to create your first blog post, enter the title for the blog post and click the &#39;New Post&#39; button. Then you get to the new post view, which looks almost the same as posting a new thread or a reply to the Episerver World forums, but there is one additional button, &#39;Save Draft&#39;, so when blogging you can save a draft and continue the post later.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/b28e256d93e84fc4b3af0d5ca6c09edc.aspx&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;Save draft and continue your writing&lt;/h2&gt;
&lt;p&gt;So here is a demo about writing this blog post and then using the &#39;Save Draft&#39; and continue work.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/0822a0824e624ba8a5206274cc2cbee2.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;So you continue the work on a blog post from the &quot;&lt;a href=&quot;/link/5e37b120db614d1b8287a3b664f1a9ac.aspx&quot;&gt;Manage your blog posts on Episerver World&lt;/a&gt;&quot; page, and &lt;span style=&quot;text-decoration:&amp;#32;underline;&quot;&gt;you have two options&lt;/span&gt; now: &lt;strong&gt;edit&lt;/strong&gt; or &lt;strong&gt;preview&lt;/strong&gt;. Clicking &#39;Edit&#39; takes you to a page where you can edit your post, and &#39;Preview&#39; opens a new window/tab in your browser where you can preview the page (so this is where the &lt;strong&gt;preview functionality&lt;/strong&gt; has been hidden - if you are writing a new post and want to preview it, you first must save it as a draft to be able to preview it).&lt;/p&gt;
&lt;p&gt;As a side note, during the save draft operation I got the &#39;I&#39;m not a robot&#39; hCaptcha verification screen from Cloudflare - don&#39;t panic! Just follow instructions on the screen :D&lt;/p&gt;
&lt;h2&gt;What about then the editing experience&lt;/h2&gt;
&lt;p&gt;It is the TinyMCE editor that we are used to in projects - only this time we are actually using it to create content ;-)&lt;/p&gt;
&lt;p&gt;We can easily drag&#39;n&#39;drop images or copy-paste images from the clipboard into the editor.&lt;/p&gt;
&lt;p&gt;There are the usual formatting options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;H1-H6, paragraph and preformatted&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;bold&lt;/strong&gt;, &lt;span style=&quot;text-decoration:&amp;#32;underline;&quot;&gt;underline&lt;/span&gt;, &lt;em&gt;italic&lt;/em&gt;, &lt;span style=&quot;text-decoration:&amp;#32;line-through;&quot;&gt;etc&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color:&amp;#32;#ff6600;&quot;&gt;text color&lt;/span&gt; and &lt;span style=&quot;background-color:&amp;#32;#ffcc00;&quot;&gt;text background&lt;/span&gt; color&lt;/li&gt;
&lt;li&gt;we can insert sample code&lt;/li&gt;
&lt;li&gt;lists
&lt;ul&gt;
&lt;li&gt;with indenting&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;insert &lt;a&gt;links&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;we can add tables (you shouldn&#39;t :D)&lt;/li&gt;
&lt;li&gt;we can also preview from editor: View menu -&amp;gt; Preview
&lt;ul&gt;
&lt;li&gt;opens a smallish dialog where you can see your content from the editor, but without the page layout&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In a few words - enough for me (almost).&lt;/p&gt;
&lt;h2&gt;So what do I miss or dislike / limitations&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;I need to save a draft before I can use preview (&lt;em&gt;the full page preview&lt;/em&gt;)
&lt;ul&gt;
&lt;li&gt;hCaptcha verification step (&lt;em&gt;Not always shown! but often and gets annoying when you fix typos/re-organize the content and want to review the full page&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;click preview in manage blog posts view to be able to see the post preview&lt;/li&gt;
&lt;li&gt;then to continue work I need to click the edit&lt;/li&gt;
&lt;li&gt;more steps than in normal flow when using Episerver CMS
&lt;ul&gt;
&lt;li&gt;but understandable if you think how you would implement this in a project without allowing access to the Episerver edit view - you would end up in a similar situation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Cannot browse or re-use images that you have previously used
&lt;ul&gt;
&lt;li&gt;once again implementation details
&lt;ul&gt;
&lt;li&gt;we don&#39;t have access to the edit view, and therefore no way to use media assets&lt;/li&gt;
&lt;li&gt;and currently blog post images are stored in the content assets folder of the blog page, not in something like &#39;siteassets/blogger-name&#39;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;If I remove an image and afterwards decide that I need it, I need to add it again
&lt;ul&gt;
&lt;li&gt;No idea whether the image is kept in the backend or not&lt;/li&gt;
&lt;li&gt;Yes, this is related to the browsing and re-using images&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Cannot use GitHub gist
&lt;ul&gt;
&lt;li&gt;well, the embedded gist cannot be used, as it is JavaScript like this:&amp;nbsp;&amp;lt;script src=&quot;https://gist.github.com/alasvant/8baf56e26530a727a895ffcb69ea03ab.js&quot;&amp;gt;&amp;lt;/script&amp;gt;
&lt;ul&gt;
&lt;li&gt;which gets HTML encoded to the page&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;can use a link like this&amp;nbsp;&lt;a href=&quot;https://gist.github.com/alasvant/8baf56e26530a727a895ffcb69ea03ab&quot;&gt;https://gist.github.com/alasvant/8baf56e26530a727a895ffcb69ea03ab&lt;/a&gt;&amp;nbsp;to point to the gist, but the idea is to have the gist rendered on the page without copy-pasting it into the insert code sample dialog
&lt;ul&gt;
&lt;li&gt;yes, the code sample is the workaround in this case&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;we can use &#39;View -&amp;gt; Source code&#39; in the editor, but we cannot add an iframe (it gets stripped =&amp;gt; not allowed)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;the menu item &#39;Insert&#39; -&amp;gt; &#39;Template&#39; is there, but there are no ready-made templates. How is the user supposed to create their own templates and re-use them in different posts, when templates should be defined in the plug-in configuration, which naturally comes from the server side?
&lt;ul&gt;
&lt;li&gt;so why is the &#39;Template&#39; TinyMCE plug-in enabled if we can&#39;t use it?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;SEO
&lt;ul&gt;
&lt;li&gt;There is no meta description
&lt;ul&gt;
&lt;li&gt;@Episerver World Team, there really should be a text field / area for the meta description&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;There is no author
&lt;ul&gt;
&lt;li&gt;@Episerver World Team, this could automatically be the blog name, which usually is the name of the author?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;og:image is Episerver logo
&lt;ul&gt;
&lt;li&gt;well good that we at least have something&lt;/li&gt;
&lt;li&gt;@Episerver World Team, could we somehow get an option for the blog post author to upload an image to be used in the post
&lt;ul&gt;
&lt;li&gt;because now all the posts have the small Episerver logo, which really is too small when someone wants to share the blog post on social media&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;there are other things that SEO tools report, like a missing canonical URL and no language defined, etc., but those are minor things ;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
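&lt;p&gt;To make the SEO wishlist concrete, the tags asked for above would end up in the page head roughly like this (a sketch with made-up example values, not actual Episerver World output):&lt;/p&gt;
&lt;pre&gt;&amp;lt;meta name=&quot;description&quot; content=&quot;Short summary of the blog post for search results&quot; /&amp;gt;
&amp;lt;meta name=&quot;author&quot; content=&quot;Antti Alasvuo&quot; /&amp;gt;
&amp;lt;meta property=&quot;og:image&quot; content=&quot;https://world.episerver.com/example/post-share-image.png&quot; /&amp;gt;&lt;/pre&gt;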
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;So there are some limitations and extra-click workflows, but overall this is a really easy way to start blogging on Episerver World and sharing with the Episerver Community.&lt;/p&gt;</id><updated>2020-09-20T09:25:17.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Swapcode.Episerver.AuditLog is now available in Episerver NuGet feed</title><link href="http://swapcode.wordpress.com/?p=437" /><id>What is it and what it does? Keep reading to find out. In my previous blog post I wrote about how to write the content security changes to Episerver &amp;#8216;Change Log&amp;#8217; (activity log) and then administrators can view and filter those changes in the out-of-the-box view. The actual code required for this is really simple [&amp;#8230;]</id><updated>2020-09-05T10:27:01.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Episerver activity log with custom content security activity</title><link href="http://swapcode.wordpress.com/?p=428" /><id>In this post I will show you how you can create a custom content security activity and store that to Episerver activity log (change log) when access rights are changed in Episerver. This blog post was inspired by the fact that there is no out of the box audit log about content access rights changes [&amp;#8230;]</id><updated>2020-07-06T16:58:17.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Episerver Content Delivery API new feature and options</title><link href="http://swapcode.wordpress.com/?p=412" /><id>If you missed the Episerver update 296 release notes or the Episerver Features February 2020 announcement &amp;#8211; then this post is for you, because you have missed some important information about Episerver Content Delivery API. So keep on reading.. 
Episerver Content Delivery API new feature: support for local blocks Before version 2.9.0 of Content Delivery [&amp;#8230;]</id><updated>2020-03-22T13:05:52.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Episerver developer meetup Helsinki March 2020</title><link href="http://swapcode.wordpress.com/?p=393" /><id>We had an awesome Episerver developer meetup in Helsinki. This time the years first meetup was kindly hosted by Solita Oy. Topics were this time: Episerver ASP.NET Core, customizing the Episerver Content Delivery API and a customer case utilizing Episerver CMS and real time public transport tracking. Of course there was also developer friendly menu: [&amp;#8230;]</id><updated>2020-03-05T19:35:38.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>Broken Web API attribute routing</title><link href="http://swapcode.wordpress.com/?p=384" /><id>Have you encountered the situation with your Episerver solution that your attribute based Web API starts throwing &amp;#8220;The object has not yet been initialized. Ensure that HttpConfiguration.EnsureInitialized() is called in the application&amp;#8217;s startup code after all other initialization code.&amp;#8221;? If Yes, then this post is for you. 
The issue You have an Episerver solution and [&amp;#8230;]</id><updated>2020-01-12T13:07:48.0000000Z</updated><summary type="html">Blog post</summary></entry> <entry><title>TinyMCE entity encoding</title><link href="http://swapcode.wordpress.com/?p=374" /><id>Just a quick post about few TinyMCE settings that you might need when you use Episerver content created with &amp;#8220;XhtmlString&amp;#8221; property in some other context than web browser or you might be doing some manipulation to the content using XML structure. Default entity encoding is &amp;#8216;named&amp;#8217; The default setting in TinyMCE entity_encoding is &amp;#8216;named&amp;#8217; which [&amp;#8230;]</id><updated>2019-12-15T19:00:25.0000000Z</updated><summary type="html">Blog post</summary></entry></feed>