<?xml version="1.0" encoding="utf-8"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><language>en</language><title>Blog posts by Elias Lundmark</title> <link>https://world.optimizely.com/blogs/elias-lundmark/</link><description></description><ttl>60</ttl><generator>Optimizely World</generator><item> <title>Keeping local environments in sync with your Cloud environments</title>            <link>https://world.optimizely.com/blogs/elias-lundmark/dates/2024/3/keeping-local-environments-in-sync-with-your-cloud-environments/</link>            <description>&lt;p&gt;We recently announced that we are &lt;a href=&quot;/link/2502974a919b43be954568f15310b0f0.aspx&quot;&gt;improving the scalability of SQL databases in DXP Cloud Services&lt;/a&gt;. This new architecture also enhances our overall security for SQL databases, as we are hardening technical controls to maintain the confidentiality and integrity of our customers&amp;rsquo; data. This change had an unintended consequence though &amp;ndash; it prevents developers from connecting local development environments directly to SQL databases in DXP Cloud Services. We strongly advise against this practice anyway: while the ease of use and flexibility are great, manually managing and storing connection strings and credentials for service users greatly increases the risk of these credentials falling into the wrong hands, allowing potential attackers to access or modify data.&lt;/p&gt;
&lt;p&gt;To avoid these risks, our new architecture disallows direct connections from third-party sources to SQL Servers running in DXP Cloud Services. Instead, you should use the paasportal or the API to export your databases and content to use in your local development environments, which are more secure and reliable methods.&lt;/p&gt;
&lt;h2&gt;How to export content&lt;/h2&gt;
&lt;p&gt;Via the paasportal&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;a href=&quot;https://paasportal.episerver.net&quot;&gt;https://paasportal.episerver.net&lt;/a&gt; and select the project you wish to export a database from&lt;/li&gt;
&lt;li&gt;Navigate to the Troubleshoot tab&lt;/li&gt;
&lt;li&gt;In the &amp;lsquo;Export Database&amp;rsquo; section, select the environment you wish to export the database from, and how long the paasportal should retain this copy.&lt;/li&gt;
&lt;li&gt;Once the export is done, click the database file to download it as .bacpac. These files can then be used to import your database to a local SQL server, or an Azure SQL Server.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Via API with PowerShell&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;a href=&quot;https://paasportal.episerver.net&quot;&gt;https://paasportal.episerver.net&lt;/a&gt; and generate credentials as described here &lt;a href=&quot;https://docs.developers.optimizely.com/digital-experience-platform/docs/authentication&quot;&gt;https://docs.developers.optimizely.com/digital-experience-platform/docs/authentication&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Authenticate with Connect-EpiCloud, &lt;em&gt;Connect-EpiCloud -ClientKey &amp;lt;ClientKey&amp;gt; -ClientSecret &amp;lt;ClientSecret&amp;gt; -ProjectId &amp;lt;ProjectId&amp;gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Start a database export with Start-EpiDatabaseExport, for example &lt;em&gt;Start-EpiDatabaseExport -Environment Integration -DatabaseName epicms -Wait&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Fetch the download link for the .bacpac with Get-EpiDatabaseExport&lt;/li&gt;
&lt;/ol&gt;
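&lt;p&gt;Putting the steps above together, a full export session can be sketched in PowerShell like this (the placeholder credentials and the &lt;em&gt;id&lt;/em&gt; property on the returned object are assumptions):&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;# One-time setup: install the EpiCloud module from the PowerShell Gallery
Install-Module EpiCloud -Scope CurrentUser

# Authenticate with the credentials generated in the paasportal
Connect-EpiCloud -ClientKey &quot;&amp;lt;ClientKey&amp;gt;&quot; -ClientSecret &quot;&amp;lt;ClientSecret&amp;gt;&quot; -ProjectId &quot;&amp;lt;ProjectId&amp;gt;&quot;

# Export the CMS database and wait for the export to finish
$export = Start-EpiDatabaseExport -Environment Integration -DatabaseName epicms -Wait

# Fetch the download link for the resulting .bacpac
Get-EpiDatabaseExport -Environment Integration -DatabaseName epicms -Id $export.id&lt;/code&gt;&lt;/pre&gt;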
&lt;p&gt;Via the API you can also download BLOBs from the storage account, where Get-EpiStorageContainer allows you to list all storage containers and Get-EpiStorageContainerSasLink creates a SAS URI that can be used to download BLOBs. For example,&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;Get-EpiStorageContainerSasLink -ProjectId &quot;2372b396-6fd2-40ca-a955-57871fc497c9&quot; `
  -Environment &quot;Integration&quot; `
  -StorageContainer &quot;mysitemedia&quot; `
  -RetentionHours 2&lt;/code&gt;&lt;/pre&gt;</description>            <guid>https://world.optimizely.com/blogs/elias-lundmark/dates/2024/3/keeping-local-environments-in-sync-with-your-cloud-environments/</guid>            <pubDate>Thu, 21 Mar 2024 12:08:12 GMT</pubDate>           <category>Blog post</category></item><item> <title>Import Blobs and Databases to Integration Environments</title>            <link>https://world.optimizely.com/blogs/elias-lundmark/dates/2023/11/import-blobs-and-databases-to-integration-environments/</link>            <description>&lt;p&gt;In this blog, we are going to explore some new extensions to the &lt;a href=&quot;https://docs.developers.optimizely.com/digital-experience-platform/docs/deployment-api&quot;&gt;Deployment API&lt;/a&gt; in DXP Cloud Services, specifically the ability to import databases and blobs via the API. Some caveats to consider before we jump into the details,&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Blob and database imports are limited to integration and ADE environments&lt;/li&gt;
&lt;li&gt;Database imports are only available when using &lt;a href=&quot;https://docs.developers.optimizely.com/digital-experience-platform/docs/deploy-using-powershell&quot;&gt;-DirectDeploy&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Uploading and Deploying Databases&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can now supply a bacpac file to Add-EpiDeploymentPackage&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;$saslink = Get-EpiDeploymentPackageLocation
Add-EpiDeploymentPackage -SasUrl $saslink -Path &quot;C:\MyDatabaseFiles\environmentname.cms.sqldb.20231106.bacpac&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Bacpac is the same format that is used when databases are &lt;a href=&quot;https://docs.developers.optimizely.com/digital-experience-platform/docs/export-database&quot;&gt;exported&lt;/a&gt; along with the same naming convention, so that any database that is exported can easily be imported again.&lt;/p&gt;
&lt;p&gt;Once the upload is done, we can simply pass it to Start-EpiDeployment&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;Start-EpiDeployment -DeploymentPackage (&quot;environmentname.cms.sqldb.20231106.bacpac&quot;,&quot;cms.app.1.0.0.nupkg&quot;) -TargetEnvironment &quot;Integration&quot; -DirectDeploy&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This example deploys a nupkg at the same time, but that can be omitted to just import a database. E.g., &lt;em&gt;-DeploymentPackage &quot;environmentname.cms.sqldb.20231106.bacpac&quot;&lt;/em&gt;. As usual, you will be able to see the status of the deployment in the management portal, or via Get-EpiDeployment.&lt;/p&gt;
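&lt;p&gt;Polling that status from the same session can be sketched along these lines (the &lt;em&gt;id&lt;/em&gt; and &lt;em&gt;status&lt;/em&gt; property names on the returned objects are assumptions):&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;# Start the import and capture the deployment object
$deployment = Start-EpiDeployment -DeploymentPackage &quot;environmentname.cms.sqldb.20231106.bacpac&quot; -TargetEnvironment &quot;Integration&quot; -DirectDeploy

# Check on the deployment until it reports success
Get-EpiDeployment -Id $deployment.id | Select-Object status&lt;/code&gt;&lt;/pre&gt;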
&lt;p&gt;&lt;strong&gt;Creating a writeable SAS URI to upload blobs&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For quite some time now, we have had the possibility to create &lt;a href=&quot;https://docs.developers.optimizely.com/digital-experience-platform/docs/storage-containers&quot;&gt;readable SAS URIs via the Deployment API&lt;/a&gt;. E.g.,&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;Get-EpiStorageContainerSasLink -Environment &quot;Integration&quot; -StorageContainer &quot;mysitemedia&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can now add a &lt;strong&gt;-Writable&lt;/strong&gt; flag to this command, which enables you to upload blobs to the container as well. You can use this SAS URI to write via HTTPS, or use it with &lt;a href=&quot;https://azure.microsoft.com/en-us/products/storage/storage-explorer&quot;&gt;Azure Storage Explorer&lt;/a&gt;. If you are using Azure Storage Explorer, select &amp;ldquo;Connect to a Blob container or directory&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/176cc196b8874be0b427909033b0bda0.aspx&quot; width=&quot;395&quot; height=&quot;259&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Then select Shared access signature URL (SAS), and paste the writeable SAS URL&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;/link/7fc94f9b5cd94aa2a83f8e7575dffec1.aspx&quot; width=&quot;416&quot; height=&quot;204&quot; /&gt;&lt;/p&gt;
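&lt;p&gt;If you prefer plain HTTPS over Azure Storage Explorer, an upload can be sketched with the Azure Blob REST API, where the blob name is inserted into the path of the SAS URI and the &lt;em&gt;x-ms-blob-type&lt;/em&gt; header marks it as a block blob. The &lt;em&gt;sasLink&lt;/em&gt; property name and the file paths below are assumptions:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;# Get a writable SAS link for the container
$link = Get-EpiStorageContainerSasLink -Environment &quot;Integration&quot; -StorageContainer &quot;mysitemedia&quot; -Writable

# Insert the blob name between the container path and the SAS query string
$uri = [System.Uri]$link.sasLink
$blobUrl = &quot;$($uri.Scheme)://$($uri.Host)$($uri.AbsolutePath)/myimage.jpg$($uri.Query)&quot;

# PUT the file as a block blob
Invoke-WebRequest -Method Put -Uri $blobUrl -InFile &quot;C:\temp\myimage.jpg&quot; -Headers @{ &quot;x-ms-blob-type&quot; = &quot;BlockBlob&quot; }&lt;/code&gt;&lt;/pre&gt;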
&lt;p&gt;Our hope is that this will make it easier to deploy an existing site to our Cloud Services, and allow you to export content from any environment and easily import it again.&amp;nbsp;&lt;/p&gt;
</description>            <guid>https://world.optimizely.com/blogs/elias-lundmark/dates/2023/11/import-blobs-and-databases-to-integration-environments/</guid>            <pubDate>Mon, 04 Dec 2023 12:05:02 GMT</pubDate>           <category>Blog post</category></item><item> <title>Enhancing Database Scalability in DXP Cloud Services</title>            <link>https://world.optimizely.com/blogs/elias-lundmark/dates/2023/11/enhancing-database-scalability-in-dxp-cloud-services/</link>            <description>&lt;p&gt;&lt;span&gt;In today&#39;s dynamic digital environment, efficient database scalability is more critical than ever. Implementing robust autoscaling for SQL is integral to meeting this need, ensuring our systems remain flexible and responsive. In this blog post, we will delve into how we are enhancing our database infrastructure in DXP Cloud Services to improve scalability and performance.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;Autoscaling solutions for SQL databases offer a sophisticated method for handling multiple databases that experience diverse workloads and resource requirements. Instead of provisioning each database with a rigid set of resources, this technology enables a more fluid resource allocation strategy. This adaptability not only mitigates the risks associated with under-provisioning but also maximizes resource utilization and optimizes performance across the board.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;The advantages of our new database infrastructure enhancements include:&lt;/span&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;span&gt;Management efficiency &amp;ndash; Our solution simplifies the operation of hundreds, or even thousands, of databases. It ensures all databases receive the necessary resources, driving optimal performance across various workloads.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Auto-scaling &amp;ndash; The system can intuitively allocate more resources during high-demand periods, maintaining high efficiency and responsiveness under fluctuating loads.&lt;/span&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;In light of these benefits, we are planning a gradual transition to this enhanced database infrastructure over the coming weeks and months. Our goal is to refine our operations and offer superior adaptability to diverse workloads and resource needs. We will initiate this transition with non-production databases, such as those in ADE, integration, and pre-production environments, starting in the SE Central region within the next few weeks.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;As businesses and technologies evolve, the need for agility and efficiency only grows. Our upcoming database infrastructure enhancements equip us with the necessary tools to operate DXP Cloud Services at an elevated scale, improving our capacity to meet varying demands. This progress excites us, and we are eager to extend these benefits across our entire platform in 2024.&lt;/span&gt;&lt;/p&gt;</description>            <guid>https://world.optimizely.com/blogs/elias-lundmark/dates/2023/11/enhancing-database-scalability-in-dxp-cloud-services/</guid>            <pubDate>Thu, 02 Nov 2023 13:24:20 GMT</pubDate>           <category>Blog post</category></item><item> <title>New Features in DXP Cloud Services</title>            <link>https://world.optimizely.com/blogs/elias-lundmark/dates/2023/4/new-features-in-dxp-cloud-services/</link>            <description>&lt;p&gt;In this article, we&amp;rsquo;re going to dive into some new features we have released for the management portal and DXP Cloud Services recently.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;span class=&quot;notion-enable-hover&quot;&gt;Preview CMS 12 / Commerce 14 sites during upgrade&lt;/span&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Soon after we released the migration tool to allow users to upgrade to CMS 12 and Commerce 14, we realized that many users were adding temporary hostnames to the new .NET Core projects in order to test multi-site setups before going live and make sure everything works as expected with discrete domain names. While this works in practice, it is not an ideal experience as adding temporary domain names can be cumbersome, especially if you have a lot of domain names.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;To improve this, we have now added a feature where you can preview your new CMS 12 / Commerce 14 sites using the domain names of the source project. This works similarly to&amp;nbsp;&lt;em&gt;&lt;a href=&quot;/link/6ba35c7a2dde44daacc95c4fd8d42fe8.aspx&quot;&gt;routing rules&lt;/a&gt;&amp;nbsp;&lt;/em&gt;where you can specify a cookie or query string to reach a deployment slot. For example,&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;www.domain.com - leads to the source CMS 11 site&lt;/li&gt;
&lt;li&gt;www.domain.com/?x-ms-routing-name=preview&amp;amp;x-opti-target=[environmentname] - leads to the target CMS 12 site&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Adding &lt;em&gt;x-ms-routing-name&lt;/em&gt; with the value &lt;em&gt;preview&lt;/em&gt; and &lt;em&gt;x-opti-target&lt;/em&gt; with the value of the target environment&amp;rsquo;s name (e.g., elias01mstr123prod) will route you to your new CMS 12 site. If you wish to navigate back to the source site, simply remove the cookies named &lt;em&gt;x-ms-routing-name&lt;/em&gt; and &lt;em&gt;x-opti-target&lt;/em&gt;.&lt;/p&gt;
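&lt;p&gt;If you want to script a quick check against the preview slot, the same routing can be exercised from the command line; the hostname and environment name below are placeholders:&lt;/p&gt;
&lt;pre class=&quot;language-markup&quot;&gt;&lt;code&gt;# First request sets the routing cookies via the query string
curl -c cookies.txt &quot;https://www.domain.com/?x-ms-routing-name=preview&amp;amp;x-opti-target=elias01mstr123prod&quot;

# Subsequent requests reuse the cookies and continue to hit the CMS 12 site
curl -b cookies.txt &quot;https://www.domain.com/&quot;&lt;/code&gt;&lt;/pre&gt;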
&lt;p&gt;A bit more intuitively, you can also preview sites directly in the paasportal during the&amp;nbsp;&lt;em&gt;prepare go live&lt;/em&gt; phase.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/675d76ba40d5419482baeff2b86f45a1.aspx&quot; width=&quot;546&quot; height=&quot;340&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/2de0858fa89b436db9f42f6ea85961d4.aspx&quot; width=&quot;545&quot; height=&quot;398&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Read more about it &lt;a href=&quot;https://docs.developers.optimizely.com/digital-experience-platform/v1.3.0-DXP-for-CMS11-COM13/docs/migration-to-cms-12-commerce-14&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;span class=&quot;notion-enable-hover&quot;&gt;Upgrade one site at a time to CMS 12 and Commerce 14&lt;/span&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In its first iteration, the project migration feature would allow you to go live on CMS 12 with all of your domain names in one action. We have now introduced more granularity to this feature by allowing you to move select domain names to CMS 12. During the go-live preparation, you can now select a domain name that you wish to move to your new CMS 12 project. The keen-eyed amongst you may have noticed the &amp;ldquo;Ready for Go Live&amp;rdquo; button in the previous screenshot.&lt;/p&gt;
&lt;p&gt;Something to be mindful of when using this feature is that you are no longer able to copy content from the source project to the target project after going live with your first domain name. This is to protect against data loss. If there is content in the source environment that you would like to carry over after your first go live, consider using the built-in &lt;a href=&quot;https://support.optimizely.com/hc/en-us/articles/4413192300301-Export-and-import-data&quot;&gt;import and export feature&lt;/a&gt; in the CMS.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/56fd6221d7f3429f9032983fbb9fffd2.aspx&quot; width=&quot;612&quot; height=&quot;402&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fetch outbound IPs&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You are now able to fetch outbound IPs from all your environments under the Troubleshoot tab in the management portal.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/a8e13fdbbc35463f8f745ecc04b92d5b.aspx&quot; width=&quot;844&quot; height=&quot;293&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Manage App Settings&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Managing secrets, such as API keys or connection strings, needs to be done securely, in order to minimize the risk of data breaches and unauthorized access. We&amp;rsquo;re now introducing a new feature in the management portal that allows you to do this. Under the &amp;ldquo;App Settings&amp;rdquo; tab you are able to add app settings to all of your environments.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/3b90cb0fc8f445dd8dd4eea800215cae.aspx&quot; width=&quot;862&quot; height=&quot;179&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Behind the scenes, this feature is leveraging Azure Key Vault to keep all environment variables secured. We couple this with Azure&amp;rsquo;s Managed Identity technology to ensure that only your application is able to read from the Azure Key Vault. Note that this feature is only available for environments that have Azure Key Vault, and we are in the process of rolling it out to existing environments.&lt;/p&gt;
&lt;p&gt;For more information, see our docs &lt;a href=&quot;https://docs.developers.optimizely.com/digital-experience-platform/docs/manage-app-settings&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;</description>            <guid>https://world.optimizely.com/blogs/elias-lundmark/dates/2023/4/new-features-in-dxp-cloud-services/</guid>            <pubDate>Thu, 13 Apr 2023 13:05:46 GMT</pubDate>           <category>Blog post</category></item><item> <title>Modernizing Infrastructure for App Services in DXP Cloud Services</title>            <link>https://world.optimizely.com/blogs/elias-lundmark/dates/2023/3/modernizing-infrastructure-for-app-services-in-dxp-cloud-services/</link>            <description>&lt;p&gt;Microsoft continuously updates hardware in Azure to keep up with new developments and demand. App Service Plans are no exception: the latest and most advanced App Service Plan offering is called Premium V3 (Pv3), which offers many benefits and improved features over its predecessor, Premium V2 (Pv2).&lt;/p&gt;
&lt;p&gt;One of the major benefits of moving to the Pv3 SKU is greatly increased performance and scalability. Pv3 offers improved CPU and memory utilization through the use of modern hardware, allowing for faster and more efficient processing of requests. Additionally, the increased density of virtual machines on the Pv3 SKU enables more capacity per instance &amp;ndash; offering options of 2, 4, and 8 vCPUs per instance with up to 32 GB of memory, doubling the available capacity per instance compared to the previous generation.&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;/link/269e7096bea646afb53968604233650b.aspx&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Pv3 also offers improved availability and reliability with the introduction of a zone-redundant architecture. This means that applications hosted on the Pv3 SKU are even more fault-tolerant than ever before, ensuring that applications remain available in the event of a failure.&lt;/p&gt;
&lt;p&gt;Pv3, and the underlying Dv4 platform, has been rolling out for new App Service Plans in several regions since early 2021 but is now finally becoming available for existing App Service Plans. We at Optimizely of course plan to take full advantage of the benefits of the Pv3 SKU to improve the scalability, performance, security, and availability of our platform.&lt;/p&gt;
&lt;p&gt;DXP environments that have been provisioned during the latter half of 2021 or later, or have migrated to CMS 12 / Commerce 14, are most likely already running on Pv3 as it has been our default SKU for provisioning since it became available (subject to availability in different Azure regions). Over the coming weeks and months, we will also be migrating App Service Plans to Pv3 as it becomes available.&lt;/p&gt;
&lt;p&gt;The availability and operation of our customers&amp;rsquo; applications is of great importance to us, and we have developed a migration that is non-disruptive and ensures no downtime - the only thing you may notice is a change of outbound IP addresses. You will receive a notification from your Customer Success Manager if your environments are eligible for the upgrade to Pv3. If you do rely on allowlisting outbound IP addresses, we suggest you navigate to the &lt;a href=&quot;https://paasportal.episerver.net/&quot;&gt;management portal&lt;/a&gt; and get an updated list of addresses. If your environment is eligible for an upgrade to Pv3, this list will include the IP addresses post-upgrade as it contains all addresses for all possible app service SKUs.&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;/link/c647170401d54ba788c686998e0cc483.aspx&quot; /&gt;&lt;br /&gt;&lt;em&gt;Outbound IP addresses are available in the &lt;a href=&quot;https://paasportal.episerver.net/&quot;&gt;management portal&lt;/a&gt; under the Troubleshoot tab&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;We are excited to bring Microsoft&amp;rsquo;s latest App Service platform to our customers during 2023. Make sure to engage with your Customer Success Manager if you have any questions and follow our &lt;a href=&quot;https://status.episerver.com/&quot;&gt;status page&lt;/a&gt; for regular updates about maintenance.&lt;/p&gt;</description>            <guid>https://world.optimizely.com/blogs/elias-lundmark/dates/2023/3/modernizing-infrastructure-for-app-services-in-dxp-cloud-services/</guid>            <pubDate>Thu, 02 Mar 2023 11:22:11 GMT</pubDate>           <category>Blog post</category></item><item> <title>Developer Preview: Migrations to .NET 5 on DXP Cloud Services</title>            <link>https://world.optimizely.com/blogs/elias-lundmark/dates/2022/2/developer-preview-migrations-to--net-5-on-dxp-cloud-services/</link>            <description>&lt;p&gt;&lt;span&gt;Major upgrades can be daunting and carry risk in many areas: data corruption, broken integrations, and required downtime. On the other side of the coin, lagging in adopting the latest upgrades carries the risk of poor security posture, obsolescence, incompatibilities and potential software rot. Because of this, we set ourselves a goal when we planned the release of Content Cloud 12 and Commerce Cloud 14 for customers running in our DXP Cloud Services &amp;ndash; make the transition as smooth as possible and minimize the risk of the upgrade. This blogpost will give you an insight into our thought process behind our new development and give you a preview of what&amp;rsquo;s to come.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;The first step of this is upgrading the software itself. For this, we created the &lt;a href=&quot;https://github.com/episerver/upgrade-assistant-extensions/&quot;&gt;.NET Upgrade Assistant extension&lt;/a&gt; specific to Optimizely scenarios that allows users to go through a guided experience to reduce time and difficulty in the task of modernizing codebases. But how do you actually go about deploying this to production in a manner that carries the least risk?&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;To tackle this in our DXP Cloud Services, we have been working on a similar guided experience to let existing Content Cloud 11 and Commerce Cloud 13 users deploy sites in parallel to existing environments.&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span&gt;Starting a migration&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span&gt;In the DXP Management Portal, you&amp;rsquo;ll be greeted with a new tab called Project Migration. This will act as your main hub for administrating the migration process. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;img src=&quot;/link/ccf09c7446a94475bcbb4daaf65ff87b.aspx&quot; /&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;Kicking off the migration will provision a parallel &lt;a href=&quot;/link/96e1cffcc4bd4c45a352e821faaf06d9.aspx&quot;&gt;project&lt;/a&gt; that is completely decoupled from your current environments &amp;ndash; complete with integration, preproduction and production environments (including Search &amp;amp; Navigation indexes). This allows you to start deploying your new Content Cloud 12 and Commerce Cloud 14 solutions without affecting the status quo.&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span&gt;Migrating environments&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span&gt;Once your new project has been provisioned, you will be able to start deploying to it using the DXP Management portal UI or the &lt;a href=&quot;/link/cecd04dddeea47f89a14d7f163913728.aspx&quot;&gt;deployment API&lt;/a&gt;. During any development process, it&amp;rsquo;s also convenient to access production data to test with &amp;ldquo;real&amp;rdquo; data and make sure everything works as expected. Navigating back to the source project in the DXP Management Portal will allow you to copy content from any environment in the source project to any environment in the target project on demand.&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span&gt;Going live&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span&gt;Once you are satisfied that the sites work as expected running on the newest version of Content and Commerce Cloud, it&amp;rsquo;s time for the moment of truth &amp;ndash; going live. In this move, there&amp;rsquo;s a balance to be struck between incurring downtime and managing data consistency. We want to keep the source site(s) available as long as possible, but at the same time make sure we haven&amp;rsquo;t left any data behind. As such, kicking off the go live process will do a couple of things,&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;Put the source site(s) in maintenance mode to make sure no more data is written to Azure resources local to those site(s)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Start a final data transfer over to the target environment&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span&gt;Switch the origin in the Content Delivery Network (CDN) to the new environment for all hostnames coupled with the source environment&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span&gt;And that&amp;rsquo;s it &amp;ndash; your sites should be running on .NET 5 at this point. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;We&amp;rsquo;re still working hard to finish this tooling to allow seamless transitions to our latest software but wanted to put together this preview to convey our thought process and plans. If you have any feedback, we&amp;rsquo;re happy to receive it through our &lt;a href=&quot;https://feedback.optimizely.com/?project=DXCS&quot;&gt;feedback portal&lt;/a&gt;. Contact your Customer Success Manager if you are eager to get started as we are currently running an Early Adopters program for this feature.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;FAQ&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;&lt;span&gt;I have multiple sites and wish to upgrade them one at a time, is that possible?&lt;/span&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;We are working on how to support this scenario.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;span&gt;Are there any DNS changes involved with this transition?&lt;/span&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;You may be asked to create some TXT records to validate the domain for the new DXP project, but the final go live event is handled through the CDN.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;span&gt;Is there any risk with starting a migration?&lt;/span&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;None at all &amp;ndash; the source environments are not affected by this until the go live phase, and it is possible to abort a migration at any point. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;span&gt;Is a similar migration required in the future for .NET 6?&lt;/span&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;No, the new project will support both .NET 5 and 6. We also have mechanisms to detect the targeted .NET version based on the deployed source package to select the appropriate version.&lt;/span&gt;&lt;/p&gt;</description>            <guid>https://world.optimizely.com/blogs/elias-lundmark/dates/2022/2/developer-preview-migrations-to--net-5-on-dxp-cloud-services/</guid>            <pubDate>Wed, 09 Feb 2022 11:01:13 GMT</pubDate>           <category>Blog post</category></item><item> <title>Search &amp; Navigation Backend Migrations</title>            <link>https://world.optimizely.com/blogs/elias-lundmark/dates/2021/5/search--nagivation-backend-migrations/</link>            <description>&lt;p&gt;Over the last couple of years, we have been hard at work upgrading the architecture and backend that power Search &amp;amp; Navigation (formerly known as FIND). The most recent development efforts have been on stabilizing the backend we refer to as &amp;ldquo;V3&amp;rdquo;, which is discussed in this &lt;a href=&quot;/link/b1b8a22b44044344b077187f9ca1d2eb.aspx&quot;&gt;blogpost&lt;/a&gt;. But as the number 3 in &amp;ldquo;V3&amp;rdquo; hints at, we have previous versions of our backend, which are still running.&lt;/p&gt;
&lt;p&gt;Some of the improvements that we have made in V3 have trickled down into the previous versions, but not all of them. This is mainly due to other sets of constraints within the legacy platforms, such as different public cloud providers and software versions. But of course, we want all our clients and partners to be able to take advantage of our latest and greatest innovations &amp;ndash; so over the coming weeks and months we are going to be consolidating our backends to V3.&lt;/p&gt;
&lt;p&gt;Historically, it has been quite an ordeal to migrate indexes between our backend versions, as it required re-indexing data from scratch. To make this a smoother journey, we have been creating migration tools that move Search &amp;amp; Navigation indexes from one backend to another &amp;ndash; data and all.&lt;/p&gt;
&lt;h2&gt;What does this mean for you?&lt;/h2&gt;
&lt;p&gt;Hopefully nothing. Our goal is to make this migration as friction-free as possible while ensuring that everything continues to work as expected. The big constraint in accomplishing that is ensuring data consistency, and as such we have had to make the trade-off of not accepting write operations during an index migration (much like &lt;a href=&quot;/link/77be6c75f12d4160868e5f8545119695.aspx&quot;&gt;Smooth Deploy&lt;/a&gt;). This will prevent indexing from happening for the time period where an index is being migrated, and you may encounter errors stating that read-only mode is activated for your index. In this case, we recommend retrying the indexing job until it goes through. Based on the volume of write operations to our clusters and the time to migrate, most users will not notice this migration. There are outliers to this, and in that case, we will be in touch with you to ensure a smooth migration.&lt;/p&gt;
&lt;p&gt;Even if we expect this migration to not affect users, we will be posting updates about this work on our status page for the sake of transparency. Since this work is expected to be going on for quite some time, we highly recommend that you subscribe to our status page to follow our progress. We also recommend that you update your &lt;a href=&quot;https://nuget.episerver.com/package/?id=EPiServer.Find&quot;&gt;client to the latest version available&lt;/a&gt;.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;P.S. in &lt;a href=&quot;/link/b1b8a22b44044344b077187f9ca1d2eb.aspx&quot;&gt;this blogpost &lt;/a&gt;we also mention the next generation of FIND. Not to worry, we are still working hard on this but have taken some detours along the way. More news to come during the second half of this year &amp;ndash; EMVPs, keep your eyes and ears open!&lt;/p&gt;
&lt;p&gt;Best regards,&lt;/p&gt;
&lt;p&gt;Elias Lundmark&lt;/p&gt;
&lt;p&gt;Product Manager at Optimizely&lt;/p&gt;</description>            <guid>https://world.optimizely.com/blogs/elias-lundmark/dates/2021/5/search--nagivation-backend-migrations/</guid>            <pubDate>Wed, 19 May 2021 14:13:43 GMT</pubDate>           <category>Blog post</category></item><item> <title>SMTP Authentication Changes in DXP</title>            <link>https://world.optimizely.com/blogs/elias-lundmark/dates/2020/12/smtp-authentication-changes-in-dxp2/</link>            <description>&lt;p&gt;The provider for SMTP services and transactional e-mails&amp;nbsp;in DXP&amp;nbsp;are making some changes around authentication methods&amp;nbsp;during this quarter. The changes will&amp;nbsp;move away from basic authentication with username and password, and instead use API keys&amp;nbsp;for authentication.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;So what&amp;nbsp;does&amp;nbsp;this&amp;nbsp;mean&amp;nbsp;for&amp;nbsp;you?&amp;nbsp;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you are using the SMTP service that is a part of DXP, you will need to make some modifications to the &lt;em&gt;&amp;lt;smtp&amp;gt;&lt;/em&gt; section in your web configuration file. Start by navigating to the management portal; within your DXP project, under the &amp;ldquo;API&amp;rdquo; tab, you will now find an option to generate API keys.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://i.imgur.com/2uM64c6.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;After generating an API key (it&amp;rsquo;s only viewable directly after creation, so save it), also grab the username and hostname from the management portal. You&amp;rsquo;re then ready to modify the configuration in your deployment packages.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&amp;lt;configuration&amp;gt; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp; &amp;lt;system.net&amp;gt; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp;  &amp;nbsp; &amp;lt;mailSettings&amp;gt; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp;  &amp;nbsp;  &amp;nbsp; &amp;lt;smtp from=&quot;yourdefaultreply@address.com&quot;&amp;gt; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp;  &amp;nbsp;  &amp;nbsp;  &amp;nbsp; &amp;lt;network &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp;  &amp;nbsp;  &amp;nbsp;  &amp;nbsp;  &amp;nbsp; host=&quot;smtp.sendgrid.net&quot; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp;  &amp;nbsp;  &amp;nbsp;  &amp;nbsp;  &amp;nbsp; password=&quot;[API key generated in management portal]&quot; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp;  &amp;nbsp;  &amp;nbsp;  &amp;nbsp;  &amp;nbsp; userName=&quot;apikey&quot; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp;  &amp;nbsp;  &amp;nbsp;  &amp;nbsp;  &amp;nbsp; port=&quot;[587, 465, 25 or 2525]&quot; /&amp;gt; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp;  &amp;nbsp;  &amp;nbsp; &amp;lt;/smtp&amp;gt; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp;  &amp;nbsp; &amp;lt;/mailSettings&amp;gt; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp; &amp;lt;/system.net&amp;gt; &lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;lt;/configuration&amp;gt; &lt;/code&gt;&lt;/p&gt;
&lt;p&gt;And that&amp;rsquo;s it, you&amp;rsquo;re all set to deploy to your environments in DXP. SendGrid has a hard deadline of January 20th, after which they will stop supporting basic authentication. If you are currently using basic authentication and cannot make the changes ahead of the deadline, we will run a migration close to the deadline to transform the configuration files automatically, but note that we will block any deployments after this migration if we notice that basic authentication is used. This is to ensure that transactional e-mails keep working as expected.&lt;/p&gt;
&lt;p&gt;We apologize for the late heads-up for this and our aim is to make this transition as smooth as possible.&amp;nbsp;Thank you for your patience and understanding.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;Best regards,&amp;nbsp;&lt;br /&gt;Elias Lundmark&amp;nbsp;&lt;br /&gt;Product Manager, Cloud Services&amp;nbsp;&lt;/p&gt;</description>            <guid>https://world.optimizely.com/blogs/elias-lundmark/dates/2020/12/smtp-authentication-changes-in-dxp2/</guid>            <pubDate>Tue, 01 Dec 2020 16:24:27 GMT</pubDate>           <category>Blog post</category></item><item> <title>Demystifying Edge TTL in Cloudflare</title>            <link>https://world.optimizely.com/blogs/elias-lundmark/dates/2020/4/demystifying-edge-ttl-in-cloudflare/</link>            <description>&lt;p&gt;Hello World! (pun very much intended)&lt;/p&gt;
&lt;p&gt;Long-time lurker, first time poster. I work as a Managed Services Engineer here at Episerver and a common thing we tackle is CDN optimizations in DXP projects and I thought I&#39;d share some general information and basic things to look for when optimizing cache utilization in Cloudflare.&lt;/p&gt;
&lt;p&gt;Out of the box, Cloudflare caches the file formats mentioned in their docs &lt;a href=&quot;https://support.cloudflare.com/hc/en-us/articles/200172516-What-file-extensions-does-CloudFlare-cache-for-static-content-#h_a01982d4-d5b6-4744-bb9b-a71da62c160a&quot;&gt;here&lt;/a&gt;, acting as a cache that brings assets geographically closer to the end user, both offloading the origin web server and speeding things up for the user. Cloudflare has around 194 Points of Presence (PoPs) around the globe.&lt;/p&gt;
&lt;p&gt;The worst performance offenders are often images, as they tend to be the largest assets, so getting them as close to the user as possible makes sense. Many sites also use image resizing to display one original image in multiple places in different dimensions, and that resize operation is resource-intensive and consumes a substantial amount of CPU time. The name of the game becomes:&lt;/p&gt;
&lt;p&gt;1. Make sure assets are properly cached in the CDN&lt;br /&gt;2. Make sure assets are cached for as long as possible to keep down revalidations (where the CDN has to re-fetch the asset from the origin web server)&lt;/p&gt;
&lt;p&gt;Figuring out what is and what is not cached is fairly straightforward because Cloudflare exposes the response header CF-Cache-Status to us, which should return HIT if an asset is served from the CDN cache. See their docs &lt;a href=&quot;https://support.cloudflare.com/hc/en-us/articles/200172516-Understanding-Cloudflare-s-CDN#h_bd959d6a-39c0-4786-9bcd-6e6504dcdb97&quot;&gt;here&lt;/a&gt; for the other possible values. Note that you will get a few MISSes before a HIT, as it takes some requests for the cache to warm up. If you notice constant MISSes and never get a HIT in the CF-Cache-Status header, or if it returns DYNAMIC, your asset might not be getting cached as expected. &quot;Dr. Flare&quot; is a neat browser plugin for figuring out what is and what is not cached and served by Cloudflare.&lt;/p&gt;
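&lt;p&gt;As a rule of thumb, only a HIT means the edge answered without involving the origin. A small illustrative mapping of the most common status values &amp;ndash; consult Cloudflare&#39;s docs for the full set:&lt;/p&gt;

```python
# Common CF-Cache-Status values and whether the request reached the origin.
# Illustrative subset only; Cloudflare documents additional values.
REACHES_ORIGIN = {
    "HIT": False,     # served entirely from the edge cache
    "MISS": True,     # not in cache; fetched from origin and stored
    "EXPIRED": True,  # cached copy's TTL ran out; revalidated at origin
    "DYNAMIC": True,  # not eligible for caching; always goes to origin
}

def reaches_origin(cache_status: str) -> bool:
    """True when the origin web server had to serve (or revalidate) the asset."""
    return REACHES_ORIGIN.get(cache_status.upper(), True)
```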
&lt;p&gt;So how do we tell Cloudflare what to cache, and set Time-To-Live (TTL) values to dictate how long an asset is cached? First off, the file has to be in a supported format, as noted previously, and the web server has to return &quot;public&quot; in the Cache-Control header in order for the asset to be cached. The TTL is then dictated by max-age in the Cache-Control header, or it&#39;s derived from the Expires header, in which case Cloudflare displays it in the Cache-Control header as max-age = Expires &amp;ndash; Date (in seconds).&lt;/p&gt;
&lt;p&gt;In the below example, no modifications have been done to the Episerver Alloy template and it defaults to using the Expires header with a 12-hour (43200 seconds) expiration time.&lt;/p&gt;
&lt;p&gt;Response from Cloudflare (max-age derived from Date and Expires):&lt;/p&gt;
&lt;p&gt;&lt;code&gt;(Invoke-WebRequest &quot;https://www.domainthatgoestocloudflare.com/contentassets/e6c47a7021e64c288fd79956fb477a50/alloymeetbanner.png&quot;).headers&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Key Value&lt;/code&gt;&lt;br /&gt;&lt;code&gt;--- -----&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Transfer-Encoding: chunked&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Connection: keep-alive&lt;/code&gt;&lt;br /&gt;&lt;code&gt;CF-Cache-Status: HIT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Age: 3&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Expect-CT: max-age=604800, report-uri=&quot;https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct&quot;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;CF-RAY: 58b1277f8e18caf8-ARN&lt;/code&gt;&lt;br /&gt;&lt;code&gt;cf-request-id: 02629703b50000caf851ac8200000001&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Cache-Control: public, max-age=43200&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Content-Type: image/png&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Date: Tue, 28 Apr 2020 13:33:18 GMT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Expires: Wed, 29 Apr 2020 01:33:15 GMT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;ETag: &quot;1D5FD02B5411180&quot;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Last-Modified: Wed, 18 Mar 2020 08:53:35 GMT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Set-Cookie: __cfduid=d846441b2dd1d7fc2d424f6dc8dfaa81b1588080798; expires=Thu, 28-May-20 13:33:18 GMT; path=/; domain=....&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Server: cloudflare&lt;/code&gt;&lt;br /&gt;&lt;code&gt;X-AspNet-Version: 4.0.30319&lt;/code&gt;&lt;br /&gt;&lt;code&gt;X-Powered-By: ASP.NET&lt;/code&gt;&lt;/p&gt;
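&lt;p&gt;We can sanity-check that derivation by subtracting Date from Expires in the Cloudflare response above: it comes out to 43,197 seconds, i.e. the 12-hour (43,200 s) window minus the 3 seconds the copy had already spent in cache (the Age header). The arithmetic, done here in Python purely for illustration:&lt;/p&gt;

```python
from email.utils import parsedate_to_datetime

# Date and Expires headers from the Cloudflare response above
date = parsedate_to_datetime("Tue, 28 Apr 2020 13:33:18 GMT")
expires = parsedate_to_datetime("Wed, 29 Apr 2020 01:33:15 GMT")

remaining = int((expires - date).total_seconds())
print(remaining)  # 43197: the 43200 s (12 h) window minus the 3 s of Age
```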
&lt;p&gt;Response from origin web server:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;(Invoke-WebRequest  &quot;https://www.domainthatbypassescloudflare.com/contentassets/e6c47a7021e64c288fd79956fb477a50/alloymeetbanner.png&quot;).headers&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Key Value&lt;/code&gt;&lt;br /&gt;&lt;code&gt;--- -----&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Transfer-Encoding: chunked&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Accept-Ranges: bytes&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Cache-Control: public&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Content-Type: image/png&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Date: Tue, 28 Apr 2020 13:34:15 GMT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Expires: Wed, 29 Apr 2020 01:34:14 GMT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;ETag: &quot;1D5FD02B5411180&quot;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Last-Modified: Wed, 18 Mar 2020 08:53:35 GMT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Set-Cookie: ARRAffinity=e72ac5b17c6574f3ced950f54941e20cb3d62e26a50b6ece889ededb334459e9;Path=/;HttpOnly;Domain=...&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Server: Microsoft-IIS/10.0&lt;/code&gt;&lt;br /&gt;&lt;code&gt;X-AspNet-Version: 4.0.30319&lt;/code&gt;&lt;br /&gt;&lt;code&gt;X-Powered-By: ASP.NET&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;12 hours could be sufficient, but if we have hundreds of gigabytes of data that must be revalidated every 12 hours by ~200 PoPs, it gets inefficient very quickly. Mid-tier caching allows PoPs to share data between one another to offload the origin, but 12 hours is still a bit low, so let&#39;s bump that up.&lt;/p&gt;
&lt;p&gt;There are two ways of accomplishing this &amp;ndash; either increase the time in the Expires header, or remove it altogether and set a static max-age in the Cache-Control header instead. The former is easy to do with staticFile, as it allows us to increase the value of Expires with just a few lines in web.config; see &lt;a href=&quot;https://world.episerver.com/documentation/developer-guides/CMS/configuration/Configuring-staticFile/&quot;&gt;Configuring staticFile&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&amp;lt;configSections&amp;gt;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp; &amp;lt;!-- breaking change in CMS 11 https://world.episerver.com/documentation/upgrading/Episerver-CMS/cms-11/breaking-changes-cms-11/ --&amp;gt;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp; &amp;lt;section name=&quot;staticFile&quot; type=&quot;EPiServer.Framework.Configuration.StaticFileSection, EPiServer.Framework.AspNet&quot; allowLocation=&quot;true&quot; /&amp;gt;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;lt;/configSections&amp;gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;...&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&amp;lt;!-- Set Expires header for assets in path /contentAssets --&amp;gt;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;lt;location path=&quot;contentAssets&quot;&amp;gt;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;nbsp; &amp;nbsp; &amp;lt;staticFile expirationTime=&quot;30.0:0:0&quot;/&amp;gt;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;&amp;lt;/location&amp;gt;&lt;/code&gt;&lt;/p&gt;
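&lt;p&gt;The expirationTime value follows the .NET TimeSpan format (days.hours:minutes:seconds), so &quot;30.0:0:0&quot; means 30 days. A quick check of what that works out to in seconds, parsed here in Python purely for illustration:&lt;/p&gt;

```python
from datetime import timedelta

# "30.0:0:0" follows the .NET TimeSpan format: days.hours:minutes:seconds
value = "30.0:0:0"
days, rest = value.split(".", 1)
hours, minutes, seconds = (int(part) for part in rest.split(":"))

expiration = timedelta(days=int(days), hours=hours, minutes=minutes, seconds=seconds)
print(int(expiration.total_seconds()))  # 2592000 seconds = 30 days
```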
&lt;p&gt;After applying the above configuration to web.config (and purging the cache), we can see that the Expires header is now 30 days instead of 12 hours and max-age updates accordingly from Cloudflare.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;(Invoke-WebRequest  &quot;https://www.domainthatgoestocloudflare.com/contentassets/e6c47a7021e64c288fd79956fb477a50/alloymeetbanner.png&quot;).headers&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Key Value&lt;/code&gt;&lt;br /&gt;&lt;code&gt;--- -----&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Transfer-Encoding: chunked&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Connection: keep-alive&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Vary: Accept&lt;/code&gt;&lt;br /&gt;&lt;code&gt;CF-Cache-Status: HIT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Age: 3&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Expect-CT: max-age=604800, report-uri=&quot;https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct&quot;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;CF-RAY: 58b1451d1d0b75f8-ARN&lt;/code&gt;&lt;br /&gt;&lt;code&gt;cf-request-id: 0262a9862e000075f8c383a200000001&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Cache-Control: public, max-age=2073597&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Content-Type: image/png&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Date: Tue, 28 Apr 2020 13:53:31 GMT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Expires: Fri, 22 May 2020 13:53:28 GMT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;ETag: &quot;1D5FD02B5411180&quot;&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Last-Modified: Wed, 18 Mar 2020 08:53:35 GMT&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Set-Cookie: __cfduid=d0fadf111b66d8b5685ad342c62c9393b1588082011; expires=Thu, 28-May-20 13:53:31 GMT; path=/; domain=.....&lt;/code&gt;&lt;br /&gt;&lt;code&gt;Server: cloudflare&lt;/code&gt;&lt;br /&gt;&lt;code&gt;X-AspNet-Version: 4.0.30319&lt;/code&gt;&lt;br /&gt;&lt;code&gt;X-Powered-By: ASP.NET&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;But of course, with great caching power comes great caching responsibility and your mileage may vary with some things.&lt;/p&gt;
&lt;p&gt;1. As with any other caching system, we have to consider cache eviction when assets get updated. Out of the box, IIS includes the ETag header, but there are other ways to evict cache, such as versioned URLs. See our docs on the subject &lt;a href=&quot;/link/f2555bbb4fd74dff83a6b643536fe570.aspx#ConfiguringCacheHeaders&quot;&gt;here&lt;/a&gt;. I&#39;d encourage anyone to play around with different solutions and see what works best for you and your editors.&lt;/p&gt;
&lt;p&gt;2. If you use any third-party plugin, such as ImageResizer or have a custom HTTP module, you might find yourself having to modify headers through that instead. For instance, ImageResizer has its own configuration as described &lt;a href=&quot;https://imageresizing.net/docs/v4/plugins/clientcache&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;</description>            <guid>https://world.optimizely.com/blogs/elias-lundmark/dates/2020/4/demystifying-edge-ttl-in-cloudflare/</guid>            <pubDate>Wed, 29 Apr 2020 08:32:52 GMT</pubDate>           <category>Blog post</category></item></channel>
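&lt;p&gt;For the versioned-URL approach mentioned in point 1, the idea is to embed something derived from the asset&#39;s content in the URL, so the URL changes whenever the asset does and stale edge copies simply stop being requested. A hypothetical sketch &amp;ndash; the helper name and the query parameter are made up for illustration:&lt;/p&gt;

```python
import hashlib

def versioned_url(path: str, content: bytes) -> str:
    """Append a short content hash so the URL changes when the asset changes.

    Hypothetical helper: any stable fingerprint of the content works; old
    cached copies become irrelevant because nothing links to them anymore.
    """
    digest = hashlib.sha256(content).hexdigest()[:8]
    return f"{path}?v={digest}"
```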
</rss>