Mike
Apr 20, 2026

Remote Debugging in Optimizely DXP: What Is Actually Possible?

Introduction

At SYZYGY Techsolutions, we support Optimizely DXP projects at scale, so continuously identifying the right tools and approaches for analyzing complex issues is an essential part of how we work. 

At some point, most developers encounter issues that are complex to reproduce locally. Differences between local setup and other environments, variations in operating systems, and discrepancies in configuration or data all contribute to this gap. While logs and telemetry provide valuable signals, they are inherently indirect. They rely on post hoc interpretation and selective instrumentation, which can make it difficult to fully understand execution flow and runtime behavior in a managed platform like Optimizely DXP.

This led to a question:

Can you remotely debug an application running in Optimizely DXP? 

After going through official documentation, community discussions, and existing guides, I realized that the answer isn’t clearly documented. And that’s not accidental. 

TL;DR: Based on this investigation, true remote debugging inside Optimizely DXP does not appear to be a supported workflow. This aligns with the nature of DXP as a managed Platform-as-a-Service (PaaS), where direct access to infrastructure and debugging capabilities is limited. 

Still, the question remains relevant. For developers working with complex behavior, it’s useful to understand the boundaries. To get a clear picture, I approached this as a structured exploration. Instead of focusing on DXP directly, the investigation starts from the underlying platform it builds upon – Azure App Service (Linux) – and moves upward from there. This makes it possible to map what is available, how it behaves, and how those capabilities surface within a DXP environment.


Why Remote Debugging Matters – and Why It’s So Hard in DXP

Remote debugging is one of the most powerful – yet often overlooked – tools in a developer’s toolkit. Being able to attach a debugger to a live environment allows you to inspect variables, step through code paths, and diagnose elusive bugs that only manifest in production-like conditions. In traditional Azure App Service setups, remote debugging with tools like Visual Studio or JetBrains Rider is not only possible but well-documented. 

Optimizely DXP is a managed Platform-as-a-Service (PaaS) built on multiple layers of abstraction, with a strong emphasis on security, compliance, and performance rather than infrastructure access. In practice, DXP limits direct access to the underlying App Services, VM internals, and debugging endpoints.

So what options are actually available – and how far can you realistically go with them? 


Remote Debugging Options – Azure Web App (Linux) + .NET

The Optimizely ecosystem runs on Linux containers, which means that tools like Snapshot Debugger or Remote debugging on Azure App Services (Windows) are simply not available. The alternative we're left with is an SSH-based debugger. The referenced article explains how it works – but since we aimed to explore every viable option, it’s worth noting that both JetBrains Rider and Visual Studio support SSH-based debugging. They use slightly different configurations, but the idea is the same: establish a tunnel over a specific port and communicate bidirectionally with the running process. Ideally, the deployed application should be built in Debug configuration ("dotnet build --configuration Debug" and "dotnet publish --configuration Debug"), so that symbol files (".pdb") are included – enabling the debugger to accurately map execution to source code.
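As a minimal sketch, the Debug build-and-publish step might look like this (the output path is a placeholder for your own pipeline):

```shell
# Build and publish in Debug configuration so .pdb symbol files are included
# in the deployment artifact. The output path ./publish is a placeholder.
dotnet build --configuration Debug
dotnet publish --configuration Debug --output ./publish

# Sanity check: confirm the symbol files actually made it into the output.
find ./publish -name '*.pdb'
```

If the `find` command prints nothing, the debugger will still attach, but it won’t be able to map execution back to your source code.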

As an Optimizely Power User, you typically have access to three environments via the Azure Portal: Integration, Preproduction, and Production (sometimes also ADE1). In my case, only the Integration environment exposed enough of the Web App’s configuration to investigate this directly. Since Preproduction and Production did not expose the Web App resource without Optimizely Support involvement, all debugging attempts focused on the Integration environment. 

While Kudu allows basic SSH access to the instance, enabling remote debugging requires establishing an SSH tunnel from the local environment. Ideally, the tunnel should be managed by Azure and connect directly to the Optimizely DXP environment – avoiding firewall issues or potential security risks. Fortunately, Azure provides an out-of-the-box solution for this: Open as SSH session to a container in Azure App Service. The documentation clearly outlines the necessary steps. Just ensure the Azure CLI is installed and that you've run "az login" beforehand. Once you run "az webapp create-remote-connection --subscription <subscription-id> --resource-group <resource-group-name> -n <app-name>", the SSH tunnel to your instance will be active. Then, using your preferred SSH client, you can connect and gain shell access to the server – just as described in the documentation.
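Put together, the tunnel setup described above looks roughly like this – all bracketed values are placeholders for your own DXP Integration environment:

```shell
# One-time setup: authenticate the Azure CLI.
az login

# Open a managed tunnel to the container's SSH endpoint.
# The command prints the local port it is listening on.
az webapp create-remote-connection \
  --subscription <subscription-id> \
  --resource-group <resource-group-name> \
  -n <app-name>

# In a second terminal, connect through the tunnel with any SSH client.
ssh root@127.0.0.1 -p <local-port>
```

The tunnel stays open for as long as the `create-remote-connection` command is running, so keep that terminal session alive while debugging.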


Where the approach started to break down 

Encouraged by the successful SSH connection, I opened JetBrains Rider and attempted to “Attach to Remote Process”. After entering the local port of the SSH tunnel, I was prompted – as expected – with a message like “Your server doesn’t have the necessary files for debugging. Would you like to install them?”. I clicked “Yes”. Rider began transferring the debugger tools – but the installation failed after just 1-2 MB had been transferred to the server. No clear exception was shown; Rider simply displayed the same prompt again.

Multiple retries led nowhere, so I started investigating the issue more closely. It turned out Rider was attempting to install the debugging tools by copying them over the SSH tunnel. The Kudu logs referenced a "WebSocketException" (The remote party closed the WebSocket connection without completing the close handshake.), but provided little else of value. I did confirm that Rider was uploading ZIP archives to the expected directory ("~/.local/share/JetBrains/RiderRemoteDebugger/..."), but only partially – the file sizes were well below the 100MB+ expected for the full debugger tools. Naturally, trying to unzip a partially downloaded archive failed.

At that point, it became clear we had two options: either upload the debugger archive via "scp", or download and install it directly from within the instance. Using "wget" was straightforward – and it worked without issues. I installed the Linux debugger tools matching the container architecture into the same directory where Rider had previously failed. The exact path may vary by Rider version, but it's typically something like "~/.local/share/JetBrains/RiderRemoteDebugger/...". After installing "unzip" ("apt-get update && apt-get install -y unzip") and extracting the archive, I reached the final step: Rider was now able to list the running processes on the instance – including my active .NET process.
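The manual install, run from inside the SSH session on the container, can be sketched as follows. The download URL is a placeholder – it must match your Rider version and the container architecture – and the target directory is the version-specific path Rider expects:

```shell
# Inside the container: fetch the Rider remote debugger tools directly,
# instead of letting Rider push them through the flaky SSH tunnel.
# Both the directory and the URL below are placeholders.
cd ~/.local/share/JetBrains/RiderRemoteDebugger/<version-specific-path>
wget <url-of-linux-debugger-tools-archive>

# unzip is not preinstalled in the container.
apt-get update && apt-get install -y unzip
unzip <debugger-tools-archive>.zip
```

Once the archive is extracted where Rider expects it, the IDE skips the failing upload step and proceeds straight to process discovery.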


Next attempt – and one more discovery 

I clicked Attach again, but the same pattern reappeared: the connection was briefly established, then immediately closed. Kudu logs once again showed a "WebSocketException", prompting me to investigate potential issues with the SSH connection itself. I decided to retry the file upload using "scp" myself – this time with verbose logging – to better understand what was happening. I started small: uploading an empty "test.txt" worked fine. A slightly larger file with a few lines of text – also successful. But when attempting to upload a larger file, I encountered a familiar error: broken pipe. This pointed toward a likely culprit: some form of bandwidth limitation. So I tried uploading the file with bandwidth throttling enabled (the "-l 8192" flag caps the transfer rate at 8192 Kbit/s): "scp -vvv -o MACs=hmac-sha1 -P <port> -l 8192 debugger-file.zip root@127.0.0.1:~/.local/share/JetBrains/RiderRemoteDebugger/..."

That worked. My colleague and I reproduced the behavior on multiple machines, which made a purely client-side explanation less likely. The remaining question was: how can we enforce the same throttling for JetBrains Rider’s automatic upload process? 

And here’s the tricky part: the SSH connection is managed by Azure, leaving little room for server-side customization. I briefly considered terminating the existing SSH server and launching a custom one with modified settings – but OpenSSH doesn’t offer the necessary configuration flexibility, and interfering with the managed Azure/Optimizely infrastructure wasn’t a viable option.

My next attempt involved using "tc" (traffic control) on the server to limit bandwidth, matching the constraint I’d used with "scp". However, even the initial setup command – "tc qdisc add dev $IFACE root handle 1: htb default 30" – failed with a Permission denied error. A quick check using "capsh --print | grep cap_net_admin" confirmed it: I didn’t have the required capabilities – and that was the end of that path.

Realizing that tools like trickle or other ad hoc bandwidth limiters would likely interfere with the existing setup, I shifted focus to throttling the connection from the client side. On Windows, I experimented with built-in QoS policies to throttle traffic – but it didn’t help. Rider continued to disconnect shortly after starting the debugger, and the issue remained.
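For reference, the server-side throttling attempt looked like this. The interface name is an assumption (commonly "eth0" in these containers), and the HTB rate shown is illustrative:

```shell
# Attempted server-side throttling with tc (traffic control).
# IFACE is a placeholder; "eth0" is a common default in Linux containers.
IFACE=eth0

# Attach an HTB root qdisc -- this is the step that failed
# with "Permission denied" inside the DXP container.
tc qdisc add dev $IFACE root handle 1: htb default 30

# Verify why: shaping traffic requires the CAP_NET_ADMIN capability,
# which this check showed the container does not have.
capsh --print | grep cap_net_admin
```

Without CAP_NET_ADMIN, no amount of `tc` configuration will succeed, so any throttling has to happen on the client side of the tunnel.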


Eliminating concerns, or what else was tried 

1. SFTP. Some resources mention SFTP as a requirement for remote debugging, raising concerns about whether it’s enabled in the Optimizely setup. It is. You can confirm this either in the Azure Portal or by running "grep sftp /etc/ssh/sshd_config". Output showing an uncommented sftp subsystem line indicates it’s enabled – and that’s sufficient.

2. Release vs. Debug configurations. I tested both build modes to rule out any mismatch – same result in both cases. The issues persisted regardless of configuration. That may have been the next problem if the SSH issue had been resolved, but I did not get far enough to prove it. 

3. Azure’s “Remote Debugging Enabled” flag. I did not find this useful for the .NET on Linux scenario tested here. In this setup it was not applicable, and it also interfered with SSH access, since both rely on port 2222. Still, if you want to experiment, you can toggle it using the following commands: "az webapp config set --resource-group <resource-group> -n <webapp-name> --remote-debugging-enabled=true" and, to roll back, "az webapp config set --resource-group <resource-group> -n <webapp-name> --remote-debugging-enabled=false". Even if not useful for this scenario, the output of these commands is worth exploring – Azure exposes a surprisingly rich ARM definition of your Web App.
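If you only want to inspect that ARM site configuration without changing anything, the read-only variant of the same command works too (resource group and app name are placeholders):

```shell
# Read-only: dump the Web App's site configuration as JSON,
# including the remoteDebuggingEnabled flag, without modifying it.
az webapp config show --resource-group <resource-group> -n <webapp-name>
```

This is a safer way to explore the configuration surface Azure exposes for a DXP Web App before toggling anything.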

4. What about Visual Studio? As expected, I gave it a try – but the results were even worse. Visual Studio refused the SSH connection entirely and exited with the message: "Connectivity Failure: Please make sure host name and port number are correct." No additional context was provided, leaving me guessing what might’ve gone wrong.

Throughout this process, I monitored Kudu logs in parallel. While some error entries did appear, they weren’t particularly helpful – mostly the same "WebSocketException" patterns I saw when using Rider. One thing was clear: Visual Studio recognized the SSH tunnel. When another connection was already active, it would wait and only fail once the other connection was released – confirming awareness of the underlying channel, even if it couldn’t use it properly.

5. Limit bandwidth from Rider directly? I checked Rider’s available settings, but found no built-in way to throttle the SSH transfer rate. It’s possible that a proxy or a more advanced traffic-shaping setup could help – but investigating that path would take considerably more time, with no guaranteed payoff. At this point, I made the call to stop: the remaining options required more time with no clear path to a reliable result.


Final Observations

I attempted to connect to the instance overnight, when I expected activity in Integration to be minimal. Interestingly, the debugger connection held for nearly 10 seconds – longer than any of the daytime attempts – before ultimately failing again. Bandwidth-related limits or instability seem like the most plausible explanation from these tests, though I could not prove that definitively or rule out other blockers.

At this point, it’s worth stepping back and looking at the outcome more practically. 

Optimizely DXP is a managed platform, and remote debugging is not a workflow it currently exposes or documents directly. That makes the result expected – but the path to that understanding is not always obvious when you start. For developers working with complex systems, the question itself still comes up. This exploration doesn’t change the platform behavior, but it does make the current boundaries clearer: what can be attempted, what partially works, and where things start to break down in practice. 

If nothing else, this should save time for anyone approaching the same idea – and provide a more concrete starting point, should you decide to take it further. 
