Figure captions:
- Web hosting platform's SSH Access dashboard.
- SSH Status panel showing that we have made ssh access Active.
- JetBrains PyCharm IDE SSH Configuration panel showing a successful connection test.
- The WinFsp Installer download site.
- WinFsp Installer wizard showing that just the Core feature set is needed.
- The many architecture and operating system combinations that are supported by rclone.
- The rclone project structure showing two content root directories in the JetBrains PyCharm IDE.
- The rclone_mount_start.bat Shortcut properties.
- The rclone profile hostinger mounted to the H: drive, as displayed in Windows Explorer.
- Create a new project in an existing directory using the JetBrains PhpStorm IDE.
- A view into the Xdebug step debugging log.
- Directory mapping in the JetBrains PhpStorm IDE.
- Editing a remote file located outside the mounted project path.
- Simple Run Configuration in JetBrains PhpStorm to start listening for Xdebug messages.
- Step debugging a remote mounted file.
- Resulting Web page after stepping through code in our debugger.
- The VS Code IDE with just six extensions installed; the Remote - SSH and SSH FS extensions are notably not required.
- Execute hw.py using debugpy to set up the debug session and listen for a remote debugger.
- Remote debugging the hw.py script in Microsoft VS Code.
- A debug session in the PyCharm IDE against a backend container running on an AWS EC2 micro instance using a mounted D: drive to access the Docker container file system.
For a long time now, developers have used a local virtual machine to host a target runtime environment—often built on a guest Linux OS in the VM—in order to develop for a different system while still taking advantage of the powerful features and familiarity that their local dev tools offered. This article describes how to capture the same benefits of that local pattern, but when debugging and editing code and configs on heterogeneous remote targets—whether they be a development machine, an LLM platform, a container, cloud infrastructure-as-code, a Web hosting provider, or even a production environment! After all, as the old saying goes, “I don't always test my code, but when I do, I do it in production.”
What's the Big Idea?
The big idea being presented by this article is the ability to reproduce the same developer experience for remote targets as when one develops against a local virtual machine using a shared directory mounted locally in both environments. However, instead of being limited to just a local shared directory, that same pattern is employed to reproduce the same optimal experience while targeting any runtime environment anywhere on the Internet. The power of this solution is that it allows you to connect your familiar development environment configured on your local machine to any runtime anywhere on the internet. As a result, you are no longer constrained by the tools and capabilities that may or may not be available in the remote environment, and you no longer need to host heavy-weight remote components, such as VS Code Server, in the remote environment. The power of a single locus of control, configured with all your favorite tooling, is always available to you, while targeting any remote environment. Your favorite IDE, AI assistant, visual git branch manager, code assist, and more remain available to you for any coding or configuration task anywhere.
As a quick example, look at how this AI prompt refers to the contents of a remote file as “the current file.” Using a remote mount, we can have remote artifacts in our local context to be able to leverage AI assistance as well as all our other local tooling.
[AI Query]
What are all the possible xdebug parameters for the current file?
[/AI Query]
Note, AI prompts like this one are shown throughout this article. These prompts, responses, and additional AI conversations can be found in the ai_chats folder in each git repository at https://github.com/billcat-codemag. The listings shown in this article are also available for download there as well.
A Quick Look Under the Hood
It is worth taking a few moments to highlight what makes the rclone mount solution so powerful as well as to understand some of the design choices that it represents. This section describes the design choices made by rclone mount compared to other options currently available, and the context in which it exists. I firmly believe rclone mount's design provides a lighter, faster, and more reliable solution than most of the alternatives. This section takes a quick look under the hood and encourages you to decide whether you agree.
File Mapping Requirement of Interpreted Languages
The technique in this article may look familiar to anyone who has ever debugged a compiled language's code in a remote environment. The task is similar in that you would point a remote debugger at a remote object, and the remote debugger would read the remote symbols and map them back to your local source code in your development environment—typically, an integrated development environment, or an IDE.
With dynamic scripted languages, however, the challenge is a little bit different. The code that is being interpreted is actually deployed into the remote runtime environment, and in order to debug it, the debugger needs a matching local code base to which it can map the remote code base, so that it can present execution state and map execution progress in the local IDE. This is typically solved with some kind of file-syncing solution. However, as will be discussed further below, file syncing is often fraught with complicating factors and technical difficulties.
Concurrency Not Included
The proposed solution using rclone mount with WinFSP does exactly what it says—it provides a local mount for a remote file system. As discussed, this affords you all of the benefits of your local development environment paired with all of the benefits that various heterogeneous remote runtime environments can provide.
However, it is important to also recognize what we are not saying. This solution is not intended for and is not capable of supporting multiuser concurrent access on remote resources; the rule is that the last write wins. Versioning services such as git are not included, but they can be used separately in the usual way. These simple semantics are what help deliver rclone mount's speed and reliability.
It is perhaps useful to think of a remote code base as a “remote local” environment. For example, you would not think of sharing your local development code base with another developer for concurrent development activities. Also, you would still merge your “remote local” code with an integrated code base using a concurrent versioning tool like git, if you wanted to deploy to an integrated development environment for testing. Nonetheless, your “remote local” environment still enjoys all of the advantages of the remote system, such as the remote runtime and various integrations that can be made available in the remote environment—whether mocked or actual—along with the ability to develop using your local IDE and other tools.
One last note on this topic. If you ever need to point your local development environment at shared resources, such as to debug some critical issue in a live production environment, it would be important to coordinate your activities so that nobody else was attempting to make changes to either code or configuration files in that environment at the same time that you were. This might seem like it goes without saying, and on one level it does, but when it becomes very easy to point at a remote code base / config files and start editing, it might be tempting during a high pressure scenario to jump into a shared environment and make changes without first coordinating with other colleagues that might have the same inclination.
Pros and Cons of Other Strategies
By using a single, robust, bi-directional mounting solution, we can avoid the quirks and pitfalls—and there are many—of differing file synchronization strategies and implementations. Rclone + sftp is a cross-platform, bi-directional solution that effectively mounts the remote system to your local file system. The pair employ robust rules and protocols that make the resulting mount appear and act like a local file system, and not like synced files. As one conspicuous example, the local files go away when you unmount the remote file system. Other behaviors are less obvious, but they too follow a robust protocol and serve to avoid the pitfalls of syncing files. As an added bonus, once you set up and tune the rclone mount in your local development environment, you can then leverage it, along with your local IDE and any other local tooling infrastructure, against multiple remote targets. As a result, you avoid frustrating file sync headaches, such as:
- Unpredictable file locking and other file sync conflicts
- Solutions that only support one-way sync
- Implementations that require a manual sync after each save
- Tools that do not sync out-of-band or offline changes made on the remote system
- Requiring resource hungry remote components, such as VS Code Server
Each of the above issues—which can vary by particular tools and sometimes in maddening ways—is avoided by using rclone mount instead of a syncing utility specific to each tool. Rclone mount presents the remote file system—and any changes to it—as a system resource, as though it were mounted locally.
Yet, the remote environment—where your runtime executes—remains the canonical single source of truth for your files. It can be modified from either environment and changes flow both ways when the mount is active. Changes made remotely while the mount is offline will appear in the local environment the next time it is mounted—immediately and automatically. This provides a much more fluid experience than any of the syncing implementations that I have encountered, whether they are built into an IDE or provided by an external software package. As such, it can be relied upon as a core piece of tooling infrastructure that builds developer confidence and efficiency with every new use case.
A Note About Visual Studio Code Server
Visual Studio Code (VS Code) is a powerful IDE offered by Microsoft for free to the developer community. VS Code is widely acclaimed for its powerful features and clean interface. Indeed, VS Code seeks to offer a robust remote debugging experience of the kind described here for multiple platforms via its Remote-SSH extension. When installed and configured in the IDE, by default, Remote-SSH seeks to install (or detect) the VS Code Server component on the target server, typically in the ~/.vscode-server directory.
However, running VS Code Server is prone to some pitfalls. One obstacle that you may run into with VS Code Server is that it requires certain permissions to fully install, which may or may not be an option. Also, VS Code Server is a very resource intensive component that some resource constrained environments cannot support or which some hosting providers will not install. Lastly, I have found that some of the plugins provided by the community which are needed to supplement the Remote-SSH foundation for specific languages are not always reliable. In other words, your mileage may vary, and each new use case can present unexpected challenges.
Finally, I recognize that many people enjoy the VS Code IDE's interface and its broad availability. As such, I would like to note that the tools and techniques shown in this article can also be used with VS Code—without the need to install the heavy Remote-SSH IDE extension (or any other remote components), and hence, without the need to install the VS Code Server in the remote runtime environment. We will see below, in the example scenario “Debug Python on AWS EC2 Micro Instance,” that the same full-featured, local-quality development experience—that makes the remote machine feel like your local machine—can still be easily achieved without the need to install the Remote-SSH extension or VS Code Server backend.
AI-Powered Development Anywhere
Whether you are developing LLMs or other ML/AI solutions, or, you just want to have your favorite AI Assistant by your side for your latest coding or configuration task, being able to have your local AI-powered toolkit ready for any task on any platform delivers powerful productivity boosts. As Sahil Malik lucidly describes in his article “AI and Developer Productivity” in the Sep/Oct 2025 edition of CODE Magazine, AI tools are bringing a “fundamental redefinition of productivity” to developer workflows. The tools and techniques described here will allow you to unleash the same power described by Sahil when working on any target system that you can conceive—not just your local machine and not just certain select systems where you have the option to host irregular backend dependencies.
Shedding the mindset that development and configuration are things that only happen in your local development environment is even more critical in the era of AI. AI-augmented tools and workflows, such as code generation, intelligent completion, automated testing, automated code review, automated documentation, intelligent troubleshooting, and more have already profoundly improved local development productivity for many developers working on local codebases. The same impact and rich AI-augmented tools can also be easily brought to bear on all sorts of coding and configuration tasks targeting remote systems as well.
Lastly, it is worth noting that some LLM development platforms, such as lightning.ai, include direct support for working with VS Code Server and advertise simple setup as a key benefit. However, when you don't have the option to develop in an environment that provides support for the VS Code Server component (or, even when you do, but you prefer to stick with a standard, lightweight toolkit) the techniques shown in this article will allow you to still employ your feature-rich local development environment against all kinds of LLM-optimized backends. As we will see, your canonical code and integrated runtime environment can remain situated on a remote system close to other ML/LLM assets and services—perhaps even with very tight controls being applied—but you are still able to bring your most powerful tools and techniques to tasks on those systems.
Remote Debug Anything Anywhere
In this section, we will see how we can set up multiple different remote targets using the same tooling and techniques. Each of the different remote targets shown here showcases the power and versatility of this approach. The four main activities described below include the following:
- Setup rclone mount on Windows
- Remote debug PHP running on a hosted platform without root access
- Remote debug Python running on a free-tier AWS EC2 micro instance with limited compute and memory, and
- Remote debug Python running in a Linux container running on the same free-tier AWS EC2 micro instance
These examples will showcase remote debugging with different IDEs using various remote interpreters, all using a single, remote file system mounting tool, namely, rclone mount. The variety of local IDEs shown is meant to showcase broad applicability, but you will probably use a single, favorite local development environment, and pivot between the development targets using the same IDE.
Lastly, the example scenarios below will show detailed instructions and code for installing and configuring the enabling technology that is the focus of this article—namely, rclone mount on Windows. However, it is largely left to the reader to ensure that the other enabling technologies—namely, ssh and sftp on Linux for remote file access, a remote runtime, and a local IDE for debugging—are configured. The examples describe the basic design for each scenario as well as all of the needed components, but the detailed steps for configuring the components aside from rclone mount are largely left as an exercise for the reader or their service provider. With that said, we do give some tips and pointers for setting up file mapping, providing connectivity, and executing a debug session to complete the examples.
Setup Rclone Mount on Windows
As mentioned above, it is assumed that ssh and sftp have already been set up—either by our provider or by us—so we start with configuring our rclone mount client to take advantage of those existing services. To do so, the following components need to be installed and configured in our local development environment. This section describes what is essentially a one-time procedure; after the initial installation and setup, we will be able to easily re-use each of these components using the very succinct procedures shown for the example scenarios each time we wish to mount a new remote environment. The components include the following:
- Configure ssh and sftp client credentials
- Install WinFSP
- Install rclone
- Configure rclone profile
- Setup rclone mount command parameters
- Setup rclone mount to run at startup without a window
Configure SSH and SFTP Client Credentials
Using a managed hosting provider saves us some work with setting up the server side, which is great, but of course, we still need to configure the client. In the case of the hosting provider that I am using for this article, Hostinger.com, we first need to activate ssh access. The dashboard interface is fairly self-explanatory. Notice that we choose password access, but key-based access is also an option. As you can see in Figure 1 and Figure 2, all of the connection and credentials information is already available, but we will not be able to ssh into the remote environment until we click Enable to enable ssh access and have it show Active status.
That was easy! Now, we need to enable our ssh client. For many people, this might involve installing and configuring the PuTTY client or Windows Terminal, but for this section we are using the PhpStorm IDE from JetBrains, so we set up an empty project on the local file system so that we can gain access to the IDE's Settings panel, where we can set up an ssh connection.
As you see in Figure 3, our settings simply match what we configured in the hosting provider's dashboard. By unchecking the “Visible only for this project” checkbox, we will be able to use this same SSH Configuration when we create a new project in a mounted remote project directory. PhpStorm also provides a convenient facility to test our connection, which is also shown.
From here, we can use our test project and SSH Configuration to login and check that our backend services are running by connecting and running simple version commands. Here is what that looks like when we check the sftp and PHP runtime versions on our remote host:
$ sftp -v -P 65002 u919949149@82.25.83.203
OpenSSH_8.7p1, OpenSSL 3.5.1 1 Jul 2025
.
.
.
Connected to 82.25.83.203.
sftp> version
SFTP protocol version 3
$ php --version
PHP 8.2.29 (cli) (built: Nov 12 2025 00:00:00) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.2.29, Copyright (c) Zend Technologies
    with Zend OPcache v8.2.29, Copyright (c), by Zend Technologies
    with Xdebug v3.3.1, Copyright (c) 2002-2023, by Derick Rethans
As shown, connecting to sftp to obtain the version and protocol information reveals that, in this case, sftp is a subsystem of the openssh package and it is providing sftp protocol version 3 service on the same port that we use for ssh access. This is a very common configuration in modern Linux systems.
Install Windows File System Proxy (WinFSP)
Next, we will install WinFSP in our local development environment. WinFSP provides a FUSE-like integration system between rclone mount's user space file system and the Windows kernel. In short, this layer allows rclone mount to make various cloud APIs and file transfer protocols—such as sftp—behave like a local Windows file system. If you would like to learn more about this component, please see the project documentation at https://winfsp.dev/doc as well as this concise explanation from ChatGPT at https://chatgpt.com/s/t_692c41458b588191bd29b8b7d997890d.
To download WinFSP, head over to https://winfsp.dev/rel and download the WinFSP Installer as shown in Figure 4.
Simply run the installer from whatever directory you downloaded it into. In the installer wizard, click Next to accept the default settings for each step. As shown in Figure 5, we only need the Core component, which should be the default setting. When prompted by the Windows User Account Control dialog, accept the permission elevation prompt to proceed. Click through all steps to complete the installation of the WinFsp component.
Create a Skeleton Directory Structure
The remaining components do not have installers. As such, we will create a skeleton directory structure in which we can download and extract the executable files and place other runtime directories.
A good place to put the base directory is at the following location:
%USERPROFILE%\rclone
Under the %USERPROFILE%\rclone directory, create the following directories:
\bin
\conf
\log
\usr
\usr\script
You can either manually create the directories, or use the code shown in Listing 1 to create the skeleton directory structure for you below the directory that you specify for the RCLONE_BASE_PATH variable in that script.
Listing 1: Make initial skeleton directories
:: Make empty directories for rclone install and runtime
@echo off
set RCLONE_BASE_PATH=%1
echo:
echo Making empty installation dirs at RCLONE_BASE_PATH: %RCLONE_BASE_PATH%
echo:
mkdir "%RCLONE_BASE_PATH%\bin"
mkdir "%RCLONE_BASE_PATH%\conf"
mkdir "%RCLONE_BASE_PATH%\log"
mkdir "%RCLONE_BASE_PATH%\usr\script"
dir "%RCLONE_BASE_PATH%"
Either way, when you are finished, the top-level directory contents should look like this:
Directory of C:\Users\wcatl\rclone
<DIR> .
<DIR> ..
<DIR> bin
<DIR> conf
<DIR> log
<DIR> usr
Install Rclone for Windows
The last component that we will need is rclone itself. rclone for Windows is distributed as a portable zip file. Head over to https://rclone.org/downloads and download the latest version for your OS and architecture. As you can see in Figure 6, many architecture and operating system combinations are supported by rclone.
Download the zip file for your system to the \bin directory in the directory skeleton created above and extract the zip file to a subfolder. If you keep the zip file for future use, your directory contents will look like this:
Directory of C:\Users\wcatl\rclone\bin
<DIR> .
<DIR> ..
<DIR> rclone-v1.69.2-windows-amd64
23,257,507 rclone-v1.69.2-windows-amd64.zip
Configuration as a Project
So, you might be wondering, can you also use your IDE to develop and configure your local system as well? Of course! I always look to point the full power of my local development environment at every coding and configuration task that I can, whether local or remote. I find that with configuration tasks in particular, configuration artifacts tend to be more scattered around the file system and it is useful to add the locations of various artifacts to a project structure for future reference and easy tracking. Then, whenever I need to revisit a configuration scenario, everything is all in one place for easy access and modification. Plus, it is then very easy to also add my configuration artifacts to a git repository for change management. After the initial setup steps have been completed, it is really very simple to add new remote or local targets at any time.
Since I believe that all solid projects should eat their own dog food whenever possible, Figure 7 provides a quick look at what my development project looks like for developing the rclone mount batch scripts that I wrote for this article. You too may want to point your development environment at your rclone base path and treat this setup as a small development project—or perhaps more accurately, a configuration project—as we will be editing some configuration files and batch files to setup our remote mounts. You will come back to this project whenever you want to add a new remote target or modify an existing one.
Also, take a moment to note here how the WinFsp installation path is different from our rclone base path, but it is still able to be shown in a single project view in the JetBrains IDE. This becomes even more powerful when each content root can be in a different remote environment or a different container, as we will see below.
Configure Rclone Mount Profile Settings
Next, we will manually create an empty configuration file called rclone.conf in the /conf directory. Within rclone.conf, we will configure rclone profiles. Each rclone profile specifies the location and access credentials for a unique backend target. In our case, all of the backends are of the type sftp. In our first profile, the values mirror the access information that we originally saw on the Hostinger dashboard. For the hostinger profile shown here, the profile, host, port and user values are unique to my setup, as they will be for your system:
[hostinger]
type = sftp
host = 82.25.83.203
port = 65002
user = u919949149
pass =
ask_password = true
shell_type = unix
md5sum_command = md5sum
sha1sum_command = sha1sum
Save the rclone.conf file after adding your first profile.
Importantly, we intentionally left the password setting blank. This is because rclone uses a random seed obfuscation algorithm to store passwords in profiles. Therefore, we need to use a command line tool to convert our plaintext password to one that rclone mount expects. Random seed obfuscation means that the resulting obfuscation will be different every time it is generated, so don't be surprised if you end up running this command multiple times—either while learning the tool or for any other reason—and you notice that the resulting value is different each time, even for the same plaintext password.
The command to generate the needed obfuscated password looks like the following:
{path_to_rclone.exe}\rclone.exe config password hostinger pass={your_cleartext_ssh_password} --config {RCLONE_BASE_PATH}\conf\rclone.conf
In the above example, the hostinger command line value is the profile in rclone.conf, the pass parameter is set to your cleartext password, and the --config switch is given the full path and filename to where the target profile is located—and where you want to create or modify the obfuscated password. Note, you can also use the rclone config command without the password sub-command. If you use that command mode, rclone config will give you an interactive set of questions that you can answer in order to create the profile. You can try it yourself, but for this article, we opted to share the config file contents and then add the password with the dedicated rclone config password mode to achieve the same thing.
Rclone Mount Global Settings
In this section, we will set the variables in the custom scripts created for this article that we need to create our first remote mount. Listing 2 shows the main startup script, rclone_mount_start.bat. In addition to rclone.conf, where we setup the backend profile, this startup script is the only other place where we need to specify unique values to configure our remote mount.
Listing 2: The full rclone startup script.
@echo off
:: This is the startup / launch file to launch multiple rclone processes with no windows and using custom rclone profiles
:: -----------------------------
:: Profile list - ADD NEW PROFILE NAMES HERE
:: -----------------------------
:: First add new profile name to the following list so the :main code will call it
:: For consistency, these names should match the section names in the rclone.conf file
set "RCLONE_PROFILE_LIST=hostinger"
:: Set the rclone install directory; this script expects this to be the name of the directory that contains the rclone.exe file, and be in the /bin directory below the installation basepath
set "RCLONE_INSTALL_DIR=rclone-v1.69.2-windows-amd64"
call :main
exit /b 0 :: Exit the batch script
:: -----------------------------
:: Profile settings - CONFIG NEW PROFILES HERE
:: -----------------------------
:: Define each profile section below. Each label should match the names in RCLONE_PROFILE_LIST above.
:: -----------------------------
:: Hostinger Domains (H:) (hostinger)
:: -----------------------------
:hostinger
setlocal
set "RCLONE_PROFILE=hostinger" :: Same as section name in rclone.conf file
set "REMOTE_HOST_PATH=domains" :: Relative path from remote login path to use as mount path
set "LOCAL_VOLUME_LETTER=H"
set "LOCAL_VOLUME_NAME=%RCLONE_PROFILE%-domains"
echo About to start rclone mount for %RCLONE_PROFILE% profile... >> "%GLOBAL_LOG_FILE%"
call :start_profile_mount
endlocal
exit /b :: Exit the section
:: /////////////////////////////////////////////////////////////
:: -----------------------------
:: Shared config sections - DO NOT EDIT
:: -----------------------------
:start_profile_mount
echo Beginning start_profile_mount... >> "%GLOBAL_LOG_FILE%"
:: Set fixed profile variables
set "RCLONE_PROFILE_PATH=%RCLONE_BASE_PATH%\usr\profile\%RCLONE_PROFILE%"
set "CTL_LOG_FILE=%RCLONE_PROFILE_PATH%\log\rclone_mount_ctl.log"
:: Make new dirs; mkdir is idempotent and non-destructive, but we check if target exists anyway for efficiency
@if not exist "%RCLONE_PROFILE_PATH%\log" mkdir "%RCLONE_PROFILE_PATH%\log"
@if not exist "%RCLONE_PROFILE_PATH%\cache" mkdir "%RCLONE_PROFILE_PATH%\cache"
:: Note: 'start ""' required because rclone exe never returns and blocks the console
start "" "%RCLONE_BASE_PATH%\bin\SilentCMD\SilentCMD.exe" "%RCLONE_BASE_PATH%\usr\script\rclone_mount_ctl.bat" "/LOG+:%RCLONE_PROFILE_PATH%\log\SilentCMD.log"
echo Finished start_profile_mount section. >> "%GLOBAL_LOG_FILE%"
exit /b
:main
setlocal
:: -----------------------------
:: Shared variables - shared across profiles
:: -----------------------------
set "SCRIPT_DIR=%~dp0" :: Set the script dir variable to the location of this batch file
:: Set rclone basepath to the parent of the parent dir of script dir
for %%A in ("%SCRIPT_DIR%\..\..") do set "RCLONE_BASE_PATH=%%~fA"
set "RCLONE_EXE_PATH=%RCLONE_BASE_PATH%\bin\%RCLONE_INSTALL_DIR%\rclone.exe"
set "RCLONE_CONF_PATH=%RCLONE_BASE_PATH%\conf\rclone.conf"
set "GLOBAL_LOG_FILE=%RCLONE_BASE_PATH%\log\rclone_mount_global.log"
echo: >> "%GLOBAL_LOG_FILE%"
echo Beginning :main section at %DATE% %TIME% >> "%GLOBAL_LOG_FILE%"
echo Starting rclone_mount_start.bat >> "%GLOBAL_LOG_FILE%"
echo: >> "%GLOBAL_LOG_FILE%"
:: -----------------------------
:: Call each profile section and mount each remote
:: -----------------------------
echo "RCLONE_PROFILE_LIST: %RCLONE_PROFILE_LIST%" >> "%GLOBAL_LOG_FILE%"
echo "About to loop through profiles." >> "%GLOBAL_LOG_FILE%"
for %%p in (%RCLONE_PROFILE_LIST%) do (
    call :%%p
    timeout /t 2 /nobreak >nul
)
echo Finished :main section at %DATE% %TIME% >> "%GLOBAL_LOG_FILE%"
endlocal
exit /b :: Exit the section
If you look at Listing 2 you will see a line for adding your new mount to a list of profiles. In our case we will have just one remote mount, namely, hostinger, that maps to both our profile name in rclone.conf as well as a configuration section that we will create in our custom script for each profile. It looks like this for just the hostinger profile:
:: -----------------------------
:: Profile List - ADD NEW PROFILE NAMES HERE
:: -----------------------------
:: First add new profile name to the following list so the :main code will execute it
:: For consistency, these names should match the section names in rclone.conf file
set "RCLONE_PROFILE_LIST=hostinger"
Technically speaking, that name only needs to match the custom code section label that we will be creating next that sets the remote mount parameters that are needed by rclone mount for the profile. By convention, however, we use the same name that we used for the profile in rclone.conf for the profile's configuration section in our custom startup script. Note, the rclone profile names must be unique and that serves us well as our custom scripts will leverage the name to setup unique profile directories for storing logs and caches. This allows us to run multiple rclone mount processes concurrently and avoid any local resource conflicts.
The next global setting that we need to set has to do with the rclone version and architecture of our instance. Since we used the name of the zip file for our install directory under /bin, and that can vary, we need to be able to specify it here for use by the controller script when building the execution path for rclone.
set "RCLONE_INSTALL_DIR=rclone-v1.69.2-windows-amd64"
After the variables get set, the script calls into the :main section. In the :main section, additional derived variables will get set. Then, the script will iterate over RCLONE_PROFILE_LIST to call the mount-specific code sections, set the mount-specific variables, and run the controller script that mounts the remote backend for each profile.
Rclone Mount Profile Settings
As mentioned, a series of profile-specific variables need to be set in our custom startup script. For the hostinger profile and code section, those variables look like this:
:: -----------------------------
:: Hostinger Domains (H:) (hostinger)
:: -----------------------------
:hostinger
setlocal
set "RCLONE_PROFILE=hostinger" :: Same as section name in rclone.conf file
set "REMOTE_HOST_PATH=domains" :: Relative path from default login path to use as mount path
set "LOCAL_VOLUME_LETTER=H"
set "LOCAL_VOLUME_NAME==%RCLONE_PROFILE%-domains"
echo About to start rclone mount for %RCLONE_PROFILE% profile... >> "%GLOBAL_LOG_FILE%"
call :start_profile_mount
endlocal
exit /b
We set the RCLONE_PROFILE variable to the name of the profile in rclone.conf. We set the REMOTE_HOST_PATH variable to the relative path from the default path that the user is placed into upon regular login. We set the LOCAL_VOLUME_LETTER variable to the drive letter we would like to use for this remote mount. Lastly, we set the LOCAL_VOLUME_NAME variable to a distinct name for this mount, and as you can see, it also reflects the REMOTE_HOST_PATH relative path, to further distinguish it from other profiles that we might decide to setup for the same backend using a unique RCLONE_PROFILE and unique LOCAL_VOLUME_LETTER.
Note, it is never valid to use the same RCLONE_PROFILE name to mount the same backend twice, even if a different drive letter is used. If you wish to do that, set up a new RCLONE_PROFILE with a unique name against the same backend location and credentials. Presumably, you will at least change the value of REMOTE_HOST_PATH; if you don't, it does beg the question—why are you doing it? Remember, concurrent access on the same remote directory is not recommended, and last-write-wins semantics will apply.
After we set the needed variables in our custom startup script, the startup script, via the :start_profile_mount code section, calls the controller script rclone_mount_ctl.bat shown in Listing 3. The controller script sets some additional derived variables and then runs the rclone mount executable to mount the remote file system. As you can see, rclone mount takes several parameters to configure its operation, including the location of its configuration file, virtual file system settings, log settings, and cache settings. Also shown in Listing 3 is that we configure rclone mount to run in network-mode and tell WinFsp to apply user-based security to each file with the FileSecurity parameter.
Listing 3: The full rclone controller script.
@echo off
:: This is the controller file to config variables and call mount script.
:: -----------------------------
:: Log start session
:: -----------------------------
echo: >> "%CTL_LOG_FILE%"
echo %DATE% %TIME% >> "%CTL_LOG_FILE%"
echo Starting rclone_mount_ctl.bat >> "%CTL_LOG_FILE%"
echo
echo: >> "%CTL_LOG_FILE%"
:: -----------------------------
:: Start session
:: -----------------------------
call :main
exit /b 0
:: -----------------------------
:: rclone_mount_dir.bat functions
:: -----------------------------
:mountDir
echo CTL_LOG_FILE: "%CTL_LOG_FILE%"
echo RCLONE_EXE_PATH: %RCLONE_EXE_PATH% >> "%CTL_LOG_FILE%"
echo RCLONE_BASE_PATH: %RCLONE_BASE_PATH% >> "%CTL_LOG_FILE%"
echo RCLONE_PROFILE: %RCLONE_PROFILE% >> "%CTL_LOG_FILE%"
echo RCLONE_PROFILE_PATH: %RCLONE_PROFILE_PATH% >> "%CTL_LOG_FILE%"
echo REMOTE_HOST_PATH: %REMOTE_HOST_PATH% >> "%CTL_LOG_FILE%"
echo LOCAL_VOLUME_LETTER: %LOCAL_VOLUME_LETTER% >> "%CTL_LOG_FILE%"
echo LOCAL_VOLUME_NAME: %LOCAL_VOLUME_NAME% >> "%CTL_LOG_FILE%"
echo Mounting to the target drive letter: %LOCAL_VOLUME_LETTER% >> "%CTL_LOG_FILE%"
for /F "tokens=2" %%i in ('whoami /user /fo table /nh') do set USER_SID=%%i
echo USER_SID: %USER_SID% >> "%CTL_LOG_FILE%"
:: --vfs-cache-mode full - Needed if applications edit files locally and expect changes to be uploaded correctly.
:: --dir-cache-time 1s - Disables directory caching entirely when set to 0.
:: --attr-timeout 1s - Controls how long file attributes (stat() results) stay cached. Makes external edits (mtime/size changes) show up instantly.
:: --poll-interval 0 - Disables backend polling entirely. Default for sftp since it has no change notification support.
"%RCLONE_EXE_PATH%" mount "%RCLONE_PROFILE%":"%REMOTE_HOST_PATH%" "%LOCAL_VOLUME_LETTER%": ^
--config "%RCLONE_CONF_PATH%" ^
--log-file "%RCLONE_PROFILE_PATH%\log\rclone.log" ^
--log-level NOTICE ^
--network-mode ^
--volname "%LOCAL_VOLUME_NAME%" ^
--vfs-cache-mode full ^
--dir-cache-time 1s ^
--attr-timeout 1s ^
--poll-interval 0 ^
--cache-dir "%RCLONE_PROFILE_PATH%\cache" ^
-o FileSecurity="D:P(A;;FA;;;%USER_SID%)"
exit /b 0 :: Exit the section
:main
echo About to enter rclone mountDir section... >> "%CTL_LOG_FILE%"
call :mountDIR
echo ...returned from rclone mountDir section. >> "%CTL_LOG_FILE%"
exit /b 0 :: Exit the main script
Check out the following prompts and conversations in the ai_chats directory in the rclone repository to see how I used my AI assistant to help me quickly understand and resolve some performance tuning and permissions questions. The throughput conversation resulted in some early tuning of cache settings for rclone mount from a lower value of .1s to a higher value of 1s; it was a counter-intuitive result, but it made sense after the additional explanation from Gemini AI.
[AI Query]
Why does the rclone mount command in the rclone_mount_ctl.bat file have very slow throughput for a simple file copy operation?
[/AI Query]
[AI Query]
When using winfsp with rclone mount on windows, how can I force files on the windows filesystem mount to be owned by my user and my group sid - and not include everyone?
[/AI Query]
Setup Rclone Mount to Run at Startup
As the final step, we will configure the rclone mount startup script to run during system startup without opening any windows or command lines. To achieve this, we use a utility called SilentCMD to suppress the cmd window and run the startup batch script process in the background. In addition, we will place a Windows shortcut in the directory at Win+R > shell:startup that points at our custom startup script under the RCLONE_BASE_PATH directory.
To download the SilentCMD utility, head over to the SilentCMD github site at https://github.com/stbrenner/SilentCMD and download the latest SilentCMD zip file. Scroll down until you see the download link, as shown in Figure 8.
Place the zip file in your \bin directory under your rclone base path and unzip it. If you retain the zip file, your \bin directory will look like the one shown here when done:
Directory of C:\Users\wcatl\rclone\bin
<DIR> .
<DIR> ..
<DIR> rclone-v1.69.2-windows-amd64
23,257,507 rclone-v1.69.2-windows-amd64.zip
<DIR> SilentCMD
7,330 SilentCMD.zip
With that final tool in place, we can now configure our startup shortcut to run in the background at system startup.
Navigate Windows Explorer to the {RCLONE_BASE_PATH}\usr\script directory. In that directory, right-click the rclone_mount_start.bat file and select Create Shortcut (select Show More Options first, if needed). Next, right-click on the shortcut that gets created and select Properties.
Within Properties, we are going to modify the values in the Target: and Start in: fields. For Target:, set the field value to the following, where {RCLONE_BASE_PATH} is the literal path to the base path of the rclone directory structure created above:
{RCLONE_BASE_PATH}\bin\SilentCMD\SilentCMD.exe {RCLONE_BASE_PATH}\usr\script\rclone_mount_start.bat /LOG+:{RCLONE_BASE_PATH}\log\SilentCMD_global.log
Next, set the Start in field to the following:
{RCLONE_BASE_PATH}\usr\script
Leave all of the other tabs and fields set to their default values.
When you are finished, your shortcut's Properties fields should look similar to the one shown in Figure 9. When you are satisfied with the values shown, click OK to exit.
The final step is to copy the shortcut properties file from the current directory into the directory given by shell:startup. To find out what that directory is, hit the Win+R key combination to bring up the Windows Run dialog. In that dialog, type shell:startup and press enter. That should open a Startup directory in Windows Explorer. Copy-Paste the shortcut that you just created into the Startup directory that you just opened at shell:startup.
Now, restart your machine. Upon restart, you should see an H: drive within Windows Explorer. The mounted backend should look like a regular letter drive, just like the one shown in Figure 10.
Debug PHP Using Xdebug on a Web Hosting Platform
For this first example scenario, we are using a budget web hosting provider that gives us all of the benefits of a managed hosting environment, but also many of the typical restrictions. A common benefit to using a hosted environment is that we will not need to install ssh or sftp on the server—they are already provided for us to use. In addition, for this example using PHP, the hosting provider provides the ability to turn on the Xdebug debugger via a simple checkbox configuration, which is great for facilitating remote development using the hosting provider's server.
An important thing to note is that remote debuggers come in two basic types:
- Remote Caller. The type that calls out to a local debugger. In this case, the debugging client listens on a configured port on our local machine, and inbound connections must be allowed through any routers and firewalls that may be in the loop between it and the remote runtime environment.
- Remote Listener. The type that listens on a port in the remote runtime environment and waits for the debugger to connect. In this case, the remote runtime environment must allow inbound connections through any routers and firewalls that may be in the loop between it and the local development environment.
The key benefit of Remote Caller type debuggers is that typically—assuming broad outbound egress is already allowed from the remote host—it does not require our system administrator on the remote side to open up any special network ports or share your project folder as a network share in any way. This means that any regular directory to which you can ssh can be easily mounted as a project directory by rclone mount and the remote debugger can establish an outgoing connection back to our development environment and attach to our debugger's listener. However, we do have to make sure a return route to your debug client is available, typically by setting up some port forwarding rules on the client ingress side. On the other hand, with Remote Listener type debuggers, the challenge is enabling our local debugging client to be able to reach the server-side listener on a special port and behind a firewall. While the Remote Listener type debugger can require some system administrator support, it is a more familiar pattern in client-server architecture. Both debugging types work equally well, but I find the Remote Listener type debugger—which we will see used by Python in the next example—to be more intuitive.
In the case of Xdebug for PHP, however, the remote debugger component is the Remote Caller type, and your local debugging environment needs to be listening and fully routable from the location where your code is running. With all of the above being said, that leaves two basic things for us to do:
- Configure the rclone mount profile
- Configure the IDE's remote debugger client
Setup the Phpdebug Project in the JetBrains PhpStorm IDE
At last, we finally get to actually use our remote mount! From here, there are very few steps. These will be the simple steps that you will repeat whenever you would like to set up a new remote debugging or configuration project.
We will start by choosing the web site directory automatically provisioned for us by our hosting provider. In the case of the JetBrains IDE, it does not care if we create our project in a new or existing directory. For an existing directory, it will happily prompt us to import the existing contents into our project, which we will do.
Then, we will need to map the server directory path to our project directory path, so that the Xdebug debugger can trace file execution on both sides.
It is here that you will need to perform a small bit of mental jujitsu. In our case, our “local path” is actually our mounted remote project, mounted to the local H: drive using rclone mount. However, Xdebug is still very much aware that there is a separate remote file path for the remote runtime, and it needs to be told the absolute path to the execution base path—in terms of the remote file system—so it can map the two directories for display and tracing during debugging. In Figure 12, we can see a snippet from the Xdebug log that shows how Xdebug references the remote file paths, showing absolute file paths when sending debug info to our debug client.
You may recall, when we setup the H: drive in our custom startup script, we set REMOTE_HOST_PATH=domains. Since our remote login directory is /home/u919949149, that means that our H: drive root is mounting /home/u919949149/domains on the remote system. Then, when we set up our project in the JetBrains IDE, we selected the site directory below the domains directory for our site as our project directory, namely, springgreen-elk-982235.hostingersite.com. Therefore, on the local path, that looks like this: H:\springgreen-elk-982235.hostingersite.com. Even though the H: drive is technically also remote, from the IDE's perspective, it is acting as our local code base for debug tracing and file display. Therefore, we need to map our local code base directory to the remote directory system. In the JetBrains PhpStorm IDE, it looks like the configuration pane shown in Figure 13.
The great thing here is that all we needed to do was mount the remote directory; we did not need to perform a new sync or refresh any local copies of our files since their last use! This property of a file system mounted using rclone mount provides a completely different user experience compared to managing the synchronization of two file sets using bespoke tools and techniques that might vary from scenario to scenario. This strategy eliminates the pitfalls that can arise, such as tracking offline changes and watching out for sync conflicts that can occur with bi-directional syncs that then need to reconcile files on both sides of the sync. In the case of rclone mount, the inherent rules that the remote file system is canonical and that the local files only exist when mounted enforce a very simple and useful constraint for code development purposes that simplifies locking and change conflict semantics and produces a robust and reliable experience.
Setup Xdebug for Remote Debugging PHP
It is outside the scope of this article to detail setting up Xdebug. However, we would just like to remind the reader that Xdebug employs a Remote Caller remote debugging strategy. This means that your local debug environment will host a listener port to which the remote runtime will need to have an incoming routable network route.
Often, this approach will require port forwarding at the public ingress endpoint, typically provided by your router, for your local environment that forwards requests received on the public port to a configured port on your local development machine. You will need to consult your router documentation or contact your network administrator for further details. You will also want to be mindful of any OS-based firewalls in the loop, such as Windows Firewall, which might also be configured to not allow traffic on the incoming network route. See the sidebar “Windows Defender Firewall Hidden Gotcha” for a tip on how to avoid one particularly bizarre potential pitfall with Windows Firewall in Windows 11.
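As one concrete example, on a Windows development machine you can add an inbound allow rule for the default Xdebug port from an elevated command prompt with something along these lines (a sketch; the rule name is arbitrary, and port 9003 assumes the Xdebug default used later in this article):
netsh advfirewall firewall add rule name="Xdebug inbound" dir=in action=allow protocol=TCP localport=9003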
While the full Xdebug setup is not in scope for this article, we want to mention that in addition to enabling Xdebug for our site in the host provider's dashboard, we also need to provide certain parameter values in a configuration file. Our requirement to configure one configuration file also presents a good opportunity to share a powerful trick. Figure 14 shows a screenshot of the alt_php.ini file in the alt_php_conf directory under the project directory, which again, is mounted locally at H:\springgreen-elk-982235.hostingersite.com.
Recall, our remote mounted path, which is mounted to the H drive, is our default login path concatenated with the relative path assigned to REMOTE_HOST_PATH in our custom startup script—namely, domains. It looks like this:
H:\ <-> /home/u919949149/domains
From there, we chose our project directory from below the domains directory:
H:\springgreen-elk-982235.hostingersite.com <-> /home/u919949149/domains/springgreen-elk-982235.hostingersite.com
Lastly, we mapped our html root directory to our top-level domain for Xdebug debugging our test website:
H:\springgreen-elk-982235.hostingersite.com\public_html
<->
/home/u919949149/domains/springgreen-elk-982235
.hostingersite.com/public_html
<->
https://springgreen-elk-982235.hostingersite.com
What you cannot tell from the Project Files view in Figure 14 is that the alt_php_conf directory is not located in the mounted path on the remote system. It is in fact a symbolic link pointing to a directory completely outside of our mounted remote directory path. That directory, however, contains the Xdebug config file that we would prefer to analyze, edit and maybe add to git using the power of our local development environment.
The remote directory contents actually look like what is shown here:
drwxr-xr-x 4 u919949149 o1008307423 4.0K .
drwxr-xr-x 3 u919949149 o1008307423 4.0K ..
lrwxrwxrwx 1 u919949149 o1008307423 23 alt_php_conf -> /etc/cl.php.d/alt-php82
drwxr-xr-x 2 u919949149 o1008307423 4.0K public_html
The command to create a symbolic link to access a directory outside our mounted path from within our project is shown here:
$ ln -s /etc/cl.php.d/alt-php82 alt_php_conf
Note, the symbolic link points to a directory outside of our mounted path, but the directory appears in the project view in our IDE just like a regular directory. This is a good technique for adding scattered resources to a single directory mount, without needing to mount the entire root directory or make temporary copies of directories and their contents for editing. It is also useful when target files and directories that are all needed together to complete a certain task are scattered all over the file system; this technique provides a simple way to capture their locations in a single project for later review and editing. Just be sure to note the owner, group, and permissions on files mounted this way; you may need to use the command line to allow access or reset their ownership. Also, I want to briefly note that we could have mounted a directory that is above our login directory in the directory hierarchy using a construct like “../..” for REMOTE_HOST_PATH in our custom startup script, but I have found that kind of mount to be unreliable for reasons that I don't fully understand at this time.
In the case of our alt_php.ini file, the ownership and permissions aligned perfectly with our rclone mount permissions, so we just needed to add the Xdebug parameters shown in the editor panel in Figure 14. Also shown in the right panel of Figure 14 is a prompt to GitHub Copilot, showcasing how we can bring the power of AI to any task and any file using a remote mounted file system solution. Look at how effortlessly we can use a prompt that knows the context of the current file, and we'll ask GitHub Copilot to use the GPT-4.1 model to let us know all of the other Xdebug options that are available for use in this configuration!
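The exact values belong in the alt_php.ini file shown in Figure 14 and will vary by environment, but a typical Xdebug 3 remote-debugging block—sketched here with placeholders rather than the literal values used for this site—looks roughly like this:
xdebug.mode = debug
xdebug.start_with_request = yes
xdebug.client_host = {public_ip_or_hostname_of_your_dev_machine}
xdebug.client_port = 9003
xdebug.log = {path_to_a_writable_log_file}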
With our directories mapped and our networking port open to allow the runtime environment to send Xdebug messages to our local environment, all that is left to do is to tell our local IDE to start listening for Xdebug messages. In the JetBrains PhpStorm IDE, you do that by setting up a new PHP Remote Debug run configuration to listen for Xdebug messages, and then start it to begin listening on the Xdebug default port 9003. This configuration is very simple and looks like the one shown in Figure 15.
When you then open your Web browser to the URL for test.php with Xdebug configured, Xdebug messages will be sent to your IDE. We have a breakpoint set on line 3, so that is where execution will halt. As you can see in Figure 16, runtime variables are displayed in the expected way, and you can step through your code to inspect the execution.
Once you allow execution to continue, the results will be returned to your Web browser, as shown in Figure 17.
Debug Python on AWS EC2 Micro Instance
In this second example, we are going to show what it looks like to debug some Python code on a remote, free-tier EC2 micro instance in Amazon Web Services (AWS) that we have full control over. This environment shows that productive development can indeed be done on free-tier instances in AWS, as well as how few resources are needed to support the techniques described by this article. While keeping costs down is a priority for most people, it is particularly important for hobbyists and other budget-constrained scenarios.
For this example, we have enabled sftp via the openssh toolset on the remote system and we have configured key-based ssh login using the standard ec2-user identity. Note, even though we fully control this instance, this environment's technical specs would not support installing and running the resource-intensive vscode-server component. As discussed above, sftp is readily available or already installed (as shown in the managed host example above) in most environments as part of the openssh toolset.
The Python debugger we'll use will run in remote listener mode, and therefore it will need to be network routable from our local development environment. Typically, this will be over the public internet and passing through public ingress points and a firewall that allow us to route traffic to the server in AWS. Our sample server is hosted in a public subnet with its own public IP address. Typically, however, development servers will be hosted on a private subnet on a server without its own publicly routable IP address. In that case, if we do not have a VPN setup, we could use a publicly routable host as a bastion host or “jump box” in order to use a ssh tunnel as a bridge to the private host. As you will see below in our final example, we can use the same technique to access a runtime environment hosted in a Docker container that provides a private service to the environment, and it does not have a publicly routable IP address and port of its own.
Setup Access to the Python Micro Instance
Just like for our hosted PHP runtime environment, we first need to configure our remote mount and then map the remote mounted file path to a local file path so the local IDE can map the remote runtime's debug messages to the local file path.
For the mypy3 profile shown here, the profile, host, port, user and key_file values are unique to my setup, as they will be for your system:
[mypy3]
type = sftp
host = 13.219.243.233
port = 22
user = ec2-user
key_file = G:\My Drive\dev_aws_key\aws-micro-2024-kp.pem
shell_type = unix
md5sum_command = md5sum
sha1sum_command = sha1sum
Notice the use of a PEM key file instead of an obfuscated password. Both methods are supported, but your sshd backend must be configured to accept key-based login if you choose the key method.
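If you prefer the password route instead, note that rclone does not store sftp passwords in plain text; you generate the obfuscated value with the rclone obscure helper and paste its output into a pass = line of the profile:

$ rclone obscure "your-sftp-password"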
Next, add the startup parameters section to our rclone_mount_start.bat startup batch file that we saw in Listing 2. Just insert the following new snippet below the existing section for (H:) (hostinger):
:: -----------------------------
:: Python instance (P:) (mypy3)
:: -----------------------------
:mypy3
setlocal
set "RCLONE_PROFILE=mypy3" :: Same as section name in rclone.conf file
set "REMOTE_HOST_PATH=projects" :: Relative path from default login path to use as mount path
set "LOCAL_VOLUME_LETTER=P"
set "LOCAL_VOLUME_NAME=projects-mypy3"
echo About to start rclone mount for %RCLONE_PROFILE% profile... >> "%GLOBAL_LOG_FILE%"
call :start_profile_mount
endlocal
exit /b
Don't forget, you also need to add the new profile name to the space-delimited RCLONE_PROFILE_LIST variable in the same file so that it will run at startup!
:: -----------------------------
:: Profile list - ADD NEW PROFILE NAMES HERE
:: -----------------------------
:: First add new profile name to the following list so the :main code will execute it
:: For consistency, these names should match the section names in rclone.conf file
set "RCLONE_PROFILE_LIST=hostinger mypy3"
Note that the startup shortcut we created in the prior section does not need to be modified when we add or remove remote mounts. To add the new mount to your system, just reboot your machine and you should see a new P: drive mount at startup.
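If you would rather verify the new profile before rebooting, you can also run rclone mount by hand from a command prompt. The full set of flags comes from the mount script shown in Listing 2; a trimmed-down sketch that confirms the profile works looks like this:

rclone mount mypy3:projects P: --volname projects-mypy3 --vfs-cache-mode writes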
Setup VS Code Launch Configuration
For file mapping, VS Code uses a launch configuration located in a file called launch.json (stored in the project's .vscode folder). In the VS Code editor, launch.json looks like what is shown in Listing 4. After we execute the target file that we want to debug in the remote runtime using a ssh terminal and debugpy, that file's runtime will start a listener on port 5678 and halt execution, waiting until a debugger connects and takes control of stepping through the program's execution.
Listing 4: launch.json showing the debugger launch configuration in Microsoft VS Code, which defines the target host and port as well as the path mappings.
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit:
// https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "mypy3 Python Debugger: Remote Attach",
"type": "debugpy",
"request": "attach",
"connect": {
"host": "13.219.243.233",
"port": 5678
},
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "/home/ec2-user/projects/mypy3"
}
]
}
]
}
One more point to make about the VS Code setup. Notice in Figure 18 that the "Remote - SSH" extension (which requires the resource-intensive backend vscode-server installation) is not needed here. Further, not even the "SSH FS" extension, which provides a sftp client for VS Code, is needed. Neither of those extensions is required because we are presenting the remote mount on a local file path (P:) using rclone mount and providing the mapping to the remote file system in the launch.json file. Figure 18 shows that just four Python language support and debugging extensions are present, along with two GitHub Copilot extensions to showcase the full power of bringing your local environment and AI assistant to any task.
Setup Debugpy for Remote Debugging Python
Now, let's run the code that we wish to debug using debugpy on the remote machine. For scenarios where you want to start execution and immediately wait for a debugger to take control, but you are not executing the code from a command line, you can also add Python code directly into your source to set up the debugging session. However, for this example, we will use debugpy from the command line, as shown in Figure 19.
First, we ssh into the remote runtime environment using the same key-based authentication we used to set up the mypy3 profile in rclone.conf above. After activating the virtual environment to obtain the exact runtime environment that we want, we start code execution with debugpy. Setting the --listen parameter to 0.0.0.0 tells debugpy to listen on port 5678 on all available network interfaces. This command begins code execution but then automatically waits for us to connect our debugger before executing even a single line of code. For us, that means our next step is to start our launch configuration in the VS Code IDE, which will initiate a connection from our local IDE to the remote runtime environment.
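For reference, the full sequence on the remote host looks roughly like the following; the host, key, port, and hw.py script come from the configuration above, while the virtual environment path is only an assumption for illustration:

$ ssh -i "G:\My Drive\dev_aws_key\aws-micro-2024-kp.pem" ec2-user@13.219.243.233
$ source ~/projects/mypy3/venv/bin/activate
$ python3 -m debugpy --listen 0.0.0.0:5678 --wait-for-client ~/projects/mypy3/hw.py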
As shown in Figure 20, once we click the "mypy3 Python Debugger: Remote Attach" launch configuration button that VS Code provides from our settings in launch.json, the debugger initiates an outbound connection and attaches to the remote host and port shown in Listing 4. Once that connection is made, the debugger takes control of code execution and stops at the first breakpoint it hits, as also shown in Figure 20.
As mentioned, the code execution will halt at the first breakpoint that we set; in this case, on line 5. At that point, variables are available to be inspected and the step controls (top center of the main panel) are available for stepping through the code.
As you can also see in Figure 20, we again have the full power of GitHub Copilot in the right pane. GitHub Copilot Chat shows the response to our prompt: Is there an error in this code? The response advises us that the comment on Line 1 is potentially misleading, since we do not have a breakpoint set on Line 1 and the comment is indicating that execution will halt on that line automatically while debugging.
While it is true that, when we run with debugpy, code execution halts immediately while waiting for the debugger to connect, and it is also true that code execution halts at our breakpoint on line 5, it is not true that execution will ever halt at line 1, which is what GitHub Copilot has identified for us. Lastly, in pointing out this subtly misleading statement (in a comment, no less), GitHub Copilot further advises us that we could use in-line Python code to ask the runtime to wait, even where the script was not initially run using debugpy. The method shown to us by GitHub Copilot to achieve this result is debugpy.wait_for_client().
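A minimal sketch of that in-code approach, which pauses the script itself until a debugger attaches (reusing the same port as above), looks like this:

import debugpy

# Listen on all interfaces so the IDE can reach the debug port remotely
debugpy.listen(("0.0.0.0", 5678))
print("Waiting for a debugger to attach...")
debugpy.wait_for_client()  # execution pauses here until the IDE connects
print("Debugger attached; continuing.")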
Debug Python in a Remote Container
In this final example, we will leverage the same techniques as the second example, again using Python as the target runtime, except that this time we will be using the JetBrains PyCharm IDE and we will be modifying a code base that runs inside of a Docker container. In the case of Python, we do have a very good option, shown above, of using virtual environments on the host system to develop against different versions of Python with different library versions. However, virtual environments do require at least some setup, and the same technique may not be as portable or full-featured for other runtimes, such as Perl or Ruby. In those cases, where an isolated virtual environment may not be well supported natively, or where it may not be easily portable for re-use, being able to develop inside of a container and then capture all of your changes in a container image is a great option. With containers, all of your dependencies are isolated from the host system's files and can be captured together for distribution, either to the world or just to QA and Prod. Containers are endlessly versatile and can provide the same isolation and portability capabilities for any development platform that you can imagine.
Containers are also great for when your remote environment gets more complex, perhaps including support for other external integrations that can also be easily captured in your container image and shared or promoted, as the case may be. This has real relevance, for example, when developing AI language models with significant data processing dependencies, or when developing an e-commerce solution that integrates with an external payment processing dependency. You can bring the power of your local development tooling to each of these containerized runtime scenarios—and many more—wherever they are running!
Setup Access to the Python Container
Describing all of the steps for configuring a Docker container is outside the scope of this article, but we will again mention a few key things. For our example, we have picked a simple container based upon a minimal Ubuntu Linux, and installed a couple of basic utilities and Python in our Dockerfile. Listing 5 shows our Dockerfile for building and running our container.
Listing 5: Dockerfile for Ubuntu Linux container running on AWS EC2 micro instance.
FROM ubuntu:22.04
# Before creating a container from this image, first generate the
# ssh key for devuser on the host system for ingestion into the
# container: $ ssh-keygen -t ed25519 -f ./pydock_devuser_key -N ""
USER root
# Avoid interactive prompts from apt during the image build
ENV DEBIAN_FRONTEND=noninteractive
# Install Python and PyCharm's required utilities
RUN apt-get update && apt-get install -y \
python3 \
python3-pip \
bash \
coreutils \
tar \
curl \
openssh-server \
openssh-client \
&& rm -rf /var/lib/apt/lists/*
# Set up a non-root user if your project requires it
RUN useradd -ms /bin/bash devuser
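# Lock password login for devuser (the "*" hash never matches); only the ssh key below will work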
RUN echo "devuser:*" | chpasswd -e \
&& chmod 700 /home/devuser
USER devuser
WORKDIR /home/devuser
USER root
# Add the public key to authorized_keys
RUN mkdir -p /home/devuser/.ssh && chmod 700 /home/devuser/.ssh
COPY ./keys/pydock_devuser_key.pub /home/devuser/.ssh/authorized_keys
# Configure SSH daemon
RUN mkdir -p /run/sshd \
&& chmod 600 /home/devuser/.ssh/authorized_keys \
&& chown -R devuser:devuser /home/devuser/.ssh \
&& if grep -q '^Port' /etc/ssh/sshd_config; then \
sed -i 's/^Port.*/Port 2223/' /etc/ssh/sshd_config; \
else \
echo 'Port 2223' >> /etc/ssh/sshd_config; \
fi \
&& ssh-keygen -A
CMD ["/usr/sbin/sshd", "-D"]
EXPOSE 2223
We have configured the container's sshd to listen on port 2223 and will publish that port on the host system's private interface. This is intentionally different from the host system's own sshd port of 22. By choosing a different port, we can then choose between two strategies for gaining ssh and sftp access to the container file system.
After we build our container image and run the container, the container's sshd and sftp will be listening on port 2223 on the instance's private IP interface:
$ docker build -t ubuntu2204-python3 .
$ docker run -d -p 172.31.0.148:2223:2223 --name pydock-dev-container ubuntu2204-python3
We set GatewayPorts to yes in /etc/ssh/sshd_config so that a remote ssh tunnel can listen on port 2222 on more than just the loopback interface, and we opened our AWS Security Group firewall to allow traffic from our development machine's public IP address to port 2222. We are now ready to set up a ssh forwarder to forward traffic from host system port 2222 on the instance's public IP interface to our container port 2223 on the same host instance's private IP interface:
$ ssh ec2-user@localhost -i ~/.ssh/aws-micro-2024-kp.pem -N -R 13.219.243.233:2222:172.31.0.148:2223
Note that the tunnel's target IP address (i.e., 172.31.0.148) could have been any IP address routable from the current EC2 instance. However, we used the same EC2 micro instance here as in the prior example to showcase the versatility of our very lightweight remote mount toolkit, even when working with a very low resource platform. Now, with the EC2 instance's inbound security group configured to allow our local development environment's outbound public IP address and ports to connect on port 2222 of the host, we can configure our rclone mount profile in rclone.conf to access our containerized runtime environment using the container user, namely devuser, and our forwarded port, 2222.
Therefore, our rclone.conf profile for this scenario looks like the one shown here:
[pydock]
type = sftp
host = 13.219.243.233
port = 2222
user = devuser
key_file = G:\My Drive\aws_dev\keys\pydock_devuser_key.pem
shell_type = unix
md5sum_command = md5sum
sha1sum_command = sha1sum
Since this is the same instance shown in the prior example and is hosted in a public subnet with a public IP address, we could have bound the container to the public interface and opened up the AWS Security Group for the instance to allow direct access to the container on port 2223. However, using the forwarding rule allows us to use this same technique if we wanted to forward traffic to a different, perhaps private, host that does not have a public IP address, so we showed it here.
Note that other port forwarding options that use iptables / nftables or socat might also be considered, but those may need to be installed using your system package manager if they are not already available. In some ways, they are simpler to configure and can provide higher throughput and lower CPU utilization than ssh. However, despite some extra key management work, the ssh forwarding strategy is available wherever we already have ssh services, and it is the one we opted for here. You can check out the following AI prompt and research conversation in the ai_chats directory of the pydock_build git repository if you are interested in learning more about port forwarding tools and options.
[AI Query]
What is the ssh command to setup an ssh tunnel from a port on the public ip to a port on the private ip of the same aws ec2 instance?
[/AI Query]
Since that link will expire 13 months after being created, a PDF snapshot of that conversation is also included with the downloads for this article.
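For comparison, a rough socat equivalent of the ssh forwarder above (assuming socat has been installed on the instance) would look like this:

$ socat TCP-LISTEN:2222,bind=0.0.0.0,fork,reuseaddr TCP:172.31.0.148:2223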
Lastly, we need to add the following section to our custom mount script to mount our runtime container to the D: drive.
:: -----------------------------
:: Python Docker container (D:) (pydock)
:: -----------------------------
:pydock
setlocal
set "RCLONE_PROFILE=pydock" :: Same as section name in rclone.conf file
set "REMOTE_HOST_PATH=runtimes" :: Relative path from default login path to use as mount path
set "LOCAL_VOLUME_LETTER=D"
set "LOCAL_VOLUME_NAME=pydock-runtimes"
echo About to start rclone mount for %RCLONE_PROFILE% profile... >> "%GLOBAL_LOG_FILE%"
call :start_profile_mount
endlocal
exit /b
Again, we need to add our new profile / mount code section to the RCLONE_PROFILE_LIST to make sure the code section gets executed and mounted by our scripts. The following code snippet shows pydock added to the list:
:: -----------------------------
:: Profile list - ADD NEW PROFILE NAMES HERE
:: -----------------------------
:: First add new profile name to the following list so the :main code will execute it
:: For consistency, these names should match the section names in rclone.conf file
set "RCLONE_PROFILE_LIST=hostinger mypy3 pydock"
To add the new mount to your system, just reboot your machine and you should see a new D: drive mount at startup.
Setup PyCharm Run Configuration
As you would expect, the JetBrains PyCharm IDE has very strong support for Python development. As you will see below, we can achieve a fully integrated development experience, while still leveraging our preferred pattern of using rclone mount to provide system level support for accessing and updating files on the remote file system. In PyCharm, we do not need to use the command line to launch our script like we did in VS Code. We simply need to tell PyCharm where the remote Python interpreter is located and how to map the remote filepath to our local filepath for debug tracing. That's it. Importantly, we did not configure PyCharm to use any of its own integrated file sync options to sync files on the remote.
So, briefly, the Remote Python Interpreter setup in PyCharm looks like Figure 21.

In Figure 22, we can see the complete SSH Configuration used by our Remote Python Interpreter to provide access to the remote environment. You can look back at the field labeled SSH Server: in the dialog in Figure 21 to see how the SSH Configuration is referenced.

Lastly, the Run/Debug Configuration is shown in Figure 23. Because PyCharm offers seamless integration with remote Python interpreters, after our Run/Debug Configuration is properly configured as shown, we simply need to click the Debug button in this dialog or from the main editor window in order to start a debug session. We do not need to use the command line like we did in VS Code. PyCharm installs some helper code onto your backend system to enable this integration, but the total file size is around 60 MB, and it installs without any elevated permissions being needed.

As shown in Figure 24, our remote debug session starts, and we have the full power of all the tools in our local development environment to work on this code right inside our container.
Closing Remarks
I hope that the above tips for setting up and using rclone mount with sftp backends to debug remote environments open new opportunities for you to use your best development tools and techniques in new target environments. I particularly hope that having this capability as a reliable part of your system-level infrastructure, rather than as a bespoke feature of one tool or another, will encourage you to find ways to leverage it to bring AI-augmented tools to your coding and configuration tasks in any environment. I also encourage you to apply this technique to other interpreted languages in other environments that are perhaps resource constrained, or where it is simply easier to mount the remote code than to sync a copy or install a heavy backend component.
Opportunities abound for enhancing remote development patterns, and another area for which I am currently developing patterns is serverless scenarios. Please do share your feedback on your favorite serverless development pattern, or on anything else discussed in this article!