2026 is the year of Linux (in Linux (in Windows)) on the desktop

    I'm not a very Windows-focused computer user. In fact, I think the last era I used it really extensively, with any expertise, was the 16-bit era of Windows 3.1 - 3.11. I quite liked those versions, which operated somewhat more like an integrated interface SDK for MS-DOS than an independent operating system. As the commercial internet boom took hold, and work became focused almost entirely on building web applications targeting UNIX-like deployment environments (typically linux), I shifted over to working natively in that environment, and never looked back. So I don't really know how to use modern Windows at all comfortably, and I don't have any personal ease when working with its native interface.

    How I typically organise my development environments

    Normal development tooling for me is a simple GUI windowing environment, running on Debian linux. I (still) use GNU emacs, or increasingly zed, for almost everything that isn't in a web browser, alongside a handful of running terminals, an email client, some kind of file browser, and a music player, and that's about it. My development projects I tend to isolate into containers, using lxc/lxd and more lately incus, to give me "lightweight" virtual hosts nested on my computer, connected by a virtual ethernet LAN. It's effectively a local 'cloud', offering dependency and process isolation for your work, plus powerful features like snapshotting/checkpointing and image templating. What I particularly like about the incus approach, compared say to other container systems like docker, is that the abstraction is well suited to longer-lived, stateful development systems - you partition one linux system into a few dozen smaller, more specialised linux systems, with their state hidden from each other. The unit of containment is 'operating system'. Whereas with docker-like containment, I think the unit of abstraction is something more like 'one process, with all of its dependencies bundled', which makes more sense to me for one-shot tasks, or as a deployment target.
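
    A minimal sketch of that workflow (the container and snapshot names here are just examples):

    incus launch images:debian/12 webproj         # one 'system' per project
    incus snapshot create webproj pre-upgrade     # checkpoint the whole system
    incus snapshot restore webproj pre-upgrade    # roll back if things go wrong
    incus list                                    # the little LAN of containers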

    Previously

    Back in the 90s(!), when computers were a lot less powerful than they are today, I commonly used emacs' 'tramp' or Ange-FTP modes to develop remotely on a development or staging server, from a thinner client. Now I use the same approach, and the same tools, to develop in my container environments, effectively treating them like a 'remote' host, even though they're only conceptually remote. Tramp, just like most of emacs, is a kludgey wonder - it encodes the information about remote endpoints using pseudo-paths, like /ssh:user@host:/path/to/something, and emacs just works out how to edit your files. Under the hood, it strings together a glue of subprocesses and temporary file copies and the like, to take your editing and reflect it on the remote environment. And not just files. This is emacs! Almost everything in emacs works tramp-aware, so you can browse the remote filesystem using dired, launch processes for compilation or linting, use git workspaces with magit-mode, run interactive shells and debuggers, and build and run your projects; it has extremely high levels of DWIM. The main price you pay is occasional latency, as things shuttle back and forth, or buffer in and out of pipes, but I'm very used to this. And compared with the days when I used to use tramp to work over dialup(!) links to servers, this modern container approach is practically turbo-charged. Eat my dust!
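
    For instance, handing a tramp path to emacs from a shell (the host and paths here are hypothetical):

    emacs /ssh:me@devhost:/home/me/project/          # opens dired on the remote directory
    emacs /ssh:me@devhost:/home/me/project/main.c    # edits a remote file in place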

    Zed similarly offers remote editing over ssh connections, with a slightly different architecture. The zed remote feature is a little more modern, like VSCode's - after you ssh to the remote, it downloads a headless install of the editor there, and then proxies to that backend using its own protocols over ssh. The net effect is the same though: you work on your local keyboard, screen and mouse, but your working environment is on the remote host. An advantage zed offers over tramp's elegant hackaround is that the latency is considerably reduced; it's not copying files backwards and forwards and slowly launching tasks in shells. A disadvantage is that it's not emacs, and its tooling for lots of things (like git, or file browsing) is not as advanced or as comprehensively scriptable. It's not uncommon for me to have both zed and emacs buffers attached to the same remote development context.

    So, that's what development tends to look like for me. One or two graphical editors, on my main desktop. Many persistent projects mapped to running containers (and maybe remote hosts), with various different projects open in them for work. I get a persistent, consistent user interface over diverse projects, all of them in a full linux environment, each completely isolated from the others, but networked.

    Windows re-enters the conversation

    In my current job, we're using Windows, and the whole Microsoft business stack, and we have an IT-managed network. It's a bit of a change from what I'm used to. But, unlike a lot of software developers, I've found that I like change! (Often, you learn stuff. So what have I learned?)

    I've been issued with a pretty nice Microsoft Surface-branded laptop. The hardware, at least, is nice (and higher spec than any of my own current computers). The software is, of course, Windows, which I remain suspicious of. The Surface runs it like a champ, of course.

    Interestingly enough, modern Windows understands that the majority of software deployment is now to linux, or linux-like environments, and the developer tools include an integrated linux-based toolchain. It's called 'Windows Subsystem for Linux' and it's on its second major version - WSL 2. WSL 2 basically integrates a virtualized linux kernel running inside the Windows environment, which can be used much as I've described my container approach above - you have a virtual linux host, with its own filesystem and processes, and a convenient interface between this and the host system.
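
    On recent WSL builds, wsl.exe can tell you what you have, from a Windows shell:

    wsl --version          # WSL and kernel version information
    wsl --list --verbose   # installed distros, and whether each is WSL 1 or 2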

    You can run your graphical applications, and even your IDE (Visual Studio Code, presumably :-)), and browser (Edge, your AI-powered browser!) in your graphical desktop, and have a local 'server' for development. WSL integrates with Docker Desktop for Windows, allowing your docker containers to run natively in the linux environment, and you can even install and run multiple WSL instances to have different isolated linux 'back-ends'. It's a compelling work narrative, but it is founded on the idea that your goal state is using Windows for all your user-facing software and interface. However - what if you don't want to?

    It's a UNIX system, I know this!

    Because the WSL environment is an optimised full linux VM, it seemed to me that I might even be able to treat it like a remote linux system, and move my existing workflow over - use a linux desktop to remotely access a local linux "server" that just happens to be Windows, and run development inside there using my typical approach of multiple contexts isolated into separate system containers. That's more like my idea of the best of both worlds - my work computer can be a locked-down, managed enterprise client, I can get good use from the fancy hardware, but still maintain the toolchains and client interfaces I'm most comfortable using. Assuming, of course, that I could get it all to work...

    Well, it took a bit of fiddling, but I'm here to say it works, and works well! My Windows laptop runs on my desk, my software projects run on it inside incus containers, and I access them from emacs or zed or terminal windows, on my ancient, creaking linux desktop system. I can easily run as many of these containers as I can fit into my 24GB of WSL memory (actually quite a lot of headless containers, in my experience) - voila! My Windows laptop is now a cloud host provider!

    Here's a detailed walkthrough of how I set it all up. Please note that you will need a local Windows admin account to make some of the necessary configuration changes to the Windows side of things, but aside from a couple of privileged config changes, all of this can then be run as a non-administrator user account, which is another great benefit.

    (Because I set this up originally a year ago, my instructions are written from before Debian 13 was promoted to stable, which is why you'll see me using 'Bookworm' in a few places.)

    The setup

    First: Set up WSL

    Firstly, you need to install a WSL2 environment. I picked Debian (which is supported), and initialised it. I then upgraded the Debian installation from bookworm (12/old-stable) to trixie (13/testing) using apt, because incus is packaged as part of trixie. I was then able to install incus using apt, and follow the incus initialisation and setup instructions from the project documentation. I quickly launched a couple of bare-bones containers to check that things were working as expected.
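
    For reference, the rough shape of those steps as I'd expect them to look (treat the exact sources edit as an assumption about your apt configuration):

    wsl --install -d Debian                   # from a Windows shell
    # ...then inside the new WSL Debian shell:
    sudo sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
    sudo apt update && sudo apt full-upgrade -y
    sudo apt install -y incus
    sudo adduser $USER incus-admin            # log out and in again to pick this up
    sudo incus admin init                     # accept the defaults to start with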

    Inside your WSL shell, assuming your user account is in the incus and incus-admin groups (check this with the id command), you should just be able to run incus launch images:debian/12 - this should download a base debian image, and launch it with a generated container name.

    You can see it running with incus list, and launch a shell within it with incus shell <container-name> - please read the lovely docs for more such hints, this post is not intended to be an incus user guide ;-)

    The next thing I did was add some additional WSL configuration by creating a .wslconfig file in my User home directory on Windows - this is a plain text ini file. I was pleased to find that Notepad.exe still exists in 2024, and can be used to create this file :-)


    [wsl2]
    memory=24GB
    nestedVirtualization=true
    networkingMode=mirrored

    This is relatively self-explanatory. I'm giving most of my 32GB of RAM to the linux VM (because I'm not really using the Windows side). I'm enabling nestedVirtualization; although I don't think this is a prerequisite for running incus containers, it sounds like something I'll probably use at some point. Finally, and most importantly for this case, I'm setting networkingMode to 'mirrored' - this replicates the Windows networking devices and configuration inside the linux VM, meaning we can connect directly to the linux system from the network, without having to set up port forwarding or anything like that.

    Once you've created the file you need to restart WSL for it to take effect. The easiest way to check whether it's working is to look at your available system RAM in linux using free - it should have changed to roughly 24GB. The next stage is to set up Windows to allow client connections from the LAN.
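
    Restarting is a matter of shutting the VM down from the Windows side and re-opening the shell:

    wsl --shutdown    # from a Windows shell; the VM restarts on next use
    # ...then, back inside the linux shell:
    free -h           # 'total' should now read about 24G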

    Windows Networking

    We also want to be able to connect to our virtual linux box conveniently from the LAN. This requires a few things. Firstly, we need a stable network address or name. Secondly, we need to allow incoming network connections. This part requires enough Admin privileges on the Windows host to change networking settings.

    I redefined the network adapter settings in Windows to use a static IP for this LAN, and added a DNS name for it in my local resolver. I set this network configuration up as a 'Private' network profile. The next step is then to configure the Hyper-V firewall on Windows to allow incoming connections to pass through to the VM. Running a powershell window as admin, I added firewall rules to allow this for the private network profile. In this way, I can ensure that the host is only accessible like this on trusted networks.

    The WSL VM has a fixed identity string (the VMCreatorId), a GUID, which is 40E0AC32-46A5-438A-A0B2-2B479E8F2E90, so the command you need is something like

     Set-NetFirewallHyperVProfile -Profile Private -Name '{40E0AC32-46A5-438A-A0B2-2B479E8F2E90}' -DefaultInboundAction Allow 

    Now incoming connections on the Windows IP interface will be receivable in the WSL VM. Enable sshd in the WSL environment, and then check that you can ssh to the network address. You should get a login inside the WSL environment. If you run incus list here, you should see any running incus instances.
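
    On a Debian WSL image, enabling sshd looks something like this (it assumes systemd is enabled in your WSL distro via /etc/wsl.conf, which recent images do by default):

    sudo apt install -y openssh-server
    sudo systemctl enable --now ssh      # Debian's sshd unit is called 'ssh'
    # ...then, from another machine on the LAN:
    ssh me@my.windows.box                # the name you gave the static IP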

    Bonus networking: Set up ssh proxying and multiplexing

    You can now access incus containers on the WSL instance from a remote emacs, if you use the incus-tramp method and tramp's multi-hop proxying. Access a path like /ssh:me@my.windows.box|incus:me@container-name:/path/to/project in emacs and everything should be there. Relying on additional tramp stages for proxy chaining, although a very neat trick, can bring problems with performance and reliability, and it is simpler to push the extra hops into the ssh layer.

    This involves a bit of glue code, which looks hideous, but works very well.

    The ssh glue

    Set up 'ControlMaster' for ssh, which allows repeated ssh connections to the same host to re-use an established ssh session. This will speed up the time taken to open new sessions, and noticeably improve the responsiveness of tramp for ssh remotes. Secondly, use a ProxyCommand directive to connect a single ssh connection through a proxy session. Finally, you can use a wildcard Host rule, matching a made-up host suffix, to route connections straight through into an incus container on a specific host. Here are the relevant entries from my ~/.ssh/config


    Host *.wsl
        ProxyCommand ssh my.windows.box incus exec $(echo %h | sed 's/\.wsl$//') --user 1000 -- /usr/bin/nc -q0 localhost 22
        ForwardAgent yes

    Host *
        ControlPath ~/.ssh/master-%h:%p
        ControlMaster auto
        ControlPersist 10m

    In ssh configuration files, the first applicable setting is the one that will be used, so we should order the file from most specific towards most general.

    Here, we're using a fake 'domain' of .wsl, and converting the connection into an ssh to the Windows host, which immediately launches incus, getting the container name by chopping the '.wsl' off the provided hostname, and running 'netcat' in the container to proxy our ssh session through to the ssh server inside the container. With this little piece of ugly glue, we can run ssh container-name.wsl and immediately get an ssh session directly into the running container called container-name.

    The ControlMaster block ensures that we re-use an established ssh control session for all connections to the same host, and persist it for 10 minutes after the last connection exits, to improve reconnection times.
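
    You can see both pieces working with a quick timing test (hostname as in the config above):

    time ssh my-container.wsl true   # first connection pays the full proxy setup cost
    time ssh my-container.wsl true   # subsequent ones reuse the master, near-instant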

    With this piece in place, we can access files, shells, and processes from emacs buffers, using a tramp path of /ssh:my-container.wsl:/path/to/project - or launch a zed session in a remote project directory, using their ssh 'remote project' feature.
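
    For zed, I believe recent builds will also take an ssh URL straight on the command line, something like the following, though treat the exact form as an assumption (the in-editor 'remote project' flow works regardless):

    zed ssh://me@my-container.wsl/home/me/project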

    Surviving restarts

    The main, and perhaps the only real, downside of this approach is that Windows likes to restart, often - usually in batches once or twice a week. This is a combination of updates from remote IT and Microsoft, I suppose. Rather than get too aggravated about it, I prefer to think of it as a free chaos monkey.

    I can alleviate most of the pain points by making everything as restartable as possible. WSL can be set to start as soon as you log in by tweaking a few settings in the cmd.exe application:

    • set the default shell to be Debian to match my WSL container session
    • set cmd to start on login

    This means that after a boot, all I have to do is log in to resume the WSL state

    • The incus containers can be set to automatically start at boot with incus config (see the sketch after this list)
    • I have assigned both the incus container and the Windows machine static IPs on my LAN, so they reboot with a stable address.
    • In WSL I have Caddy set to reverse proxy from Debian to the incus address; this is configured in /etc/caddy and run from a systemd unit, so it restarts on boot
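
    The incus side of that, as a sketch (the container name is an example):

    incus config set my-container boot.autostart=true         # start with the daemon
    incus config set my-container boot.autostart.priority=10  # optional start ordering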

    The local "staging" version of any webservices I am working with I typically run in Docker, inside the incus container (as my user account) - I usually write a shell script to launch docker with the right port-forwarding and data-persistence flags for whatever I want to be running (for more complex setups this could be a docker compose configuration). I simply put this shell script into my user crontab, using the magic @reboot trigger directive to launch it after multi-user init, as me, with just a one-liner.
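
    A sketch of what that looks like; the script name, image, and ports are all hypothetical:

    #!/bin/sh
    # ~/bin/run-staging.sh - launch the staging service with its flags
    exec docker run --rm -p 8080:8080 -v "$HOME/staging-data:/data" my-staging-image

    # ...and the one-liner in 'crontab -e':
    @reboot $HOME/bin/run-staging.sh >> $HOME/staging.log 2>&1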

    With these configurations in place, all I need do is log in to Windows (with my face 😘) to resume running services where I left them.
