1. 55
christine.website

2. 37

I’m primarily a Windows developer, and relate to the frustration of Windows development.

However, reading this article, most of the complaints seemed related to initial setup: yes, you have to install git; yes, it installs its own bash; yes, vim doesn’t know what to do with the Windows clipboard out of the box, but vim can be configured to do anything; yes, PowerShell came from an era where being conspicuously different was considered a virtue, but you’re free to use any other tool; etc.

The type of thing that makes me lose my mind about Windows as a platform is trying to deliver anything to a customer in an end-to-end way. On Linux, you often end up writing code…and that’s about it. Each distribution will package and update your code in their own way. They might get it wrong, but they’ll try. On Windows, updating your program is your problem. Depending on how you count, there’s either zero or a bajillion systems for updating code, but you can’t assume your users are using any of them, so you end up having to write your own.

Users want to have precompiled binaries, but then they’ll be greeted with a slew of scary warnings, unless your code is signed, so you have to deal with that as a code author. Other platforms will have a standardized package install model; on Windows, the user’s running some executable you provided, so implementing every conceivable setup configuration is on you (see the git installer).

On other platforms a dist-upgrade will upgrade various things as a set; on Windows, you have to assume that the entire OS can move underneath your program and your program has to work. And you can’t expect users to help - how many users really know which version of Windows 10 they have? - so your program has to run on all of them.

There’s just a kind of cognitive burden with every program having to independently reinvent every wheel. The solutions are well known, but it’s just so…painful.
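To make the “reinvent the update wheel” point concrete, here is a minimal sketch of just the version-check half of such a home-grown updater. All names and the version format are hypothetical; a real updater also needs downloading, signature verification, and rollback:

```python
# Sketch of the "is there a newer version?" step every Windows app ends up
# rewriting for itself. Function names and the dotted-version format are
# illustrative assumptions, not any particular product's API.

def parse_version(s):
    """Turn '1.10.2' into (1, 10, 2) so versions compare numerically."""
    return tuple(int(part) for part in s.split("."))

def needs_update(installed, latest):
    """True when the published version is strictly newer than ours."""
    return parse_version(latest) > parse_version(installed)

print(needs_update("1.9.0", "1.10.0"))  # → True (numeric compare: 10 > 9)
```

Note the tuple comparison: a naive string compare would wrongly rank "1.9.0" above "1.10.0", which is exactly the kind of bug that creeps in when every vendor writes this independently.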

1. 7

On Linux, you often end up writing code…and that’s about it. Each distribution will package and update your code in their own way.

Only if your program is both open source and popular. The overwhelming majority of programs aren’t. Case in point: I spent hundreds of hours (spread over 4 years) writing a small, easy-to-use crypto library. I have users, some of whom even wrote language bindings. The only distribution packages I know of are for Void Linux, Arch Linux, and NetBSD. No Debian, no Red Hat, no Gentoo, and most of all, no Ubuntu.

Not that it really matters. This is a single-file library we’re talking about, which you can easily bundle in your own source code. But I did go out of my way to have a bog-standard, easy-to-use makefile (with $PREFIX, $DESTDIR, $CC and all that jazz). Packaging it ought to be very easy. Yet no one stepped up for any of the major distributions out there.

They might get it wrong, but they’ll try.

The very fact they might get it wrong, in my opinion, suggests that packaging itself may be a bad idea to begin with. Linus Torvalds goes out of his way never to “break users”. We should be able to take advantage of that, but it would require abandoning the very concept of a distribution, or at least specialising it.

A distribution is mostly a glorified curated repository of software. Ideally a coherent whole, compiled, or even designed, to work together. The people managing it, the packagers, have made themselves responsible for the quality and security of that repository. Security, by the way, is the trump card they play in dynamic vs static linking debates: upstream devs can’t all be trusted to update their software fast enough, so when there’s a vulnerability in some library, we ought to be able to swap it out and instantly fix the problem for the whole distribution. Mostly though, it’s about making the life of packagers easier.

Now I have no problem with curated repositories of software. What I have a problem with is the exclusivity. In most cases, there can be only one. One does not simply use Debian and Red Hat at the same time. They don’t just distribute software, they pervade the whole system, including the kernel itself, which they somehow need to patch. This effectively turns them into fenced gardens. It’s not as bad as Apple’s App Store - you can go over the fence - but it’s inconvenient at best.

So: Linux distros won’t package my software, and when they do, they might get it wrong anyway.
Which means that in practice, I’ll have half a dozen systems moving under my feet, and I can only hope that my software will still work despite all those updates everywhere. Just like Windows, only worse. And just like on Windows, there’s only one solution: “Find your dependencies. Track them down, and eliminate them.” Ideally, we should only depend on the kernel. Maybe statically link everything, though if we’re short on space (??) we can lock those dependencies instead, like NPM or Cargo do at the source level, and Nix (I think? I haven’t checked) can do at the binary level. On the flip side, that means you need to handle stuff like installation and updates yourself, or have a library do it for you. Just like Windows. Problem is, it’s not even possible, because of how distributions insist on standing between users and developers.

On Windows, updating your program is your problem.

As it should be. You wrote that program; you should be responsible for its life cycle. Distribution maintainers really got screwed when they realised that a bad program may undermine the distribution’s reputation. Though we may not like the idea of each program having its own update code, that update code can be as small as 100KB, including the cryptographic code (modern crypto libraries can be really small).

Users want to have precompiled binaries, but then they’ll be greeted with a slew of scary warnings, unless your code is signed, so you have to deal with that as a code author.

That, however, is something Windows is doing very, very wrong. Especially since signing your binaries is not enough: they decide whether your reputation warrants a warning anyway. This practice turns Microsoft into one giant middle man. They go as far as staking their reputation on the list of trusted authors and programs. While it does result in fewer users getting viruses, it also acts as yet another centralisation force, yet another way for huge entities and corporations to have an edge over the little folk.
(An even more blatant example is how big email providers handle spam.) This is one of the few places where the solution is to tell everyone to “git gud”. That means teaching. People have to know how computers work. Not just how to use Microsoft® Word®, but the fundamentals of computing, and (among other things) what you can expect when you execute a random program from some shady web site. We don’t have to teach them programming, but at least let them try Human Resource Machine. Only then will it be safe to stop treating users like children. Heck, maybe they’ll even start to demand a better way.

There is one thing for which a coherent curated repository of software is extremely useful: development environments. Developers generally need a comprehensive set of tools that work well together: at the very least a compiler, editor, version control, dependency management, and the actual dependencies of the program. It’s okay if things break a little because of version incompatibility; I can always update or fix the program I’m writing. Less technical end users, however, need more stability. When a program works, it’d better still work even when the system moves under its feet. The OS ought to provide a stable and sufficient API (ABI, really) on which everyone can rely.

1. 4

On Windows, updating your program is your problem.

As it should be. You wrote that program, you should be responsible for its life cycle.

The complaint here is that it sucks for everyone to be reimplementing auto updates, possibly with bugs. I believe that my gaming PC is right now running buggy and wasteful auto-update checkers from half a dozen different vendors, all of whom wasted money on these things which provide negative value. Whereas uploading a new version to an app store or apt/rpm/etc repo is much nicer in this regard: users’ machines already have the mechanism to update software from those, often automatically.

1. 1

There are libraries for such things.
Some of them could be provided by the OS vendor. I’m just not sure they should be part of the OS itself: it would add to what the OS must keep stable. Stability at the OS level is easier to achieve if said OS is minimal: just schedule programs & talk to the hardware. If programs can access the network, there is no need to provide an update mechanism on top. A standard, recommended library, however, would be very nice.

2. 1

The type of thing that makes me lose my mind about Windows as a platform is trying to deliver anything to a customer in an end-to-end way. On Linux, you often end up writing code…and that’s about it. Each distribution will package and update your code in their own way. They might get it wrong, but they’ll try. On Windows, updating your program is your problem. Depending on how you count, there’s either zero or a bajillion systems for updating code, but you can’t assume your users are using any of them, so you end up having to write your own. […] And you can’t expect users to help - how many users really know which version of Windows 10 they have? - so your program has to run on all of them. There’s just a kind of cognitive burden with every program having to independently reinvent every wheel. The solutions are well known, but it’s just so…painful.

Linux approaches to runnable-binaries-shipped-with-dependencies (Flatpak, snap, AppImage, …) do address some of these concerns, but I wonder if (or how) the rise of base images (echo "which version of Windows 10 they have?" | sed s/Windows/Fedora/) will change the amount of work that application developers will have to put in to create fully working {flatpaks,snaps,appimages}.

1. 5

It’s not that we don’t have an equivalent to that on Windows. It’s that there are just too damn many options, and Microsoft changes its mind every couple of years on what they want to do, exactly.
Ever since the giant mess that was DLL hell, Windows has had something called side-by-side assemblies, which allow conflicting versions of DLLs to be installed globally. Combined with its take on app bundles - strongly allowing and encouraging application vendors to just bundle all their DLLs alongside the application in the same directory - we end up in effectively the same place as Flatpak, albeit exploded instead of single files. So that’s “solved”.

But that’s only the mechanism. When it comes to actually distributing your app, Microsoft loses its attention every five seconds. The Microsoft Store has been Microsoft’s answer for a while, but it only relatively recently (last couple of years?) gained the ability to handle non-UWP binaries. We’ve also had ClickOnce, which was Microsoft’s answer to Java WebStart, and which was again .NET-only. And now we’re getting winget, which is kinda Chocolatey and kinda the Microsoft Store and kinda its own thing, and so on.

So it’s not the container bit that’s so hard, but rather getting your app mechanically distributed. That’s more contrasting with e.g. apt or rpm or the App Store (or maybe snap, since that is centralized) than Flatpak.

1. 4

I think this is not quite fair to Microsoft. On Windows, there is a blessed store for all GUI programs: the Microsoft Store. If you don’t like the Microsoft Store, you can distribute over the internet; if you sign your builds, Windows will pop up a non-scary prompt before installing, and if you don’t, Windows will pop up a scary prompt.

On Linux, you can also distribute GUI programs over the internet. But there’s no trusted signing built in for programs distributed this way, so it’s somewhat less secure. What about blessed stores? Good grief: first of all, many distros maintain their own, and patch your software without your consent, and in some cases refuse to distribute updates to your software (e.g. jwz’s XScreenSaver woes).
But from a user’s perspective, perhaps that is ~okay — if you don’t mind out-of-date software. But for users, it gets worse! Where do you install from: the distro? Flatpak? Snaps? Sometimes you install one package from one place, and it immediately pops up an alert telling you to uninstall it and install from a different place. But there’s no consistency: it’s not like every package prefers one place or another. And they’re cross-listed, but often with radically different versions! You’re not even guaranteed Flatpak or Snaps are the most up to date: the app developers may have abandoned that distribution method and gone back to shipping binaries in the distro’s repo. Plus, if you install from Flatpak or Snaps, which certain programs more-or-less demand, they interface poorly with the rest of your system by default because they bundle their own filesystem images (and in Snap’s case they start slowly as a result). It’s… not great.

On macOS, for GUI programs you have two “options”: the Mac App Store, or the internet. If you choose “the internet,” macOS will refuse to run your program unless your users click a checkbox hidden in the main system Settings app. Even if they do, it will prompt them before installing, telling them that anything from the Internet is dangerous (ignoring any signing). Also, the Mac App Store is extremely limited, and many programs are impossible to run under its sandboxing. As per usual, Apple’s basic message is that if you’re trying to make programs that run on Macs, and you’re not Apple, they reserve the right to make you miserable.

For installing command-line binaries: on Windows you’d either use Chocolatey (the old 3rd party package manager) or Scoop (the new 3rd party package manager). MS realized that people like command-line package managers, so they’re building an officially-blessed one called winget that will presumably replace those. Winget is not yet released to the general public, though.
On Linux, generally you’d use your distro’s package manager. But since app developers can’t easily add or update packages in the repo, sometimes your distro does not have the package! Or, as usual, it has some ancient outdated version. Then if you are on Ubuntu maybe you can add the PPA, or if you are not, maybe you can go fuck yourself (cough I mean build it from source).

On macOS the situation is fairly similar to Windows currently: you can use MacPorts (the old 3rd party package manager), or Homebrew (the new 3rd party package manager). As usual, Apple does not care that developers like command-line package managers and is not building a blessed one.

1. 2

When it comes to actually distributing your app, Microsoft loses its attention every five seconds. The Microsoft Store has been Microsoft’s answer for a while, but it only relatively recently (last couple of years?) gained the ability to handle non-UWP binaries.

Agree with this. It looks like at the moment, if you want to sell productivity software for Windows without having to operate your own storefront, the most stable option may actually be Steam? Sure, it targets the wrong market segment, but at least it works reliably.

2. 38

I would consider changing the title to “trying to bend Windows to my vim workflow is painful.”

1. 2

This. It’s basically a (justified) rant about how bad the VSCode Vim plugin is. I use it as my daily editor, and it DOES have a bunch of fairly serious warts. Compare and contrast PyCharm/IntelliJ’s superlative Vim plugin :)

2. 15

To be honest, I actually like developing on Windows once you stop treating it like a Unix and do what it’s best at: GUI applications. I miss Visual Studio every time I have to use gdb.

1. 4

CLion is an absolute joy to use on Linux when debugging.

1. 2

CLion isn’t useful to me (unless things changed) because my POSIX sludge isn’t often on Linux, and the projects I work with are rarely CMake compatible.

1. 1

Attach to process still works, and CLion can make projects from compilation databases if CMake isn’t available. This is useful if you’re working with other build systems like Bazel.

2. 12

I once wasted an entire month trying to resolve some cryptic C# compile errors where Visual Studio simply wouldn’t recognize some of my source files. In the end, the reason was that the compiler silently failed to recognize files with path lengths longer than 255 characters, even though you can technically create such files on Windows. A prefix like “C:\Users\Benjamin\Documents\ProjectName\src” combined with C#’s very verbose naming conventions meant that a few of my files were just over the path size limit.

1. 8

I feel like Windows is drowning in technical debt even more than Linux is. The APIs to work with long paths have existed for ages now, so most modern software lets you easily create deep hierarchies, but Windows Explorer still isn’t updated to work with those APIs, so if you create a file with a long path, you can’t interact with that file through Explorer.

There have been solid widgets for things like text entry fields in various Microsoft UI frameworks/libraries for ages now, but core apps like Notepad and - again - Windows Explorer still aren’t updated to take advantage of them, so hotkeys like ctrl+backspace will just insert a square instead of doing the action which the rest of the system has taught you to expect (i.e. deleting a word).

CMD.EXE is an absolutely horrible terminal application, but it hasn’t been touched in ages, presumably due to backwards compatibility, and Microsoft is just writing multiple new terminal applications - not as replacements, because CMD.EXE will always exist, but as additional terminal emulators which you have to use in addition to CMD.EXE.
The Control Center lets you get to all your settings, but it’s old and crusty, so Microsoft is writing multiple generations of supposedly holistic Control Center replacements, but with limitations which make it necessary to use both the new and the old settings editors at the same time, and sometimes Control Center and some new settings program don’t even agree on the same setting.

Windows is useful as a gaming OS, but any time I actually try to use it, I just get sad.

1. 6

CMD.EXE is an absolutely horrible terminal application, but it hasn’t been touched in ages presumably due to backwards compatibility, and Microsoft is just writing multiple new terminal applications, not as replacements because CMD.EXE

What you think of as cmd.exe is actually a bunch of things, most of which are in the Windows Console Host. The shell-equivalent part is stable because a load of .bat files are written for it, but PowerShell is now the thing that’s recommended for interactive use. The console host (which includes a mixture of things that are PTY-subsystem and terminal-emulator features on a *NIX system) is now developed by the Windows Terminal team and is seeing a lot of development. Both cmd.exe and powershell.exe run happily in the new terminal with the new console host, and in the old terminal with the old console host. At the moment, if you run them from a non-console environment (e.g. from the Windows-R box), the default console host that’s started is the one that Windows ships with, and so you don’t get the new terminal.

1. 1

Windows Terminal is great when I can use it, but it does not seem to work well with administrator privileges.

1. 1

You can use the sudo package from scoop. For me it’s good enough.

1. 1

Wow, did not know about this! It looks like it still generates a UAC popup unless you configure those to not exist. Still, far better than nothing. http://blog.lukesampson.com/sudo-for-windows

2. 1

but PowerShell is now the thing that’s recommended for interactive use

Which one? ;-) I have some code that extracts config/data/cache directories on Windows (the equivalent of “check if XDG_CONFIG_DIR is set, otherwise use .config” on Linux) and it’s just a hyperdimensional lair of horrors. Basically, the best way to get such info without having to ship native code is to run powershell (version 2, because that one does not have restricted mode) with a base64-encoded powershell script that embeds a C# type declaration that embeds native interop code that finally calls the required APIs.¹ I’m close to simply dropping Windows support, to be honest.

¹ The juicy part of the code for those interested:

    static final String SCRIPT_START_BASE64 = operatingSystem == 'w'
        ? toUTF16LEBase64("& {\n"
            + "[Console]::OutputEncoding = [System.Text.Encoding]::UTF8\n"
            + "Add-Type @\"\n"
            + "using System;\n"
            + "using System.Runtime.InteropServices;\n"
            + "public class Dir {\n"
            + "  [DllImport(\"shell32.dll\")]\n"
            + "  private static extern int SHGetKnownFolderPath([MarshalAs(UnmanagedType.LPStruct)] Guid rfid, uint dwFlags, IntPtr hToken, out IntPtr pszPath);\n"
            + "  public static string GetKnownFolderPath(string rfid) {\n"
            + "    IntPtr pszPath;\n"
            + "    if (SHGetKnownFolderPath(new Guid(rfid), 0, IntPtr.Zero, out pszPath) != 0) return \"\";\n"
            + "    string path = Marshal.PtrToStringUni(pszPath);\n"
            + "    Marshal.FreeCoTaskMem(pszPath);\n"
            + "    return path;\n"
            + "  }\n"
            + "}\n"
            + "\"@\n")
        : null;

1. 1

Which one? ;-)

PowerShell 7 Core, of course! …for now! …unless you also need to support classic PowerShell, in which case, PowerShell 5! …and be careful not to use Windows-specific assemblies if you want to be cross-platform!

3. 3

The APIs to work with long paths have existed for ages now

Well, I’d agree about technical debt, but this claim is a great example of it. As an application developer, you can choose one of these options:

1. Add a manifest to your program where you promise to support long paths throughout the entire program. If you do this, it won’t do anything unless the user has also modified a system-global setting to enable long paths, which obviously many users won’t do, and you can expect to deal with long-path-related support queries for a long time. This is also only supported on recent versions of Windows 10, so you can expect a few queries from users running older systems.

2. Change your program to use UTF-16, and escape paths with \\?\. The effect of doing this is to tell the system to suppress a lot of path conversions, which means you have to implement those yourself - things like applying a relative path to an absolute path, for example. This logic is more convoluted on Windows than Linux, because you have to think about drive letters and SMB shares. “D:” relative to “C:\foo” means “the current directory on drive D:”. “..\..\bar” relative to “C:\foo” means “C:\bar”. “\\server\share\..\bar” becomes “\\?\UNC\server\share\bar”. “con” means “con”.

I went with option #2, but the whole time kept feeling this is yet another wheel that all application developers are asked to reinvent.

1. 1

Windows is useful as a gaming OS, but any time I actually try to use it, I just get sad.

• Microsoft Office and the Adobe Suite (or replacements such as the Affinity Suite).

It would be really nice if Microsoft just ported Office.

1. 2

They effectively have. It seems like Microsoft cares far more about the O365 version of Office than any native version — even Windows.

1. 2

They effectively have. It seems like Microsoft cares far more about the O365 version of Office than any native version — even Windows.

Office 365 is a subscription service; most of the subscriptions include the Windows/Mac apps. I guess that you mean Office Online, but it only contains a very small subset of the features of the native versions.
I tried to use it for a while, but you quickly run into features that are missing.

2. 1

The separation of the control centre may actually go away soon. If the articles are to be believed, MS finished that migration in the latest version.

1. 1

More details? The only thing I heard was that they were finally killing the working ones.

2. 11

One thing that makes Windows a lot more tolerable is AutoHotKey. It’s one of the best things about Windows and the main reason I’m willing to be a second-class citizen in the software world. (Also, definitely try the default PowerShell ISE; it’ll help you get up to speed with PS niceness quicker.)

1. 2

AHK seems like a neat tool. Personally I’ve built up a massive collection of zsh functions and aliases, and I feel like that’s roughly the same thing.

1. 2

AHK is for allowing you to set arbitrary global keyboard shortcuts that can run any script and interact with the windowing system. It’s pretty different from zsh aliases, unless you never use anything but a terminal emulator.

1. 1

unless you never use anything but a terminal emulator

You’re not far off. I use functions wrapping osascript for a few GUI-related things, but mostly I do work in a terminal. I pretty much only use GUIs for things that are unnecessarily cumbersome in a shell, like photo editing.

2. 2

I guess that’s true within your shell. But AHK does more. Someone even used it to make the desktop shell work more like i3. (I’ve only witnessed that in action, never tried it on my own PC. My current Windows install is used too little by me and too much by other people to do something like that.)

1. 1

You can do similar shenanigans on Linux too: https://github.com/BurntSushi/dotfiles/blob/master/bin/x11-gnome-do

See also pytyle for something even more involved. Probably the main difference is that on Linux, it’s not in one unified system like AHK is.

1. 1

Nice.
I think I wrote half of your xrandr output parser for my qtile config, before I discovered I could get what I wanted from autorandr. I think the existence of things like these, and the lack of a unified system of window messages underlying everything like you have on Windows, has prevented any one thing exactly like AHK from emerging.

2. 7

I use Ubuntu for WSL2-based development because I have fairly boring preferences, but apparently it’s possible to get NixOS running via this repo, which does some hacks to get systemd running: https://github.com/Trundle/NixOS-WSL

Might be worth a shot if you’re missing NixOS!

1. 4

I don’t understand the ‘railroaded into Ubuntu’ comment. You install Linux distros in WSL / WSL2 from the Windows Store. I started using Debian, then switched to Ubuntu, but both were installed in exactly the same way.

1. 1

OpenSUSE is also available.

2. 4

Have you tried running vim on Windows? I remember from way back when that this “just worked”.

1. 2

I had the same question. The first thing I do with any Windows machine I’m forced to interact with is install Cygwin with vim. But I started that habit many, many years before WSL came around, which is what I guess you’re supposed to use nowadays.

1. 2

cygwin is great. you aren’t alone in doing it the “old” way.

1. 2

Neovim installs natively!

1. 1

Good to know. I haven’t set up a Windows machine since I switched to neovim. It will be interesting to see how much of my config works. :)

2. 1

I was thinking a native build of gvim, not the console Vim from Cygwin. I seem to remember there being a native build around 2002 or so which you could download from the Vim website. That’s from before neovim AFAIK.

1. 1

Both GVim and Neovim work, and they work in the Windows Terminal without Cygwin, in my experience. Neovim actually has a pretty decent terminal emulator, so I end up using it a bit like tmux when I have a lot of terminal-heavy work to do.

2. 4

If your verdict of this post is that it is painful… this reads downright cheerful and happy to me, compared to my experience of suffering :P. I’m not even joking; half of the paragraphs end with “it’s ok” and “not perfect, but it works at all”. Consider yourself lucky; my experience is more like “everything I try to do doesn’t work at all or feels like working with my hands tied behind my back”.

On the other hand, I noticed that simply using the “Remote-Editing via SSH” feature in VS Code solves 90% of my problems, so I can have an editor or 2 up and also have shells displayed, while ignoring the rest of the “system”. It’s the first time this has worked for me; I never got that working with other IDEs where it didn’t feel sluggish. But my “work mode” workflow relies so heavily on a tiled window manager and 7 open shell windows and cronjobs and scripts that do things… I never got it to feel even ok with WSL. It still feels like working on a VM, and Windows is still there to annoy me.

(JFTR, my main computer has always run Windows because I play games, so it’s not a case that I’ve seen very often of developers going “I haven’t really used Windows for 10 years and use Linux/Mac/etc exclusively”.)

Examples: Trying to develop Qt GUI apps and the environment not randomly breaking. Trying to build anything in C++ with SSL involved. Mostly things that actually rely on WSL not being involved, because I want an EXE and not an ELF binary for my stuff. But maybe I am conflating “development FOR Windows” with “ON Windows” here.

1. 4

My work desktop is Windows and I mostly live in the Windows Terminal. I used to use Konsole on vcxsrv, but now the Windows Terminal is good enough. This is my setup:

I have Ubuntu installed in WSL. I don’t particularly like it, but I don’t hate it more than any other Linux distro. It runs vim, git, and all of the other *NIX tools that I like. My Windows home directory is symlinked to ~/winhome so I can git clone and edit files there.
I have vcxsrv installed from Chocolatey and DISPLAY=:0 set in my .bashrc, so I can also run graphical *NIX applications if I need them, and they integrate cleanly with the Windows desktop (the Windows X.org port comes with a fake WM that delegates window management to the host system). I have also installed CMake, Ninja, and LLVM via Chocolatey and have these two lines in my .bashrc:

alias vs='cmd.exe /K "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvars64.bat"'
alias vs32='cmd.exe /K "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvars32.bat"'

This lets me type vs or vs32 to get a 64- or 32-bit Windows build environment, with all of the tool paths set up for me (and, more importantly, for CMake). When I want to build something, I do:

# Linux:
$ cd /path/to/linux/build
$ ccmake /path/to/sources
$ ninja

# Win32
$ vs32
> cd path\to\win32\build
> cmake-gui.exe path\to\sources -G Ninja
> ninja

# Win64
$ vs
> cd path\to\win64\build
> cmake-gui.exe path\to\sources -G Ninja
> ninja


For *NIX systems (including WSL) I have this in my .bashrc:

# Set the default generator to Ninja.
export CMAKE_GENERATOR=Ninja
# Always generate a compile-commands JSON file:
export CMAKE_EXPORT_COMPILE_COMMANDS=true


So I don’t need to tell CMake to use the sane thing to compile instead of its stupid defaults.

With this setup, I can happily work in my favourite terminal-based text editor, do all of the file stuff I want in the terminal (though I run gitg to stage things for more complex git commits), and have a single source tree that lets me build for 32/64-bit Windows and Linux. The only thing that would make this better would be a port of FreeBSD to the *COW infrastructure that’s used to implement WSL2.
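As an aside on the CMAKE_EXPORT_COMPILE_COMMANDS setting mentioned above: it makes CMake write a compile_commands.json into the build directory, which tools like clangd consume. A minimal sketch of what that file looks like and how a tool reads it (the sample entry below is invented for illustration; only the three standard keys are real):

```python
import json

# compile_commands.json is a JSON array where each entry has "directory",
# "command" (or "arguments"), and "file". The paths below are placeholders.
sample = """[
  {"directory": "/path/to/build",
   "command": "cc -I/path/to/sources -c /path/to/sources/foo.c",
   "file": "/path/to/sources/foo.c"}
]"""

def compiled_files(text):
    """List the source files covered by a compile_commands.json document."""
    return [entry["file"] for entry in json.loads(text)]

print(compiled_files(sample))  # → ['/path/to/sources/foo.c']
```

This is why exporting it by default is handy: any editor or linter pointed at the build directory immediately knows the exact flags each file was compiled with.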

1. 4

I echo a lot of the points made in this post, except for the bits about emacs/vim (I generally use Sublime Text and ignore vim bindings). Windows is, in my experience, not great for development unless you’re developing for Windows. At one point I ended up cobbling together my own version of WSL 2 with a virtual machine running Arch and SMB shares (worked surprisingly well, to tell the truth). It seems the experience is getting better, but it’s not quite there yet – I feel it will get to the point that it’s comfortable to do general development on Windows (especially using *nix tooling), but that might require more involvement from the maintainers or the community at large to port/maintain tools. Git especially sticks out in my mind. Installing a dedicated version of bash & perl, as touched on in the article, is frankly ludicrous when we have WSL, Cygwin, MinGW (etc) as well. This isn’t really git’s fault, and part of me wonders when they’ll phase out Git for Windows and direct people to use WSL instead (as unlikely as the rational part of my brain may think that is).

I guess the crux of the pain is realizing that Windows isn’t *nix, and shouldn’t be treated as a drop-in replacement. As @altano pointed out, “I would consider changing the title to ‘trying to bend Windows to my vim workflow is painful’”, and that rings true to me. While *nix tooling is available for Windows, it’s more or less a second-class citizen, and workflows should be adapted to these shortcomings rather than trying to hack around Windows to make it work.

1. 4

Sounds better than trying to use a Mac. At least Windows has WSL.

1. 3

Broadly agree, and I also find that windows is unpleasantly slow compared with other OSes; but, specific nits:

Control-D on an empty prompt didn’t end up closing the session

I don’t use windows, so might be off on this, but: to signal EOF on windows, I believe you need C-z, not C-d.

WSL […] railroad you into Ubuntu

WSL can run arbitrary Linux distributions; it just takes some extra finagling.
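For example, any distribution you can get as a rootfs tarball can be registered by hand with `wsl --import` (a sketch; the distro name and paths here are placeholders, and a reasonably recent WSL is assumed):

```
> wsl --import MyDistro C:\wsl\MyDistro rootfs.tar --version 2
> wsl -d MyDistro
```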

1. 2

That EOF still isn’t interpreted as a session-closing keybind. You can use clink to inject the normal behaviour into cmd, but not PowerShell.

1. 1

Is that for real? Undo will close your session?

1. 1

Is that for real? Suspending a job will close your session?

2. 3

I was expecting the usual “Windows is not Linux therefore terrible” rant and all I got was “I couldn’t make VS Code act like Vim therefore Windows is Terrible”. Not sure if this is progress or not? At least it’s change, I guess.

1. 2

Are you using vim, but in emacs? Like spawning a subshell in emacs to run vim? Could you elaborate a bit more on that, as you gloss over it in the first paragraph?

1. 5

Very likely that they’re using evil-mode, possibly via one of the several distributions (e.g. spacemacs, doom emacs) that integrates it deeply.

1. 1

Yes, I’m using evil-mode with spacemacs on Linux. It ruins you.

1. 1

Out of curiosity, why do you use vim via emacs and not “normal” vim?

1. 1

org-mode and emacsclient mostly.

1. 1

Besides org-mode, which cadey already mentioned, magit is also a killer app. Even when I am developing in CLion, I have an emacs session just for Magit.

2. 2

Did you give vscode-neovim a go? It uses neovim behind the scenes and allows the two to interact. It still has to emulate some things like vsplit, but it seems to emulate them the way you’d expect. Its differences and edge cases are well documented as well.

1. 2

Nowadays I mostly use WSL+Windows Terminal if I’m ever doing something with go, c, python, bash, on Windows (provided whatever I’m doing with those doesn’t actually depend on running on actual Windows).

I find for stuff like Android/Java/Kotlin/C# development and writing software for Windows, an IDE like IntelliJ or VisualStudio is almost a must.

1. 2

I’ve been happily living in WSL 2 and Windows Terminal for about a year, with neovim and fish and everything else I used on Linux. I don’t think my development flow has changed at all.

For whatever reason, I fell out of love with tiling window managers after like 10 years, and then decided to try windows + wsl2 and found I prefer it.

1. 2

Hi Cadey, something that might help is installing a custom distro for WSL so you aren’t stuck with Ubuntu.

I’ve been running Arch with this https://github.com/yuk7/ArchWSL and haven’t had any issues. It even supports X-forwarding (although I haven’t tested it out). Things seem to “just work” again.

I also used VSCode when I first switched over to my Windows machine, but now I’m back to using my old Vim setup and couldn’t be happier. Also, since I’m using Windows Terminal as my terminal, there’s the added bonus of a seamless full-screen mode (alt+enter).

1. 2

I use Windows for non-work development, and by “use” I mean I have vscode remote and a tmux session running on a Debian machine in a datacenter somewhere. Is that cheating? It’s probably cheating, but I find it significantly less painful than the alternative of trying to do actual dev work on Windows.

1. 2

If you’re on Windows and unhappy with WSL/WSL2 - or for some reason want to work more “in” Windows (which is reasonable: obnoxious proprietary hardware aside, Linux works great as a daily driver, so presumably you bought a Windows license for a reason) - I recommend looking at https://scoop.sh in addition to Chocolatey.

1. 1

I really feel the pain here. I’ve gone in circles trying so many things, especially since WSL’s release - Nix package management on WSL being one of them. I’ve mostly surrendered to using alacritty with WSL for really basic stuff, but for anything more I’m using alacritty to SSH into a dev VM with a mounted share. I’ve tried out a docker dev workflow before, but the configuration and maintenance weren’t worth it and it didn’t last very long. The VM overhead isn’t a big deal for my machine.

Oh, just now seeing NixOS-WSL… I may give it a go

1. 1

I have tried to use my Emacs config on Windows (and barring the things that are obviously impossible such as getting Nix to work with Windows) and have concluded that it is a fantastic waste of my time to do this.

What were the issues? Hard-coded unix paths or did some packages just not work?

1. 3

I’ve had issues similar to those in any ecosystem that treats Windows as second-class: a lot of emacs packages are difficult to get running because they make use of bundled C extensions that don’t compile properly on Windows (even with appropriate gcc/clang tooling set up in msys/mingw), or rely on specific features of unix-like OSes that aren’t readily usable on Windows as-is (esp. unix domain sockets). Additionally, there are a lot more packages that “work” but have really nasty performance (magit, as an example) because they delegate work to external programs in subprocesses, and process creation is extremely slow on Windows (on the order of hundreds of milliseconds).

1. 1

I think a lot of the slow process creation might be down to relying on some emulation layer over Win32 processes, and/or relying on shell execution instead of specifying the executable and the arguments directly. In my testing (in C#), creating a process with UseShellExecute set to false made it execute in 80-90 milliseconds rather than 200+. (I don’t know the corresponding Win32 call changes off the top of my head.)

Janet provides ways to do both, for example, os/shell vs os/spawn. The second is much faster, and pretty much reaches “this is fast once the exe in question is cached in memory” speeds.

Even with this, it’s likely that Windows still spawns processes more slowly than Linux/POSIX, but I think that, overall, process creation on Windows via unix emulation layers often leaves performance on the table versus doing it more natively. I suspect that finding ways to avoid shell execution would also help, since shell execution effectively means spawning two processes rather than one (though I haven’t done the research there).
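A rough POSIX sketch of that two-process point (the absolute costs are far higher on Windows; /bin/echo stands in for any external program):

```shell
# Direct spawn: the OS creates exactly one process, for echo itself.
/bin/echo direct

# Shell-mediated spawn: sh is created first, and it then spawns echo,
# so you pay for two process creations instead of one.
sh -c '/bin/echo via-shell'
```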

I don’t expect that Emacs folks will try to do much to fix this, though.

2. 1

It just got stuck randomly. Sometimes I’d be in the middle of a leader binding and then it’d just lock up for no good reason and no amount of trying to fix it besides going nuclear with Task Manager would fix it.

3. 1

I am a tortured soul that literally thinks in terms of Vim motions.

I mean, you can just @ me next time xD

1. 1

Then I want to open another file split to the left with :vsplit, so I press escape and type in :vsplit bar.txt. Then I get a vsplit of the current buffer, not the new file that I actually wanted. Now, this is probably a very niche thing that I am used to (even though it works fine on vanilla vim and with evil-mode), and other people I have asked about this apparently do not open new files like that (and one was surprised to find out that worked at all); but this is a pretty heavily ingrained into my muscle memory thing and it is frustrating. I have to retrain my decade old buffer management muscle memory.

I do this all the time. Though usually I :vs